v 20 March 2015: The Final Exam answer key has been posted to the class website
below, and also is available here.
v 12 March 2015: Please note that Saturday is “Pi day of the century”: 3/14/15.
Mnemonic for pi = “See, I have a rhyme assisting, my feeble brain, its
tasks oft-times resisting.” “See”=3 letters, “I”=1 letter, “have”=4 letters, ....
v 10 Mar 2015: The Quiz #4 key has been posted below and also is available here.
v 5 Mar 2015: As announced repeatedly in
lecture and posted to the class mailing list, we will be unable to extend the
coding project deadline of Fri., 13 Mar., 11:55pm. Late projects will lose 10%
of the project grade for each day or fraction thereof they are late, up to a
maximum of two days. No late projects will be accepted after Sun., 15 Mar.,
11:55pm. The reason is that we need the time to run the tournament.
v 5 Mar 2015: Dr. Lathrop’s office hours will end at 4pm on Tuesday, March 10,
in order to accommodate the Mathias Niepert talk.
v 5 Mar 2015: Mathias Niepert
University of Washington, Seattle
Tuesday, March 10, 2015
Donald Bren Hall Room 4011
4:00 pm - 5:00 pm
TITLE
Tractable Probabilistic Models for Big Relational Data
ABSTRACT
The extraction of knowledge from the world wide web and
other sources is a central problem in the data sciences with numerous
applications and wide-ranging implications for future technologies. There are
several existing information extraction projects such as the Google Knowledge
Graph, DBpedia, and NELL. These projects populate
large knowledge bases and are a valuable data set for bootstrapping statistical
relational models of the world's knowledge. These models enable innovative
technologies such as search engines that enrich keyword-based results with
entities, their attributes, and relations. Since inference and learning in
probabilistic models is NP-hard in general, there is a need to develop new
theories and algorithms that scale statistical relational models to large data
sets. To this end, I will present two lines of recent work. First, I will
describe symmetry-aware inference and learning, a framework that exploits both
symmetries and conditional independence to design classes of expressive yet
tractable probabilistic models. The framework provides a deep theoretical link
between symmetries and tractable inference in probabilistic models. Second, I
will present tractable probabilistic knowledge bases (TPKBs) featuring sublinear, disk-based, and parallel inference algorithms.
TPKBs are designed so as to always compile to a poly-sized circuit
representation. We present a TPKB we have learned from existing information
extraction projects and apply it to data extraction and integration problems
such as entity resolution and linking.
BIO
Mathias Niepert is a postdoctoral research associate at the
University of Washington in Seattle. He obtained a PhD from Indiana University.
His work has won best paper awards at international conferences such as UAI,
AAAI, IJCNLP, and ESWC. He is the principal investigator of a Google faculty
research award and a bilateral DFG-NEH research award. Mathias has also
co-organized several successful workshops and he is co-founder of several
open-source digital humanities projects such as the Indiana Philosophy Ontology
Project and the Linked Humanities Project.
v 3 Mar 2015: Please, fill out your student evaluations for CS-171.
****
Every student who fills out a course evaluation for CS-171 will receive a bonus
of 1% added to their final grade, free and clear, off the curve, simply a
bonus. EEE will return to me the
names of students who fill out evaluations (but not the content, which remains
anonymous), provided that enough students fill out evaluations so that
anonymity is not compromised. I
will add 1% free bonus to the final grade of each such named student. ****
These
evaluations are important to UCI in monitoring our quality and success in
fulfilling our educational mission, and they are important to me in improving
the CS-171 experience.
Knowing
what positive features you found good and strong helps me know what to repeat
and emphasize. Many of the positive
features in the current offering of CS-171 were suggested as improvements by
previous year's students.
Please,
fill out your student evaluations for CS-171.
v 28 Feb 2015: The Computer Science Dept. is in the process of hiring another AI/ML
faculty member, which presents an exciting opportunity for you to hear some
cutting-edge AI/ML faculty candidate research talks. Typically, these faculty candidates will
present the very latest frontiers of AI/ML research, because they will have
achieved these new results quite recently in their PhD or post-doctoral
research programs.
If you are an interested student
who thinks it is exciting to go deeper into the AI/ML area, you will find it
fascinating to attend these talks:
****
* Mon 2 Mar * 11am-noon DBH-4011
Faculty Candidate Talk
ELIAS BAREINBOIM
University of California, Los Angeles
Title: Generalizability in Causal Inference
****
* Wed 4 Mar * 11am-noon DBH-4011
Faculty Candidate Talk
TAYLOR BERG-KIRKPATRICK
University of California, Berkeley
Title: Structured Models for Unlocking Language Data
****
v 27 Feb 2015: From now on, Quizzes and Exams will be available for pick-up in
Discussion Section.
v 24 Feb 2015: The Quiz #3 answer key has been posted to the class website below,
and also is available here.
I have added a “STRATEGY HINT” box to the notes on problem #2,
which you may find helpful. Please let me know quickly if you experience any
read/permission problems.
v 22 Feb 2015: As posted to the class mailing list, the deadline to submit the
working "draft" version of your AI has been extended to Friday, 27
February 2015, 11:55 PM. We have added the additional requirement that your
“draft” AI must beat or tie AI_Poor in at least one of the six games it will
play against AI_Poor in the “draft” tournament
(please see previous postings to the class email). This additional requirement is
intended to prevent teams from simply entering the "dummy" AI from
the class website with no smarts at all, which was *never* our intent.
The extended deadline will give you time to meet this requirement.
v 19 Feb 2015: The deadline to submit the working "draft" version of
your AI has been extended to Sunday, 22 February 2015, 11:55 PM. Please see and heed Michael Beyeler’s email message of today that describes
*minimum* requirements.
v 19 Feb 2015: Effective immediately, the project grading structure is changed as
follows:
* AI_Poor, AI_Average, and AI_Good also will be entered into the Final
Tournament (these AIs are available in the "student coding resources" part of
the Project section of the class website).
* You will lose 20% of your Project grade if your AI does not beat or tie
AI_Poor in the Final Tournament. It is sufficient to beat or tie it in any one
of your games.
* You will lose 10% of your Project grade if your AI does not beat or tie
AI_Average in the Final Tournament (total loss of 30% if your AI always loses
to both AI_Poor and AI_Average). It is sufficient to beat or tie it in any one
of your games.
* You will GAIN 10% BONUS added to your Project grade if your AI beats or ties
AI_Good in the Final Tournament. It is sufficient to beat or tie it in any one
of your games.
* As previously stated, the top 10% in the Final Tournament will receive 10%
BONUS, the second 10% will receive 9% BONUS, the third 10% will receive 8%
BONUS, and so on. So, if you are clever and your AI is smart, you can receive
up to a total of 20% BONUS.
v 13 Feb 2015: Happy Friday the 13th!!
The FQ’2015 Mid-term Exam key has been posted to the class website below,
and also is available here. Please let me know quickly if you
experience any read/permission problems.
v 10 Feb 2015: For Thurs., 12 Feb. (the
Mid-term Exam), please try to arrive in class and get settled a few minutes
early. We would like to pass out the Exam quickly, so that you have the maximum
amount of time to work on it.
v 10 Feb 2015: Dr. Lathrop’s
office hours are cancelled for Thurs., 12 Feb. (immediately after Mid-term
Exam).
v 4 Feb 2015: A kind and helpful student
has brought it to my attention that the PDF reader on a Mac (iPad) sometimes
has difficulty correctly reading the previous CS-171 tests. For example, in
problems #3a and #3b on Quiz #2 from SQ’2004, the erroneous
“Y” on the key was corrected by an overlay in the PDF file of a red
X through the Y, and next to it a red N. However, this overlay is invisible on
a Mac (iPad), leading to incorrect understanding of the right answer.
Sometimes, the Mac PDF software appears to be incompatible with the Windows PDF
software. If you are using a Mac to read the previous CS-171 test PDF files and
something looks wrong, please look at it again from Windows. If it still looks
wrong, please bring it to my attention.
v 3 Feb 2015: The Quiz #2 answer key has
been posted below and is available here.
v 27 Jan 2015: Michael Beyeler has kindly updated ConnectK.cpp and readme.txt,
which have been posted to the student coding resources in the Project section
below. The changes to ConnectK.cpp fix a comment wrapping problem. The changes
to readme.txt give instructions on how to compile the code on Linux.
v 27 Jan 2015: Very few students voiced
an opinion about the lecture slides color scheme; of those few, a majority
preferred black on white.
v 27 Jan 2015: For your convenience, a
new section has been added to the class website, below: Important Dates.
v 25 Jan 2015: A revised Quiz #1 key has
been posted to the class website.
v 22 Jan 2015: Bright, clever, and
attentive students have found discrepancies between Quiz #1 and prior test keys
posted as study guides below. They have been rewarded with Bonus Points for
finding an error in the class material. Consequently, for Quiz #1 only, these
answers will receive full credit: (a) on problem 2, “Performance”
instead of “Performance measure”; and (b) on problem 3, answers for
BFS and IDS that did the goal-test after the node was popped off the queue.
These discrepancies on prior test keys will be corrected shortly. After they
are corrected, such answers will not receive full credit in the future.
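For students curious about the distinction, here is a minimal Python sketch
(illustrative only, not course code, with a made-up graph format) of BFS that
goal-tests a node when it is *generated*; the accepted quiz answers instead
tested when the node was popped off the queue, which also finds the goal but
may expand one extra level first:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search that goal-tests at generation time.

    `neighbors(state)` returns the successor states.  Testing a node
    as soon as it is generated can return one level earlier than
    testing when the node is popped off the queue; both conventions
    find the same shallowest goal.
    """
    if start == goal:
        return [start]
    frontier = deque([[start]])   # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for n in neighbors(path[-1]):
            if n in explored:
                continue
            if n == goal:         # goal-test at generation time
                return path + [n]
            explored.add(n)
            frontier.append(path + [n])
    return None                   # goal unreachable
```

Moving the `n == goal` check to just after `popleft()` gives the other
convention; for IDS the analogous choice is testing before versus after the
depth-limited expansion of a node.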
v 20 Jan 2015: Michael Beyeler will give the “Special Topics” lecture
(Tue., 10 Mar.) on the topic of “Computational Neuroscience.”
v 20 Jan 2015: The Quiz #1 answer key
has been posted below and is also available here.
v Please always be sure that you have
downloaded the latest version of the CS-171 lecture notes. In particular, I am revising the content
of the lecture slides to improve student understanding in parallel with
revising the color scheme to improve visual acuity. As stated below in the
“Syllabus” section:
“Please note: I
sometimes tweak or revise the lecture slides before or after the lecture, based
upon perceived improved student comprehension; please, always ensure that you
have the most current, up-to-date version.”
v There are now two CS-171 MessageBoard forums at EEE:
Class Discussion; and
Seeking project programming team partner. (Please use this forum if you are
seeking a programming team partner for the class project.)
v Current announcements will appear here, at top-level, for quick and easy
inspection.
Fri., 16 Jan., midnight: Deadline to
notify the TA (mbeyeler@uci.edu) about your team status.
Tue., 20 Jan., Quiz #1.
Tue., 3 Feb., Quiz #2.
Tue., 10 Feb., Catch-up, Review for Mid-term Exam.
Thu., 12 Feb., Mid-term Exam.
Extended: Fri., 27 Feb., 11:55pm: Deadline to deposit a working "draft"
version of your AI in the EEE DropBox (REQUIRED; see minimal requirements).
Tue., 24 Feb., Quiz #3.
Tue., 10 Mar., Quiz #4; Computational Neuroscience (Michael Beyeler guest
lecture).
Thu., 12 Mar., Catch-up, Review for Final Exam.
Fri., 13 Mar., 11:55pm: Deadline to deposit the final version of your AI in
the EEE DropBox.
Thu., 19 Mar., 1:30-3:30pm: Final Exam.
This is a broad
introductory survey course. We will move rapidly through the basic fundamentals
of a large number of AI topics. For every topic that we touch, there are
specialized techniques available to the sophisticated practitioner that will go
beyond what we are able to cover in 10 short weeks. The bright, interested, and
motivated student is encouraged to pursue advanced studies. In any case, if you
work hard, study hard, and master the presented material, you will emerge from
this course with a basic grasp of some of the fundamental methods that we use
to engineer intelligent systems.
The
course is based on, and the UCI bookstore has, the 3rd edition. The
assigned textbook reading is required, and is fair game for quizzes and
exams. You
place yourself at a distinct disadvantage if you do not have the textbook. I expect that you have a personal copy
of the textbook, and quizzes and exams are written accordingly.
Please
purchase or rent your own personal textbook for the quarter (and then resell it
back to the UCI Bookstore at the end if you don't want it for reference).
Please do not jeopardize
your precious educational experience with the false economy of trying to save a
few dollars by not having a personal copy of the textbook.
Also,
for your convenience, I have requested that a copy of the textbook be placed on
reserve in the UCI Science Library. There is a two-hour check-out limit. However,
please understand that with high student enrollments, it is unrealistic to
expect that these thin reserves will always be available when you need
them. Please
purchase or rent your own personal textbook.
I do deplore the high cost of textbooks. You are likely to find the book cheaper
if you search online at EBay.com, Amazon.com, and related sites.
A
student kindly contributed this link to a blog that offers a PDF of the course
textbook, for which I cannot vouch, but which may be helpful:
http://crazy-readers.blogspot.com/2013/08/artificial-intelligence-modern-approach.html
You
can also try to search the Internet for “artificial intelligence a modern
approach pdf 3rd edition”. Several more hits turned up the last time I
did so.
A
student kindly contributed the following suggestion, for which I cannot vouch,
but which may be helpful:
Hello,
I just wanted to point out that there does exist an
international edition of the book which can be bought for around $40-50. I
cannot comment on what specific differences there are for this particular book,
though they are usually very small (exercises moved around, etc).
Obviously, it is in paperback.
http://www.valorebooks.com/affiliate/buy/siteID=e79mzf/ISBN=0136042597
http://www.biblio.com/books/360025589.html
Personally I plan on using this book for a while so I
bought the hardcover version, but I just wanted to point out that this is an
option for those looking for a more 'economical' route.
~ XXXXXX [name anonymized to protect student privacy]
v Your Bonus Points, if any, should be visible to you in EEE GradeBook.
If for some reason you have been awarded a Bonus Point, but you did not get a
notification from me or it did not appear in EEE GradeBook,
please do not hesitate to email me as a reminder, just to avoid an unlikely
error.
The following represents a preliminary syllabus. Some changes in the
lecture sequence may occur due to earthquakes, fires, floods, wars, natural disasters,
unnatural disasters, or the discretion of the instructor based on class
progress.
Background Reading and Lecture Slides will be changed or revised as the
class progresses at the discretion of the instructor.
Please note: I sometimes
tweak or revise the lecture slides before or after the lecture, based upon
perceived improved student comprehension; please, always ensure that you have
the most current, up-to-date version.
Please read the assigned textbook reading and review the
lecture notes in advance of each lecture, then again
after each lecture.
Tue., 6 Jan., Introduction, Agents.
Read
in Advance: Textbook Chapters 1-2.
Lecture
slides: Introduction, Agents [PDF; PPT].
Optional Cultural Interest:
IBM Watson: Final Jeopardy! and the Future of Watson
AI vs. AI.
Two chatbots talking to each other.
Optional
Reading:
John
McCarthy, “What
Is Artificial Intelligence?”
AAAI,
AI Overview.
Thu., 8 Jan., Uninformed Search.
Read
in Advance: Textbook Chapter 3.1-3.4.
Lecture
slides (three parts):
(1)
Introduction to Search [PDF; PPT]; and
(2)
Uninformed Search [PDF; PPT].
Optional Cultural Interest:
Boston Dynamics Big Dog (new
video March 2008)
Cheetah Robot runs 28.3 mph;
a bit faster than Usain Bolt
Optional Reading:
Newell & Simon’s “Symbols and Search” Turing
Award Lecture (1976).
Herbert
Simon was awarded a Nobel Prize (in economics, 1978).
Tue., 13 Jan., Heuristic Search.
Read
in advance: Textbook Chapter
3.5-3.7.
Lecture slides: Heuristic Search [PDF; PPT].
Optional
Cultural Interest:
Infinite Mario AI - Long
Level
An attempt at a Mario AI using the A* path-finding algorithm.
It
claims the bot won both Mario AI competitions in 2009.
“You
can see the path it plans to go as a red line, which updates when it detects
new obstacles at the right screen border. It uses only information visible on
screen.”
See
also http://www.marioai.org/.
Interesting
search algorithm visualization web page.
Optional Cultural Interest:
A* Search in Interplanetary Trajectory Design, courtesy of Eric Trumbauer, former
CS-271 student.
Eric
comments, “One thing to possibly discuss with the last slide is that the
itinerary it settles on does stay at a higher energy for a little bit until it
passes closest to Europa, maximizing the velocity before the insertion sequence
to the lower energy. This is indeed
optimal behavior, as opposed to immediately reducing its energy as a Greedy
Best First algorithm using this heuristic would want to do.”
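Eric’s point can be reproduced on a toy example. The sketch below
(illustrative Python; the graph, edge costs, and heuristic are invented for
the example) runs the same best-first loop with f(n) = g(n) + h(n) for A* and
f(n) = h(n) for Greedy Best-First:

```python
import heapq

def search(start, goal, neighbors, h, greedy=False):
    """Best-first search over a weighted graph.

    greedy=False: A*, priority f(n) = g(n) + h(n), optimal when h is
                  admissible (never overestimates).
    greedy=True:  Greedy Best-First, priority f(n) = h(n) only, which
                  may commit to a locally promising but costlier route.
    `neighbors(state)` yields (successor, step_cost) pairs.
    """
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                f = h(nxt) if greedy else g2 + h(nxt)
                heapq.heappush(frontier, (f, g2, nxt, path + [nxt]))
    return None, float('inf')
```

On a graph where a cheap-looking edge leads to an expensive finish, Greedy
returns the costlier route first, while A* waits until the total-cost evidence
is in, which mirrors Eric’s trajectory example.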
A* Search in Protein Structure Prediction, Lathrop and Smith, J. Mol. Biol.
255 (1996) 641-665.
Optional Reading:
Alan Turing’s classic paper on AI (1950).
Alan Turing is the most famous computer scientist of all time.
The Turing Award is the highest honor in computer science.
The Turing Machine is still our fundamental theoretical model of computation.
Turing’s work on the Enigma code in WWII led to programmable computers.
AAAI/AI Topics: The Turing Test: “Can Machines Think?”
Wikipedia “Computing Machinery and Intelligence”
Thu., 15 Jan., Local Search.
Read in advance: Textbook Chapter 4.1-4.2.
Lecture
slides (two parts):
(1)
Local Search [PDF;
PPT]; and
(2)
Representation [PDF;
PPT].
Optional URLs:
“Hill
Climbing with Simulated Annealing”
The
program learns to build a car using a genetic algorithm.
If
you let this program run for a long time (>> 30 generations), you will
see that eventually it produces cars well suited to the terrain. This outcome
illustrates a general theme of genetic algorithms: very, very slow; but,
eventually, good performance. After all, it took ~3.6 billion years to evolve
humans from bacteria (http://en.wikipedia.org/wiki/Timeline_of_evolutionary_history_of_life).
Please note that this eventual good performance of genetic algorithms is
conditional upon a representation that allows good solutions to sub-problems to
be combined simply, by cross-over, into a globally good solution; if the vector
position of the features is completely randomized within the chromosome, any
such good performance is lost.
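The theme above can be made concrete with a minimal genetic-algorithm sketch
(illustrative Python; the bit-string chromosomes, fitness function, and
parameter values are invented for the example, not taken from the car
program):

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Combine two fixed-length chromosomes at a random cut point."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def evolve(population, fitness, generations=30, mutation_rate=0.05):
    """Minimal generational GA: 2-way tournament selection, one-point
    crossover, bit-flip mutation.

    Crossover can only assemble good sub-solutions into a globally good
    solution when related features sit near each other on the chromosome;
    randomizing feature positions would let the cut point destroy those
    "building blocks", losing the eventual good performance.
    """
    for _ in range(generations):
        next_gen = []
        for _ in range(len(population)):
            a = max(random.sample(population, 2), key=fitness)
            b = max(random.sample(population, 2), key=fitness)
            child = list(one_point_crossover(a, b))
            for i in range(len(child)):
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]   # flip the bit
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```

Run on the toy "maximize the number of 1 bits" problem, the population drifts
slowly but steadily toward all-ones, echoing the very-slow-but-eventually-good
behavior described above.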
Optional
Reading:
Minton et al., 1990, AAAI "Classic Paper" Award recipient in 2008.
How to solve the 1 Million Queens problem and schedule space
telescopes.
Optional
Lecture Slides:
Optional Ungraded Homework:
Fri., 16 Jan., midnight: Deadline
to notify the TA (mbeyeler@uci.edu) about your team status.
(1.1)
What is your team name --- creativity is encouraged!
(1.2)
Who is your partner? or are you a
solo team?
There
is an EEE CS-171 Message Board "Seeking project programming team
partner" intended for use by students seeking a project partner.
Tue., 20 Jan., Quiz
#1 (answer key here);
start Games/Adversarial Search.
Read in advance: Textbook
Chapter 5.1-5.5.
Lecture
slides: Games/Adversarial Search/MiniMax Search [PDF; PPT].
Optional
Cultural Interest:
RoboCup 2012 Standard Platform: USA / Germany (Final).
Optional URL: “Complete Map of Optimal Tic-Tac-Toe Moves.”
Optional
Reading:
Campbell, et al., 2002, Artificial
Intelligence, “Deep Blue.” [PDF]
(URL
http://www.sciencedirect.com/science/article/pii/S0004370201001291)
Details about the AI system that beat the human chess champion.
Thu., 22 Jan., finish Games/Adversarial Search.
Read in advance: Textbook
Chapter 5.1-5.5.
Lecture
slides: Games/Adversarial Search/Alpha-Beta Pruning [PDF; PPT].
Optional Cultural Interest:
Arthur
C. Clarke “Quarantine.”
A science fiction short story written by a classic master, in 188
words.
He
was challenged to write a science fiction short story that would fit on a
postcard.
Optional Reading: Chaslot, et al.,
“Monte-Carlo
Tree Search: A New Framework for Game AI,”
in Proceedings
of the Fourth Artificial Intelligence and Interactive Digital Entertainment
Conference,
AAAI Press, Menlo Park, pp. 216-217, 2008.
An interesting combination of Local Search (Chapter 4) and Game
Search (Chapter 5).
Optional URL: “Everything
Monte Carlo Tree Search” website.
Optional Ungraded Homework:
Tue., 27 Jan., start Constraint Satisfaction.
Read in advance: Textbook
Chapter 6.1-6.4, except 6.3.3.
Lecture
slides: Constraint Satisfaction Problems [PDF;
PPT].
Optional Cultural Interest:
Thu., 29 Jan., finish
Constraint Satisfaction.
Read
in advance: Textbook Chapter 6.1-6.4, except 6.3.3.
Lecture
slides: Constraint Propagation [PDF;
PPT].
Optional Cultural Interest:
Tesla Model S P85D AWD and
auto-pilot demo
Google Car: It Drives Itself
- ABC News
[Part 1/3] The Evolution of
Self-Driving Vehicles
[Part 2/3] How Google's
Self-Driving Car Works
[Part 3/3] Google's
Self-Driving Golf Carts
DARPA Urban Challenge Highlights
DARPA Urban Challenge: Ga
Tech hits curb
DARPA Urban Challenge - Sting
Racing crash
[DARPA] Team Oshkosh attempts
forced Entry to Main Exchange
[DARPA] Alice's Crash
(spectator view)
[DARPA] Alice's Crash
(road-finding camera) [different view of above; long]
DARPA Urban Challenge Crash
Cornell MIT
DARPA Urban Challenge - robot
car wreck [different view of above]
Optional
Reading:
Autonomous car - Wikipedia,
the free encyclopedia
“Autonomous
Driving in Traffic: Boss and the Urban Challenge” (2009).
Tue.,
3 Feb., Quiz #2 (answer key here);
start Propositional Logic.
Read
in advance: Textbook Chapter 7.1-7.4.
Lecture slides: Propositional Logic A [PDF; PPT].
Optional
Cultural Interest (snakes, spiders, and a talking head!):
“Asterisk - Omni-directional
Insect Robot Picks Up Prey #DigInfo”
“Freaky AI robot, taken from Nova science now”
Optional
Ungraded Homework:
Thu., 5 Feb., finish Propositional Logic.
Read
in advance: Textbook Chapter 7.5 (optional: 7.6-7.8).
Lecture slides: Propositional Logic B [PDF; PPT].
Additional
Discussion lecture slides [PDF].
Optional
Cultural Interest:
“Janken
(rock-paper-scissors) Robot with 100% winning rate”
Tue.,
10 Feb., Catch-up,
Review for Mid-term Exam.
Read in advance: Textbook Chapters 1-7 (only sections assigned above).
Lecture
slides: Catch-up, Review, Question&Answer [PDF; PPT].
Optional
Cultural Interest:
“Quadrocopter Pole Acrobatics”
“Nano
Quadcopter Robots swarm video”
The
Stanford Autonomous Helicopter performing an aerobatic airshow under computer
control:
“Stanford Autonomous
Helicopter - Airshow #1”
“Stanford Autonomous
Helicopter - Airshow #2 Redux”
No
homework --- study for the Mid-term Exam.
Thu., 12 Feb., Mid-term Exam (answer key here).
Happy Lincoln’s Birthday!
Read in advance: Textbook Chapters 1-7 (only sections assigned above).
Lecture
slides: Catch-up, Review, Question&Answer
(above).
Optional Cultural Interest:
“hitchBOT | Making my way across
Canada, one ride at a time.”
“Canada's
hitchBOT travels 4,000 miles to test human-robot
bonds --- LA Times.”
Tue.,
17 Feb., Review Mid-term Exam; start First Order Logic
Read in advance: Textbook Chapter 8.1-8.2.
Lecture
slides: First Order Logic Syntax [PDF; PPT].
Optional Reading:
Cyc is a large-scale knowledge-engineering project:
“CYC: A Large-Scale Investment in Knowledge Infrastructure,” Lenat, 1995
“Searching for Commonsense: Populating Cyc from the Web,” Matuszek et al., AAAI 2005
Cyc - Wikipedia, the free encyclopedia.
Optional
Ungraded Homework:
Thu., 19 Feb., finish First Order Logic; Knowledge
Representation.
Happy Chinese New Year! Kung Hei Fat Choy! Wishing you Great Happiness and Prosperity!
Read in advance: Textbook Chapter 8.3-8.5.
Lecture slides (two parts):
(1) First Order Logic Semantics [PDF; PPT]; and
(2) First Order Logic
Knowledge Representation [PDF;
PPT].
Optional
Lecture slides: First Order Logic Inference [PDF; PPT].
Read in advance: Textbook
Chapter 9.1-9.2, 9.5.1-9.5.5.
Tue., 24 Feb., Quiz #3 (answer key here); Probability, Uncertainty, Bayesian Networks.
Read in advance: Textbook Chapters 13, 14.1-14.2.
Lecture
slides (two parts):
(1)
Reasoning Under Uncertainty [PDF; PPT].
(2)
Bayesian Networks [PDF;
PPT].
Optional
Cultural Interest:
Video of Judea Pearl’s 2011 Turing Award lecture.
The Mechanization of Causal
Inference: A “mini” Turing Test and Beyond.
Optional URL: “Peter Norvig 12. Tools of AI: from logic to probability.”
Optional
Cultural Interest:
“Flexible Muscle Based
Locomotion for Bipedal Creatures” --- video
“Flexible Muscle-Based Locomotion for Bipedal Creatures” --- paper.
Read in advance: Textbook Chapter 18.1-18.4.
Lecture
slides: Intro to Machine Learning [PDF; PPT].
Optional
Reading:
Ferrucci, et al., 2010, “Building
Watson: An Overview of the DeepQA Project”
“Machine learning”
- Wikipedia, the free encyclopedia
“Data mining” -
Wikipedia, the free encyclopedia
Optional
URL: “Google
reveals it is developing a computer so smart it can program ITSELF.”
Optional URL: Proof that Decision Tree information gain is always non-negative (problem 3, pp. 4-5).
Optional Ungraded Homework:
Tue.,
3 Mar., finish Learning from Examples.
Read in advance: Textbook Chapter 18.5-18.12, 20.1-20.3.2.
Lecture slides:
Learning Classifiers [PDF; PPT].
Optional
Lecture slides: Viola & Jones, Learning, Boosting, Vision [PDF; PPT] (read
the two papers immediately below)
Optional Reading: Viola & Jones, 2004, “Robust Real-Time Face Detection”
Optional Reading: Freund & Schapire, 1999, “A Short Introduction to Boosting”
Optional
Reading: Danziger, et al., 2009, “Predicting
Positive p53 Cancer Rescue Regions Using Most Informative Positive (MIP) Active
Learning”
Optional
Reading: Kim & Xie, 2014, “Handwritten
Hangul recognition using deep convolutional neural networks”
Optional
Reading: Baldi, Sadowski,
& Whiteson, 2014, “Searching
for Exotic Particles in High-Energy Physics with Deep Learning”
Optional
Reading: Gaffney, et al., 2007, “Probabilistic
clustering of extratropical cyclones using regression
mixture models”
Optional
Reading: Lathrop, et al., 1999, “Knowledge-based Avoidance of
Drug-Resistant HIV Mutants”
Optional Ungraded Homework:
Thu., 5 Mar., Clustering
(unsupervised learning) and Regression (statistical numeric learning).
Read in advance: Textbook Chapter 18.6.1-2, 20.3.1.
Lecture slides:
Clustering (Unsupervised Learning) [PDF; PPT].
Optional
Cultural Interest:
“IBM simulates 530 billion neurons, 100 trillion synapses on supercomputer”
“Speech Recognition Breakthrough for the Spoken, Translated Word”
Thu., 12 Mar., Catch-up, Review for Final Exam.
Read in advance: Textbook, review all assigned reading.
Lecture
slides: Review, Catch-up, Question&Answer [PDF; PPT].
Fri., 13 Mar., 11:55pm: Deadline to deposit the final version of your
AI in the EEE DropBox.
Your
EEE DropBox submission must be a single
“zipped” file named “yourLastName_yourUCINetID_yourTeamName.”
Please
see Fri., 20 Feb. (above), for details
of what to submit (plus,
‘doc’ must contain your Project Report).
Please
deposit only one submission per team.
Thu., 19 Mar., 1:30-3:30pm: Final Exam (answer key here).
Connect-K Game.
This project corresponds to Game Search (Chapter 5 in your book). Your
job is to write an AI agent that can beat you at Connect-K, i.e., to write the
adversarial search (game search) controller for a video game world. Shells are
available in C++ and Java.
The tournament shell
will allow you to run different versions of your AI against each other, or
against your fellow students’ AIs, to explore which “great
ideas” make things better or worse. Bright and clever students might
implement a Local Search method (perhaps using genetic algorithms?) to explore
different improvements to their AI automatically, while they sleep?
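The adversarial search at the heart of the project can be sketched as
depth-limited alpha-beta minimax. The outline below is a hypothetical Python
illustration (the actual shells are in C++ and Java); `moves`, `apply_move`,
and `evaluate` are made-up callback names standing in for whatever your shell
and board representation provide:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    """Depth-limited alpha-beta minimax; returns (value, best_move).

    `moves(state)` lists legal moves, `apply_move(state, m)` returns the
    successor state, and `evaluate(state)` is your static evaluation
    function -- the part your cleverness and ingenuity will make smart.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        value = -math.inf
        for m in legal:
            score, _ = alphabeta(apply_move(state, m), depth - 1, alpha,
                                 beta, False, moves, apply_move, evaluate)
            if score > value:
                value, best_move = score, m
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent will never allow this line
                break
        return value, best_move
    else:
        value = math.inf
        for m in legal:
            score, _ = alphabeta(apply_move(state, m), depth - 1, alpha,
                                 beta, True, moves, apply_move, evaluate)
            if score < value:
                value, best_move = score, m
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value, best_move
```

With iterative deepening around this loop and a well-tuned evaluation
function, the same skeleton scales from toy trees to Connect-K boards.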
I expect to be able to
run a tournament within which your AI controllers will compete against each
other for Bonus Points. Everyone’s AI will be entered into the tournament
automatically; the bonus points are simply free, based on how many games your
AI wins against other AIs. The top 10% winners will get 10% added to their
project grade as a bonus; the second 10% will get 9%; the third 10% will get
8%; and so on.
v Effective immediately, the project grading structure is changed as follows:
* AI_Poor, AI_Average, and AI_Good also will be entered into the Final
Tournament (these AIs are available in the "student coding resources" part of
the Project section of the class website).
* You will lose 20% of your Project grade if your AI does not beat or tie
AI_Poor at least once in the Final Tournament.
* You will lose 10% of your Project grade if your AI does not beat or tie
AI_Average at least once in the Final Tournament (total loss of 30% if your
AI loses to both AI_Poor and AI_Average).
* You will GAIN 10% BONUS added to your Project grade if your AI beats or
ties AI_Good at least once in the Final Tournament.
* As previously stated, the top 10% in the Final Tournament will receive 10%
BONUS, the second 10% will receive 9% BONUS, the third 10% will receive 8%
BONUS, and so on.
* So, if you are clever and your AI is smart, you can receive up to a total
of 20% BONUS.
The Project Report
template is available here [Word; PDF].
An example
dumb game is available; an example
smart game is available; a Project
Specification is available; a Report Template is available [Word; PDF]; a
collection of student coding
resources is available.
The coding resources
include:
(1) A Java shell.
(2) A C++ shell.
(3) A tournament shell,
which will let you play different versions of your AI against themselves to
refine your evaluation function.
(4) Three example AIs,
which you or your AI can play against: a good AI, an average AI, and a poor AI.
(5) The “DummyAI” source code, which your cleverness and
ingenuity will make smart.
(6) Several readme*.txt
files: readme.txt, readme-cPlusPlus.txt, readme-tournament.txt.
(7) ConnectK
hints, caveats, and heuristics.
(8) A changelog.txt.
Please note:
We'll run the tournament on SGE or a lab machine. The C++ target platform is x86; write your code to run on any x86 machine. The OS is CentOS 6. We will most likely compile your code on CentOS 6 (RHEL 6) x86_64. Machines in openlab.ics.uci.edu (family-guy.ics.uci.edu) run CentOS 6.
Please note:
Connect-K, like most board games of its type, has a built-in advantage for the first player (e.g., chess grandmasters try to win when they are white and play first; they try to draw when they are black and play second). A “fairer” game would have the first player make one move; then the second player make two moves; then the first player make two moves; and so on, alternately making two moves each, to neutralize the first-move advantage. The point of this exercise is for you to write a “smart” AI, not to win board games; nevertheless, be sure to run your AI both as first and as second player, then average the results.
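The seat-averaging recommended above can be sketched as follows; `playGame` and its 1.0/0.5/0.0 scoring convention are illustrative assumptions for this sketch, not part of the tournament shell:

```cpp
#include <functional>

// Illustrative way to neutralize the first-move advantage when measuring an
// evaluation function: play the same number of games in each seat and average.
// playGame is a stand-in (an assumption, not the shell's API): it returns
// 1.0 / 0.5 / 0.0 for a win / draw / loss by *my* AI, with the argument
// saying whether my AI moves first.
double averagedScore(const std::function<double(bool)>& playGame,
                     int gamesPerSeat) {
    double total = 0.0;
    for (int g = 0; g < gamesPerSeat; ++g) {
        total += playGame(true);   // my AI plays first
        total += playGame(false);  // my AI plays second
    }
    return total / (2.0 * gamesPerSeat);  // mean score over both seats
}
```

With this scoring, an AI that only ever wins from the first seat averages 0.5, the same as one that only wins from the second seat, so seat assignment no longer biases the comparison between two versions of your evaluation function.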
Please note:
The shells may change as the quarter progresses. If so, we will try to keep the interface the same, so that all you need to do is change the surrounding shell.
Several of my various CS-171 project shells were written by former CS-171 students who became interested in AI and signed up for CS-199 to pursue that interest by writing interesting AI project shells. Please let me know if this is of interest to you (a CS-171 grade of A- or better is required).
Previous CS-171 Quizzes, Mid-term exams, and Final exams are available here as study guides.
As an incentive to study this material, at least one question from a previous Quiz or Exam will appear on every new Quiz or Exam. In particular, questions that many students missed are likely to appear again. If you missed a question, please study it carefully and learn from your mistake --- so that if it appears again, you will understand it perfectly.
Also, a student has recommended ‘quizlet.com’ as a good online study resource. While I cannot vouch for it, apparently it contains several good study aids for your textbook.
A kind and helpful student has brought to my attention that the PDF reader on a Mac (iPad) sometimes has difficulty correctly reading the previous CS-171 tests. For example, in problems #3a and #3b on Quiz #2 from SQ’2004, the erroneous “Y” on the key was corrected by an overlay in the PDF file of a red X through the Y, with a red N next to it. However, this overlay is invisible on a Mac (iPad), leading to an incorrect understanding of the right answer. Sometimes the Mac PDF software appears to be incompatible with the Windows PDF software. If you are using a Mac to read the previous CS-171 test PDF files and something looks wrong, please look at it again from Windows. If it still looks wrong, please bring it to my attention.
Winter Quarter 2015:
Mid-term Exam and key.
Final Exam and key.
Fall Quarter 2014:
Mid-term Exam and key.
Final Exam and key.
Winter Quarter 2014:
Mid-term Exam and key.
Final Exam and key.
Fall Quarter 2013:
Mid-term Exam and key.
Final Exam and key.
Fall Quarter 2012:
Mid-term Exam and key.
Final Exam and key.
Winter Quarter 2012:
Mid-term Exam and key.
Final Exam and key.
Spring Quarter 2011:
Mid-term Exam and key.
Final Exam and key.
Spring Quarter 2004:
Spring Quarter 2000:
Additional Online Resources may be posted as the class progresses.
Textbook website for Artificial Intelligence: A Modern Approach (AIMA).
AAAI Digital Library of more than 10,000 AI technical papers.
AAAI AI Magazine.
AAAI Author Kit.
Academic dishonesty is unacceptable and will not be tolerated at the University of California, Irvine. It is the responsibility of each student to be familiar with, and without exception to adhere to, UCI's current academic honesty policies.
This class has a “zero tolerance” policy toward academic dishonesty. Any student who engages in cheating, forgery, dishonest conduct, plagiarism, or collusion in dishonest activities will receive an academic evaluation of “F” for the entire course, with a letter of explanation to the student's permanent file. The ICS Student Affairs Office will be involved at every step of the process.
Please take the time to read the current UCI Senate Academic Honesty Policy (UC Irvine Academic Senate Manual, Appendix VIII), also reproduced in the Appendix of the UCI Catalogue, which states, among other things:
“.... Academic dishonesty is unacceptable and will not be tolerated at the University of California, Irvine. Cheating, forgery, dishonest conduct, plagiarism, and collusion in dishonest activities erode the University’s educational, research, and social roles. They devalue the learning experience and its legitimacy not only for the perpetrators but for the entire community....”
Please also review the ICS Undergraduate Student Policies: Academic Integrity, which states, among other things:
“Academically Honest Conduct: To be academically integrous means holding to values such as honesty, fairness, respect, and accountability in your scholastic pursuits. Students are expected to follow the rules and guidelines established by instructors for assignments and exams, and to accept responsibility for his or her own work....”
Please also review Academic misconduct at UC Irvine. A summary of that information is:
· Any student cited for Academic Misconduct will have the report kept on file for 5 years.
· During this time, a second incident report would likely trigger suspension or dismissal from UCI.
· A single incident on file usually also results in the student being ineligible for honors at graduation.
· Many graduate and professional programs request this information and it may affect admission.
Please also review the UCI Code of Student Conduct: Grounds for Discipline, which lists, among other offenses:
· 102.01: All forms of academic misconduct including but not limited to cheating, fabrication, plagiarism, or facilitating academic dishonesty. Refer to Academic Senate Policy on Academic Honesty.
· 102.02: Other forms of dishonesty including but not limited to fabricating information, furnishing false information, or reporting a false emergency to the University.
The policies in all of these documents will be adhered to *scrupulously*. No exceptions.
We aggressively watch and monitor all Quizzes and Exams. All Quizzes and Exams include your row#, seat#, ID#, and ID#-to-right, so that we can reconstruct *exactly* the seating pattern in the exam room. We know next to whom you were seated, and so we can easily check for suspicious patterns of answers. All source code and all written reports are subject to automatic text-based plagiarism detection.
My responsibility is to the honest students who succeed based on their own merit and hard work. If you are cheating, then we are *aggressively* out to catch you. Penalties are severe. The worst part of my job is to have to catch and penalize a cheating student. Please do not do this to me or to yourself.
Dr. Lathrop strives very hard to create a level playing field for all students.