CS - AI - Lect 1
So one good way to see what people are thinking and dreaming about technology, and in
particular artificial intelligence, is to look at things like science fiction.
C-3PO
Who's the big gold one? C-3PO (humanoid robot from Star Wars)
o So no, we haven't completely been able to build C-3PO.
o How about the other guy?
We don't actually know how to build AI systems that do a little bit of everything.
We know how to build systems that do one thing very well if they have enough data, if
they have enough compute.
AI Generations
But fundamentally, this is the '70s, and in the '70s, we think about what AI and that kind
of technology could mean for the future.
Then come killer robots from the future. It's getting a little darker. If you actually look at it, these
are the Terminators.
o Anyway, in the '80s, we start worrying about hardware. Maybe hardware could be
scary.
o Maybe this technology that we're building could turn against us.
In the '90s, we realized that software can be scary. Software can be very scary.
o And in fact, that's the difference from the hardware worry. And so you can see here how
you're moving from hope that this technology can make our lives better to, what if it doesn't?
Reality Check
So let's take stock of where we are and what we can and can't do.
So let's switch from science fiction and how we're thinking about the future to what's in the
news.
So it used to be that science fiction writers were the ones who got to think about, what
is AI going to do and how could it change the world?
But now everyday reporters get to watch developments and think about, what's
happening right now and how can we communicate that to society? So you've probably
seen a lot of these things in the news.
There were games, like chess, that we understood very well how to play. But Go was something that was very hard to
make progress on with AI methods.
There's all these things that you see on the news that we can do. But there's a lot of
things we can't do.
Autonomous Cars
And you can start to see in these forward-looking worries starting to reflect what you see in
the movies.
What's the difference between what we can do, what we can't do?
What are the big ideas that are going to keep recurring as we go through generation after
generation of solving things, hitting walls, breaking through those walls, and advancing this
technology?
So what is AI?
Actually, famously, AI has a self-defeating definition.
Because if you say, well, AI is all those things that require human intelligence, then as soon as a machine can do one of those things, it apparently no longer requires human intelligence, and it stops counting as AI.
It's the science of making machines that think rationally.
Let's take that to mean: think correctly.
They work very differently than humans, but they come up with similar
moves.
So people decided that maybe we should be building systems not based on
how they think, but on how they act.
This is actually a very important shift from the underlying thought processes to
the resulting actions.
Turing Test
And who's heard of the Turing test?
Very famous initial AI idea from Alan Turing, which says, one way to tell intelligence is to put a
computer in one room and a human in another room, and have a human talk, presumably by
typewriter, to both entities and see if you can tell them apart.
o And if they're functionally equivalent, well, you've achieved something.
o Maybe you've achieved intelligence or maybe you've just achieved human
bluffability.
o Maybe that's like intelligence, maybe it was not.
Rational
In everyday usage, we mean maybe something like, unemotional.
But when we talk about rational in this course, we mean something very specific and technical.
It's not about what those goals are.
o You can have a robot that's designed to clean up dirt as well as possible.
o You might call it vacuum cleaner.
o You can have another robot that's designed to make things as messy as possible.
o And those two things might both be rational if they're optimally achieving the goals
that have been set for them.
o So rationality only concerns the decisions that are being made, not the thought process
behind them.
o There can be many ways to get to a correct decision.
o You can get that through computation, through looking at data, past experiences.
o And those goals are always going to be expressed in terms of the utility of different
outcomes.
o You're in a situation, you have some actions you can take, it'll affect the world in some
possibly unknown way.
o And then you have utilities over the results.
o We'll unpack all of this during the rest of the lecture.
o But being rational, in a nutshell, is maximizing your expected utility.
o So this is an introduction to artificial intelligence.
o It would probably be better if we titled this course Computational Rationality.
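The "maximize expected utility" idea in the bullets above can be sketched in a few lines of code. Everything below is a hypothetical illustration: the actions, probabilities, and utility numbers are made up, not from the lecture.

```python
# A minimal sketch of rational decision-making as expected-utility
# maximization. Actions, outcomes, and numbers are invented for
# illustration only.

def expected_utility(action, outcomes):
    """Sum the utility of each outcome, weighted by its probability."""
    return sum(prob * utility for prob, utility in outcomes[action])

def rational_choice(outcomes):
    """A rational agent picks the action with the highest expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Each action maps to a list of (probability, utility) pairs.
outcomes = {
    "touch_fire": [(0.9, -100), (0.1, 0)],  # probably get burned
    "avoid_fire": [(1.0, 5)],               # stay safe
}

print(rational_choice(outcomes))  # avoid_fire
```

Note that nothing here says how the probabilities and utilities were obtained; rationality only concerns the decision given them, exactly as the bullets say.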
What does maximize mean?
Well, one, brains are very good at making rational decisions, but they're not perfect and
you can get distracted by their limitations.
But really, the main issue is that brains aren't modular like software.
We can't pull bits out and look at them, and see what they do, and put them back in,
and replicate them, and follow
that kind of modularity.
To the extent that brains are modular, it doesn't look like software modularity.
There's a famous saying that brains are to intelligence as wings are to flight.
And one of the key things that made it possible to start getting automated flight was
when we stopped trying to make them flap their wings.
So sometimes you want to follow the existence proof and sometimes you want to build
things differently, because there's something about the engineering context
which changes those assumptions.
But that doesn't mean we haven't learned anything from the brain. And we certainly
have a ton more to learn from the brain.
There are a couple of things we learn from the brain.
One of the main ones is that there's really two components to making good decisions.
Remember, that's what AI is going to be about, making good decisions in a context.
One is memory.
Data.
You can make a good decision because you remember your experiences in the past.
Or an advantage humans have is remembering reading about other people's
experiences, so you don't actually have to make all the mistakes yourself.
So for example, one reason I might not touch that fire is, I've done that before and it
didn't go well, so I'm not going to do it again.
Another way to make good decisions is simulation, which is basically computation.
Unrolling the consequences of your actions according to a model.
What's going to happen next?
And playing what if in your head so that you can think through the consequence of
things without actually trying them.
So maybe I don't touch that fire because I can like play it forward in my head and realize
this is going to end poorly based on my model of how things work.
And of course for humans, those things are all intermeshed.
That model came from data and experiences.
And so in this class, one of the things we're going to do is talk a lot about both how
these two ways of making decisions are different, and also about how to interleave
them as
we get further into the course.
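The "simulation" way of deciding, unrolling consequences with a model before acting, can be sketched as a toy lookahead. The world model, actions, and rewards below are hypothetical, chosen to echo the fire example, and are not from the lecture.

```python
# Toy sketch of decision-by-simulation: play each action forward
# with a model of the world, then pick the best-scoring rollout.
# Model and rewards are invented for illustration.

def simulate(state, action, model, depth):
    """Unroll `action` for `depth` steps and return the total reward."""
    total = 0
    for _ in range(depth):
        state, reward = model(state, action)
        total += reward
    return total

def plan(state, actions, model, depth=3):
    """Choose the action whose simulated rollout scores best."""
    return max(actions, key=lambda a: simulate(state, a, model, depth))

# Hypothetical model: walking toward the fire hurts once you are close.
def model(distance_to_fire, action):
    if action == "toward_fire":
        distance_to_fire -= 1
        reward = -10 if distance_to_fire <= 0 else 0
    else:  # "away_from_fire"
        distance_to_fire += 1
        reward = 1
    return distance_to_fire, reward

print(plan(2, ["toward_fire", "away_from_fire"], model))  # away_from_fire
```

Swapping the hand-written `model` for one learned from data is exactly the interleaving of the two approaches the lecture describes.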
So to look at this course broadly and think about, what are you guys going to go through
for the next semester?
Well, the first part is really about getting intelligence or smart behavior emerging from
computation.
So we're going to think about search, satisfying constraints, thinking about uncertainty
and adversariality in the world.
And this is going to be about algorithms that, through computation, take a situation and
figure out something smart to do.
So the smart behavior comes from algorithms, from computation.
The second part of the course is going to be about making good decisions and having
intelligence on the basis of data and statistics.
And this is where machine learning comes in.
Here are all of your experiences.
Here's a new situation.
How should you act on the basis of what you've seen previously?
And of course then, we'll be able to interleave these things as we get further into the
course.
What are you actually going to do with all of these methods and all of this intelligent
behavior?
Think about things like natural language, and vision, and robotics, and games. So I think
Pieter's going to come and tell you a little bit about what's happened, the story so far in AI.
History of AI
[VIDEO PLAYBACK]
[MUSIC PLAYING]
- Hello again.
Dr. Wiesner, what really worries me today is, what's going to happen to us if machines
can think?
If you had asked me that question just a few years ago, I would've said it was very far-fetched.
But it is confusing.
I've got some film that will illustrate this point, which I think will amaze you.
I don't think for a very long time we're going to have a difficult problem distinguishing a
man from a robot.
But I think the computer will be doing the things that man do when we say they're
thinking.
I'm convinced that machines can and will think in our lifetime.
- I confidently expect that within a matter of 10 or 15 years, something will emerge from
a laboratory which is not too far from the robot of science fiction fame.
- They had to reckon with ambiguity when they set out to use computers to translate
languages.
- A $500,000 super calculator, most versatile electronic brain known, translates Russian
into English.
Claims were made that the computer would replace most human translators.
When you go in for full scale production, what will the capacity be?
And this will be quite an adequate speed to cope with the whole output of the Soviet
Union in just a few hours of computer time a week.
- And finally, Mr. McDaniel, does this mean the end of human translators?
But as regards poetry and novels, no, I don't think we'll ever replace the translators for
that type of material.
- Mr. McDaniel, thank you very much.
[END PLAYBACK]
Let's take a look at the history of things that were being worked on at the time.
One of the first things people did was build a Boolean circuit model inspired by the
brain.
Turing wrote his paper "Computing Machinery and Intelligence," and the Turing test came
into existence.
Early AI programs played checkers, did some theorem proving, and could reason
about geometry.
And then in 1965, a complete algorithm came about for logical reasoning.
Just like you saw in this video, they said, oh, maybe in five years we'll have machine
translation solved.
So people thought, OK, we have these engines that can do logical theorem proving,
reasoning, and so forth.
If we can just put enough knowledge, enough facts together-- and fact one implies fact
two, those kind of propositions--
then maybe that allows us to reason about anything in the future and starts putting
more and more information in these systems.
And this settled into an AI winter, where very little work happened on AI in industry, and
very little government funding went to research labs
that tried to work on AI.
Then in the '90s, there was a resurgence through statistical approaches, where there
was a lot of combining of new statistical ideas with sub-field expertise.
And then 2012 onwards, deep learning started to come on the scene quite a lot.
And people got just as excited again, it seems, as they were back in the '50s and '70s.
But one fundamental difference right now is that AI is used in many, many industries.
And we'll give you some examples in the remainder of this lecture of many application
domains where AI is already useful and can be expected to become even more useful.
So we'll do a raise of hands for each one of these questions and see what you think.
OK.
Indeed.
But a robot that quietly plays back and forth with you is definitely available at this point.
DAN KLEIN: We already talked about this, but how about play a decent game of
Jeopardy?
Yup. Absolutely. So you might worry here that the computers are going to take all of our
game shows from us. So far, that hasn't happened.
PIETER ABBEEL: How about driving safely up a mountain road? Raise of hands.
So this actually happened in 2011. So this is a done thing. Under certain conditions and
so forth, but it's happened.
But this is actually really important. We'll give it a question mark.
Autonomous driving is getting better and better.
But this goes to show that just because you can do an initial demonstration of a technology
doesn't mean it's solved once you get to real-world conditions, complex environments, and
high safety requirements.
PIETER ABBEEL: How about buying a week's worth of groceries on the web?
The computer just uses Instacart and everything will show up.
DAN KLEIN: What about buy a week's worth of groceries at the Berkeley Bowl?
It's packed, you've got to make sure you don't bump your cart into other people.
And then you've got to distinguish 73 varieties of apple. And it's tricky, right?
I give this one a no, I can't send a robot to go do my Berkeley Bowl shopping yet.
PIETER ABBEEL: How about discovering and proving a new mathematical theorem?
If you give it a statement,it might figure out how to start from some actions to get to
whether that statement is true or not.
But the big question then still is, what should it try to prove? So many things to prove.
5 plus 7 is 12.
5 plus 8 is 13.
So many, so many things.
And so, when it's going to really decide what is worth proving and building abstractions
around is a whole other question.
DAN KLEIN: How about converse successfully with another person for an hour? Can we
do this?
But not in general. So there's a whole history of computers basically bluffing. And chatbots,
for a while, can bluff.
And sooner or later, you figure out that you're either talking to a computer or this
conversation is deeply weird in some other way.
PIETER ABBEEL: How about performing a surgical operation? So keep your hands up if
you say yes.
Now, keep your hand up if you're happy to have a robot do open-heart
surgery on you.
Couple of you. We do surgical robotics research in my lab. I'd love to find a few more
participants.
The surgeries that do happen with a robot tend to happen with a human operator
still fully controlling the robot to get the surgery done.
DAN KLEIN: What about, translate spoken Chinese into spoken English in real time?
All the pieces of this actually work pretty well, at least under limited context and
circumstances.
PIETER ABBEEL: How about folding laundry and putting away the dishes?
I think the mixed answer is pretty accurate in that there are some pretty cool initial results
starting to do this.
But it's nowhere near the level that you can just get a robot at home that's going to do
this for you.
DAN KLEIN: What about writing an intentionally funny story?
One fun thing about this lecture is, we can more or less keep the same list and over
time, things just move from one column to the other as AI begins to be able to do
these things better and better.
That said, this is a case where we still can't write intentionally funny stories.
So let me give you some examples of what this means. And not just examples, but what
it says about how we need to approach AI and what's going to work and what's not.
So let's go back in time to 1984. This is a very famous system from Roger Schank called
the Tale-Spin system.
The end."
Natural Language
So natural language-- this is actually the area where I work-- is a bunch of
technologies that deal with different aspects of how humans communicate with each
other, and therefore with computers under appropriate circumstances.
So for example, speech recognition is taking the sounds that somebody says and
mapping them onto basically text.
Not necessarily understanding that text, but transcribing it.
Text-to-speech synthesis goes the other way: from something you want to say to a WAV
file that embodies that speech.
These are two areas which have undergone amazing progress in the past five to 10
years.
A lot of that's been neural nets. But actually, even before that, a lot of that was just
large data methods applied appropriately.
There's also dialog systems, which would fit in between. When you say something to a
system, what if it's going to actually formulate a response and say something back?
This technology is nowhere near as far along. This is something that we're still figuring
out how to do, how to build a system that you can have a meaningful sustained
conversation with.
OK.
But let me give you an example on the text to speech front of the kinds of things that go
into it, and how we are and aren't able to do a good job of it today.
So this is an automatic transcription system. There have been big advances. The state of
the art systems are actually better than this today, but I think it's a good example.
Machine Translation
Sometimes machine translation between similar languages, like English and French, can
work amazingly well.
It depends on the amount of data, the differences between the languages, the amount of
compute you put into it, a whole bunch of different contextual factors.
Your phone has more power, your toaster has more power than a supercomputer
from the '50s.
And a whole bunch of other stuff falls under the domain of natural language.
Web search, understanding how your query relates to what's on the page and mixing
that with things like link analysis.
Or the big thing now is click stream data, about what people do and don't end up using
and clicking on.
Text classification, spam filtering, there's a whole bunch of things that basically anchor
into natural language and analysis of natural language.
Mapping natural language from one thing to another works a whole lot better than
understanding it deeply today.
AlphaGo beating Lee Sedol. This is a huge advance.
Logic
So computers can play your video games for you, they can go to the gym for you. Turns
out, they can also probably do your math homework for you.
There's a long tradition, like I said earlier, about logical inference. That's one of the
earliest places that AI was applied.
Also, these logical methods are used for things like fault diagnosis, cases where you
need to really be able to trace through what the computer is doing.
A big issue with machine learning overall is often the computer does well, but you can't
tell when it makes mistakes or what's gone wrong, or why it made the decision it made.
And this has lots of consequences. But people have made lots of progress in logical
systems taken very broadly.
One particular place where there's been a ton is in satisfiability solvers, which are used
for all kinds of things.
Maybe, as Pieter said earlier, one of the biggest things that's going on is, AI is
everywhere now. It's not just chess and checkers.
Applied AI automates all kinds of things. In some ways, it's the glue or the electricity, as
people say, of a lot of modern business.
So of course search engines, but also all that route planning, maps, traffic, logistics,
things like medical diagnosis, help desks being automated, spam and fraud
detection,all the intelligence in your camera, in your thermostat.
All of that is increasingly being driven by AI to give you smarter devices, product
recommendations, and more.
So not only is AI doing more-- not only can we do more, but we can do more kinds of
things. We still can't build
one system that can do a little bit of everything. We build systems for each individual
task.
But there are a lot of tasks that we can improve with AI augmentation now.
That's an agent: something that perceives and acts. So think about this little guy here trying to get that
apple down.
The abstraction, a very important abstraction, is that you have an agent and you have an
environment.
You control the agent, you don't control the environment.
So what comes into the agent are percepts from the sensors.
What goes out of the agent are actions through the actuators.
And the question mark in the middle, that's the agent's behavior.
We write the decision making procedures that map from what you perceive to what you
do.
And that's what this course is about. Lots of different techniques for different kinds of
conceptualization of the boundary between agent and environment.
We're going to see a lot of examples of agents and environments and where that boundary
is set.
When you drive, it's your eyes and your hands that are basically the boundary.
But if you're an autonomous car, it's the camera and the control lines into the wheels.
OK. If you have never played Pac-Man, but you're too embarrassed to admit it, go play a
game of Pac-Man on 188.
Pac-Man is an agent.
Its sensors perceive the state of the world, which is the labeling of all the dots and ghost
positions and all of that.
Its actuators are like a little joystick-- up, down, left, right. And then the environment is
the ghosts, and who knows what they'll choose to do and how the world will evolve.
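The "question mark in the middle" is just a function from percepts to actions. Here is a toy sketch in that spirit; the percept fields and the dodge-the-ghost rule are hypothetical, loosely inspired by Pac-Man, not the actual course code.

```python
# Minimal sketch of the agent abstraction: a function mapping what
# the agent perceives to what it does. Percept format is invented
# for illustration: (direction_of_nearest_ghost, direction_of_nearest_dot).

OPPOSITE = {"left": "right", "right": "left", "up": "down", "down": "up"}

def agent_function(percept):
    """The 'question mark' between sensors and actuators."""
    ghost_dir, dot_dir = percept
    if ghost_dir == dot_dir:
        # The nearest dot is guarded by a ghost: flee the other way.
        return OPPOSITE[ghost_dir]
    return dot_dir  # otherwise, head for the food

print(agent_function(("left", "right")))  # right
print(agent_function(("up", "up")))       # down
```

This is a simple reflex agent: it reacts to the current percept only. Much of the course is about building agent functions that also remember, simulate, and plan.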
So to close out the day here, we're going to show you-- you want to come join? You will
build this soon.
OK.
So what I want you to do is, I want you think about-- this is going to be Pac-Man. My
hands are off the keyboard. This is being played by an algorithm.
So this is going to be behavior that appears to be intelligent deriving from-- who knows.
But start thinking today, what's going on here in the agent function?