AI Module 1
ARTIFICIAL INTELLIGENCE
INTRODUCTION
AI Applications
What is AI?
Intelligent Agents
Agent Environments
Problem Formulation
To introduce AI and key paradigms of AI
Examples of humanoid robots: NADINE, ASIMO, ERICA

NADINE: A HUMANOID SOCIAL ROBOT (MODELLED AFTER PROF. NADIA THALMANN FROM NTU SINGAPORE, 2013)
Natural skin, realistic hands
Returns a greeting, makes eye contact, and can remember all the conversations had with it
Able to answer questions autonomously in several languages
Simulates emotions both in gestures and facially, depending on the content of the interaction with the user
Nadine can recognise persons it has previously seen, and its demeanour can change according to what is said to it
Nadine has a total of 27 degrees of freedom for facial expressions and upper body movements
With persons it has previously encountered, it remembers facts and events related to each person
It can assist people with special needs by reading stories, showing images, putting on Skype sessions, sending emails, and communicating with other members of the family
It can play the role of a receptionist in an office or be dedicated to being a personal coach
ASIMO: A HUMANOID ROBOT (HONDA, 2000-2018)
NON-HUMANOID ROBOTS
WHERE AI IS USED
Ride-sharing apps like Uber, Lyft
Jarvis by Facebook
Email categorization by Gmail
Automated customer support: chatbots
Personalized shopping experience
Finance: stock brokerage prediction
Smart home devices
Creative arts: Watson BEAT
Security and surveillance
AI PROBLEMS
Playing chess
Proving mathematical theorems
Writing poetry
What is intelligence?
What is thinking?
What is a machine? Is a computer a machine?
Are humans intelligent?
Can a machine think? If yes, are we machines?
WHAT IS ARTIFICIAL INTELLIGENCE?
•Artificial Intelligence is a branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and are able to make decisions.
•The term AI was coined by John McCarthy.
•Artificial Intelligence is the study of systems that:
Think Humanly Think Rationally
Acting/behaving Humanly Acting/behaving Rationally
ACTING HUMANLY: TURING TEST
Human beings are intelligent.
To be called intelligent, a machine must produce responses that are indistinguishable from those of a human (Alan Turing, 1950).
Machine Learning: to adapt to new circumstances and to detect and extrapolate patterns
Computer Vision: to perceive objects
Robotics: to manipulate objects and move about
THINKING HUMANLY: THE COGNITIVE MODELLING APPROACH
To get inside the actual workings of human minds.
Brain imaging: observing the brain in action.
Cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
THINKING RATIONALLY: THE LAWS OF THOUGHT APPROACH
Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is,
irrefutable reasoning processes. His famous syllogisms provided patterns for argument
structures that always gave correct conclusions given correct premises.
For example, "Socrates is a man; all men are mortal; therefore Socrates is mortal."
These laws of thought were supposed to govern the operation of the mind, and initiated the field
of logic.
There are two main obstacles to this approach. First, it is not easy to take informal knowledge
and state it in the formal terms required by logical notation. Second, there is a big difference
between being able to solve a problem "in principle" and doing so in practice.
Even problems with just a few dozen facts can exhaust the computational resources of any
computer unless it has some guidance as to which reasoning steps to try first.
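As a toy illustration of such mechanical "right thinking", the sketch below applies the syllogism's rule by simple forward chaining; the representation and names are illustrative assumptions, not a standard library:

```python
# A minimal sketch of mechanical reasoning: modus ponens applied to
# Aristotle's classic syllogism. All names here are illustrative.
facts = {("man", "Socrates")}
rules = [(("man", "X"), ("mortal", "X"))]   # "all men are mortal"

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (pred, _), (head, _) in rules:
            for (p, subject) in list(derived):
                if p == pred and (head, subject) not in derived:
                    derived.add((head, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# {('man', 'Socrates'), ('mortal', 'Socrates')}
```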
ACTING RATIONALLY: THE RATIONAL AGENT APPROACH
An agent is just something that acts.
AGENTS
Agents can be classified according to the environment in which they operate and their capabilities: intelligent agents, human agents, robotic agents, biological agents, software/computer agents, hardware agents, interface agents, mobile agents, reactive agents, information agents, distributed artificial intelligence (DAI) agents, and so on.
Computer agents:
operate autonomously
perceive their environment
persist over a prolonged time period
adapt to change
create and pursue goals
PROPERTIES OF AI AGENTS
Autonomy
Reactivity
RATIONALITY
An agent should "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.
A rational agent always selects an action based on the percept sequence it has received so as to maximize its (expected) performance measure, given the percepts it has received and the knowledge it possesses.
Definition (Ideal Rational Agent): For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Rational is different from omniscient: percepts may not supply all relevant information; we can behave rationally even when faced with incomplete information.
Rational is different from being perfect: rationality maximizes expected outcome, while perfection maximizes actual outcome.
AUTONOMY
DEFINITION: The autonomy of an agent is the extent to which its behaviour is determined by its own experience (in dynamic, unpredictable environments), rather than by the knowledge of its designer.
Agents can perform actions in order to modify future percepts so as to obtain useful information: information gathering, exploration.
Extremes
No autonomy – ignores environment/data
Complete autonomy – must act randomly/no program
Example: a baby learning to crawl
Ideal
Design agents to have some autonomy
Possibly become more autonomous with experience
REACTIVITY
DEFINITION: A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful).
Ways to achieve reactivity
Reactive architectures
[Situation – Action] rules
Layered, behaviour-based architectures
Deliberative architectures
Symbolic world model, long-term goals
Reasoning, planning
Hybrid architectures
Reactive layer + deliberative layer
PEAS
We must first specify the setting for intelligent agent design.
PEAS: Performance measure, Environment, Actuators, Sensors
PAGE description: Percepts, Actions, Goals, Environment
Performance measure: An objective criterion for success of an
agent's behavior. The performance measure depends on the agent
function.
AUTOMATED TAXI DRIVER
Performance Measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: camera, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer
MEDICAL DIAGNOSIS SYSTEM
Performance Measure: healthy patient, minimize costs, lawsuits
Environment: patient, hospital, staff
Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: keyboard (entry of symptoms, findings, patient's answers)
PART-PICKING ROBOT
Performance Measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors

INTERACTIVE ENGLISH TUTOR
Performance Measure: maximize student's score on test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard
EXAMPLES: PAGE DESCRIPTIONS
STRUCTURE OF INTELLIGENT AGENTS
The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions.
We assume this program will run on some sort of computing device, which we will call the architecture.
The architecture might be a plain computer, or it might include special-purpose hardware for certain tasks, such as processing camera images or filtering audio input.
An agent's structure can be viewed as:
Agent = Architecture + Agent Program
Architecture = the machinery that an agent executes on.
Agent Program = an implementation of an agent function. (….our focus)
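As a sketch of the agent-program idea, here is a minimal table-driven agent program in Python; the percept and action names are illustrative assumptions, not from the slides:

```python
# A minimal sketch of an agent program as a mapping from percept
# sequences to actions (a table-driven agent). Percept/action names
# are illustrative.
def make_table_driven_agent(table):
    percepts = []                      # the percept sequence so far

    def agent_program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Usage: a two-cell vacuum world fragment.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # Right
print(agent(("B", "Dirty")))   # Suck
```

The table grows exponentially with the percept sequence length, which is why the agent designs below compute actions instead of looking them up.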
AI AGENT ENVIRONMENTS
Fully observable (vs. partially observable)
Deterministic (vs. stochastic)

SIMPLE REFLEX AGENTS
A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.
Consider the following more obvious case: from time to time, the driver looks in the rear-view mirror to check on the locations of nearby vehicles.
When the driver is not looking in the mirror, the vehicles in the next lane are invisible (i.e., the states in which they are present and absent are
indistinguishable); but in order to decide on a lane-change maneuver, the driver needs to know whether or not they are there.
The problem illustrated by this example arises because the sensors do not provide access to the complete state of the world.
In such cases, the agent may need to maintain some internal state information in order to distinguish between world states that generate the same
perceptual input but nonetheless are significantly different. Here, "significantly different" means that different actions are appropriate in the two
states.
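As a concrete sketch, here is a simple reflex agent for the two-cell vacuum world (the rule set is an illustrative assumption). It reacts only to the current percept, which is exactly why it cannot handle the rear-view-mirror case above:

```python
# A minimal sketch of a simple reflex agent: condition-action rules
# matched against the CURRENT percept only, with no memory.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```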
MODEL-BASED REFLEX AGENTS
They use a model of the world to choose their actions. They maintain an internal state.
Model − knowledge about "how things happen in the world".
Internal State − a representation of unobserved aspects of the current state, depending on percept history.
Updating the state requires the information about −
•How the world evolves.
•How the agent’s actions affect the world.
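A minimal sketch of this idea, using the rear-view-mirror example from above; the rule details are illustrative assumptions:

```python
# A model-based reflex agent keeps an internal state (its best guess
# about unobserved parts of the world) updated from the percept history.
class ModelBasedAgent:
    def __init__(self):
        self.state = {"car_in_next_lane": False}   # internal state

    def update_state(self, percept):
        # "How the world evolves" and "what my actions do" would go
        # here; we only record what the mirror showed when last checked.
        if percept["looked_in_mirror"]:
            self.state["car_in_next_lane"] = percept["car_seen"]

    def choose_action(self, percept):
        self.update_state(percept)
        if self.state["car_in_next_lane"]:
            return "stay_in_lane"
        return "change_lane"

agent = ModelBasedAgent()
print(agent.choose_action({"looked_in_mirror": True, "car_seen": True}))
# stay_in_lane
print(agent.choose_action({"looked_in_mirror": False, "car_seen": False}))
# stay_in_lane -- remembers the car even while not looking
```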
GOAL-BASED AGENTS
They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex agent approach, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.
Goal − It is the description of desirable situations.
At a road junction, the taxi can turn left, right, or go straight on. The right decision depends on where the taxi is trying to get to. In
other words, as well as a current state description, the agent needs some sort of goal information, which describes situations that are
desirable
The agent program can combine this with information about the results of possible actions (the same information as was
used to update internal state in the reflex agent) in order to choose actions that achieve the goal.
Sometimes, this will be simple, when goal satisfaction results immediately from a single action;
sometimes, it will be more tricky, when the agent has to consider long sequences of twists and turns to find a way to
achieve the goal. Search and planning are the subfields of AI devoted to finding action sequences that do achieve the
agent's goals.
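A minimal sketch of goal-based selection at the road junction described above: the agent picks the action whose predicted result satisfies the goal. The tiny transition table is an illustrative assumption:

```python
# Goal-based choice: combine a model of action results with a goal.
RESULTS = {("junction", "turn_left"): "mall",
           ("junction", "turn_right"): "airport",
           ("junction", "go_straight"): "city_center"}

def goal_based_choice(state, goal):
    for (s, action), outcome in RESULTS.items():
        if s == state and outcome == goal:
            return action
    return None   # no single action achieves the goal: search/plan instead

print(goal_based_choice("junction", "airport"))   # turn_right
```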
UTILITY-BASED AGENTS
• Goals alone are not really enough to generate high-quality
behavior.
• For example, there are many action sequences that will get the
taxi to its destination, thereby achieving the goal, but some are
quicker, safer, more reliable, or cheaper than others.
• Goals just provide a crude distinction between "happy" and
"unhappy" states, whereas a more general performance
measure should allow a comparison of different world states
(or sequences of states) according to exactly how happy they
would make the agent if they could be achieved.
• Because "happy" does not sound very scientific, the customary
terminology is to say that if one world state is preferred to
another, then it has higher utility for the agent.
• Utility is therefore a function that maps a state onto a real
number, which describes the associated degree of happiness.
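A minimal sketch of this idea for the taxi example: several routes all achieve the goal, but a utility function ranks them. The weights and route data below are illustrative assumptions:

```python
# Utility maps a state onto a real number; the agent picks the action
# (here: the route) whose outcome has the highest utility.
routes = [
    {"name": "highway",    "time_min": 20, "risk": 0.02, "cost": 9.0},
    {"name": "back roads", "time_min": 35, "risk": 0.01, "cost": 4.0},
    {"name": "downtown",   "time_min": 25, "risk": 0.05, "cost": 6.0},
]

def utility(route):
    """Map a completed route onto a real number (higher is happier)."""
    return -(1.0 * route["time_min"] + 200.0 * route["risk"] + 2.0 * route["cost"])

best = max(routes, key=utility)
print(best["name"], utility(best))   # highway -42.0
```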
LEARNING AGENTS
Each state is an abstract representation of the agent's environment; it is an abstraction that denotes a configuration of the agent.
INITIAL STATE: the description of the starting configuration of the agent.
An ACTION/OPERATOR takes the agent from one state to another state. A state can have a number of successor states.
A PLAN is a sequence of actions.
A GOAL is a description of a set of desirable states of the world. Goal states are often specified by a goal test which any goal state must satisfy.
PATH COST: a positive number; usually, path cost = sum of step costs.
PROBLEM FORMULATION
Problem formulation means choosing a relevant set of states to consider, and a feasible set of operators for
moving from one state to another
Search is the process of imagining sequences of operators applied to the initial state, and checking which
sequence reaches the goal state
If you can represent the problem as a state space, you can solve it using search techniques, which find the set of states and corresponding actions leading from the initial state S to the final goal state G.
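A minimal sketch of what a problem formulation looks like in code, loosely in the AIMA style; the class layout is an illustrative assumption, not from the slides:

```python
# A state-space problem definition: initial state, successor function,
# goal test, and step cost. Concrete problems override successors().
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def successors(self, state):
        """Yield (action, next_state) pairs; override per problem."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return 1   # path cost = sum of step costs by default
```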
PROBLEM FORMULATION: EXAMPLES
Vacuum Cleaner
Missionaries and Cannibals
Cryptarithmetic
N-Queens Problem (8-Queens Problem)
N-Puzzle Problem (8-Puzzle Problem)
Water Jug Problem
MLGC Problem (Man, Lion, Goat, Cabbage)
PROBLEM FORMULATION: VACUUM CLEANER
Let the world contain just two locations. Each location may or may
not contain dirt, and the agent may be in one location or the other.
There are 8 possible world states
The agent has three possible actions in this version of the vacuum
world: Left, Right, and Suck.
Assume, for the moment, that sucking is 100% effective.
The goal is to clean up all the dirt. That is, the goal is equivalent
to the state set {7,8}.
Suppose that the agent's sensors give it enough information to tell
exactly which state it is in (i.e., the world is accessible); and
suppose that it knows exactly what each of its actions does. Then it
can calculate exactly which state it will be in after any sequence of
actions. For example, if its initial state is 5, then it can calculate
that the action sequence [Right,Suck] will get to a goal state. This
is the simplest case, which we call a single-state problem.
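A minimal sketch of this state space in code. The state is (agent location, dirt in A, dirt in B); the numbering below is an assumption chosen so that state 5 and the goal set {7, 8} behave as described above:

```python
# The two-cell vacuum world as a state space.
STATES = {
    1: ("A", True,  True),  2: ("B", True,  True),
    3: ("A", True,  False), 4: ("B", True,  False),
    5: ("A", False, True),  6: ("B", False, True),
    7: ("A", False, False), 8: ("B", False, False),
}

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":          # assume sucking is 100% effective
        return (loc, dirt_a and loc != "A", dirt_b and loc != "B")
    return state

def is_goal(state):
    return not state[1] and not state[2]   # no dirt anywhere

s = STATES[5]
for a in ["Right", "Suck"]:
    s = result(s, a)
print(s, is_goal(s))   # ('B', False, False) True -- i.e., state 8
```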
PROBLEM FORMULATION: MISSIONARIES & CANNIBALS
http://www.learn4good.com/games/puzzle/boat.htm
https://www.novelgames.com/en/missionaries/
Problem formulation:
States:
<m, c, b> representing the # of missionaries and the # of cannibals, and the position of the boat
Initial state:
<3, 3, 1>
Actions:
take 1 missionary, 1 cannibal, 2 missionaries, 2 cannibals, or 1 missionary and 1 cannibal across the river
Transition model:
state after an action
Goal test:
<0, 0, 0>
Path cost:
number of crossings
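A minimal BFS sketch of this formulation, assuming the counts in <m, c, b> refer to the start bank; the code structure is illustrative:

```python
# Missionaries & cannibals: BFS over states <m, c, b>.
from collections import deque

MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]   # (missionaries, cannibals)

def safe(m, c):
    # no bank may have missionaries outnumbered by cannibals
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state
    sign = -1 if b == 1 else 1          # boat leaves the bank it is on
    for dm, dc in MOVES:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, 1 - b)

def bfs(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in successors(s):
            if n not in parent:
                parent[n] = s
                frontier.append(n)

print(bfs())   # a shortest solution: 11 crossings, from (3,3,1) to (0,0,0)
```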
PROBLEM FORMULATION: CRYPTARITHMETIC
Goal test: puzzle contains only digits, and represents a correct sum.
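A minimal brute-force sketch of this goal test on the classic SEND + MORE = MONEY instance (this particular puzzle is an illustrative choice; the slides name only the problem class). Brute force is slow but simple:

```python
# Try digit assignments until the sum is correct and has no leading zeros.
from itertools import permutations

LETTERS = "SENDMORY"          # the 8 distinct letters of the puzzle

def value(word, assign):
    n = 0
    for ch in word:
        n = n * 10 + assign[ch]
    return n

for digits in permutations(range(10), len(LETTERS)):
    assign = dict(zip(LETTERS, digits))
    if assign["S"] == 0 or assign["M"] == 0:
        continue               # no leading zeros
    if value("SEND", assign) + value("MORE", assign) == value("MONEY", assign):
        print(assign)          # 9567 + 1085 = 10652
        break
```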
PROBLEM FORMULATION: 8-QUEENS
State: 8×8 chessboard (empty, or with some queens already placed, one per row, etc.)
Goal: all queens on the chessboard such that no two queens attack each other
Actions: placing one queen in one of the squares
Cost: can be the number of moves
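A minimal backtracking sketch of this formulation: place one queen per row so that no two queens attack each other. The recursive layout is an illustrative choice:

```python
# N-queens by backtracking; cols[r] is the column of the queen in row r.
def solve_queens(n=8, cols=()):
    row = len(cols)
    if row == n:
        return cols            # all queens placed: goal test passes
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(cols)):   # no column/diagonal attack
            result = solve_queens(n, cols + (col,))
            if result:
                return result
    return None

print(solve_queens())   # (0, 4, 7, 5, 2, 6, 1, 3)
```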
8-PUZZLE PROBLEM
State: 3*3 pattern of tiles with each tile having a number [1-8] and one
tile blank
Actions:
The empty space can only move in four directions (Movement of empty
space): Up, Down, Right or Left
The empty space cannot move diagonally and can take only one step at
a time.
Cost: Can be number of moves
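A minimal sketch of this transition model: the state is a 3×3 tuple of tiles with 0 as the blank, and the blank moves one step Up, Down, Left, or Right (never diagonally):

```python
# 8-puzzle successor function.
MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}

def successors(state):
    cells = [list(row) for row in state]
    br, bc = next((r, c) for r in range(3) for c in range(3)
                  if state[r][c] == 0)                # locate the blank
    for action, (dr, dc) in MOVES.items():
        r, c = br + dr, bc + dc
        if 0 <= r < 3 and 0 <= c < 3:
            nxt = [row[:] for row in cells]
            nxt[br][bc], nxt[r][c] = nxt[r][c], 0     # slide the tile
            yield action, tuple(map(tuple, nxt))

start = ((1, 2, 3), (4, 0, 5), (6, 7, 8))
for action, nxt in successors(start):
    print(action, nxt)
```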
WATER JUG PROBLEM
You are given an m-liter jug and an n-liter jug. Both jugs are initially empty. The jugs don't have markings to allow measuring smaller quantities. You have to use the jugs to measure d liters of water, where d is less than n. There is no limitation on water usage.
State: (X, Y) corresponds to a state where X refers to the amount of water in Jug1 and Y refers to the amount of
water in Jug2
Goal: Determine the path from the initial state (xi, yi) to the final state (xf, yf), where (xi, yi) is (0, 0) which
indicates both Jugs are initially empty and (xf, yf) indicates a state which could be (0, d) or (d, 0).
Actions:
Empty a jug, e.g., (X, Y) -> (0, Y): empty Jug 1
Fill a jug, e.g., (X, Y) -> (m, Y): fill Jug 1
Pour water from one jug to the other until one of the jugs is either empty or full, e.g., (X, Y) -> (X-t, Y+t), where t is the amount poured
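A minimal BFS sketch of this formulation, assuming the classic m = 4, n = 3, d = 2 instance as an illustration:

```python
# Water jug problem: BFS over states (x, y) = water in Jug1, Jug2.
from collections import deque

def water_jug(m=4, n=3, d=2):
    start = (0, 0)
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) in ((d, 0), (0, d)):         # goal test from the slide
            path, s = [], (x, y)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour12 = min(x, n - y)                 # amount pourable Jug1 -> Jug2
        pour21 = min(y, m - x)                 # amount pourable Jug2 -> Jug1
        for nxt in [(m, y), (x, n), (0, y), (x, 0),
                    (x - pour12, y + pour12), (x + pour21, y - pour21)]:
            if nxt not in parent:
                parent[nxt] = (x, y)
                frontier.append(nxt)
    return None

print(water_jug())
# [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2)]
```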
SOLVING PROBLEMS BY SEARCHING
Intelligent agents are supposed to act in such a way that the environment goes through a sequence of states that
maximizes the performance measure.
In its full generality, this specification is difficult to translate into a successful agent design.
Imagine our agent in the city of Arad, Romania, toward the end of a touring holiday.
The agent has a ticket to fly out of Bucharest the following day. The ticket is nonrefundable, the agent's visa is
about to expire, and after tomorrow, there are no seats available for six weeks.
Now the agent's performance measure contains many other factors besides the cost of the ticket and the
undesirability of being arrested and deported. For example, it wants to improve its suntan, improve its Romanian,
take in the sights, and so on.
SOLVING PROBLEMS BY SEARCHING
Problem formulation is the process of deciding what actions and states to consider, and follows goal formulation.
Our agent has now adopted the goal of driving to Bucharest, and is considering which town to drive to from Arad.
There are three roads out of Arad, one toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves
the goal, so unless the agent is very familiar with the geography of Romania, it will not know which road to
follow.
If the agent has no additional knowledge, then it is stuck. The best it can do is choose one of the actions at
random.
But suppose the agent has a map of Romania, either on paper or in its memory.
SOLVING PROBLEMS BY SEARCHING
The agent can use this information to consider subsequent stages of a hypothetical journey through each of the
three towns, to try to find a journey that eventually gets to Bucharest.
Once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that correspond to the legs of the journey.
In general, then, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best one.
This process of looking for such a sequence is called search. A search algorithm takes a problem as input and
returns a solution in the form of an action sequence.
SOLVING PROBLEMS BY SEARCHING
Once the solution is found, the actions it recommends can be carried out. This is called the execution phase.
Thus, we have a simple "formulate, search, execute" design for the agent.
After formulating a goal and a problem to solve, the agent calls a search procedure to solve it.
It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do, and
then removing that step from the sequence.
Once the solution has been executed, the agent will find a new goal.
PROBLEM-SOLVING AGENTS
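A minimal sketch of the "formulate, search, execute" loop described above. The stub functions, and the Romania route they return, are illustrative assumptions standing in for a real search:

```python
# Placeholder stubs; real versions depend on the task environment.
def update_state(state, percept):
    return percept                                    # trust the percept

def formulate_goal(state):
    return "Bucharest"

def formulate_problem(state, goal):
    return {"initial": state, "goal": goal}

def search(problem):
    # A real implementation would search the map; this stub returns a
    # known route from Arad for illustration.
    return ["drive(Sibiu)", "drive(Fagaras)", "drive(Bucharest)"]

def make_problem_solving_agent():
    seq = []                                          # remaining solution steps

    def agent(percept):
        nonlocal seq
        state = update_state(None, percept)
        if not seq:                    # no plan left: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []
        return seq.pop(0) if seq else "NoOp"

    return agent

agent = make_problem_solving_agent()
print(agent("at Arad"))    # drive(Sibiu)
print(agent("at Sibiu"))   # drive(Fagaras)
```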
GENERATING ACTION SEQUENCES: ROUTE-FINDING PROBLEM FROM ARAD TO BUCHAREST
DATA STRUCTURES FOR SEARCH TREES
Thus, nodes have depths and parents, whereas states do not.
It is quite possible for two different nodes to contain the same state, if that state is generated via two different sequences of actions.
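A minimal sketch of such a node: unlike a state, a node also records its parent, the action that produced it, its depth, and its path cost. The class layout is an illustrative assumption:

```python
# A search-tree node wrapping a state with bookkeeping.
class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = (step_cost if parent is None
                          else parent.path_cost + step_cost)

    def path(self):
        """Actions from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return actions[::-1]

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Arad->Sibiu", step_cost=140)
print(child.depth, child.path_cost, child.path())   # 1 140 ['Arad->Sibiu']
```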
FORMAL VERSION OF THE GENERAL SEARCH ALGORITHM
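A minimal sketch of the general search algorithm: keep a fringe of candidate nodes, expand one at a time, and let the queuing function supplied by the caller determine the strategy. The function and parameter names are illustrative assumptions:

```python
# General search parameterized by the fringe-insertion discipline.
def general_search(initial, successors, goal_test, insert):
    fringe = [(initial, [initial])]            # (state, path so far)
    while fringe:
        state, path = fringe.pop(0)
        if goal_test(state):
            return path
        children = [(s, path + [s]) for s in successors(state)
                    if s not in path]          # avoid loops along a path
        fringe = insert(fringe, children)
    return None

# A toy graph; BFS-style queuing appends new nodes at the back.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs_insert = lambda fringe, children: fringe + children
print(general_search("A", lambda s: graph[s], lambda s: s == "D", bfs_insert))
# ['A', 'B', 'D']
```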
SEARCH STRATEGIES
Search strategies are evaluated in terms of four criteria:
Completeness: is the strategy guaranteed to find a solution when there is one?
Time complexity: how long does it take to find a solution?
Space complexity: how much memory does it need to perform the search?
Optimality: does the strategy find the highest-quality solution when there are several different solutions?
Blind Search
Depth First Search
Breadth First Search
Iterative Deepening Search
Informed Search
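A minimal sketch contrasting the first two blind strategies above: the only difference is which end of the fringe is expanded next. The toy graph is an illustrative assumption:

```python
# BFS vs. DFS: same loop, different end of the fringe.
from collections import deque

graph = {"A": ["B", "C"], "B": ["G"], "C": ["D"], "D": ["G"], "G": []}

def blind_search(start, goal, depth_first=False):
    fringe, visited = deque([[start]]), set()
    while fringe:
        path = fringe.pop() if depth_first else fringe.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        fringe.extend(path + [s] for s in graph[state])
    return None

print(blind_search("A", "G"))                    # BFS: ['A', 'B', 'G']
print(blind_search("A", "G", depth_first=True))  # DFS: ['A', 'C', 'D', 'G']
```

Note that BFS returns the shallowest solution (it is complete and optimal for unit step costs), while DFS dives down the most recently added branch and may return a longer path.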