
Chapter 2

“Intelligent Agents”

Melaku M.

Target Group – G4 Software Engineering


►Introduction to agents

►Agent and Environment

►Structure of agents

►Intelligent and rational agents

►Types of intelligent agents

Book: Artificial Intelligence, A Modern Approach (Russell & Norvig)


Introduction to Agents
❖An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
✓ Examples: Human agent; Robotic agent; Software agent

❖A human agent has eyes, ears, and other organs for sensors, and hands,
legs, mouth, and other body parts for effectors.

❖A robotic agent substitutes cameras, infrared range finders, bump sensors,
etc. for the sensors, and various motors, grippers, and manipulators for the
effectors.
Agent and Environment

Figure 1: Agents interact with environments through sensors and actuators.


❖Sensing allows a robot to perceive the world; action allows it to affect the world.
❖Percept refers to the agent’s perceptual inputs at any given instant.
❖Percept sequence is the complete history of everything the agent has ever perceived.
Vacuum-Cleaner World

Figure 2: A vacuum-cleaner world with just two locations


 Percepts: location and contents, e.g., [A, Dirty]

 Actions: Left, Right, Suck, NoOp


Vacuum-Cleaner World

Figure 3: A vacuum-cleaner world with just two locations


Vacuum-Cleaner World

Figure 4: Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Fig 2

Figure 5: The agent program for a simple reflex agent in the two-state vacuum environment. This
program implements the agent function tabulated in Fig 4.
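
In Python, the Fig 5 agent program can be sketched as follows (a minimal rendering of the book’s pseudocode, assuming a percept is a (location, status) pair):

# Simple reflex agent for the two-location vacuum world (after Fig 5).
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:  # location == 'B'
        return 'Left'

reflex_vacuum_agent(('A', 'Dirty'))  # -> 'Suck'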
Rationality and Knowledge
➢A rational agent “does the right thing”.

 The right action is the one that will cause the agent to be most

successful.

➢Definition:

 For each possible percept sequence, a rational agent should select

an action that is expected to maximize its performance measure,


given the evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
Rational Agent
►Rationality depends upon:
The performance measure that defines the criterion of success.
The agent’s prior knowledge of the environment.
The actions that the agent can perform.
The agent’s percept sequence to date.

►Autonomous agent: The autonomy of an agent is the extent to
which its behavior is determined by its own experience (with the
ability to learn and adapt).

Task Environment
❖To design a rational agent, we must specify the task environment.
 Specifying the task environment is always the first step in designing an
agent.
❖PEAS: to specify a task environment
 Performance measure
 Environment
 Actuators
 Sensors
❖Performance measure: the success criterion that determines how
successful an agent is.
PEAS: Specifying for an automated taxi driver
►Performance measure: what desirable qualities do we expect from our
automated driver?
 Safe, fast, legal (minimizes violations of traffic laws and other protocols),
comfortable trip, maximizes profits.
►Environment: what driving environments will the taxi face?
 Roads, other traffic/vehicles, pedestrians, customers, potholes, traffic signs
►Actuators:
 Steering, accelerator, brake, signal, horn (sounding a warning), display
►Sensors:
 Cameras (to see the road), sonar and infrared (to detect distances to other cars and
obstacles), accelerometer (to control the vehicle properly, especially on curves),
speedometer, GPS, odometer, engine/fuel/electrical-system sensors, keyboard or
microphone.
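
As an illustration only, a PEAS description can be recorded as a simple data structure; the PEAS class below is hypothetical, and the taxi values are the ones listed above:

from dataclasses import dataclass

# Hypothetical container for a PEAS description (illustration only).
@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi_peas = PEAS(
    performance=['safe', 'fast', 'legal', 'comfortable trip', 'maximize profits'],
    environment=['roads', 'other traffic', 'pedestrians', 'customers', 'potholes'],
    actuators=['steering', 'accelerator', 'brake', 'signal', 'horn', 'display'],
    sensors=['cameras', 'sonar', 'infrared', 'accelerometer', 'speedometer', 'GPS'],
)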
PEAS: Specifying for vacuum cleaner
►Performance measure:

 Cleanness, efficiency: amount of dirt cleaned within a certain time, battery life,

power consumption, cost, speed

►Environment:

 Rooms, wood floors, tables, dirt, carpets, various obstacles

►Actuators:

 Wheels, brushes, vacuum extractor

►Sensors:

 Cameras, dirt-detection sensor, bump sensor, infrared wall sensors.


Structure of Agents
❖The task of AI is to design an agent program that implements the agent function.
❖The structure of an intelligent agent can be viewed as:

Agent = Architecture + Agent Program

❖Architecture → the machinery/computing device (e.g., a PC or robotic car) with

physical sensors and actuators that the agent program will run on.
❖Agent function: maps any given percept sequence to an action
[f: P* → A]
✓ The agent function is an abstract mathematical description.
✓ The designer needs to construct a table that contains the appropriate action for
every possible percept sequence.
❖The agent program runs on the physical architecture to produce f.
✓ A concrete implementation, running within some physical system.
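
A Python sketch of the book’s TABLE-DRIVEN-AGENT makes this concrete; the table entries below are assumed to match the partial tabulation in Fig 4:

# Table-driven agent: appends each percept and looks up the whole sequence.
def make_table_driven_agent(table):
    percepts = []  # the percept sequence to date
    def agent_program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the sequence is not tabulated
    return agent_program

vacuum_table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = make_table_driven_agent(vacuum_table)
agent(('A', 'Dirty'))  # -> 'Suck'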
Properties of Task Environments
Fully observable vs. partially observable

❖Fully observable: the agent’s sensors give it access to the complete state of the
environment at each point in time.
✓ Fully observable environments are convenient because the agent does not need to
maintain any internal state to keep track of the history of the world.
✓ Example: chess, crossword puzzles

❖Partially observable: parts of the environment are inaccessible (parts of the
state are simply missing from the sensor data).
✓ The agent must make informed guesses about the world.
✓ Example: self-driving car, poker


Single vs. multi-agent
❖ If only one agent is involved in an environment and operating by
itself, then it is called a single-agent environment.
❖If multiple agents are operating in an environment, then it is called
a multi-agent environment.
✓For example, an agent solving a crossword puzzle by itself is
clearly in a single-agent environment, whereas an agent playing
chess is in a two-agent environment.

✓Competitive vs. cooperative: chess is a competitive multi-agent
environment, whereas self-driving on shared roads is partially cooperative.


Static vs. dynamic
►Dynamic environment: the environment can change while the agent is
deliberating over what to do. Static environments don’t change
►Static environments are easy to deal with because the agent does not need
to keep looking at the world while it is deciding on an action.
►However, in a dynamic environment the agent needs to keep looking at the
world while it is deciding on an action.
✓Taxi driving is an example of a dynamic environment, whereas
crossword puzzles and poker are examples of static environments.
Episodic vs. sequential
➢ In an episodic task environment, the agent’s experience is divided into
atomic episodes.

➢Choice of action in each episode depends only on the episode itself.

➢ That is, the agent performs an independent task in each episode.

✓ The agent’s current decision doesn’t affect future decisions.

✓ Many classification tasks are episodic.

✓ For example, an agent that has to spot defective products on an assembly line

bases each decision on the current part, regardless of previous decisions.


Episodic vs. sequential
➢In sequential environments the agent operates in a series of
connected episodes.
✓The agent’s current decision will affect future decisions.

✓Chess, crossword puzzles, poker, and taxi driving are sequential: in all these
cases, short-term actions can have long-term consequences.
Deterministic vs. Stochastic
❖ Deterministic environment: an agent's current state and selected
action can determine the next state of the environment.
 E.g., crossword puzzles

❖A stochastic environment is random in nature and cannot be determined


completely by an agent.

 Taxi driving is clearly stochastic in this sense, because one can never
predict the behavior of traffic exactly. Chess is an example of a
deterministic environment.
Discrete vs. continuous
❖ Discrete Environment: A limited number of distinct, clearly
defined percepts and actions. At any given state there are only
finitely many actions to choose from.
✓For example, chess is a discrete environment because it has a
finite number of distinct states and a discrete set of percepts
and actions.
✓Taxi driving is a continuous-state and continuous-time problem:
the speed and location of the taxi are continuous values.
Activity 2.1
For each of the following activities, give a PEAS description of the
task environment and characterize it in terms of the properties of
task environments
a) Medical diagnosis system
b) Interactive AI tutor
c) Credit card fraud detection
d) Part-picking robot
e) Image Analysis
Intelligent Agents
❖An intelligent agent is an autonomous entity which acts upon an
environment using sensors and actuators to achieve goals.
❖An intelligent agent may learn from the environment to achieve its
goals.
❖An intelligent agent:
▪ Must have the ability to perceive/sense its environment
▪ Must be able to make decisions
▪ Must be able to take an action based on the decision taken
▪ Must be able to take rational actions
Types of Intelligent Agents
❖Intelligent agents are grouped into five classes based on their degree
of perceived intelligence and capability (arranged in order of increasing
generality):
➢ Simple reflex agents

➢Model-based reflex agents

➢Goal-based agents

➢Utility-based agents

➢Learning agents
Simple reflex Agents
❖These agents select an action on the basis of the current percept, ignoring
the rest of the percept history.
❖They work on condition–action rules, which directly map the
current percept to an action.
❖For example, the vacuum agent whose agent function is tabulated in Fig 4
is a simple reflex agent, because its decision is based only on the current
location and on whether that location contains dirt.
❖If location A is dirty, this triggers some established connection in the
agent program to the action “Suck”.
❖We call such a connection a condition–action rule, written as:
Example 1: if location A is dirty then suck
Example 2: if car-in-front-is-braking then initiate-braking.
❖These agents only succeed if the environment is fully observable.
Figure: Schematic diagram of a simple reflex agent
Cont.…

The INTERPRET-INPUT function generates an abstracted description of the current state from the percept.
The RULE-MATCH function returns the first rule in the set of rules that matches the given state description.
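
A Python sketch of the corresponding SIMPLE-REFLEX-AGENT skeleton; interpret_input and the rule interface (matches, action) are placeholders the designer must supply:

# Skeleton of a simple reflex agent (after the book's pseudocode).
def make_simple_reflex_agent(rules, interpret_input):
    def agent_program(percept):
        state = interpret_input(percept)                   # INTERPRET-INPUT
        rule = next(r for r in rules if r.matches(state))  # RULE-MATCH
        return rule.action
    return agent_program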
Model based reflex agents
❖The most effective way to handle partial observability is for the agent to keep track
of the part of the world it can’t see now.
❖Maintains internal state: keeps track of percept history in order to reveal some of
the unobservable aspects of the current state.
 For other driving tasks such as changing lanes, the agent needs to keep track of
where the other cars are if it can’t see them all at once.
❖The agent combines current percept with the internal state to generate updated
description of the current state.
❖Updating internal state information requires two kinds of knowledge to be encoded
in the agent program.
a) how the world evolves independently from the agent.
b) how the agent’s actions affect the world.
 That is, the agent stores previously observed information.
Figure: Structure of the model-based reflex agent, showing how the current percept is
combined with the old internal state to generate the updated description of the current state.
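
A Python sketch following the shape of the book’s MODEL-BASED-REFLEX-AGENT pseudocode; update_state is a placeholder that must encode both kinds of knowledge above:

# Skeleton of a model-based reflex agent.
def make_model_based_agent(rules, update_state, initial_state):
    state = initial_state
    last_action = None
    def agent_program(percept):
        nonlocal state, last_action
        # combine the current percept with the old internal state
        state = update_state(state, last_action, percept)
        rule = next(r for r in rules if r.matches(state))  # RULE-MATCH
        last_action = rule.action
        return last_action
    return agent_program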
Goal Based Agent
❖Knowing something about the current state of the environment is not always
enough to decide what to do. For example, at a road junction, the taxi can turn
left, turn right, or go straight on. The correct decision depends on where the
taxi is trying to get to.
 In addition to the current state description, the agent needs “goal information”.

The agent program combines the “goal information” with the “model”. This
allows the agent to choose among multiple possibilities, selecting the one that
reaches a goal state.
This usually requires searching and planning to find action sequences that
achieve the agent’s goals. E.g., a GPS system finding a path to a certain destination.
Figure: Goal-based agent. Decision making of this kind is fundamentally different from
condition–action rules: it involves consideration of the future, both “What will happen
if I do such-and-such?” and “Will that make me happy?”
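
A hypothetical one-step sketch of goal-based action selection; predict and goal_test are assumed helper functions (real agents search over whole action sequences):

# Pick an action whose predicted outcome satisfies the goal.
def goal_based_choice(state, actions, predict, goal_test):
    for action in actions:
        if goal_test(predict(state, action)):  # "What will happen if I do this?"
            return action
    return None  # no single action reaches the goal: search/planning is needed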
Utility based agents
❖Goals alone are not enough to generate high-quality behavior in most
environments. For example, many action sequences will get the taxi to its
destination (thereby achieving the goal) but some are quicker, safer, more reliable,
or cheaper than others.
❖A utility function assigns a score to any given sequence of environment states.
❖Based on utility, the agent chooses the action that maximizes its expected
utility (performance measure) for each state of the world.
❖Example: a taxi-driving agent can easily distinguish between more and less
desirable ways of getting to its destination.
❖The term utility can be used to describe “how happy a state would make the agent”.
Figure: Utility-based agent. It uses a model of the world, along with a utility function
that measures its preferences among states of the world. It then chooses the action that
leads to the best expected utility.
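
A hypothetical sketch of utility-based selection, assuming the same predict helper as above plus a utility function over states:

# Pick the action whose predicted outcome has the highest utility.
def utility_based_choice(state, actions, predict, utility):
    return max(actions, key=lambda a: utility(predict(state, a)))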
Learning agents
❖ A learning agent is capable of learning from its experience. It starts with basic
knowledge and is then able to act and adapt autonomously, through learning, to
improve its performance over time.
❖ A learning agent is divided into four conceptual components.
i. Learning element, which is responsible for making improvements
 It uses feedback from the critic on how the agent is doing and determines how the
performance element should be modified to do better in the future.
ii. Performance element, which is responsible for selecting external actions. It is
what we previously considered to be the entire agent: it takes in percepts and decides on actions.
iii. Critic tells the learning element how well the agent is doing with respect to a fixed
performance standard.
iv. Problem generator: It is responsible for suggesting actions that will lead to new
and informative experiences.
Figure: Learning agents
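
One illustrative way to wire the four components together (all names here are hypothetical, not an API from the book):

# One step of a learning agent (illustration only).
def learning_agent_step(percept, performance, critic, learning, problem_generator):
    feedback = critic.evaluate(percept)           # vs. a fixed performance standard
    learning.improve(performance, feedback)      # modify the performance element
    action = performance.select_action(percept)  # select an external action
    return problem_generator.maybe_override(action)  # occasionally try something new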
Example
 To make the overall design more concrete, let us return to the automated taxi
example. The performance element consists of whatever collection of
knowledge and procedures the taxi has for selecting its driving actions. The taxi
goes out on the road and drives, using this performance element. The critic
observes the world and passes information along to the learning element. For
example, after the taxi makes a quick left turn across three lanes of traffic, the
critic observes the shocking language used by other drivers. From this
experience, the learning element is able to formulate a rule saying this was a
bad action, and the performance element is modified by installation of the new
rule. The problem generator might identify certain areas of behavior in need of
improvement and suggest experiments, such as trying out the brakes on
different road surfaces under different conditions.
Agent programs
 The agent programs that we design in this book all have the same skeleton: they
take the current percept as input from the sensors and return an action to the
actuators. Notice the difference between the agent program, which takes the
current percept as input, and the agent function, which takes the entire percept
history. The agent program takes just the current percept as input because
nothing more is available from the environment; if the agent’s actions need to
depend on the entire percept sequence, the agent will have to remember the
percepts.
 We describe the agent programs in a simple pseudocode language.
 The table in Fig 4 represents explicitly the agent function that the agent program
embodies.
 To build a rational agent in this way, we as designers must construct a table that
contains the appropriate action for every possible percept sequence.
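
A minimal simulation loop coupling an agent program to an environment (a sketch; the environment’s method names are assumptions, not a fixed API):

# Run an agent program against an environment for a fixed number of steps.
def run(environment, agent_program, steps):
    for _ in range(steps):
        percept = environment.current_percept()  # sensors
        action = agent_program(percept)          # percept -> action
        environment.execute(action)              # actuators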
