Chapter-2 Intelligent Agent

• Agents and environments

• Rationality
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Class of Environment
• Agent types
• I want to build a robot that will
• Clean my house
• Cook when I don’t want to
• Wash my clothes
• Cut my hair
• Fix my car (or take it to be fixed)
• Take a note when I am in a meeting
• Handle my emails (Information filtering agent)
i.e. do the things that I don’t feel like doing…
• AI is the science of building software or physical agents that act
rationally with respect to a goal.
• Software agents: also called softbots (software robots)
• A softbot is an agent that operates within the confines of a computer
or computer network.
• It interacts with a software environment by issuing commands
and interpreting the environment's feedback.
• A softbot's effectors are commands (e.g. mv or compress in a UNIX
shell) that change the external environment's state.
• E.g. mail-handling agent, information-filtering agent
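A minimal sketch of an information-filtering softbot like the one mentioned above. The message format and the spam keywords are illustrative assumptions, not from the slides:

```python
# Hypothetical keywords used by this toy filtering agent (assumed).
SPAM_KEYWORDS = {"lottery", "winner", "prize"}

def filter_mail(inbox):
    """Percept: a list of message subjects. Action: keep or discard each."""
    kept = []
    for subject in inbox:
        words = set(subject.lower().split())
        if words & SPAM_KEYWORDS:      # condition matches -> discard
            continue
        kept.append(subject)           # otherwise keep the message
    return kept

inbox = ["Meeting notes", "You are a lottery winner", "Project update"]
print(filter_mail(inbox))  # ['Meeting notes', 'Project update']
```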

• Physical agents
• are robots that have the ability to move and act in the physical
world and can perceive and manipulate objects in that world,
possibly for responding to new perceptions.
Agent
• An agent is anything that perceives its environment through
sensors and acts upon that environment through effectors.
• The agent is assumed to exist in an environment in which it
perceives and acts.
• An agent is rational if it does the right thing to achieve the
specified goal.
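The perceive-then-act cycle above can be sketched in a few lines; the environment here is a toy dict and the temperature threshold is an illustrative assumption:

```python
class Agent:
    """Toy agent: one sensor reading in, one effector command out."""

    def perceive(self, environment):
        return environment["temperature"]          # sensor reading

    def act(self, percept):
        # Effector command chosen from the percept (threshold assumed).
        return "cool" if percept > 25 else "idle"

env = {"temperature": 30}
agent = Agent()
print(agent.act(agent.perceive(env)))  # cool
```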
Agent

                      Human Agent           Physical Agent
Sensors               Eyes, Ears, Nose      Cameras, Scanners, Mic,
                                            infrared range finders
Effectors/Actuators   Hands, Legs, Mouth    Various motors (artificial
                                            hand, artificial leg),
                                            Speakers, Radio
• A rational agent should strive to "do the right thing", based on
what it can perceive and the actions it can perform. The right
action is the one that will cause the agent to be most successful.
• What does "right thing" mean? The action that is expected to
maximize goal achievement, given the available information.
• A rational agent is not omniscient.
• An omniscient agent knows the actual outcome of its actions and can
act accordingly, but in reality omniscience is impossible.
• Rational agents take the action with the best expected success, whereas an
omniscient agent takes an action knowing with certainty that it will succeed.

• Are human beings omniscient or rational agents?
• Alex was walking along the road to the bus station; he saw an old
friend across the street. There was no traffic.
• So, being rational, he started to cross the street.
• Meanwhile a big banner fell from above, and before he
finished crossing the road, he was flattened.
Was Alex irrational to cross the street?
• This points out that rationality is concerned with expected
success, given what has been perceived.
• Crossing the street was rational, because most of the time
the crossing would be successful, and there was no way he
could have foreseen the falling banner.
• The example shows that we cannot blame an agent for failing
to take into account something it could not perceive, or for failing
to take an action that it is incapable of taking.
• In summary, what is rational at any given point depends on the
PEAS (Performance measure, Environment, Actuators,
Sensors) framework.
• Performance measure
• The performance measure that defines degrees of success of the
agent
• Environment
• Knowledge: What an agent already knows about the environment
• Actuators – generating actions
• The actions that the agent can perform back to the environment
• Sensors – receiving percepts
• Perception: Everything that the agent has perceived so far
concerning the current scenario in the environment
• For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence
and whatever built-in knowledge the agent has.
• How do we decide whether an agent is successful or not?
• Establish a standard of what it means to be successful in an
environment and use it to measure the performance
• A rational agent should do whatever action is expected to
maximize its performance measure, on the basis of the
evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
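The "maximize expected performance" idea can be made concrete with toy numbers (the actions, probabilities and scores below are illustrative assumptions, loosely modelled on the street-crossing story):

```python
# Each action maps to a list of (probability, performance score) outcomes.
# Numbers are assumed for illustration only.
outcomes = {
    "cross_now": [(0.99, 10), (0.01, -100)],  # usually fine, rarely fatal
    "wait":      [(1.00, 2)],                 # always a small payoff
}

def expected_score(action):
    """Expected value of the performance measure for an action."""
    return sum(p * score for p, score in outcomes[action])

best = max(outcomes, key=expected_score)
print(f"best action: {best} (expected score {expected_score(best):.1f})")
```

Crossing is still the rational choice here, even though one outcome is catastrophic, because rationality is judged on expected success.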

• What is the performance measure for "crossing the road"?
• What about "chess playing"?
• Consider the task of designing an automated taxi driver
agent:
• Performance measure: safe, fast, legal, comfortable trip,
maximize profits
• Environment: roads (highway, passage, …), other traffic,
pedestrians, customers, …
• Actuators: steering, accelerator, display (for customers),
horn (to communicate with other vehicles), …
• Sensors: cameras, sonar, speedometer, GPS, engine sensors,
keyboard, microphone, …
• Goal: driving safely from source to destination
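The taxi's PEAS description can be written out as a plain data structure; the field contents below are transcribed from the slide:

```python
# PEAS description of the automated taxi, as data (a sketch).
taxi_peas = {
    "performance": ["safe", "fast", "legal", "comfortable", "maximize profits"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators":   ["steering", "accelerator", "display", "horn"],
    "sensors":     ["cameras", "sonar", "speedometer", "GPS",
                    "engine sensors", "keyboard"],
}

for component, items in taxi_peas.items():
    print(f"{component}: {', '.join(items)}")
```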
Examples: Agents for Various Applications

Agent type        Percepts            Actions               Goals             Environment
Interactive       Typed words,        Print exercises,      Maximize          Set of
English tutor     keyboard            suggestions,          student's score   students
                                      corrections           on test
Medical           Symptoms,           Questions, tests,     Healthy person,   Patient,
diagnosis         patient's answers   treatments            minimize costs    hospital
system
Part-picking      Pixels of varying   Pick up parts and     Place parts in    Conveyor
robot             intensity           sort into bins        correct bins      belts with
                                                                              parts
Satellite image   Pixels of varying   Print a               Correct           Images from
analyser          intensity, color    categorization of     categorization    orbiting
                                      the scene                               satellite
Refinery          Temperature,        Open, close valves;   Maximize purity,  Refinery
controller        pressure readings   adjust temperature    yield, safety
• Consider the need to design a "player
agent" for the national team. It may be a
chess player, football player, tennis player,
basketball player, etc.
• Identify what it perceives, the actions it takes, and the
environment it interacts with.
• Identify the sensors, effectors, goals,
environment and performance measure that
should be integrated for the agent to be
successful in its operation.
• An agent has two parts: architecture + program
• Architecture
• Runs the program
• Makes the percepts from the sensors available to the program
• Feeds the program's action choices to the effectors
• Program
• Accepts percepts from the environment and generates actions
• Before designing an agent program, we need to know the possible percepts
and actions
• By enabling a learning mechanism, the agent could have a
degree of autonomy, such that it can reason and make decisions.
• Actions are done by the agent on the environment.
Environments provide percepts to an agent.
• An agent perceives and acts in an environment. Hence, in
order to design a successful agent, the designer of the
agent has to understand the type of environment it
interacts with.
• Properties of Environments:
 Fully observable vs. Partially observable
 Deterministic vs. Stochastic
 Episodic vs. Sequential
 Static vs. Dynamic
 Discrete vs. Continuous
 Single agent vs. Multiagent
• Do the agent's sensors see the complete state of the
environment?
• If an agent has access to the complete state of the
environment, then the environment is accessible or fully
observable.
• An environment is effectively accessible if the sensors
detect all aspects that are relevant to the choice of
action.
• Taxi driving is partially observable.
• Is there a unique mapping from one state to another
state for a given action?
• The environment is deterministic if the next state is
completely determined by
• the current state of the environment and
• the actions selected and executed by the agent.

• Taxi driving is non-deterministic (i.e. stochastic).
• Does the next “episode” or event depend on the
actions taken in previous episodes?
• In an episodic environment, the agent's experience is
divided into "episodes".
• Each episode consists of the agent perceiving and then
performing a single action, and the choice of action in each
episode depends only on the episode itself.
• The quality of its action depends just on the episode itself.
• In a sequential environment the current decision could
affect all future decisions.
• Taxi driving is sequential.
• Can the world change while the agent is deliberating?
• If the environment can change while the agent is deliberating, then we say
the environment is dynamic for that agent;
• otherwise it is static.

• Taxi driving is dynamic.
• Are the distinct percepts & actions limited or
unlimited?
• If there are a limited number of distinct, clearly defined percepts and
actions, we say the environment is discrete.
• otherwise it is continuous.
• Taxi driving is continuous.
• If an agent operates by itself in an environment, it is a
single-agent environment.
• How do you decide whether another entity must be viewed
as an agent?
• Is it an agent or just a stochastically behaving object (ex: wave on a
beach)?
• Classify multiagent environment as (partially) competitive
and/or (partially) cooperative
• Ex: Taxi is partially competitive and partially cooperative
Environment Types
Below is a list of properties of a number of familiar environments

Problems              Observable  Deterministic  Episodic  Static  Discrete  Single agent
Crossword Puzzle      Yes         Yes            No        Yes     Yes       Yes
Part-picking robot    No          No             Yes       No      No        No
Web shopping program  No          No             No        No      Yes      No
Tutor                 No          No             No        Yes     Yes       Yes
Medical Diagnosis     No          No             No        No      No        No
Taxi driving          No          No             No        No      No        No
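The same table can be held as a dict of property tuples so a program can query it; the Yes/No values are transcribed from the table, and the helper name is an illustrative assumption:

```python
# Properties: observable, deterministic, episodic, static, discrete, single-agent.
PROPS = ("observable", "deterministic", "episodic", "static", "discrete", "single_agent")

ENVIRONMENTS = {
    "crossword puzzle":     (True,  True,  False, True,  True,  True),
    "part-picking robot":   (False, False, True,  False, False, False),
    "web shopping program": (False, False, False, False, True,  False),
    "tutor":                (False, False, False, True,  True,  True),
    "medical diagnosis":    (False, False, False, False, False, False),
    "taxi driving":         (False, False, False, False, False, False),
}

def is_hardest_case(name):
    """True when every property is False: inaccessible, stochastic,
    sequential, dynamic, continuous and multi-agent."""
    return not any(ENVIRONMENTS[name])

print(is_hardest_case("taxi driving"))     # True
print(is_hardest_case("crossword puzzle")) # False
```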
• Hardest case: an environment that is inaccessible, sequential,
stochastic, dynamic, continuous and multi-agent, which is true in the
real world.
• Basic types in order of increasing generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning agents
• It works by finding a rule whose condition matches the
current situation (as defined by the percept) and then doing
the action associated with that rule.
E.g. if the car in front brakes and its brake lights come on,
then the driver should notice this and initiate braking.
• Some processing is done on the visual input to establish the
condition. If "the car in front is braking", then this triggers
some established connection in the agent program to the action
"initiate braking". We call such a connection a condition-
action rule, written as: if car-in-front-is-braking then
initiate-braking.
• Humans also have many such condition-action rules, some of which are
learned responses and some of which are innate (inborn)
responses, e.g.
• Blinking when something approaches the eye.
Structure of a simple reflex agent

[Diagram: the agent's sensors report "what the world is like now";
condition-action rules select "what action I should do now"; the
effectors carry out that action in the Environment.]
function SIMPLE-REFLEX-AGENT(percept) returns action
static: rules, a set of condition-action rules
state  INTERPRET-INPUT (percept)
rule  RULE-MATCH (state,rules)
action  RULE-ACTION [rule]
return action
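The SIMPLE-REFLEX-AGENT pseudocode above can be rendered directly in Python; the single braking rule and the trivial INTERPRET-INPUT are illustrative assumptions:

```python
# Condition-action rules: state -> action (toy rule set, assumed).
RULES = {"car_in_front_braking": "initiate_braking"}

def interpret_input(percept):
    # INTERPRET-INPUT: here the percept already names the state.
    return percept

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    # RULE-MATCH + RULE-ACTION in one lookup; do nothing if no rule fires.
    return RULES.get(state, "no_op")

print(simple_reflex_agent("car_in_front_braking"))  # initiate_braking
```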
• This is a reflex agent with internal state.
• It keeps track of the parts of the world that it can't see now.
• It works by finding a rule whose condition matches the current
situation (as defined by the percept and the stored internal state).
• If the car is a recent model, there is a centrally mounted brake
light. With older models there is no centrally mounted brake light,
so the agent may get confused:
• Is it a parking light? Is it a brake light? Is it a turn signal light?
• Some sort of internal state should be maintained in order to choose an action.
• The camera should detect when two red lights at the edges of the vehicle
go ON or OFF simultaneously.
• The driver should look in the rear-view mirror to check on the
locations of nearby vehicles. In order to decide on a lane change, the
driver needs to know whether or not they are there.
• The driver sees, combines it with the stored information, and then
does the action associated with the matching rule.
[Diagram: the agent keeps an internal State, updated using models of
"how the world evolves" and "what my actions do" together with the
sensor percept ("what the world is like now"); condition-action rules
then select "what action I should do now", which the effectors carry
out in the Environment.]

function REFLEX-AGENT-WITH-STATE (percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules
  state  UPDATE-STATE (state, percept)
  rule  RULE-MATCH (state, rules)
  action  RULE-ACTION [rule]
  state  UPDATE-STATE (state, action)
  return action
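The REFLEX-AGENT-WITH-STATE pseudocode above, rendered in Python. The state update (merge the percept into a dict) and the brake-light rule are illustrative assumptions:

```python
class ReflexAgentWithState:
    """Reflex agent that keeps an internal description of the world."""

    def __init__(self, rules):
        self.state = {}          # description of the current world state
        self.rules = rules       # list of (condition, action) pairs

    def update_state(self, percept):
        # UPDATE-STATE: fold the new percept into the stored state.
        self.state.update(percept)

    def __call__(self, percept):
        self.update_state(percept)
        # RULE-MATCH: first rule whose condition holds in the state.
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "no_op"

rules = [(lambda s: s.get("both_rear_lights_on"), "initiate_braking")]
agent = ReflexAgentWithState(rules)
print(agent({"both_rear_lights_on": True}))  # initiate_braking
```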
• Choose actions that achieve the goal (an agent with
explicit goals)
• Involves consideration of the future:
 Knowing about the current state of the environment is not always
enough to decide what to do.
• For example, at a road junction, the taxi can turn left,
right or go straight.
 The right decision depends on where the taxi is trying to get to. As well
as a current state description, the agent needs some sort of goal
information, which describes situations that are desirable, e.g. being at
the passenger's destination.
• The agent may need to consider long sequences, twists
and turns to find a way to achieve a goal.
[Diagram: from the internal State and models of "how the world
evolves" and "what my actions do", the agent predicts "what it will
be like if I do action A"; its Goals then determine "what action I
should do now", which the effectors carry out in the Environment.]

function GOAL_BASED_AGENT (percept) returns action
  state  UPDATE-STATE (state, percept)
  action  SELECT-ACTION [state, goal]
  state  UPDATE-STATE (state, action)
  return action
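A sketch of the SELECT-ACTION step for the road-junction example: the agent considers what each action leads to and picks one whose outcome matches the goal. The junction names and transition table are illustrative assumptions:

```python
# "What my actions do": outcome of each action at each state (assumed).
TRANSITIONS = {
    "junction": {"left": "market", "right": "destination", "straight": "bridge"},
}

def select_action(state, goal):
    """Pick an action whose predicted outcome is the goal state."""
    for action, outcome in TRANSITIONS.get(state, {}).items():
        if outcome == goal:
            return action
    return "straight"   # default when no action reaches the goal directly

print(select_action("junction", "destination"))  # right
```

A real goal-based agent would search over whole action sequences, not just one step; this shows only the single-step case.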
• Goals alone are not really enough to generate high-quality
behavior.
• For example, there are many action sequences that will get the
taxi to its destination, thereby achieving the goal, but some are
quicker, safer, more reliable, or cheaper than others. We
need to trade off speed and safety.
• When there are several goals that the agent can aim
for, none of which can be achieved with certainty,
utility provides a way in which the likelihood of
success can be weighed against the importance of
the goals.
• An agent that possesses an explicit utility function can
make rational decisions.
[Diagram: from the internal State and models of "how the world
evolves" and "what my actions do", the agent predicts "what it will
be like if I do action A"; a Utility function rates "how happy I will
be in such a state", determining "what action I should do now", which
the effectors carry out in the Environment.]

function UTILITY_BASED_AGENT (percept) returns action
  state  UPDATE-STATE (state, percept)
  action  SELECT-OPTIMAL-ACTION [state, utility]
  state  UPDATE-STATE (state, action)
  return action
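A sketch of the utility comparison for the taxi: several routes all reach the destination (the goal), but the utility function ranks them by speed and safety. The routes, their numbers, and the weights are illustrative assumptions:

```python
def utility(route):
    """Higher is better: penalize travel time and accident risk.
    The weights -2 and -10 are assumed for illustration."""
    return -2 * route["minutes"] - 10 * route["risk"]

# Two routes that both achieve the goal (toy numbers, assumed).
routes = {
    "highway":   {"minutes": 20, "risk": 0.3},
    "back_road": {"minutes": 35, "risk": 0.1},
}

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # highway
```

With these weights the faster highway wins despite its higher risk; changing the weights changes the choice, which is exactly the trade-off a utility function makes explicit.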
