
Chapter#2

Intelligent Agent
Salah Ud Din
Lecturer
Department of Computer Science
COMSATS University Islamabad, Attock
Agents
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that environment
through actuators
Human agent:
eyes, ears, and other organs for sensors;
hands, legs, mouth, and other body parts for actuators
Robotic agent:
cameras and infrared range finders for sensors;
various motors for actuators
Software agent: receives keystrokes, file contents, and network
packets as sensory inputs and acts on the environment by
displaying on the screen, writing files, and sending network
packets
Agents and Environment

• We use the term percept to refer to the agent's perceptual inputs at any given instant.
• Agent's percept sequence is the complete history of everything
the agent has ever perceived.
• Mathematically an agent's behavior is described by the agent
function that maps any given percept sequence to an action.
agent = architecture + program
• Internally, the agent function for an artificial agent will be
implemented by an agent program.
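• As a minimal illustrative sketch (not a standard API; the class and method names below are assumptions made for this example), the separation between agent program and agent function can be written in Python:

# Minimal sketch of the agent abstraction: the agent program maps the percept
# sequence seen so far to an action chosen by the agent function.
class Agent:
    def __init__(self):
        self.percept_sequence = []          # complete history of everything perceived

    def program(self, percept_sequence):
        """Agent function: percept sequence -> action (override in a concrete agent)."""
        raise NotImplementedError

    def act(self, percept):
        self.percept_sequence.append(percept)
        return self.program(self.percept_sequence)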
Vacuum-cleaner world
Performance measure
• Performance measure: An objective criterion for the success of an agent's behavior. E.g.,
– performance measure of a vacuum-cleaner agent could be
amount of dirt cleaned up, amount of time taken, amount of
electricity consumed, amount of noise generated, etc.
– When an agent is plunked down in an environment, it
generates a sequence of actions according to the percepts it
receives.
– This sequence of actions causes the environment to go
through a sequence of states.
– If the sequence is desirable, then the agent has performed
well.
• An agent is autonomous if its behavior is determined by its own
percepts & experience (with ability to learn and adapt) without
depending solely on built-in knowledge.
Task Environment
• Before we design an intelligent agent, we must specify its “task
environment”:
• Task environments, which are essentially the "problems" to which
rational agents are the "solutions."
PEAS:
Performance measure
Environment
Actuators
Sensors
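• For example, the automated-taxi task (the standard textbook illustration) can be specified roughly as:
– Performance measure: safe, fast, legal, comfortable trip, maximize profits
– Environment: roads, other traffic, pedestrians, customers
– Actuators: steering wheel, accelerator, brake, signal, horn, display
– Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard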
Rational agents
• Performance measure: An objective criterion for the success of an agent's behavior
• Rational Agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its
performance measure, based on the evidence provided by the
percept sequence and whatever built-in knowledge the agent
has.
• An omniscient agent knows the actual outcome of its actions
and can act accordingly.
• Rationality is distinct from omniscience (all-knowing with
infinite knowledge)
• What is rational at any given time depends on four things:
• The performance measure that defines the criteria of
success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
Environment types
Environment types
• Episodic (vs. sequential): In an episodic task environment, the
agent's experience is divided into atomic episodes.
• In each episode the agent receives a percept and then performs a
single action.
• Next episode does not depend on the actions taken in previous
episodes. For example, an agent that has to spot defective parts
on an assembly line bases each decision on the current part,
regardless of previous decisions; moreover, the current decision
doesn't affect whether the next part is defective.
• In sequential environments, the current decision could affect all
future decisions.
• Chess and taxi driving are sequential: in both cases, short-term
actions can have long-term consequences.
• Episodic environments are much simpler than sequential
environments because the agent does not need to think ahead
Environment types
• Fully observable (vs. partially observable): If an agent's sensors
give it access to the complete state of the environment at each
point in time, task environment is fully observable.
• A task environment is effectively fully observable if the sensors
detect all aspects that are relevant to the choice of action.
• Fully observable environments are convenient because the agent
need not maintain any internal state to keep track of the world.
• An environment might be partially observable
– because of noisy and inaccurate sensors or because parts of the state are simply
missing from the sensor data
– for example, a vacuum agent with only a local dirt sensor cannot tell whether there
is dirt in other squares.
• If the agent has no sensors at all then the environment is
unobservable.
Environment types
• Deterministic (vs. stochastic): If the next state of the
environment is completely determined by the current
state and the action executed by the agent, then
environment is deterministic; otherwise, it is stochastic.
• In principle, an agent need not worry about uncertainty in
a fully observable, deterministic environment.
• Taxi driving is clearly stochastic in this sense, because one
can never predict the behavior of traffic exactly.
• The vacuum world is deterministic, but variations can
include stochastic elements such as randomly appearing
dirt and an unreliable suction mechanism.
• Nondeterministic environment is one in which actions are
characterized by their possible outcomes, but no
probabilities are attached to them.
Environment types
• Static (vs. dynamic): The environment is unchanged while an
agent is deliberating.
• Static environments are easy to deal with because the agent
need not keep looking at the world while it is deciding on an
action, nor need it worry about the passage of time.
• The environment is semi-dynamic if the environment itself
does not change with the passage of time but the agent's
performance score does.
• Taxi driving is clearly dynamic: the other cars and the taxi
itself keep moving while the driving algorithm dithers about
what to do next.
• Single agent (vs. multi-agent): An agent operating by itself is in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
Environment types
• Discrete (vs. continuous): The discrete/continuous distinction
applies to the state of the environment, to the way time is
handled, and to the percepts and actions of the agent.
– For example, the chess environment has a finite number of distinct states
(excluding the clock). Chess also has a discrete set of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem:
the speed and location of the taxi and of the other vehicles sweep
through a range of continuous values and do so smoothly over
time.
• Known (vs. Unknown): In a known environment, the outcomes (or
outcome probabilities if the environment is stochastic) for all
actions are given.
• If the environment is unknown, the agent will have to learn how it
works in order to make good decisions.
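• As a rough summary of how a few familiar tasks are usually classified (borderline cases can be argued either way):

Task environment     Observable   Agents   Deterministic   Episodic     Static    Discrete
Crossword puzzle     Fully        Single   Deterministic   Sequential   Static    Discrete
Chess with a clock   Fully        Multi    Deterministic   Sequential   Semi      Discrete
Taxi driving         Partially    Multi    Stochastic      Sequential   Dynamic   Continuous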
Agent types

• Five basic types in order of increasing generality:


• Table Driven agents
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Table Driven Agent
• A table-driven agent keeps a table that contains the appropriate action for every possible percept sequence: each action is found by table lookup on the entire percept history.
• The lookup table for chess—a tiny, well-behaved fragment of the real world—would have at least 10^80 entries.
Table Driven Agent
• The size of the table (the number of atoms in the observable universe is less than 10^80) means that
– no physical agent in this universe will have the space to store the
table
– the designer would not have time to create the table
– no agent could ever learn all the right table entries from
its experience
– even if the environment is simple enough to yield a
feasible table size, the designer still has no guidance about
how to fill in the table entries.
• Despite all this, TABLE-DRIVEN-AGENT does do what we want: it implements the desired agent function.
• The key challenge for AI is to find out how to write programs
that, to the extent possible, produce rational behavior from a
smallish program rather than from a vast table.
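• An illustrative Python sketch of the same idea (the function and variable names are made up for this example):

# Table-driven agent sketch: the entire percept history is used as the key into
# a lookup table that maps percept sequences to actions.
def make_table_driven_agent(table):
    percepts = []                            # persistent percept sequence

    def program(percept):
        percepts.append(percept)
        # the action is simply looked up from the (impractically large) table
        return table.get(tuple(percepts))

    return program

• For the two-square vacuum world such a table is small enough to write down; for chess it is not, which is exactly the point made above.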
Simple reflex agents
• The simplest kind of agent is the simple reflex agent.
• These agents select actions on the basis of the current percept, ignoring
the rest of the percept history.
• For example, the vacuum agent is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
• Simple reflex behaviors occur even in more complex environments. Imagine
yourself as the driver of the automated taxi. If the car in front brakes and its brake
lights come on, then you should notice this and initiate braking.
• We call such a connection a condition-action rule, written as
if car-in-front-is-braking then initiate-braking.

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
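• The same agent as a direct, illustrative Python transcription of the pseudocode above:

# Simple reflex agent for the two-square vacuum world ('A' and 'B').
def reflex_vacuum_agent(location, status):
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"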
Simple reflex agents

Model-based reflex agents
• An effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
• For the braking problem, the internal state is not too extensive—just the previous
frame from the camera, allowing the agent to detect when two red lights of the
vehicle go on or off simultaneously.
• Updating this internal state information as time goes by requires two kinds of
knowledge to be encoded in the agent program.
• First, some information about how the world evolves independently of the agent: for example, an overtaking car generally will be closer behind than it was a moment ago.
• Second, some information about how the agent's own actions affect the world: for example, when the agent turns the steering wheel clockwise, the car turns to the right.
• This knowledge about "how the world works"—whether implemented in simple
Boolean circuits or in complete scientific theories—is called a model of the world.
An agent that uses such a model is called a model-based agent.
Model-based reflex agents

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition-action rules
              action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
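• An illustrative Python rendering of the same structure (UPDATE-STATE and RULE-MATCH are passed in here, since their details depend on the particular world model and rule set; the rule objects are assumed to expose an action):

# Model-based reflex agent sketch: internal state is updated from the model,
# the last action, and the new percept before a condition-action rule is applied.
def make_model_based_reflex_agent(update_state, rule_match, rules, model):
    state, action = None, None               # persistent between calls

    def program(percept):
        nonlocal state, action
        state = update_state(state, action, percept, model)   # track unseen aspects of the world
        rule = rule_match(state, rules)                        # find an applicable condition-action rule
        action = rule.action
        return action

    return program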
Goal-based agents
• Knowing something about the current state of the environment is not
always enough to decide what to do. For example, at a road junction, the
taxi can turn left, turn right, or go straight on. The correct decision
depends on where the taxi is trying to get to.
• In other words, as well as a current state description, the agent needs
some sort of goal information that describes situations that are desirable
—for example, being at the passenger's destination. The agent program
can combine this with the model to choose actions that achieve the goal.
Utility-based agents
• Goals alone are not enough to generate high-quality behavior in most
environments. For example, many action sequences will get the taxi to its
destination, but some are quicker, safer, more reliable, or cheaper than
others.
• Goals just provide a crude binary distinction between "happy" and
"unhappy" states. A more general performance measure should allow a
comparison of different world states according to exactly how happy they
would make the agent.
• Because "happy" does not sound very scientific, economists and
computer scientists use the term utility instead.
• The performance measure assigns a score to any given sequence of environment states, so it can easily distinguish between more and less desirable ways of achieving the goal.
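• A minimal sketch of utility-based action selection (illustrative; 'result', which predicts the next state, and 'utility', which scores a state, are assumed to be supplied by the agent's model and performance measure):

# Choose the action whose predicted resulting state has the highest utility.
def utility_based_choice(state, actions, result, utility):
    return max(actions, key=lambda a: utility(result(state, a)))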
Utility-based agents

Learning agents
• Turing proposed to build learning machines and then to teach them.
• This is now the preferred method for creating state-of-the-art systems. Learning has another advantage: it allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone would allow.
• A learning agent can be divided into four conceptual components.
• Learning element is responsible for making improvements
• Performance element is responsible for selecting external actions. The
performance element is the entire agent: it takes in percepts and decides on
actions.
• Learning element uses feedback from the critic on how the agent is doing and
determines how the performance element should be modified to do better in the
future.
• Critic tells the learning element how well the agent is doing with respect to a fixed
performance standard. The critic is necessary because the percepts themselves
provide no indication of the agent's success.
• Problem generator is responsible for suggesting actions that will lead to new and
informative experiences.
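• A rough sketch of how the four components might be wired together (illustrative only; the component interfaces are assumptions made for this example):

# Learning agent sketch: the critic scores behaviour against a fixed standard,
# the learning element uses that feedback to improve the performance element,
# and the problem generator occasionally suggests exploratory actions.
def make_learning_agent(performance_element, learning_element, critic, problem_generator):
    def program(percept):
        feedback = critic(percept)
        learning_element(performance_element, feedback)
        suggestion = problem_generator(percept)
        return suggestion if suggestion is not None else performance_element(percept)

    return program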
Learning agents

State Representations
• In an atomic representation, each state of the world is indivisible—it has no internal structure.
• Consider the problem of finding a driving route from one end of a country to the other via some sequence of cities. For the purposes of solving this problem, it may suffice to reduce the state of the world to just the name of the city we are in—a single atom of knowledge.
• A "black box" whose only discernible property is that of being identical to or
different from another black box.
• Factored representation splits up each state into a fixed set of variables or
attributes, each of which can have a value.
• Two different factored states can share some attributes; this makes it much easier to work out how to turn one state into another.
• Factored representations can also represent uncertainty—for example, ignorance about the amount of gas in the tank can be represented by leaving that attribute blank.
• A higher-fidelity description for finding a driving route problem might need to
pay attention to how much gas is in the tank, current GPS coordinates,
whether or not the oil warning light is working, and so on.

State Representations
• In a structured representation, objects and their various and varying relationships can be described explicitly.
• For many purposes, we need to understand the world as having things in it that are related to each other, not just variables with values.
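• A small illustrative contrast between the three kinds of representation (the names, attributes, and relations below are made-up placeholders):

# Atomic: the state is an indivisible 'black box' (here, just a city name).
atomic_state = "CityA"

# Factored: a fixed set of attributes, each with a value; an unknown value
# (e.g. the gas level) is simply left blank to represent uncertainty.
factored_state = {
    "city": "CityA",
    "gas_level": None,                  # unknown
    "gps": (0.0, 0.0),                  # placeholder coordinates
    "oil_warning_light_ok": True,
}

# Structured: objects and the relationships between them are explicit.
structured_state = {
    "objects": ["truck", "cow", "road"],
    "relations": [("on", "truck", "road"), ("blocks", "cow", "road")],
}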
