16CS314 -
Artificial Intelligence
Structure of an intelligent agent
• Intelligent agent = agent program + architecture
• Two types of agent program:
– A skeleton agent
– A table-driven agent
Skeleton agent
– The agent receives a single percept at a time as its
input.
– On each new percept, it updates its memory with
that percept.
– A goal or performance measure is applied to
judge the agent's performance.
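The skeleton-agent loop above can be sketched in Python. This is a minimal illustration, not a definitive implementation: `choose_best_action` is a hypothetical placeholder for the agent's decision procedure.

```python
# Sketch of a skeleton agent: remember each percept, then decide.
memory = []

def choose_best_action(memory):
    # Placeholder decision procedure (hypothetical):
    # simply respond to the most recent percept.
    return "respond-to-" + str(memory[-1])

def skeleton_agent(percept):
    memory.append(percept)                # update memory with new percept
    action = choose_best_action(memory)   # decide using everything remembered
    memory.append(("action", action))     # remember the chosen action too
    return action
```

A real skeleton agent would replace `choose_best_action` with whatever reasoning the performance measure calls for.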
Table driven agent
– Maintains a table that indexes each percept
sequence with its corresponding action.
– On each percept, the agent looks up the table entry
for the current percept sequence and performs the
corresponding action.
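The table lookup described above can be sketched as follows; the table contents are purely illustrative.

```python
# Sketch of a table-driven agent: the whole percept sequence so far
# is the index into a (hypothetical) percept-sequence -> action table.
percepts = []
table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move-right",
}

def table_driven_agent(percept):
    percepts.append(percept)
    # Look up the action for the full sequence; no entry -> no-op.
    return table.get(tuple(percepts), "no-op")
```

The obvious drawback, noted later in these slides, is that such a table grows with every possible percept sequence and quickly becomes too big to store.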
Types of agent
1. Simple Reflex Agent
2. Model Based Reflex Agent
3. Goal Based Agent
4. Utility Based Agent
5. Learning Agent
1.Simple Reflex Agent
Example
• An ATM agent system: if the PIN matches the
given account number, the customer gets money.
1.Simple Reflex Agent
Ignores the rest of the percept history and acts only on
the basis of the current percept.
Works on condition-action rules: a condition-action rule
maps a state (i.e., a condition) to an action.
If the condition is true, the action is taken; otherwise it
is not.
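The ATM example can be written directly as a condition-action rule. The stored PIN value here is an illustrative assumption.

```python
# Sketch of a simple reflex agent for the ATM example: only the
# current percept (the entered PIN) is used, no percept history.
def atm_agent(entered_pin, stored_pin="1234"):  # stored_pin is illustrative
    # condition-action rule: if PIN matches, dispense cash; else reject
    if entered_pin == stored_pin:
        return "dispense-cash"
    return "reject"
```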
1.Simple Reflex Agent
Property
• Very limited intelligence.
• No knowledge of non-perceptual parts of the state.
• The rule table is usually too big to generate and store.
• If any change occurs in the environment, the
collection of rules needs to be updated.
Model Based Reflex Agent
• Example
• A car-driving agent which
maintains its own internal state
and then takes action as the
environment appears to it.
Model Based Reflex Agent
• It works by finding a rule whose condition matches
the current situation.
• Can handle partially observable environments by
using a model of the world.
• The agent has to keep track of an internal state
which is adjusted by each percept and depends on
the percept history.
Model Based Reflex Agent
Updating the state requires information about:
• how the world evolves independently of the agent,
• how the agent's actions affect the world.
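A model-based reflex agent can be sketched as below: the internal state is updated from the previous state, the last action, and the new percept, using a simple transition model. All names and the toy "driving" rules are illustrative assumptions.

```python
# Sketch of a model-based reflex agent with internal state.
state = {"position": 0}   # internal state (illustrative)
last_action = None

def update_state(state, action, percept):
    # Model of the world: how our actions affect it (moving forward
    # advances position) and what we last perceived.
    if action == "forward":
        state["position"] += 1
    state["seen"] = percept
    return state

def model_based_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    # Condition-action rule over the (updated) state:
    # keep moving forward unless an obstacle is perceived.
    action = "stop" if percept == "obstacle" else "forward"
    last_action = action
    return action
```

Unlike the simple reflex agent, the rule here could also consult `state`, which encodes unobserved aspects of a partially observable world.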
Goal Based Agent
• Example
• An agent searching for a
solution to the 8-queens puzzle.
Goal Based Agent
• Agents make decisions based on how far they currently
are from their goal (a description of desirable situations).
• This allows the agent a way to choose among multiple
possibilities, selecting the one which reaches a goal state.
• The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents
more flexible.
Goal Based Agent
• They usually require search and planning.
• The goal-based agent's behavior can easily be
changed.
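The idea of choosing among possibilities by how close their result gets to the goal can be sketched with a deliberately tiny example (not the 8-queens puzzle itself): the state, actions, and distance measure are all illustrative assumptions.

```python
# Sketch of a goal-based agent: predict each action's result and
# pick the one closest to the goal (here, reaching a target number).
def result(state, action):
    # Transition model (illustrative): actions add to the state.
    return state + action

def distance_to_goal(state, goal):
    return abs(goal - state)

def goal_based_agent(state, actions, goal):
    # Choose the action whose predicted result minimizes the
    # remaining distance to the goal state.
    return min(actions, key=lambda a: distance_to_goal(result(state, a), goal))
```

For problems like 8-queens the same scheme is wrapped in search: the agent explores sequences of actions until one reaches a goal state.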
Utility Based Agent
• Example
• A military planning robot
which provides a certain plan of
action to be taken.
• Its environment is very
complex and the expected
performance is also high.
Utility Based Agent
• When there are multiple possible alternatives,
utility-based agents are used to decide which one
is best.
• They choose actions based on a preference
(utility) for each state.
Utility Based Agent
• Utility describes how “happy” the agent is.
• Because of the uncertainty in the world, a utility agent
chooses the action that maximizes the expected
utility.
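Maximizing expected utility under uncertainty can be sketched as below; the plans, probabilities, and utilities are invented for illustration only.

```python
# Sketch of a utility-based agent: each action leads to outcomes
# with probabilities, and the agent picks the action whose
# expected utility is highest.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs
    return sum(p * u for p, u in outcomes)

def utility_based_agent(action_outcomes):
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

plans = {  # illustrative plans for a planning robot
    "advance": [(0.7, 10), (0.3, -20)],   # EU = 0.7*10 - 0.3*20 = 1.0
    "hold":    [(1.0, 2)],                # EU = 2.0
}
# utility_based_agent(plans) -> "hold"
```

Note that "advance" has the higher best-case payoff, yet the agent prefers "hold" because uncertainty makes its *expected* utility larger.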
Learning Agent
• Allows the agent to operate in initially unknown
environments and to become more competent than
its initial knowledge alone would allow.
Learning Agent
• Learning element - makes improvements from the
critic's feedback.
• Performance element - selects external actions; it is
equivalent to the entire non-learning agent.
• Critic - judges the success of the agent's behavior
with respect to a fixed performance standard and
returns feedback to the learning element.
Learning Agent
• Problem generator- responsible for suggesting actions
that will lead to new and informative experiences.
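The four components named above can be sketched together. This is a toy illustration: the performance standard, the rule format, and the learned actions are all assumptions, not a prescribed design.

```python
# Sketch of a learning agent with its four components:
# performance element, critic, learning element, problem generator.
class LearningAgent:
    def __init__(self):
        self.rules = {}  # knowledge used by the performance element

    def performance_element(self, percept):
        # Select an external action; falls back to exploration.
        return self.rules.get(percept, self.problem_generator())

    def critic(self, percept, action):
        # Judge success against a (hypothetical) performance
        # standard: exploratory actions score poorly.
        return 1 if action != "explore" else -1

    def learning_element(self, percept, action, feedback):
        # Improve the performance element using the critic's feedback.
        if feedback < 0:
            self.rules[percept] = "act-on-" + percept  # learned rule (illustrative)

    def problem_generator(self):
        # Suggest an action that leads to a new, informative experience.
        return "explore"

    def step(self, percept):
        action = self.performance_element(percept)
        self.learning_element(percept, action, self.critic(percept, action))
        return action
```

On a repeated percept the agent stops exploring and applies the rule it learned, which is the point of the learning element.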