
CS361 Lec 03

This document discusses different types of intelligent agents and their structure. It defines an agent as anything that can perceive its environment and act upon it. The goal is to design rational agents that perform well based on some objective measure. It describes simple reflex agents that act based solely on current percepts without memory. Model-based reflex agents use internal memory and a percept history to create a model of the environment to determine actions. Agents with explicit goals are more advanced as they consider future states and have a description of a desired goal situation to guide decision making.


Introduction to

Artificial Intelligence
Chapter 2: The Structure of Agents

Dr. Ahmed Fouad Ali


The Structure of Agents
What is an Intelligent Agent?

• An agent is anything that can:
  • perceive its environment through sensors, and
  • act upon that environment through actuators.

• Goal: design rational agents that do a “good job” of acting in their environments.
  • Success is determined by some objective performance measure.
Vacuum Cleaner World
Environment: two squares, A and B
Percepts: location and contents, e.g. [A, Dirty]
Actions: Left, Right, Suck, NoOp

Percept sequence                            Action
[A, Clean]                                  Right
[A, Dirty]                                  Suck
[B, Clean]                                  Left
[B, Dirty]                                  Suck
[A, Clean], [A, Clean]                      Right
[A, Clean], [A, Dirty]                      Suck
[A, Clean], [A, Clean], [A, Clean]          Right
[A, Clean], [A, Clean], [A, Dirty]          Suck
Agent Function and Agent Program
Agent function (percepts ==> actions):

• Maps from percept histories to actions: f: P* ==> A
• The agent program runs on the physical architecture to produce the function f.
• agent = architecture + program
• Action := Function(Percept Sequence); that is, if (Percept Sequence) then do Action.

Example: a simple agent function for the vacuum world:
• If the current square is dirty, then Suck;
• else move to the adjacent square.
Table Agent Program

function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept
              sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
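As a sketch, the table-driven scheme above might look like this in Python; the table contents and the (location, status) percept format are illustrative assumptions for the vacuum world:

```python
# A minimal sketch of TABLE-DRIVEN-AGENT: the agent keeps the full
# percept history and looks it up in a table of actions.

def make_table_driven_agent(table):
    percepts = []  # percept history, initially empty

    def agent(percept):
        percepts.append(percept)
        # LOOKUP: the table must cover every percept sequence we see.
        return table.get(tuple(percepts))

    return agent

# Table indexed by percept sequences (tuples of (location, status)).
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```

Note how quickly the table grows: every distinct percept *sequence* needs its own entry, which is exactly the disadvantage listed below.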
Table Agent Program: Advantages and Disadvantages

Advantages
• Easy to implement.
• Simple architecture for simple problems.
• We can add an input-output pair whenever we want.

Disadvantages
• Input is searched in the table, so running time grows with the size of the table.
• Everything (every input-output pair) needs to be written in the table.
• No learning capability.
Agent Types

 Simple reflex agents
   are based on condition-action rules and implemented with an appropriate production system.
   They are stateless devices which do not have memory of past world states.

 Reflex agents with memory (model-based)
   have internal state which is used to keep track of past states of the world.

 Agents with goals
   are agents which, in addition to state information, have goal information that describes desirable situations.
   Agents of this kind take future events into consideration.

 Utility-based agents
   base their decisions on classic axiomatic utility theory in order to act rationally.
A Simple Reflex Agent

• Simple reflex agents are the simplest agents.
• These agents make decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents only succeed in fully observable environments.
• The simple reflex agent works on condition-action rules, i.e., it maps the current state directly to an action.
A Simple Reflex Agent

 Example:
   if car-in-front-brakes
   then initiate braking

 The agent works by finding a rule whose condition matches the current situation (rule-based systems).

 But this only works if the current percept is sufficient for making the correct decision.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
Example: Simple Reflex Vacuum Agent

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
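The pseudocode above translates directly into Python; representing the percept as a (location, status) tuple is an assumption of this sketch:

```python
# REFLEX-VACUUM-AGENT: acts on the current percept only, no memory.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```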
Example: Air Conditioner Agent

• Suppose we set an if-then rule on an agent:
  Condition = if the temperature is more than 45, switch on the AC.
• When the agent senses that the environment's current temperature is more than 45, it takes that action; the agent must have the relevant information about the room.
• If we make the task more complex, we need a more advanced agent, or another reflex agent to sense whether anyone is in the room: e.g., if the room is not empty, then switch on the AC.

Advantages of the Simple Reflex Agent

• Easy to implement (only an if-else structure is required).
• Very fast compared to other agents.
• Efficient because of condition-action rules.
• Simple reflex agents need only fixed memory.

Limitations of Simple Reflex Agents

• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• Their rule sets are mostly too big to generate and to store.
• They are not adaptive to changes in the environment.
• They are only applicable to simple and small systems (limited inputs and outputs).
Model Based Reflex Agent
• A model-based reflex agent is one that uses internal memory and a percept history to create a model of the environment in which it is operating, and makes decisions based on that model.
• The term percept means something that has been observed or detected by the agent.
• The model-based reflex agent stores past percepts in its memory and uses them to create a model of the environment.
• The agent then uses this model to determine which action should be taken in any given situation.
Model Based Reflex Agent

 Updating the internal state requires two kinds of encoded knowledge:
   knowledge about how the world changes (independent of the agent's actions), and
   knowledge about how the agent's actions affect the world.

 But knowledge of the internal state is not always enough.
 How do we choose among alternative decision paths (e.g., where should the car go at an intersection)?
 This requires knowledge of the goal to be achieved.

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on
                     the current state and action
              rules, a set of condition–action rules
              action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action
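A minimal sketch of the model-based idea for the vacuum world follows. The internal state here simply remembers the last known status of each square; this is an assumption for illustration, since the pseudocode's model is more general:

```python
# MODEL-BASED-REFLEX-AGENT sketch: rules match against the internal
# world model, not just the current percept.

def make_model_based_vacuum_agent():
    state = {"A": None, "B": None}  # last known status of each square

    def agent(percept):
        location, status = percept
        # UPDATE-STATE: fold the new percept into the world model.
        state[location] = status
        # RULE-MATCH over the model:
        if state[location] == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if state[other] != "Clean":   # unknown or dirty: go check it
            return "Right" if location == "A" else "Left"
        return "NoOp"                 # both squares known to be clean

    return agent

agent = make_model_based_vacuum_agent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Clean")))  # NoOp (model now knows both are clean)
```

Unlike the simple reflex agent, this one can act sensibly in a partially observable environment: it can stop once its model says both squares are clean.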
Model Based Reflex Agent Example

• For example, a robot may be programmed to avoid obstacles in its path. It slowly builds a model of the environment as it moves around. As it encounters obstacles, it stores these percepts in its memory and updates its model accordingly. When it encounters new obstacles similar to past ones, the robot can use its memory and interpretation skills to identify the obstacle and take the appropriate action.

Model Based Reflex Agent: Advantages and Disadvantages

Advantages
• More efficient than simple reflex and table-driven agents.
• Can work in partially observable task environments.

Disadvantages
• No information about the goal state.
• Limited intelligence.
Agents with Explicit Goals

 Knowing the current state is not always enough.
 State allows an agent to keep track of unseen parts of the world, but the agent must update state based on knowledge of changes in the world and of the effects of its own actions.
 Goal = description of a desired situation.

 Example: the decision to change lanes depends on a goal to go somewhere (and other factors).

 Reasoning about actions:
   Reflex agents only act based on pre-computed knowledge (rules).
   Goal-based (planning) agents act by reasoning about which actions achieve the goal: less efficient, but more adaptive and flexible.

 Notes:
   Search and Planning are concerned with finding sequences of actions to satisfy a goal.
   A reflexive agent is concerned with one action at a time.
   Classical planning: finding a sequence of actions that achieves a goal.
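To make the contrast with reflex agents concrete, the planning idea can be sketched as a breadth-first search over action sequences. The two-square transition model below is an illustrative assumption, not part of the slide:

```python
# Goal-based agent sketch: search for a SEQUENCE of actions that
# reaches a goal state, instead of reacting one action at a time.
from collections import deque

ACTIONS = ["Left", "Right", "Suck"]

def step(state, action):
    # state = (location, status_A, status_B)
    loc, a, b = state
    if action == "Left":
        return ("A", a, b)
    if action == "Right":
        return ("B", a, b)
    # Suck cleans the current square.
    return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")

def plan(start, goal_test):
    # Breadth-first search: shortest action sequence reaching the goal.
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action in ACTIONS:
            nxt = step(state, action)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

goal = lambda s: s[1] == "Clean" and s[2] == "Clean"
print(plan(("A", "Dirty", "Dirty"), goal))  # ['Suck', 'Right', 'Suck']
```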
Agents with Explicit Goals: Advantages and Disadvantages

Advantages
• More efficient than the previous agents.

Disadvantages
• Cannot differentiate which of two states is best.
• No learning capabilities (limited intelligence).

A Complete Utility-Based Agent
• The problem with the goal-based agent is that it cannot distinguish between two states: is state “A” best, or state “B”? For this, the agent must evaluate states.
• Evaluation is performed through a calculation known as a utility function (or evaluation function).
• Utility-based agents choose actions based on a preference (utility) for each state.
• A utility function maps a state onto a real number (known as a utility value) which describes the associated degree of happiness; e.g., “you are 70% closer to the goal” is more informative than “the current state is not the goal.”
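A short sketch of this idea: a utility function maps states to real numbers, and the agent picks the action whose successor state scores highest. The states, actions, and numbers below are illustrative assumptions:

```python
# Utility-based agent sketch for a two-square world.

def utility(state):
    # Graded measure of "happiness": fraction of squares clean,
    # more informative than a yes/no goal test.
    dirty = sum(1 for s in state.values() if s == "Dirty")
    return 1.0 - dirty / len(state)

def best_action(state, actions, result):
    # Choose the action leading to the highest-utility successor.
    return max(actions, key=lambda a: utility(result(state, a)))

def result(state, action):
    # Hypothetical transition model.
    nxt = dict(state)
    if action == "Suck-A":
        nxt["A"] = "Clean"
    elif action == "Suck-B":
        nxt["B"] = "Clean"
    return nxt

state = {"A": "Dirty", "B": "Clean"}
print(best_action(state, ["Suck-A", "Suck-B", "NoOp"], result))  # Suck-A
```

Unlike a bare goal test, the numeric utility lets the agent rank partial progress and trade off conflicting goals.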
A Complete Utility-Based Agent

• A preferred world state has higher utility for the agent (utility = the quality of being useful).

• Examples:
  • quicker, safer, more reliable ways to get where you are going;
  • price-comparison shopping.
Utility Function

Utility function: state ==> U(state) = 4 allows rational decisions in two kinds of
measure of happiness situations
h evaluation of the tradeoffs among conflicting
goals
h evaluation of competing goals
A Complete Utility-Based Agent: Advantages and Disadvantages

Advantages
• Finds the best state using an evaluation (utility) function.
• Can work efficiently in continuous environments.

Disadvantages
• Limited intelligence.
Learning agents

• A learning agent in AI is the type of agent that uses machine learning techniques (such as reinforcement learning) in order to learn from its past experiences.

• This is the only agent type that can perform in every type of environment. It starts to act with basic knowledge and is then able to act and adapt automatically through learning.


Learning agents

 All previous agent programs describe methods for selecting actions, yet they do not explain the origin of these programs. Learning mechanisms can be used to perform this task: teach the agents instead of instructing them. An advantage is the robustness of the program toward initially unknown environments.

 A learning agent has four conceptual components:
   Performance element: selects actions based on percepts; corresponds to the previous agent programs.
   Critic: provides feedback on the agent's performance, based on a fixed performance standard.
   Learning element: introduces improvements in the performance element.
   Problem generator: suggests actions that will lead to new and informative experiences (exploration vs. exploitation).
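The four components can be wired into a minimal loop, sketched below. The toy environment, reward rule, and rule-table learner are illustrative assumptions; a real learning agent would use e.g. reinforcement learning:

```python
import random

def performance_element(percept, rules):
    # Selects actions based on percepts (the previous agent programs).
    return rules.get(percept, random.choice(["Left", "Right", "Suck"]))

def critic(percept, action):
    # Feedback against a fixed performance standard: sucking dirt is good.
    return 1 if percept[1] == "Dirty" and action == "Suck" else 0

def learning_element(rules, percept, action, reward):
    # Improves the performance element: keep rules that earned reward.
    if reward > 0:
        rules[percept] = action

def problem_generator(percept):
    # Suggests exploratory actions that may yield informative experiences.
    return random.choice(["Left", "Right", "Suck"])

rules = {}
random.seed(0)
for _ in range(50):                       # exploration phase
    percept = (random.choice("AB"), random.choice(["Clean", "Dirty"]))
    action = problem_generator(percept)   # explore
    learning_element(rules, percept, action, critic(percept, action))

# After learning, the performance element exploits the learned rules.
print(rules)
```

Every rule the agent keeps was rewarded by the critic, so the learned table only ever maps Dirty percepts to Suck; the exploration-vs-exploitation trade-off shows up in the split between `problem_generator` and `performance_element`.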
Parts of this presentation are taken from the following resources:

https://www.javatpoint.com/types-of-ai-agents

https://skilllx.com/types-of-agents-in-artificial-intelligence/
