
Artificial Intelligence and Machine Learning

Intelligent Agents

Lecture Topics:

 What is an Agent?
 Structure of an AI Agent
 Learning Agents
 Examples of Agents
Agents in Artificial Intelligence

 An agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
 It is no accident that the standard agent-environment diagram is the
same as the diagram for the interaction of the human nervous system
with its environment.
Agents in Artificial Intelligence

 Percept refers to the agent's perceptual input at any given time.
 Percept sequence is the complete history of everything the agent
has ever perceived.
 An agent's choice of action at any given instant can depend on the
entire percept sequence observed to date, but not on anything it has
not perceived.
 The agent function describes the agent's behavior:
 it maps any given percept sequence to an action;
 it is an abstract mathematical description.
Agents in Artificial Intelligence

 Example: the vacuum-cleaner world (see the sketch below).
 Percepts: the agent's location and the state of the environment.
 Actions: Left, Right, Suck.
 There are only two locations, square A and square B. The agent
perceives which square it is in and whether there is dirt in that
square. It can choose to move left, move right, or suck.
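
A minimal sketch of this world's agent function in Python; the tuple
encoding of percepts and the square names are our assumptions made for
illustration, not fixed by the slides:

    # Sketch of the vacuum world's agent function: map the current
    # percept (location, status) to an action. The percept encoding
    # is an illustrative assumption.
    def vacuum_agent_function(percept):
        location, status = percept            # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:                                 # must be square B
            return "Left"

    print(vacuum_agent_function(("A", "Dirty")))   # -> Suck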
Agents in Artificial Intelligence

 A rational agent is one that does the right thing.
 What does it mean to do the right thing?
 We answer by considering the consequences of the agent's behavior:
 The agent generates a sequence of actions according to the
percepts it receives.
 This sequence of actions causes the environment to go through a
sequence of states.
 If that sequence of states is desirable, then the agent has
performed well.
 Desirability is captured by a performance measure that evaluates
any given sequence of environment states.
 Example: the performance measure of a vacuum-cleaner agent could
be the amount of dirt cleaned up, the time taken, the electricity
consumed, the noise generated, etc. (one concrete scoring is
sketched below).
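
One illustrative scoring, assuming one point per clean square per time
step (a common textbook choice; the slides do not fix the exact measure):

    # Toy performance measure: one point for each clean square at each
    # time step. The scoring rule and state encoding are assumptions.
    def performance(state_history):
        return sum(1
                   for state in state_history      # one dict per time step
                   for status in state.values()
                   if status == "Clean")

    history = [{"A": "Dirty", "B": "Dirty"},       # t = 0
               {"A": "Clean", "B": "Dirty"},       # t = 1, after Suck in A
               {"A": "Clean", "B": "Clean"}]       # t = 2
    print(performance(history))                    # -> 3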
Agents in Artificial Intelligence

 The rationality of an agent depends on:
 The performance measure that defines the criterion of success.
 The agent's prior knowledge of the environment.
 The actions that the agent can perform.
 The agent's percept sequence to date.
 A definition of a rational agent:
For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence
and whatever built-in knowledge the agent has.
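
The definition can be written compactly (the notation is ours, not from
the slides): with U the performance measure over environment state
sequences and A the set of available actions,

    a^* = \arg\max_{a \in A} E\left[\, U(\text{resulting state sequence})
          \mid \text{percept sequence},\ \text{prior knowledge},\ a \,\right]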
Specifying the Task Environment

 To design a rational agent, we must first specify the task environment:
 The task environment is the “problem” specification for which the
agent is the solution.
 PEAS is used to specify a task environment:
 Performance measure
 Environment
 Actuators
 Sensors
Specifying the Task Environment

 PEAS for a Taxi Driver Agent (captured as a data structure below):
 Performance measure: Safe, fast, legal, comfortable trip, maximize
profits, minimize impact on other road users.
 Environment: Roads, other traffic, police, pedestrians, customers,
weather.
 Actuators: Steering wheel, accelerator, brake, signal, horn, display,
speech.
 Sensors: Cameras, radar, speedometer, GPS, engine sensors,
accelerometer, microphones, touchscreen/keyboard.
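
The same description as a simple data structure; the class and field
names are illustrative conventions, not a standard API:

    # A PEAS description as a plain data structure; the class and
    # field names are illustrative, not a standard API.
    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance_measure: list[str]
        environment: list[str]
        actuators: list[str]
        sensors: list[str]

    taxi_driver = PEAS(
        performance_measure=["safe", "fast", "legal", "comfortable trip",
                             "maximize profits"],
        environment=["roads", "other traffic", "police", "pedestrians",
                     "customers", "weather"],
        actuators=["steering wheel", "accelerator", "brake", "signal",
                   "horn", "display", "speech"],
        sensors=["cameras", "radar", "speedometer", "GPS",
                 "engine sensors", "accelerometer", "microphones"],
    )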
Environment Types

 Fully observable vs Partially observable
 Single agent vs Multi agent
 Static vs Dynamic
 Deterministic vs Stochastic
 Episodic vs Sequential
 Discrete vs Continuous
Fully observable vs Partially observable

 An environment is fully observable when the sensors can detect all
aspects that are relevant to the choice of action; otherwise it is
partially observable.
 If an agent's sensors give it access to the complete state of the
environment at each point in time, then the task environment is fully
observable.
 An environment might be partially observable because of noisy,
inaccurate sensors or because parts of the state are simply missing
from the sensor data.
 Fully observable (FO) environments are more convenient than
partially observable (PO) ones.
 Examples:
 A vacuum agent with only a local dirt sensor cannot tell whether
there is dirt in other squares: PO.
 Taxi driving: PO.
Single agent vs Multi agent

 A single-agent environment has one agent operating by itself.
 A multi-agent environment may be competitive or cooperative.
 Examples:
 Crossword puzzles are single-agent, while chess is a two-agent
environment.
Static vs Dynamic

 If the environment can change while an agent is deliberating, then it is
dynamic for that agent; otherwise it is static.
 Static environments are easy to deal with because the agent:
 Need not keep looking at the world while it is deciding on an action.
 Need not worry about the passage of time.
 The environment is semi-dynamic if the environment itself does NOT
change with the passage of time but the agent's performance score does.
 Examples:
 Taxi driving is dynamic.
 Chess, when played with a clock, is semi-dynamic.
 Crossword puzzles are static.
Deterministic vs Stochastic

 If the next state of the environment is completely determined by the current
state and the action executed by the agent, then the environment is
deterministic.
 Otherwise, it is stochastic; “stochastic” implies that uncertainty about
outcomes is quantified in terms of probabilities.
 If the environment is partially observable, then it could appear to be
stochastic.
 If the environment is deterministic except for the actions of other agents,
then the environment is strategic.
 Examples: the vacuum world is deterministic, while taxi driving is not.
 An environment is uncertain if it is not fully observable or not deterministic.
 A nondeterministic environment is one in which actions are characterized by
their possible outcomes, but no probabilities are attached to them.
Episodic vs Sequential

 In an episodic environment, the agent's experience is divided into atomic
episodes.
 Each episode consists of the agent perceiving and then performing a
single action.
 The choice of action in each episode does not depend on previous
decisions/actions; it depends only on the episode itself.
 For example, an agent that detects defective parts.
 In a sequential environment, the current decisions affect all future
decisions.
 Mail sorting is episodic, while chess playing is sequential.
Discrete vs Continuous

 Applies to:
 The state of the environment
 The chess environment has a finite number of states
 Taxi driving is a continuous-state environment
 The way time is handled
 Taxi driving is a continuous-time problem
 The percepts and actions of the agent
 Chess has a discrete set of percepts and actions
 Taxi-driving actions are continuous
 Most card games are discrete.
Structure of an AI Agent

 To understand the structure of intelligent agents, we should be familiar
with architectures and agent programs.
 The architecture is the machinery that the agent executes on. It is a
device with sensors and actuators, for example, a robotic car, a camera,
or a PC.
 An agent program is an implementation of an agent function.
 An agent function is a map from the percept sequence (the history of all
that an agent has perceived to date) to an action.
 All agent programs can have the same skeleton (sketched below):
 Input = current percepts
 Output = action
 Program = manipulates input to produce output
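
A sketch of that skeleton in the style of a table-driven agent; the table
entries and the default action are illustrative assumptions:

    # The shared skeleton: the program maps the current percept to an
    # action. This table-driven variant keys the choice on the whole
    # percept sequence; the table entries shown are illustrative.
    percepts = []                                    # percept sequence so far
    table = {(("A", "Dirty"),): "Suck",
             (("A", "Clean"),): "Right"}             # one entry per sequence

    def table_driven_agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")    # default when unlisted

    print(table_driven_agent(("A", "Dirty")))        # -> Suck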
Structure of an AI Agent

 Goal of AI:
 Given a PEAS task environment,
 Construct an agent function f, and
 Design an agent program that implements f on a particular
architecture.

(Agent = Architecture + Program)
Structure of an AI Agent

 In general, the architecture (see the loop sketched below):
 Makes the percepts from the sensors available to the program,
 Runs the program, and
 Feeds the program's action choices to the actuators.
 The agent program takes the current percept as input from the
sensors and returns an action to the actuators.
 If the agent's actions depend on the entire percept sequence, then the
agent will have to remember the percepts.
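
A minimal sketch of that sense-act loop; `sense` and `act` are
hypothetical device hooks standing in for real sensors and actuators:

    # What the architecture does: read a percept from the sensors, run
    # the agent program, and drive the actuators with the chosen action.
    # `sense` and `act` are hypothetical device hooks.
    def run(agent_program, sense, act, steps=10):
        for _ in range(steps):
            percept = sense()              # sensors -> program
            action = agent_program(percept)
            act(action)                    # program -> actuators

    # Tiny demo with stub hooks: constant percept, actions printed.
    run(lambda p: "NoOp", sense=lambda: ("A", "Clean"), act=print, steps=2)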
Types of Agents

 Agents can be grouped into five classes based on their degree of
perceived intelligence and capability:
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agents
Simple Reflex Agents

 Simple reflex agents ignore the rest of the percept history and act
only on the basis of the current percept.
 The agent function is based on condition-action rules (see the
sketch below).
 If the condition is true, then the action is taken; else not.
 This agent function only succeeds when the environment is fully
observable.
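
A sketch of a condition-action rule agent, using driving rules in the
spirit of the taxi example; the rule set and percept encoding are
illustrative:

    # Simple reflex agent: scan a list of condition-action rules and
    # fire the first one whose condition matches the current percept.
    # The rules and the percept encoding are illustrative assumptions.
    rules = [
        (lambda p: p.get("car_in_front_is_braking"), "initiate_braking"),
        (lambda p: p.get("light") == "red", "stop"),
    ]

    def simple_reflex_agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "no_op"                                 # no rule matched

    print(simple_reflex_agent({"light": "red"}))       # -> stop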
Model-Based Reflex Agents

 It works by finding a rule whose condition matches the current
situation.
 A model-based agent can handle partially observable environments
by using a model of the world. The agent has to keep track of an
internal state, which is adjusted by each percept and therefore
depends on the percept history.
 The current state is stored inside the agent, which maintains some
kind of structure describing the part of the world that cannot be
seen (a toy version is sketched below).
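
A toy model-based reflex agent for the vacuum world; the internal state
remembers the status of squares it has seen, standing in for a real
world model, and all names here are illustrative:

    # Toy model-based reflex agent: the internal state records what
    # each seen square looked like, a stand-in for a real world model.
    class ModelBasedVacuumAgent:
        def __init__(self):
            self.state = {"A": "Unknown", "B": "Unknown"}
            self.location = None

        def update_state(self, percept):     # fold the percept into the model
            self.location, status = percept
            self.state[self.location] = status

        def program(self, percept):
            self.update_state(percept)
            if self.state[self.location] == "Dirty":
                return "Suck"
            return "Right" if self.location == "A" else "Left"

    agent = ModelBasedVacuumAgent()
    print(agent.program(("A", "Clean")))     # -> Right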
Goal-Based Agents

 These kinds of agents take decisions based on how far they currently
are from their goal (a description of desirable situations).
 Their every action is intended to reduce their distance from the goal.
 This gives the agent a way to choose among multiple possibilities,
selecting the one that reaches a goal state (see the sketch below).
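
A minimal sketch of goal-based selection on a toy number-line world;
`predict` and `distance_to_goal` stand in for a real world model and
goal test, and are our assumptions:

    # Goal-based choice: pick the action whose predicted result is
    # closest to the goal. `predict` and `distance_to_goal` stand in
    # for a real world model; the number-line world is a toy example.
    def goal_based_choice(state, actions, predict, distance_to_goal):
        return min(actions, key=lambda a: distance_to_goal(predict(state, a)))

    # Toy usage: positions on a number line, goal is position 5.
    best = goal_based_choice(2, [-1, +1],
                             predict=lambda s, a: s + a,
                             distance_to_goal=lambda s: abs(5 - s))
    print(best)                              # -> 1 (step toward the goal)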
Utility-Based Agents

 Utility-based agents are used when there are multiple possible
alternatives and the agent must decide which one is best.
 They choose actions based on a preference (utility) for each state
(see the sketch below).
 Sometimes achieving the desired goal is not enough. We may look for
a quicker, safer, cheaper trip to reach a destination.
 Utility describes how “happy” the agent is.
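
A sketch of expected-utility selection; `outcomes(state, action)` is an
assumed model hook yielding (probability, next_state) pairs, not part of
any standard API:

    # Utility-based choice: maximize expected utility over an action's
    # possible outcomes. `outcomes` and `utility` are assumed hooks.
    def expected_utility(state, action, outcomes, utility):
        return sum(p * utility(s2) for p, s2 in outcomes(state, action))

    def utility_based_choice(state, actions, outcomes, utility):
        return max(actions,
                   key=lambda a: expected_utility(state, a, outcomes, utility))

    # Toy usage: "safe" surely yields utility 5; "risky" yields 10 with
    # probability 0.4, else 0 (expected 4), so "safe" wins.
    outcomes = lambda s, a: [(1.0, 5)] if a == "safe" else [(0.4, 10), (0.6, 0)]
    print(utility_based_choice(None, ["safe", "risky"],
                               outcomes, lambda s: s))   # -> safe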
Learning Agents

 A learning agent in AI is a type of agent that can learn from its past
experiences; it has learning capabilities.
 It starts acting with basic knowledge and is then able to act and adapt
automatically through learning.
 A learning agent has four main conceptual components:
1. Learning element: responsible for making improvements by
learning from the environment.
2. Critic: the learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
Learning Agents

3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will
lead to new and informative experiences.
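
A toy wiring of the four components; every internal detail here (the
reward-in-percept convention, the exploration rate) is an illustrative
assumption, not a prescribed design:

    # Toy learning agent wiring the four components together.
    import random

    class LearningAgent:
        def __init__(self):
            self.standard = 1.0                  # fixed performance standard
            self.explore_rate = 0.1

        def critic(self, percept):               # feedback vs. the standard
            return percept.get("reward", 0.0) - self.standard

        def learning_element(self, feedback):    # improve future behavior
            if feedback < 0:                     # doing badly: explore more
                self.explore_rate = min(1.0, self.explore_rate + 0.05)

        def performance_element(self, percept):  # select an external action
            return "default_action"

        def problem_generator(self):             # suggest informative actions
            return "try_something_new"

        def step(self, percept):
            self.learning_element(self.critic(percept))
            if random.random() < self.explore_rate:
                return self.problem_generator()
            return self.performance_element(percept)

    agent = LearningAgent()
    print(agent.step({"reward": 0.0}))           # usually -> default_action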
Examples of Agents

 Intelligent personal assistants: These are agents designed to help
users with various tasks, such as scheduling appointments, sending
messages, and setting reminders. Examples include Siri, Alexa, and
Google Assistant.
 Autonomous robots: These are agents that are designed to operate
autonomously in the physical world. They can perform tasks such as
cleaning, sorting, and delivering goods. Examples of autonomous
robots include the Roomba vacuum cleaner and the Amazon delivery
robot.
 Gaming agents: These are agents that are designed to play games,
either against human opponents or other agents. Examples of gaming
agents include chess-playing agents and poker-playing agents.
Examples of Agents

 Traffic management agents: These are agents designed to manage
traffic flow in cities. They can monitor traffic patterns, adjust
traffic lights, and reroute vehicles to minimize congestion. Examples
include those used in smart cities around the world.
 A software agent has keystrokes, file contents, and received network
packets as its sensors, and displays on the screen, files, and sent
network packets as its actuators.
 A robotic agent has cameras and infrared range finders as its
sensors, and various motors as its actuators.
