Lecture 2 - Agents

Introduction to Artificial Intelligence

Intelligent Agents

Lecturer: Dr. Igli Hakrama


Metropolitan Tirana University
Designing Rational Agents
o An agent is an entity that perceives and acts.
o A rational agent selects actions that maximize its (expected) utility.
o Characteristics of the percepts, environment, and action space dictate the
techniques for selecting rational actions.
o This course is about:
o General AI techniques for a variety of problem types
o Learning to recognize when and how a new problem can be solved with an
existing technique

[Diagram: the agent receives percepts from the environment through sensors and acts back on the environment through actuators.]
Pac-Man as an Agent

[Diagram: Pac-Man as the agent, with sensors delivering percepts from the environment and actuators producing actions.]

Pac-Man is a registered trademark of Namco-Bandai Games, used here for educational purposes
Agents
• An agent is anything that can be viewed as perceiving its
environment through sensors and acting upon that
environment through actuators – Russell & Norvig

• Human agent:
– eyes, ears, and other organs for sensors;
– hands, legs, mouth, and other body parts for actuators

• Robotic agent:
– cameras and infrared range finders for sensors
– various motors for actuators
Another Definition of Agents
• There is no single, common definition of the concept "agent".
Example of a controversy: learning ability (adaptability)
may not be desired in, e.g., a flight control system.
• Wooldridge [1]: "An agent is a computer system that is
situated in some environment, and that is capable of
autonomous action in this environment in order to meet
its objectives"
• Sometimes required/demanded: mobility (in real or
virtual environments), veracity (the agent will not
communicate wrong information), benevolence (no
conflicts; the agent tries to fulfill all goals)
• Goal orientation is sometimes called rationality.
(Controversial; compare 1.4)
Simple Agent Schema & Example

[Diagram: the agent turns sensor input from the environment into action output.]

Example: control systems: a thermostat

too cold → heating on
temperature OK → heating off

Agents have limited effector capabilities (a set of actions).
Actions have preconditions.
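As a sketch, the thermostat's condition-action rules translate directly into code; the target temperature, units, and action strings below are illustrative assumptions, not part of the lecture:

```python
# Thermostat sketch: a simple reflex rule that maps the current percept
# (the measured temperature) directly to an action. The 20.0 C target
# and the action strings are illustrative assumptions.
TARGET = 20.0

def thermostat(temperature: float) -> str:
    if temperature < TARGET:      # too cold -> heating on
        return "heating on"
    return "heating off"          # temperature OK -> heating off

print(thermostat(17.5))  # heating on
print(thermostat(21.0))  # heating off
```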
Rational agents
• Rationality depends on:
– The performance measure that defines success
– The agent's prior knowledge of the environment
– The actions the agent can perform
– The agent's percept sequence to date
• Rational Agent: For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
o Rationality is different from omniscience
o Percepts may not supply all relevant information
o E.g., in a card game, you don't know the other players' cards.
o Rationality is different from perfection
o Rationality maximizes the expected outcome, while perfection maximizes
the actual outcome.
Autonomy in Agents
The autonomy of an agent is the extent to which its
behaviour is determined by its own experience,
rather than by the knowledge of its designer.

o Extremes
o No autonomy – ignores environment/data
o Complete autonomy – must act randomly/no program
o Example: baby learning to crawl
o Ideal: design agents to have some autonomy
o Possibly become more autonomous with experience
PEAS Model
• PEAS: Performance measure, Environment, Actuators, Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable
trip, maximize profits
– Environment: Roads, other traffic, pedestrians,
customers
– Actuators: Steering wheel, accelerator, brake, signal,
horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard
PEAS Example I

o Agent: Part-picking robot


o Performance measure: Percentage of parts in
correct bins
o Environment: Conveyor belt with parts, bins
o Actuators: Jointed arm and hand
o Sensors: Camera, joint angle sensors
PEAS Example II

o Agent: Interactive Instructor


o Performance measure: Maximize student's score
on test
o Environment: Set of students
o Actuators: Screen display
(exercises, suggestions, corrections)
o Sensors: Keyboard
Agents and environments

• The agent function maps from percept
histories to actions:
f: P* → A
• The agent program runs on the physical
architecture to produce f
• agent = architecture + program
Vacuum-cleaner world

o Percepts: location and contents, e.g., [A, Dirty]
o Actions: Left, Right, Suck, NoOp
o Agent's function → look-up table
o For many agents this is a very large table
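A minimal sketch of such a look-up table agent in Python, keyed on the entire percept sequence; the handful of entries shown is illustrative, which is exactly why the table blows up for longer histories:

```python
# Table-driven vacuum agent sketch: the agent function is a literal
# look-up table indexed by the full percept sequence seen so far.
# Only a few illustrative entries are shown.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("B", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry for every possible percept sequence
}

percept_history = []

def table_driven_agent(percept):
    percept_history.append(percept)
    return table.get(tuple(percept_history), "NoOp")

print(table_driven_agent(("A", "Dirty")))  # Suck
```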
Environment types

• Fully observable (vs. partially observable)
• Also called accessible vs. inaccessible (Wooldridge)
• Deterministic (vs. stochastic)
• Episodic (vs. sequential)
• Static (vs. dynamic)
• Discrete (vs. continuous)
• Single agent (vs. multiagent)
Fully observable vs. Partially observable

o Is everything the agent requires to choose its
actions available to it via its sensors? (Perfect or
full information.)
o If so, the environment is fully observable
o If not, parts of the environment are inaccessible
o The agent must make informed guesses about the world.
o In decision theory: perfect information vs.
imperfect information.

Crossword: Fully | Poker: Partially | Backgammon: Fully | Taxi driver: Partially | Part-picking robot: Partially | Image analysis: Fully
Deterministic (vs. stochastic)

o Does the change in world state depend only on
the current state and the agent's action?
o Non-deterministic environments
o have aspects beyond the control of the agent
o utility functions have to guess at changes in the world

Crossword: Deterministic | Poker: Stochastic | Backgammon: Stochastic | Taxi driver: Stochastic | Part-picking robot: Stochastic | Image analysis: Deterministic
Episodic (vs. sequential)

o Is the choice of the current action dependent
on previous actions?
o If not, then the environment is episodic
o In non-episodic (sequential) environments:
o The agent has to plan ahead:
o The current choice will affect future actions

Crossword: Sequential | Poker: Sequential | Backgammon: Sequential | Taxi driver: Sequential | Part-picking robot: Episodic | Image analysis: Episodic
Static (vs. dynamic)
Also: benign vs. adversarial
o Static environments don't change
o while the agent is deliberating over what to do
o Dynamic environments do change
o so the agent should/could consult the world when choosing actions
o alternatively: anticipate the change during deliberation OR make the
decision very fast
o Semidynamic: the environment itself does not change with the
passage of time, but the agent's performance score does.

Crossword: Static | Poker: Static | Backgammon: Static | Taxi driver: Dynamic | Part-picking robot: Dynamic | Image analysis: Semidynamic

Another example: off-line route planning vs. an on-board navigation system


Discrete (vs. continuous)

o A limited number of distinct, clearly defined
percepts and actions vs. a range of values
(continuous)

Crossword: Discrete | Poker: Discrete | Backgammon: Discrete | Taxi driver: Continuous | Part-picking robot: Continuous | Image analysis: Continuous
Single agent (vs. multiagent)

o An agent operating by itself in an environment, vs.
many agents acting in the same environment

Crossword: Single | Poker: Multi | Backgammon: Multi | Taxi driver: Multi | Part-picking robot: Single | Image analysis: Single
Summary

Environment: Observable | Deterministic | Episodic | Static | Discrete | Agents
Crossword: Fully | Deterministic | Sequential | Static | Discrete | Single
Poker: Partially | Stochastic | Sequential | Static | Discrete | Multi
Backgammon: Fully | Stochastic | Sequential | Static | Discrete | Multi
Taxi driver: Partially | Stochastic | Sequential | Dynamic | Continuous | Multi
Part-picking robot: Partially | Stochastic | Episodic | Dynamic | Continuous | Single
Image analysis: Fully | Deterministic | Episodic | Semidynamic | Continuous | Single

(Table after Russell & Norvig, Artificial Intelligence: A Modern Approach)
Choice under (Un)certainty

[Decision diagram: Is the environment fully observable and deterministic? If yes, the agent chooses under certainty → search. If not, it chooses under uncertainty.]
Agent types
o Four basic types in order of
increasing generality:
o Simple reflex agents
o Reflex agents with state/model
o Goal-based agents
o Utility-based agents
o All these can be turned into learning
agents
o https://github.com/aimacode/aima-java/
Simple reflex agents

[Diagram: condition-action rules map the current percept directly to an action.]

o Simple, but very limited intelligence.
o The action does not depend on the percept history, only on the current
percept.
o Therefore no memory requirements.
o Infinite loops
o Suppose the vacuum cleaner does not observe its location. What do you do
given percept = clean? Move left on A or right on B → infinite loop.
o A fly buzzing around a window or light.
o Possible solution: randomize the action.
o Thermostat.
o Chess – openings, endings
o Lookup table (not a good idea in general)
o 35^100 entries would be required for the entire game
States: Beyond Reflexes
• Recall the agent function that maps from percept histories to
actions:
f: P* → A
o An agent program can implement an agent function by
maintaining an internal state.
o The internal state can contain information about the state of
the external environment.
o The state depends on the history of percepts and on the
history of actions taken:
f: P* × A* → S, where S is the set of states.
o If each internal state includes all information relevant to
decision making, the state space is Markovian.
States and Memory: Game Theory

o If each state includes the information about
the percepts and actions that led to it, the
state space has perfect recall.
o Perfect Information = Perfect Recall + Full
Observability + Deterministic Actions.
Model-based reflex agents
 Know how the world evolves
 An overtaking car gets closer from behind
 Know how the agent's actions affect the world
 Turning the wheel clockwise takes you right
 Model-based agents update their state
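A minimal sketch of a model-based vacuum agent; the two-square world and the stopping rule are assumptions added for illustration:

```python
# Model-based reflex agent sketch: the internal state records what the
# agent believes about each square, is updated from every percept, and
# also models how its own Suck action changes the world.
def make_model_based_vacuum():
    believed = {"A": "Unknown", "B": "Unknown"}  # internal model

    def agent(percept):
        location, status = percept
        believed[location] = status              # update model from percept
        if status == "Dirty":
            believed[location] = "Clean"         # model the effect of Suck
            return "Suck"
        if all(s == "Clean" for s in believed.values()):
            return "NoOp"                        # model says: all done
        return "Right" if location == "A" else "Left"

    return agent

agent = make_model_based_vacuum()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right (B still unknown)
```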
Goal-based agents

• Is knowing the state and environment enough?
– A taxi can go left, right, or straight
• Have a goal
o A destination to get to
o The agent uses knowledge about its goal to guide its
actions
o E.g., search, planning
Goal-based agents

• A reflex agent brakes when it sees brake lights. A goal-based
agent reasons:
– Brake light → the car in front is stopping → I should stop → I should
use the brake
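A toy sketch of that chain of reasoning: instead of a hard-wired reflex, the agent asks a one-step world model which action achieves its goal. The predict function is a stand-in assumption, not a real driving model:

```python
# Goal-based agent sketch: pick the first action whose predicted
# outcome satisfies the goal. predict() is a toy world model.
GOAL = "no collision"

def predict(percept: str, action: str) -> str:
    # Toy model: failing to brake behind a stopping car causes a crash.
    if percept == "brake light ahead" and action != "brake":
        return "collision"
    return "no collision"

def goal_based_agent(percept: str) -> str:
    for action in ("brake", "accelerate", "steer"):
        if predict(percept, action) == GOAL:
            return action
    return "brake"  # fail-safe default

print(goal_based_agent("brake light ahead"))  # brake
```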
Utility-based agents
o Goals are not always enough
o Many action sequences get the taxi to its destination
o Consider other things: how fast, how safe, ...
o A utility function maps a state onto a real
number which describes the associated
degree of "happiness", "goodness", "success".
o Where does the utility measure come from?
o Economics: money.
o Biology: number of offspring.
o Your life?
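A small sketch of utility-based choice between two routes that both reach the destination; the routes, times, risks, and weighting are invented for illustration:

```python
# Utility-based agent sketch: both routes achieve the goal (arrive),
# but the utility function trades off speed against safety.
routes = {
    "highway":   {"minutes": 20, "accident_risk": 0.30},
    "back_road": {"minutes": 35, "accident_risk": 0.05},
}

def utility(outcome: dict) -> float:
    # Higher is better: penalize travel time, penalize risk heavily.
    return -outcome["minutes"] - 100 * outcome["accident_risk"]

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # back_road: slower but much safer under these weights
```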
Learning agents

 The performance element is what was previously the whole agent
 Input: sensors
 Output: actions
 The learning element
 modifies the performance element.
Learning agents

 Critic: evaluates how the agent is doing
 Input: checkmate?
 Fixed
 Problem generator
 Tries to solve the problem differently instead of optimizing.
 Suggests exploring new actions → new problems.
Learning agents (taxi driver)
o Performance element
o How it currently drives
o The taxi driver makes a quick left turn across 3 lanes
o The critic observes the shocking language from the passenger and other
drivers and reports that the action was bad
o The learning element tries to modify the performance element for
the future
o The problem generator suggests experiments, e.g., trying the brakes
on different road conditions
o Exploration vs. exploitation
o Learning experience can be costly in the short run
o Shocking language from other drivers
o Less tip
o Fewer passengers
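A toy sketch of the whole learning loop in the taxi spirit; the critic, the two candidate actions, and all constants are invented for illustration:

```python
import random

# Learning agent sketch: the performance element picks actions from
# learned values, the critic scores the outcome, the learning element
# updates the values, and the problem generator forces exploration.
values = {"quick_left_turn": 0.0, "gentle_turn": 0.0}

def critic(action: str) -> float:
    # Shocked passengers signal a bad action; a gentle turn earns a tip.
    return -1.0 if action == "quick_left_turn" else 1.0

for _ in range(100):
    if random.random() < 0.1:                          # problem generator
        action = random.choice(list(values))
    else:                                              # performance element
        action = max(values, key=values.get)
    reward = critic(action)                            # critic
    values[action] += 0.1 * (reward - values[action])  # learning element

print(max(values, key=values.get))  # gentle_turn, with high probability
```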
The Big Picture: AI for Model-Based Agents

[Diagram: Planning, Decision Theory, Game Theory, and Reinforcement Learning link Knowledge (Logic, Probability, Heuristics, Inference) and Learning (Machine Learning, Statistics) to Action.]
The Picture for Reflex-Based Agents

[Diagram: Reinforcement Learning links Learning directly to Action.]

• Studied in AI, Cybernetics, Control Theory,
Biology, Psychology.
Discussion Question
o Model-based behaviour has a large overhead.
o Our large brains are very expensive from an
evolutionary point of view.
o Why would it be worthwhile to base behaviour on a
model rather than “hard-code” it?
o For what types of organisms in what type of
environments?
Agents acting in an environment:
inputs and output

(Figure after D. Poole and A. Mackworth, 2017)
Inputs to an agent
o Abilities - the set of possible actions it can perform
o Goals/Preferences - what it wants, its desires, its
values,...
o Prior Knowledge - what it comes into being
knowing, what it doesn't get from experience,...
o History of stimuli
o (current) stimuli - what it receives from the environment now
(observations, percepts)
o past experiences - what it has received in the past
Example agent: autonomous car
o abilities: steer, accelerate, brake
o goals: safety, get to destination, timeliness . . .
o prior knowledge: street maps, what signs mean,
what to stop for . . .
o stimuli: vision, laser, GPS, voice commands . . .
o past experiences: how braking and steering
affect direction and speed. . .
Example agent: robot
o abilities: movement, grippers, speech, facial
expressions,...
o goals: deliver food, rescue people, score goals,
explore,. . .
o prior knowledge: which features are important,
categories of objects, what a sensor tells us,. . .
o stimuli: vision, sonar, sound, speech
recognition, gesture recognition,. . .
o past experiences: effect of steering,
slipperiness, how people move,. . .
Example agent: teacher
o abilities: present new concept, drill, give test,
explain concept,. . .
o goals: particular knowledge, skills, inquisitiveness,
social skills,. . .
o prior knowledge: subject material, teaching
strategies,. . .
o stimuli: test results, facial expressions, errors,
focus,. . .
o past experiences: prior test results, effects
of teaching strategies, . . .
Other Agents
o thermostat for heater
o medical doctor
o user interface
o bee
o smart home
What are the … ?
o abilities:
o goals:
o prior knowledge:
o stimuli:
o past experiences:
