
University of Dar es Salaam

COICT
Department of Computer Science & Eng.

IS365: Artificial Intelligence


Lecture 2 – Intelligent Agents

Lecturer: Nyamwihula, W.
Block B: Room B09
Mobile: 0784281242/0754588001
Email: [email protected]
What is an Agent?
• The main point about agents is that they are autonomous: capable of acting independently, exhibiting control over their internal state
• Thus: an agent is a computer system capable of autonomous action in some environment in order to meet its design objectives
[Diagram: the agent (SYSTEM) exchanges input and output with its ENVIRONMENT]

What is an Agent?
• Trivial (non-interesting) agents:
  • thermostat
  • UNIX daemon (e.g., biff)
• An intelligent agent is a computer system capable of flexible autonomous action in some environment
• By flexible, we mean:
  • reactive
  • pro-active
  • social

An agent and its environment
[Diagram of an agent: the agent receives percepts from the environment via sensors and acts on the environment via effectors; the '?' inside the agent is what AI should fill]

Simple Terms

• Percept: the agent's perceptual inputs at any given instant
• Percept sequence: the complete history of everything that the agent has ever perceived
Agent function & program

• The agent's behavior is mathematically described by the agent function: a mapping from any given percept sequence to an action
• Practically, it is described by an agent program: the real implementation

Agent function & program – cont’d

• The agent function maps from percept histories to actions:
    f : P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program (a small sketch follows)
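
To make this concrete, here is a minimal Python sketch (my own illustration, not from the lecture) of the distinction: the agent function f maps whole percept sequences to actions, and a table-driven agent program is one simple, if impractical, way to implement it. The percepts, actions, and table entries are hypothetical.

from typing import Any, Dict, List, Tuple

class TableDrivenAgent:
    """A sketch of f : P* -> A via an explicit lookup table (hypothetical entries)."""

    def __init__(self, table: Dict[Tuple[Any, ...], str]):
        self.table = table                  # maps percept sequences to actions
        self.percepts: List[Any] = []       # the percept sequence seen so far

    def program(self, percept: Any) -> str:
        # The agent program: record the new percept, then return the action
        # that the agent function assigns to the full sequence seen so far.
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "no-op")

# Usage: a toy thermostat-style table over two percept sequences
table = {("cold",): "heater-on", ("cold", "warm"): "heater-off"}
agent = TableDrivenAgent(table)
print(agent.program("cold"))    # heater-on
print(agent.program("warm"))    # heater-off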


Examples of agents in different types of applications

Agent type                      | Percepts                              | Actions                                    | Goals                            | Environment
Medical diagnosis system        | Symptoms, findings, patient's answers | Questions, tests, treatments               | Healthy patients, minimize costs | Patient, hospital
Satellite image analysis system | Pixels of varying intensity, color    | Print a categorization of scene            | Correct categorization           | Images from orbiting satellite
Part-picking robot              | Pixels of varying intensity           | Pick up parts and sort into bins           | Place parts in correct bins      | Conveyor belts with parts
Refinery controller             | Temperature, pressure readings        | Open, close valves; adjust temperature     | Maximize purity, yield, safety   | Refinery
Interactive English tutor       | Typed words                           | Print exercises, suggestions, corrections  | Maximize student's score on test | Set of students

Reactivity
• If a program's environment is guaranteed to be fixed, the program need never worry about its own success or failure: the program just executes blindly
• Example of a fixed environment: a compiler
• The real world is not like that: things change, information is incomplete. Many (most?) interesting environments are dynamic
• Software is hard to build for dynamic domains: the program must take into account the possibility of failure, and ask itself whether it is worth executing!
• A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful)

Proactiveness
• Reacting to an environment is easy (e.g., stimulus → response rules; see the sketch after this slide)
• But we generally want agents to do things for us
• Hence goal-directed behavior
• Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative
• Recognizing opportunities
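
A minimal sketch of such stimulus → response behavior, with hypothetical percepts, rules, and actions of my own choosing (this example is not from the lecture): the agent consults only the current percept, with no memory and no goals.

from typing import Callable, List, Tuple

# One rule = (condition on the current percept, action to take).
Rule = Tuple[Callable[[str], bool], str]

def reactive_agent(percept: str, rules: List[Rule]) -> str:
    # A purely reactive agent: it fires the first rule whose condition
    # matches the current percept; there is no state and no initiative.
    for condition, action in rules:
        if condition(percept):
            return action
    return "no-op"

rules: List[Rule] = [
    (lambda p: p == "obstacle-ahead", "turn-left"),
    (lambda p: p == "path-clear", "move-forward"),
]
print(reactive_agent("obstacle-ahead", rules))   # turn-left
print(reactive_agent("path-clear", rules))       # move-forward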

Balancing Reactive and Goal-Oriented Behavior
• We want our agents to be reactive, responding to changing conditions in an appropriate (timely) fashion
• We want our agents to systematically work towards long-term goals
• These two considerations can be at odds with one another
• Designing an agent that can balance the two remains an open research problem
Social Ability
• The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account
• Some goals can only be achieved with the cooperation of others
• Similarly for many computer environments: witness the Internet
• Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others
Other Properties
• Other properties, sometimes discussed in the context of agency:
  • mobility: the ability of an agent to move around an electronic network
  • veracity: an agent will not knowingly communicate false information
  • benevolence: agents do not have conflicting goals, and every agent will therefore always try to do what is asked of it
  • rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit
  • learning/adaptation: agents improve performance over time

Agents and Objects
• Are agents just objects by another name?
• An object:
  • encapsulates some state
  • communicates via message passing
  • has methods, corresponding to operations that may be performed on this state

Agents and Objects
• Main differences:
  • agents are autonomous: agents embody a stronger notion of autonomy than objects; in particular, they decide for themselves whether or not to perform an action on request from another agent
  • agents are smart: capable of flexible (reactive, pro-active, social) behavior, whereas the standard object model has nothing to say about such types of behavior
  • agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control
Objects do it for free…
• agents do it because they want to
• agents do it for money

Agents and Expert Systems
• Aren't agents just expert systems by another name?
• Expert systems are typically disembodied 'expertise' about some (abstract) domain of discourse (e.g., blood diseases)
• Example: MYCIN knows about blood diseases in humans
  • It has a wealth of knowledge about blood diseases, in the form of rules
  • A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries

Agents and Expert Systems
• Main differences:
  • agents are situated in an environment: MYCIN is not aware of the world; the only information it obtains is by asking the user questions
  • agents act: MYCIN does not operate on patients
• Some real-time (typically process control) expert systems are agents

Intelligent Agents and AI
• Aren't agents just the AI project? Isn't building an agent what AI is all about?
• AI aims to build systems that can (ultimately) understand natural language, recognize and understand scenes, use common sense, think creatively, etc., all of which are very hard
• So, don't we need to solve all of AI to build an agent…?

Intelligent Agents and AI
• When building an agent, we simply want a system that can choose the right action to perform, typically in a limited domain
• We do not have to solve all the problems of AI to build a useful agent: a little intelligence goes a long way!
• Oren Etzioni, speaking about the commercial experience of NETBOT, Inc.: "We made our agents dumber and dumber and dumber… until finally they made money."
Environments – Accessible vs. inaccessible
• An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state
• Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible
• The more accessible an environment is, the simpler it is to build agents to operate in it
Environments – Deterministic vs. non-deterministic
• A deterministic environment is one in which any action has a single guaranteed effect: there is no uncertainty about the state that will result from performing an action
• The physical world can, to all intents and purposes, be regarded as non-deterministic
• Non-deterministic environments present greater problems for the agent designer

Environments – Episodic vs. non-episodic
• In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the performance of the agent in different scenarios
• Episodic environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode; it need not reason about the interactions between this and future episodes

Environments – Static vs. dynamic
• A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent
• A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control
• Other processes can interfere with the agent's actions (as in concurrent systems theory)
• The physical world is a highly dynamic environment
Environments – Discrete vs. continuous
• An environment is discrete if there is a fixed, finite number of actions and percepts in it
• Russell and Norvig give a chess game as an example of a discrete environment, and taxi driving as an example of a continuous one
• Continuous environments have a certain level of mismatch with computer systems
• Discrete environments could in principle be handled by a kind of "lookup table"
Agents as Intentional Systems
• When explaining human activity, it is often useful to make statements such as the following:
    Janine took her umbrella because she believed it was going to rain.
    Michael worked hard because he wanted to possess a PhD.
• These statements make use of a folk psychology, by which human behavior is predicted and explained through the attribution of attitudes, such as believing and wanting (as in the above examples), hoping, fearing, and so on
• The attitudes employed in such folk psychological descriptions are called the intentional notions
Agents as Intentional Systems
• The philosopher Daniel Dennett coined the term intentional system to describe entities 'whose behavior can be predicted by the method of attributing belief, desires and rational acumen'
• Dennett identifies different 'grades' of intentional system:
    'A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires. … A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) — both those of others and its own'
Agents as Intentional Systems
• Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?

Agents as Intentional Systems
• McCarthy argued that there are occasions when the intentional stance is appropriate:
    'To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behavior, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known.'

Agents as Intentional Systems
• What objects can be described by the intentional stance?
• As it turns out, more or less anything can… consider a light switch:
    'It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires.' (Yoav Shoham)
• But most adults would find such a description absurd! Why is this?
Agents as Intentional Systems
• The answer seems to be that while the intentional stance description is consistent,
    '…it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behavior.' (Yoav Shoham)
• Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior
• But with very complex systems, a mechanistic explanation of their behavior may not be practicable
• As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation; low-level explanations become impractical. The intentional stance is such an abstraction
Agents as Intentional Systems
• The intentional notions are thus abstraction tools, which provide us with a convenient and familiar way of describing, explaining, and predicting the behavior of complex systems
• Remember: the most important developments in computing are based on new abstractions:
  • procedural abstraction
  • abstract data types
  • objects
• Agents, and agents as intentional systems, represent a further, and increasingly powerful, abstraction
• So agent theorists start from the (strong) view of agents as intentional systems: ones whose simplest consistent description requires the intentional stance
Agents as Intentional Systems
• This intentional stance is an abstraction tool: a convenient way of talking about complex systems, which allows us to predict and explain their behavior without having to understand how the mechanism actually works
• Now, much of computer science is concerned with looking for abstraction mechanisms (witness procedural abstraction, ADTs, objects, …)
• So why not use the intentional stance as an abstraction tool in computing, to explain, understand, and, crucially, program computer systems?
• This is an important argument in favor of agents

Agents as Intentional Systems
• Three other points in favor of this idea:
• Characterizing agents:
  • It provides us with a familiar, non-technical way of understanding and explaining agents
• Nested representations:
  • It gives us the potential to specify systems that include representations of other systems
  • It is widely accepted that such nested representations are essential for agents that must cooperate with other agents

Agents as Intentional Systems
• Post-declarative systems:
  • This view of agents leads to a kind of post-declarative programming:
  • In procedural programming, we say exactly what a system should do
  • In declarative programming, we state something that we want to achieve, give the system general information about the relationships between objects, and let a built-in control mechanism (e.g., goal-directed theorem proving) figure out what to do
  • With agents, we give a very abstract specification of the system, and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of agency (e.g., the well-known Cohen-Levesque model of intention)

An aside…
• We find that researchers from a more mainstream computing discipline have adopted a similar set of ideas…
• In distributed systems theory, logics of knowledge are used in the development of knowledge-based protocols
• The rationale is that when constructing protocols, one often encounters reasoning such as the following (see the sketch after this slide):
    IF   process i knows process j has received message m1
    THEN process i should send process j the message m2
• In DS theory, knowledge is grounded: given a precise interpretation in terms of the states of a process; we'll examine this point in detail later
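
As a rough sketch of how such a rule might be grounded in a process's local state (the message names and the acknowledgement mechanism here are my own hypothetical choices, not from the lecture): process i "knows" j has received m1 only once an ack for m1 appears in i's state.

class Process:
    # A toy process whose 'knowledge' is grounded in its local state:
    # it knows j received a message only once an ack for it has arrived.
    def __init__(self, name: str):
        self.name = name
        self.acked = set()    # messages known to have been received by j
        self.outbox = []      # messages this process has decided to send

    def receive_ack(self, message: str):
        self.acked.add(message)

    def step(self):
        # IF process i knows process j has received m1
        # THEN process i should send process j the message m2
        if "m1" in self.acked and "m2" not in self.outbox:
            self.outbox.append("m2")

i = Process("i")
i.step()
print(i.outbox)          # [] -- i does not yet know that j has m1
i.receive_ack("m1")
i.step()
print(i.outbox)          # ['m2']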

Abstract Architecture for Agents
• Assume the environment may be in any of a finite set E of discrete, instantaneous states:
    E = {e, e′, …}
• Agents are assumed to have a repertoire of possible actions available to them, which transform the state of the environment:
    Ac = {α, α′, …}
• A run, r, of an agent in an environment is a sequence of interleaved environment states and actions:
    r : e0 →(α0) e1 →(α1) e2 →(α2) … →(α(u−1)) eu

Abstract Architecture for Agents
• Let:
  • R be the set of all such possible finite sequences (over E and Ac)
  • R^Ac be the subset of these that end with an action
  • R^E be the subset of these that end with an environment state

State Transformer Functions
• A state transformer function represents the behavior of the environment:
    τ : R^Ac → ℘(E)    (℘(E) is the set of subsets of E)
• Note that environments are…
  • history dependent
  • non-deterministic
• If τ(r) = ∅, then there are no possible successor states to r. In this case, we say that the system has ended its run
• Formally, we say an environment Env is a triple Env = ⟨E, e0, τ⟩ where: E is a set of environment states, e0 ∈ E is the initial state, and τ is a state transformer function
Agents
• An agent is a function which maps runs (ending in an environment state) to actions:
    Ag : R^E → Ac
• An agent makes a decision about what action to perform based on the history of the system that it has witnessed to date. Let AG be the set of all agents (a sketch of this architecture follows)
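
Putting the pieces together, here is a small Python sketch of this abstract architecture (the states, actions, and transformer below are hypothetical stand-ins of my own, chosen only to make the definitions executable): an environment ⟨E, e0, τ⟩, an agent Ag : R^E → Ac, and a loop that generates a run.

import random

E = {"cold", "warm"}      # environment states
Ac = {"heat", "wait"}     # the agent's repertoire of actions
e0 = "cold"               # initial state

def tau(run):
    # State transformer tau : R^Ac -> powerset(E). Takes a run ending in
    # an action and returns the set of possible successor states; in
    # general it may be history dependent and non-deterministic.
    state, action = run[-2], run[-1]
    return {"warm"} if action == "heat" else {state}

def agent(run):
    # Ag : R^E -> Ac. Chooses an action from the history witnessed so
    # far; this trivial agent heats whenever the current state is cold.
    return "heat" if run[-1] == "cold" else "wait"

run = [e0]
for _ in range(3):
    run.append(agent(run))                   # agent acts on the run so far
    successors = tau(run)
    if not successors:                       # tau(r) empty: run has ended
        break
    run.append(random.choice(sorted(successors)))

print(run)   # ['cold', 'heat', 'warm', 'wait', 'warm', 'wait', 'warm']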

