
ARTIFICIAL INTELLIGENCE

Intelligent Agent

By

Dr Narayana Swamy Ramaiah


Professor, Dept of CSE
SCSE, FET, JAIN Deemed to be University
Intelligent Agents
• What is an Agent?
 An agent can be viewed as anything that perceives its environment through sensors and acts upon that environment through actuators.
 For example, a human being perceives the surroundings through sensory organs (sensors) and takes actions using hands, legs, etc. (actuators).

Agents interact with the environment through sensors and actuators


AGENT TERMINOLOGY
• Percept − The agent's perceptual input at a given instant.
• Percept Sequence − The history of everything the agent has perceived to date.
• Performance Measure of Agent − The criterion that determines how successful the agent is.
• Behavior of Agent − The action that the agent performs after any given sequence of percepts.
• Agent Function − A map from the percept sequence to an action; an abstract mathematical description.
• Agent Program − The concrete, physical implementation of the agent function.
 For example, an automatic hand-dryer detects a signal (hands) through its sensors. When we bring our hands near the dryer, it turns on the heating circuit and blows air. When the signal disappears, it breaks the heating circuit and stops blowing air. A minimal code sketch of this behavior follows.
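A possible sketch of that hand-dryer as a simple percept-act loop in Python. The percept stream and function name are illustrative assumptions, not drivers for real hardware.

```python
# Illustrative sketch: the hand-dryer as a single condition-action rule.
# The percept is one boolean: hands detected or not.

def hand_dryer_agent(hands_detected: bool) -> str:
    """Map the current percept directly to an action."""
    return "heat_and_blow" if hands_detected else "off"

# Simulated percept stream standing in for the real sensor.
for percept in [False, True, True, False]:
    print(percept, "->", hand_dryer_agent(percept))
```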
Assignment 1- History of AI
Assignment 2- Vacuum cleaner world example
Intelligent Agents

• Intelligent Agent
 An intelligent agent is a GOAL-DIRECTED AGENT. It perceives its environment through its sensors, combines those observations with built-in knowledge, and acts upon the environment through its actuators.

• Properties of Intelligent Agents


 For example, a self-driving car would have the following PEAS description (recorded as plain data in the sketch below):
 Performance: Safety, time, legal drive, comfort.
 Environment: Roads, other cars, pedestrians, road signs.
 Actuators: Steering, accelerator, brake, signal, horn.
 Sensors: Camera, sonar, GPS, speedometer, odometer, accelerometer, engine sensors, keyboard.
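The same PEAS description written out as data, which can be handy when documenting or testing an agent design. The dataclass and its field names are an assumption for illustration, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Container for a task-environment description."""
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other cars", "pedestrians", "road signs"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "sonar", "GPS", "speedometer", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)
```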
Rationality of an Agent

• Rational Agent
 A rational agent is an agent that takes the right action for every perception. By doing so, it maximizes the performance measure, which makes the agent as successful as it can be.
• An intelligent agent is expected to act in a way that maximizes its performance measure. The rationality of an agent therefore depends on four things:
 The performance measure which defines the criterion of success.
 The agent’s built-in knowledge about the environment.
 The actions that the agent can perform.
 The agent’s percept sequence until now.
 For example, an exam score depends on the question paper as well as on our own knowledge.

• Note: Rationality maximizes the expected performance, while perfection maximizes the actual performance, which would require omniscience.
Autonomy

Information Gathering → Exploration → Learning → Autonomy

Our definition requires a rational agent not only to explore and gather information but also to learn as much as possible from what it perceives. The agent's initial configuration may reflect some prior knowledge of the environment, but as the agent gains experience this knowledge is modified and augmented, to the point where the agent becomes genuinely autonomous.
• Omniscient Agent
 An omniscient agent is an agent which knows the actual outcome of its action in advance. However, such agents are
impossible in the real world.

• Software Agents
 A software agent is a program that works in a dynamic environment. These agents are also known as softbots, because all the body parts of a software agent are software.

 For example, video games, flight simulators, etc.


Task Environment
• In designing an agent, the first step must always be to specify the task environment as fully as possible.
• A task environment is the problem to which a rational agent is designed as the solution.
• In 2003, Russell and Norvig introduced several ways to classify task environments.
• Before classifying environments, however, we should be aware of the following terms:
 Performance Measure: It specifies the criteria for measuring how successfully the agent achieves its target.
 Environment: It specifies the surroundings the agent interacts with, and their type.
 Actuators: It specifies the means by which the agent acts upon the environment.
 Sensors: It specifies the means by which the agent gets information from its environment.
• These four terms are known by the acronym PEAS (Performance measure, Environment, Actuators, Sensors).
• The self-driving car example given earlier illustrates each PEAS element.
Properties of Environment
• Fully observable and Partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is fully observable; otherwise it is only partially observable.
 For example, chess is a fully observable environment, while poker is not.
• Deterministic and Stochastic (Non-deterministic):
 An environment is deterministic if its next state is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, it is called strategic.)
 A stochastic environment is random in nature and cannot be completely determined.
 For example, the 8-puzzle has a deterministic environment, but a driverless car does not.
• Static and Dynamic:
 A static environment is unchanged while an agent is deliberating. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
 A dynamic environment, on the other hand, does change.
 For example:
 Mail agents and filtering agents operate in static environments. Mail agents reply to email automatically, and filtering agents process a large volume of information and extract summary information from it.
 Software agents that search the internet and return the cheapest product of a particular brand (mobile or roaming agents) operate in a dynamic environment.
 Backgammon has a static environment, while a Roomba faces a dynamic one.
• Discrete / Continuous:
 An environment is discrete if there is a fixed, finite number of actions and percepts in it (for example, chess).
 Continuous environments rely on unknown and rapidly changing data sources (for example, a self-driving car or a multi-player video game).
• Single Agent and Multi-Agent (Competitive vs. Collaborative):
 An agent operating just by itself is in a single-agent environment; solving a crossword puzzle is an example.
 If other agents are involved, it is a multi-agent environment: chess and GO are competitive two-agent environments.
 Self-driving cars operate in a multi-agent environment.
• Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete
state of the environment, then the environment is accessible to that agent.
• Episodic / Non-episodic − In an episodic environment, each episode consists of the agent
perceiving and then acting. The quality of its action depends just on the episode itself.
Subsequent episodes do not depend on the actions in the previous episodes. Episodic
environments are much simpler because the agent does not need to think ahead.
Structure of agents
• The goal of artificial intelligence is to design an agent program which implements the agent function, i.e., the mapping from percepts to actions. The program runs on some computing device with physical sensors and actuators, which is known as the architecture.

• Therefore, an agent is the combination of the architecture and the program i.e.

Agent = Architecture + Program

• Note: The difference between the agent program and agent function is that an agent program takes the
current percept as input, whereas an agent function takes the entire percept history.
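A small sketch of that distinction, assuming a toy lookup table (the table entries are invented for illustration): the table-driven program realizes the agent function by keeping the whole percept history, while the reflex program looks only at the current percept.

```python
percept_history = []  # the percept sequence seen so far

# Toy action table indexed by the entire percept sequence (entries invented).
ACTION_TABLE = {
    ("A",): "go",
    ("A", "B"): "stop",
}

def table_driven_agent(percept):
    """Realizes the agent function: maps the percept *history* to an action."""
    percept_history.append(percept)
    return ACTION_TABLE.get(tuple(percept_history), "noop")

def reflex_agent_program(percept):
    """An agent program: maps only the *current* percept to an action."""
    return "go" if percept == "A" else "stop"
```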
Types of Agent Programs
• Varying in the level of intelligence and the complexity of the task, there are the following four basic types of agent programs (plus learning agents):
 Simple reflex agents: This is the simplest kind of agent. It acts according to the current percept only, paying no attention to the rest of the percept history. Its agent function relies on condition-action rules: "If condition, then action." It makes correct decisions only if the environment is fully observable. These agents cannot avoid infinite loops when the environment is partially observable, but they can escape from infinite loops if they randomize their actions (a minimal sketch follows the note below).
 Example: iDraw, a drawing robot that converts typed characters into writing without storing any past data.

• Note: Simple reflex agents do not maintain an internal state and do not depend on the percept history.
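A minimal simple reflex agent for the two-square vacuum world of Assignment 2, as one possible sketch. The percept format and action names follow the usual textbook convention, assumed here rather than taken from these slides.

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules over the current percept only."""
    location, status = percept        # e.g. ("A", "dirty")
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(simple_reflex_vacuum_agent(("A", "dirty")))   # suck
print(simple_reflex_vacuum_agent(("B", "clean")))   # left
```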
• Model-based agents: This type of agent can handle partially observable environments by maintaining internal state. The internal state depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. As time passes, the internal state must be updated, which requires two kinds of knowledge to be encoded in the agent program: how the world evolves on its own, and what effects the agent's actions have (see the sketch below).
 Example: When a person walks down a lane, he maps the pathway in his mind.
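A skeletal model-based reflex agent, assuming the simplest possible world model (just accumulating observed facts); everything here is an illustrative stand-in for a real transition model.

```python
class ModelBasedAgent:
    """Keeps internal state so it can act under partial observability."""

    def __init__(self):
        self.state = {}           # internal model of the world
        self.last_action = None

    def update_state(self, percept):
        # Simplest stand-in for "how the world evolves" plus "what my
        # actions do": remember every attribute we have ever observed.
        self.state.update(percept)

    def program(self, percept):
        self.update_state(percept)
        # Condition-action rule over the internal state, not the raw percept.
        self.last_action = "clean" if self.state.get("dirty") else "move"
        return self.last_action
```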
 Goal-based agents: Knowing the current state is not always enough to decide what to do; the agent also needs a goal. A goal-based agent therefore selects, from among multiple possibilities, the action that helps it reach its goal (a one-step sketch follows the note below).
 Note: With the help of searching and planning (subfields of AI), it becomes easy for a goal-based agent to reach its destination.
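A one-step goal-based sketch under stated assumptions: `successor` is a hypothetical one-step world model, and the agent picks any action whose predicted outcome equals the goal. Anything deeper than one step would call for the search and planning mentioned in the note.

```python
def goal_based_action(state, goal, actions, successor):
    """Pick an action whose predicted successor state satisfies the goal."""
    for action in actions:
        if successor(state, action) == goal:
            return action
    return None  # no single action reaches the goal; search/planning needed

# Toy example: move along a number line from state 2 toward goal 3.
step = lambda s, a: s + 1 if a == "inc" else s - 1
print(goal_based_action(2, 3, ["inc", "dec"], step))  # inc
```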
• Utility-based agents: These agents are concerned with the performance measure. The agent selects the actions that maximize the performance measure on its way toward the goal (sketched below).
 Example: The main goal in chess is to checkmate the king, but the player completes several smaller goals along the way.

 Note: Utility-based agents keep track of their environment and, before reaching the main goal, complete the several smaller goals that may come up along the path.
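The same one-step scheme with a graded utility function in place of a binary goal test. Both `successor` and `utility` are illustrative assumptions.

```python
def utility_based_action(state, actions, successor, utility):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(successor(state, a)))

# Toy example: prefer states closer to 10.
step = lambda s, a: s + 1 if a == "inc" else s - 1
print(utility_based_action(7, ["inc", "dec"], step,
                           utility=lambda s: -abs(10 - s)))  # inc
```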
 Learning agents: These agents can operate in an initially unknown environment and gain as much knowledge from experience as they can, becoming more competent over time (a skeletal sketch follows the note below).

 A learning agent is divided into four conceptual components:


o Learning element: This element is responsible for making improvements.
o Performance element: It is responsible for selecting external actions according to the percepts it receives.
o Critic: It provides feedback on how well the agent is doing, which the learning element uses to improve future performance.
o Problem Generator: It suggests actions which could lead to new and informative experiences.

• Example: Humans learn to speak only after birth.

• Note: The objective of a Learning agent is to improve the overall performance of the agent.
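A skeletal wiring of the four components, with every body a placeholder; the point is only the data flow from critic feedback through the learning element to an improved performance element. All names here are illustrative assumptions.

```python
class LearningAgent:
    """Wires together the four conceptual components described above."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # picks external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # scores behavior
        self.problem_generator = problem_generator      # proposes experiments

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(feedback, self.performance_element)
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)
```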
Working of an agent program’s components
• The function of the agent's components is to answer basic questions such as "What is the world like now?" and "What do my actions do?"

• We can represent the environment inhabited by the agent in various ways, distinguished along an axis of increasing expressive power and complexity, as discussed below:
 Atomic Representation: Here, each state of the world is indivisible: it has no internal structure. Search and game-playing, hidden Markov models, and Markov decision processes all work with the atomic representation.
 Factored Representation: Here, each state is split into a fixed set of attributes or variables, each having a value. This allows us to represent uncertainty. Constraint satisfaction, propositional logic, Bayesian networks, and machine learning algorithms work with the factored representation.
 Note: Two different factored states can share some variables like current GPS location, but two different
atomic states cannot do so.
• Structured Representation: Here, we can explicitly describe the various and varying relationships between the different objects that exist in the world (a toy comparison of all three forms follows).
 Relational databases, first-order logic, first-order probability models, and natural language understanding all rely on the structured representation.
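One toy world state written in all three forms, to make the axis of expressiveness concrete; the details are invented for illustration.

```python
# Atomic: the state is an opaque label with no internal structure.
atomic = "S7"

# Factored: a fixed set of variables, each with a value.
factored = {"gps": (12.97, 77.59), "fuel": 0.4, "door_open": False}

# Structured: objects and explicit relations between them
# (written here as simple predicate tuples).
structured = [("car", "c1"), ("person", "p1"), ("inside", "p1", "c1")]
```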
