Artificial Intelligence - Lesson 2
By
LUBINGA HUDSON
0773 892 222 / 0753 625 222
[email protected]
Agents in AI
What is covered in this lesson?
• Agents
• Intelligent Agents
• Agents and Environment
• Structure of Intelligent Agents
• Types of Intelligent Agents
Agents
• AI can be defined as the study of rational agents.
• A rational agent can be anything that makes decisions, such as a person, firm, machine, or piece of software.
• It carries out the action with the best expected outcome after considering past and current percepts (the agent's perceptual inputs at a given instant).
• An AI system is composed of an agent and its environment.
• The agents act in their environment. The environment may contain other
agents.
• An agent is anything that can be viewed as:
• perceiving its environment through sensors, and
• acting upon that environment through actuators.
• Note: every agent can perceive its own actions (but not always their effects).
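To make the sensor/actuator picture concrete, here is a minimal Python sketch of the perceive-decide-act loop. The environment, class names, and temperature rule are illustrative assumptions, not part of the lesson.

```python
# Minimal illustrative sketch of the agent/environment loop.
# All names here (Agent, Environment, etc.) are illustrative assumptions.

class Environment:
    """A trivial environment: a room whose temperature drifts upward."""
    def __init__(self, temperature=20.0):
        self.temperature = temperature

    def percept(self):
        # What the agent's "sensor" can read.
        return {"temperature": self.temperature}

    def apply(self, action):
        # The agent's "actuator" changes the environment.
        if action == "cool":
            self.temperature -= 2.0
        else:  # "wait"
            self.temperature += 0.5

class Agent:
    """Perceives through sensors and acts through actuators."""
    def program(self, percept):
        # Map the current percept to an action.
        return "cool" if percept["temperature"] > 22.0 else "wait"

env, agent = Environment(), Agent()
for step in range(5):
    p = env.percept()          # sense
    a = agent.program(p)       # decide
    env.apply(a)               # act
    print(step, p, a)
```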
Intelligent Agents
• An intelligent agent is a program that can make decisions or perform a
service based on its environment, user input and experiences.
• These programs can be used to autonomously gather information on a
regular, programmed schedule or when prompted by the user in real
time.
• An intelligent agent may also be referred to as a bot, which is short for robot.
• Typically, an agent program, using parameters the user has provided,
searches all or some part of the internet, gathers information the user is
interested in and presents it to them on a periodic or requested basis.
• Data intelligent agents can extract any specifiable information, such as
included keywords or publication date.
• In agents that employ artificial intelligence (AI), user input is collected using sensors, like microphones or cameras, and agent output is delivered through actuators, like speakers or screens.
• The practice of having information brought to a user by an agent is
called push technology.
• Common characteristics of intelligent agents are adaptation based on
experience, real time problem solving, analysis of error or success
rates and the use of memory-based storage and retrieval.
• For enterprises, intelligent agents can be used for applications in data
mining, data analytics and customer service and support (CSS).
• Consumers can also use intelligent agents to compare the prices of similar products or to be notified when a website is updated.
• Intelligent agents are similar to software agents, which are autonomous computer programs.
Agents and Environment
• An AI system is composed of an agent and its environment. The agents act in their
environment. The environment may contain other agents.
The Nature of Environments
• Some programs operate in the entirely artificial environment confined to keyboard
input, database, computer file systems and character output on a screen.
• In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains.
• Such a simulated domain can have a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time.
• A softbot designed to scan the online preferences of the customer and show interesting
items to the customer works in the real as well as an artificial environment.
• The most famous artificial environment is the Turing Test environment, in which a real agent and an artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.
Turing Test
• The success of a system's intelligent behavior can be measured with the Turing Test.
• Two persons and a machine to be evaluated participate in the test. Out
of the two persons, one plays the role of the tester.
• Each of them sits in different rooms.
• The tester does not know which participant is the machine and which is the human.
• The tester poses questions by typing and sending them to both participants, and receives typed responses from each.
• The machine's aim in this test is to fool the tester.
• If the tester fails to distinguish the machine's responses from the human's responses, then the machine is said to be intelligent.
Properties of Environment
• Discrete / Continuous − If there are a limited number of distinct, clearly defined states of the environment, the environment is discrete (e.g., chess); otherwise it is continuous (e.g., driving).
• Observable / Partially Observable − If it is possible to determine the complete state of the environment at
each time point from the percepts it is observable; otherwise it is only partially observable.
• Static / Dynamic − If it does not change while an agent is acting, then it is static; otherwise it is dynamic.
• Single agent / Multiple agents − The environment may contain other agents which may be of the same or
different kind as that of the agent.
• Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete state of the
environment, then the environment is accessible to that agent.
• Deterministic / Non-deterministic − If the next state of the environment is completely determined by the
current state and the actions of the agent, then the environment is deterministic; otherwise it is non-
deterministic.
• Episodic / Non-episodic − In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not depend on the actions in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.
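As a compact summary of these properties, the sketch below records them in a small Python data structure for two standard examples, chess and taxi driving. The particular true/false values are the usual textbook classification, stated here as an assumption rather than content from the slides.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    discrete: bool        # finite, clearly defined states?
    observable: bool      # can the full state be sensed?
    static: bool          # unchanged while the agent deliberates?
    deterministic: bool   # next state fixed by current state + action?
    single_agent: bool    # no other agents present?
    episodic: bool        # episodes independent of one another?

# Common textbook classifications (illustrative assumption, not from the slides):
chess = EnvironmentProperties(discrete=True, observable=True, static=True,
                              deterministic=True, single_agent=False, episodic=False)
taxi_driving = EnvironmentProperties(discrete=False, observable=False, static=False,
                                     deterministic=False, single_agent=False, episodic=False)

print(chess)
print(taxi_driving)
```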
Structure of Intelligent Agents
• To understand the structure of Intelligent Agents, we should be familiar with
Architecture and Agent Program.
• Architecture is the machinery that the agent executes on. It is a device with sensors and actuators, for example: a robotic car, a camera, or a PC.
• Agent program is an implementation of an agent function.
• An agent function is a map from the percept sequence (history of all that an
agent has perceived till date) to an action.
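The distinction between the agent function (the abstract mapping from percept sequences to actions) and the agent program (the code that implements it) can be illustrated with a table-driven sketch in Python. The percepts, table entries, and names below are invented for illustration.

```python
# Agent function: a mapping from percept sequences to actions.
# Agent program: the code below that implements it (here, by table lookup).

# Illustrative lookup table keyed by the whole percept sequence so far.
AGENT_FUNCTION_TABLE = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
}

percept_sequence = []          # history of everything perceived so far

def agent_program(percept):
    """Implements the agent function by looking up the percept sequence."""
    percept_sequence.append(percept)
    return AGENT_FUNCTION_TABLE.get(tuple(percept_sequence), "no-op")

print(agent_program("clean"))  # move
print(agent_program("dirty"))  # suck
```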
Examples of Agents:
• A software agent has keystrokes, file contents, and received network packets as sensors, and displays on the screen, files, and sent network packets as actuators.
• A human agent has eyes, ears, and other organs as sensors, and hands, legs, mouth, and other body parts as actuators.
• A robotic agent has cameras and infrared range finders as sensors, and various motors as actuators.
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
1. Simple Reflex Agents
2. Model-Based Reflex Agents
3. Goal-Based Agents
4. Utility-Based Agents
5. Learning Agent
Simple reflex agents
• Simple reflex agents ignore the rest of the percept history and act only on the basis of the current
percept.
• Percept history is the history of all that an agent has perceived till date.
• The agent function is based on the condition-action rule.
• A condition-action rule is a rule that maps a state (i.e., a condition) to an action.
• If the condition is true, then the action is taken; otherwise it is not (a short code sketch follows this list).
• This agent function only succeeds when the environment is fully observable.
• For simple reflex agents operating in partially observable environments, infinite loops are often
unavoidable.
• It may be possible to escape from infinite loops if the agent can randomize its actions.
• Problems with simple reflex agents are:
1. Very limited intelligence.
2. No knowledge of non-perceptual parts of the state.
3. The condition-action rule table is usually too big to generate and store.
4. If any change occurs in the environment, the collection of rules needs to be updated.
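A minimal sketch of a simple reflex agent, using the classic two-location vacuum-cleaner world as an assumed example (it is not part of the slides): the agent looks only at the current percept and applies condition-action rules.

```python
def simple_reflex_vacuum_agent(percept):
    """Acts only on the current percept via condition-action rules."""
    location, status = percept          # e.g. ("A", "Dirty")
    # Condition-action rules:
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))   # Right
print(simple_reflex_vacuum_agent(("B", "Clean")))   # Left
```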
Model-based reflex agents
• It works by finding a rule whose condition matches the current situation.
• A model-based agent can handle partially observable environments by using a model of the world.
• The agent has to keep track of an internal state, which is adjusted by each percept and depends on the percept history.
• The current state is stored inside the agent which maintains some kind
of structure describing the part of the world which cannot be seen.
• Updating the state requires information about:
1. how the world evolves independently of the agent, and
2. how the agent's actions affect the world.
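A model-based reflex agent can be sketched as a simple reflex agent plus an internal state that is updated from each percept. The tiny vacuum-style world, class name, and update rule below are illustrative assumptions.

```python
class ModelBasedReflexAgent:
    """Keeps an internal state so it can act in a partially observable world."""
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}   # internal model of the world
        self.last_action = None

    def update_state(self, percept):
        # Fold the new percept into the internal state (a stand-in for
        # "how the world evolves" and "how actions affect it").
        location, status = percept
        self.state[location] = status

    def program(self, percept):
        self.update_state(percept)
        location, _ = percept
        # Condition-action rules, now applied to the internal state.
        if self.state[location] == "Dirty":
            action = "Suck"
        elif self.state["A"] == "Dirty":
            action = "Left"
        elif self.state["B"] == "Dirty":
            action = "Right"
        else:
            action = "NoOp"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.program(("A", "Clean")))   # NoOp (no known dirt yet)
print(agent.program(("B", "Dirty")))   # Suck
```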
Goal-based agents
• These kinds of agents take decisions based on how far they currently are from their goal (a description of desirable situations).
• Every action they take is intended to reduce the distance to the goal.
• This allows the agent a way to choose among multiple possibilities,
selecting the one which reaches a goal state.
• The knowledge that supports its decisions is represented explicitly
and can be modified, which makes these agents more flexible.
• They usually require search and planning.
• The goal-based agent’s behavior can easily be changed.
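A goal-based agent chooses actions by searching for a sequence that reaches a goal state. The sketch below assumes a tiny one-dimensional corridor and uses breadth-first search as the planning step; the world and names are invented for illustration.

```python
from collections import deque

# Illustrative assumption: a tiny corridor where the agent can move left/right.
ACTIONS = {"Left": -1, "Right": +1}

def plan(start, goal, lo=0, hi=4):
    """Breadth-first search for a sequence of actions that reaches the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        position, path = frontier.popleft()
        if position == goal:                      # goal test
            return path
        for action, delta in ACTIONS.items():
            nxt = position + delta
            if lo <= nxt <= hi and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan(start=0, goal=3))   # ['Right', 'Right', 'Right']
```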
Utility-based agents
• Agents that are built around an explicit measure of the usefulness (utility) of each outcome are called utility-based agents.
• When there are multiple possible alternatives, utility-based agents are used to decide which one is best.
• They choose actions based on a preference (utility) for each state. Sometimes
achieving the desired goal is not enough.
• We may look for a quicker, safer, cheaper trip to reach a destination. Agent happiness
should be taken into consideration. Utility describes how “happy” the agent is.
• Because of the uncertainty in the world, a utility agent chooses the action that
maximizes the expected utility.
• A utility function maps a state onto a real number which describes the associated
degree of happiness.
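A utility-based agent can be sketched as one that computes the expected utility of each available action over its possible outcomes and picks the maximizing action. The routes, probabilities, and utility numbers below are invented purely for illustration.

```python
# Illustrative, invented numbers: possible routes to a destination and the
# probability-weighted outcomes of taking each one.
OUTCOMES = {
    "highway":   [(0.8, "fast arrival"), (0.2, "traffic jam")],
    "back_road": [(1.0, "slow arrival")],
}

# Utility function: maps a state (outcome) to a real number ("happiness").
UTILITY = {"fast arrival": 10.0, "slow arrival": 4.0, "traffic jam": -5.0}

def expected_utility(action):
    return sum(p * UTILITY[outcome] for p, outcome in OUTCOMES[action])

def utility_based_choice():
    # Choose the action that maximizes expected utility.
    return max(OUTCOMES, key=expected_utility)

for a in OUTCOMES:
    print(a, expected_utility(a))
print("chosen:", utility_based_choice())   # highway (7.0 vs 4.0)
```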
Learning Agent
• A learning agent in AI is an agent that can learn from its past experiences, i.e., it has learning capabilities.
• It starts out with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
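The four components can be sketched as cooperating functions: the performance element selects actions, the critic scores them against a fixed standard, the learning element updates the rules, and the problem generator occasionally suggests exploratory actions. All names, rules, and numbers below are illustrative assumptions.

```python
import random

# Performance element: the agent's current condition-action rules
# (deliberately starting with a bad rule so there is something to learn).
rules = {"Dirty": "Wait", "Clean": "Wait"}

def performance_element(percept):
    # Responsible for selecting external actions.
    return rules.get(percept, "Wait")

def critic(percept, action):
    # Fixed performance standard: cleaning dirt is good, ignoring it is bad.
    if percept == "Dirty":
        return 1.0 if action == "Suck" else -1.0
    return 0.0

def learning_element(percept, action, feedback):
    # Makes improvements: rewrite the rule whenever the critic reports failure.
    if feedback < 0:
        rules[percept] = "Suck"

def problem_generator():
    # Suggests exploratory actions that lead to new, informative experiences.
    return random.choice(["Suck", "Wait", "Move"])

for percept in ["Clean", "Dirty", "Dirty"]:
    explore = random.random() < 0.1                       # occasional exploration
    action = problem_generator() if explore else performance_element(percept)
    feedback = critic(percept, action)
    learning_element(percept, action, feedback)
    print(percept, action, feedback)
```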
END
“By far the greatest danger of Artificial Intelligence is that
people conclude too early that they understand it.”
― Eliezer Yudkowsky