Agents
What is an Agent?
Structure of an AI Agent
Learning Agents
Example of Agents
Agents in Artificial Intelligence
Discrete vs. Continuous
This distinction applies to:
The state of the environment: the chess environment has a finite
number of states, while taxi driving is a continuous-state environment.
The way time is handled: taxi driving is a continuous-time problem.
The percepts and actions of the agent: chess has a discrete set of
percepts and actions, while taxi-driving actions are continuous.
Most card games are discrete.
Structure of an AI Agent
Goal of AI
Given a PEAS (Performance measure, Environment, Actuators, Sensors)
task environment:
Construct the agent function f, which maps percept sequences to actions.
Design an agent program that implements f on a particular
architecture.
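The agent function f can be made concrete with a small sketch. The classic illustration is a table-driven agent program: a lookup table from the whole percept sequence to an action. The percept names and the table below are illustrative assumptions, not part of the original slides.

```python
from typing import Callable, Dict, List, Tuple

Percept = str
Action = str

def make_table_driven_agent(table: Dict[Tuple[Percept, ...], Action]) -> Callable[[Percept], Action]:
    """Return an agent program implementing f via a lookup table
    indexed by the entire percept sequence seen so far."""
    percepts: List[Percept] = []  # percept history

    def agent_program(percept: Percept) -> Action:
        percepts.append(percept)
        # f maps the whole percept sequence (not just the last percept) to an action.
        return table.get(tuple(percepts), "noop")

    return agent_program

# Toy vacuum-world table (hypothetical entries for illustration).
table = {("A-dirty",): "suck", ("A-dirty", "A-clean"): "right"}
agent = make_table_driven_agent(table)
print(agent("A-dirty"))   # suck
print(agent("A-clean"))   # right
```

The table grows with every possible percept sequence, which is why practical agent programs (reflex, model-based, goal-based, utility-based) compute f instead of tabulating it.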
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act
only on the basis of the current percept.
The agent function is based on the condition-action rule.
If the condition is true, then the action is taken, else not.
This agent function only succeeds when the environment is fully
observable.
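A condition-action rule agent can be sketched in a few lines. The vacuum-world percepts and actions below are illustrative assumptions; the point is that the decision depends only on the current percept.

```python
def simple_reflex_vacuum_agent(percept):
    """Simple reflex agent: decides from the current percept alone.
    Percept is assumed to be a (location, status) pair."""
    location, status = percept
    # Condition-action rules: if the condition is true, take the action.
    if status == "dirty":
        return "suck"
    if location == "A":
        return "right"
    return "left"

print(simple_reflex_vacuum_agent(("A", "dirty")))  # suck
print(simple_reflex_vacuum_agent(("A", "clean")))  # right
```

Because no history is kept, this agent works only if the current percept fully describes the relevant state of the world, which is exactly the full-observability condition above.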
Model-Based Reflex Agents
A model-based agent can handle partially observable environments by
using a model of the world. The agent keeps track of an internal
state, adjusted by each percept, that depends on the percept history.
Like the simple reflex agent, it works by finding a rule whose
condition matches the current situation.
The current state is stored inside the agent which maintains some
kind of structure describing the part of the world which cannot be
seen.
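A minimal sketch of the internal state idea, again using an assumed vacuum world: the agent remembers the believed status of squares it cannot currently see, and uses that model when choosing an action.

```python
class ModelBasedVacuumAgent:
    """Model-based reflex agent: maintains an internal state describing
    the part of the world it cannot see. World details are illustrative."""

    def __init__(self):
        # Internal state: believed status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        # Adjust the internal state using the current percept.
        self.model[location] = status
        if status == "dirty":
            return "suck"
        # Consult the model for the unseen square: stop once all known clean.
        if all(v == "clean" for v in self.model.values()):
            return "noop"
        return "right" if location == "A" else "left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "dirty")))  # suck
```

The `model` dictionary is what distinguishes this agent from the simple reflex version: the same percept can lead to different actions depending on what the agent already believes about the world.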
Goal-Based Agents
These kinds of agents take decisions based on how far they are
currently from their goal (a description of desirable situations).
Every action they take is intended to reduce their distance from the goal.
This gives the agent a way to choose among multiple possibilities,
selecting the one that reaches a goal state.
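The "reduce distance to the goal" idea can be sketched directly. The one-dimensional world, the `result` transition model, and the distance metric below are illustrative assumptions.

```python
def goal_based_agent(state, goal, actions, result):
    """Pick the action whose resulting state is closest to the goal.
    `result(state, action)` is an assumed transition model."""
    def distance(s):
        return abs(s - goal)  # how far a state is from the goal
    return min(actions, key=lambda a: distance(result(state, a)))

# Toy 1-D world: move left/right along a line toward position 5.
result = lambda s, a: s + (1 if a == "right" else -1)
print(goal_based_agent(2, 5, ["left", "right"], result))  # right
```

Unlike a reflex agent, this one needs a model of what each action will do (`result`) plus an explicit goal, and it compares the predicted outcomes of all available actions before committing to one.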
Utility-Based Agents
Utility-based agents are used when there are multiple possible
alternatives and the agent must decide which one is best.
They choose actions based on a preference (utility) for each state.
Sometimes achieving the desired goal is not enough. We may look for
a quicker, safer, cheaper trip to reach a destination.
Utility describes how “happy” the agent is.
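A sketch of preferring a quicker, safer trip rather than just any trip that reaches the destination. The routes, their attributes, and the utility weights below are illustrative assumptions.

```python
def utility_based_agent(state, actions, result, utility):
    """Choose the action leading to the state with the highest utility.
    `utility` encodes preferences (speed, safety, cost), not just goal/no-goal."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: both routes reach the destination, but they differ
# in travel time and risk; the utility function trades these off.
routes = {"highway":  {"time": 30, "risk": 0.2},
          "backroad": {"time": 45, "risk": 0.05}}
result = lambda s, a: routes[a]                    # outcome of taking a route
utility = lambda o: -o["time"] - 200 * o["risk"]   # prefer quick AND safe
print(utility_based_agent(None, list(routes), result, utility))  # backroad
```

A goal-based agent would treat both routes as equally acceptable, since both reach the goal; the utility function is what lets the agent express how "happy" each outcome makes it and pick the safer trip.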
Learning Agents
A learning agent in AI is an agent that can learn from its past
experiences; it has learning capabilities.
It starts out acting with basic knowledge and then adapts
automatically through learning.
A learning agent has four main conceptual components:
1. Learning element: responsible for making improvements by
learning from the environment.
2. Critic: the learning element takes feedback from the critic, which
describes how well the agent is doing with respect to a fixed
performance standard.
3. Performance element: responsible for selecting external actions
using the agent's current knowledge.
4. Problem generator: responsible for suggesting exploratory actions
that lead to new and informative experiences.
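The components above can be sketched together. The two-action world, the reward signal, and the simple value-update rule are illustrative assumptions; the point is the loop of acting, getting feedback from the critic, and letting the learning element improve future behaviour.

```python
class LearningAgent:
    """Sketch of a learning agent: the critic scores behaviour against
    a performance standard; the learning element improves the knowledge
    the performance element acts on. All names/rules are illustrative."""

    def __init__(self):
        # Learned action values, initialised optimistically so the
        # agent tries both actions before settling on one.
        self.value = {"left": 1.0, "right": 1.0}

    def performance_element(self):
        # Act using current knowledge: pick the best-valued action.
        return max(self.value, key=self.value.get)

    def critic(self, action, reward):
        # Feedback relative to the fixed performance standard (reward here).
        return reward

    def learning_element(self, action, feedback, lr=0.5):
        # Improve the knowledge the performance element uses.
        self.value[action] += lr * (feedback - self.value[action])

agent = LearningAgent()
for _ in range(10):
    a = agent.performance_element()
    reward = 1.0 if a == "right" else 0.0  # this environment favours "right"
    agent.learning_element(a, agent.critic(a, reward))
print(agent.performance_element())  # right
```

The agent starts with basic (here, optimistic) knowledge, acts, and the critic's feedback gradually shifts its action values until it reliably chooses the better action, which is the adapt-through-learning behaviour described above.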