
Unit 2

Agents and Environment


Agents in Artificial Intelligence

• An AI system can be defined as the study of the rational agent and its
environment. The agents sense the environment through sensors and
act on their environment through actuators.
• An AI agent can have mental properties such as knowledge, belief,
intention, etc.
What is an Agent?
• An agent is anything that perceives its environment through
sensors and acts upon that environment through actuators. An agent
runs in a cycle of perceiving, thinking, and acting. An agent can be:
• Human agent: A human agent has eyes, ears, and other organs that
work as sensors, and hands, legs, and a vocal tract that work as actuators.
• Robotic agent: A robotic agent can have cameras, infrared range
finders, and natural language processing as sensors, and various motors
as actuators.
• Software agent: A software agent can take keystrokes and file contents
as sensory input, act on those inputs, and display output on the screen.
• Hence, the world around us is full of agents such as thermostats,
cellphones, and cameras; even we ourselves are agents.
Agent Environment in AI

• An environment is everything in the world that surrounds the agent,
but is not a part of the agent itself. An environment can be described
as a situation in which an agent is present.
• The environment is where the agent lives and operates; it provides the
agent with something to sense and act upon. An environment is often
said to be non-deterministic.
Features of Environment
As per Russell and Norvig, an environment can have various features
from the point of view of an agent:
• Fully observable vs Partially Observable
• Static vs Dynamic
• Discrete vs Continuous
• Deterministic vs Stochastic
• Single-agent vs Multi-agent
• Episodic vs sequential
• Known vs Unknown
• Accessible vs Inaccessible
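As a sketch, the features above can be recorded as a simple per-environment profile. The dictionary keys and the two example rows below are illustrative assumptions following common textbook classifications, not definitions from these notes:

```python
# Illustrative sketch: recording the environment features listed above as
# a profile for each task environment. The classifications (chess, taxi
# driving) follow common textbook examples and are assumptions.

ENVIRONMENTS = {
    'chess': {
        'observable': 'fully', 'deterministic': True, 'episodic': False,
        'static': True, 'discrete': True, 'agents': 'multi',
    },
    'taxi_driving': {
        'observable': 'partial', 'deterministic': False, 'episodic': False,
        'static': False, 'discrete': False, 'agents': 'multi',
    },
}

def needs_internal_state(env):
    """Partially observable or sequential environments call for memory."""
    return env['observable'] == 'partial' or not env['episodic']

print(needs_internal_state(ENVIRONMENTS['taxi_driving']))  # True
```

Such a profile makes it easy to see, for example, why a taxi-driving agent needs internal state while a crossword solver does not.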
Sensors, Actuators, and Effectors
• Sensor: A sensor is a device that
detects changes in the environment
and sends the information to other
electronic devices. An agent observes
its environment through sensors.
• Actuators: Actuators are the
components of a machine that convert
energy into motion. Actuators are
responsible for moving and
controlling a system. An actuator can
be an electric motor, gears, rails, etc.
• Effectors: Effectors are the devices
that affect the environment.
Effectors can be legs, wheels, arms,
fingers, wings, fins, and display screens.
Types of AI Agents
• Agents can be grouped into five classes based on their degree of
perceived intelligence and capability. All of these agents can improve
their performance and generate better actions over time. They are
given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
Simple Reflex Agent:
• Simple reflex agents are the simplest agents. These agents make
decisions on the basis of the current percept and ignore the rest of
the percept history.
• These agents succeed only in fully observable environments.
• The simple reflex agent does not consider any part of the percept
history in its decision and action process.
• The simple reflex agent works on the condition-action rule, which
maps the current state to an action. For example, a room-cleaner
agent acts only if there is dirt in the room.
Simple Reflex Agent
• Example: A thermostat that turns on the heater when the temperature
drops below a certain threshold but doesn’t consider previous temperature
readings or long-term weather forecasts.

• Characteristics of Simple Reflex Agent:


• Reactive: Reacts directly to current sensory input without considering past
experiences or future consequences.
• Limited Scope: Capable of handling simple tasks or environments with
straightforward cause-and-effect relationships.
• Fast Response: Makes quick decisions based solely on the current state,
leading to rapid action execution.
• Lack of Adaptability: Unable to learn or adapt based on feedback, making it
less suitable for dynamic or changing environments.
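The condition-action mapping described above can be sketched in a few lines. The two-square vacuum world (locations A and B, each either Dirty or Clean) is an illustrative assumption, not something specified in these notes:

```python
# Minimal sketch of a simple reflex agent for a hypothetical two-square
# vacuum world. The agent maps the current percept directly to an action
# via condition-action rules and keeps no percept history.

def simple_reflex_vacuum(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    if status == 'Dirty':        # rule: dirt at current square -> Suck
        return 'Suck'
    if location == 'A':          # rule: clean at A -> move to B
        return 'Right'
    return 'Left'                # rule: clean at B -> move to A

print(simple_reflex_vacuum(('A', 'Dirty')))   # Suck
print(simple_reflex_vacuum(('B', 'Clean')))   # Left
```

Note that the function takes only the current percept; this is exactly why such an agent cannot cope with partial observability.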
Problems with the simple reflex agent design approach:

• They have very limited intelligence.
• They have no knowledge of non-perceptual parts of the current state.
• The condition-action rules are often too numerous to generate and store.
• They are not adaptive to changes in the environment.
Model-based Reflex Agent
• A model-based agent can work in a partially observable
environment and track the situation.
• A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is called
a Model-based agent.
• Internal State: It is a representation of the current state based on percept
history.
• These agents have the model, which is knowledge of the world, and
perform actions based on that model.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
Model-Based Reflex Agents
• Example: A self-driving system not only responds to present road conditions
but also takes into account its knowledge of traffic rules, road maps, and
past experiences to navigate safely.

• Characteristics of Model-Based Reflex Agents:


• Adaptive: Maintains an internal model of the environment to anticipate
future states and make informed decisions.
• Contextual Understanding: Considers both current input and historical data
to determine appropriate actions, allowing for more nuanced
decision-making.
• Computational Overhead: Requires resources to build, update, and utilize
the internal model, leading to increased computational complexity.
• Improved Performance: Can handle more complex tasks and environments
compared to simple reflex agents, thanks to its ability to incorporate past
experiences.
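Extending the vacuum-world sketch, a model-based variant can be written by adding an internal state that is updated from the percept history and from the predicted effects of its own actions. The two-square world and the `NoOp` action are illustrative assumptions:

```python
# Hedged sketch of a model-based reflex agent for a hypothetical
# two-square vacuum world. It keeps an internal model (believed status
# of each square) so it can act sensibly under partial observability.

class ModelBasedVacuum:
    def __init__(self):
        self.model = {'A': None, 'B': None}    # None = status unknown

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update state from percept
        if status == 'Dirty':
            self.model[location] = 'Clean'     # predicted effect of Suck
            return 'Suck'
        if all(s == 'Clean' for s in self.model.values()):
            return 'NoOp'                      # model says the job is done
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuum()
print(agent.act(('A', 'Dirty')))   # Suck
print(agent.act(('B', 'Clean')))   # NoOp (model says both squares clean)
```

Unlike the simple reflex agent, this one can stop once its model says everything known is clean, even though no single percept tells it that.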
Goal-based Agents
• Knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent
by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved or not. Such
consideration of different scenarios is called searching and planning,
which makes an agent proactive.
Goal-Based Agents
• Example: A delivery robot tasked with delivering packages to specific
locations. It analyzes its current position, destination, available routes, and
obstacles to plan an optimal path towards delivering the package.
• Characteristics of Goal-Based Agents:
• Purposeful: Operates with predefined goals or objectives, providing a clear
direction for decision-making and action selection.
• Strategic Planning: Evaluates available actions based on their contribution
to goal achievement, optimizing decision-making for goal attainment.
• Goal Prioritization: Can prioritize goals based on their importance or
urgency, enabling efficient allocation of resources and effort.
• Goal Flexibility: Capable of adapting goals or adjusting strategies in
response to changes in the environment or new information.
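The "searching and planning" step can be illustrated with a delivery robot that searches for a sequence of moves reaching its goal rather than reacting to the current percept alone. The room map below and breadth-first search as the planner are illustrative assumptions:

```python
# Minimal sketch of the planning step of a goal-based agent: a
# hypothetical delivery robot runs breadth-first search over a small
# room graph to find the shortest action sequence reaching its goal.

from collections import deque

def plan_route(graph, start, goal):
    """Return the shortest list of rooms from start to goal, else None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:           # goal test on the candidate plan
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                        # goal unreachable

rooms = {'Dock': ['Hall'], 'Hall': ['Dock', 'Lab', 'Office'],
         'Lab': ['Hall'], 'Office': ['Hall']}
print(plan_route(rooms, 'Dock', 'Office'))  # ['Dock', 'Hall', 'Office']
```

The agent commits to an action only after evaluating whole sequences against the goal, which is what distinguishes it from the reflex agents above.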
Utility-based Agents
• These agents are similar to goal-based agents but add an extra
component of utility measurement, which provides a measure of
success in a given state.
• Utility-based agents act based not only on goals but also on the best
way to achieve the goal.
• The utility-based agent is useful when there are multiple possible
alternatives and the agent has to choose the best action to perform.
• The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
Utility-Based Agents
• Example: An investment advisor algorithm suggests investment options by
considering factors such as potential returns, risk tolerance, and liquidity
requirements, with the goal of maximizing the investor’s long-term financial
satisfaction.

• Characteristics of Utility-Based Agents:


• Multi-criteria Decision-making: Evaluates actions based on multiple criteria, such
as utility, cost, risk, and preferences, to make balanced decisions.
• Trade-off Analysis: Considers trade-offs between competing objectives to identify
the most desirable course of action.
• Subjectivity: Incorporates subjective preferences or value judgments into
decision-making, reflecting the preferences of the decision-maker.
• Complexity: Introduces complexity due to the need to model and quantify utility
functions accurately, potentially requiring sophisticated algorithms and
computational resources.
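The utility function idea can be sketched directly: score each candidate outcome with a real number and pick the maximum. The weights and the investment options below are illustrative assumptions echoing the advisor example, not part of the text:

```python
# Hedged sketch of a utility-based agent: instead of asking only "does
# this action reach a goal?", it scores each predicted outcome with a
# utility function and picks the action with the highest score.

def utility(outcome):
    """Map an outcome state to a real number (higher is better).
    The 0.7/0.3 weights are illustrative trade-off preferences."""
    return 0.7 * outcome['expected_return'] - 0.3 * outcome['risk']

def choose_action(options):
    """Pick the option whose predicted outcome has the highest utility."""
    return max(options, key=lambda o: utility(o['outcome']))

options = [
    {'name': 'bonds',  'outcome': {'expected_return': 3.0, 'risk': 1.0}},
    {'name': 'stocks', 'outcome': {'expected_return': 8.0, 'risk': 6.0}},
]
print(choose_action(options)['name'])  # stocks
```

The trade-off analysis the text mentions lives entirely in the weights of the utility function; changing them changes which action wins.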
Learning Agents
• A learning agent in AI is an agent that can learn from its past
experiences; it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt
automatically through learning.
• A learning agent has four main conceptual components:
• Learning element: Responsible for making improvements by learning from
the environment.
• Critic: The learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
• Performance element: Responsible for selecting external actions.
• Problem generator: Responsible for suggesting actions that will lead to new
and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and
look for new ways to improve it.
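The four components can be sketched on a toy task: learning which of two actions yields more reward. The action names, reward values, and the round-robin exploration scheme are illustrative assumptions:

```python
# Minimal sketch of the four learning-agent components on a toy task.
# Everything here (actions 'A'/'B', rewards, round-robin exploration)
# is an illustrative assumption.

class LearningAgent:
    def __init__(self, actions):
        self.actions = list(actions)
        self.values = {a: 0.0 for a in self.actions}   # learned estimates
        self.counts = {a: 0 for a in self.actions}
        self._explore_i = 0

    def performance_element(self):
        """Select the external action: greedy on current value estimates."""
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        """Suggest actions giving new, informative experiences (round-robin)."""
        action = self.actions[self._explore_i % len(self.actions)]
        self._explore_i += 1
        return action

    def learning_element(self, action, feedback):
        """Improve the value estimates using the critic's feedback."""
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]

def critic(action):
    """Score an action against a fixed performance standard."""
    return 1.0 if action == 'B' else 0.0   # 'B' is secretly the better action

agent = LearningAgent(['A', 'B'])
for step in range(10):
    # explore first, then exploit what has been learned
    action = agent.problem_generator() if step < 4 else agent.performance_element()
    agent.learning_element(action, critic(action))
print(agent.performance_element())  # B
```

The split of roles mirrors the bullets above: the critic scores, the learning element updates, the performance element acts, and the problem generator forces exploration.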
Example:
• An e-commerce platform employs a recommendation system.
Initially, the system may depend on simple rules or heuristics to
recommend items to users. However, as it collects data on user
preferences, behavior, and feedback (such as purchases, ratings, and
reviews), it gradually improves its suggestions. Using machine
learning algorithms, the agent continually refines its model with
previous interactions, improving the precision and relevance of
product recommendations for each user. This adaptive learning
process gets better at anticipating user preferences and providing
personalized recommendations, ultimately improving the user
experience and increasing engagement and sales for the platform.
Characteristics of Learning Agents:
• Adaptive Learning: Acquires knowledge or improves performance
over time through experience, feedback, or exposure to data.
• Flexibility: Capable of adapting to new tasks, environments, or
situations by adjusting internal representations or behavioral
strategies.
• Generalization: Extracts general patterns or principles from specific
experiences, allowing for transferable knowledge and skills across
different domains.
• Exploration vs. Exploitation: Balances exploration of new strategies or
behaviors with exploitation of known solutions to optimize learning
and performance.
6. Rational Agents
• A rational agent is one that does the right thing. It is an
autonomous entity designed to perceive its environment, process
information, and act in a way that maximizes the achievement of its
predefined goals or objectives. Rational agents always aim to produce
an optimal solution.
• Example: A self-driving car maneuvering through city traffic is an
example of a rational agent. It uses sensors to observe the
environment, analyzes data on road conditions, traffic flow, and
pedestrian activity, and makes choices to arrive at its destination in a
safe and effective manner. The self-driving car shows rational agent
traits by constantly improving its path through real-time information
and lessons from past situations like roadblocks or traffic jams.
Characteristics of Rational Agents
• Goal-Directed Behavior: Rational agents act to achieve their goals or
objectives.
• Information Sensitivity: They gather and process information from
their environment to make informed decisions.
• Decision-Making: Rational agents make decisions based on available
information and their goals, selecting actions that maximize utility or
achieve desired outcomes.
• Consistency: Their actions are consistent with their beliefs and
preferences.
• Adaptability: Rational agents can adapt their behavior based on
changes in their environment or new information.
Characteristics of Rational Agents
• Optimization: They strive to optimize their actions to achieve the best
possible outcome given the constraints and uncertainties of the
environment.
• Learning: Rational agents may learn from past experiences to improve their
decision-making in the future.
• Efficiency: They aim to achieve their goals using resources efficiently,
minimizing waste and unnecessary effort.
• Utility Maximization: Rational agents seek to maximize their utility or
satisfaction, making choices that offer the greatest benefit given their
preferences.
• Self-Interest: Rational agents typically act in their own self-interest,
although this may be tempered by factors such as social norms or altruistic
tendencies.
7. Reflex Agents with State

• Reflex agents with state enhance basic reflex agents by incorporating
internal representations of the environment's state. They react to
current perceptions while considering additional factors like battery
level and location, improving adaptability and intelligence.
• Example: A vacuum cleaning robot with state might prioritize cleaning
certain areas or return to its charging station when the battery is low,
enhancing adaptability and intelligence.
Characteristics of Reflex Agents with State
• Sensing: They sense the environment to gather information about the
current state.
• Action Selection: Their actions are determined by the current state,
without considering past states or future consequences.
• State Representation: They maintain an internal representation of the
current state of the environment.
• Immediate Response: Reflex agents with state react immediately to
changes in the environment.
• Limited Memory: They typically have limited memory capacity and do not
retain information about past states.
• Simple Decision Making: Their decision-making process is straightforward,
often based on predefined rules or heuristics.
8. Learning Agents with a Model
• Learning agents with a model are a sophisticated type of artificial
intelligence (AI) agent that not only learns from experience but also
constructs an internal model of the environment. This model allows the
agent to simulate possible actions and their outcomes, enabling it to make
informed decisions even in situations it has not directly encountered
before.
• Example: Consider a self-driving car equipped with a learning agent with a
model. This car not only learns from past driving experiences but also
builds a model of the road, traffic patterns, and potential obstacles. Using
this model, it can simulate different driving scenarios and choose the safest
or most efficient course of action. In summary, learning agents with a
model combine the ability to learn from experience with the capacity to
simulate and reason about the environment, resulting in more flexible and
intelligent behavior.
Characteristics of Learning Agents with a Model
• Learning from experience: Agents accumulate knowledge through
interactions with the environment.
• Constructing internal models: They build representations of the
environment to simulate possible actions and outcomes.
• Simulation and reasoning: Using the model, agents can predict the
consequences of different actions.
• Informed decision-making: This enables them to make choices based
on anticipated outcomes, even in unfamiliar situations.
• Flexibility and adaptability: Learning agents with a model exhibit
more intelligent behavior by integrating learning with predictive
capabilities.
9. Hierarchical Agents
• Hierarchical agents are a type of artificial intelligence (AI) agent that
organizes its decision-making process into multiple levels of
abstraction or hierarchy. Each level of the hierarchy is responsible for
a different aspect of problem-solving, with higher levels providing
guidance and control to lower levels. This hierarchical structure allows
for more efficient problem-solving by breaking down complex tasks
into smaller, more manageable subtasks.
• Example: In a hierarchical agent controlling a robot, the highest level
might be responsible for overall task planning, while lower levels
handle motor control and sensory processing. This division of labor
enables hierarchical agents to tackle complex problems in a
systematic and organized manner, leading to more effective and
robust decision-making.
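The division of labor described above can be sketched with a toy robot: a high-level planner decomposes a task into subtasks and delegates each to a low-level controller that emits primitive actions. All task, subtask, and action names below are illustrative assumptions:

```python
# Illustrative sketch of a hierarchical agent: a high-level planner
# provides guidance to a low-level controller. The task decomposition
# and action names are assumptions for a hypothetical robot.

class LowLevelController:
    """Lower level: turns a subtask into primitive motor actions."""
    def execute(self, subtask):
        actions = {'goto_kitchen': ['turn_left', 'forward', 'forward'],
                   'grasp_cup':    ['open_gripper', 'lower_arm', 'close_gripper']}
        return actions[subtask]

class HighLevelPlanner:
    """Higher level: breaks the overall task into subtasks and delegates."""
    def __init__(self, controller):
        self.controller = controller

    def run(self, task):
        plan = {'fetch_cup': ['goto_kitchen', 'grasp_cup']}[task]
        trace = []
        for subtask in plan:                   # guidance and control downward
            trace.extend(self.controller.execute(subtask))
        return trace

agent = HighLevelPlanner(LowLevelController())
print(agent.run('fetch_cup'))
```

Each level only knows its own vocabulary: the planner never mentions motor commands, and the controller never sees the overall task.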
Characteristics of Hierarchical Agents
• Hierarchical structure: Decision-making is organized into multiple
levels of abstraction.
• Division of labor: Each level handles different aspects of
problem-solving.
• Guidance and control: Higher levels provide direction to lower levels.
• Efficient problem-solving: Complex tasks are broken down into
smaller, manageable subtasks.
• Systematic and organized: Hierarchical agents tackle problems in a
structured manner, leading to effective decision-making.
10. Multi-agent systems
• Multi-agent systems (MAS) are systems composed of multiple
interacting autonomous agents. Each agent in a multi-agent system
has its own goals, capabilities, knowledge, and possibly different
perspectives. These agents can interact with each other directly or
indirectly to achieve individual or collective goals.
• Example: A Multi-Agent System (MAS) example is a traffic
management system. Here, each vehicle acts as an autonomous agent
with its own goals (e.g., reaching its destination efficiently). They
interact indirectly (e.g., via traffic signals) to optimize traffic flow,
minimizing congestion and travel time collectively.
Characteristics of Multi-agent systems
• Autonomous Agents: Each agent acts on its own based on its goals
and knowledge.
• Interactions: Agents communicate, cooperate, or compete to achieve
individual or shared objectives.
• Distributed Problem Solving: Agents work together to solve complex
problems more efficiently than they could alone.
• Decentralization: No central control; agents make decisions
independently, leading to emergent behaviors.
• Applications: Used in robotics, traffic management, healthcare, and
more, where distributed decision-making is essential.
Structure of an AI Agent
• The task of AI is to design an agent program that implements the
agent function. The structure of an intelligent agent is a combination
of architecture and agent program. It can be viewed as:
• Agent = Architecture + Agent program
• The following are the three main terms involved in the structure of an
AI agent:
• Architecture: The machinery that the AI agent executes on.
• Agent function: Maps a percept to an action.
• Agent program: An implementation of the agent function. The agent
program executes on the physical architecture to produce the
function f.
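The Agent = Architecture + Agent program split can be sketched directly: the agent program implements the percept-to-action function, and the architecture is the machinery that feeds it percepts and carries out its actions. The percept and action names here are illustrative assumptions:

```python
# Minimal sketch of Agent = Architecture + Agent program. The agent
# program implements the agent function f(percept) -> action; the
# architecture runs the program. Names are illustrative.

def agent_program(percept):
    """Agent program: implements the agent function f(percept) -> action."""
    return 'Suck' if percept == 'Dirty' else 'Move'

class Architecture:
    """Machinery the agent program executes on: it supplies percepts to
    the program and carries out the returned actions."""
    def __init__(self, program):
        self.program = program

    def run(self, percepts):
        # the architecture repeatedly invokes the agent program
        return [self.program(p) for p in percepts]

robot = Architecture(agent_program)
print(robot.run(['Dirty', 'Clean', 'Dirty']))  # ['Suck', 'Move', 'Suck']
```

The same architecture could run any of the agent programs sketched earlier; only the program changes, which is exactly the point of the decomposition.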
Agent Environment in AI
Fully observable vs Partially Observable
• If an agent's sensors can sense or access the complete state of the environment at each
point in time, then it is a fully observable environment; otherwise, it is partially observable.
For reference, imagine a chess-playing agent. In this case, the agent can fully observe the
state of the chessboard at all times. Its sensors (in this case, vision or the ability to access
the board's state) provide complete information about the current position of all pieces.
This is a fully observable environment because the agent has perfect information about the
state of the world.
• A fully observable environment is easier to handle, as there is no need to maintain an
internal state to keep track of the history of the world. For reference, consider a self-driving
car navigating a busy city. While the car has sensors like cameras, lidar, and radar, it can't
see everything at all times: buildings, other vehicles, and pedestrians can obstruct its
sensors. In this scenario, the car's environment is partially observable because it doesn't
have complete and constant access to all relevant information. It needs to maintain an
internal state and history to make informed decisions even when some information is
temporarily unavailable.
• If an agent has no sensors at all, the environment is called unobservable. For reference,
think about an agent designed to predict earthquakes but placed in a sealed, windowless
room with no sensors or access to external data. In this situation, the environment is
unobservable because the agent has no way to gather information about the outside
world.
Deterministic vs Stochastic:
• If an agent's current state and selected action can completely determine the next state of
the environment, then such an environment is called a deterministic environment. For
reference, Chess is a classic example of a deterministic environment. In chess, the rules
are well-defined, and each move made by a player has a clear and predictable outcome
based on those rules. If you move a pawn from one square to another, the resulting state
of the chessboard is entirely determined by that action, as is your opponent's response.
There's no randomness or uncertainty in the outcomes of chess moves because they
follow strict rules. In a deterministic environment like chess, knowing the current state
and the actions taken allows you to completely determine the next state.
• A stochastic environment is random and cannot be determined completely by an agent.
For reference, The stock market is an example of a stochastic environment. It's highly
influenced by a multitude of unpredictable factors, including economic events, investor
sentiment, and news. While there are patterns and trends, the exact behavior of stock
prices is inherently random and cannot be completely determined by any individual or
agent. Even with access to extensive data and analysis tools, stock market movements can
exhibit a high degree of unpredictability. Random events and market sentiment play
significant roles, introducing uncertainty.
• In a deterministic, fully observable environment, an agent does not need to worry about
uncertainty.
Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action. For example, Tic-Tac-Toe is a classic example of an
episodic environment. In this game, two players take turns placing their symbols (X or O)
on a 3x3 grid. Each move by a player is independent of previous moves, and the goal is to
form a line of three symbols horizontally, vertically, or diagonally. The game consists of a
series of one-shot actions where the current state of the board is the only thing that
matters for the next move. There's no need for the players to remember past moves
because they don't affect the current move. The game is self-contained and episodic.
• However, in a Sequential environment, an agent requires memory of past actions to
determine the next best actions. For example, Chess is an example of a sequential
environment. Unlike Tic-Tac-Toe, chess is a complex game where the outcome of each
move depends on a sequence of previous moves. In chess, players must consider the
history of the game, as the current position of pieces, previous moves, and potential
future moves all influence the best course of action. To play chess effectively, players need
to maintain a memory of past actions, anticipate future moves, and plan their strategies
accordingly. It's a sequential environment because the sequence of actions and the history
of the game significantly impact decision-making.
Single-agent vs Multi-agent
• If only one agent is involved in an environment, and operating by itself then such
an environment is called a single-agent environment. For example, Solitaire is a
classic example of a single-agent environment. When you play Solitaire, you're the
only agent involved. You make all the decisions and actions to achieve a goal,
which is to arrange a deck of cards in a specific way. There are no other agents or
players interacting with you. It's a solitary game where the outcome depends
solely on your decisions and moves. In this single-agent environment, the agent
doesn't need to consider the actions or decisions of other entities.
• However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment. For reference, A soccer match is
an example of a multi-agent environment. In a soccer game, there are two teams,
each consisting of multiple players (agents). These players work together to
achieve common goals (scoring goals and preventing the opposing team from
scoring). Each player has their own set of actions and decisions, and they interact
with both their teammates and the opposing team. The outcome of the game
depends on the coordinated actions and strategies of all the agents on the field.
It's a multi-agent environment because there are multiple autonomous entities
(players) interacting in a shared environment.
Static vs Dynamic:
• If the environment can change while an agent is deliberating, then it is called a dynamic
environment; otherwise, it is a static environment.
• Static environments are easy to deal with because an agent does not need to continue
looking at the world while deciding on an action. For reference, A crossword puzzle is an
example of a static environment. When you work on a crossword puzzle, the puzzle itself
doesn't change while you're thinking about your next move. The arrangement of clues and
empty squares remains constant throughout your problem-solving process. You can take
your time to deliberate and find the best word to fill in each blank, and the puzzle's state
remains unaltered during this process. It's a static environment because there are no
changes in the puzzle based on your deliberations.
• However, for a dynamic environment, agents need to keep looking at the world at each
action. For reference, Taxi driving is an example of a dynamic environment. When you're
driving a taxi, the environment is constantly changing. The road conditions, traffic,
pedestrians, and other vehicles all contribute to the dynamic nature of this environment.
As a taxi driver, you need to keep a constant watch on the road and adapt your actions in
real time based on the changing circumstances. The environment can change rapidly,
requiring your continuous attention and decision-making. It's a dynamic environment
because it evolves while you're deliberating and taking action.
Discrete vs Continuous:
• If there are a finite number of percepts and actions that can be performed in an
environment, then it is called a discrete environment; otherwise, it is called a continuous
environment.
• Chess is an example of a discrete environment. In chess, there are a finite number
of distinct chess pieces (e.g., pawns, rooks, knights) and a finite number of
squares on the chessboard. The rules of chess define clear, discrete moves that a
player can make. Each piece can be in a specific location on the board, and players
take turns making individual, well-defined moves. The state of the chessboard is
discrete and can be described by the positions of the pieces on the board.
• Controlling a robotic arm to perform precise movements in a factory setting is an
example of a continuous environment. In this context, the robot arm's position
and orientation can exist along a continuous spectrum. There are virtually infinite
possible positions and orientations for the robotic arm within its workspace. The
control inputs to move the arm, such as adjusting joint angles or applying forces,
can also vary continuously. Agents in this environment must operate within a
continuous state and action space, and they need to make precise, continuous
adjustments to achieve their goals.
Known vs Unknown
• Known and unknown are not actually features of an environment; they describe the
agent's state of knowledge needed to perform an action.
• In a known environment, the results of all actions are known to the agent. While in an unknown
environment, an agent needs to learn how it works in order to perform an action.
• It is quite possible for a known environment to be partially observable and for an unknown
environment to be fully observable.
• The opening theory in chess can be considered as a known environment for experienced chess
players. Chess has a vast body of knowledge regarding opening moves, strategies, and responses.
Experienced players are familiar with established openings, and they have studied various
sequences of moves and their outcomes. When they make their initial moves in a game, they have a
good understanding of the potential consequences based on their knowledge of known openings.
• Imagine a scenario where a rover or drone is sent to explore an alien planet with no prior
knowledge or maps of the terrain. In this unknown environment, the agent (rover or drone) has to
explore and learn about the terrain as it goes along. It doesn't have prior knowledge of the
landscape, potential hazards, or valuable resources. The agent needs to use sensors and data it
collects during exploration to build a map and understand how the terrain works. It operates in an
unknown environment because the results and consequences of its actions are not initially known,
and it must learn from its experiences.
Accessible vs Inaccessible
• If an agent can obtain complete and accurate information about the state of the
environment, then the environment is called accessible; otherwise, it is called inaccessible.
• For example, Imagine an empty room equipped with highly accurate temperature sensors.
These sensors can provide real-time temperature measurements at any point within the
room. An agent placed in this room can obtain complete and accurate information about
the temperature at different locations. It can access this information at any time, allowing
it to make decisions based on the precise temperature data. This environment is
accessible because the agent can acquire complete and accurate information about the
state of the room, specifically its temperature.
• For example, Consider a scenario where a satellite in space is tasked with monitoring a
specific event taking place on Earth, such as a natural disaster or a remote area's
condition. While the satellite can capture images and data from space, it cannot access
fine-grained information about the event's details. For example, it may see a forest fire
occurring but cannot determine the exact temperature at specific locations within the fire
or identify individual objects on the ground. The satellite's observations provide valuable
data, but the environment it is monitoring (Earth) is vast and complex, making it
impossible to access complete and detailed information about all aspects of the event. In
this case, the Earth's surface is an inaccessible environment for obtaining fine-grained
information about specific events.
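The contrast between the two examples can be modeled as a difference in what the agent's percept function returns: the complete state, or only a coarse summary. The state variables below are hypothetical illustrations.

```python
# Sketch contrasting accessible vs. inaccessible environments.
# The room agent reads the full state; the satellite gets only a summary.
room_state = {"temp_corner": 21.5, "temp_center": 22.0, "temp_door": 20.8}

def accessible_percept(state):
    # Accessible: the agent sees the complete, accurate state.
    return dict(state)

def satellite_percept(state):
    # Inaccessible: only a coarse aggregate is observable; detail is lost.
    avg = sum(state.values()) / len(state)
    return {"average_temp": round(avg, 1)}

full = accessible_percept(room_state)
partial = satellite_percept(room_state)
print(len(full), len(partial))  # 3 vs. 1: the satellite cannot recover per-location detail
```

In the accessible case the agent can condition its decisions on any state variable; in the inaccessible case the fine-grained values are simply not available to it, no matter how it reasons.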
Turing Test in AI
• In 1950, Alan Turing introduced a
test to check whether a machine
can think like a human or not, this
test is known as the Turing Test. In
this test, Turing proposed that a
computer can be said to be
intelligent if it can mimic human
responses under specific conditions.
• Turing Test was introduced by
Turing in his 1950 paper,
"Computing Machinery and
Intelligence," which considered the
question, "Can Machine think?"
Turing Test
• The Turing test is based on a party game, the "Imitation Game," with some
modifications. This game involves three players: one player is a computer,
another is a human responder, and the third is a human interrogator, who is
isolated from the other two players. The interrogator's job is to find out
which of the two players is the machine.
• Consider: Player A is a computer, Player B is a human, and Player C is the
interrogator. The interrogator is aware that one of them is a machine, but
he needs to identify which one on the basis of questions and their
responses.
Turing Test
• The conversation between all players is via keyboard and screen, so the
result does not depend on the machine's ability to render words as
speech.
• The test result does not depend on each answer being correct, but only on
how closely the machine's responses resemble human answers. The computer
is permitted to do everything possible to force a wrong identification by
the interrogator.
• The questions and answers can be like:
• Interrogator: Are you a computer?
• PlayerA (Computer): No
• Interrogator: Multiply two large numbers such as
(256896489*456725896)
Turing Test
• Player A: pauses for a long time and then gives a wrong answer.
• In this game, if the interrogator cannot identify which player is the
machine and which is the human, then the computer passes the test
successfully, and the machine is said to be intelligent and able to think
like a human.
• In 1991, the New York businessman Hugh Loebner announced a prize
competition, offering $100,000 for the first computer to pass the
Turing test. However, no AI program has to date come close to passing an
undiluted Turing test.
History of Turing Test
• The Turing Test, introduced by Alan Turing in 1950, is a crucial milestone in
the history of artificial intelligence (AI). It came to light in his paper titled
'Computing Machinery and Intelligence.' Turing aimed to address a
profound question: Can machines mimic human-like intelligence?
• This curiosity arose from Turing's fascination with the concept of creating
thinking machines that exhibit intelligent behavior. He proposed the Turing
Test as a practical method to determine if a machine can engage in natural
language conversations convincingly, making a human evaluator believe it's
human.
• Turing's work on this test laid the foundation for AI research and spurred
discussions about machine intelligence. It provided a framework for
evaluating AI systems. Over time, the Turing Test has evolved and remains a
topic of debate and improvement. Its historical importance in shaping AI is
undeniable, continuously motivating AI researchers and serving as a
benchmark for gauging AI advancements.
Variations of the Turing Test
• Total Turing Test: This extended version of the Turing Test goes beyond text-based
conversations. It assesses the machine's capacity to comprehend and respond to
not just words but also visual and physical cues presented by the interrogator. This
includes recognizing objects shown to it and taking requested actions in response.
Essentially, it examines if the AI can interact with the world in a way that reflects a
deeper level of understanding.
• Reverse Turing Test: In a twist on the traditional Turing Test, the roles are reversed
here. In this variation, it's the machine that plays the role of the interrogator. Its
task is to differentiate between humans and other machines based on the
responses it receives. This reversal challenges the AI to evaluate the intelligence of
others, highlighting its ability to detect artificial intelligence.
• Multimodal Turing Test: In a world where communication takes many forms, the
Multimodal Turing Test assesses AI's capability to understand and respond to
various modes of communication concurrently. It examines whether AI can
seamlessly process and respond to text, speech, images, and potentially other
modes simultaneously. This variation acknowledges the diverse ways we
communicate and tests if AI can keep up with our multifaceted interactions.
Chatbots to attempt the Turing test:
• ELIZA: ELIZA was a natural language processing computer program created
by Joseph Weizenbaum. It was built to demonstrate communication between
machines and humans, and was one of the first chatterbots to attempt
the Turing Test.
• Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was
designed to simulate a person with paranoid schizophrenia, a chronic
mental disorder, and was described as "ELIZA with attitude."
Parry was tested using a variation of the Turing Test in the early 1970s.
• Eugene Goostman: Eugene Goostman was a chatbot developed in Saint
Petersburg in 2001. The bot has competed in a number of Turing Test
contests. In June 2012, Goostman won a competition promoted
as the largest-ever Turing test contest, in which it convinced 29% of the
judges that it was human. Goostman was portrayed as a 13-year-old virtual boy.
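ELIZA worked by matching the user's input against a list of patterns and filling a canned response template with the matched fragment. A minimal sketch of that mechanism follows; the rules below are illustrative stand-ins, not Weizenbaum's original DOCTOR script.

```python
import re

# Minimal ELIZA-style pattern matcher: try each rule in order,
# and fall back to a neutral prompt when nothing matches.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]
DEFAULT = "Please go on."

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            # Insert the captured fragment into the response template.
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am tired"))        # Why do you say you are tired?
print(respond("What time is it?"))  # Please go on.
```

The sketch shows why ELIZA could appear conversational without any understanding: the reply is a surface transformation of the input, which is exactly the weakness Searle's "Chinese Room" argument later targeted.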
Features required for a machine to pass the Turing test:
• Natural language processing: NLP is required to communicate with the
interrogator in a natural human language such as English.
• Knowledge representation: To store and retrieve information during
the test.
• Automated reasoning: To use the previously stored information for
answering the questions.
• Machine learning: To adapt to new situations and detect generalized
patterns.
• Vision (for the Total Turing Test): To recognize the interrogator's actions
and other objects during the test.
• Motor control (for the Total Turing Test): To act upon objects when
requested.
Limitation of Turing Test
• Not a True Measure of Intelligence: Passing the Turing Test doesn't
guarantee genuine machine intelligence or consciousness. Critics such as
John Searle, with his "Chinese Room" argument, contend that a computer
can simulate human-like responses without any understanding or
consciousness.
• Simplicity of Test Scenarios: The Turing Test primarily focuses on
text-based interactions, which might not fully assess a machine's
capacity to comprehend and respond to the complexities of the real
world.