
SYLLABUS- MODULE 1

Introduction: What is Artificial Intelligence, Foundations of AI, History of AI,
Applications of AI. Intelligent Agents: Agents and Environment, Good Behaviour, the
concept of rationality, nature of environment, Structure of Agents

ARTIFICIAL INTELLIGENCE
Artificial intelligence allows machines to replicate the capabilities of the human mind. From
the development of self-driving cars to the development of smart assistants like Siri and Alexa,
AI is a growing part of everyday life.
Artificial intelligence is a wide-ranging branch of computer science concerned with building
smart machines capable of performing tasks that typically require human intelligence.
 An intelligent entity created by humans.
 Capable of performing tasks intelligently without being explicitly instructed.
 Capable of thinking and acting humanly and rationally.

Definitions of AI fall into two groups: some measure success in terms of fidelity to
human performance, while others measure against an ideal performance measure, called
rationality. A system is rational if it does the "right thing," given what it knows.
Turing Test
The Turing test was developed by Alan Turing (a computer scientist) in 1950. He proposed
it as a way to determine whether or not a computer (machine) can think intelligently
like a human.

Imagine a game with three players: two humans and one computer, where an interrogator (a
human) is isolated from the other two players. The interrogator's job is to figure out
which one is human and which one is a computer by asking questions of both of them. To
make things harder, the computer tries to make the interrogator guess wrongly. In other
words, the computer tries to be as indistinguishable from a human as possible.

In the "standard interpretation" of the Turing Test, player C, the interrogator, is
given the task of trying to determine which player (A or B) is a computer and which is a
human. The interrogator is limited to using the responses to written questions to make
the determination.
The conversation between interrogator and computer would be like this:
C(Interrogator): Are you a computer?
A(Computer): No
C: Multiply one large number by another, 158745887 * 56755647
A: After a long pause, an incorrect answer!
C: Add 5478012, 4563145
A: (Pause about 20 seconds and then give an answer)10041157
If the interrogator cannot distinguish the answers provided by the human from those of
the computer, then the computer passes the test and the machine (computer) is considered
as intelligent as a human. In other words, a computer would be considered intelligent if
its conversation couldn't easily be distinguished from a human's. The whole conversation
is limited to a text-only channel such as a computer keyboard and screen.

Acting humanly: The Turing Test approach

A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer

The computer would need to possess the following capabilities:

 natural language processing to enable it to communicate successfully in English

 knowledge representation to store what it knows or hears
 automated reasoning to use the stored information to answer questions and to draw
new conclusions
 machine learning to adapt to new circumstances and to detect and extrapolate patterns
 TOTAL TURING TEST: to pass the total Turing Test, the computer will also need
computer vision to perceive objects, and robotics to manipulate objects and move
about

Thinking humanly: The cognitive modeling approach


There are three ways to determine how humans think:
 introspection—trying to catch our own thoughts as they go by
 psychological experiments—observing a person in action
 brain imaging—observing the brain in action
 cognitive science brings together computer models from AI and experimental
techniques from psychology to construct precise and testable theories of the human
mind.

Thinking rationally: The “laws of thought” approach


SYLLOGISM: an instance of a form of reasoning in which a conclusion is drawn from two
given or assumed propositions, “Socrates is a man; all men are mortal; therefore, Socrates is
mortal.”
LOGIC: the study of the laws of thought that are supposed to govern the operation of the
mind. The obstacle: it is not easy to take informal knowledge and state it in the formal
terms required by logical notation.

Even problems with just a few hundred facts can exhaust the computational resources of any
computer unless it has some guidance as to which reasoning steps to try first.

Acting rationally: The rational agent approach


An agent is just something that acts. Rational behavior means doing the right thing: the
thing that is expected to maximize goal achievement, given the available information.
A computer agent does the following
 operate autonomously,
 perceive their environment,
 persist over a prolonged time period,
 adapt to change, and
 create and pursue goals
A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome. Correct inference is not all of rationality: in
some situations there is no provably correct thing to do, yet something must still be
done. There are also ways of acting rationally that cannot be said to involve inference.
Requirements for AI
 NATURAL LANGUAGE PROCESSING
o To enable it to communicate successfully
 KNOWLEDGE REPRESENTATION
o Knowledge representation to store what it knows or hears;
 AUTOMATED REASONING
o Automated reasoning to use the stored information to answer questions and to
draw new conclusions
 MACHINE LEARNING
o machine learning to adapt to new circumstances and to detect and extrapolate
patterns.
 COMPUTER VISION
o Computer vision to perceive objects
 ROBOTICS
o Robotics to manipulate objects and move about

We can evaluate whether an artificial intelligence is acting intelligently using the
following approaches:
 Turing Test
 The Cognitive Modelling Approach
 The Law of Thought Approach
 The Rational Agent Approach

Fields in AI

1. Machine Learning
Machine learning is a feature of artificial intelligence that gives a computer the
capability to gather data automatically and learn from the problems or cases it has
encountered, rather than being explicitly programmed to perform a given task.

Machine learning emphasizes the development of algorithms that can scrutinize data and
make predictions from it. A major use is in the healthcare industry, where it is applied
to disease diagnosis, medical scan interpretation, etc.

Pattern recognition is a sub-category of machine learning. It can be described as the
automatic recognition of patterns in raw data using computer algorithms.

2. Deep learning
It is the process of learning by processing and analyzing input data through several
methods until the machine discovers the single desirable output. It is also known as
self-learning by machines.

The machine runs various programs and algorithms to map the raw sequence of input data
to output. By deploying algorithms such as neuroevolution, and other approaches such as
gradient descent on a neural topology, the machine learns to approximate the unknown
function f that maps input x to output y, assuming that x and y are correlated.

3. Neural Networks
Neural networks are the brain of artificial intelligence: computer systems modeled on
the neural connections in the human brain. The artificial counterpart of a biological
neuron is known as a perceptron.

Stacking various perceptrons together makes up an artificial neural network in a
machine. Before giving a desirable output, a neural network gains knowledge by
processing many training examples.
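As a small sketch (not part of the original notes; the task and hyperparameters are
illustrative), here is a single perceptron trained with the classic perceptron learning
rule on the logical AND function:

# A single perceptron: weighted sum of inputs followed by a step activation
def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
for (x1, x2), t in samples:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)  # matches AND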

With the use of different learning models, this process of analyzing data can also
answer many associated queries that were previously unanswered.

Deep learning, in association with neural networks, can unfold the multiple layers of
hidden data, including the output layer of complex problems, and aids subfields such as
speech recognition, natural language processing, and computer vision.

4. Cognitive Computing
The purpose of this component of artificial intelligence is to initiate and accelerate
the interaction between humans and machines for complex task completion and
problem-solving.

While working on various kinds of tasks with humans, the machines learn and understand
human behavior, sentiments in various distinctive conditions and recreate the thinking process
of humans in a computer model.

By practicing this, the machine acquires the ability to understand human language and
image reflections. Thus, cognitive thinking along with artificial intelligence can
produce a product that has human-like actions and data-handling capabilities.

Cognitive computing is capable of making accurate decisions in the face of complex
problems. Thus it is applied in areas that need to improve solutions at optimal cost,
which is achieved by analyzing natural language and through evidence-based learning.

5. Natural Language Processing


With this feature of artificial intelligence, computers can interpret, identify, locate, and process
human language and speech.

The concept behind introducing this component is to make the interaction between
machines and human language seamless, so that computers become capable of delivering
logical responses to human speech or queries.

Natural language processing covers both the verbal and written forms of human language,
using algorithms in both active and passive modes.

Natural Language Understanding (NLU) decodes the sentences and words that humans speak
or write, translating them into representations that machines can process, while
Natural Language Generation (NLG) produces sentences and words in human language from
machine representations.

Applications such as chatbots, voice assistants, and machine translation are among the
best examples of natural language processing.

6. Computer Vision
Computer vision is a vital part of artificial intelligence, as it enables the computer
to automatically recognize, analyze, and interpret visual data from real-world images
and visuals by capturing and processing them.

It incorporates the skills of deep learning and pattern recognition to extract the
content of images from any given data, including image or video files within PDF, Word,
PowerPoint, or Excel documents, as well as graphs, pictures, etc.

Suppose we have a complex image containing a bundle of things; merely looking at the
image and memorizing it is not easy for everyone. Computer vision can apply a series of
transformations to the image to extract bit- and byte-level detail, such as the sharp
edges of objects, or unusual designs or colors.

This is done by using various algorithms that apply mathematical expressions and
statistics. Robots make use of computer vision technology to see the world and act in
real-time situations.

This component is used very widely in the healthcare industry to analyze a patient's
health condition from MRI scans, X-rays, etc. It is also used in the automobile industry
for computer-controlled vehicles and drones.

THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

1. Philosophy

 Can formal rules be used to draw valid conclusions?


 How does the mind arise from a physical brain?
 Where does knowledge come from?
 How does knowledge lead to action?

Rationalism: power of reasoning in understanding the world

Dualism: there is a part of the human mind (or soul or spirit) that is outside of nature, exempt
from physical laws

Materialism: brain’s operation according to the laws of physics constitutes the mind

Induction: general rules are acquired by exposure to repeated associations between their
elements

Logical positivism: this doctrine holds that all knowledge can be characterized by
logical theories connected, ultimately, to observation sentences that correspond to
sensory inputs; thus, logical positivism combines rationalism and empiricism

Confirmation theory: attempts to analyze the acquisition of knowledge from experience

2. Mathematics

 What are the formal rules to draw valid conclusions?


 What can be computed?

 How do we reason with uncertain information?
Three fundamental areas in mathematics for AI are

1. logic,
2. computation, and
3. probability.

George Boole: worked out the details of propositional, or Boolean, logic

Gottlob Frege: created the first-order logic that is used today

Euclid’s algorithm: first nontrivial algorithm

Kurt Gödel: incompleteness theorem

Alan Turing: characterized exactly which functions are computable; the Turing machine

Tractability: a problem is called intractable if the time required to solve instances of
the problem grows exponentially with the size of the instances

 NP-completeness

Despite the increasing speed of computers, careful use of resources will characterize intelligent
systems.

Theory of probability: deals with uncertain measurements and incomplete theories.

3. Economics

• How should we make decisions so as to maximize payoff?


• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?

Economics is the study of how people make choices that lead to preferred outcomes.

Decision theory: combines probability theory with utility theory, provides a formal and
complete framework for decisions made under uncertainty

Game theory: Von Neumann and Morgenstern showed that a rational agent should adopt
policies that are (or at least appear to be) randomized. Game theory does not offer an
unambiguous prescription for selecting actions.

4. Neuroscience

How do brains process information?

Neuroscience is the study of the nervous system, particularly the brain

Aristotle wrote, “Of all the animals, man has the largest brain in proportion to his size.”

Nicolas Rashevsky: the first to apply mathematical models to the study of the nervous system.
History of Artificial Intelligence

The idea of “artificial intelligence” goes back thousands of years, to ancient philosophers
considering questions of life and death. In ancient times, inventors made things called
“automatons” which were mechanical and moved independently of human intervention. The
word “automaton” comes from ancient Greek, and means “acting of one’s own will.” One of the
earliest records of an automaton comes from 400 BCE and refers to a mechanical pigeon created
by a friend of the philosopher Plato. Many years later, one of the most famous automatons was
created by Leonardo da Vinci around the year 1495.

So while the idea of a machine being able to function on its own is ancient, for the purposes of
this article, we’re going to focus on the 20th century, when engineers and scientists began to
make strides toward our modern-day AI.

Groundwork for AI: 1900-1950

In the early 1900s, there was a lot of media created that centered around the idea of
artificial humans. So much so that scientists of all sorts started asking the question: is it possible
to create an artificial brain? Some creators even made some versions of what we now call
“robots” (and the word was coined in a Czech play in 1921) though most of them were relatively
simple. These were steam-powered for the most part, and some could make facial expressions
and even walk.

Dates of note:

 1921: Czech playwright Karel Čapek released a science fiction play “Rossum’s
Universal Robots” which introduced the idea of “artificial people” which he named
robots. This was the first known use of the word.
 1929: Japanese professor Makoto Nishimura built the first Japanese robot,
named Gakutensoku.
 1949: Computer scientist Edmund Callis Berkeley published the book "Giant Brains, or
Machines that Think," which compared the newer models of computers to human brains.
Birth of AI: 1950-1956

This range of time was when interest in AI really came to a head. Alan Turing published
his work "Computing Machinery and Intelligence," which proposed what eventually became
known as the Turing Test, a measure that experts used to gauge computer intelligence.
The term "artificial intelligence" was coined and came into popular use.

Dates of note:

 1950: Alan Turing published "Computing Machinery and Intelligence," which proposed a
test of machine intelligence called the Imitation Game.
 1952: A computer scientist named Arthur Samuel developed a program to play checkers,
the first program ever to learn a game independently.
 1955: John McCarthy held a workshop at Dartmouth on "artificial intelligence," the
first use of the term, and the occasion through which it came into popular usage.

AI maturation: 1957-1979

The time between when the phrase “artificial intelligence” was created, and the 1980s was a
period of both rapid growth and struggle for AI research. The late 1950s through the 1960s was
a time of creation. From programming languages that are still in use to this day to books and
films that explored the idea of robots, AI became a mainstream idea quickly.

The 1970s showed similar improvements, from the first anthropomorphic robot being built
in Japan to the first example of an autonomous vehicle being built by an engineering
grad student. However, it was also a time of struggle for AI research, as the U.S.
government showed little interest in continuing to fund AI research.

Notable dates include:

 1958: John McCarthy created LISP (acronym for List Processing), the first programming
language for AI research, which is still in popular use to this day.
 1959: Arthur Samuel coined the term "machine learning" in a speech about teaching
machines to play checkers better than the humans who programmed them.
 1961: The first industrial robot, Unimate, started working on an assembly line at
General Motors in New Jersey, tasked with transporting die castings and welding parts
onto cars (work that was deemed too dangerous for humans).
 1965: Edward Feigenbaum and Joshua Lederberg created the first “expert system” which
was a form of AI programmed to replicate the thinking and decision-making abilities of
human experts.
 1966: Joseph Weizenbaum created the first "chatterbot" (later shortened to chatbot),
ELIZA, a mock psychotherapist that used natural language processing (NLP) to converse
with humans.
 1968: Soviet mathematician Alexey Ivakhnenko published "Group Method of Data Handling"
in the journal "Avtomatika," which proposed a new approach to AI that would later become
what we now know as "Deep Learning."
 1973: An applied mathematician named James Lighthill gave a report to the British
Science Council, underlining that strides were not as impressive as those that had been
promised by scientists, which led to much-reduced support and funding for AI research
from the British government.
 1979: The Stanford Cart, created by James L. Adams in 1961, became one of the first
examples of an autonomous vehicle. In '79, it successfully navigated a room full of
chairs without human interference.
 1979: The American Association of Artificial Intelligence which is now known as
the Association for the Advancement of Artificial Intelligence (AAAI) was founded.
AI boom: 1980-1987

Most of the 1980s was a period of rapid growth and interest in AI, now labeled the "AI
boom." This came from both breakthroughs in research and additional government funding
to support the researchers. Deep Learning techniques and the use of expert systems
became more popular, both of which allowed computers to learn from their mistakes and
make independent decisions.

Notable dates in this time period include:

 1980: First conference of the AAAI was held at Stanford.


 1980: The first expert system came into the commercial market, known as XCON
(expert configurer). It was designed to assist in the ordering of computer systems by
automatically picking components based on the customer’s needs.
 1981: The Japanese government allocated $850 million (over $2 billion in today's
money) to the Fifth Generation Computer project. Its aim was to create computers that
could translate, converse in human language, and express reasoning on a human level.
 1984: The AAAI warned of an incoming "AI Winter," in which funding and interest would
decrease and research would become significantly more difficult.
 1985: An autonomous drawing program known as AARON is demonstrated at the AAAI
conference.
 1986: Ernst Dickmanns and his team at Bundeswehr University Munich created and
demonstrated the first driverless car (or robot car). It could drive up to 55 mph on
roads that didn't have other obstacles or human drivers.
 1987: Commercial launch of Alacrity by Alactrious Inc. Alacrity was the first strategy
managerial advisory system, and used a complex expert system with 3,000+ rules.
AI winter: 1987-1993

As the AAAI warned, an AI Winter came. The term describes a period of low consumer, public,
and private interest in AI which leads to decreased research funding, which, in turn, leads to few
breakthroughs. Both private investors and the government lost interest in AI and halted their
funding due to high cost versus seemingly low return. This AI Winter came about because of
some setbacks in the machine market and expert systems, including the end of the Fifth
Generation project, cutbacks in strategic computing initiatives, and a slowdown in the
deployment of expert systems.

Notable dates include:

 1987: The market for specialized LISP-based hardware collapsed due to cheaper and
more accessible competitors that could run LISP software, including those offered by
IBM and Apple. This caused many specialized LISP companies to fail as the technology
was now easily accessible.
 1988: A computer programmer named Rollo Carpenter invented the chatbot
Jabberwacky, which he programmed to provide interesting and entertaining conversation
to humans.
AI agents: 1993-2011

Despite the lack of funding during the AI Winter, the early 90s showed some impressive strides
forward in AI research, including the introduction of the first AI system that could beat a
reigning world champion chess player. This era also introduced AI into everyday life via
innovations such as the first Roomba and the first commercially-available speech recognition
software on Windows computers.

The surge in interest was followed by a surge in funding for research, which allowed even more
progress to be made.

Notable dates include:

 1997: Deep Blue (developed by IBM) beat the world chess champion, Garry Kasparov, in a
highly publicized match, becoming the first program to beat a human chess champion.
 1997: Speech recognition software developed by Dragon Systems was released for
Windows.
 2000: Professor Cynthia Breazeal developed the first robot that could simulate human
emotions with its face, which included eyes, eyebrows, ears, and a mouth. It was called
Kismet.
 2002: The first Roomba was released.
 2003: NASA landed two rovers on Mars (Spirit and Opportunity), and they navigated the
surface of the planet without human intervention.
 2006: Companies such as Twitter, Facebook, and Netflix started utilizing AI as a part of
their advertising and user experience (UX) algorithms.
 2010: Microsoft launched the Xbox 360 Kinect, the first gaming hardware designed to
track body movement and translate it into gaming directions.
 2011: Watson, an NLP computer created by IBM and programmed to answer questions, won
Jeopardy! against two former champions in a televised game.
 2011: Apple released Siri, the first popular virtual assistant.
Artificial General Intelligence: 2012-present

That brings us to the most recent developments in AI, up to the present day. We've seen
a surge in common-use AI tools, such as virtual assistants, search engines, etc. This
time period also popularized Deep Learning and Big Data.

Notable dates include:

 2012: Two researchers from Google (Jeff Dean and Andrew Ng) trained a neural
network to recognize cats by showing it unlabeled images and no background
information.
 2015: Elon Musk, Stephen Hawking, and Steve Wozniak (and over 3,000 others) signed an
open letter urging the world's governments to ban the development (and later, the use)
of autonomous weapons for purposes of war.
 2016: Hanson Robotics created a humanoid robot named Sophia, who became known as
the first “robot citizen” and was the first robot created with a realistic human appearance
and the ability to see and replicate emotions, as well as to communicate.
 2017: Facebook programmed two AI chatbots to converse and learn how to negotiate,
but as they went back and forth they ended up forgoing English and developing their
own language, completely autonomously.
 2018: Alibaba's language-processing AI beat human performance on a Stanford reading
and comprehension test.
 2019: Google's AlphaStar reached Grandmaster on the video game StarCraft 2,
outperforming all but 0.2% of human players.
 2020: OpenAI started beta testing GPT-3, a model that uses Deep Learning to create
code, poetry, and other such language and writing tasks. While not the first of its
kind, it is the first to create content almost indistinguishable from that created by
humans.
 2021: OpenAI developed DALL-E, which can process and understand images enough to
produce accurate captions, moving AI one step closer to understanding the visual world.

INTELLIGENT AGENTS

An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators. An agent runs in a cycle of
perceiving, thinking, and acting. An agent can be:

o Human agent: a human agent has eyes, ears, and other organs that work as sensors, and
hands, legs, and a vocal tract that work as actuators.
o Robotic agent: a robotic agent can have cameras and infrared range finders as sensors
and various motors as actuators.
o Software agent: a software agent can take keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.

Hence the world around us is full of agents, such as thermostats, cellphones, and
cameras; even we ourselves are agents.

Before moving forward, we should first know about sensors, effectors, and actuators.

Sensor: a sensor is a device that detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.

Actuators: actuators are the components of machines that convert energy into motion.
The actuators are responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.

Effectors: effectors are the devices that affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screens.

Intelligent Agents:

An intelligent agent is an autonomous entity that acts upon an environment using sensors
and actuators to achieve goals. An intelligent agent may learn from the environment to
achieve its goals. A thermostat is an example of a (very simple) intelligent agent.

Following are the four main rules for an AI agent:

o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observations must be used to make decisions.
o Rule 3: Decisions should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

The Vacuum Cleaner World

This particular world has just two locations: squares A and B. The vacuum agent perceives
which square it is in and whether there is dirt in the square. It can choose to move left, move
right, suck up the dirt, or do nothing. One very simple agent function is the following: if the
current square is dirty, then suck; otherwise, move to the other square.

Percepts: location and contents, e.g., [A, Dirty]

Actions: Left, Right, Suck, NoOp

Agent's function → look-up table
◦ For many agents this would be a very large table
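The agent function just described can be written directly in a few lines of Python. This
is a small sketch of the rule stated above; the encoding of a percept as a (location,
status) pair is our own choice:

# The two-square vacuum world agent function described above: if the
# current square is dirty, suck; otherwise move to the other square.
def vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

# The same behavior written out as the look-up table mentioned above
table = {('A', 'Clean'): 'Right', ('A', 'Dirty'): 'Suck',
         ('B', 'Clean'): 'Left',  ('B', 'Dirty'): 'Suck'}

print(vacuum_agent(('A', 'Dirty')))  # Suck
print(table[('B', 'Clean')])         # Left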

Rational Agent:

A rational agent is an agent that has clear preferences, models uncertainty, and acts in
a way that maximizes its performance measure over all possible actions.

A rational agent is said to perform the right things. AI is about creating rational
agents, which are used in game theory and decision theory for various real-world
scenarios.

For an AI agent, rational action is most important because in AI's reinforcement
learning algorithms, the agent gets a positive reward for each best possible action and
a negative reward for each wrong action.

Vacuum Cleaner Revisited

We might propose to measure performance by the amount of dirt cleaned up in a single eight-
hour shift. With a rational agent, of course, what you ask for is what you get. A rational agent
can maximize this performance measure by cleaning up the dirt, then dumping it all on the
floor, then cleaning it up again, and so on. A more suitable performance measure would reward
the agent for having a clean floor.

For example, one point could be awarded for each clean square at each time step (perhaps with
a penalty for electricity consumed and noise generated).

As a general rule, it is better to design performance measures according to what one actually
wants in the environment, rather than according to how one thinks the agent should behave.
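Such a performance measure can be sketched in Python; the move penalty below is an
assumption for illustration, since the notes only suggest that a penalty might exist:

# One point per clean square at each time step, minus a small assumed
# penalty for each move (electricity consumed / noise generated).
def performance(history, move_penalty=0.1):
    total = 0.0
    for squares, action in history:  # one (world state, action) per time step
        total += sum(1 for status in squares.values() if status == 'Clean')
        if action in ('Left', 'Right'):
            total -= move_penalty
    return total

history = [({'A': 'Dirty', 'B': 'Clean'}, 'Suck'),
           ({'A': 'Clean', 'B': 'Clean'}, 'Right')]
print(performance(history))  # 1 + 2 - 0.1 = 2.9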

Note: Rational agents in AI are very similar to intelligent agents.

Rationality:

The rationality of an agent is measured by its performance measure. Rationality can be
judged on the basis of the following points:

o The performance measure, which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts.

Note: rationality differs from omniscience, because an omniscient agent knows the actual
outcome of its actions and can act accordingly, which is not possible in reality.

Structure of an AI Agent

The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It can be
viewed as:

Agent = Architecture + Agent Program

Following are the three main terms involved in the structure of an AI agent:

Architecture: the machinery that an AI agent executes on.

Agent function: a map from a percept sequence to an action:

f : P* → A

Agent program: an implementation of the agent function. The agent program executes on
the physical architecture to produce the function f.

PEAS Representation

The PEAS system is used to categorize similar agents together. It describes the
performance measure, environment, actuators, and sensors of the agent in question. Most
of the highest-performing agents are rational agents.

PEAS stands for Performance measure, Environment, Actuators, Sensors.

PEAS is a model under which an AI agent is specified. When we define an AI agent or
rational agent, we can group its properties under the PEAS representation model. It is
made up of four terms:

o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors

Here the performance measure is the objective for the success of the agent's behavior.

PEAS for self-driving cars:

For a self-driving car, the PEAS representation will be:

Performance: Safety, time, legal drive, comfort

Environment: Roads, other vehicles, road signs, pedestrian

Actuators: Steering, accelerator, brake, signal, horn

Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
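Since a PEAS description is just structured data, it can be sketched as a small Python
data class; the field names here are our own, chosen for illustration:

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=['Safety', 'Time', 'Legal drive', 'Comfort'],
    environment=['Roads', 'Other vehicles', 'Road signs', 'Pedestrians'],
    actuators=['Steering', 'Accelerator', 'Brake', 'Signal', 'Horn'],
    sensors=['Camera', 'GPS', 'Speedometer', 'Odometer', 'Accelerometer', 'Sonar'],
)
print(self_driving_car.performance)  # ['Safety', 'Time', 'Legal drive', 'Comfort']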

Example of Agents with their PEAS representation

Types of Environments in AI
An environment in artificial intelligence is the surroundings of the agent. The agent
takes input from the environment through sensors and delivers output to the environment
through actuators. There are several types of environments:

 Fully Observable vs Partially Observable


 Deterministic vs Stochastic
 Competitive vs Collaborative
 Single-agent vs Multi-agent
 Static vs Dynamic
 Discrete vs Continuous
 Episodic vs Sequential
 Known vs Unknown

1. Fully Observable vs Partially Observable


 When an agent's sensors can sense or access the complete state of the environment
at each point in time, the environment is said to be fully observable; otherwise
it is partially observable.
 A fully observable environment is easy to deal with, as there is no need to keep
track of the history of the surroundings.
 An environment is called unobservable when the agent has no sensors at all.
 Examples:
 Chess – the board is fully observable, and so are the opponent’s moves.
 Driving – the environment is partially observable because what’s
around the corner is not known.
2. Deterministic vs Stochastic
 When the next state of the environment is completely determined by the agent's
current state and action, the environment is said to be deterministic.
 A stochastic environment is random in nature: the next state is not unique and
cannot be completely determined by the agent.
 Examples:
 Chess – there are only a limited number of possible moves for a piece in the
current state, and these moves can be determined.
 Self-driving cars – the outcomes of a self-driving car's actions are not
unique; they vary from time to time.
3. Competitive vs Collaborative
 An agent is said to be in a competitive environment when it competes against
another agent to optimize the output.
 The game of chess is competitive as the agents compete with each other to win
the game which is the output.
 An agent is said to be in a collaborative environment when multiple agents
cooperate to produce the desired output.
 When multiple self-driving cars are found on the roads, they cooperate with each
other to avoid collisions and reach their destination which is the output desired.
4. Single-agent vs Multi-agent
 An environment consisting of only one agent is said to be a single-agent
environment.
 A person left alone in a maze is an example of the single-agent system.
 An environment involving more than one agent is a multi-agent environment.
 The game of football is multi-agent as it involves 11 players in each team.
5. Dynamic vs Static
 An environment that keeps constantly changing while the agent is carrying out some
action is said to be dynamic.
 A roller coaster ride is dynamic as it is set in motion and the environment keeps
changing every instant.
 An idle environment with no change in its state is called a static environment.
 An empty house is static as there’s no change in the surroundings when an agent
enters.
6. Discrete vs Continuous
 If an environment consists of a finite number of actions that can be deliberated in
the environment to obtain the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The number
of moves might vary with every game, but still, it’s finite.
 An environment in which the actions performed cannot be numbered, i.e., is not
discrete, is said to be continuous.
 Self-driving cars are an example of continuous environments as their actions are
driving, parking, etc. which cannot be numbered.
7. Episodic vs Sequential

 In an episodic task environment, each of the agent's actions is divided into
atomic incidents or episodes. There is no dependency between current and
previous incidents: in each incident, the agent receives input from the environment
and then performs the corresponding action.
 Example: consider a pick-and-place robot, which is used to detect defective
parts on a conveyor belt. Each time, the robot (agent) makes a decision about the
current part; there is no dependency between current and previous decisions.
 In a sequential environment, previous decisions can affect all future
decisions. The next action of the agent depends on what action it has taken
previously and what action it is supposed to take in the future.
 Example:
 Checkers – where a previous move can affect all the following moves.
8. Known vs Unknown
 In a known environment, the outcomes of all probable actions are given. In an
unknown environment, by contrast, the agent has to gain knowledge about how the
environment works before it can make good decisions.

Structure of Agent- Types of Agent Programs

The job of AI is to design an agent program that implements the agent function: the
mapping from percepts to actions.

The program will run on some sort of computing device with physical sensors and
actuators, called the architecture:

agent = architecture + program

The architecture makes the percepts from the sensors available to the program, runs the
program, and feeds the program's action choices to the actuators as they are generated.

Agent program: takes the current percept as input from the sensors and returns an action
to the actuators.

Agent function: takes the entire percept history.

To build a rational agent in this way, we as designers must construct a table that contains the
appropriate action for every possible percept sequence.

Let P be the set of possible percepts and let T be the lifetime of the agent (the total
number of percepts it will receive).

The lookup table will contain ∑_{t=1}^{T} |P|^t entries. Consider the automated taxi:
the visual input from a single camera comes in at the rate of roughly 27 megabytes per
second (30 frames per second, 640 × 480 pixels with 24 bits of color information). This
gives a lookup table with over 10^250,000,000,000 entries for an hour's driving.
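To see how quickly this sum blows up, here is a small Python sketch of the formula; the
values of |P| and T are made up for illustration:

# Number of entries in the lookup table: sum of |P|^t for t = 1..T
def lookup_table_size(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Even tiny, made-up numbers explode: 10 possible percepts, 20 time steps
print(lookup_table_size(10, 20))  # about 1.1e20 entries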

Even the lookup table for chess, a tiny, well-behaved fragment of the real world, would
have at least 10^150 entries.

The daunting size of these tables (the number of atoms in the observable universe is
less than 10^80) means that

a) no physical agent in this universe will have the space to store the table,
b) the designer would not have time to create the table,
c) no agent could ever learn all the right table entries from its experience, and
d) even if the environment is simple enough to yield a feasible table size, the
designer still has no guidance about how to fill in the table entries.

Types of Agent Programs

Four basic kinds of agent programs that embody the principles underlying almost all
intelligent systems:
1. Simple reflex agents;
2. Model-based reflex agents;
3. Goal-based agents; and
4. Utility-based agents

Simple reflex agents

Select actions on the basis of the current percept, ignoring the rest of the percept history

Agents do not have memory of past world states or percepts.

So, actions depend solely on current percept.

Action becomes a “reflex.”

Uses condition-action rules.

 The INTERPRET-INPUT function generates an abstracted description of the current state
from the percept.
 The RULE-MATCH function returns the first rule in the set of rules that matches the
given state description. Note that the description in terms of "rules" and "matching" is
purely conceptual; actual implementations can be as simple as a collection of logic
gates implementing a Boolean circuit.
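A minimal Python rendering of this structure (a sketch, using the vacuum world rules
above as illustrative condition-action rules):

# INTERPRET-INPUT abstracts the percept into a state description;
# RULE-MATCH returns the first rule whose condition matches that state.
def interpret_input(percept):
    location, status = percept
    return {'location': location, 'dirty': status == 'Dirty'}

def rule_match(state, rules):
    for condition, action in rules:
        if condition(state):
            return action
    return 'NoOp'  # no rule matched

RULES = [
    (lambda s: s['dirty'], 'Suck'),
    (lambda s: s['location'] == 'A', 'Right'),
    (lambda s: s['location'] == 'B', 'Left'),
]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rule_match(state, RULES)

print(simple_reflex_agent(('A', 'Dirty')))  # Suck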

This will work only if the correct decision can be made on the basis of only the current
percept—that is, only if the environment is fully observable.

Even a little bit of unobservability can cause serious trouble. For example, consider a
braking rule that assumes the condition car-in-front-is-braking can be determined from
the current percept (a single frame of video). This works only if the car in front has a
centrally mounted brake light. Infinite loops are often unavoidable for simple reflex
agents operating in partially observable environments; escape from infinite loops is
possible if the agent can randomize its actions.

Model-based reflex agents

It works by finding a rule whose condition matches the current situation

Key difference (wrt simple reflex agents):

◦ Agents have internal state, which is used to keep track of past states of the
world.
◦ Agents have the ability to represent change in the World.

The current state is stored inside the agent which maintains some kind of structure describing
the part of the world which cannot be seen.

Keeping internal state information up to date as time goes by requires two kinds of
knowledge to be encoded in the agent program:

1. we need some information about how the world evolves independently of the
agent
2. we need some information about how the agent’s own actions affect the world

Knowledge about "how the world works" is called a model of the world. An agent that uses
such a model is called a model-based agent.

The UPDATE-STATE function is responsible for creating the new internal state
description.
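An illustrative sketch in Python (not the textbook's code): this agent keeps an internal
model of both squares of the vacuum world, updating it from each percept and from the
predicted effect of its own actions. Unlike the simple reflex agent, it can stop once
its model says everything is clean.

# Model-based reflex agent for the vacuum world. Internal state records
# what the agent believes about squares it cannot currently see.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.world = {'A': 'Unknown', 'B': 'Unknown'}  # internal model

    def __call__(self, percept):
        location, status = percept
        self.world[location] = status       # UPDATE-STATE from the percept
        if status == 'Dirty':
            self.world[location] = 'Clean'  # model: Suck will clean this square
            return 'Suck'
        if self.world['A'] == self.world['B'] == 'Clean':
            return 'NoOp'                   # model says everything is clean
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuumAgent()
print(agent(('A', 'Clean')))  # Right: B might still be dirty
print(agent(('B', 'Clean')))  # NoOp: both squares believed clean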

Goal-based agents

These kinds of agents take decisions based on how far they are currently from their goal

Key difference with respect to model-based agents: in addition to state information,
goal-based agents have goal information that describes desirable situations to be
achieved.

Search and planning are the subfields of AI devoted to finding action sequences that achieve
the agent’s goals

Agents of this kind take future events into consideration.

What sequence of actions can I take to achieve certain goals?

Choose actions so as to (eventually) achieve a (given or computed) goal
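As a hedged sketch of this idea (the state space and action names are invented for
illustration), a goal-based agent can use breadth-first search to find an action
sequence that reaches a goal state:

from collections import deque

def plan(start, goal, successors):
    # Breadth-first search: returns a shortest action sequence to the goal
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

# Toy world: positions 0..4 on a line; the goal is position 3
def successors(s):
    return [(a, s + d) for a, d in (('Right', 1), ('Left', -1)) if 0 <= s + d <= 4]

print(plan(0, 3, successors))  # ['Right', 'Right', 'Right']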

Utility-based agents

Goals alone are not enough to generate high-quality behavior in most environments; goals
just provide a crude binary distinction between "happy" and "unhappy" states. Because
"happy" does not sound very scientific, economists and computer scientists use the term
utility instead.

An agent's utility function is essentially an internalization of the performance
measure. If the internal utility function and the external performance measure are in
agreement, then an agent that chooses actions to maximize its utility will be rational
according to the external performance measure.
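A minimal sketch of action selection by maximum expected utility; the outcome
probabilities and utility values below are invented for illustration:

# A utility-based agent picks the action with the highest expected utility:
# EU(a) = sum over outcomes of P(outcome | a) * U(outcome).
def expected_utility(action, outcomes, utility):
    return sum(p * utility[s] for p, s in outcomes[action])

def choose_action(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: driving fast risks a crash; driving slow is safe but slower
outcomes = {'fast': [(0.9, 'arrived early'), (0.1, 'crash')],
            'slow': [(1.0, 'arrived late')]}
utility = {'arrived early': 10, 'arrived late': 6, 'crash': -100}

print(choose_action(['fast', 'slow'], outcomes, utility))  # slow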
