cst401 - Ai - Module 1 Note 3
***************************************************************************
SYLLABUS- MODULE 1
Introduction- What is Artificial Intelligence Foundation of AI, History of AI, Applications of
AI, Intelligent Agents- Agents and Environment, Good Behaviour, The concept of rationality,
nature of environment, Structure of Agents
***************************************************************************
ARTIFICIAL INTELLIGENCE
Artificial intelligence allows machines to replicate the capabilities of the human mind. From
the development of self-driving cars to the development of smart assistants like Siri and Alexa,
AI is a growing part of everyday life.
Artificial intelligence is a wide-ranging branch of computer science concerned with building
smart machines capable of performing tasks that typically require human intelligence.
➢ An intelligent entity created by humans.
➢ Capable of performing tasks intelligently without being explicitly instructed.
➢ Capable of thinking and acting rationally and humanely.
Definitions of AI fall into two groups: some measure success in terms of fidelity to human performance, while the others measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows.
Turing Test
The Turing test was proposed by Alan Turing, a computer scientist, in 1950. It is used to determine whether or not a computer (machine) can think intelligently like a human.
Imagine a game with three players: two humans and one computer. An interrogator (a human) is isolated from the other two players. The interrogator's job is to figure out which one is human and which one is a computer by asking both of them questions. To make things harder, the computer tries to make the interrogator guess wrongly; in other words, the computer tries to be as indistinguishable from a human as possible.
In the "standard interpretation" of the Turing Test, player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.
The conversation between interrogator and computer would be like this:
C(Interrogator): Are you a computer?
A(Computer): No
C: Multiply one large number by another: 158745887 * 56755647
A: After a long pause, an incorrect answer!
C: Add 5478012, 4563145
A: (Pause about 20 seconds and then give an answer)10041157
If the interrogator is not able to distinguish the answers provided by the human from those of the computer, then the computer passes the test and the machine (computer) is considered as intelligent as a human. In other words, a computer is considered intelligent if its conversation cannot easily be distinguished from a human's. The whole conversation is limited to a text-only channel such as a computer keyboard and screen.
A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a computer
Even problems with just a few hundred facts can exhaust the computational resources of any
computer unless it has some guidance as to which reasoning steps to try first.
There are four approaches to AI:
➢ The Turing Test Approach
➢ The Cognitive Modelling Approach
➢ The Laws of Thought Approach
➢ The Rational Agent Approach
Fields in AI
1. Machine Learning
Machine learning is a branch of artificial intelligence that gives a computer the capability to automatically gather data and learn from the experience of the problems or cases it has encountered, rather than being explicitly programmed to perform a given task. Machine learning emphasizes the development of algorithms that can scrutinize data and make predictions from it. A major application is in the healthcare industry, where it is used for disease diagnosis, medical scan interpretation, etc.
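As a minimal sketch of "learning from stored experience rather than explicit rules", here is a toy nearest-neighbour classifier in Python; the feature vectors and labels are made up purely for illustration:

```python
# Minimal sketch: a 1-nearest-neighbour classifier that "learns" from
# stored examples instead of being programmed with explicit rules.
# The data points and labels below are made up for illustration.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Toy "experience": (feature vector, diagnosis) pairs.
train = [((1.0, 1.0), "healthy"), ((8.0, 9.0), "disease")]

print(nearest_neighbour(train, (2.0, 1.5)))   # closest to (1.0, 1.0)
```

A new case is classified by looking up the most similar past case, which is the simplest possible form of learning from experience.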
2. Deep learning
It is the process of learning by processing and analyzing the input data through several stages until the machine discovers a single desirable output. It is also known as self-learning by machines.
The machine runs various programs and algorithms to map the raw sequence of input data to an output. By deploying algorithms such as neuroevolution, and approaches like gradient descent on a neural topology, the machine gradually approximates the unknown function f(x) relating the input x to the output y, assuming that x and y are correlated.
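A minimal sketch of the gradient-descent idea mentioned above: recovering the weight in an assumed relation y = w * x from sample pairs. All values (data, learning rate, iteration count) are illustrative:

```python
# Minimal sketch of gradient descent: recover the weight w in the
# assumed relation y = w * x from (x, y) samples. Values illustrative.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # generated with the true w = 2

w = 0.0                             # initial guess
lr = 0.01                           # learning rate
for _ in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad                  # step downhill along the gradient

print(round(w, 3))                  # → 2.0
```

Each iteration moves w a small step against the error gradient, so the estimate converges to the weight that generated the data.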
3. Neural Networks
Neural networks are the brain of artificial intelligence: computer systems modelled on the neural connections in the human brain. The artificial counterpart of a biological neuron is known as a perceptron.
A stack of perceptrons joined together makes up an artificial neural network in a machine. Before giving a desirable output, a neural network gains knowledge by processing various training examples.
With the use of different learning models, this process of analyzing data can also answer many associated queries that were previously unanswered.
Deep learning, in association with neural networks, can unfold multiple hidden layers of data, including the output layer of complex problems, and aids subfields such as speech recognition, natural language processing, and computer vision.
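The perceptron described above can be sketched in a few lines of Python. Here it is trained on the logical AND function; the learning rate and epoch count are illustrative choices:

```python
# Minimal sketch of a single perceptron learning the logical AND
# function. Learning rate and epoch count are illustrative choices.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]                  # connection weights
    b = 0.0                         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # threshold activation: fire (1) if weighted sum exceeds 0
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])   # → [0, 0, 1] pattern for AND
```

The weights are adjusted on every misclassified training example until the perceptron reproduces the AND truth table.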
4. Cognitive Computing
The purpose of this component of artificial intelligence is to initiate and accelerate interaction between humans and machines for complex task completion and problem-solving.
While working on various kinds of tasks with humans, the machines learn and understand
human behavior, sentiments in various distinctive conditions and recreate the thinking process
of humans in a computer model.
By practicing this, the machine acquires the ability to understand human language and images. Cognitive thinking combined with artificial intelligence can thus produce a product that has human-like actions and data-handling capabilities.
Cognitive computing is capable of taking accurate decisions on complex problems, so it is applied in areas that need to improve solutions at optimum cost; this is achieved by analyzing natural language and evidence-based learning.
5. Natural Language Processing
The concept behind this component is to make interaction between machines and human language seamless, so that computers become capable of delivering logical responses to human speech or queries.
Natural language processing covers both the spoken and written sides of human language, i.e., both active and passive modes of using algorithms.
Natural Language Understanding (NLU) processes and decodes the sentences and words that humans speak or write, translating them into a representation that machines can work with, while Natural Language Generation (NLG) produces human-readable language from the machine's internal data.
Chatbots and voice assistants are among the best-known applications of natural language processing.
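As a minimal illustration of turning human language into a machine-usable form, here is a bag-of-words sketch in Python; the sentence is made up, and real NLP pipelines use far richer representations:

```python
# Minimal sketch: a bag-of-words count, one of the simplest ways a
# machine can represent text. The example sentence is illustrative.

from collections import Counter

def bag_of_words(text):
    """Lower-case, split on whitespace, strip punctuation, count words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return Counter(w for w in words if w)

counts = bag_of_words("The cat sat on the mat. The mat was flat!")
print(counts["the"], counts["mat"])   # → 3 2
```

Counting word occurrences loses word order, but it already lets algorithms compare and classify documents numerically.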
6. Computer Vision
Computer vision is a vital part of artificial intelligence, as it enables the computer to automatically recognize, analyze, and interpret visual data from real-world images and videos by capturing and processing them.
It incorporates deep learning and pattern recognition to extract the content of images from any given data, including images or video files within PDF, Word, or PowerPoint documents, Excel files, graphs, pictures, etc.
Suppose we have a complex image of a bundle of things; merely seeing the image and memorizing it is not easy for everyone. Computer vision can apply a series of transformations to the image to extract fine details, such as the sharp edges of objects or any unusual design or color used.
This is done by using various algorithms by applying mathematical expressions and statistics.
The robots make use of computer vision technology to see the world and act in real-time
situations.
This component is used very widely in the healthcare industry, to analyze a patient's condition from MRI scans, X-rays, etc., and in the automobile industry, for computer-controlled vehicles and drones.
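The edge-extraction transformation mentioned above can be sketched with a toy example; the "image", the difference operator, and the helper name are illustrative, not a real library API:

```python
# Minimal sketch of one computer-vision transformation: detecting sharp
# edges. A horizontal difference is applied to a tiny made-up grayscale
# image (0 = dark, 9 = bright); the bright column boundary shows up as
# a large response.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img):
    """Absolute difference between each pixel and its right neighbour."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

print(horizontal_edges(image))   # → [[0, 9, 0], [0, 9, 0], [0, 9, 0]]
```

The large values mark exactly where the image intensity jumps, i.e., the edge between the dark and bright regions.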
FOUNDATIONS OF AI
Philosophy
Dualism: there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws
Materialism: brain’s operation according to the laws of physics constitutes the mind
Induction: general rules are acquired by exposure to repeated associations between their
elements
Logical positivism: doctrine holds that all knowledge can be characterized by logical theories
connected, ultimately, to observation sentences that correspond to sensory inputs; thus logical
positivism combines rationalism and empiricism
Mathematics
1. logic,
2. computation, and
3. probability.
Gottlob Frege: created the first-order logic that is used today
Alan Turing: characterized exactly which functions are computable; the Turing machine
Tractability: problem is called intractable if the time required to solve instances of the problem
grows exponentially with the size of the instances
• NP-completeness
Despite the increasing speed of computers, careful use of resources will characterize intelligent
systems.
Economics
Decision theory: combines probability theory with utility theory, provides a formal and
complete framework for decisions made under uncertainty
Game theory: Von Neumann and Morgenstern; a rational agent should adopt policies that are (or at least appear to be) randomized. Game theory does not offer an unambiguous prescription for selecting actions.
Neuroscience
Aristotle wrote, “Of all the animals, man has the largest brain in proportion to his size.”
Nicolas Rashevsky: the first to apply mathematical models to the study of the nervous system.
HISTORY OF AI
Artificial intelligence is not a new word and not a new technology for researchers; it is much older than you might imagine. There are even myths of mechanical men in ancient Greek and Egyptian mythology. The following are some milestones in the history of AI, from its early days to the present.
During the early decades of AI, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
A boom of AI (1980-1987)
o Year 1980: After AI winter duration, AI came back with "Expert System". Expert
systems were programmed that emulate the decision-making ability of a human expert.
o In the Year 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
o Year 2018: Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone; the lady on the other side did not notice that she was talking to a machine.
Now AI has developed to a remarkable level. The concept of Deep learning, big data, and data
science are now trending like a boom. Nowadays companies like Google, Facebook, IBM, and
Amazon are working with AI and creating amazing devices. The future of Artificial
Intelligence is inspiring and will come with high intelligence.
INTELLIGENT AGENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
o Human Agent: A human agent has eyes, ears, and other organs that work as sensors, and hands, legs, and the vocal tract that work as actuators.
o Robotic Agent: A robotic agent can have cameras and infrared range finders as sensors, and various motors as actuators.
o Software Agent: A software agent can receive keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; even we ourselves are agents.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: A sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of machines that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
Intelligent Agents:
An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
The vacuum-cleaner world is a simple example: this particular world has just two locations, squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square.
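The simple agent function just described can be written directly in Python; the action names and the percept format (location, dirt status) are assumptions made for illustration:

```python
# The two-square vacuum agent function from the text: suck if the
# current square is dirty, otherwise move to the other square.
# Action names and percept format are illustrative choices.

def vacuum_agent(location, dirty):
    """Map the percept (location, dirt status) to an action."""
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

print(vacuum_agent("A", True))    # → Suck
print(vacuum_agent("A", False))   # → Right
print(vacuum_agent("B", False))   # → Left
```

This is an agent function in its purest form: a direct mapping from the current percept to an action.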
Rational Agent:
A rational agent is an agent that has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
A rational agent is said to perform the right thing. AI is about creating rational agents for use in game theory and decision theory in various real-world scenarios.
For an AI agent, rational action is most important because, in reinforcement learning, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.
We might propose to measure performance by the amount of dirt cleaned up in a single eight-
hour shift. With a rational agent, of course, what you ask for is what you get. A rational agent
can maximize this performance measure by cleaning up the dirt, then dumping it all on the
floor, then cleaning it up again, and so on. A more suitable performance measure would reward
the agent for having a clean floor.
For example, one point could be awarded for each clean square at each time step (perhaps with
a penalty for electricity consumed and noise generated).
As a general rule, it is better to design performance measures according to what one actually
wants in the environment, rather than according to how one thinks the agent should behave.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:
➢ The performance measure that defines the criterion of success.
➢ The agent's prior knowledge of the environment.
➢ The actions that the agent can perform.
➢ The agent's percept sequence to date.
Note: Rationality differs from omniscience, because an omniscient agent knows the actual outcome of its action and acts accordingly, which is not possible in reality.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent Program
Following are the main three terms involved in the structure of an AI agent:
1. Architecture: the machinery that the agent executes on.
2. Agent function: a map from the percept sequence to an action, f: P* → A.
3. Agent program: an implementation of the agent function.
PEAS Representation
The PEAS system is used to categorize similar agents together. It states the performance measure with respect to the environment, actuators, and sensors of the respective agent. Most of the highest-performing agents are rational agents.
PEAS is a type of model on which an AI agent works upon. When we define an AI agent or
rational agent, then we can group its properties under PEAS representation model. It is made
up of four words:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
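As a hedged illustration, the two-square vacuum-cleaner world described earlier can be grouped under PEAS like this; the entries are one reasonable reading of that example, not a canonical list:

```python
# PEAS description of the two-square vacuum-cleaner agent from the
# earlier example, written as a plain Python dictionary. The specific
# entries are illustrative.

peas_vacuum = {
    "Performance": ["one point per clean square per time step",
                    "penalty for electricity consumed and noise"],
    "Environment": ["squares A and B", "dirt"],
    "Actuators":   ["left/right movement", "suction"],
    "Sensors":     ["location sensor", "dirt sensor"],
}

for component, items in peas_vacuum.items():
    print(component, "->", ", ".join(items))
```

Writing the four components out explicitly is a quick sanity check that the performance measure, environment, actuators, and sensors have all been identified before designing the agent.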
Types of Environments in AI
An environment in artificial intelligence is the surrounding of the agent. The agent takes input
from the environment through sensors and delivers the output to the environment through
actuators. There are several types of environments:
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
1. Fully Observable vs Partially Observable
• When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to deal with, as there is no need to keep track of the history of the surroundings.
The job of AI is to design an agent program that implements the agent function, the mapping from percepts to actions.
This program will run on some sort of computing device with physical sensors and actuators, called the architecture.
The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.
Agent program: takes the current percept as input from the sensors and returns an action to the actuators.
To build a rational agent in this way, we as designers must construct a table that contains the
appropriate action for every possible percept sequence.
Let P be the set of possible percepts and let T be the lifetime of the agent (the total number
of percepts it will receive)
The lookup table will contain ∑_{t=1}^{T} |P|^t entries. Consider the automated taxi: the visual input from a single camera comes in at the rate of roughly 27 megabytes per second (30 frames per second, 640 × 480 pixels with 24 bits of color information). This gives a lookup table with over 10^250,000,000,000 entries for an hour's driving.
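The camera data rate quoted above, and the explosive growth of a percept-sequence lookup table, can be checked with a few lines of arithmetic; the |P| and T values in the second part are small illustrative numbers, not the taxi's actual figures:

```python
# Checking the camera data rate: 30 frames per second of 640 x 480
# pixels with 24 bits (3 bytes) of colour per pixel.

bytes_per_frame = 640 * 480 * 3
bytes_per_second = bytes_per_frame * 30
print(bytes_per_second)          # → 27648000, i.e. roughly 27 MB/s

# The lookup table needs sum over t = 1..T of |P|**t entries; even a
# tiny percept set explodes. Illustrative values |P| = 10, T = 12:
entries = sum(10 ** t for t in range(1, 13))
print(entries)                   # → 1111111111110
```

Even with only ten possible percepts and twelve time steps, the table already needs over a trillion entries, which is why table lookup cannot scale to real percept streams.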
Even the lookup table for chess, a tiny, well-behaved fragment of the real world, would have at least 10^150 entries.
The daunting size of these tables (the number of atoms in the observable universe is less than 10^80) means that
a) no physical agent in this universe will have the space to store the table,
b) the designer would not have time to create the table,
c) no agent could ever learn all the right table entries from its experience, and
d) even if the environment is simple enough to yield a feasible table size, the
designer still has no guidance about how to fill in the table entries.
Four basic kinds of agent programs that embody the principles underlying almost all
intelligent systems:
1. Simple reflex agents;
2. Model-based reflex agents;
3. Goal-based agents; and
4. Utility-based agents
Simple reflex agents
These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
The INTERPRET-INPUT function generates an abstracted description of the current state from the percept. The RULE-MATCH function returns the first rule in the set of rules that matches the given state description. Note that the description in terms of "rules" and "matching" is purely conceptual; actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit.
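A minimal runnable sketch of this scheme, using the two-square vacuum world; the rule format and helper names mirror INTERPRET-INPUT and RULE-MATCH but are illustrative choices, not a fixed API:

```python
# Minimal sketch of a simple reflex agent for the vacuum world.
# Condition-action rules: first matching rule wins.

rules = [
    (lambda state: state["dirty"], "Suck"),
    (lambda state: state["location"] == "A", "Right"),
    (lambda state: state["location"] == "B", "Left"),
]

def interpret_input(percept):
    """Build an abstract state description from the raw percept."""
    location, dirty = percept
    return {"location": location, "dirty": dirty}

def rule_match(state, rules):
    """Return the action of the first rule whose condition matches."""
    for condition, action in rules:
        if condition(state):
            return action

def simple_reflex_agent(percept):
    return rule_match(interpret_input(percept), rules)

print(simple_reflex_agent(("A", True)))   # → Suck
print(simple_reflex_agent(("B", False)))  # → Left
```

Note that the agent consults only the current percept; it keeps no memory of earlier percepts, which is exactly the limitation discussed next.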
This will work only if the correct decision can be made on the basis of only the current
percept—that is, only if the environment is fully observable.
Even a little bit of unobservability can cause serious trouble. For example, the braking rule
given earlier assumes that the condition car-in-front-is-braking can be determined from the
current percept—a single frame of video.
This works if the car in front has a centrally mounted brake light. Infinite loops are often
unavoidable for simple reflex agents operating in partially observable environments. Escape
from infinite loops is possible if the agent can randomize its actions.
Model-based reflex agents
◦ Agents have an internal state, which is used to keep track of past states of the world.
◦ Agents have the ability to represent change in the world.
The current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.
Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program:
1. some information about how the world evolves independently of the agent, and
2. some information about how the agent's own actions affect the world.
Knowledge about "how the world works" is called a model of the world. An agent that uses such a model is called a model-based agent.
The UPDATE-STATE function is responsible for creating the new internal state description.
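A minimal sketch of a model-based reflex agent for the vacuum world, assuming an internal state that records which squares are known to be clean; the action names, state format, and update rule are illustrative:

```python
# Minimal sketch of a model-based reflex agent for the vacuum world.
# The internal state remembers which squares are already clean, which
# the raw percept alone cannot tell us about the unseen square.

state = {"A": "Unknown", "B": "Unknown"}   # agent's model of the world

def update_state(state, percept):
    """Fold the new percept into the internal world model."""
    location, dirty = percept
    state[location] = "Dirty" if dirty else "Clean"
    return state

def model_based_agent(percept):
    update_state(state, percept)
    location, dirty = percept
    if dirty:
        state[location] = "Clean"          # model: sucking cleans the square
        return "Suck"
    other = "B" if location == "A" else "A"
    if state[other] == "Clean":
        return "NoOp"                      # model says everything is clean
    return "Right" if location == "A" else "Left"

print(model_based_agent(("A", True)))   # → Suck
print(model_based_agent(("B", False)))  # → NoOp
```

Unlike the simple reflex agent, this one can stop once its model says both squares are clean, because the internal state carries information the current percept does not.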
Goal-based agents
These kinds of agents take decisions based on how far they currently are from their goal.
Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
Utility-based agents
Goals alone are not enough to generate high-quality behavior in most environments. Goals just
provide a crude binary distinction between “happy” and “unhappy” states. Because “happy”
does not sound very scientific, economists and computer scientists use the term utility instead.