
ARTIFICIAL INTELLIGENCE

Semester 5 (CSC503)
Edition : June 2023

Computer Science and Engineering (Data Science)
Computer Science & Engineering (Artificial Intelligence and Machine Learning)
Artificial Intelligence and Data Science
Artificial Intelligence and Machine Learning
Data Engineering

Semester 7 (ELDO7013) : Electronics Engineering
Semester 6 (ECC602) : Electronics and Computer Science

Prof. R. M. Baphana
Adjunct Faculty at (C.O.E.P.) Pune (Teaching AI & Robotics for M. Tech. & Ph.D. students)

With Solved Latest University Question Papers
Lab Included

Tech-Neo Publications
Where Authors Inspire Innovation
A Sachin Shah Venture
www.techneobooks.in | [email protected]
University of Mumbai

ARTIFICIAL INTELLIGENCE

Semester 5 :  ► Computer Science and Engineering (Data Science)
(CSC503)      ► Computer Science & Engineering (Artificial Intelligence and Machine Learning)
              ► Artificial Intelligence and Data Science
              ► Artificial Intelligence and Machine Learning
              ► Data Engineering

Semester 7 :  Department Optional Course III
(ELDO7013)    ► Electronics Engineering

Strictly as per the New Syllabus of Mumbai University w.e.f. academic year 2022-2023
Prof. R. M. Baphana

Adjunct Faculty,
Government College of Engineering,
Pune (C.O.E.P.)
(Teaching Artificial Intelligence and Robotics for M. Tech. & Ph.D. students)

Tech-Neo Publications (MS-126)
Where Authors Inspire Innovation
A Sachin Shah Venture
Syllabus ...
University of Mumbai
Artificial Intelligence (Code : CSC503)

Semester 5
• Computer Science and Engineering (Data Science)  • Computer Science & Engineering (Artificial Intelligence and Machine Learning)  • Artificial Intelligence and Data Science  • Artificial Intelligence and Machine Learning  • Data Engineering

Course Code : CSC503 | Course Name : Artificial Intelligence | Credit : 03
Pre-requisite : C Programming
Course Objectives : The course aims
1. To gain perspective of AI and its foundations.
2. To study different agent architectures and properties of the environment.
3. To understand the basic principles of AI towards problem solving, inference, perception, knowledge representation, and learning.
4. To investigate probabilistic reasoning under uncertain and incomplete information.
5. To explore the current scope, potential, limitations, and implications of intelligent systems.
Course Outcomes
After successful completion of the course students will be able to
1. Identify the characteristics of the environment and differentiate between various agent architectures.
2. Apply the most suitable search strategy to design problem solving agents.
3. Represent a natural language description of statements in logic and apply the inference rules to design Knowledge Based agents.
4. Apply a probabilistic model for reasoning under uncertainty.
5. Comprehend various learning techniques.
6. Describe the various building blocks of an expert system for a given real world problem.
Detailed Syllabus
Module | Detailed Content | Hrs.
1  Introduction to Artificial Intelligence  3
1.1 Artificial Intelligence (AI), AI Perspectives : Acting and Thinking humanly, Acting and Thinking rationally.
1.2 History of AI, Applications of AI, The present state of AI, Ethics in AI. (Refer Chapter 1)
-
2  Intelligent Agents  4
2.1 Introduction of agents, Structure of Intelligent Agent, Characteristics of Intelligent Agents.
2.2 Types of Agents : Simple Reflex, Model Based, Goal Based, Utility Based Agents.
2.3 Environment Types : Deterministic, Stochastic, Static, Dynamic, Observable, Semi-observable, Single Agent, Multi Agent (Refer Chapter 2)


3  Solving Problems by Searching  12
3.1 Definition, State space representation, Problem as a state space search, Problem formulation, Well-defined problems.
3.2 Solving Problems by Searching, Performance evaluation of search strategies, Time Complexity, Space Complexity, Completeness, Optimality.
3.3 Uninformed Search : Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search.
3.4 Informed Search : Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search, Local Search : Hill Climbing Search, Simulated Annealing Search, Optimization : Genetic Algorithm.
3.5 Game Playing, Adversarial Search Techniques, Mini-max Search, Alpha-Beta Pruning. (Refer Chapter 3)

4  Knowledge and Reasoning  10
4.1 Definition and importance of Knowledge, Issues in Knowledge Representation, Knowledge Representation Systems, Properties of Knowledge Representation Systems.
4.2 Propositional Logic (PL) : Syntax, Semantics, Formal logic-connectives, truth tables, tautology, validity, well-formed-formula, Introduction to logic programming (PROLOG).
4.3 Predicate Logic : FOPL, Syntax, Semantics, Quantification, Inference rules in FOPL.
4.4 Forward Chaining, Backward Chaining and Resolution in FOPL (Refer Chapter 4)

5  Reasoning Under Uncertainty  5
5.1 Handling Uncertain Knowledge, Random Variables, Prior and Posterior Probability, Inference using Full Joint Distribution.
5.2 Bayes' Rule and its use, Bayesian Belief Networks, Reasoning in Belief Networks. (Refer Chapter 5)

6  Planning and Learning  5
6.1 The planning problem, Partial order planning, total order planning.
6.2 Learning in AI, Learning Agent, Concepts of Supervised, Unsupervised, Semi-Supervised Learning, Reinforcement Learning, Ensemble Learning.
6.3 Expert Systems, Components of Expert System : Knowledge base, Inference engine, user interface, working memory, Development of Expert Systems. (Refer Chapter 6)

Total 39

Assessment
Internal Assessment
Assessment consists of two class tests of 20 marks each. The first class test is to be conducted when approximately 40% of the syllabus is completed and the second class test when an additional 40% of the syllabus is completed. Duration of each test shall be one hour.
End Semester Theory Examination
1. Question paper will consist of 6 questions, each carrying 20 marks.
2. The students need to solve a total of 4 questions.
3. Question No. 1 will be compulsory and based on the entire syllabus.
4. Remaining questions (Q.2 to Q.6) will be selected from all the modules.
Artificial Intelligence Lab (CSL502)

Lab Code : CSL502 | Lab Name : Artificial Intelligence Lab | Credit : 1
Prerequisite : C Programming Language
Lab Objectives
1 To design suitable Agent Architecture for a given real world AI problem.
2 To implement knowledge representation and reasoning in an AI language.
3 To design a Problem-Solving Agent.
4 To incorporate reasoning under uncertainty for an AI agent.
Lab Outcomes
At the end of the course, students will be able to -
1 Identify suitable Agent Architecture for a given real world AI problem.
2 Implement simple programs using Prolog.
3 Implement various search techniques for a Problem-Solving Agent.
4 Represent natural language description as statements in Logic and apply inference rules to it.
5 Construct a Bayesian Belief Network for a given problem and draw probabilistic inferences from it.
Suggested Experiments : Students are required to complete at least 10 experiments.
Sr. No. | Name of the Experiment
1 Provide the PEAS description and TASK Environment for a given AI problem.
2 Identify suitable Agent Architecture for the problem.
3 Write simple programs using PROLOG as an AI programming Language.
4 Implement any one of the Uninformed search techniques.
5 Implement any one of the Informed search techniques e.g. A* algorithm for 8 puzzle problem.
6 Implement adversarial search using min-max algorithm.
7 Implement any one of the Local Search techniques e.g. Hill Climbing, Simulated Annealing, Genetic algorithm.
8 Prove the goal sentence from the following set of statements in FOPL by applying forward, backward and resolution inference algorithms.
9 Create a Bayesian Network for the given Problem Statement and draw inferences from it. (You can use any Belief and Decision Networks Tool for modeling Bayesian Networks.)
10 Implement a Planning Agent.
11 Design a prototype of an expert system.
12 Case study of any existing successful AI system.
Term Work
1. Term work should consist of 10 experiments.
2. Journal must include at least 2 assignments.
3. The final certification and acceptance of term work ensures satisfactory performance of laboratory work and minimum passing marks in term work.
4. Total 25 Marks (Experiments : 15 Marks, Attendance Theory & Practical : 05 Marks, Assignments : 05 Marks)
Oral & Practical Exam
Based on the entire syllabus.
□□□
-

Syllabus ...
University of Mumbai
Artificial Intelligence (Code : ELDLO7013)
-
Semester 7 : Electronics Engineering

Teaching Scheme and Credits Assigned

Course Code : ELDLO7013 | Course Name : Artificial Intelligence
Teaching Scheme : Theory 03, Practical and Oral --, Tutorial --
Credits Assigned : Theory 03, TW/Practical and Oral --, Tutorial --, Total 03

Examination Scheme

Subject Code : ELDLO7013 | Subject Name : Artificial Intelligence
Theory Marks : Internal Assessment (Test 1 : 20, Test 2 : 20, Avg. of Test 1 and Test 2 : 20), End Sem. Exam : 80, Exam duration : 03 Hours
Term Work : -- | Practical and Oral : -- | Total : 100

Course Objectives
1. To gain perspective of AI and its foundations.
2. To study different agent architectures and properties of the environment.
3. To understand the basic principles of AI towards problem solving, inference, perception, knowledge representation, and learning.
4. To investigate probabilistic reasoning under uncertain and incomplete information.
5. To explore the current scope, potential, limitations, and implications of intelligent systems.
Course Outcomes
After successful completion of the course students will be able to
1. Identify the characteristics of the environment and differentiate between various agent architectures.
2. Apply the most suitable search strategy to design problem solving agents.
3. Represent a natural language description of statements in logic and apply the inference rules to design Knowledge Based agents.
4. Apply a probabilistic model for reasoning under uncertainty.
5. Comprehend various learning techniques.
6. Describe the various building blocks of an expert system for a given real world problem.
Note : The action verbs according to Bloom's taxonomy are highlighted in bold.

Module No. | Unit No. | Contents | Hrs.

1  Introduction to Artificial Intelligence  5
1.1 Artificial Intelligence (AI), AI Perspectives : Acting and Thinking humanly, Acting and Thinking rationally.
1.2 History of AI, Applications of AI, The present state of AI, Ethics in AI. (Refer Chapter 1)

2  Intelligent Agents  6
2.1 Introduction of agents, Structure of Intelligent Agent, Characteristics of Intelligent Agents.
2.2 Types of Agents : Simple Reflex, Model Based, Goal Based, Utility Based Agents.
2.3 Environment Types : Deterministic, Stochastic, Static, Dynamic, Observable, Semi-observable, Single Agent, Multi Agent (Refer Chapter 2)

3  Solving Problems by Searching  8
3.1 Definition, State space representation, Problem as a state space search, Problem formulation, Well-defined problems.
3.2 Solving Problems by Searching, Performance evaluation of search strategies, Time Complexity, Space Complexity, Completeness, Optimality.
3.3 Uninformed Search : Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search.
3.4 Informed Search : Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search, Local Search : Hill Climbing Search, Simulated Annealing Search, Optimization : Genetic Algorithm.
3.5 Game Playing, Adversarial Search Techniques, Mini-max Search, Alpha-Beta Pruning. (Refer Chapter 3)

4  Knowledge and Reasoning  8
4.1 Definition and importance of Knowledge, Issues in Knowledge Representation, Knowledge Representation Systems, Properties of Knowledge Representation Systems.
4.2 Propositional Logic (PL) : Syntax, Semantics, Formal logic-connectives, truth tables, tautology, validity, well-formed-formula.
4.3 Predicate Logic : FOPL, Syntax, Semantics, Quantification, Inference rules in FOPL, Introduction to logic programming (PROLOG).
4.4 Forward Chaining, Backward Chaining and Resolution in FOPL (Refer Chapter 4)
5  Reasoning Under Uncertainty  5
5.1 Handling Uncertain Knowledge, Random Variables, Prior and Posterior Probability, Inference using Full Joint Distribution.
5.2 Bayes' Rule and its use, Bayesian Belief Networks, Reasoning in Belief Networks. (Refer Chapter 5)

6  Planning and Learning  7
6.1 The planning problem, Partial order planning, total order planning.
6.2 Learning in AI, Learning Agent, Concepts of Supervised, Unsupervised, Semi-Supervised Learning, Reinforcement Learning, Ensemble Learning.
6.3 Expert Systems, Components of Expert System : Knowledge base, Inference engine, user interface, working memory, Development of Expert Systems. (Refer Chapter 6)

Total 39

Internal Assessment (IA)

Two tests must be conducted which should cover at least 80% of the syllabus. The average marks of both the tests will be considered as final IA marks.
End Semester Examination
1. Question paper will consist of 6 questions, each of 20 Marks.
2. Total 4 questions need to be solved.
3. Question No. 1 will be compulsory and based on the entire syllabus wherein sub questions of 2 to 5 Marks will be asked.
4. Remaining questions will be selected from all the modules.
□□□
[ Index ]

► Chapter 1 : Introduction to Artificial Intelligence ....................................... 1-1 to 1-13
► Chapter 2 : Intelligent Agents ...................................................................... 2-1 to 2-11
► Chapter 3 : Solving Problems by Searching ................................................ 3-1 to 3-54
► Chapter 4 : Knowledge and Reasoning ....................................................... 4-1 to 4-48
► Chapter 5 : Reasoning Under Uncertainty .................................................. 5-1 to 5-22
► Chapter 6 : Planning and Learning .............................................................. 6-1 to 6-31
♦ Lab Manual .................................................................................................. L-1 to L-8

□□□
Module 1

CHAPTER 1 : Introduction to Artificial Intelligence

1.1 Artificial Intelligence (AI), AI Perspectives : Acting and Thinking humanly, Acting and Thinking rationally.
1.2 History of AI, Applications of AI, The present state of AI, Ethics in AI.

1.1 Introduction to Artificial Intelligence ................................................................................ 1-2
1.2 AI Perspectives ................................................................................................................. 1-3
GQ. Define information, knowledge and intelligence. What is the comparison between artificial and human intelligence ? ........ 1-3
GQ. Explain Intelligence and Artificial Intelligence. How does conventional computing differ from the intelligence computing ? ........ 1-3
1.3 Acting Humanly ................................................................................................................ 1-4
1.4 Thinking Rationally .......................................................................................................... 1-5
1.5 History of AI ..................................................................................................................... 1-6
GQ. Write a short note on : History of Artificial Intelligence. ........ 1-6
1.6 Sub Areas and Applications of AI .................................................................................... 1-8
GQ. Explain different applications of Artificial Intelligence in various areas. ........ 1-8
1.7 Current trends in AI ........................................................................................................ 1-10
1.8 Objectives and Ethics in AI ............................................................................................ 1-12
GQ. What are different objectives of AI ? ........ 1-12
• Chapter Ends .................................................................................................................... 1-13

Artificial Intelligence (MU - AI & DS / Electronics)    (Introduction to Artificial Intelligence) ... Page No. (1-2)

Syllabus Topic : Artificial Intelligence

1.1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE
'There are three kinds of intelligence : one kind understands things for itself, the other appreciates what others can understand, the third understands neither for itself nor through others. The first kind is excellent, the second good, and the third kind useless.'
- Niccolò Machiavelli
So what does the word 'intelligence' mean ? Let us define 'intelligence'.
1. According to the first definition

Someone's intelligence is his ability to understand and learn things.

2. From the second definition

Intelligence is the ability to think and understand instead of doing things by instinct or automatically. These definitions lead us to define what 'Thinking' is :
'Thinking is the activity of using your brain to consider a problem or to create an idea.'
So, in order to think, someone or something has to have a brain or, in other words, an organ that enables someone or something to learn and understand things, to solve problems and to make decisions.
So, now we can define intelligence as 'the ability to learn and understand, to solve problems and to make decisions'. Now the question arises whether computers can be intelligent, or whether machines can think and make decisions.
Here, we enter the domain of Artificial Intelligence. Artificial Intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans.
It is defined as 'the field of study of intelligent agents' : any system that perceives its environment and takes actions that maximise its chance of achieving its goals.
AI is also described as machines that perform functions that humans associate with the human mind, such as 'learning' and 'problem solving'.
AI applications include
1. Advanced web search engines (e.g. Google)
2. Recommendation systems (used by YouTube, Amazon and Netflix)
3. Understanding human speech (such as Siri and Alexa)
4. Self-driving cars (e.g. Tesla)
5. Automated decision-making and competing at the highest level in strategic game systems (such as chess).
As machines become increasingly capable, tasks considered to require 'intelligence' are removed from the definition of AI. This phenomenon is known as the 'AI effect'. For example, optical character recognition is excluded from things considered to be AI; it is a routine technology.
The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".
This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with human-like intelligence.

(MS-126)    Tech-Neo Publications...A SACHIN SHAH Venture
Science fiction and futurology have also suggested that, with its enormous potential and power, AI may become an existential risk to humanity.
(1) Humankind has given itself the scientific name homo sapiens - man the wise - because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves.
(2) But unlike philosophy and psychology, which are also concerned with intelligence, artificial intelligence strives to build intelligent entities as well as understand them. Another reason to study artificial intelligence is that these constructed intelligent entities are interesting and useful in their own right.
(3) Artificial intelligence has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization.
(4) Artificial intelligence addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself ? How do we go about making something with those properties ? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in artificial intelligence has solid evidence that the quest is possible.
(5) Artificial intelligence is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. Artificial intelligence, on the other hand, still has openings for a full-time Einstein.
(6) Artificial intelligence (AI) is a field that has a long history but is still constantly and actively growing and changing. AI technology is increasingly prevalent in our everyday lives. It has uses in a variety of industries from gaming, journalism/media, to finance, as well as in state-of-the-art research fields from robotics, medical diagnosis, and quantum science.
Syllabus Topic: Al Perspectives

·- -•~--- ----.-- ------ ------ ------ ------ ------ ------ -- -- - - - I


~ GQ. Define information , knowledge and intelligence. What is the comparison between artificial and human intelligence. 1

i GQ:• .Explain Intelligence and Artificial Intelligence. How does conventional computing differ from the intelligence ,:
~ I
I .,;· ·• '""'"'• ·• • . .; ,. I
I,.. ,,. ,computing?
---------------------------------------------------------------- • I
I '

(1) Information

• All data are information. However, there is some part of information that is not considered as data. Such distinguished information can be considered as processed data, which makes decision making easier. Processing involves an aggregation of data, calculations on data, corrections on data, etc. in such a way that it generates the flow of messages.

• Information usually has some meaning and purpose; that is, data within a context can be considered as information.
(2) Knowledge

Knowledge is a justified true belief. Knowledge is a store of information proven useful for a capacity to act.

(3) Intelligence

• Unlike belief and knowledge, intelligence is not information : it is a process, or an innate capacity to use information in order to respond to ever-changing requirements.
• It is a capacity to acquire, adapt, modify, extend and use information in order to solve problems. Therefore, intelligence is the ability to cope with unpredictable circumstances.
(A) Human Intelligence

• Human intelligence is the intellectual capacity of humans, which is characterized by perception, consciousness, self-awareness, and volition.
• Intelligence enables humans to remember descriptions of things and use those descriptions in future behaviours. It is a cognitive process.
• It gives humans the cognitive abilities to learn from concepts, understand, and reason, including the capacities to recognize patterns, comprehend ideas, plan, solve problems, and use language to communicate. Intelligence enables humans to experience and think.

(B) Artificial Intelligence

• Artificial intelligence (or AI) is both the intelligence of machines and the branch of computer science which aims to create it, through "the study and design of intelligent agents" or "rational agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.
• Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition, and a few more general problems such as autonomous cars. General intelligence or strong AI has not yet been achieved and is a long-term goal of AI research.
Syllab us Topic : Acting and Think ing Huma nly

~~r 1 .3 ACTING HUMANLY


-~ .J;,

The first proposal for success in building a program that acts humanly was the Turing Test. To be considered intelligent, a program must be able to act sufficiently like a human to fool an interrogator. A human interrogates the program and another human via a terminal simultaneously. If after a reasonable period the interrogator cannot tell which is which, the program passes. To pass this test requires :
1. Natural language processing
2. Knowledge representation
3. Automated reasoning
4. Machine learning
This test avoids physical contact and concentrates on "higher level" mental faculties. A total Turing test would require the program to also do :
• Computer vision
• Robotics
Thinking Humanly

This requires "getting inside" the human mind to see how it works and then comparing our computer programs to this. This is what cognitive science attempts to do. Another way to do this is to observe a human solving a problem and argue that one's programs go about problem solving in a similar way.

Example

GPS (General Problem Solver) was an early computer program that attempted to model human thinking. The developers were not so much interested in whether or not GPS solved problems correctly. They were more interested in showing that it solved problems like people, going through the same steps and taking around the same amount of time to perform those steps.
Syllabus Topic : Thinking and Acting Rationally

1.4 THINKING RATIONALLY

• Aristotle was one of the first to attempt to codify "thinking". His syllogisms provided patterns of argument structure that always gave correct conclusions, given correct premises.
• Example : All computers use energy. Using energy always generates heat. Therefore, all computers generate heat. This initiated the field of logic. Formal logic was developed in the late nineteenth century. This was the first step toward enabling computer programs to reason logically. By 1965, programs existed that could, given enough time and memory, take a description of the problem in logical notation and find the solution, if one existed. The logicist tradition in AI hopes to build on such programs to create intelligence.
• There are two main obstacles to this approach. First, it is difficult to make informal knowledge precise enough to use the logicist approach, particularly when there is uncertainty in the knowledge. Second, there is a big difference between being able to solve a problem in principle and doing so in practice.
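The syllogism above can be mechanised by a tiny chaining sketch : encode each "all X are Y" premise as a link and follow the links from premise to conclusion. The dictionary encoding and the category names are illustrative assumptions, not a real logic engine.

```python
# Premises from the example in the text, as "all X are Y" links.
rules = {
    "computer": "energy_user",    # All computers use energy.
    "energy_user": "heat_source"  # Using energy always generates heat.
}

def entails(category, conclusion, rules):
    """Follow 'all X are Y' links from category; True if we reach conclusion."""
    seen = set()
    while category in rules and category not in seen:
        seen.add(category)          # guard against cyclic rule sets
        category = rules[category]
        if category == conclusion:
            return True
    return False

print(entails("computer", "heat_source", rules))  # True: computers generate heat
```

This is the pattern that full logic-programming systems such as PROLOG generalise to arbitrary predicates and quantifiers.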
Acting Rationally : The rational agent approach

• Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just something that perceives and acts.
• In the logical approach to AI, the emphasis is on correct inferences. This is often part of being a rational agent, because one way to act rationally is to reason logically and then act on one's conclusions. But this is not all of rationality, because agents often find themselves in situations where there is no provably correct thing to do, yet they must do something. There are also ways to act rationally that do not seem to involve inference, e.g., reflex actions.
The study of AI as rational agent design has two advantages :
1. It is more general than the logical approach, because correct inference is only a useful mechanism for achieving rationality, not a necessary one.
2. It is more amenable to scientific development than approaches based on human behaviour or human thought, because a standard of rationality can be defined independent of humans.
Achieving perfect rationality in complex environments is not possible because the computational demands are too high. However, we will study perfect rationality as a starting place.
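The perceive-and-act view above reduces to a one-line decision rule : given a percept, pick the action with the highest expected utility. A minimal sketch follows; the toy vacuum-world percepts, actions, and utility numbers are illustrative assumptions.

```python
def rational_agent(percept, actions, expected_utility):
    """Pick the action with the highest expected utility for this percept."""
    return max(actions, key=lambda a: expected_utility(percept, a))

# Toy vacuum-world model: percept = (location, is_dirty).
def utility(percept, action):
    location, dirty = percept
    if action == "suck":
        return 10 if dirty else -1  # sucking clean floor wastes effort
    return 0  # moving neither helps nor hurts in this toy model

action = rational_agent(("A", True), ["suck", "left", "right"], utility)
print(action)  # suck
```

Note that the agent's rationality is judged only against its utility measure, not against how a human would act, which is exactly advantage 2 above.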




Syllabus Topic : History of AI

1.5 HISTORY OF AI

GQ. Write a short note on : History of Artificial Intelligence.
Artificial Intelligence is much older than you would imagine. There are even myths of mechanical men in ancient Greek and Egyptian mythology.
Following are some milestones in the history of AI which trace the journey from AI's origins to the present day.

(1M) Fig. 1.5.1 : History of AI (timeline : evolution of artificial neurons, Turing machine, birth of AI at the Dartmouth Conference, first chatbot ELIZA, first intelligent humanoid robot WABOT-1, first AI winter, expert systems, second AI winter, IBM Deep Blue : first computer to beat a world chess champion, AI in homes : Roomba, IBM Watson wins a quiz show, Google Now, chatbot Eugene Goostman wins a Turing test, Amazon Echo)

• Maturation of Artificial Intelligence (1943-1952)


o Year 1943 : The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949 : Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
o Year 1950 : Alan Turing, an English mathematician who pioneered machine learning, published "Computing Machinery and Intelligence", in which he proposed a test, now called the Turing test, that checks a machine's ability to exhibit intelligent behaviour equivalent to human intelligence.
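The Hebbian updating rule mentioned above can be sketched in a few lines. This is a hedged illustration, not the textbook's own code; the learning rate and the toy activation values are assumptions.

```python
# Hedged sketch of Hebb's rule: the connection strength between two neurons
# grows in proportion to the product of their activations
# ("neurons that fire together wire together").

def hebbian_update(w, x, y, lr=0.1):
    """Return the updated connection strength: w + lr * x * y."""
    return w + lr * x * y

w = 0.0
# Present pre- and post-synaptic activations repeatedly (illustrative values).
for x, y in [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 1.0)]:
    w = hebbian_update(w, x, y)

print(round(w, 2))  # connection strengthened by the three co-activations: 0.3
```

Note that the weight only grows when both neurons are active at the same time, which is the essence of Hebb's observation.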
• The birth of Artificial Intelligence (1952-1956)
o Year 1955 : Allen Newell and Herbert A. Simon created the first artificial intelligence program, which was named "Logic Theorist". This program proved 38 of 52 mathematics theorems, and found new and more elegant proofs for some theorems.

o Year 1956 : The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
o At that time high-level computer languages such as FORTRAN, LISP and COBOL were invented, and the enthusiasm for AI was very high.
• The golden years - Early enthusiasm (1956-1974)
o Year 1966 : Researchers emphasized developing algorithms which can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
o Year 1972 : The first intelligent humanoid robot was built in Japan, which was named WABOT-1.
• The first AI winter (1974-1980)
o The duration between the years 1974 and 1980 was the first AI winter. AI winter refers to the time period when computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.
• A boom of AI (1980-1987)
o Year 1980 : After the AI winter, AI came back with "Expert Systems". Expert systems were programmed to emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
• The second AI winter (1987-1993)
o The duration between the years 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due to the high cost and inefficient results. Expert systems such as XCON had been very cost effective.
• The emergence of intelligent agents (1993-2011)
o Year 1997 : In the year 1997, IBM Deep Blue beat world chess champion Garry Kasparov and became the first computer to beat a world chess champion.
o Year 2002 : For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
o Year 2006 : AI came into the business world; by 2006, companies like Facebook, Twitter, and Netflix had started using AI.
• Deep learning, big data and artificial general intelligence (2011-present)
o Year 2011 : In the year 2011, IBM's Watson won Jeopardy!, a quiz show where it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012 : Google launched an Android app feature "Google Now", which was able to provide information to the user as a prediction.
o Year 2014 : In the year 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test".
o Year 2018 : The "Project Debater" from IBM debated on complex topics with two master debaters and performed extremely well.
o Google demonstrated an AI program "Duplex", a virtual assistant which took a hairdresser appointment on call, and the lady on the other side didn't notice that she was talking with a machine.


• Now AI has developed to a remarkable level. Concepts such as deep learning, big data, and data science are now booming. Nowadays companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and will come with high intelligence.
Syllabus Topic : Applications of AI

1.6 SUB-AREAS AND APPLICATIONS OF AI

GQ. Explain different applications of Artificial Intelligence in various areas.
Artificial Intelligence is revolutionizing industries with its applications and helping solve complex problems.
(1) AI in Robotics
• Robotics is another field where artificial intelligence applications are commonly used. Robots powered by AI use real-time updates to sense obstacles in their path and plan their journey instantly.

[Fig. 1.6.1 : Scope of AI Applications]

• It can be used for -


o Carrying goods in hospitals, factories, and warehouses
o Cleaning offices and large equipment
o Inventory management
(2) AI In Agriculture

• Artificial Intelligence is used to identify defects and nutrient deficiencies in the soil. This is done using computer vision, robotics, and machine learning; AI can also analyze where weeds are growing.
• Al bots can help to harvest crops at a higher volume and faster pace than human laborers.
(3) AI In Gaming

• Another sector where Artificial Intelligence applications have found prominence is the gaming sector.
• It can also be used to predict human behavior using which game design and testing can be improved.
(4) AI In Automobiles

• Artificial Intelligence is used to build self-driving vehicles. Al can be used along with the vehicle's
camera, radar, cloud services, GPS, and control signals to operate the vehicle.
• Al can improve the in-vehicle experience and provide additional systems like emergency braking,
blind-spot monitoring, and driver-assist steering.
(5) AI In social media

• Instagram : On Instagram, AI considers your likes and the accounts you follow to determine what posts you are shown on your explore tab.



• Facebook : Artificial Intelligence is also used along with a tool called DeepText. With this tool, Facebook can understand conversations better. It can be used to translate posts from different languages automatically.

• Twitter : AI is used by Twitter for fraud detection, and for removing propaganda and hateful content. Twitter also uses AI to recommend tweets that users might enjoy, based on what type of tweets they engage with.

(6) AI in Marketing
• Artificial intelligence applications are popular in the marketing domain as well.
o Using AI, marketers can deliver highly targeted and personalized ads with the help of behavioural analysis, pattern recognition, etc. It also helps with retargeting audiences at the right time to ensure better results and reduced feelings of distrust and annoyance.
o AI can help with content marketing in a way that matches the brand's style and voice. It can be used to handle routine tasks like performance and campaign reports, and much more.
o Chatbots powered by AI, Natural Language Processing, Natural Language Generation, and Natural Language Understanding can analyze the user's language and respond in the ways humans do.
o AI can provide users with real-time personalization based on their behaviour, and can be used to edit and optimize marketing campaigns to fit a local market's needs.
• Computer Vision : Face recognition programs in use by banks, government, etc. Handwriting recognition, electronics and manufacturing inspection, photo interpretation, baggage inspection, and reverse engineering to automatically construct a 3D geometric model.
• Expert Systems : Another very important cognitive ability of human beings is decision making. This ability is based on experience and knowledge, which make one an intelligent expert. Expert systems are required in industries, and especially in organizations where analytics plays an important role; there they act as mediators handling multiple activities to make the system efficient, e.g. flight tracking systems, medical systems etc.
• Diagnostic Systems : The MYCIN system for diagnosing bacterial infections of the blood and suggesting treatments; the Pathfinder medical diagnosis system, which suggests tests and makes diagnoses.
• Financial Decision Making : Credit card companies, mortgage companies and banks employ AI systems to detect fraud and expedite financial transactions. By considering usage patterns, AI can help reduce the possibility of credit card fraud. Many customers prefer to buy a product or service based on customer reviews; AI can help identify and remove fake reviews.
• Classification Systems : Put information into one of a fixed set of categories using several sources of information, e.g. financial decision-making systems. NASA developed a system for classifying very faint areas in astronomical images into either stars or galaxies, with very high accuracy, by learning from human experts' classifications.
(7) Scheduling and Planning
Automatic scheduling for manufacturing.
(8) Artificial Neural Networks
• Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.

• Examples : Pattern recognition, character recognition, clustering, classification etc.
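The idea of reproducing the connections in animal brains can be illustrated with a single artificial neuron that fires when the weighted sum of its inputs crosses a threshold. This is only a sketch; the weights and threshold below are illustrative assumptions.

```python
# Hedged sketch of a single artificial neuron (a McCulloch-Pitts style unit):
# it outputs 1 when the weighted sum of its inputs reaches a threshold.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit behaves like a logical AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], [1, 1], 2))
```

Changing the weights or threshold changes the function computed, which is the basic mechanism a neural network exploits during learning.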
(9) Fuzzy Logic
• A system which relies on degrees of truth, and on changes in state along with the rates of inputs and outputs, where the output depends on the feeding of the input, its state, and the rate of change of this state.
• Also we can say probability is important for the state of how an input is given; on this basis a particular output is attached to the given input. Examples : Consumer electronics, automobiles etc.
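The "degree of truth" idea can be sketched with a membership function: instead of a crisp true/false, an input belongs to a set to a degree between 0 and 1. The "warm temperature" range used below (15 to 25 degrees, peaking at 20) is an illustrative assumption.

```python
# Hedged sketch of a triangular fuzzy membership function.

def triangular(x, low, peak, high):
    """Degree of membership of x in a triangular fuzzy set on [low, high]."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

print(triangular(20, 15, 20, 25))    # fully a member of "warm"
print(triangular(17.5, 15, 20, 25))  # partially a member of "warm"
print(triangular(30, 15, 20, 25))    # not a member at all
```

A fuzzy controller (e.g. in a washing machine or air conditioner) combines several such memberships with rules to produce a smoothly varying output.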
I
Syllabus Topic: The Present State of Al

1.7 CURRENT TRENDS OF AI

While the COVID-19 pandemic impacted many aspects of how we do business, it did not diminish the impact of Artificial Intelligence (AI) on our everyday lives. AI remains a key trend when it comes to technologies and innovations that will fundamentally change how we live, work, and play in the near future.
AI is the force behind many modern technological comforts that are now part of our day-to-day lives. With continuous research, technology has made massive developments in major fields such as health-care, retail, automotive, manufacturing and finance. AI is one essential component that transforms the digital age with high precision and accuracy. So, here is an overview of what we can expect in the years to come.

1. Robotic Process Automation (RPA)
2. Conversational AI
3. The role of AI in healthcare
4. Increase in demand for ethical AI
5. AI for cyber security and knowledge breach
6. The intersection of the Internet of Things with AI (AIoT)
7. Natural Language Processing (NLP)
8. Reinforcement Learning
9. Quantum AI
10. AI-powered business forecasting and analysis
11. Edge computing
12. Rise of a hybrid work force

(1) Robotic Process Automation (RPA)
• To streamline business processes and reduce costs, businesses are turning to an evolving technology practice called robotic process automation (RPA).
• RPA is aimed at the automation of business processes, governed by business logic and organised inputs.
• RPA solutions range from producing an automated email response to deploying thousands of bots, each programmed in an ERP system to automate rule-based tasks.
(2) Conversational AI
• Conversational AI increases the customer experience's reach, responsiveness and personalisation.
• To better understand what the human says and needs, AI uses natural language processing (NLP) and machine learning to provide a more natural, near human-level interaction.

(3) The role of AI In healthcare
• Big data has been extensively used to identify COVID patients and critical hotspots.
• AI is already helping the health-care sector to a great degree with high accuracy. Besides, researchers have developed thermal cameras and mobile applications to collect data for healthcare organisations.
• By leveraging data analysis and predicting various outcomes, AI can support healthcare facilities in
several unique ways.
• AI instruments offer insights into human health and also recommend preventive steps to avoid the
1 .. , spread of diseases.
• AI solutions also help doctors remotely track the health of their patients, thereby advancing
teleconsultation and remote care.
t •
(4) Increase In demand for ethical AI
• This demand is at the top of the list of emerging developments in technology.
• Looking at how trends are rapidly changing, values-based customers and workers expect businesses to
implement AI responsibly.
• Companies will actively choose to do business with partners committed to data ethics in the next few
years.

(5) AI for cyber security and knowledge breach



• In the coming years, knowledge will grow and will be accessible, and digital data will be at greater risk of being compromised and exposed to hacking. AI will help deter cybercrimes in the future with improved cyber security measures.
• Fake digital activity that matches criminal trends will be detected by the AI-enabled framework.
(6) The Intersection of the Internet of Things with AI (AIOT)

• There is hardly any boundary between AI and IOT. Although both technologies have individual
characteristics, when used together, better and more unique possibilities open up.
• The ability of AI to gain insights from data quickly makes IoT solutions more intelligent.
(7) Natural Language Processing (NLP)
• NLP is one of the widely used applications of AI. NLP is used in Amazon Alexa and Google Home.
• The need for writing or communicating with a screen has been eliminated by NLP, as now humans can communicate with robots that understand their language.
• The use of NLP for sentiment analysis, machine translation, process description, auto-video caption generation and chatbots is expected to increase.

(8) Reinforcement Learning
• Reinforcement Learning (RL) is a specific application of deep learning. It works based on its own experience to enhance the efficiency and effectiveness of data.
• Some use cases of RL are robotics for planning business strategies, optimising advertisement content, automating industries, controlling aircraft, and making motion-control robots.
(9) Quantum AI

• To measure the qubits for use in supercomputers, advanced companies will begin using quantum supremacy. Because of quantum bits, quantum computers solve problems at a quicker pace than classic computers do.


• Also they assist in the interpretation of data and then forecast several unique trends•
• Quantum computers will help multiple organisations identify inaccessible issues and also predict
meaningful solutions. Future computers will also be used in fields like healthcare, finance and
chemistry.
(10) AI-Powered Business Forecasting and Analysis
• AI solutions help in redefining business processing with real-time alerts.
• Content-intelligent technologies, along with AI-supportive practices, will assist digital workers to
develop outstanding abilities.
• Such skills can help them cope with the automation of natural language, judgment, context formation,
reasoning and data-related insights.
(11) Edge computing
• Edge computing provides gadgets with servers and data storage to access their devices and allows them to put data into them. It is defined as data processing in real time and is more powerful than 'cloud computing services'.
• There is another instance of edge computing that uses nodes. It is a mini-server located in the vicinity of
a local telecommunications provider.
• Nodes help to build a bridge to the local service provider. It costs less, saves time and provides customers with fast service.
(12) Rise of a hybrid work force
• Post the COVID-19 pandemic, companies will jump on the RPA bandwagon, which means that cognitive AI and RPA will be widely applied to cope with high-volume, repetitive activities.
• If usage grows, the office will move to a hybrid workforce environment.
• The human workforce will work with various digital assistants. The emergence of a hybrid workforce will imply more collaborative experiences with AI.
Syllabus Topic: Ethics In Al

1.8 OBJECTIVES AND ETHICS IN AI


GQ. What are the different objectives of AI?
Below are the eight aims and objectives of artificial intelligence :
Objective #1 : Artificial Intelligence solves problems
When it comes to artificial intelligence, there is a strong urge to create AI programs that look. act, and
feel
like real humans. However, many scientists now understand that the real goal is not to make a human-like
robot. Instead, they would rather create a robot that works to make our lives easier, no matter what it looks
or
sounds like. Moving forward, it is likely that we will see some serious work being put into the ability for
Al
to learn and understand, and less on forcing them to act like real humans. That will probably just come with
time.
Objective #2 : Artificial intelligence completes multiple tasks

Completing multiple tasks is another of the aims and objectives of artificial intelligence. One of the largest difficulties to overcome has been making it possible for an AI program or a "robot" to do more than one task.
(MS-126)
Iii Tech-Neo Publications...A SACHIN SHAH Venture
Artificial Intelligence (MU • Al & DS / Electronics) (Introduction to Artificial lntelligence) ... Page No. (1-13)

It is very easy to program a system to complete a certain task. For instance, it can bring an item from point A
to point B.
However, if you want the program to understand that it must pick up the item and then either bring it to point A or throw it in the trash based on arbitrary rules that a human would know, that's a different story. In simpler terms, it might be a while before your housemaid is a robot.
Objective #3 : Artificial Intelligence shapes the future of every company

AI is quickly becoming a crucial tool for all companies. They are using this technology to streamline their
processes. It's no secret that the goal is to continue this trend for as many low-level tasks as possible. It
ultimately saves the companies money in the long run, and it allows them to up productivity in other areas.
Objective #4 : Artificial intelligence prepares for a boom in big data

Big data has already taken the world by storm. Big data is the large-scale, and sometimes even random,
collection of data about people's lives, habits, conversations and more. AI will be able to do much more for
the analysis of this data than humans ever did, so data-driven research, advertisements, and content are going
to explode. '
Objective #5 : Artificial intelligence creates synergy between humans and AI
One of the key goals in AI is to develop a strong synergy between AI and humans, so that they can work
together to enhance the capabilities of both.
Objective #6 : Artificial intelligence is good at problem-solving

So far, AI is unable to employ advanced problem-solving abilities. That is, it can tell you a factual answer,
but cannot analyze a specific situation and make a decision based on the very specific context of that
situation.
Objective #7 : Artificial Intelligence helps with planning

One of the most human traits in existence is the ability to plan and make goals and subsequently accomplish
them. And one of the goals for AI is to have AI be able to do these things.
Objective #8 : Artificial Intelligence performs more complex tasks

The key goal is this : to develop AI programs that can complete more and more complex tasks. Already the abilities are shocking, although not yet widespread. However, over time these will develop and ultimately, scientists hope, be able to do basically the same things humans can do.

Chapter Ends...

Module 2
CHAPTER 2
Intelligent Agents

2.1 Introduction of agents, Structure of Intelligent Agent, Characteristics of Intelligent Agents.
2.2 Types of Agents : Simple Reflex, Model Based, Goal Based, Utility Based Agents.
2.3 Environment Types : Deterministic, Stochastic, Static, Dynamic, Observable, Semi-observable, Single Agent, Multi Agent.

2.1 Introduction of Agents .......... 2-2
    2.1.1 Intelligent Agent .......... 2-2
2.2 Structure of Intelligent Agents .......... 2-3
2.3 Characteristics of Intelligent Agent .......... 2-3
    UQ. Define Intelligent Agent. What are the characteristics of Intelligent Agent? .......... 2-3
2.4 Simple Reflex Agent .......... 2-4
2.5 Model-based Reflex Agent .......... 2-5
    UQ. Explain Model based Reflex agent with block diagram. (MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 5(b), May 16, 5 Marks) .......... 2-5
2.6 A Goal-based Reflex Agent .......... 2-6
    UQ. Explain Goal Based agent with block diagram. (MU - Q. 2(b), Dec. 18; Q. 2(B), Dec. 17, 10 Marks; Q. 1(d), May 17, 4 Marks) .......... 2-6
2.7 An Utility-based Reflex Agent .......... 2-7
    UQ. Explain Utility based agent with block diagram. (MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 2(b), Dec. 18; Q. 2(B), Dec. 17, 10 Marks; Q. 1(d), May 17, 4 Marks; Q. 5(b), May 16, 5 Marks) .......... 2-7
2.8 Comparison of Model Based Agent and Utility Based Agent .......... 2-8
    UQ. Compare Model based Agent and Utility based Agent. .......... 2-8
2.9 Comparison of Model Based Agent with Goal Based Agent .......... 2-8
    UQ. Compare Model Based Agent with Goal Based Agent. .......... 2-8
2.10 Types of Environment ..........
    2.10.1 Complete vs. Incomplete Environments .......... 2-11
    2.10.2 Competitive vs. Collaborative Environments .......... 2-11
Chapter Ends .......... 2-11
Syllabus Topic : Introduction of Agents

An agent is just something that acts (the word comes from the Latin agere, to do).
In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes through sensors
and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals.
• Intelligent agents may also learn or use knowledge to achieve their goals.
• They may be very simple or very complex, Example : a reflex machine such as a thermostat is an intelligent
agent.
• An agent is anything that can perceive its environment through sensors and acts upon that environment
through effectors.
Agent's structure can be viewed as :
(1) Agent = Architecture + Agent program
(2) Architecture = the machinery that an agent executes on
(3) Agent program = an implementation of an agent function
• Intelligent agents like Rahul and Gopal are examples of intelligence, as they use sensors to perceive a request made by the user and automatically collect data from the internet without the user's help.
• They can be used to gather information about their perceived environment, such as weather and time. Thus, an intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators for achieving goals. An intelligent agent may learn from the environment to achieve its goals.
• The term 'percept' means the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything that the agent has perceived. We illustrate this idea in Fig. 2.1.1.

[Fig. 2.1.1 : Agents interact with environments through sensors and actuators]
Thus, an agent's behaviour can be described by the 'agent function' that maps any given percept sequence
to an action.
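The agent function, a mapping from percept sequences to actions, can be sketched in table-driven form. This is only an illustration of the definition; the percepts, actions and lookup entries below are assumptions, not from the text.

```python
# Hedged sketch of an agent function given explicitly as a lookup table from
# percept sequences to actions.

def table_driven_agent():
    percepts = []          # the percept sequence: full history of percepts
    table = {              # the agent function, written out as a table
        ("dirty",): "suck",
        ("clean",): "move",
        ("clean", "dirty"): "suck",
    }
    def program(percept):
        percepts.append(percept)                    # extend the history
        return table.get(tuple(percepts), "no-op")  # look up the sequence
    return program

agent = table_driven_agent()
print(agent("clean"))  # sequence ("clean",) -> "move"
print(agent("dirty"))  # sequence ("clean", "dirty") -> "suck"
```

The table grows exponentially with the length of the percept sequence, which is why the agent programs discussed in this chapter compute the function compactly instead of tabulating it.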

2.1.1 Intelligent Agent

• An intelligent agent is a programme that can make decisions or perform a service based on its environment, user input and experiences.
• These programs can be used autonomously to gather information on a regular, programmed schedule or when prompted by the user in real time.
• IA may be simple or complex - a thermostat is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm or a state.


Syllabus Topic : Structure of Intelligent Agent

2.2 STRUCTURE OF INTELLIGENT AGENTS


(1) The IA structure consists of three main parts : architecture, agent function and agent program. Architecture refers to the machinery or devices, consisting of actuators and sensors, on which the IA executes.
(2) A software agent has file contents and received network packets which act as sensors, and the screen display, written files and sent network packets acting as actuators. A human agent has eyes, ears and other organs which act as sensors, and hands, legs, mouth and other body parts acting as actuators.
make a change on any of the knowledge
(3) The learning element is responsible for improvements. This can make a change on any of the knowledge components in the agent. One way of learning is to observe pairs of successive steps in the percept sequence; from this the agent can learn how the world evolves.
(4) Agency is the capacity of individuals to act independently and to make
against autonomy in determining
versus agency debate may be understood as an issue of socialization
structure.
whether an individual acts as a free agent or in a manner dictated by social
that Is performed after a given
So far we have discussed the behaviour of the agent, i.e. the action
the agent works. The job of 'Artificial
sequence of precepts. Now we begin to study bow the inside of
n-the function from precepts to action.
intelligence' is to design an agent program that designs the agent functio
This is 'architecture' or 'structure' of intelligent agent
principles of abnost all intelligent
We mention four basic kinds of agent programs that cover the
systems.
They are:

(1) Simple reflex agents
(2) Model-based reflex agents
(3) Goal-based agents
(4) Utility-based agents
Each of these programs combines particular components in particular ways to generate actions. Then we shall see how to convert all these agents into learning agents; this will improve the performance of their components so as to have better actions. Then we shall also describe a variety of ways in which the components themselves can be represented within the agent.
Syllabus Topic : Characteristics of Intelligent Agents

2.3 CHARACTERISTICS OF INTELLIGENT AGENT


UQ. Define Intelligent Agent. What are the characteristics of Intelligent Agent?
1. Mobility : Using computer networks, intelligent agents engaged in e-commerce gather information until the search parameters are complete.
search parameter are complete.

2. Goal-oriented : Intelligent agents carry out the particular task provided by the user's statement of goals. They react in response to their environment and take the initiative to exhibit goal-directed behaviour.

3. Independent : An intelligent agent is self-dependent, in the sense that it functions on its own without human intervention. It makes decisions on its own and initiates them. It communicates independently with data, information and other agents, and achieves its objectives and tasks on behalf of the user.
4. Intelligent : Intelligent agents can collect data intelligently. They can reason about things based on the existing knowledge of their user and environment and on past experiences. To evaluate conditions in the external environment, intelligent agents follow preset rules.
5. Reduce net traffic : Agents communicate and co-operate with other agents quickly. This way they can perform tasks, such as information searches, quickly and efficiently, and network traffic gets reduced thereby.

6. Multiple tasks : Multiple tasks can be performed by an intelligent agent simultaneously. This relieves humans of monotonous clerical work.
Syllabus Topic : Types of Agents : Simple Reflex, Model Based, Goal Based, Utility Based Agents

2.4 SIMPLE REFLEX AGENT

(1) In artificial intelligence, a simple reflex agent is a type of intelligent agent that performs actions based solely on the current situation, an intelligent agent generally being one that perceives its environment and then acts. The agent cannot learn, or take past percepts into account, to modify its behaviour.

(2) The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. Since the simple reflex agent acts on the present condition only, this mapping is called a condition-action rule. Problems with simple reflex agents are: very limited intelligence, and no knowledge of the non-perceptual parts of the state.
(3) The rule set is usually too big to generate and store. The simple reflex agent is designed only to respond to the currently occurring problem. If knowledge of the entire environment is given, then the simple reflex agent is perfectly rational.

(4) This agent selects actions based on the agent's current perception, and not based on past perceptions. For example, if a Mars lander found a rock in a specific place that it needed to collect, it would collect it; being a simple reflex agent, if it found the same rock in a different place it would still pick it up, as it does not take into account that it has already picked one up.

(5) This is useful when a quick automated response is needed. Humans have a very similar reaction to fire, for example: the brain pulls the hand away without thinking about any possibility of danger in the path of the arm. This is called a reflex action.


(6) This kind of condition-action rule can be written as: if the hand is in fire, then pull it away. The simple reflex agent has a library of such rules so that, if a certain situation arises and it is in the set of condition-action rules, the agent will know how to react with minimal reasoning. These agents are simple to work with but have very limited intelligence, such as picking up two rock samples. Refer Fig. 2.4.1.
In Fig. 2.4.1, rectangles represent the current internal state of the agent's decision process, and ovals represent the background information used in the process.

[Figure: the agent perceives the world through its Sensors, matches "what the world is like now" against its condition-action rules to decide "what action I should do now", and acts on the environment through its Effectors.]

Fig. 2.4.1 : Simple Reflex Agent in AI

function SIMPLE-REFLEX-AGENT(percept) returns an action
    static: rules, a set of condition-action rules
    state  ← INTERPRET-INPUT(percept)
    rule   ← RULE-MATCH(state, rules)
    action ← RULE-ACTION(rule)
    return action
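The pseudocode above can be sketched in Python. This is a minimal illustration, not the book's code: the two-room vacuum-cleaner world, the rule table and the helper names are assumptions chosen only to mirror SIMPLE-REFLEX-AGENT.

```python
# Hypothetical two-room vacuum world: percept is a (location, status) pair.
# The condition-action rule library maps each perceived state to an action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def interpret_input(percept):
    # Here the percept already describes the state, so no work is needed.
    return percept

def rule_match(state, rules):
    # Look up the condition-action rule that matches the current state.
    return rules[state]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    action = rule_match(state, RULES)
    return action

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
```

Note that the agent keeps no memory at all: presenting the same percept twice always yields the same action, exactly the limitation described in point (4).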

2.5 MODEL-BASED REFLEX AGENT

UQ. Explain Model based Reflex agent with block diagram.
(MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 5(b), May 16, 5 Marks)

(1) A model-based reflex agent needs memory for storing the percept history; it uses the percept history to help reveal the current unobservable aspects of the environment. An example is self-steering mobile vision, where it is necessary to check the percept history to fully understand how the world is evolving.

(2) Model-based reflex agents are made to deal with partial accessibility; they do this by keeping track of the part of the world they can see now. The agent keeps an internal state that depends on what it has seen before, so it holds information on the unobserved aspects of the current state.


(3) A simple reflex agent selects actions based on the agent's current perception of the world and not on past perceptions, whereas a model-based reflex agent is designed to deal with partial accessibility. It does this by keeping track of the part of the world it can see now.

[Figure: the agent combines its sensors' view of "what the world is like now" with its internal state and its knowledge of "what my actions do", then applies its condition-action rules to decide "what action I should do now" and acts through its actuators.]

Fig. 2.5.1 : Model-based reflex agent in AI

function REFLEX-AGENT-WITH-STATE(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition-action rules
            action, the most recent action, initially none
    state  ← UPDATE-STATE(state, action, percept)
    rule   ← RULE-MATCH(state, rules)
    action ← RULE-ACTION(rule)
    return action
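A minimal Python sketch of REFLEX-AGENT-WITH-STATE follows. The vacuum-world model, the rule logic and all names are illustrative assumptions; the point is that the internal state (the model) lets the agent act on rooms it cannot currently perceive.

```python
class ModelBasedReflexAgent:
    """Keeps an internal model of the two-room vacuum world, so it can
    reason about the room it is NOT currently in (partial accessibility)."""

    def __init__(self):
        # Last known status of each room; None means "never observed".
        self.model = {"A": None, "B": None}
        self.last_action = None

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status  # record what was just seen

    def choose(self, percept):
        location, status = percept
        self.update_state(percept)
        if status == "Dirty":
            action = "Suck"
        else:
            other = "B" if location == "A" else "A"
            # Consult the model: only move if the other room is not
            # already known to be clean.
            if self.model[other] != "Clean":
                action = "Right" if location == "A" else "Left"
            else:
                action = "NoOp"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.choose(("A", "Dirty")))  # Suck
print(agent.choose(("A", "Clean")))  # Right (B's status is still unknown)
print(agent.choose(("B", "Clean")))  # NoOp (A is already known to be clean)
```

Unlike the simple reflex agent, the same percept ("A", "Clean") can produce different actions depending on what the agent has seen before.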

2.6 GOAL-BASED AGENT

UQ. Explain Goal Based agent with block diagram.
(1) A goal-based agent has a goal and a strategy to reach that goal. All actions are taken to reach this goal.

(2) More precisely, from a set of possible actions, it selects the one that improves the progress towards the goal (not necessarily the best one).

(3) A goal-based agent has an agenda. Unlike a simple reflex agent that makes decisions based solely on the current environment, a goal-based agent is capable of thinking beyond the present moment to decide the best actions to take in order to achieve its goal.

(4) Goal-based agents expand the capabilities of the model-based agent by having the 'goal' information. They choose an action so that they can achieve the goal.

(5) These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not.

(6) A goal-based algorithm uses searching and planning to act out the most efficient solution to achieve the goal.
Remark : A conclusion is a statement supplied to a goal-based agent, but it is not itself considered a goal-based agent.

[Figure: from "what the world is like now" the agent projects the result of each candidate action using its knowledge of "what my actions do", compares the projected states with its Goals, and decides "what action I should do now".]

Fig. 2.6.1 : Goal-based agent diagram
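Point (2) above can be illustrated with a short sketch: from the set of possible actions the agent picks one that improves progress towards the goal. The grid world, the goal position and the distance measure used here are made-up assumptions, not part of the book's example.

```python
# Hypothetical 2-D grid world: the agent's goal is to reach GOAL.
GOAL = (3, 3)
ACTIONS = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

def distance(pos, goal):
    # Manhattan distance as a simple measure of "progress towards the goal".
    return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

def goal_based_agent(pos):
    """Pick an action whose resulting state is closest to the goal."""
    if pos == GOAL:
        return None  # goal achieved, nothing to do
    def result_distance(a):
        dx, dy = ACTIONS[a]
        return distance((pos[0] + dx, pos[1] + dy), GOAL)
    return min(ACTIONS, key=result_distance)

print(goal_based_agent((0, 0)))  # Up (ties broken by dict order; Right is equally good)
```

Note that this one-step look-ahead is the simplest possible goal-based behaviour; a full goal-based agent would search over whole sequences of actions, as point (5) describes.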

2.7 UTILITY-BASED AGENT

UQ. Explain utility based agent with block diagram.
(MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 2(b), Dec. 18; Q. 2(a), ...)
(1) A utility-based agent is like the goal-based agent, but with a measure of 'how happy' an action would make it, rather than the goal-based binary feedback [happy, unhappy].

(2) This kind of agent provides the best solution. An example is a route recommendation system, which finds the 'best' route to reach a destination.

(3) Agents which are developed with their end uses as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state.

[Figure: the agent's sensors feed "what the world is like now" into its condition-action rules and utility measure to decide "what action I should do now", and the chosen action is carried out through its actuators.]

Fig. 2.7.1 : Utility-based agent in AI

(4) Goal-based agents are important as they are used to expand the capabilities of the model-based agent by having the 'goal' information.

(5) They choose an action in order to achieve the goal. A utility-based agent acts based not only on goals but also on the best way of achieving the goal.

(6) The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action to perform. The utility function maps each state to a real number to check how efficiently each action achieves the goals.
(7) In artificial intelligence, a utility function assigns values to certain actions that the AI can take. An AI agent's preferences over all possible outcomes can be captured by a function that maps the outcomes to a utility value: the higher the number, the more the agent likes that outcome.

(8) In economics, the utility function measures the welfare or satisfaction of a consumer as a function of the consumption of real goods, such as food or clothing. The utility function is widely used in rational choice theory to analyze human behavior.


(9) Utility theory bases its beliefs upon an individual's preferences. It is a theory postulated in economics to explain the behavior of individuals, based on the premise that people can consistently rank-order their choices according to their preferences. We can state that an individual's preferences are intrinsic.
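Points (6) and (7) can be sketched as follows, using the route recommendation example from point (2). The routes and their (time, toll) outcomes are invented values; the utility function maps each outcome to a real number and the agent picks the action with maximum utility.

```python
# Hypothetical candidate routes to a destination: (travel time in minutes, toll).
ROUTES = {
    "highway": (40, 120),
    "city":    (70, 0),
    "scenic":  (90, 0),
}

def utility(outcome):
    """Map an outcome to a real number; higher is better.
    Here we penalize travel time, plus toll weighted at 0.2 per unit
    (the weighting is an arbitrary assumption)."""
    time_min, toll = outcome
    return -(time_min + 0.2 * toll)

def utility_based_agent(options):
    # Choose the alternative whose outcome has the maximum utility.
    return max(options, key=lambda name: utility(options[name]))

print(utility_based_agent(ROUTES))  # highway
```

Unlike a pure goal-based agent, which only checks whether a route reaches the destination, this agent ranks all the routes that do and selects the one it "likes" most.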
2.8 COMPARISON OF MODEL BASED AGENT AND UTILITY BASED AGENT

UQ. Compare Model based Agent and Utility based Agent. (MU - Q. 4(b), May 17, 4 Marks)
Sr. No. | Model based Agent | Utility based Agent

1. | Goal-based agents are very important, as they are used to expand the model-based agent by having the goal information. | A utility based agent acts based not only on goals but also on the simplest way of achieving the goal.

2. | They choose an action in order that they will achieve the goal. | A utility based agent makes decisions based on the maximum utility of its choices.

3. | A model based reflex agent uses percept history and internal memory to make decisions about the 'model' of the world around it. | It is its usefulness (utility) that makes the agent distinct from its counterparts.

4. | Internal memory allows these agents to store some of their navigation history, to help understand things about their current environment even when everything that they need to know cannot be directly observed. | A goal-based agent makes decisions based simply on achieving a set goal. Suppose you want to travel from Pune to Mumbai: Mumbai is the goal, and the goal-based agent will get you there.

5. | A model-based agent uses GPS to understand its location and predict upcoming drivers. | But if you come across a closed road, the utility-based agent will analyse other routes to get you there, and it will select the best option for maximum utility. Hence, the utility-based agent is a step above the goal-based agent.

6. | Model-based reflex agents are made to deal with partial accessibility. They do this by keeping an internal state that depends on what they have seen before, so they hold information on the unobserved aspects of the current state. | A utility based agent is more agile and sophisticated, since it has some decision-making capabilities.
2.9 MODEL BASED AGENTS AND GOAL BASED AGENTS
(A) Model Based Agents
(1) Model based reflex agents are made to deal with partial accessibility.
(2) They do this by keeping track of the part of the world they can see.
(3) The agent keeps an internal state that depends on what it has seen before.
(4) So it holds information on the unobserved aspects of the current state.
(B) Goal-Based Agents

(1) A goal-based agent has an agenda. It operates on a goal set in front of it and makes decisions based on how best to reach that goal.
(2) A goal-based agent is capable of thinking beyond the present moment to decide the best actions to take to achieve its goal.
(3) A goal-based agent operates as a search and planning function.
(4) It targets the goal ahead and finds the right action in order to reach it.
Syllabus Topic : Environment Types : Deterministic, Stochastic, Static, Dynamic, Observable, Semi-observable, Single Agent, Multi Agent

2.10 TYPES OF ENVIRONMENT

The agent environment in artificial intelligence is classified into different types. The environment is categorized based on how the agent deals with it. The classification is as follows:

1. Fully observable & Partially observable
2. Static & Dynamic
3. Discrete & Continuous
4. Deterministic & Stochastic
5. Single-agent & Multi-agent
6. Episodic & Sequential
7. Known & Unknown
8. Accessible & Inaccessible

1. Fully observable & Partially observable

• As the name suggests, the environment of the agent is observed all the time. At each point in time, the complete state of the environment is sensed or accessed by the sensors. This type of completely observed environment is called fully observable; if it is not sensed or observed continuously with varying time, it is partially observable.

• An environment that is not at all observed or accessed by any sensor at any time is called an unobservable environment. Since the agent doesn't need to maintain any internal state to keep track of the world, a fully observable environment is more convenient.

[Fig. 2.10.1 : In a fully observable environment the sensors access the complete state of the environment at each point in time, so the agent needs no internal state to keep track of the world.]

• In real life, chess is an example of a fully observable environment, because each player of the chess game gets to see the whole board. Another example of a fully observable environment is the road: while driving a car on the road (environment), the driver (agent) can see all the traffic signals, conditions, and pedestrians on the road at a given time and drive accordingly.

• A card game can be considered as an example of a partially observable environment. Here some of the cards are discarded into a pile face down. The user is only able to see his own cards; the used cards and the cards reserved for the future are not visible to the user.

[Fig. 2.10.2 : Noisy and inaccurate sensors plus missing states make an environment partially observed.]

2. Static & Dynamic

• An environment that always remains unchanged by the action of the agent is called a static environment. A static environment is the simplest and is easy to deal with, since the agent doesn't need to keep track of the world during an action. An environment is said to be dynamic if it changes by the action of the agent; a dynamic environment keeps constantly changing. An environment that keeps constant with time while the performance score of the agent changes with time is called a semi-dynamic environment.

• The crossword puzzle can be considered as an example of a static environment, since the problem in the crossword puzzle is set at the beginning: the environment remains constant, and it doesn't expand or shrink; it remains the same.

• For a dynamic environment, we can consider a roller coaster ride as an example. The environment keeps changing at every instant once it is set in motion. The height, mass, velocity, different energies (kinetic, potential), centripetal force, etc. will vary from time to time.
3. Discrete & Continuous

• An environment with a finite number of possibilities is called a discrete environment. For a discrete environment, there is a finite number of actions or percepts to be performed to reach the final goal. For a continuous environment, the number of percepts remains unknown and continuous.

• In a chess game, the possible movements for each piece are finite. For example, the king can move only one square in any direction, provided that square is not attacked by an opponent piece. So the possible movements for a particular piece are fixed, and chess can be considered as an example of a discrete environment, although the number of movements will vary for each game.

• Self-driving cars are an example of a continuous environment. The surroundings change over time: the traffic rush, the speed of other vehicles on the road, etc. vary continuously over time.
4. Deterministic & Stochastic

• An environment that involves no uncertainty is called a deterministic environment. For a deterministic environment, the upcoming condition or state can be determined by the present condition or state of the environment and the present action selected by the agent. An environment with a random nature is called a stochastic environment.

• In a stochastic environment the upcoming state cannot be determined from the current state and the agent's action. Most real-world AI applications are classified under the stochastic type. An environment that is only partially observable appears stochastic to the agent.

• For each piece on the chessboard, the present position determines the next possible actions. There is no uncertainty: which steps can be taken by a piece from its present position can be determined, and so chess can be grouped under a deterministic environment.

• But for a self-driving car, the coming actions can't be determined in the present state, because the environment is varying continuously. Maybe the car has to press the brake, or maybe press the accelerator fully, depending on the environment at that time. The actions cannot be determined in advance, and so it is an example of a stochastic environment.
5. Single-Agent & Multi-Agent

• An environment that consists of only a single agent is called a single-agent environment. All the operations over the environment are performed and controlled by this agent only. If the environment consists of more than one agent conducting the operations, then such an environment is called a multi-agent environment.

• In a vacuum-cleaning environment, the vacuum cleaner is the only agent involved in the environment, so it can be considered as an example of a single-agent environment.

• Multi-Agent Systems (MAS), computer-based environments with multiple interacting agents, are the best example of a multi-agent environment. Computer games are a common MAS application: biological agents, robotic agents, computational agents, software agents, etc. are some of the agents sharing the environment in a computer game.
6. Episodic & Sequential

• An episodic environment has a series of actions where the current action of an agent will not have any influence on future actions. It is also called a non-sequential environment. Sequential or non-episodic environments are those where the current action of the agent will affect future actions.

• For a classification task, the agent receives the information from the environment at each point in time, and actions are performed only on those pieces of information. The current action doesn't have any influence on the future one, so the task can be grouped under an episodic environment.

• But for a chess game, the current action of a particular piece can influence future actions. If a piece takes a step forward now, the next actions depend on where it has moved. So chess is sequential.
7. Known & Unknown

• Known & unknown describe the agent's state of knowledge rather than a property of the environment. If all the possible results of all the actions are known to the agent, then it is a known environment. If the agent is not aware of the results of the actions and needs to learn about the environment to make decisions, it is called an unknown environment.

8. Accessible & Inaccessible

• If the sensors of the agent can have complete access to the state of the environment, or the agent can access complete information about the environmental state, then it is called an accessible environment. Otherwise it is inaccessible: the agent doesn't have complete access to the environmental state.

Task environment | Observable | Deterministic | Episodic | Static | Discrete | Agents
Crossword Puzzle | Fully | Deterministic | Sequential | Static | Discrete | Single
Taxi driving | Partially | Stochastic | Sequential | Dynamic | Continuous | Multi
Medical Diagnosis | Partially | Stochastic | Sequential | Dynamic | Continuous | Single
Image Analysis | Fully | Deterministic | Episodic | Semi-dynamic | Continuous | Single

2.10.1 Complete vs. Incomplete Environments

Complete AI environments are those in which, at any given time, we have enough information to complete a branch of the problem. Chess is a classic example of a complete AI environment. Poker, on the other hand, is an incomplete environment, as AI strategies can't anticipate many moves in advance and, instead, they focus on finding a good 'equilibrium' at any given time.
2.10.2 Competitive vs. Collaborative Environments

Competitive AI environments pit AI agents against each other in order to optimize a specific outcome. Games such as Go or chess are examples of competitive AI environments. Multiple agents coordinating to avoid collisions, or smart-home sensor interactions, are examples of collaborative AI environments.
Module 3

CHAPTER 3 : Solving Problems by Searching

Definition, State space representation, Problem as a state space search, Problem formulation, Well-defined problems, Solving Problems by Searching, Performance evaluation of search strategies, Time Complexity, Space Complexity, Completeness, Optimality. Uninformed Search : Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search. Informed Search : Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search. Local Search : Hill Climbing Search, Simulated Annealing Search. Optimization : Genetic Algorithm. Game Playing, Adversarial Search Techniques, Minimax Search, Alpha-Beta Pruning.

3.1 Definition................................................................................................................................... ,_... _ ...............___ 3-4


3.2 Searching............................................................................................................................. .................- ...·--·· 34
i; , 3.2.1 Node Representation in Search Tree ....................................................................................................·-·-·3-4
3.3 Problem formulation .....................................................................................-..................................___..._ .....- .. 3-5
3.3.1 Problem Solving Agent ...................................................................................................•--··--·-··--3-5
3.3.2 Simple Problem•Solving Agent. .........................................................................................____.._ _ _ 3-5
3.4 Performance Measures .................................................................................................................................... 3-6
3.4.1 Types of Performance Measures ....................................................................................................... 3-6
3.4.2 Uninformed Search (Blind Search) .................................................................................................... 3-6
UQ. Explain with example various uninformed search techniques. .......................................................... 3-6
3.5 Depth First Search (DFS) ............................................................................................................_ _ ..____.. __ 3-7
3.5.1 DFS Algorithm ....................................................................................................................._ _ _ _ _ _ 3-7
3.5.2 Performance Measures of DFS ..............................................................................._ ..... _ . _ _ _ _.. 3-9
3.5.3 Advantages and Disadvantages of Depth First Search ......................................._ _ _ 3-10
3.5.4 Applications of DFS ................................................................................................. · - - - - - · - - - - 3-10
3.5.5 Solved Example on DFS .................................................................... ,...................................- ...· ..··-·--·3-10
UEx. 3.5.1 (MU - Q. 2(b), May 18, 10 Marks; Q. 2(a), Dec. 15, 10 Marks) ............................................. 3-10
UEx. 3.5.2 (MU - Q. 5(b), May 16, 10 Marks) .......................................................................................... 3-10
3.6 Breadth•first Search (BFS) .............................................................................................................._........_...........___. 3-11
3.6.1 BFS Traversal Algorithm ......................................................................._,............- ..................._ ................ 3-11
3.6.2 Performance Measures for BFS ........................................................._,......... _.............- .............................. 3-12
3.6.3 BFS Algorithm for Extra Memory................................................................_,.........................................._ .... 3-12
3.6.4 Execution of BFS Algorithm .................................................................................................-...... .3-13
3.6.5 Advantages and Disadvantages of BFS.......................................................,.....................,........................... 3-13
3.6.6 Applications of BFS Algorithm ................................................................................................_ ........._.... 3-13
3.6.7 Performance Measures of BFS ......................................................................................................................3-14
3.6.8 Limitations of Breadth Arst Search .................................................................•.•.•••••••••••••••••••••••••••••••••••••••.
... 3-14
3.6.9 Example of Breadth Arst Search ............................................................................................................
....... 3-14
3.7 Depth Limited Search ..............................................................................................................................
.....................3-15
3.8 Iterative Deepening Search Technique (IDS or IOOFS) ........................................................................
....................... 3-16
UQ. Explain Iterative Deepening search algorithms based on performance measure with justification;
complete, optimal, Time and Space complexity. .................................... 3-16
3.8.1 IOOFS Algorithm ............................................................................................................................................
3-16
3.8.2 Advantages of IOOFS..............................................................................................................................
....... 3-18
3.9 UnifOITTl Cost Search ..............................................................................................................................
....................... 3-18
3.9.1 Algorithm of U.C.S..........................................................................................................................................
3-18
3.9.2 Execution of Algorithm of Unifonn Cost Search ..........................................................................................
... 3-18
3.9.3 Example of Unlfonn Cost Search (Linear Displacement) ........................................................................
....... 3-19
3.9.4 Advantages and Disadvantages of U.C.S .....................................................................................................
3-22
3.9.5 Perfonnance Measures ..............................................................................................................................
....3-22
3.9.6 Solved Example on Curved Oisplacement .....................................................................................................
3-22
UEx. 3.9.1 {MU - Q. 1(b), Dee. 16, 5 Mam) ..........................................................................................
.....................3-22
3.10 Bidirectional Search ..............................................................................................................................
..............·-·-· 3-23
3.11 lnfonned Search ..............................................................................................................................
.............................3-24
3.11.1 Example for lnfonned Search............................................................................................................
.....- .... 3-25
3.11.2 Algorithm for Beam Search ............................................................................................................
................ 3-25
3.12 Heuristic Function ..............................................................................................................................
...........................3-26
UQ. What is heuristic function ? (MU - Q. 3(b), Dec. 18, 10 Marks; Q. 1(c), May 18, 4 Marks; Q. 1(c), May 17, 4 Marks; Q. 3(b), Dec. 16, 5 Marks; Q. 1(c), Dec. 15, 3 Marks) ........................................... 3-26
3.12.1 Simple Heuristic Functions ............................................................................................................
................. 3-26
3.12.2 Problem Characteristics for Heuristic Search ........................................................................
........................3-27
3.12.3 Is the Problem Decomposable ? ............................................................................................................
........3-27
3.12.4 Can Solution Steps be Ignored or Undone ? ..........................................................................................
....... 3-27
3.13 Best Flrst Search ..............................................................................................................................
............................3-28
3.13.1 Steps of the Search Process In BFS ..........................................................................................
.................... 3-29
3.13.2 Algorithm for best-first Search ..................................................................................................................
....... 3-29
3.14 Greedy Best First Search and A• Best First Search ........................................................................
............................. 3-29
UQ. Explain A* Algorithm with an example. (MU - Q. 2(b), May 19, 5 Marks; Q. 6(c), May 18, 5 Marks; Q. 6(b), May 17, 5 Marks) ........................................................................................................... 3-29
3.14.1 Greedy Best First Search Algorithm ..........................................................................................
..................... 3-30
3.14.2 Solved Examples..............................................................................................................................
..............3-31
3.15 A• and AO• Search ..............................................................................................................................
.........................3-32
3.16 Admlsibility of A• ........................................................................................................................-
................................ 3-33
3.17 Local Search Algorithms ............................................................................................................
...................................3-34
UQ. Write short note on : Local search Algorithms . ............. 3-34
3.18 Hill Climbing Algorithm ..............................................................................................................................
.................... 3-34
UQ. Explain Hill Climbing and its Drawback in details.
MU - a. 2 b. Ma 19. 5 Marks. a. 6 b. P.ta 18. 5 Marks. a. 2 A. Dec. 17. 10 r.larks .
• . .......................................................................................................................... 3-34
UQ. Explain Hill-<:limbing algorithm with an example. l\.1U • 0. 1 d . Dec. 17. 5 P.larks .....................................
3-34
3.18.1 Algorithm for Hill-Climbing Procedure ..........................................................................................
.................. 3-34
3.18.2 Explanation of Hill Climbing Algorithm ..........................................................................................
................. 3-35
3.18.3 Simple Hill Climbing Algorithm ............................................................................................................
...........3-35
3.19 Steepest-Ascent Hill Climbing Algorithm ..........................................................................................
............................ 3-35
3.19.1 Algorithm : Steepest-ascent Hill Climbing ..........................................................................................
............3-36
3.19.2 Limitations of Steepest-ascent Hill Climbing ..........................................................................................
........3-36

(MS-126) Tech-Neo Publications...A SACHIN SHAH Venture


Artificial Intelligence (MU-AI & DS / Electronics) (Solving Problems by Searching) ... Page No. (3-3)
      UQ. State limitations of steepest-ascent hill climbing. (MU - Q. 2(b), May 19, 5 Marks;
          Q. 6(b), May 18, 5 Marks; Q. 2(A), Dec. 17, 10 Marks) ........................... 3-36
      UQ. What are the problems / frustrations that occur in hill climbing technique ?
          Illustrate with an example. (MU - Q. 4(b), Dec. 15, 6 Marks;
          Q. 1(d), May 17, 5 Marks) ....................................................... 3-36
3.19.3 Ways of Dealing with Local Maxima, Plateau and Ridge Problems ...................... 3-37
3.20  Simulated Annealing (SA) ............................................................ 3-37
      UQ. Define the term simulated annealing. Explain Simulated Annealing with suitable
          example. (MU - Q. 2(A), Dec. 17, 10 Marks; Q. 2(b), May 16, 10 Marks) ........... 3-37
3.20.1 Types and Use of Simulated Annealing ............................................... 3-38
3.20.2 Simulated Annealing in Machine Learning ............................................ 3-38
3.21  Parameters for S.A. ................................................................. 3-38
3.22  Genetic Algorithm ................................................................... 3-39
      UQ. Write a short note on genetic algorithm. ........................................ 3-39
3.22.1 Comparison between Traditional and Genetic Algorithm ............................... 3-39
3.22.2 Basic Terminology .................................................................. 3-39
      UQ. Define the terms chromosome, fitness function, crossover and mutation as used in
          Genetic algorithms. (MU - Q. 5(a), May 18, 10 Marks) ............................ 3-39
      UQ. Explain how genetic algorithms work. Define the terms chromosome, fitness function,
          crossover and mutation as used in Genetic algorithms. ........................... 3-39
3.22.3 Optimisation Problems .............................................................. 3-41
3.22.4 Initialisation ..................................................................... 3-41
3.22.5 Selection .......................................................................... 3-41
3.22.6 Genetic Operators .................................................................. 3-41
      UQ. Explain how genetic algorithms work. ............................................ 3-41
3.22.7 Advantages of Genetic Algorithm .................................................... 3-42
3.22.8 Limitations of Genetic Algorithm ................................................... 3-43
3.22.9 Applications of Genetic Algorithm .................................................. 3-43
3.23  Adversarial Search .................................................................. 3-43
3.23.1 Types of Games in AI ............................................................... 3-44
3.23.2 Characteristics of Adversarial Search (A.S.) ....................................... 3-45
3.23.3 Comparison of Search and Games ..................................................... 3-45
3.24  Techniques Required to Get the Best Optimal Solution ................................ 3-46
3.25  Game Playing ........................................................................ 3-46
3.25.1 Zero Sum Game ...................................................................... 3-46
      UQ. Write short note on : Game Playing. ............................................. 3-46
3.25.2 Elements of Game Playing Search .................................................... 3-47
      UQ. Draw a game tree for a tic-tac-toe problem. ..................................... 3-47
3.25.3 Some More Examples of Game Playing / Adversarial Search ............................ 3-48
3.25.4 Types of Algorithms in Adversarial Search .......................................... 3-48
3.26  Game Tree ........................................................................... 3-48
3.26.1 Tic-Tac-Toe Problem ................................................................ 3-51
3.26.2 Limitations of Game Trees .......................................................... 3-51
3.27  Minmax Procedure .................................................................... 3-51
      UQ. Explain Min max and Alpha beta pruning algorithms for adversarial search with
          example. (MU - Q. 5(b), May 17, 10 Marks) ....................................... 3-51
3.27.1 Properties of Min Max Algorithm .................................................... 3-52
      UEx. 3.27.1 (MU - Dec. 15, Dec. 19, 10 Marks) ....................................... 3-53
      UEx. 3.27.2 (MU - May 18, 10 Marks) ................................................. 3-53
•  Chapter Ends ........................................................................... 3-54



Syllabus Topic : Definition, State Space Representation

3.1 DEFINITION

The state space representation forms the basis of most AI methods. Its structure corresponds to the structure of problem solving in two important ways. It allows for a formal definition of a problem, as the need to convert some given situation into some desired situation using a set of permissible operations.

Syllabus Topic : Problem as a State Space Search

3.2 SEARCHING

GQ. What is searching ?

Search plays a major role in solving many artificial intelligence (AI) problems. Search is a universal problem-solving mechanism in AI. In many problems, the sequence of steps required to solve a problem is not known in advance but must be determined by systematic trial-and-error exploration of alternatives.

Search techniques try to "pre-play" the game by evaluating future states (game tree search) and may also use heuristics to prune bad choices or to speed things up. They can, in theory, make an exact and perfect choice, but they are slow.

The problems that are addressed by AI search algorithms fall into three general classes :
1. Single-agent path-finding problems
2. Two-player games
3. Constraint-satisfaction problems

3.2.1 Node Representation in Search Tree

A binary search tree (BST) is a tree in which all the nodes follow the properties mentioned below :
(i) The value of the key of the left sub-tree is less than the value of its parent (root) node's key.
(ii) The value of the key of the right sub-tree is greater than or equal to the value of its parent (root) node's key.

Thus, a BST divides all its sub-trees into two segments, the left sub-tree and the right sub-tree, and it can be defined as : left-sub-tree (keys) < node (key) ≤ right-sub-tree (keys).

Representation

A BST is a collection of nodes arranged so that they maintain the BST properties. Each node has a key and an associated value. While searching, the desired key is compared to the keys in the BST and, if found, the associated value is retrieved. A pictorial representation of a BST is given in Fig. 3.2.1.

[Fig. 3.2.1 : A binary search tree with root key 27]

Note that the root node key (27) has all lesser-valued keys in the left sub-tree and the higher-valued keys in the right sub-tree.

Basic Operations

The basic operations on a tree are :
(i) Search : Searches for an element in a tree.
(ii) Insert : Inserts an element in a tree.
(iii) Pre-order Traversal : Traverses a tree in a pre-order manner.
(iv) In-order Traversal : Traverses a tree in an in-order manner.
(v) Post-order Traversal : Traverses a tree in a post-order manner.
(vi) Remark : There must be no duplicate nodes.

Remark

Let us suppose we want to search for a number :
(i) We start at the root.
(ii) We compare the value to be searched with the value of the root.
(iii) If it is equal, we complete the search.
(iv) If it is lesser, we go to the left sub-tree, since in a BST all the elements in the left subtree are lesser and all the elements in the right subtree are greater.



(v) In this traversal, at each step we discard one of the sub-trees.
(vi) We go on reducing like this till we find the element or till our search is reduced to only one node.
(vii) The search here is a binary search, and hence the tree is called a binary search tree.

Note : After reaching the end, insert the new node at the left (if it is less than the current node), else at the right.

Syllabus Topic : Problem Formulation, Well-defined Problems

3.3 PROBLEM FORMULATION

GQ. Briefly explain problem formulation.

Problem solving consists of using generic or ad hoc methods, in an orderly manner, for finding solutions to problems. Some of the problem-solving techniques use artificial intelligence, computer science, engineering, mathematics, or psychoanalysis.

Problems can be classified into two different types, ill-defined and well-defined, from which appropriate solutions are to be made.
1. Ill-defined problems are those that do not have clear goals, solution paths, or expected solutions.
2. Well-defined problems have specific goals, clearly defined solution paths, and clear expected solutions. These problems also allow for more initial planning than ill-defined problems.

Being able to solve problems sometimes involves dealing with pragmatics (logic) and semantics (interpretation of the problem). It requires the ability to understand what the goal of the problem is and what rules could be applied to represent the key to solving the problem. Sometimes the problem requires some abstract thinking and a creative solution.

Problem types

1. Deterministic, fully observable ⇒ single-state problem
   The agent knows exactly which state it will be in; its solution is a sequence.
2. Non-observable ⇒ conformant problem
   The agent may have no idea where it is; the solution (if any) is a sequence.
3. Nondeterministic and/or partially observable ⇒ contingency problem
   Percepts provide new information about the current state.
4. Unknown state space ⇒ exploration problem ("online")
   The solution is a tree or policy; search and execution are often interleaved.

3.3.1 Problem Solving Agent

GQ. Describe problem solving agent.

•  Problem-solving agents : A goal formulation, based on the current situation and the performance measure, is required for problem solving.
•  Problem formulation is the process of deciding what actions and states to consider, given a goal. In general, an agent with several options for action of unknown value can decide what to do by first examining the different possible sequences of actions (that lead to states of known value) and then choosing the best sequence.
•  A search algorithm takes a problem as input and returns a solution in the form of an action sequence.

3.3.2 Simple Problem-Solving Agent

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    persistent : seq, an action sequence, initially empty
                 state, some description of the current world state
                 goal, a goal, initially null
                 problem, a problem formulation

    state ← UPDATE-STATE(state, percept)
    if seq is empty then
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
        if seq = failure then return a null action
    action ← FIRST(seq)
    seq ← REST(seq)
    return action
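The SIMPLE-PROBLEM-SOLVING-AGENT pseudocode can be sketched in Python as follows. This is a minimal sketch: the `update_state`, `formulate_goal`, `formulate_problem` and `search` callables are assumed to be supplied by the caller and are not part of any library.

```python
def make_agent(update_state, formulate_goal, formulate_problem, search):
    """Build an agent function closing over its persistent variables."""
    state = None
    seq = []                             # action sequence, initially empty

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                      # no plan left: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []  # SEARCH may fail -> empty plan
            if not seq:
                return None              # a null action
        return seq.pop(0)                # FIRST(seq); REST(seq) remains queued

    return agent
```

Note the key design point of the pseudocode: the agent searches only when its current plan is exhausted, then blindly executes the stored action sequence one step per percept.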



Syllabus Topic : Performance Evaluation of Search Strategies

3.4 PERFORMANCE MEASURES

•  Performance measurement is the process of collecting, analysing and/or reporting information regarding the performance of an individual, group, organization, system or component.
•  Moulin defines the term with a forward-looking organizational focus : 'the process of evaluating how well organizations are managed and the value they deliver for the customers and other stakeholders'.
•  The most common such frameworks include :
(I) Balanced scorecard : It is used by organisations to manage the implementation of corporate strategies.
(II) Key performance indicator : It is a method for choosing important / critical performance measures, usually in an organisational context.
•  Performance measures for algorithms are as follows :
1. Time complexity : It is a measure of the amount of time an algorithm takes to execute.
2. Space efficiency : It is a measure of the amount of memory needed for an algorithm to execute.
3. Complexity theory : It is the study of algorithm performance.
4. Function dominance : It is a comparison of cost functions.
5. Performance of an algorithm means predicting the resources which are required by an algorithm to perform its task.
•  It means that when we have multiple algorithms to solve a problem, we need to select a suitable algorithm to solve that problem.
•  To compare algorithms, we use a set of parameters or elements such as the memory required by the algorithm, the execution speed of the algorithm, ease of understanding, ease of implementation, etc.
•  To analyse an algorithm, we consider only the space and time required by that particular algorithm, and we ignore all the remaining elements.
•  Based on this information, performance analysis of an algorithm can be defined as : 'Performance analysis of an algorithm is the process of calculating the space and time required by that algorithm.'
•  Performance analysis of an algorithm is performed by using the following measures :
(i) Space complexity : The space required to complete the task of that algorithm. It includes program space and data space.
(ii) Time complexity : The time required to complete the task of that algorithm.
This involves the following steps :
(a) Implement the algorithm completely.
(b) Determine the time required for each basic operation.
(c) Identify the unknown quantities that can be used to describe the frequency of execution of the basic operations.

3.4.1 Types of Performance Measures

(a) Workload or output measures : These measures indicate the amount of work performed or the number of services received.
(b) Efficiency measures.
(c) Effectiveness or outcome measures.
(d) Productivity measures.

Syllabus Topic : Depth First Search : Time Complexity, Space Complexity, Completeness, Optimality. Uninformed Search

3.4.2 Uninformed Search (Blind Search)

GQ. What is blind or uninformed search ?
UQ. Explain with example various uninformed search techniques.

(1) Uninformed searches have no additional information about states other than that provided in the problem definition. They can only generate successors and distinguish between goal states and non-goal states.
(2) These are commonly used search procedures which explore all the alternative options during the search process. They do not have any domain-specific knowledge. All they need are the initial state, the final state and a set of legal operators.

The most important uninformed search techniques are as follows :
1. Depth first search.
2. Breadth first search.
3. Uniform-cost search.
4. Depth-limited search.
5. Iterative deepening search.
6. Bidirectional search.

3.5 DEPTH FIRST SEARCH (DFS)

(1) Depth First Search (DFS) is an algorithm for traversing or searching tree or graph data structures.
(2) The algorithm starts at the root node (in the case of a graph, selecting some arbitrary node as the root node) and explores as far as possible along each branch before backtracking. Refer Fig. 3.5.1.
(3) DFS is used in topological sorting, scheduling problems, cycle detection in graphs and solving puzzles; other applications involve analysing networks, e.g. testing whether a graph is bipartite.

[Fig. 3.5.1 : Depth First Search (DFS) order in which the nodes are visited]

3.5.1 DFS Algorithm

The DFS algorithm traverses a graph in a depthward motion and uses a stack to remember the next vertex from which to start a search when a dead end occurs in any iteration.

Refer Fig. 3.5.2. In this example, the DFS algorithm traverses from S to A to D to G to E to B first, then to F and lastly to C. It follows these rules :

► Rule 1 : Visit the adjacent unvisited vertex. Mark it as visited and display it.
► Rule 2 : If no adjacent unvisited vertex is found, pop a vertex from the stack. (All the vertices which have no adjacent unvisited vertices are popped in turn.)
► Rule 3 : Repeat Rule 1 and Rule 2 till the stack becomes empty.

[Fig. 3.5.2 : Example graph for DFS traversal]
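Rules 1 to 3 can be sketched with an explicit stack. This is a minimal Python sketch, not the textbook's pseudocode: the graph is assumed to be given as an adjacency list (a dict), and neighbours are pushed in reverse alphabetical order so that the alphabetically smallest unvisited vertex is expanded first.

```python
def dfs(graph, start):
    """Return the order in which DFS visits the vertices of `graph`."""
    visited, order, stack = set(), [], [start]
    while stack:                          # Rule 3: repeat until the stack is empty
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)                    # Rule 1: visit and mark the vertex
        order.append(v)
        for w in sorted(graph.get(v, ()), reverse=True):
            if w not in visited:          # Rule 2 is implicit: dead ends simply
                stack.append(w)           # pop back to an earlier vertex
    return order
```

The stack holds at most one path's worth of frontier vertices at a time, which is why DFS's memory use grows with the depth of the tree rather than its breadth.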



The traversal proceeds as follows; Figs. 3.5.3 to 3.5.6 show the stack contents at each step.

► Step 1 : Initialise the stack.
► Step 2 : Mark S as visited and put it onto the stack. Explore any unvisited adjacent node from S. We have three candidate nodes, and we can pick any of them. Here we take the nodes in alphabetical order.
► Step 3 : Mark A as visited and put it onto the stack. Explore any unvisited adjacent node from A. Both S and D are adjacent to A, but we want unvisited nodes only.
► Step 4 : Visit D, mark it as visited and put it onto the stack. Here we have nodes B and C, which are adjacent to D, and both are unvisited. But again we choose in alphabetical order.

► Step 5 : We choose B, mark it as visited and put it onto the stack. Here B does not have any unvisited adjacent node, so we pop B from the stack.
► Step 6 : We check the stack top to return to the previous node and check whether it has any unvisited nodes. Here we find D on the top of the stack.
► Step 7 : The only unvisited node adjacent to D is C, so we visit C, mark it and put it onto the stack.

[Figs. 3.5.7 to 3.5.9 show the stack contents at each step.]

•  As C does not have any unvisited adjacent node, we keep popping the stack till we find a node that has an unvisited adjacent node. Here there is none, the stack is empty, and the program is over.

3.5.2 Performance Measures of DFS

The performance measuring factors of an algorithm are as follows :
•  The two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc.
•  Performance measure is generally defined as regular measurement of outcomes and results, which generates reliable data on the effectiveness and efficiency of programs.
•  There are four ways to measure the performance of an algorithm :
(i) Completeness : DFS is complete if the search tree is finite; for a given finite search tree, DFS will find a solution if one exists.
(ii) Optimality : DFS is not optimal; the number of steps in reaching the solution, or the cost spent in reaching it, may be high.



(iii) Time complexity : The time complexity of DFS, if the entire tree is traversed, is O(V), where V is the number of nodes. For a directed graph, the sum of the sizes of the adjacency lists of all the nodes is E, so the time complexity in this case is O(V) + O(E) = O(V + E). For an undirected graph, each edge appears twice.
(iv) Space complexity : DFS goes along a single 'branch' all the way down and uses a stack implementation, so the height of the tree matters. The space complexity of DFS is O(h), where h is the maximum height of the tree.

3.5.3 Advantages and Disadvantages of Depth First Search

Advantages of DFS
1. The memory requirement is linear with respect to the number of nodes.
2. Less time and space complexity than BFS.
3. A solution can be found without much additional search.

Disadvantages of DFS
1. There is no guarantee that it will find a solution.
2. The major drawback of depth-first search is the determination of the depth till which the search proceeds. This depth is called the cut-off depth, and its value is essential because otherwise the search will go on and on. If the cut-off depth is small, a solution may not be found; if the cut-off depth is large, the time complexity will be more.

3.5.4 Applications of DFS
1. Finding connected components.
2. Topological sorting.
3. Finding the bridges of a graph.

3.5.5 Solved Example on DFS

UEx. 3.5.1
Consider the graph shown in Fig. Ex. 3.5.1. Starting from A, execute DFS; the goal node is G. Show the order in which the nodes are expanded. Assume that the alphabetically smaller node is expanded first to break ties.

[Fig. Ex. 3.5.1 : the example graph]

Soln. : The order of expansion is shown in Fig. Ex. 3.5.1(a).

[Fig. Ex. 3.5.1(a) : DFS expansion order]

UEx. 3.5.2 (MU - Q. 5(b), May 16, 10 Marks)
Consider the graph given in Fig. Ex. 3.5.2. Assume that the initial state is A and the goal state is G. Find a path from the initial state to the goal state using DFS. Also report the solution cost.

[Fig. Ex. 3.5.2 : the example graph]
Soln. : A is the given initial state and G is the goal node.

► Step (I) : Place the starting node onto the stack : [A]
► Step (II) : Now the stack is not empty and A is not our goal node. Hence we move to the next step.
► Step (III) : The neighbours of A are B and C : [B, C]
► Step (IV) : Now B is the top node of the stack. Its neighbours are E and D : [E, D, C]
► Step (V) : E is the top node of the stack. We look for its neighbours, but E has no unvisited neighbour in the graph, so we pop E : [D, C]
► Step (VI) : Now D is the top node; we pop D and push its unvisited neighbours onto the stack : [G, F, C]
► Step (VII) : Now G is the top node of the stack, which is our goal node.
► Step (VIII) : The solution path is A → B → E → D → G.

Syllabus Topic : Breadth First Search

3.6 BREADTH-FIRST SEARCH (BFS)

GQ. Comment upon the statement that breadth-first search is a special case of uniform cost search.
GQ. Explain breadth first search with its algorithm.

(1) BFS stands for Breadth-First Search. It is a vertex-based technique for finding a shortest path in a graph. In BFS, one vertex is selected at a time; when it is visited and marked, its adjacent vertices are visited and stored in the queue. It is slower than DFS.
(2) BFS is the core of many graph analysis algorithms, and it is used in many problems, such as social network and computer network analysis and data organization.
(3) BFS searches through a tree one level at a time. We traverse through one entire level of children nodes first, before moving on to traverse the grandchildren nodes.
(4) Breadth-first search is an algorithm for finding a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to nodes at the next depth level.
(5) BFS uses a queue data structure for finding the shortest path. BFS can be used to find the single-source shortest path in an unweighted graph, because in BFS we reach a vertex with the minimum number of edges from the source vertex.

3.6.1 BFS Traversal Algorithm

► Step 1 : Add a node / vertex from the graph to a queue of nodes to be 'visited'.
► Step 2 : Visit the frontmost node in the queue, and mark it as such.
► Step 3 : If that node has any neighbours, check to see whether they have been 'visited' or not.
► Step 4 : Add any neighbouring nodes that still need to be 'visited' to the queue.

Illustrative Example

Ex. 3.6.1 : Which solution would BFS find to move from node S to node G if run on the graph below ?

[Fig. Ex. 3.6.1 : the example graph]




Soln. :
The equivalent search tree for the above graph is as follows. As BFS traverses the tree "shallowest node first", it would always pick the shallower branch until it reaches the solution (or it runs out of nodes, and goes to the next branch). The traversal is shown by the dotted line.

[Fig. Ex. 3.6.1(a) : the equivalent search tree, with the traversal shown by a dotted line]

Path : S → D → G

3.6.2 Performance Measures for BFS

Let d = the depth of the shallowest solution, and n = the number of nodes per level.

Time complexity
Equivalent to the number of nodes traversed in BFS until the shallowest solution : T(n) = O(n^d)

Space complexity
Equivalent to how large the fringe can get : S(n) = O(n^d)

Completeness
BFS is complete, meaning that for a given search tree, BFS will come up with a solution if it exists.

Optimality
BFS is optimal as long as the costs of all edges are equal.

3.6.3 BFS Algorithm for Extra Memory

•  BFS is an algorithm for searching a tree data structure for a node that satisfies a given property.
•  It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level.
•  Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored.

[Fig. 3.6.1 : Order in which the nodes are expanded]
[Fig. 3.6.2 : Concept diagram]

Algorithm
(1) Mark any node as the starting or initial node.
(2) Explore and traverse the unvisited nodes adjacent to the starting node.
(3) Mark each node as completed and move to the next adjacent, unvisited nodes.


3.6.4 Execution of BFS Algorithm

► Step 1 : Start by putting any one of the graph's vertices at the back of the queue.
► Step 2 : Take the front item of the queue and add it to the visited list.
► Step 3 : Create a list of that vertex's adjacent nodes.
► Step 4 : Keep continuing Steps 2 and 3 till the queue is empty.

BFS will always find the shortest path in an unweighted graph. Consider an unweighted graph like this :

A - B - C
|       |
D ----- E

My goal is to get from A to E. I begin at A, as it is my origin. I queue A, followed by immediately dequeueing A and exploring it. This yields B and D, because A is connected to B and D. I thus queue both B and D.

I dequeue B and explore it, and find that it leads to A (already explored) and C, so I queue C. I then dequeue D, and find that it leads to E, which is my goal. I then dequeue C, and find that it also leads to E, my goal.

•  BFS can only be used to find the shortest path in a graph if :
1. There are no loops.
2. All edges have the same weight or no weight.
•  To find the shortest path, all you have to do is start from the source, perform a breadth first search, and stop when you find your destination node.

3.6.5 Advantages and Disadvantages of BFS

Advantages
(1) A solution will definitely be found by BFS if a solution exists.
(2) BFS will never get trapped in a blind alley, i.e. among unwanted nodes.
(3) If there is more than one solution, then it will find the solution with the minimal number of steps.

Disadvantages
(1) Memory constraints : it stores all the nodes of the present level before going to the next level.
(2) If the solution is far away, it consumes more time.

3.6.6 Applications of BFS Algorithm

We mention some of the applications where a BFS algorithm implementation can be highly effective.
(1) Unweighted graphs : The BFS algorithm can easily create the shortest path and a minimum spanning tree to visit all the vertices of the graph in the shortest time possible with high accuracy.
(2) P2P Networks : BFS can be implemented to locate all the nearest or neighbouring nodes in a
the path which appears best at that moment.
peer to peer network. This will find the required
• It is the combination of depth-first search and data faster.
breadth-first search algorithms. It uses the (3) Web Crawlers : Search engines or web crawlers
heuristic function and search. Best-first search can easily build multiple levels of indexes by
allows us to take the advantage of both employing BFS. BFS implementation starts from
algorithms. the source, which is the web page, and then it
visits all the links from that source.

(MS-126) ~ Tech-Neo Publications...A SACHIN SHAH Venture


(4) Navigation Systems : BFS can help find all the neighbouring locations from the main or source location.
(5) Network Broadcasting : A broadcasted packet is guided by the BFS algorithm to find and reach all the nodes it has the address for.

• It is possible to run BFS recursively without any data structures, but with higher complexity.
• DFS, as opposed to BFS, uses a stack instead of a queue, so it can be implemented recursively. Note that the code used is iterative, but it is trivial to make it recursive.
• In BFS, a queue data structure is used. One can mark any node in the graph as root and start traversing the data from it.
• BFS traverses all the nodes in the graph and keeps dropping them as completed.
• BFS visits an adjacent unvisited node, marks it as done, and inserts it into a queue.

3.6.7 Performance Measures of BFS

Time Complexity
Breadth-first search, being a brute search, generates all the nodes for identifying the goal. The amount of time taken for generating these nodes is proportional to the depth d and branching factor b, and is given by
1 + b + b^2 + b^3 + … + b^d ≈ b^d
Hence the time-complexity = O(b^d)

Space Complexity
Unlike depth-first search, wherein the search procedure has to remember only the paths it has generated, the breadth-first search procedure has to remember every node it has generated. Since the procedure has to keep track of all the children it has generated, the space-complexity is also a function of the depth d and branching factor b. Thus the space complexity becomes
1 + b + b^2 + b^3 + … + b^d ≈ b^d
Hence the space-complexity = O(b^d)

3.6.8 Limitations of Breadth First Search

1. The amount of time needed to generate all the nodes is considerable because of the time-complexity.
2. The memory constraint is also a major hurdle because of the space-complexity.
3. The searching process remembers all the unwanted nodes, which is of no practical use for the search.

GQ. Which storage structure is preferably chosen for node representation in the open list while performing best-first search over a state space, and why ?

OPEN is a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.

3.6.9 Example of Breadth First Search

GQ. Give an example of a problem for which breadth-first search would work better than depth-first search, and explain why.

In general, BFS is better for problems related to finding the shortest paths or similar problems, because here we can go from one node to all the nodes adjacent to it, and hence we effectively move from path length one to path length two, and so on.
DFS, on the other hand, helps more in connectivity problems and also in finding cycles in a graph (cycles can be found with BFS with a bit of modification). Determining connectivity with DFS is trivial : if we call the explore procedure twice from the DFS procedure, then the graph is disconnected (this is for an undirected graph). The strongly connected components algorithm for a directed graph is a modification of DFS. Another application of DFS is topological sorting.
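The A-to-E walkthrough on the unweighted graph above maps directly onto code. A minimal sketch that also records each node's parent, so the shortest path can be read back once the goal is dequeued:

```python
from collections import deque

def bfs_shortest_path(graph, source, goal):
    """Return the shortest path in an unweighted graph, or None if unreachable."""
    parent = {source: None}           # also serves as the visited set
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == goal:              # stop when the destination is dequeued
            path = []
            while node is not None:   # walk the parent links back to the source
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbour in graph[node]:
            if neighbour not in parent:
                parent[neighbour] = node
                queue.append(neighbour)
    return None

# The unweighted graph from the text: A-B-C across the top, D-E below,
# with A joined to D and C joined to E.
graph = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "E"],
         "D": ["A", "E"], "E": ["C", "D"]}
print(bfs_shortest_path(graph, "A", "E"))  # ['A', 'D', 'E']
```

Because nodes are discovered level by level, the first time E is dequeued its parent chain is necessarily a shortest path.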

Syllabus topic : Depth Limited Search

3.7 DEPTH LIMITED SEARCH

(I) Explanation
In infinite state spaces, the depth-first search method fails. This failure can be alleviated by supplying depth-first search with a pre-determined depth limit l. It means that nodes at depth l are treated as if they have no successors.
This approach is called depth-limited search.
The depth limit can solve the infinite path problem. Its time-complexity is O(b^l) and its space-complexity is O(bl).
Thus depth-first search can be viewed as a special case of depth-limited search with l = ∞.
In some cases, the depth limit can be based on knowledge of the problem. For example, on the map of Surat there are 25 cities. So, if there is a solution, it must be of length 24 at the longest; hence l = 24 is a possible choice. But if we look at the map carefully, we observe that any city can be reached from any other city in at most 15 steps.
This number, known as the diameter of the state space, gives us a better depth limit, and this leads to a more efficient depth-limited search.
Observe that depth-limited search can terminate with two kinds of failure :
(i) the standard failure value, which indicates no solution;
(ii) the cut-off value, which indicates no solution within the depth-limit.

Implementation
Depth-limited search can be implemented as a simple modification to the general tree or graph search algorithm. Or, it can be implemented as a simple recursive algorithm.

(II) Algorithm
A recursive implementation of the Depth-Limited Search algorithm :

Function Depth-Limited-Search (problem, limit) returns a solution, or failure/cutoff
  return Recursive-DLS (Make-Node (problem.Initial-State), problem, limit)

Function Recursive-DLS (node, problem, limit) returns a solution, or failure/cutoff
  if problem.Goal-Test (node.State) then return Solution (node)
  else if limit = 0 then return cutoff
  else
    cutoff-occurred? ← false
    for each action in problem.Actions (node.State) do
      child ← Child-Node (problem, node, action)
      result ← Recursive-DLS (child, problem, limit − 1)
      if result = cutoff then cutoff-occurred? ← true
      else if result ≠ failure then return result
    if cutoff-occurred? then return cutoff
    else return failure

Drawbacks of Depth-Limited Search
(i) The depth-limited search is also incomplete if we choose l < d; that is, the shallowest goal is beyond the depth limit.
(ii) Depth-limited search will also be non-optimal if we choose l > d.

Advantages of Depth Limited Search
1. Depth limited search is better than DFS, as it requires less time and memory space.
2. DFS assures that the solution will be found if it exists, given infinite time.
3. DLS has applications in graph theory that are particularly similar to those of DFS.

Disadvantages of DLS
(i) The goal node may not exist within the depth limit set earlier, which will push the user to iterate further, adding execution time.
(ii) The goal node cannot be found if it does not exist within the desired limit.
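The recursive pseudocode above translates almost line for line into Python. A sketch with the same cutoff/failure distinction, under the simplifying assumption (mine, not the text's) that states are plain strings and the problem is given as a successor table:

```python
CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(node, goal, successors, limit):
    """Recursive DLS: returns the goal node, FAILURE, or CUTOFF."""
    if node == goal:
        return node
    if limit == 0:
        return CUTOFF                      # depth limit reached; the goal may lie deeper
    cutoff_occurred = False
    for child in successors.get(node, []):
        result = depth_limited_search(child, goal, successors, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return result
    return CUTOFF if cutoff_occurred else FAILURE

# Hypothetical tree: the goal G sits at depth 2, so l = 1 is too shallow.
tree = {"S": ["A", "D"], "A": ["B"], "D": ["G"]}
print(depth_limited_search("S", "G", tree, 1))  # 'cutoff'  (l < d)
print(depth_limited_search("S", "G", tree, 2))  # 'G'
```

The two return values mirror the two kinds of failure noted above: FAILURE means no solution at all, CUTOFF means no solution within the depth limit.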


Optimality
The DLS is a non-optimal algorithm, since the depth that is chosen can be greater than d (l > d). Thus DLS is not optimal if l > d.

Time complexity
It is similar to DFS, i.e. O(b^l), where l is the specified depth limit.

Space complexity
It is similar to DFS; it is O(bl), where l is the specified depth limit.

Conclusion : DLS
(i) DLS is not a typical uninformed search strategy.
(ii) The DLS algorithm is used when we know the search domain, and there exists prior knowledge of the problem and its domain.
(iii) There is little idea of the goal node's depth.
(iv) The problem with depth-limited search is to set the value of l optimally, so as not to leave out any solution, and also to keep the time and space complexity to a minimum.

Syllabus topic : Iterative Deepening Search

3.8 ITERATIVE DEEPENING SEARCH TECHNIQUE (IDS OR IDDFS)

UQ. Explain the Iterative Deepening search algorithm based on performance measures, with justification : complete, optimal, time and space complexity. (MU - Q. 4(a), Dec. 18, 10 Marks)

1. Iterative deepening search, or more specifically iterative deepening depth-first search (IDS or IDDFS), is a state space/graph search strategy in which a depth-limited version of depth-first search is run repeatedly with increasing depth limits until the goal is found.
2. IDDFS is equivalent to breadth-first search, but uses much less memory; on each iteration, it visits the nodes in the search tree in the same order as depth-first search, but the cumulative order in which nodes are first visited is effectively breadth-first.
3. IDDFS combines depth-first search's space-efficiency and breadth-first search's completeness (when the branching factor is finite). It is optimal when the path cost is a non-decreasing function of the depth of the node.
4. The time complexity of IDDFS is O(b^d) and its space complexity is O(bd), where b is the branching factor and d is the depth of the shallowest goal.
5. Since iterative deepening visits states multiple times, it may seem wasteful, but it turns out to be not costly, since in a tree most of the nodes are in the bottom level, so it does not matter much if the upper levels are visited multiple times.

3.8.1 IDDFS Algorithm

Function Iterative-Deepening-Search (problem) returns a solution or failure
  inputs : problem, a problem
  for depth ← 0 to ∞ do
    result ← Depth-Limited-Search (problem, depth)
    if result ≠ cutoff then return result

The iterative deepening search algorithm repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found, or if the depth-limited search returns failure, meaning that no solution exists.
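The pseudocode above can be sketched in Python. A minimal illustration (the tree is hypothetical, and the unbounded "for depth ← 0 to ∞" loop is capped here by a practical `max_depth`):

```python
CUTOFF = "cutoff"

def dls(node, goal, successors, limit):
    """Depth-limited DFS helper: returns the goal, CUTOFF, or None (failure)."""
    if node == goal:
        return node
    if limit == 0:
        return CUTOFF
    cutoff_seen = False
    for child in successors.get(node, []):
        result = dls(child, goal, successors, limit - 1)
        if result == CUTOFF:
            cutoff_seen = True
        elif result is not None:
            return result
    return CUTOFF if cutoff_seen else None

def iterative_deepening_search(root, goal, successors, max_depth=50):
    """Run DLS with limits 0, 1, 2, ... until the goal is found or proven absent."""
    for depth in range(max_depth + 1):
        result = dls(root, goal, successors, depth)
        if result != CUTOFF:          # either the goal, or a definite failure (None)
            return result
    return None

tree = {"S": ["A", "D"], "A": ["B"], "D": ["G"]}
print(iterative_deepening_search("S", "G", tree))  # 'G' (found at depth limit 2)
```

If some iteration returns plain failure rather than cutoff, no deeper iteration can succeed, so the loop stops early, exactly as the pseudocode specifies.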


Fig. 3.8.1 : Iterative deepening search, re-running depth-first search with depth limits 0 to 3
3.8.2 Advantages of IDDFS

1. The main advantage of IDDFS in game tree searching is that the earlier searches tend to improve the commonly used heuristics, such as the killer heuristic and alpha-beta pruning, so that a more accurate estimate of the score of various nodes at the final depth search can occur, and the search completes more quickly, since it is done in a better order. For example, alpha-beta pruning is most efficient if it searches the best moves first.
2. A second advantage is the responsiveness of the algorithm. Because early iterations use small values for d, they execute extremely quickly. This allows the algorithm to supply early indications of the result almost immediately, followed by refinements as d increases, when used in an interactive setting. In a chess-playing program, for example, this facility allows the program to play at any time with the current best move found in the search it has completed so far. This can be phrased as : each depth of the search core produces a better approximation of the solution, though the work done at each step is recursive. This is not possible with a traditional depth-first search, which does not produce intermediate results.
3. The time complexity of IDDFS in well-balanced trees works out to be the same as that of depth-first search : O(b^d).

Syllabus topic : Uniform Cost Search

3.9 UNIFORM COST SEARCH

• Uniform-cost search is an uninformed search algorithm that uses the lowest cumulative cost to find a path from the source to the destination.
• Nodes are expanded, starting from the root, according to the minimum cumulative cost. The uniform-cost search is then implemented using a Priority Queue.
• Here, instead of inserting all vertices into a priority queue, we insert only the source; then, one by one, we insert nodes when needed.

3.9.1 Algorithm of U.C.S.

• Uniform Cost Search is an algorithm used to move around a directed weighted search space, to go from a start node to one of the ending nodes with a minimum cumulative cost.
• This search is an uninformed search algorithm, i.e. it does not take the state of the node or search space into consideration.
• It is used to find the path with the lowest cumulative cost in a weighted graph, where nodes are expanded according to their cost of traversal from the root node. This is implemented using a priority queue where the lower the cost, the higher is its priority.

Algorithm of Uniform Cost Search (in AI) :
► Step 1 : Insert the root node into the queue.
► Step 2 : Repeat till the queue is not empty.
► Step 3 : Remove the next element with the highest priority from the queue.
► Step 4 : If the node is a destination node, then print the cost and the path and exit; else insert all the children of the removed element into the queue, with their cumulative cost as their priorities.

Here the root node is the starting node for the path, and a priority queue is maintained so that the path with the least cost is chosen for the next traversal.

3.9.2 Execution of the Algorithm of Uniform Cost Search

• Uniform-cost search is similar to Dijkstra's algorithm.


In this algorithm :
► Step 1 : From the starting state, we visit the adjacent states and choose the least costly state.
► Step 2 : Then we choose the next least costly state from among all the un-visited states adjacent to the visited states.
► Step 3 : In this way we try to reach the goal state.

Remark
Even if we reach the goal state, we continue searching for other possible paths (if there are multiple goals).
• The elements in the priority queue have almost the same costs at a given time, and thus the name Uniform Cost Search.
• It may appear that elements do not have almost the same costs, but when the method is applied on a much larger graph it is certainly so.
• Uniform costing refers to the acceptance of identical costing principles and procedures by all or many units in the same industry by mutual agreement.
• The 'Uniform Cost Search (UCS)' algorithm is mainly used when the step costs are not the same, but we need the optimal solution to the goal state. In such cases, we use Uniform Cost Search to find the goal and the path, including the cumulative cost to expand each node from the root node to the goal node.
• Uniform-cost search is optimal. This is because, at every step the path with the least cost is chosen, and paths never get shorter as nodes are added, ensuring that the search expands nodes in the order of their optimal path cost. To measure the time complexity, we need the help of the path cost instead of the depth d.

3.9.3 Example of Uniform Cost Search (Linear Displacement)

Consider the example of Fig. 3.9.1, where we need to reach any one of the destination nodes {G1, G2, G3}, starting from node S.

Fig. 3.9.1

Nodes {A, B, C, D, E and F} are the intermediate nodes. Our motive is to find the path from S to any of the destination states with the least cumulative cost. Each directed edge represents the direction of movement allowed through that path, and its labelling represents the cost of travelling through that path.
Thus the overall cost of a path is the sum of the costs of all its edges.
For e.g. : a path from S to G1, {S → A → G1}, has cost SA + AG1 = 5 + 9 = 14.
Here we maintain a priority queue, the same as in BFS, with the cost of the path as its priority; the lower the cost, the higher the priority.
We use a tree to show all the possible paths, and also maintain a visited list to keep track of all the visited nodes, as we need not visit any node twice.


Explanation of the trace (with the visited list at each step) :

► Step 1 : We start with the start node S and check whether we have reached any of the destination nodes. No, so we continue.
► Step 2 : We reach all the nodes that can be reached from S, i.e. A, B, D. Since node S has been visited, it is added to the visited list {S}. Now we select the cheapest path first for further expansion, i.e. A.
► Step 3 : Nodes B and G1 can be reached from A, and since node A has been visited, it moves to the visited list {S, A}. G1 is reached, but for the optimal solution we need to consider every possible case; thus we expand the next cheapest path, i.e. S → D.
► Step 4 : Now node D has been visited, so it goes to the visited list {S, A, D}. Since we have three paths with the same cost, we choose alphabetically and so expand node B.
► Step 5 : From B we can only reach node C. Now the path with the minimum weight is S → D → C, i.e. 8. Thus we expand C, and B joins the visited list {S, A, D, B}.
► Step 6 : From C we can reach G2 and F, with weights 5 and 7 respectively. Since S is present in the visited list, we do not consider the C → S path. Now C enters the visited list {S, A, D, B, C}. The next path with the minimum total cost is S → D → E, i.e. 8; thus we expand E.
► Step 7 : From E we can reach only G3. E moves to the visited list {S, A, D, B, C, E}.
► Step 8 : At the end, we have six active paths.
S → B : B is in the visited list, so this is marked as a dead end.
The same holds for S → A → B → C : C has already been visited, so it is considered a dead end.
Out of the remaining paths,
S → A → G1
S → D → C → G2
S → D → E → G3
the minimum is S → D → C → G2, and G2 is one of the destination nodes. Thus we have found our path.
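The priority-queue procedure traced above can be condensed into code. A sketch using Python's heapq; the weighted digraph below is a small hypothetical one (only a few of Fig. 3.9.1's edge weights are stated in the text), with two goal nodes:

```python
import heapq

def uniform_cost_search(graph, start, goals):
    """Expand cheapest-first; return (cost, path) to the nearest goal, or None."""
    frontier = [(0, [start])]             # priority queue ordered by cumulative cost
    visited = set()
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node in goals:                 # cheapest way to reach any goal node
            return cost, path
        if node in visited:               # a cheaper route reached this node first
            continue
        visited.add(node)
        for neighbour, step in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (cost + step, path + [neighbour]))
    return None

# Hypothetical weighted digraph with goal nodes G1 and G2.
graph = {"S": [("A", 5), ("D", 3)], "A": [("G1", 9)],
         "D": [("C", 5)], "C": [("G2", 5)]}
print(uniform_cost_search(graph, "S", {"G1", "G2"}))  # (13, ['S', 'D', 'C', 'G2'])
```

Because the goal test is applied when a path is popped, not when it is generated, the more expensive route S → A → G1 (cost 14) never pre-empts the cheaper S → D → C → G2 (cost 13).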


3.9.4 Advantages and Disadvantages of U.C.S.

Advantages of U.C.S.
1. It helps to find the path with the lowest cumulative cost inside a weighted graph having a different cost associated with each of its edges, from the root node to the destination node.
2. It is considered to be an optimal solution since, at each state, the least costly path is chosen.

Disadvantages of U.C.S.
1. The open list is required to be kept sorted, as the priorities in the priority queue need to be maintained.
2. The storage required is exponentially large.
3. The algorithm may be stuck in an infinite loop, as it considers every possible path going from the root node to the destination node.

3.9.5 Performance Measures

Time and Space Complexity
Uniform cost search is complete : UCS finds the solution, if there is a solution.
Let C* be the cost of the optimal solution, and ε the cost of each step towards the goal node. Then the number of steps is C*/ε + 1. Here we have taken + 1, as we start from state 0 and end at C*/ε.

3.9.6 Solved Example on Curved Displacement

UEx. 3.9.1
Apply the uniform cost algorithm on the given graph.
Fig. Ex. 3.9.1 : Graph

Soln. :
► Step I : We mention the source node.
Fig. Ex. 3.9.1(a) : Node
► Step II : We add the nodes A, B, C to the source node.
Fig. Ex. 3.9.1(b)
► Step III : The node A has the minimum distance 1, so we keep it aside and add the node G.
Fig. Ex. 3.9.1(c)


► Step IV : Of the nodes B and C, B has the minimum distance, so we add node F to B.
Fig. Ex. 3.9.1(d)
► Step V : Now G has the minimum distance, so we keep G aside and add H. Note that C and I have the same distance; we remove C or I alphabetically, but I has no further subnodes.
Fig. Ex. 3.9.1(e)
► Step VI : I has the minimum distance, but I has no subnode, i.e. there is no further updating. Next we remove D, but D has only one subnode, E, at distance 10. E already exists with a lesser distance, so there is no need to add it again.
► Step VII : (Alphabetically) the next minimum distance is that of E, so we remove E.
Fig. Ex. 3.9.1(f)
► Step VIII : Now the minimum cost is F, so it is removed (alphabetically) and subnode J is added.
Fig. Ex. 3.9.1(g)
► Step IX : Now H has the minimum cost, so it is removed; H has no further subnodes.
Now we remove D and bring in its subnode E, with a total distance of 10. But E already has the lesser distance 7, so we keep E with distance 7.
Fig. Ex. 3.9.1(h)
Here the algorithm ends.
Thus, the minimum distance between the source and the destination node is 8.

Time complexity
The time complexity needed to run uniform cost search is O(b^(1 + C*/ε)),
where b is the branching factor, C* the optimal cost, and ε the cost of each step.

Optimality
Uniform-cost search is always optimal, as it only selects the path with the lowest path cost.

Syllabus topic : Bidirectional Search

3.10 BIDIRECTIONAL SEARCH

The principle used in a bidirectional heuristic search algorithm is to find the shortest path from the current node to the goal node. The only difference is the two simultaneous searches, from the initial point and from the goal vertex. The main idea behind bidirectional searches is to reduce the time taken for the search drastically.
This takes place when both searches happen simultaneously, from the initial node (depth-first or breadth-first) and backwards from the goal node; they intersect somewhere in between in the graph.
The path traverses from the initial node through the intersecting point to the goal vertex, and that is the shortest path found by this search.
We consider an example :

Fig. : Forward search from the start node meets the backward search from the goal node

Bidirectional search algorithm
► Step 1 : Let A be the initial node, O the goal node, and H the intersection node.
► Step 2 : We start searching simultaneously, from the start node towards the goal node, and backwards from the goal node towards the start node.
► Step 3 : When the forward search and the backward search intersect at one node, the searching stops.

Performance measure
(1) Completeness : Bidirectional search is complete if BFS (breadth-first search) is used in both searches.
(2) Optimality : It is optimal if BFS is used for the search and the paths have uniform cost.
(3) Time and space complexity : The time and space complexity is O(b^(d/2)).

When to use the bidirectional approach
We use the bidirectional approach when :
(1) Both the initial and goal states are unique and completely defined.
(2) The branching factor is exactly the same in both directions.

Why the bidirectional approach ?
(i) In many cases it is faster, and it reduces the amount of required exploration.
(ii) Suppose the branching factor of the tree is b and the distance of the goal vertex from the source is d; then the normal BFS/DFS searching complexity is O(b^d). But for the two searches the complexity is O(b^(d/2)), which is far less than O(b^d).

Also observe that bidirectional searches are complete if a breadth-first search is used for both traversals, i.e., for both the path from the start node till the intersection and the path from the goal node till the intersection.
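The three steps above can be sketched as two alternating breadth-first frontiers that stop at the first common node. The undirected graph here is hypothetical, a simple chain chosen so that the two searches meet at H, as in the step description:

```python
def bidirectional_search(graph, start, goal):
    """Alternate BFS frontiers from both ends; return the first intersection node."""
    if start == goal:
        return start
    front, back = {start}, {goal}         # current frontiers of the two searches
    seen_f, seen_b = {start}, {goal}      # everything each search has reached
    while front and back:
        # expand the forward frontier by one level
        front = {n for node in front for n in graph[node] if n not in seen_f}
        seen_f |= front
        hit = front & seen_b
        if hit:
            return min(hit)               # the searches have met at this node
        # expand the backward frontier by one level
        back = {n for node in back for n in graph[node] if n not in seen_b}
        seen_b |= back
        hit = seen_f & back
        if hit:
            return min(hit)
    return None                           # the two searches never met

# Hypothetical chain A - B - H - C - O, with H in the middle.
graph = {"A": ["B"], "B": ["A", "H"], "H": ["B", "C"],
         "C": ["H", "O"], "O": ["C"]}
print(bidirectional_search(graph, "A", "O"))  # 'H'
```

Each search only explores to roughly half the solution depth, which is where the O(b^(d/2)) complexity figure quoted above comes from.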
Two main types of bidirectional searches are as follows :
(1) Front to Back
(2) Front to Front (BFFA)

(1) Front to Back
In bidirectional front-to-back search, two heuristic functions are needed. The first is the estimated distance from a node to the goal state, using the forward search; the second is from a node to the start state, using the reverse action. Here, h is calculated in the algorithm, and it is the heuristic value of the distance between the node n and the root of the opposite tree. This is the most widely used bidirectional search algorithm.

(2) Front to Front (BFFA)
Here the distance of all nodes is calculated, and h is calculated as the minimum of all heuristic distances from the current node to the nodes on the opposing front.

Syllabus topic : Informed search

3.11 INFORMED SEARCH

GQ. What is informed search ?

Informed search (heuristic search) : This can decide whether one non-goal state is more promising than another non-goal state. The advantages of informed search come from the fact that :
1. It adds domain-specific information to select the best path along which to continue searching.
2. Define a heuristic function h(n) that estimates the "goodness" of a node n. Specifically, h(n) = estimated cost (or distance) of the minimal cost path from n to a goal state.
3. The heuristic function is an estimate of how close we are to a goal, based on domain-specific information that is computable from the current state description. Some of the examples of informed search are best first search, beam search, A* and AO* algorithms, etc.

Informed search algorithms
An informed search algorithm contains an array of knowledge that tells us how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently.
The informed search algorithm is more useful for a large search space. Informed search uses the idea of a heuristic, hence it is also called heuristic search.

3.11.1 Example for Informed Search

GQ. Give an example for informed search.

Beam search is an example of informed search.
Beam search : This is an attractive heuristic search technique because it permits searching to be done on a multi-processor machine, thereby reducing computations. The reduction in computations is achieved by pursuing only some paths and keeping only selected paths.
The searching process is similar to breadth-first search, wherein searching proceeds level by level. At each level, heuristic functions are applied to reduce the number of paths to be explored. In fact, this is done to keep the width of the beam minimal. The width of the beam is fixed, and whatever be the depth of the tree, the number of alternatives to be scanned is at most the product of the width and the depth.

3.11.2 Algorithm for Beam Search

► Step 1 : Let width_of_beam = W.
► Step 2 : Put the initial node on a list, START.
► Step 3 : If (START is empty) or (START = GOAL), then terminate the search.
► Step 4 : Remove the first node from START. Call this node a.
► Step 5 : If (a = GOAL), then terminate the search with success.
► Step 6 : Else, if node a has successors, generate all of them and add them at the tail of START.
► Step 7 : Use a heuristic function to rank and sort all the elements of START.
► Step 8 : Determine the nodes to be expanded. The number of nodes should not be greater than W. Name these START1.
► Step 9 : Replace START with START1.
► Step 10 : Go to Step 2.

Fig. 3.11.1 shows how beam search proceeds. This search has been used in an expert system called ISIS for factory scheduling.

Fig. 3.11.1 : Beam search procedure (values obtained by applying a heuristic function to each node; nodes are discarded to keep the width of the beam = 3)
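Steps 1 to 10 above can be sketched as a level-by-level expansion that keeps only the W best-scoring nodes at each level. The tree and the heuristic scores below are hypothetical (lower h means more promising), chosen only to show nodes being discarded to keep the beam width:

```python
def beam_search(successors, h, start, goal, width):
    """Expand level by level, keeping only the `width` best nodes (lowest h)."""
    frontier = [start]                        # the START list
    while frontier:
        if goal in frontier:
            return goal                       # Step 5: success
        # Step 6: generate all successors of the current level
        children = [c for node in frontier for c in successors.get(node, [])]
        # Steps 7-9: rank by heuristic value and keep at most `width` of them
        frontier = sorted(children, key=h)[:width]
    return None                               # START became empty: failure

# Hypothetical tree and heuristic scores; beam width W = 2.
tree = {"S": ["A", "B", "C"], "A": ["D"], "B": ["G"], "C": ["E"]}
scores = {"S": 10, "A": 8, "B": 5, "C": 9, "D": 7, "E": 6, "G": 0}
print(beam_search(tree, scores.get, "S", "G", width=2))  # 'G'
```

With W = 2, node C (h = 9) is discarded at the first level, exactly the pruning Fig. 3.11.1 illustrates; note that such pruning can also discard the branch containing the goal, which is why beam search is not complete.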


Artificial Intelligence (MU-AI & DS / Electronics) (Solving Problems by Searching)

Syllabus topic : Heuristic Function

3.12 HEURISTIC FUNCTION

UQ. What is heuristic function ?
(MU - Q. 3(b), Dec. 18, 10 Marks; Q. 1(c), May 18, 4 Marks; Q. 1(c), May 17, 4 Marks; Q. 3(b), Dec. 16, 5 Marks; Q. 1(c), Dec. 15, 3 Marks)

The process of searching can be drastically reduced by the use of heuristics. Heuristics are approximations used to minimize the searching process.

•	It is a function that maps from a problem state description to a measure of desirability, usually represented as a number. Which aspects of the problem state are considered, how these aspects are evaluated, and the weights given to the individual aspects are chosen by the designer. Define a heuristic function h(n) that estimates the "goodness" of a node n. Specifically, h(n) = estimated cost (or distance) of a minimal-cost path from n to a goal state.
•	The heuristic function is an estimate of how close we are to a goal, based on domain-specific information that is computable from the current state description.
•	It is computed in such a way that its value at a given node in the search tree gives as good an estimate as possible of whether that node is on the desired path to a solution.
•	A well-designed heuristic function can play an important role in efficiently guiding a search process toward a solution. Sometimes a very simple heuristic function can provide a fairly good estimate of whether a path is any good or not. In other situations a more complex function should be employed.

Generally, two categories of problems use heuristics :
1. Problems for which no exact algorithms are known and one needs to find an approximate and satisfying solution, e.g., computer vision, speech recognition.
2. Problems for which exact solutions are known but computationally infeasible, e.g., Rubik's cube, chess.

The heuristics needed for solving such problems are generally represented as a heuristic function which maps the problem states into numbers. These numbers are then appropriately used to guide the search.

3.12.1 Simple Heuristic Functions

1. In the famous 8-tile puzzle, the hamming distance is a popular heuristic function. It is an indicator of the number of tiles that are not in the position they are supposed to be in.
2. In a game like chess, the material advantage one has over the opponent is an indicator. Normally, fixed values are assigned to the pieces (Queen - 9, etc.).

The following algorithms make use of heuristic evaluation functions :
1. Hill climbing
2. Constraint satisfaction
3. Best-first search
4. A* algorithm
5. AO* algorithm
6. Beam search

•	The purpose of a heuristic function is to guide the search process in the most profitable direction by suggesting which path to follow first when more than one is available.
•	The more accurately the heuristic function estimates the true merit of each node in the search tree, the more direct the solution process.
•	In the extreme, the heuristic would be so good that essentially no search would be required : the system would move directly to a solution.
•	But for many problems, the cost of computing the value of such a function would outweigh the effort saved in the search process.
•	In general, there is a trade-off between the cost of evaluating a heuristic function and the savings in search time that the function provides.
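The 8-tile hamming-distance heuristic mentioned above is easy to state as a function; the tuple encoding of the board (0 for the blank) is an assumption made only for this sketch :

```python
# Hamming-distance heuristic for the 8-tile puzzle: the number of tiles
# that are not in their goal position (the blank, encoded as 0, is not counted).
def hamming(state, goal):
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 are misplaced

print(hamming(state, goal))   # → 2
```

A state with a smaller hamming value is estimated to be closer to the goal, which is exactly the ordering a heuristic search needs.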

Syllabus topic : Admissible Heuristic

3.12.2 Problem Characteristics for Heuristic Search

GQ. Elaborate the various problem characteristics of heuristic search. OR Explain the problem characteristics briefly with appropriate examples.

Heuristic search is a very general method applicable to a large class of problems. It encompasses a variety of specific techniques, each of which is particularly effective for a small class of problems.

Several key dimensions for heuristic search :
1. Is the problem decomposable into a set of (nearly) independent smaller or easier sub-problems?
2. Can solution steps be ignored or at least undone if they prove unwise?
3. Is the problem's universe predictable?
4. Is a good solution to the problem obvious without comparison to all other possible solutions?
5. Is the desired solution a state of the world or a path to a state?
6. Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?
7. Can a computer that is simply given the problem return the solution, or will the solution of the problem require interaction between the computer and a person?

3.12.3 Is the Problem Decomposable ?

A very large and composite problem can be easily solved if it can be broken into smaller problems and recursion could be used. Suppose we want to solve

Ex : ∫(x² + 3x + sin²x · cos²x) dx

This can be done by breaking it into three smaller problems and solving each by applying specific rules. On adding the results, the complete solution is obtained.

Fig. 3.12.1 : Decomposition of ∫(x² + 3x + sin²x · cos²x) dx into three smaller integrals

3.12.4 Can Solution Steps be Ignored or Undone ?

•	Problems fall under three classes : ignorable, recoverable and irrecoverable. This classification is with reference to the steps of the solution to a problem. For example, consider theorem proving : we may proceed by first proving a lemma, and later find that it is of no help. We can still proceed further, since nothing is lost by this redundant step. This is an example of ignorable solution steps.
•	Now consider the 8-puzzle : a tray of tiles to be arranged in a specified order. While moving from the start state towards the goal state, we may make a poor move. But we may backtrack and undo the unwanted move. This only involves additional steps, and the solution steps are recoverable.
•	Lastly, consider the game of chess. If a wrong move is made, it can neither be ignored nor recovered. The thing to do is to make the best use of the current situation and proceed. This is an example of irrecoverable solution steps.

(i) Ignorable problems (Ex : Theorem proving) - in which solution steps can be ignored.
(ii) Recoverable problems (Ex : 8-puzzle) - in which solution steps can be undone.
(iii) Irrecoverable problems (Ex : Chess) - in which solution steps cannot be undone.

A knowledge of these classes will help in determining the control structure.

Syllabus topic : Informed Search Technique, Greedy Best First Search, A* Search

3.13 BEST FIRST SEARCH

GQ. Explain best first search with algorithm.
OR When is the best first search algorithm applicable? With a suitable algorithm and example explain best first search.

Definition : This search procedure is an evaluation-function variant of breadth first search. The heuristic function used here, called an evaluation function, is an indicator of how far the node is from the goal node. Goal nodes have an evaluation function value of zero.

Best-first search is explained using the search graph given in the figure below.

Fig. 3.13.1 : A sample tree for best-first search - nodes are labelled with their evaluation function values; S is the start node and the node with value 0 is the goal node

1. First, the start node S is expanded. It has three children A, B and C with values 3, 6 and 5 respectively. These values approximately indicate how far they are from the goal node.
2. The child with minimum value, namely A, is chosen. The children of A are generated. They are D and E with values 9 and 8.
3. The search process now has four nodes to search, i.e., node D with value 9, node E with value 8, node B with value 6 and node C with value 5. Of them, node C has the minimal value, and it is expanded to give node H with value 7.
4. At this point, the nodes available for search are (D : 9), (E : 8), (B : 6) and (H : 7), where (a : b) indicates that a is the node and b is its evaluation value. Of these, B is minimal and hence B is expanded to give (F : 12), (G : 14).
5. At this juncture, the nodes available for search are (D : 9), (E : 8), (H : 7), (F : 12) and (G : 14), out of which (H : 7) is minimal and is expanded to give (I : 5), (J : 6).
6. Nodes now available for expansion are (D : 9), (E : 8), (F : 12), (G : 14), (I : 5), (J : 6). Of these, the node with minimal value is (I : 5), which is expanded to give the goal node.



3.13.1 Steps of the Search Process in Best-First Search

The entire steps of the search process are given in Table 3.13.1.

Table 3.13.1 : Search process of best-first search

Step | Node expanded | Children generated | Nodes available for search | Node chosen
1 | S | (A : 3), (B : 6), (C : 5) | (A : 3), (B : 6), (C : 5) | (A : 3)
2 | A | (D : 9), (E : 8) | (B : 6), (C : 5), (D : 9), (E : 8) | (C : 5)
3 | C | (H : 7) | (B : 6), (D : 9), (E : 8), (H : 7) | (B : 6)
4 | B | (F : 12), (G : 14) | (D : 9), (E : 8), (H : 7), (F : 12), (G : 14) | (H : 7)
5 | H | (I : 5), (J : 6) | (D : 9), (E : 8), (F : 12), (G : 14), (I : 5), (J : 6) | (I : 5)
6 | I | (K : 1), (L : 0), (M : 2) | (D : 9), (E : 8), (F : 12), (G : 14), (J : 6), (K : 1), (L : 0), (M : 2) | Search stops as goal is reached

As you can see, best-first search "jumps all around" in the search graph to identify the node with the minimal evaluation function value. There is only a minor variation between hill-climbing and best-first search. In the former, we sorted only the children of the node just expanded. Here, we have to sort the entire list to identify the next node to be expanded.
3.13.2 Algorithm for Best-First Search

•	'Greedy Best First Search' and 'A* Best First Search' are the two variants of best first search.
•	The 'greedy best first search' procedure is an evaluation-function variant of breadth first search. The heuristic function used here is called an evaluation function and is an indicator of how far the node is from the goal node. Goal nodes have an evaluation function value of zero.
•	Speaking of greedy and best first search : it is 'best' first search itself that, in this form, is named greedy best first search.
•	The paths found by best-first search are likely to give solutions faster because it expands the node that seems closest to the goal. However, there is no guarantee of this.
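The trace of Table 3.13.1 can be reproduced with a short program. The child lists are read off Fig. 3.13.1; the start node's own evaluation value is never used, so the 15 below is an arbitrary assumption :

```python
import heapq

# Evaluation-function values and children taken from Fig. 3.13.1.
h = {'S': 15, 'A': 3, 'B': 6, 'C': 5, 'D': 9, 'E': 8, 'F': 12,
     'G': 14, 'H': 7, 'I': 5, 'J': 6, 'K': 1, 'L': 0, 'M': 2}
children = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'C': ['H'],
            'B': ['F', 'G'], 'H': ['I', 'J'], 'I': ['K', 'L', 'M']}

def best_first(start):
    open_list = [(h[start], start)]      # priority queue ordered by h
    order = []                           # order in which nodes are expanded
    while open_list:
        _, node = heapq.heappop(open_list)
        if h[node] == 0:                 # goal nodes have evaluation value zero
            return order, node
        order.append(node)
        for c in children.get(node, []):
            heapq.heappush(open_list, (h[c], c))
    return order, None

order, goal = best_first('S')
print(order, goal)   # → ['S', 'A', 'C', 'B', 'H', 'I'] L
```

The expansion order S, A, C, B, H, I matches the "node chosen" column of Table 3.13.1, and the goal node with value 0 is reached next.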




•	But there is a family of best first search algorithms, which have different evaluation functions. Best first search makes use of the function g(n) along with h(n).
•	The evaluation function f(n) represents the total cost. Here f(n) = g(n) + h(n), where g(n) is the cost so far to reach n, while h(n) is the estimated cost from n to the goal state.
•	For greedy best-first search, g(n) = 0, and f(n) = h(n).
•	Let us see how this works for route-finding problems in Pune; we use the 'straight-line distance' heuristic. If the goal is to go from home to the Roopali restaurant, we need to know the straight-line distance to Roopali Hotel from each node, as shown in Fig. 3.14.1.

Fig. 3.14.1 : Stages in a greedy best-first tree search - (a) initial state, (b) after expanding Home (300 m), and the subsequent steps; nodes are labelled with their straight-line distances h_SLD

•	The straight-line distance heuristic we denote by h_SLD. Now, h_SLD cannot be calculated from the problem description itself. It takes a little experience to know that h_SLD is approximately proportional to the actual road distance, and hence it is a useful heuristic.

The straight-line distances used are :
Kaka Halwai - 20 meters
Mukherjee Garden - 25 meters
Mehendale Garage - 40 meters
I.C.I.C.I. Bank - 90 meters
School - 100 meters
Walking track - 110 meters
B.M.C.C. College - 120 meters
Post Office - 130 meters
Marathwada College - 140 meters
Stationary shop - 180 meters
Roopali Hotel - 300 meters

For the algorithm, a node should contain the following information :
1. Description of the state.
2. Its parent link.
3. List of nodes that are generated from it.
A graph containing this information is called an 'OR' graph.

3.14.1 Greedy Best First Search Algorithm

Now we discuss the algorithm :
Step 1 : Start with the start node and proceed till the goal is reached.
Step 2 : Select the best node from the priority queue and generate its successors.
Step 3 : For every successor : if it is already on the queue and the new path is better than the earlier one, change its parent and update the costs to this node; if the successor is not on the queue, evaluate its cost and add it to the priority queue.
Step 4 : Since a priority queue is used, the node with the best evaluation is selected next; go to Step 2.
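Treating the distances listed above as h_SLD values, the greedy choice at any point is simply the frontier node with the smallest value; the three-node frontier below is a hypothetical example :

```python
# Straight-line distances (metres) from the text; greedy best-first
# always expands the frontier node with the smallest h_SLD value.
h_sld = {'Kaka Halwai': 20, 'Mukherjee Garden': 25, 'Mehendale Garage': 40,
         'I.C.I.C.I. Bank': 90, 'School': 100, 'Walking track': 110,
         'B.M.C.C. College': 120, 'Post Office': 130,
         'Marathwada College': 140, 'Stationary shop': 180,
         'Roopali Hotel': 300}

frontier = ['School', 'Post Office', 'Kaka Halwai']   # hypothetical frontier
print(min(frontier, key=h_sld.get))   # → Kaka Halwai
```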



Properties of Greedy Best-First Search
(i) Greedy best-first search is not optimal.
(ii) It is incomplete, even if the given state space consists of a finite number of nodes.

3.14.2 Solved Examples

Ex. 3.14.1 : Consider the graph shown in Fig. Ex. 3.14.1. S is the starting state, G is the goal state. Run the greedy search algorithm and write the order of the nodes in which the graph is explored. The straight-line distance heuristic estimates for the nodes are also given below :
h(S) = 10.5, h(A) = 10, h(B) = 6, h(C) = 4, h(D) = 8, h(E) = 6.5, h(F) = 3 and h(G) = 0

Fig. Ex. 3.14.1

Soln. :
We have to perform greedy search; at every step the node with the smallest heuristic value is chosen.

Step I : Consider the tree. (Fig. Ex. 3.14.1(a))
(i) Cost : f(A) = h(A) = 10, f(D) = h(D) = 8
(ii) We put [S] and [S, D] on the closed queue, and [A, D] on the open queue.

Step II : We consider the tree below S, D. (Fig. Ex. 3.14.1(b))
(i) Cost : f(E) = h(E) = 6.5
Closed queue : [S], [S, D], [S, D, E]; open queue : [S], [A, D], [S, D, E]

Step III : (Fig. Ex. 3.14.1(c))
(i) Cost : f(B) = h(B) = 6, f(F) = h(F) = 3
Open queue : [S], [A, D], [S, D, E], [S, D, E, B, F]
Closed queue : [S], [S, D], [S, D, E], [S, D, E, F]

Step IV : (Fig. Ex. 3.14.1(d))
(i) Cost : f(G) = h(G) = 0
Open queue : [S], [A, D], [S, D, E], [S, D, E, B, F], [S, D, E, B, F, G]
Closed queue : [S], [S, D], [S, D, E], [S, D, E, F], [S, D, E, F, G]

Total cost = 8 + 6.5 + 3 + 0 = 17.5, and the path found is S → D → E → F → G.

Remark : Here the greedy search happens to return the optimal path S → D → E → F → G :
(1) If we had chosen the path S → A → B → E → F → G, then

2. If it is possible for one to obtain the evaluau011
H(A) = 10, H (B) = 6, H(E) = 6, H (F) = 3; H (0) function values and the cost function values, tJie.
=O A• algorithm can be used. The basic principle is
And the total cost= 10 + 6 + 6.5 + 3 = 25.5 that sum the cost and evaluation function value f0r
a state to get its "goodness" worth and use this aa
(2) OR. if we have chosen the path
a yardstick instead of the evaluation functi
S ➔ D ➔ A ➔ B ➔ B ➔ F ➔ O; then value in best-first search. The sum of
H (D) = 4, H (A) = 3, H (B) = 4, H (E) = 5, H (F) evaluation function value and the cost along
= 4 and H (0) = 0 path leading to that state is called fitness number.
Then the total cost= 4 + 3 + 4 + 5 + 4 = 21 3. Consider Fig. 3.15.l again with the
evaluation function values. Now associated w·
(3) For any other path, the total cost would have been
each node are three numbers, the evaluati
greater than 17. 5
function value, the cost function value and
:. S ➔ D ➔ E ➔ F ➔ 0 is optimal path and
fitness number.
optimal cost is 17.5
4. Toe fitness number, as stated earlier, is the total
1~ :S, 15 A* "MD AO• SEARCH
-
the evaluation function value and the cost-
value. For example, consider node K, the fi
- -~ ---- -.- -
,-....-... --- -,-..-....--...- -- -
--- -...-....- ----·-.
1 number is 20, which is obtained as follows :
} GQ.. Explain A• algorithm. ~
- __._ - - _.._.._ - -'"- ---•- - - - -"- -- _,.._ - - - - - -- - _.. I (Evaluation function of K) + (Cost
1. A* Algorithm : In best-first search, we brought in involved from start node S to node K)
a heuristic value called evaluation function
value. It is a value that estimates how far a = 1 + (Cost function from S to C + Cost
particular node is from the goal. Apart from the from C to H + Cost function from H to I +
evaluation function value, one can also bring in fun.ction from I to K)
cost functions. Cost functions indicate how much = l + 6 + 5 + 7 + l = 20.
resource like time, energy, money etc. have been
spent in reaching a particular node from the start. While best-first search uses the eval •
While evaluation function values deal with the function value only for expanding the best node, A
future, cost function values deal with the past uses the fitness number for its computation.
Since cost function values are really expended, 5. Fig. 3.15.l gives the algorithm for A* alg
they are more concrete than evaluation function method.
values.
~ - Fitness number
D
- - - Evaluation !unction value

t'. j 11 1 1..I t. . I 11 Cost ol travelling


the arc

I I
, . , 1 _node-
SI.art i
I I1 ,, I ' I '.
l 11

.. J • fl

6~
Ir
lit Fl . 3.15.l ~ Sam le tree with fttness ,,_.__
I
n...,._r used ror A• search
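The fitness-number computation for node K can be checked in a couple of lines, using the evaluation value and arc costs quoted in point 4 :

```python
# Fitness number = evaluation function value + cost of the path from S.
# For node K the path is S -> C -> H -> I -> K with arc costs 6, 5, 7, 1,
# and K's evaluation function value is 1.
def fitness_number(evaluation_value, arc_costs):
    return evaluation_value + sum(arc_costs)

print(fitness_number(1, [6, 5, 7, 1]))   # → 20
```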

·~ ... -

Step 1 : Put the initial node on a list START.
Step 2 : If (START is empty) or (START = GOAL), then terminate the search.
Step 3 : Remove the first node from START. Call this node a.
Step 4 : If (a = GOAL), then terminate the search with success.
Step 5 : Else, if node a has successors, generate all of them. Estimate the fitness number of each successor by totalling its evaluation function value and its cost function value. Sort the list by fitness number.
Step 6 : Name this new list START 1.
Step 7 : Replace START with START 1.
Step 8 : Go to Step 2.

3.16 ADMISSIBILITY OF A*

•	Admissible functions are functions that satisfy the essential boundary conditions of the problem. An algorithm A is admissible if it is guaranteed to return an optimal solution when one exists.
•	The A* algorithm is admissible if it uses an admissible heuristic with h(goal) = 0 (h(n) never larger than h*(n)); then A* is guaranteed to find an optimal solution, i.e., f(n) is non-decreasing along any path.
[Theorem : If h(n) is consistent, f along any path is non-decreasing.]
•	If the heuristic function is admissible, meaning that it never overestimates the actual cost to get to the goal, A* is guaranteed to return a least-cost path from start to goal.
•	Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand.
•	A* is a graph traversal and path search algorithm which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency. Thus A* is the best solution in many cases.
•	Specifically, A* selects the path that minimizes f(n) = g(n) + h(n), where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal.
•	A* terminates when the path it chooses to extend is a path from start to goal, or when there are no paths eligible to be extended.
•	A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
•	Theorem : A* is optimal if two conditions are met :
(1) The heuristic is admissible, so it will never overestimate the cost.
(2) The heuristic is monotonic, that is, if h(ni) < h(ni + 1), then real-cost (ni) < real-cost (ni + 1).

Proof : Let G be an optimal goal and let n be an unexpanded node in the fringe such that n is on a shortest path to G.
Let some suboptimal goal G1 have been generated and be in the fringe.
Now we have
f(G1) = g(G1) + h(G1)
f(G1) = g(G1)   [∵ h(G1) = 0, since G1 is a goal node in the fringe]
Again, f(G) = g(G)   [∵ h(G) = 0]
Now, f(G1) > f(G)   [∵ G1 is suboptimal]   ...(i)
h(n) ≤ h*(n)   [∵ h is admissible]
∴ g(n) + h(n) ≤ g(n) + h*(n)
∴ f(n) ≤ f(G) < f(G1)   ...from equation (i)




∴ A* will never select G1 for expansion.

Remarks
(1) A fringe is a data structure used to store all the possible states (nodes) that one can go to from the current state.
(2) The main idea of the proof is that when A* finds a path, it has found a path whose estimate is lower than the estimate of any other possible path.
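A compact A* sketch based on the f(n) = g(n) + h(n) rule above; the small graph and the (admissible) heuristic values below are made up for illustration :

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: always expand the node with the smallest fitness number
    f(n) = g(n) + h(n), where g is the cost so far and h the estimate."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # found a better path to nxt
                best_g[nxt] = g2
                heapq.heappush(open_list, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# A small illustrative graph (edge costs) and an admissible heuristic.
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)]}
hvals = {'S': 5, 'A': 4, 'B': 2, 'G': 0}

path, cost = a_star('S', 'G', lambda n: edges.get(n, []), lambda n: hvals[n])
print(path, cost)   # → ['S', 'A', 'B', 'G'] 6
```

Because the heuristic never overestimates (e.g., hvals['S'] = 5 ≤ 6, the true cost), the least-cost path S → A → B → G is returned rather than the greedier-looking S → B → G of cost 7.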
Syllabus topic : Local Search : Hill Climbing Search, Simulated Annealing Search

3.17 LOCAL SEARCH

UQ. Write a short note on : Local search algorithms. (MU - Q. 1(d), Dec. 17, 5 Marks)

•	In computer science, local search is a heuristic method for solving computationally hard optimisation problems.
•	Local search starts from an initial solution and evolves that single solution, mostly into a better solution.
•	At each solution along this path it evaluates a number of moves on the solution and applies the most suitable move to take the step to the next solution. It continues this process for a large number of iterations until it is terminated.
•	Local search uses a single search path and moves facts around to find a good feasible solution. Hence it is natural to implement.
•	Local search algorithms are widely applied to numerous hard computational problems, including problems from artificial intelligence, mathematics, operations research, engineering and bioinformatics.

Some problems where local search is applied are :
(1) The vertex cover problem, in which a solution is a vertex cover of a graph, and the target is to find a solution with a minimal number of nodes.
(2) The travelling salesman problem, in which a solution is a cycle containing all nodes of the graph and the target is to minimise the total length of the cycle.
(3) The Hopfield neural network problem, in which the target is to find a stable configuration of the Hopfield network.

3.18 HILL CLIMBING ALGORITHM

GQ. Briefly define hill climbing algorithm.
UQ. Explain hill climbing and its drawbacks in detail. Explain the hill-climbing algorithm with an example.

Definition : This algorithm, also called a discrete optimization algorithm, uses a simple heuristic function, viz., the amount of distance the node is from the goal. The ordering of choices is a heuristic measure of the remaining distance one has to traverse to reach the goal node.

In fact, there is practically no difference between hill climbing and depth-first search except that the children of the node that has been expanded are sorted by the remaining distance.

3.18.1 Algorithm for Hill-Climbing Procedure

Step 1 : Put the initial node on a list START.
Step 2 : If (START is empty) or (START = GOAL), then terminate the search.
Step 3 : Remove the first node from START. Call this node a.
Step 4 : If (a = GOAL), then terminate the search with success.
Step 5 : Else, if node a has successors, generate all of them. Find out how far they are from the goal node. Sort them by the remaining distance from the goal and add them at the beginning of START.
Step 6 : Go to Step 2.


3.18.2 Explanation of Hill Climbing Algorithm

The hill-climbing technique is used in some activity or other in our day-to-day routine. Some examples are :
1. While listening to somebody playing the flute on the transistor, the tone and volume controls are adjusted in a way that makes the music melodious.
2. While tuning the carburettor of a scooter, the accelerator is raised to its maximum once and the carburettor is tuned so that the engine keeps running for a considerably long period of time.
3. An electronics expert, while making a transistor radio for the first time, tunes the radio set at mid-afternoon, when the signal is weak, for proper reception.

Fig. 3.18.1 : Search tree for hill-climbing procedure

3.18.3 Simple Hill Climbing Algorithm

GQ. Explain simple hill climbing algorithm with its limitations.

The simplest way to implement hill climbing is as follows :
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the current state :
(i) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
(ii) Evaluate the new state.
(a) If it is a goal state, then return it and quit.
(b) If it is not a goal state but it is better than the current state, then make it the current state.
(c) If it is not better than the current state, then continue in the loop.

The key difference between this algorithm and generate-and-test is the use of an evaluation function as a way to inject task-specific knowledge into the control process. It is this knowledge that makes these heuristic search methods, and it is that same knowledge that gives these methods their power to solve some otherwise intractable problems.

Notice that in this algorithm, we have asked the relatively vague question, "Is one state better than another?" For the algorithm to work, a precise definition of 'better' must be provided. In some cases, it means a higher value of the heuristic function; in others, it means a lower value. It does not matter which, as long as a particular hill-climbing program is consistent in its interpretation.

3.19 STEEPEST-ASCENT HILL CLIMBING ALGORITHM

GQ. Explain steepest-ascent hill climbing algorithm with its limitations.

Steepest-ascent hill climbing : A useful variation on simple hill climbing considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing or gradient search. Notice that this contrasts with the basic method, in which the first state that is better than the current state is selected. The algorithm works as follows :



3.19.1 Algorithm : Steepest-Ascent Hill Climbing

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to the current state :
(i) Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
(ii) For each operator that applies to the current state, do :
(a) Apply the operator and generate a new state.
(b) Evaluate the new state.
- If it is a goal state, then return it and quit.
- If not, compare it to SUCC. If it is better, then set SUCC to this state.
- If it is not better, leave SUCC alone.
(c) If SUCC is better than the current state, then set the current state to SUCC.

To apply steepest-ascent hill climbing to the colored blocks problem, we must consider all perturbations of the initial state and choose the best. For this problem, this is difficult since there are so many possible moves.

There is a trade-off between the time required to select a move (usually longer for steepest-ascent hill climbing) and the number of moves required to get to a solution (usually longer for basic hill climbing) that must be considered when deciding which method will work better for a particular problem.

3.19.2 Limitations of Steepest-Ascent Hill Climbing

UQ. Explain the limitations of steepest-ascent hill climbing. (MU - Q. 2(b), May 17, 10 Marks)
UQ. What are the problems/frustrations that occur in the hill climbing technique? Illustrate with an example.

Steepest-ascent hill climbing may fail to find a solution. The algorithm may terminate not by finding a goal state but by getting to a state from which no better states can be generated. This will happen if the program has reached either a local maximum, a plateau, or a ridge.

1. A local maximum is a state that is better than all its neighbours but is not better than some other states farther away. At a local maximum, all moves appear to make things worse. Local maxima are particularly frustrating because they often occur almost within sight of a solution. In this case, they are called foothills.
2. A plateau is a flat area of the search space in which a whole set of neighboring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.
3. A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas and that itself has a slope (which one would like to climb). But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse the ridge by single moves.


Fig. 3.19.1 : Problems associated with hill climbing : local maxima, plateau and ridge

3.19.3 Ways of Dealing with Local Maxima, Plateau and Ridge Problems

GQ. What are the ways of dealing with local maxima, plateau and ridge problems which arise in hill climbing?

The problems associated with hill climbing are : (i) local maxima, (ii) plateaus, (iii) ridges. There are some ways of dealing with these problems, although these methods are by no means guaranteed :

•	Some problem spaces are great for hill climbing and others are terrible.
•	Random restart : Keep restarting the search from random locations until a goal is found.
•	Problem reformulation : Reformulate the search space to eliminate these problematic features.
•	Backtrack to some earlier node and try going in a different direction. This is particularly reasonable if at that node there was another direction that looked promising or almost as promising as the one that was chosen earlier. To implement this strategy, maintain a list of paths almost taken and go back to one of them if the path that was taken leads to a dead end. This is a fairly good way of dealing with local maxima.
•	Make a big jump in some direction to try to get to a new section of the search space. This is a particularly good way of dealing with plateaus. If the only rules available describe single small steps, apply them several times in the same direction.
•	Apply two or more rules before doing the test. This corresponds to moving in several directions at once. This is a particularly good strategy for dealing with ridges.
•	Even with these first-aid measures, hill climbing is not always very effective. It is particularly unsuited to problems where the value of the heuristic function drops off suddenly as we move away from a solution. This is often the case whenever any sort of threshold effect is present.
•	Hill climbing is a local method, by which we mean that it decides what to do next by looking only at the "immediate" consequences of its choice rather than by exhaustively exploring all the consequences. It shares with other local methods the advantage of being less combinatorially explosive than comparable global methods. But it also shares with other local methods a lack of a guarantee that it will be effective. Although it is true that the hill-climbing procedure itself looks only one move ahead and not any farther, that examination may in fact exploit an arbitrary amount of global information if that information is encoded in the heuristic function.

3.20 SIMULATED ANNEALING (SA)

UQ. Define the term simulated annealing. Explain simulated annealing with suitable example.

•	Simulated annealing is a variation of hill climbing in which, at the beginning of the process, some downhill moves may be made. The idea is to do enough exploration of the whole space early on so that the final solution is relatively insensitive to the starting state. This should lower the chances of getting caught at a local maximum, a plateau or a ridge.
•	Simulated annealing as a computational process is patterned after the physical process of annealing, in which physical substances such as metals are melted (i.e., raised to high energy levels) and then

(MS-126) Tech-Neo Publications...A SACHIN SHAH Venture
Artificial Intelligence (MU-AI & DS / Electronics) (Solving Problems by Searching)

gradually cooled until some solid state is reached. The goal of this process is to produce a minimal-energy final state. Thus this process is one of valley descending, in which the objective function is the energy level.
• Simulated annealing is an effective and general form of optimisation. Annealing refers to an analogy with thermodynamics, specifically with the way that metals cool and anneal. Simulated annealing uses the objective function of an optimisation problem instead of the energy of a material.

Uses of SA

SA is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimisation in a large search space for an optimisation problem.

3.20.1 Types and Use of Simulated Annealing

• Simulated annealing algorithms are essentially random search methods in which the new solutions, generated according to a sequence of probability distributions (for example, the Boltzmann distribution) or a random procedure (e.g. a hit-and-run algorithm), may be accepted even if they do not lead to an improvement in the objective function.
• Simulated annealing is a process where the temperature is reduced slowly, starting from a random search at high temperature and eventually becoming purely greedy descent as it approaches zero temperature. S.A. maintains a current assignment of values of variables.
• Simulated annealing is an effective and general form of optimisation. It is useful in finding global optima in the presence of a large number of local optima. T is analogous to temperature in an annealing system. At higher values of T, uphill moves are more likely to occur.
• S.A. will accept an increase in the cost function with some probability based on the annealing algorithm. Simulated annealing is based on an analogy to a physical system which is first melted and then cooled, or annealed, into a low-energy state.

3.20.2 Simulated Annealing in Machine Learning

• S.A. is a technique that is used to find the best solution for either a global minimum or maximum without having to check every single possible solution that exists. This is very helpful when addressing massive optimisation problems like the one previously stated.
• S.A. is a stochastic global search optimisation algorithm. The algorithm is inspired by annealing in metallurgy, where metal is heated to a high temperature quickly, then cooled slowly, which increases its strength and makes it easier to work with. S.A. executes the search in the same way.
• Annealing is a heat treatment process that changes the physical, and sometimes also the chemical, properties of a material to increase ductility and reduce the hardness to make it more workable.
• If configured correctly and under certain conditions, S.A. can guarantee finding the global optimum, whereas such a guarantee is available to Hill Climbing / Descent only if all the local optima in the search space have equal scores / costs.
• S.A. has been widely used in the solution of optimisation problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used.

3.21 PARAMETERS FOR S.A.

1. The choice of parameters depends on the expected variation in the performance measure over the search space.
2. A good rule of thumb is that the initial temperature should be set to accept roughly 98% of the moves, and that the final temperature should be low enough that the solution does not improve much, if at all. To improve simulated annealing we have to do the following :
(i) Improve the accuracy.
(ii) Alter the parameters of the algorithm.
(iii) Run a meta-optimisation on the parameters of your problem.
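The acceptance rule and cooling schedule described above can be sketched in a few lines. This is a minimal illustration, not the book's code: the cost function, the neighbour move and the schedule constants (t0, t_min, alpha) are all assumptions chosen for the demo.

```python
import math
import random

def simulated_annealing(cost, neighbour, start, t0=10.0, t_min=1e-3, alpha=0.95):
    """Minimise `cost` starting from `start`.

    A worse neighbour is accepted with probability exp(-delta / T)
    (the Boltzmann criterion): at high T many uphill moves pass,
    and as T falls the search becomes purely greedy descent.
    """
    current, t = start, t0
    while t > t_min:
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand          # accept improving (or occasionally worse) move
        t *= alpha                  # geometric cooling schedule
    return current

# Demo: minimise f(x) = x^2 with a small random step as the move.
random.seed(0)
best = simulated_annealing(lambda x: x * x,
                           lambda x: x + random.uniform(-1, 1),
                           start=8.0)
print(round(best, 2))
```

With the rule-of-thumb above, t0 would be tuned so that early iterations accept almost every move; here it is simply fixed at 10.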
3. Simulated annealing is gradient based. It is an extension of gradient descent, and in the degenerate case (zero temperature) they are the same : it generates random neighbouring states, and if the fitness of that state is better than the current one then it jumps there. That is, it seeks a local minimum.
4. In annealing, atoms migrate in the crystal lattice and the number of dislocations decreases, leading to a change in ductility and hardness. With knowledge of the composition and phase diagram, heat treatment can be used to adjust a material from harder and more brittle to softer and more ductile.

Examples of SA
• Travelling salesman problem • Task allocation
• Graph colouring and partitioning problems • Scheduling
• Non-linear function optimization

Syllabus topic : Genetic Algorithm (Optimization)

3.22 GENETIC ALGORITHM

• An algorithm is a progression of steps for solving a problem. A genetic algorithm is a problem-solving technique that uses genetics as its model of problem-solving.
• It is a search method to find estimated solutions to optimization and search issues.
• The genetic algorithm is a method that is applied to solve both constrained and unconstrained optimisation problems.
• It is based on natural selection.
• The genetic algorithm modifies a population of individual solutions.
• The genetic algorithm is a metaheuristic in computer science and operations research.
• It is inspired by natural selection, and it belongs to the larger class of evolutionary algorithms.
• Genetic algorithms are used to generate high-quality solutions to optimisation and search problems.
• It depends on biologically inspired operators such as crossover, mutation and selection. Some examples of Genetic Algorithm (GA) applications include optimising decision trees for better performance, hyperparameter optimisation, etc.
• One can easily distinguish between a traditional and a genetic algorithm.

3.22.1 Comparison between Traditional and Genetic Algorithm

Traditional algorithm | Genetic algorithm
It selects the next point in the series by a deterministic computation. | It selects the next population by computation which utilizes random number generators.
It creates an individual point at each iteration. | It creates a population of points at every iteration.
The sequence of points approaches an optimal solution. | The best point in the population approaches an optimal solution.
Advancement in each iteration is problem specific. | Concurrence in each iteration is problem independent.

3.22.2 Basic Terminology

UQ. Define the terms chromosome, fitness function, crossover and mutation as used in Genetic algorithms.
UQ. Explain how genetic algorithms work. Define the terms chromosome, fitness function, crossover and mutation as used in Genetic algorithms.

We introduce ourselves to the basic terminology required to understand GAs.

(i) Population : The population of a GA is the same as a population of human beings, except that instead of human beings we have candidate solutions.
(ii) Chromosomes : A chromosome is regarded as a solution to the given problem.
(iii) Gene : A gene is one element position (or part) of a chromosome.
(iv) Allele : It is the value taken by a gene for a particular chromosome.

Fig. 3.22.1 : Population (a set of chromosomes), gene and allele

Fig. 3.22.2 : Encoding and decoding between the phenotype space and the genotype space (computation space)

(v) Genotype : The genotype is the population in the computation space. Using a computing system, solutions represented in the computation space can be easily understood and easily manipulated.
(vi) Phenotype : The phenotype is the population in which solutions are represented in the way they appear in actual world situations.
(vii) Decoding and Encoding : Decoding is the process of transforming a solution from the genotype to the phenotype space. Encoding is the process of transforming from the phenotype to the genotype space. Decoding has to be fast, as it is carried out repeatedly in a GA during the fitness value calculation.
Remark : For simple problems the phenotype and genotype spaces are the same.
Fitness Function : A fitness function is a function which takes the solution as input and produces the suitability of the solution as the output. In some cases the fitness function and the objective function are the same. Depending on the problem, they may be different.
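The genotype/phenotype distinction above can be made concrete. In this sketch (our own illustrative encoding, not the book's), the phenotype is an integer, the genotype is its bit-string chromosome, and the fitness function scores the decoded solution. The target value 42 and the 8-bit length are arbitrary demo choices.

```python
BITS = 8                                   # chromosome length

def encode(x):                             # phenotype -> genotype (list of bits)
    return [(x >> i) & 1 for i in reversed(range(BITS))]

def decode(chrom):                         # genotype -> phenotype (integer)
    value = 0
    for bit in chrom:
        value = (value << 1) | bit
    return value

def fitness(chrom):                        # suitability of the decoded solution:
    return -abs(decode(chrom) - 42)        # 0 is best, more negative is worse

chrom = encode(42)
print(chrom, decode(chrom), fitness(chrom))
```

Note that `fitness` works on the genotype but must decode first, which is why decoding speed matters, as the text points out.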
A fitness function is a function we want to optimise.

3.22.3 Optimisation Problems

• In a genetic algorithm, a population of candidate solutions to an optimisation problem is evolved towards better solutions.
• The evolution generally begins with a population of randomly generated individuals. It is an iterative process, and the population in each iteration is called a generation.
• The fitness of every individual in the population is evaluated. The fitness of every individual is the value of the objective function in the optimisation problem to be solved.
• The more fit individuals are stochastically selected from the current population to form a new generation, which is used in the next iteration of the algorithm.
• The algorithm terminates if (i) a maximum number of generations has been produced, or (ii) a satisfactory fitness level is reached for the population.
The requirements of a genetic algorithm are :
(i) a genetic representation of the solution domain,
(ii) a fitness function to evaluate the solution domain.
• A standard representation of each candidate solution is an array of bits. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which allows simple crossover operations.
• Once the genetic representation and the fitness function are defined, a GA initialises a population of solutions and then improves it through repetitive application of the mutation, crossover, inversion and selection operators.

3.22.4 Initialisation

• The population size depends on the nature of the problem, and there may be several thousands of solutions. Generally, the initial population is generated randomly, and it may cover all the possible solutions. (This is termed the search space.)
• The solutions may be 'seeded' in areas where an optimal solution is likely to be found.

3.22.5 Selection

• During each successive generation, a portion of the existing population is selected to obtain a new generation.
• Individual solutions are selected through a fitness-based process. Certain selection methods rate the fitness of each solution and select the best solutions.
• The fitness function is defined over the genetic representation. It measures the quality of the represented solution. The fitness function is always problem dependent.
• In some problems it is not possible to define the fitness expression; in such cases, simulation may be used to determine the fitness function value of a phenotype (for example, computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded, as in interactive genetic algorithms).

3.22.6 Genetic Operators

UQ. Explain how genetic algorithms work.

Genetic operators alter the genetic composition of the offspring. These include crossover, mutation, selection, etc.
Basic structure : The basic structure of a GA is :
(i) We begin with an initial population; it may be generated at random or seeded by other heuristics.
(ii) Select parents from this population for mating.
(iii) Apply crossover and mutation operators on the parents to generate new off-springs.
(iv) Finally these off-springs replace the existing individuals in the population, and
(v) the process gets repeated.
In this way genetic algorithms actually copy human evolution to some extent.

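The five-step loop above can be sketched as a tiny GA. Everything concrete here is an illustrative assumption rather than the book's prescription: the "OneMax" fitness (count the 1-bits), tournament selection of size 2, single-point crossover, and the population size, generation count and mutation rate.

```python
import random

random.seed(1)
LENGTH = 20                          # chromosome = bit string of this length

def fitness(chrom):                  # "OneMax": count the 1-bits
    return sum(chrom)

def select(pop):                     # step (ii): tournament selection, size 2
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):               # step (iii): single-point crossover
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:]

def mutate(chrom, rate=0.01):        # step (iii): flip each gene with small probability
    return [g ^ 1 if random.random() < rate else g for g in chrom]

# step (i): initial population at random
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
for _ in range(60):                  # step (v): repeat
    # steps (ii)-(iv): breed a whole replacement generation
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(30)]

best = max(pop, key=fitness)
print(fitness(best))
```

On this toy problem the population quickly converges towards the all-ones chromosome, illustrating how selection pressure plus variation improves fitness over generations.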
Fig. 3.22.3 : Basic structure of genetic operators : population initialisation → fitness function calculation → (loop until termination criteria reached : crossover → mutation → survivor selection) → terminate and return best

• Genetic variation is a necessity for the process of evolution.
• Genetic operators used in genetic algorithms are analogous to those in the natural world : selection, crossover (also called recombination) and mutation.

(1) Selection
(i) Selection operators give preference to better solutions (chromosomes),
(ii) allowing them to pass on their 'genes' to the next generation of the algorithm.
(iii) The best solutions are determined using some form of objective function (also known as a 'fitness function' in genetic algorithms) before being passed to the crossover operator.

(2) Crossover
(i) Crossover is the process of taking more than one parent solution (chromosomes) and
(ii) producing a child solution from them, by recombining portions of good solutions.
(iii) The genetic algorithm may create a better solution.
(iv) The crossover method is often chosen to closely match the chromosome's representation of the solution.

(3) Mutation
(i) The mutation operator encourages genetic diversity amongst solutions.
(ii) It attempts to prevent the genetic algorithm converging to a local minimum.
(iii) Through the mutation operator, a genetic algorithm can reach an improved solution.
(iv) Different methods of mutation may be used.

(4) Termination
• The above process is repeated until a termination condition is reached.
• In general, terminating conditions are :
1. A solution is found that satisfies minimum criteria.
2. A fixed number of generations is reached.
3. The computation time / allocated budget is reached.
4. The solution's fitness has peaked, so that further successive iterations no longer produce better results.
5. Manual inspection, or combinations of the above conditions.

3.22.7 Advantages of Genetic Algorithm

1. The genetic algorithm concept is easy to understand.
2. The genetic algorithm supports multi-objective optimization.
3. A genetic algorithm is suitable for noisy environments.
4. The genetic algorithm is robust with respect to local minima/maxima.
5. The genetic algorithm utilizes probabilistic transition rules.

6. The genetic algorithm utilizes payoff (objective function) information, not derivatives.
7. The genetic algorithm works well on mixed discrete functions.
8. The genetic algorithm concept is modular, separate from the application.
9. In the genetic algorithm concept, the answer gets better with time.
10. The genetic algorithm concept is inherently parallel and easily distributed.
11. Genetic algorithms work on the chromosome, which is an encoded version of the potential solutions' parameters, rather than the parameters themselves.
12. Genetic algorithms use a fitness score, which is obtained from objective functions, without other derivative or auxiliary information.

3.22.8 Limitations of Genetic Algorithm

1. Genetic algorithms might be costly in computational terms, since the evaluation of each individual may require the training of a model.
2. These algorithms can take a long time to converge, since they have a stochastic nature.
3. The language used to determine candidate solutions must be robust. It must be able to endure random changes such that fatal errors or mistakes do not arise.
4. A wrong choice of the fitness function may lead to significant consequences.
5. A small population size will not give the genetic algorithm enough solutions to produce precise results.
6. A high frequency of genetic change or a poor selection scheme will result in disrupting the beneficial schema.
7. Though genetic algorithms can find exact solutions to analytical sorts of problems, traditional analytic techniques can find the same solutions in a shorter time with less computation.

3.22.9 Applications of Genetic Algorithm

1. Genetic Algorithm in Robotics
• As we know, robotics is one of the most discussed fields in the computer industry today.
• It is used in various industries in order to increase profitability, efficiency and accuracy.
• As the environment in which robots work changes with time, it becomes very tough for developers to figure out each possible behaviour of the robot in order to cope with the changes.
• Hence a suitable method is required which will lead the robot to its objective and will make it adaptive to new situations as it encounters them. This is where the genetic algorithm plays a vital role.
• Genetic algorithms are adaptive search techniques that are used to learn high-performance knowledge structures.

2. Genetic Algorithm in Financial Planning
• Genetic algorithms are extremely efficient for financial modelling applications, as they are driven by adjustments that can be used to improve the efficiency of predictions and return over the benchmark set.
• In addition, these methods are robust, permitting a greater range of extensions and constraints which may not be accommodated in traditional techniques.

Syllabus topic : Game Playing, Adversarial Search Techniques

3.23 ADVERSARIAL SEARCH

• Adversarial search is a search when there is an 'enemy' or 'opponent' changing the state of the problem every step in a direction which we do not want to have.
• Each agent needs to consider the action of the other agent and the effect of that action on its performance. So, searches in which two or more players with conflicting goals are trying to explore


the same search space for the solution are called adversarial searches, often known as Games.
• Adversarial search is search when there is an "enemy" or "opponent" changing the state of the problem every step in a direction you do not want.
• Examples : Chess, business, trading, war. You change the state, but then you do not control the next state. The opponent will change the next state in a way that is 1. unpredictable and 2. hostile to you. You only get to change every alternate state.
In adversarial search we examine the problem which arises when we try to plan ahead of the world while other agents are planning against us.
(i) We study situations where more than one agent is searching for the solution in the same search space; this situation usually occurs in game playing.
(ii) An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them. Each agent needs to consider the action of the other agent and the effect of that action on its performance.
(iii) Searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as 'Games'.
(iv) Games are modelled as a search problem and a heuristic evaluation function; these are the two main factors which help to model and solve games in AI.

3.23.1 Types of Games in AI

(i) Perfect information : Agents have all the information about the game, and they can see each other's moves also. Examples : Chess, Go etc.
(ii) Imperfect information : If in a game agents do not have all the information about the game and are not aware of what is going on, such games are called games with imperfect information, such as Battleship, bridge etc.
(iii) Deterministic games : Deterministic games follow a strict pattern and set of rules. There is no randomness associated with them. Examples : Chess, Tic-Tac-Toe etc.
(iv) Non-deterministic games : These are games which have various unpredictable events and a factor of chance or luck. These moves are random, and each action response is not fixed. Such games are also called stochastic games. Example : Poker etc.
(v) Zero-sum game : Zero-sum games are adversarial search which involves pure competition.
In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of another agent.
One player of the game tries to maximise one single value, while the other player tries to minimise it.
Each move by one player in the game is called a 'ply'.
Chess and Tic-Tac-Toe are examples of a zero-sum game.
Zero-sum game : Embedded thinking
The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out :
(i) What to do
(ii) How to decide the move
(iii) Needs to think about his opponent as well
(iv) The opponent also thinks what to do.
Each of the players is trying to find out the response of his opponent to their actions. This requires embedded thinking or backward reasoning to solve the game problems in AI.

Formalization of the problem

A game can be defined as a type of search in AI which can be formalised with the following elements :
(i) Initial state : It specifies how the game is set up at the start.
(ii) Player (s) : It specifies which player has the move in the state space.
(iii) Actions (s) : It returns the set of legal moves in the state space.

(iv) Result (s, a) : It is the transition model, which specifies the result of moves in the state space.
(v) Terminal-Test (s) : The terminal test is true if the game is over, else it is false. The states where the game ends are called terminal states.
(vi) Utility (s, p) : A utility function gives the final numerical value for a game that ends in terminal state s for player p. It is also called the payoff function.
Non-zero-sum game : In a non-zero-sum game, each agent's gain or loss is not balanced by a loss or gain. If a player wins the game, it does not mean that the other player has lost the game.
Positive-sum game : Here all players have the same goal and they contribute together to play the game.
Negative-sum game : Here nobody wins, everybody loses. Every player has a different goal; e.g. war.
Game tree : A game tree is a tree where the nodes are the game states and the edges are the moves by the players. A game tree involves the initial state, the actions function and the result function.
Example : the Tic-Tac-Toe game tree.

3.23.2 Characteristics of Adversarial Search (A.S.)

• Adversarial search in Artificial Intelligence is a game playing technique where the agents are surrounded by a competitive environment. A conflicting goal is given to the agents (multiagent). These agents compete with one another and try to defeat one another in order to win the game.
• Searches in which two or more players with conflicting goals are trying to explore the same 'search space' for the solution are called Adversarial Search.
• In A.S. there is an enemy or opponent. Examples : Chess, business, trading, war.
• Here an agent can change the state, but then it cannot control the next state. The opponent will change the next state in an unpredictable way. Each agent needs to consider the action of the other agent and the effect of that action on its performance.

3.23.3 Comparison of Search and Games

Search | Games
1. Adversarial search is a search where we examine the problem which arises when we try to plan ahead of the world while other agents are planning against us. | Searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as Games.
2. To find the optimal solution, heuristic techniques are used. In adversarial search, the result depends on the players, who will decide the result of the game. | Games are modelled as a search problem and a heuristic evaluation function, and these are the two main factors which help to model games in AI.
3. Pruning is a technique which allows ignoring the unwanted portions of a search tree which make no difference in its final result. | Perfect information : A game with perfect information is one in which agents can look into the complete board. Examples are chess, Go etc.
4. Heuristic evaluation function allows to | Imperfect information : If in a game agents do

(Search, contd.) approximate the cost value at each level of the search tree, before reaching the goal node. Examples are : chess, business, trading, war; you can change the state, but then you cannot control the next state. | (Games, contd.) not have all the information about the game and are not aware of what is going on; such games are called games with imperfect information. Example : Bridge, Battleship etc.
 | Deterministic games : They follow a strict pattern and set of rules. For example : chess, tic-tac-toe.
 | Non-deterministic games : Such games have various unpredictable events and a factor of chance or luck. These are random, and each action response is not fixed. Example : Monopoly, Poker etc.

3.24 TECHNIQUES REQUIRED TO GET THE BEST OPTIMAL SOLUTION

There is always a need to choose those algorithms which provide the best optimal solution in a limited time. So, we use the following techniques, which could fulfil our requirements :
• Pruning : A technique which allows ignoring the unwanted portions of a search tree which make no difference in its final result.
• Heuristic Evaluation Function : It allows to approximate the cost value at each level of the search tree, before reaching the goal node.

Syllabus topic : Mini-max Search, Alpha-Beta Pruning

3.25 GAME PLAYING

3.25.1 Zero Sum Game

Q. Write a short note on : Game Playing.

• Zero-sum games are adversarial search which involves pure competition.
• In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of another agent.
• One player of the game tries to maximise one single value, while the other player tries to minimise it.
• Each move by one player in the game is called a ply.
• Chess and tic-tac-toe are examples of a zero-sum game.
• The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out :
o What to do
o How to decide the move
o Needs to think about his opponent as well
o The opponent also thinks what to do
Each of the players is trying to find out the response of his opponent to their actions. This requires embedded thinking or backward reasoning to solve the game problems in AI.


3.25.2 Elements of Game Playing Search

UQ. Draw a game tree for a tic-tac-toe problem.

To play a game, we use a game tree to know all the possible choices and to pick the best one out. There are the following elements of game playing :
• S0 : It is the initial state from where a game begins.
• PLAYER (s) : It defines which player has the current turn to make a move in the state.
• ACTIONS (s) : It defines the set of legal moves to be used in a state.
• RESULT (s, a) : It is a transition model which defines the result of a move.
• TERMINAL-TEST (s) : It defines that the game has ended and returns true.
• UTILITY (s, p) : It defines the final value with which the game has ended. This function is also known as the Objective function or the Payoff function. The value which the player will get is :
(+1) : if the PLAYER wins,
(-1) : if the PLAYER loses,
(0) : if there is a draw between the PLAYERS.
For example, in chess or tic-tac-toe, we have two or three possible outcomes : either to win, to lose, or to draw the match, with values +1, -1 or 0.
Let us understand the working of these elements with the help of the game tree designed in Fig. 3.25.1 for tic-tac-toe. Here, a node represents a game state and the edges represent the moves taken by the players.
Fig. 3.25.1 : A game tree for tic-tac-toe (levels alternate between MAX (X) and MIN (O) moves, ending in terminal states)
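The six elements listed in Section 3.25.2 map directly onto code. Below is a minimal sketch for tic-tac-toe; the board encoding (a 9-tuple read row by row) and all helper names are our own illustrative choices, not the book's notation.

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def initial_state():                 # S0 : empty 3x3 board
    return (" ",) * 9

def player(s):                       # whose turn: MAX plays "X" and moves first
    return "X" if s.count("X") == s.count("O") else "O"

def actions(s):                      # legal moves = indices of empty squares
    return [i for i, c in enumerate(s) if c == " "]

def result(s, a):                    # transition model: apply move a in state s
    board = list(s)
    board[a] = player(s)
    return tuple(board)

def winner(s):
    for i, j, k in WIN_LINES:
        if s[i] != " " and s[i] == s[j] == s[k]:
            return s[i]
    return None

def terminal_test(s):                # game over: someone won, or board is full
    return winner(s) is not None or " " not in s

def utility(s):                      # +1 if MAX (X) wins, -1 if MIN (O) wins, 0 draw
    return {"X": 1, "O": -1, None: 0}[winner(s)]

s = result(initial_state(), 4)       # MAX places X in the centre
print(player(s), actions(s))
```

After MAX's first move it is MIN's ("O") turn and eight squares remain, exactly as the game tree in Fig. 3.25.1 branches.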
move and place X in the empty square box.


INITIAL STATE (So) : The top node in the
• ACTIONS (s) : Both the players can make moves
game-tre e represents the initial state in the tree •
and shows all the possible choice to pick out one. in the empty boxes chance by chance.

PLAYER (s) : There are two players, MAX and • RESULT (s, a) : The moves made by MIN and
• MAX will decide the outcome of the game.
MIN. MAX begins the game by picking one best
~ Tech-Neo Publications...A SACHIN SHAH Venture
(MS-126)
'

Artificial Intelligence (MU-AI & DS / Electronics)        Solving Problems by Searching

•  TERMINAL-TEST(s) : When all the empty boxes are filled, it is the terminating state of the game.

•  UTILITY : At the end, we get to know who wins, MAX or MIN, and accordingly the prize is given to them.

3.25.3  Some More Examples of Game Playing / Adversarial Search

(1) Chess : In 1997, IBM's supercomputer dubbed "Deep Blue" did just that. And it didn't beat just any person, but the world chess champion Garry Kasparov. In an essay, Kasparov wrote about his first game with Deep Blue in 1996. He said that while he had played numerous times against a computer, his match against Deep Blue was different. He sensed a "new kind of intelligence across the table." During a rematch in 1997, the chess master lost to the supercomputer. In this instance, we can deduce that Deep Blue was better at adversarial search.

•  Indeed, Deep Blue has enormous computational power. It can consider 100-200 million positions per second and has around 4,000 positions in its opening book.

•  In 2006, another world champion, Vladimir Kramnik, was defeated by a machine. This time, it was by Deep Fritz, a German chess computer program.

(2) Checkers

•  Checkers is another two-player game that can use adversarial search. A computer program called "Chinook" was developed specifically to play in the World Checkers Championship, and in 1990 it reached its goal : Chinook was the first computer program that won the right to play for the World Checkers Championship.

•  In contrast to Deep Blue and Deep Fritz, however, Chinook's knowledge of the game was not learned via AI, as its developers programmed everything. Still, we can't discount the fact that it is a powerful application. It has a search space of 5 × 10^20, or 500,000,000,000,000,000,000 sets of possible moves (that's 500 quintillion, if you're wondering).

(3) "Go"

•  AlphaGo is a computer program designed to play and master the 3,000-year-old board game "Go." It uses machine learning (ML) and deep neural networks and has indeed mastered the game, beating "Go" champions such as Lee Se-dol and Fan Hui.

•  Se-dol retired in 2019, declaring that "Even if I become number one, there is an entity that cannot be defeated." He was referring to AI-powered "Go" opponents such as AlphaGo, which has an enormous search space.

•  Two-player games such as chess, checkers, and "Go" have come a long way with the help of adversarial search methods and other technologies. Although the same game rules are still used, it's awe-inspiring to think that intelligent machines can also play against humans. People can hone their skills in these types of games by playing against machines.

3.25.4  Types of Algorithms in Adversarial Search

•  In a normal search, we follow a sequence of actions to reach the goal or to finish the game optimally. But in an adversarial search, the result depends on the players, who together decide the result of the game.

•  It is also obvious that the solution for the goal state will be an optimal solution, because each player will try to win the game with the shortest path and under limited time.

•  There are the following types of adversarial search :
   o  Minmax Algorithm        o  Alpha-beta Pruning

3.26  GAME TREE

GQ. Explain game tree.

•  A game tree is a type of recursive search function that examines all possible moves of a strategy game, and their results, in an attempt to ascertain the optimal move.
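The recursive back-up just described — evaluate the terminal positions, then alternate between the computer's choice and the opponent's choice back up the tree — can be sketched in a few lines. The nested-list tree format below is an illustrative assumption, not from the text:

```python
# Hypothetical tree format: a leaf is a number (its static evaluation);
# an internal node is simply the list of its children.

def game_tree_value(node, maximizing=True):
    if isinstance(node, (int, float)):
        return node                      # leaf: return its evaluation
    # Back up child values; the player to move alternates at each level.
    values = [game_tree_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply example: MAX to move at the root, MIN one level below.
tree = [[3, 5], [7, 0], [2, 8]]
game_tree_value(tree)   # MIN backs up 3, 0, 2; MAX then picks 3
```
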


•  Game trees are very useful for artificial intelligence in scenarios that do not require real-time decision making and have a relatively low number of possible choices per play.

•  The most commonly-cited example is chess, but they are applicable to many situations. Game trees are generally used in board games to determine the best possible move. For the purpose of this article, Tic-Tac-Toe will be used as an example.

•  The idea is to start at the current board position and check all the possible moves the computer can make. Then, from each of those possible moves, to look at what moves the opponent may make, and then to look back at the computer's. Ideally, the computer will flip back and forth, making moves for itself and its opponent, until the game's completion. It will do this for every possible outcome, effectively playing thousands (often more) of games. From the winners and losers of these games, it tries to determine the outcome that gives it the best chance of success.

GQ. How AI technique is used to solve tic-tac-toe problem?

❑  Heuristic function for Tic-Tac-Toe problem

The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a 3 × 3 matrix. The game is played by 2 players and either of them can start. Each of the two players is assigned a unique symbol (generally O and X). Each player alternately gets a turn to make a move. Making a move is compulsory and cannot be deferred. In each move a player places the symbol assigned to him/her in a blank cell.

Seven classes of moves have been designed using the available heuristics. Each class of moves represents a set of functionally cohesive moves in order to achieve a certain objective during a game. These classes of moves are defined and their roles in playing the game are discussed below :

1.  Prioritized selection (PS) : The PS class of moves selects the blank cell with the maximum priority. If there exists more than one cell with the maximum priority, then any one of them can be selected. The PS class of moves makes sure that the player has control over the most important cells on the board.

2.  Motion (M) : The M class of moves finds all tracks with only one cell filled with the symbol assigned to the player and the other two cells blank. Then one of these tracks with the highest priority is chosen. After that, the blank cell with higher priority in the chosen track is selected. The M class of moves makes sure that the player continues filling a track in which there is still a chance to win.

3.  Definitive offense (DO) : This class of moves finds a track with exactly two cells filled with the symbol assigned to the player and the third cell blank. This blank cell is selected. This move is meant to provide an immediate win to the player.

4.  Definitive defense (DD) : This class of moves finds a track with exactly two cells filled with the symbol assigned to the opponent player and the third cell blank. This blank cell is selected. It is meant to prevent the player from an immediate loss. If the move is not used then the opponent definitely gets a chance to win in the subsequent move.

5.  Tentative offense (TO) : This class of moves finds all pairs of intersecting tracks in which both the tracks have exactly one cell filled with the symbol assigned to the player and the other two cells, including the common one, blank. All such common cells of the intersecting tracks are identified and the one with the maximum priority is selected. This move tries to keep the foundations of victory simultaneously on two tracks. If the TO class of moves can be applied then the player can win in the next move.

6.  Tentative defense (TD) : This class of moves finds all pairs of intersecting tracks in which both the tracks have exactly one cell filled with the symbol assigned to the opponent player and the other two cells, including the common one, blank. All such common cells of the intersecting tracks are identified and the one with the maximum

(MS-126)
~ Tech-Neo Publications...A SACHIN SHAH Venture
priority is selected. It tries to undo the effect of the tentative offense class of moves applied by the opponent. If the move is not applied then the player can lose in the next move.

7.  Diagonal correction (DC) : The above six classes of moves may be insufficient to prevent a loss for the player moving second if either of the two diagonal tracks is filled in the first three moves. This may happen even if the losing player controls the more important cells. It is used to save the player from losing in such conditions.

In Tic-Tac-Toe, the players alternate putting marks in a 3 × 3 array, one mark (X) and the other mark (O).

Let the evaluation function e(P) of a position P be given simply by the following, if P is not a winning position for either player :

e(P) = (number of complete rows, columns, or diagonals that are still open for X)
     − (number of complete rows, columns, or diagonals that are still open for O)

If P is a win for X,  e(P) = ∞ (a very large value)

If P is a win for O,  e(P) = − ∞

Thus, if P is the position with X in the centre and O on an edge cell, we have e(P) = 6 − 4 = 2.
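The evaluation function e(P) and the "definitive" move classes described above can be sketched concretely. The board representation and the function names below are our own illustrative assumptions:

```python
# A track is a complete row, column or diagonal of the 3x3 board,
# with cells indexed 0..8 row by row.
TRACKS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
          (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
          (0, 4, 8), (2, 4, 6)]                 # diagonals

def open_tracks(board, opponent):
    """Tracks still open for a player: no opponent mark on them."""
    return sum(1 for t in TRACKS if all(board[i] != opponent for i in t))

def e(board):
    """e(P) = tracks still open for X minus tracks still open for O."""
    return open_tracks(board, "O") - open_tracks(board, "X")

def definitive_move(board, symbol):
    """Blank cell of a track already holding two `symbol` marks, if any.

    Called with the player's own symbol this is Definitive Offense (an
    immediate win); called with the opponent's symbol it is Definitive
    Defense (blocking the opponent's immediate win).
    """
    for t in TRACKS:
        cells = [board[i] for i in t]
        if cells.count(symbol) == 2 and cells.count(" ") == 1:
            return t[cells.index(" ")]
    return None

# X in the centre, O on an edge: 6 tracks open for X, 4 open for O.
board = [" ", "O", " ",
         " ", "X", " ",
         " ", " ", " "]
e(board)   # -> 6 - 4 = 2, matching the worked value above
```
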

'J , .

... .,
~·.•:u
:1 I
I C
Max'•Mov•
•, ':JC,. ®' G>
L I\)
Jn ,11
J-•....c.. :', .,
I

' t I
#- -- -# ~- -,-
I' ,, II, (..,,_.

.. , . 111
*5-5•0

r:. ' .
i· ' ,,- t·
. I.,, . •
• I

..
~ , IH , #5-8•-1

e
I I
al • #5-5•0
, I! I . ,.,.- ·, II .,' f I

r (,1

.d
I It,

r · #--~e-s-1

rr ' I•

I I I! J

,

IJ ••i .. •r.' • , I 8-5• 1
, XO
Ffe, 3.26.1: First state of search In Tic-Tac-Toe

.. • I I

,I

(MS-126) 1' : ' , .


[ii Tech-Neo Publications...A SACHIN SHAH V~
I
I
It
3.26.1  Tic-Tac-Toe Problem

GQ. How AI technique is used to solve tic-tac-toe problem?

[Figure : the game tree rooted at A with children B, C, D, expanded by two levels; leaf values include + 2, + 7, + 4, + 3.]

Fig. 3.26.2 : A game tree expanded by two levels and their associated static evaluation function values

8.  If A moves to C, then the minimizer will move to K (static evaluation function value = 0), which is the minimum of 3, + 5, 7 and 0. So the value of 0 is backed up at C. On similar lines, the value that is backed up at D is 2. The tree with the backed-up values is given in Fig. 3.26.3.

[Figure : the same tree with backed-up values; leaf values − 2, + 3, + 5, + 7, 0, + 4, + 3.]

Fig. 3.26.3 : Maximizer's move for the tree

9.  The maximizer will now have to choose between B, C or D with the values − 6, 0 and 2. Being a maximizer, he will choose node D because by choosing so, he is sure of getting a value of 2, which is much better than 0 and − 6.

α-β pruning :

1.  In α-β pruning, α is the lower bound on the value the maximizer can be assigned, and the other is β, which represents the upper bound on the value the minimizer can be assigned.

2.  Here, the maximizer has to play first, followed by the minimizer. Thus the minimizer assigns − 6 at B, which is passed back to A. This is replaced by the value passed back by C, as A has the maximizer move. Now A need not fully examine the remaining subtrees : since the first value seen under K is zero and K has a minimizer move, its value can only be 0 or less than 0. Thus the rest of the subtree rooted at K will be pruned, which saves a lot of time in searching.

3.26.2  Limitations of Game Trees

1.  As mentioned above, game trees are rarely used in real-time scenarios (when the computer isn't given very much time to think).

2.  The method requires a lot of processing by the computer, and that takes time.

3.  For the above reason (and others) they work best in turn-based games.

4.  They require complete knowledge of how to move.

5.  Games with uncertainty generally do not mix well with game trees.

6.  They are ineffective at accurately ascertaining the best choices in scenarios with many possible choices.

3.27  MINMAX PROCEDURE

GQ. Explain Minmax procedure with suitable example. OR Consider the 2-ply search as shown below : (i) If the first player is a maximizing player, what move should be chosen under the mini-max strategy? (ii) What nodes should not need to be examined using the α-β pruning technique?

UQ. Explain Min max and Alpha beta pruning algorithms for adversarial search with example.
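The α and β bounds just defined — α the value the maximizer is already guaranteed, β the value the minimizer is already guaranteed — can be sketched as follows; a branch is cut off as soon as α ≥ β. The nested-list tree format is an illustrative assumption, not from the text:

```python
# Alpha-beta pruning sketch: a leaf is a number, an internal node is the
# list of its children.

def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):
        return node                          # leaf: static evaluation value
    if maximizing:
        for child in node:
            alpha = max(alpha, alphabeta(child, False, alpha, beta))
            if alpha >= beta:
                break        # beta cut-off: MIN will never let play reach here
        return alpha
    else:
        for child in node:
            beta = min(beta, alphabeta(child, True, alpha, beta))
            if alpha >= beta:
                break        # alpha cut-off: MAX will never let play reach here
        return beta

# Same 2-ply tree as plain minimax would search, now with cut-offs:
alphabeta([[3, 5], [7, 0], [2, 8]])   # -> 3, pruning the 8 under [2, 8]
```

Note that the pruned result always equals the plain minimax value; only the amount of work differs.
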

[Figure : a 2-ply tree; leaf values include − 4, 3, 5, 7, 0, 4, 3, 2.]

Fig. 3.27.1 : Minmax strategy

•  Minimax strategy : Minimax strategy is a simple look-ahead strategy for two-person game-playing.

•  Here one player is called a maximizer and the other is called a minimizer. Both the adversaries, maximizer and minimizer, fight it out to see that the opponent gets the minimum benefit while they get the maximum benefit. The plausible move generator generates the necessary states for further evaluation and the static evaluation function "ranks" each of the positions.

Minmax Strategy Algorithm

1.  The working of the algorithm is described with the aid of Fig. 3.27.2. Let A be the initial state of the game. The plausible move generator generates three children for that move and the static evaluation function generator assigns the values given along with each of the states.

2.  It is assumed that the static evaluation function generator returns a value from − 20 to + 20, wherein a value of + 20 indicates a win for the maximizer and a value of − 20 a win for the minimizer. A value of 0 indicates a tie or draw.

3.  It is also assumed that the maximizer makes the first move. (It is not essential so; even a minimizer can make the first move.) The maximizer always tries to go to a position where the static evaluation function value is the maximum positive value.

[Figure : root A with children B, C, D and their static evaluation values.]

Fig. 3.27.2 : Initial state of the game

4.  The maximizer, being the player to make the first move, will move to node D because the static evaluation function value for that node is maximum (figure). The same figure shows that if the minimizer has to make the first move, he will go to node B because the static evaluation function value at that node is advantageous to him.

5.  But a game-playing strategy never stops with one level; it looks ahead, i.e., moves a couple of levels downwards to choose the optimal path. Sometimes, by expanding these nodes and scanning them, one might be forced to retract earlier decisions. Let's examine this with the help of Fig. 3.27.2. Let's assume that it is the maximizer who will have to play first, followed by the minimizer. The search strategy here tries for only two moves, the root being A and the leaf nodes being E, F, G, H, I, J, K, L, M and N.

6.  Before the maximizer moves to B, C or D it will have to think which move would be highly beneficial to him. In order to evaluate, the children of the intermediate nodes B, C and D are generated and the static evaluation function generator has assigned values for all the leaf nodes.

7.  If A moves to B it is the minimizer who will have to play next. The minimizer always tries to give the minimum benefit to the other and hence he will move to G (static evaluation function value = − 6). This value is backed up at B.

3.27.1  Properties of Min Max Algorithm

1.  Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory.

2.  It provides an optimal move for the player, assuming that the opponent is also playing optimally.

3.  Mini-max algorithm uses recursion to search through the game-tree.

4.  Mini-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go and various two-player games.

5.  This algorithm computes the mini-max decision for the current state.
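The "forced to retract" point above — a move that looks best at one ply may no longer be best after a deeper look-ahead — can be illustrated numerically. The values below are hypothetical, chosen only to show the reversal (they are not the values of Fig. 3.27.2):

```python
# One-ply view: static evaluation values of the root's children B, C, D.
static = [2, 7, 9]
one_ply_choice = max(range(3), key=lambda i: static[i])      # index 2 (D)

# Two-ply view: each child is now a MIN node over its own leaf values,
# so the minimizer's reply is backed up before the maximizer chooses.
children_leaves = [[-6, 2], [0, 7], [-9, 9]]
backed_up = [min(leaves) for leaves in children_leaves]      # [-6, 0, -9]
two_ply_choice = max(range(3), key=lambda i: backed_up[i])   # index 1 (C)

# The deeper search retracts the one-ply preference for D in favour of C.
```
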
6.  In this algorithm two players play the game; one is called MAX and the other is called MIN. Both the players fight it out so that the opponent gets the minimum benefit while they get the maximum benefit.

7.  The mini-max algorithm performs a depth-first search for the exploration of the complete game tree.

8.  The mini-max algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.

UEx. 3.27.1 : Perform min-max search on the game tree as shown in Fig. Ex. 3.27.1.

[Figure : a 3-layer tree, MIN at the root, MAX below, leaves 9, − 7, 11, 4, − 8, − 9, − 2, 12.]

Fig. Ex. 3.27.1

Soln. :

►  Step I : Since the given move of the player is max, we first calculate the maximum of all nodes. We begin with the left node of the layer above the terminal, to calculate the utility of the left node.

Now, max {9, − 7} = 9

∴ Utility of the left-most node = 9                …(i)

The utility of the next node of the same layer is max {11, 4} = 11                …(ii)

Again, the utility of the next node (beginning from left) = max {− 8, − 9} = − 8

and the utility of the last node of the same layer is max {− 2, 12} = 12

►  Step II :

[Figure : the tree with backed-up MAX values 9, 11, − 8, 12.]

Fig. Ex. 3.27.1(a)

Now, we calculate the utility values, considering one layer at a time, till we reach the root of the tree, i.e. the top-most point.

Here we have 3 layers, so we can directly evaluate min {9, 11, − 8, 12} = − 8

►  Step III : Thus, the best opening move for min is the third node.

This move is called the min max decision. It maximises the utility with the knowledge that the opponent is also playing optimally to minimise it.

►  Step IV : Thus,

Min-max decision = min {max {9, − 7}, max {11, 4}, max {− 8, − 9}, max {− 2, 12}}
                 = min {9, 11, − 8, 12} = − 8

UEx. 3.27.2 (MU - May 18, 10 Marks) : Apply min-max search on the game tree given in Fig. Ex. 3.27.2.

[Figure : a 3-layer tree, MIN at the root, MAX below, leaf groups {4, 3, 1}, {5, 2}, {8, 2}.]

Fig. Ex. 3.27.2

Soln. :

►  Step I : Here the given move of the player is max; we first calculate the maximum of all nodes in the last layer, to determine the utilities of the terminal nodes.

We begin with the left node of the layer. Since the move of the layer is maximum, we choose the maximum of all the utilities.

(i)  max {4, 3, 1} = 4

∴ Utility of the left-most node is 4

►  Step II : Again we evaluate the maximum of the middle node in the same layer, i.e. max {5, 2} = 5

_I-
e No.
b Searchin ...Pa
Solvin prol)lems
layers, so 'irit
H er e th ere are only 3
S/ Electronics ► Step IV :
th e root, i.e. lbc
Artlflaal lntelll
nee MU-Al & O
m edia tely re ac h to the root. A t
im e minirnl!Qi
os t po int, m in ha s to ch oo se th
topm
value. 4
w e ev al ua te m in {4, 5, 8} =
is th e left 'no.
So,
T oe be st op en ing move for min
2
:.
2 8
5
3 node max decisiOQ
m ov e is called as mio
Fig. Ex. 3. 27 .l( a) Not e th at this
th e assumptiOQ
-m os t of the the utility under
right as it maximises imise,
► St ep m : Similarly, tefomr axthe{8, 2} = 8 po ne nt is pl ay in g optimally to min
alua that the op
sa m e layer, w e ev
it. , 3, 1),
on = min {max (4
:. Minmruc de ci si
{8, 2}}
II
m u . {5, 2}, m u .
4
= min {4, 5, 8} =

Fig. Ex. 3.27.l(b)


rr
-
• ~,-
,
: 1,I z L1
-! . ' CJ· (j ~
1/ !I .,, • 'l .,
I , ': I I ,. I
J, f ..
'.11 ·' J ~
,1 :,/ J
' I I I
!I I •I• I

I • I,

• .,rf• fI
- I .> : / '
Jl
,fl I)
1
11 1, J
J•

• r I ," •"'• •

'•li- '4
• JI l1 .1 '· " :
I. I ' lU l
J
; ; • •• , If, '• '

• IL I r

'
ll n 1 K
iH

I
'.
Module 4

CHAPTER 4

Knowledge and Reasoning

Syllabus :

Definition and importance of Knowledge, Issues in Knowledge Representation, Knowledge Representation Systems, Properties of Knowledge Representation Systems. Propositional Logic (PL) : Syntax, Semantics, Formal logic-connectives, truth tables, tautology, validity, well-formed-formula, Introduction to logic programming (PROLOG). Predicate Logic : FOPL, Syntax, Semantics, Quantification, Inference rules in FOPL, Forward Chaining, Backward Chaining and Resolution in FOPL.

4.1     Knowledge and Reasoning ............................................................................ 4-4
        4.1.1    Knowledge Progression ..................................................................... 4-4
        4.1.2    Types of Knowledge .......................................................................... 4-4
        4.1.3    Knowledge Agent .............................................................................. 4-5
        4.1.4    Levels of Knowledge Representation ................................................ 4-5
        4.1.5    Various Levels of Knowledge-based Agent ....................................... 4-5
        4.1.6    Knowledge Level ............................................................................... 4-6
        4.1.7    Logical Level ..................................................................................... 4-6
        4.1.8    Implementation Level ........................................................................ 4-6
        4.1.9    Representing Knowledge Methods .................................................... 4-7
        4.1.10   Acquisition of Knowledge ................................................................. 4-7
        4.1.11   Significance of Knowledge Representation ...................................... 4-8
        4.1.12   Characteristics of Knowledge Representation .................................. 4-8
                 4.1.12(A)  Knowledge Representation Schemes ............................... 4-8
        4.1.13   Properties of Knowledge Representation System ............................ 4-9
        4.1.14   Procedural and Declarative Knowledge ........................................... 4-9
        4.1.15   Difference between Procedural and Declarative Knowledge ........... 4-9
4.2     Propositional Logic ....................................................................................... 4-10
        4.2.1    Introduction to Logic ........................................................................ 4-10
        4.2.2    Logic Language ............................................................................... 4-11
        4.2.3    Syntax ............................................................................................. 4-11
        4.2.4    Semantics ....................................................................................... 4-11
        4.2.5    Propositions and Logical Operations ............................................... 4-11
        4.2.6    Compound Propositions .................................................................. 4-11
4.3     Basic Operations .......................................................................................... 4-12
4.4     Propositions and Truth-Tables ..................................................................... 4-13
Artificial Intelligence (MU-Sem.5-AI & DS)        Knowledge and Reasoning

        4.4.1    Method of Constructing Truth-Table of the Proposition .................. 4-13
        4.4.2    Examples Based on the Proposition ............................................... 4-13
        UEx. 4.4.3 (MU - Q. 5(b), May 11, 10 Marks) ............................................. 4-13
4.5     Tautologies and Contradictions ..........................................................................
4.6     Conditional Connectives or Implication ..............................................................
        4.6.1    Examples .............................................................................................
        4.6.2    Conditional Statements and Variations ...............................................
        4.6.3    Advantages and Disadvantages of Propositional Logic ......................
        4.6.4    Theorem of Contra-Positive of the Statements ...................................
        4.6.5    Biconditional : p ↔ q ...........................................................................
4.7     Arguments .........................................................................................................
        4.7.1    Theorem on Tautology ........................................................................
        4.7.2    Fundamental Principle of Logical Reasoning ......................................
        4.7.3    Verification of Law of Syllogism ..........................................................
        4.7.4    Logic Programming (PROLOG) ..........................................................
4.8     Precedence Rule ...............................................................................................
4.9     Duality ................................................................................................................
        4.9.1    Illustrative Example based on Duals ...................................................
        4.9.2    Logical Identities .................................................................................
        4.9.3    Examples on Logical Equivalency ......................................................
4.10    Normal Forms ...................................................................................................
        4.10.1   Disjunctive Normal Form ...................................................................
        4.10.2   Examples of DNF ...............................................................................
        4.10.3   Conjunctive Normal Form (CNF) .......................................................
                 4.10.3(A)  Conversion from PL to CNF ..............................................
        UQ.  Explain the steps involved in converting the propositional logic
             statement into CNF with a suitable example. .......................................
        UQ.  Convert the following propositional logic statement into CNF :
             A → (I ~ C) ...........................................................................................
        4.10.4   Examples on Conjunctive Normal Form ............................................
4.11    Truth Table Method to Find DNF ......................................................................
        4.11.1   Examples on DNF ..............................................................................
4.12    Predicate Logic (First Order Logic) ...................................................................
        4.12.1   Predicates .........................................................................................
4.13    The Universal Quantifier ...................................................................................
        4.13.1   Existential Quantifiers ........................................................................
4.14    Examples of Quantification ...............................................................................
        4.14.1   Free and Bound Variables .................................................................
        4.14.2   Logical Equivalences Involving Quantifiers .......................................
        4.14.3   Negating Quantified Expressions ......................................................
        4.14.4   Negating an Existential Qualification .................................................
        4.14.5   De Morgan's Laws for Negations for Quantifiers ...............................
        4.14.6   Different Inference Rules for FOPL ...................................................
        UQ.  Explain different Inference Rules for FOPL.
             (MU - Q. 4(b), May 18, 10 Marks) .................................................. 4-28
        4.14.8   FOPL ......................................................................................... 4-29
        UEx. 4.14.9 (MU - Q. 2(b), 2016) .................................................................
        UEx. 4.14.10 (MU - Q. 1(c), May 2018, 6 Marks) .................................. 4-30
        4.14.9   Comparison between Propositional Logic and First Order
                 Logic (Predicate Logic) ................................................................ 4-30
        UQ.  Distinguish between Propositional Logic (PL) and first order
             predicate logic (FOPL) knowledge representation mechanisms.
             Take suitable example for each point of differentiation.
             (MU - Q. 2(b), May 19, 10 Marks) .................................................. 4-30
4.15    Forward Chaining and Backward Chaining ............................................... 4-31
        UQ.  Explain Forward-chaining and Backward-chaining algorithm
             with the help of example. ............................................................... 4-31
        4.15.1   Forward Chaining ..................................................................... 4-31
        UQ.  Illustrate Forward chaining in propositional logic with example. ... 4-31
        4.15.2   Backward Chaining ................................................................... 4-32
        UQ.  Illustrate Forward chaining and backward chaining in
             propositional logic with example.
             (MU - Q. 3(b), May 17, 10 Marks) .................................................. 4-32
        4.15.3   Forward Reasoning ................................................................... 4-32
        4.15.4   Modus Ponens .......................................................................... 4-33
        UQ.  Explain modus ponen with suitable example. ............................... 4-33
        4.15.5   Forward Chaining Proof ............................................................ 4-33
        4.15.6   Backward Chaining ................................................................... 4-34
        4.15.7   Properties of Backward Reasoning (Chaining) ......................... 4-34
        4.15.8   Backward Chaining Proof ......................................................... 4-34
        4.15.9   Comparison between Forward and Backward Reasoning ........ 4-35
4.16    Example to Compare Forward and Backward Chaining ........................... 4-37
4.17    Difference between Forward and Backward Chaining .............................. 4-38
!...\ .. '......~ ......: ..........:: ................................................ 4-40
4.18' Semantic Networks ......................................................................
................................................................................... 4-40
4.18.1 Advantages. Disadvantages of Semantic Nets ...........
.................................................!.....................~........... 4-41
4.18.2 Examples for Semantic Representation ......................
........................................................:!.............. :.:.. 4-42
4.19 Resolution ........................................................................................
..................................................................................4-42
4.19.1 Resolution and Unlflcation ............................................
...............................................................................4-42
4.19.2 Resolution Algorithm .......................................................
.....................................................................:..'..'..~ .. 4-43
4.19.3 University Solved Examples ............................................
.........................~....: .... :.......~!........'. ...:~.~........ 4-43
UEx. 4.19.2 (MU• a. 1(c), Dec. 19, 5 Marka) .................................
......................................................................... 4-44
UEx. 4.19.3 (MU· Q. 3(b), Dec. 15, 10 Marka) .................................
10 Marka) ............................................................ 4-44
UEx. 4.19.4 (MU. a. 4(a), Dec. 18, 10 Marka, a. 4(A), Dec. 17,
10 Marks) .............................................................. 4-45
UEx. 4.19.5 (MU. Q. 4(a), May 16, 10 Marka, a. 4(a), May 19,
............................................................................. 4-46
UEx. 4.19.6 (MU: Dec. 15, 12 Marka) ............................................
.................................................................:........ 4-46
I,• 4.19.4 Unification.............................................................................
.................................................................................. 4-47
,, 4.19.5 Conflict Resolution .......................................................
........................ 4-47
ua. Explain Resolution by Refutation with suitable example. ..................4-48
...................... ........... ...........
• Chapter Enda ........................................................................................
I

(MS-1Q6) "'H2 .1.


Iii Tech-Neo Publications_.A SACHIN SHAH Venture
Syllabus topic : Definition and Importance of Knowledge

4.1  KNOWLEDGE AND REASONING

•  Search-based problem-solving programs require some knowledge to be implemented. Knowledge can be a particular state or a path toward the solution, rules, etc.
•  In order to use this knowledge, it must be represented in a particular way, with a certain format. Knowledge Representation (KR) is an important issue in computer science in general, and in AI in particular. KR has been the predominant paradigm for building intelligent systems since the early 1970s, based on the view that "intelligence presupposes knowledge".
•  Generally, knowledge is represented in the system's knowledge base, which consists of data structures and programs.

Syllabus topic : Issues in Knowledge Representation

4.1.1  Knowledge Progression

GQ.  What is knowledge ?

•  Definition : Knowledge is a progression that starts with data, which is of limited utility. By organizing and analyzing the data, we understand what the data means, and this becomes information.
•  The interpretation or evaluation of information yields knowledge. An understanding of the principles embodied within the knowledge is wisdom.

Data --(organizing, analyzing)--> Information --(interpretation, evaluation)--> Knowledge --(understanding)--> Principles

Fig. 4.1.1 : Knowledge Progression

1.  Data : is viewed as a collection of disconnected facts.
    Example : It is raining.
2.  Information : emerges when relationships among facts are established and understood; provides answers to "who", "what", "where", and "when".
    Example : The temperature dropped 15 degrees and then it started raining.
3.  Knowledge : emerges when relationships among patterns are identified and understood; provides answers to "how".
    Example : If the humidity is very high and the temperature drops substantially, then the atmosphere is unlikely to hold the moisture, so it rains.
4.  Wisdom : is the pinnacle of understanding; it uncovers the principles of the relationships that describe patterns, and provides answers to "why".
    Example : Encompasses an understanding of all the interactions that happen between raining, evaporation, air currents, temperature gradients and changes.

4.1.2  Types of Knowledge

1.  Procedural knowledge
2.  Declarative knowledge

►  1.  Procedural knowledge : is compiled knowledge related to the performance of some task. For example, the steps used to solve an algebraic equation are expressed as procedural knowledge.


►  2.  Declarative knowledge : on the other hand, is passive knowledge expressed as statements of facts about the world. Personnel data in a database is typical of declarative knowledge.

Heuristic knowledge

There is one more special type of knowledge frequently used by humans to solve complex problems : heuristic knowledge. Heuristics are the knowledge used to make good judgments, or the strategies, tricks or rules of thumb used to simplify the solution of problems.

For example : In locating a fault in a TV set, an experienced technician will not start by making numerous measurements, but will immediately reason that the high-voltage flyback transformer or a related component is the culprit, and this leads to a quick solution.

4.1.3  Knowledge Agent

GQ.  What is a knowledge agent ?

In artificial intelligence, a knowledge agent is an autonomous entity which observes through sensors and acts upon an environment using actuators (i.e. it is an agent), and directs its activities towards achieving its goals. It may also learn or use knowledge to achieve those goals.

Syllabus topic : Knowledge Representation Systems

4.1.4  Levels of Knowledge Representation

GQ.  Write a note on "knowledge representation". OR What are the different levels of knowledge representation ? OR What are the methods of knowledge representation ? Name them.

•  Knowledge consists of facts, concepts, rules, and so on. It can be represented in different forms : as mental images in one's thoughts, as spoken or written words in some language, as graphical or other pictures, and as character strings or collections of magnetic spots stored in a computer.

Fig. 4.1.2 : Different levels of knowledge representation (written text, character string, binary number, magnetic spots)

(1) Any choice of representation will depend on the type of problem to be solved and the inference methods available. For example, suppose we wish to write a program to play a simple card game using the standard deck of 52 playing cards.
(2) We will need some way to represent the cards dealt to each player and a way to express the rules. We can represent cards in different ways.
(3) The most straightforward way is to record the suit (clubs, diamonds, hearts, spades) and face value (ace, 2, 3, ........ 10, jack, queen, king) as a symbolic pair. So the queen of hearts might be represented as <queen, hearts>. Alternatively, we could assign abbreviated codes (c6 for the 6 of clubs), numeric values which ignore suit (1, 2, ......, 13), or some other scheme.
(4) Consider the problem of discovering a pattern in the sequence of numbers 1 1 2 3 4 7. A change of base in the number from 10 to 2 transforms the number to 01 1011011011011011.

4.1.5  Various Levels of Knowledge-based Agent

A knowledge-based agent can be described at three different levels. These are :
1.  Knowledge level;
2.  Logical level;
3.  Implementation level.
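The card representations sketched in point (3) above can be tried out directly in code. The following Python snippet is an illustrative sketch; the names SUITS, FACES and to_code are ours, not from the text:

```python
# Two of the card representations discussed above (illustrative sketch).
SUITS = ("clubs", "diamonds", "hearts", "spades")
FACES = ("ace", "2", "3", "4", "5", "6", "7", "8", "9", "10",
         "jack", "queen", "king")

# 1. Symbolic pair: the queen of hearts as <queen, hearts>
queen_of_hearts = ("queen", "hearts")

# 2. Abbreviated code: "c6" for the 6 of clubs
def to_code(face, suit):
    abbrev = {"ace": "a", "jack": "j", "queen": "q", "king": "k"}
    return suit[0] + abbrev.get(face, face)

print(to_code("6", "clubs"))        # c6
print(to_code("queen", "hearts"))   # hq
```

Either scheme supports the same game rules; the choice only affects how easily the rules can be stated and matched.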

4.1.6  Knowledge Level

(a) Knowledge-based agents are those agents which have the capability of maintaining an internal state of knowledge, reasoning over that knowledge, updating their knowledge after observations, and taking actions. These agents can represent the world with some formal representation and act intelligently.
(b) Knowledge-based agents are composed of two main parts :
    (I)  Knowledge-base and
    (II) Inference system.

A knowledge-based agent must be able to do the following :
(i)   An agent should be able to represent states, actions, etc.
(ii)  An agent should be able to incorporate new percepts.
(iii) An agent can update the internal representation of the world.
(iv)  An agent can deduce the internal representation of the world.
(v)   An agent can deduce appropriate actions.

The architecture of a knowledge-based agent is shown in Fig. 4.1.3.

[Fig. 4.1.3 : Architecture of a knowledge-based agent : input from the environment → inference engine → knowledge base, with a learning element updating the KB, and output (actions) back to the environment.]

Fig. 4.1.3 represents a generalised architecture for a knowledge-based agent (KBA). A KBA takes input from the environment by perceiving the environment. The input is taken by the inference engine of the agent, which communicates with the KB. The learning element of the KBA regularly updates the KB by learning new knowledge.

(I)  Knowledge-base

A knowledge-base is required for updating knowledge, so that an agent can learn with experience and take action as per the knowledge.

(II) Inference system

•  Inference means deriving new sentences from old.
•  The inference system allows us to add a new sentence to the knowledge base.
•  The inference system applies logical rules to the KB to deduce new information.
•  The inference system generates new facts so that the agent can update the KB.
•  An inference system works mainly in two ways, which are :
   (i) Forward chaining   (ii) Backward chaining

(III) Operations performed by KBA

Following are the three operations performed by a KBA in order to show intelligent behaviour :
1.  TELL : This operation tells the knowledge base what it perceives from the environment.
2.  ASK : This operation asks the knowledge base what action it should perform.
3.  PERFORM : It performs the selected action.

4.1.7  Logical Level

•  At this level, we observe how the representation of knowledge is stored.
•  At this level, sentences are encoded into different logics. At the logical level, an encoding of knowledge into logical sentences occurs.

4.1.8  Implementation Level

(1) This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions as per the logical and knowledge levels.
(2) At this level, an automated taxi agent actually implements its knowledge and logic so that it can reach the destination.
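The TELL / ASK / PERFORM cycle described above can be sketched as a minimal Python class. This is only an illustrative skeleton; the obstacle rule and all names are invented for the sketch, not taken from the text:

```python
class KnowledgeBasedAgent:
    """Minimal sketch of a knowledge-based agent's TELL/ASK/PERFORM cycle."""

    def __init__(self):
        self.kb = set()  # knowledge base: a set of sentences (facts)

    def tell(self, percept):
        # TELL: store what the agent perceives from the environment
        self.kb.add(percept)

    def ask(self):
        # ASK: query the KB for the action to perform (one toy rule here)
        return "stop" if "obstacle_ahead" in self.kb else "move"

    def perform(self, action):
        # PERFORM: execute the selected action
        return f"performing {action}"

agent = KnowledgeBasedAgent()
print(agent.perform(agent.ask()))   # performing move
agent.tell("obstacle_ahead")
print(agent.perform(agent.ask()))   # performing stop
```

A real KBA would replace the one-line ask() rule with an inference procedure (forward or backward chaining) over the KB.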


Approaches to designing a knowledge-based agent are as follows :

(I)  Declarative approach

(1) We can create a knowledge-based agent by initializing it with an empty knowledge base and telling the agent all the sentences with which we want to start.
(2) This approach is called the declarative approach.

(II) Procedural approach

(1) In the procedural approach, we directly encode desired behaviour as program code, which means we just need to write a program that already encodes the desired behaviour of the agent.
(2) But in the real world, a successful agent can be built by combining both declarative and procedural approaches, and declarative knowledge can often be compiled into more efficient procedural code.

4.1.9  Representing Knowledge Methods

1.  Frames and associative networks (also called semantic and conceptual networks),
2.  Fuzzy logic,
3.  Modal logics, and
4.  Object-oriented methods.

Clearly, a representation in the proper base greatly simplifies finding the pattern solution. A typical statement in predicate logic might express the family relationship of fatherhood as FATHER(John, Jim), where the predicate FATHER is used to express the fact that John is the father of Jim.

Other representation schemes include frames and associative networks (also called semantic and conceptual networks), fuzzy logic, modal logics and object-oriented methods.

(1) Frames are flexible structures that permit the grouping of closely related knowledge. For example, an object such as a ball and its properties (size, color, function) and its relationships to other objects (to the left of, on top of, and so on) are grouped together into a single structure for easy access.
(2) Networks also permit easy access to groups of related items. They associate objects with linkages to show their relationships to other objects.
(3) Fuzzy logic is a generalization of predicate logic, developed to permit varying degrees of some property such as "tall". In classical two-valued logic, TALL(John) is either true or false, but in fuzzy logic this statement may be partially true.
(4) Modal logic is an extension of classical logic. It was also developed to better represent commonsense reasoning, by permitting conditions such as "likely" or "possible".
(5) Object-oriented representations package an object together with its attributes and functions, thereby hiding these facts. Operations are performed by sending messages between the objects.

4.1.10  Acquisition of Knowledge

GQ.  Write a short note on acquisition of knowledge.

(1) Decisions and actions in knowledge-based systems come from acquisition of the knowledge in specified ways. Some form of input will initiate a search for a goal or decision.
(2) For this, known facts in the knowledge base must be located, compared, and if necessary altered in some way. This process may set up other subgoals and require further inputs, till a final solution is found. The acquisitions are the computational equivalent of reasoning. This requires a form of inference or deduction, using the knowledge and inference rules.
(3) All forms of reasoning require a certain amount of searching and matching. These two operations require a lot of computation time in AI systems. In a way, this is a set-back to the acquisition of knowledge.
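Point (3) above, TALL(John) being partially true, can be illustrated with a graded membership function that returns a degree of truth in [0, 1] instead of a strict true/false. The 150 cm and 190 cm thresholds below are arbitrary values chosen only for this sketch:

```python
# Fuzzy (graded) version of the predicate TALL: returns a degree in [0, 1].
def tall(height_cm):
    """0.0 below 150 cm, 1.0 above 190 cm, linear in between."""
    return min(1.0, max(0.0, (height_cm - 150) / 40))

print(tall(140))   # 0.0  -> definitely not tall
print(tall(170))   # 0.5  -> partially true
print(tall(195))   # 1.0  -> definitely tall
```

In classical two-valued logic only the first and last answers would be expressible; the middle case is exactly what fuzzy logic adds.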


(4)  One of the greatest bottlenecks in building knowledge-rich systems is the acquisition and validation of the knowledge. Knowledge can come from various sources, such as experts, textbooks, reports, technical articles, and the like. Reading research articles, taking college courses, consulting with expert colleagues, and using clinical databases are examples of knowledge acquisition.
(5)  Each of these activities provides a way to acquire new knowledge through reading, observation and engaging in life-long learning activities.
(6)  To be useful, the knowledge must be accurate, presented at the right level for encoding, complete in the sense that all essential facts and rules are included, free of inconsistencies, and so on.
(7)  Eliciting facts, heuristics, procedures, and rules from an expert is a slow and tedious process. Experience in building dozens of expert systems and other knowledge-based systems over the past fifteen years has shown this to be the single most time-consuming and costly part of the building process.
(8)  This has led to the development of some sophisticated acquisition tools, including a variety of intelligent editors : editors which provide much assistance to the knowledge engineers and system users.
(9)  The acquisition problem has also stimulated much research in machine learning systems, that is, systems which can learn new knowledge autonomously without the aid of humans.
(10) Since knowledge-based systems depend on large quantities of high-quality knowledge for their success, it is essential that better methods of acquisition, refinement, and validation be developed.
(11) The ultimate goal is to develop techniques that permit systems to learn new knowledge autonomously and continually improve the quality of the knowledge they possess.

4.1.11  Significance of Knowledge Representation

GQ.  What is the significance of knowledge representation ?

(1) The Oxford English Dictionary defines knowledge as intellectual acquaintance with, or perception of, fact or truth. A representation is a way of describing certain fragments of information so that any reasoning system can easily adopt it for inference purposes.
(2) Knowledge representation is a study of the ways in which knowledge is actually picturized, and of how effectively it resembles the representation of knowledge in the human brain.

4.1.12  Characteristics of Knowledge Representation

GQ.  What are the characteristics of knowledge representation ?

A knowledge representation system should provide ways of representing complex knowledge, and should possess the following characteristics :
1.  The representation scheme should have a well-defined syntax and semantics. This helps in representing various kinds of knowledge.
2.  The knowledge representation scheme should have a good expressive capacity. A good expressive capability will catalyze the inference mechanism in its reasoning process.
3.  From the computer-system point of view, the representation must be efficient. By this we mean that it should use only limited resources, without compromising on the expressive power.

4.1.12 (A)  Knowledge Representation Schemes

Various knowledge representation schemes are as follows :
(i)   Semantic networks    (ii) Frames
(iii) Conceptual dependency (iv) Scripts

Syllabus topic : Properties of Knowledge Representation Systems

4.1.13  Properties of Knowledge Representation Systems

The following properties should be possessed by a knowledge representation system :

(1) Representational Adequacy : The ability to represent the required knowledge.
(2) Inferential Adequacy : The ability to manipulate the knowledge represented, to produce new knowledge corresponding to that inferred from the original.
(3) Inferential Efficiency : The ability to direct the inferential mechanisms into the most productive directions by storing appropriate guides.
(4) Acquisitional Efficiency : The ability to acquire new knowledge using automatic methods wherever possible, rather than reliance on human intervention.

4.1.14  Procedural and Declarative Knowledge

(I)  Declarative

A declarative representation is one in which knowledge is specified, but the use of that knowledge is not given. To use a declarative representation, we must augment it with a program that specifies what is to be done to the knowledge, and how.

For example : a set of logical assertions can be combined with a resolution theorem prover to give a complete program for solving problems. There is a different way, though, in which logical assertions can be viewed, namely as a program, rather than data to a program. In this view, the implication statements define the legitimate reasoning paths, and the atomic assertions provide the starting points (or, if we reason backward, the ending points) of those paths.

(II) Procedural

•  A procedural representation is one in which the control information that is necessary to use the knowledge is considered to be embedded in the knowledge itself. To use a procedural representation, we need to augment it with an interpreter that follows the instructions given in the knowledge.
•  Viewing logical assertions as code is not a very radical idea, given that all programs are really data to other programs that interpret (or compile) and execute them. The real difference between the declarative and the procedural views of knowledge lies in where the control information resides. For example, consider the knowledge base :
     man(Marcus)
     man(Caesar)
     person(Cleopatra)
     ∀x : man(x) → person(x)
•  Now consider trying to extract from this knowledge base the answer to the question
     ∃z : person(z)
•  We want to bind z to a particular value for which person is true. Our knowledge base justifies any of the following answers :
     z = Marcus, z = Caesar, z = Cleopatra
•  Because there is more than one value that satisfies the predicate, but only one value is needed, the answer to the question will depend on the order in which the assertions are examined during the search for a response.
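A toy version of this search can show how the binding found for person(z) depends on the order of the assertions. The representation below (tuples for facts, a dict for the single implication) and the function name ask are our own sketch, not the book's code:

```python
# One rule: man(x) -> person(x), stored as body-predicate -> head-predicate.
rules = {"man": "person"}

def ask(facts, query_pred):
    """Return the first binding z for which query_pred(z) can be shown."""
    for pred, arg in facts:
        if pred == query_pred:
            return arg                 # directly asserted, e.g. person(Cleopatra)
        if rules.get(pred) == query_pred:
            return arg                 # derived via the implication man(x) -> person(x)
    return None

facts = [("man", "Marcus"), ("man", "Caesar"), ("person", "Cleopatra")]
print(ask(facts, "person"))                   # Marcus
print(ask(list(reversed(facts)), "person"))   # Cleopatra
```

Reversing the fact list changes the answer, which is exactly the order-dependence the text describes: the declarative content is identical, but the procedural reading (scan order) picks the binding.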
4.1.15  Difference between Procedural and Declarative Knowledge

Table 4.1.1

 Sr. | Procedural Knowledge                                   | Declarative Knowledge
-----|--------------------------------------------------------|-----------------------------------------------------------
 1.  | Follows a black-box view.                              | Follows a white-box view.
 2.  | Based on process orientation.                          | Based on data orientation.
 3.  | Possible to use it faster.                             | Based on knowledge used in the process of system design.
 4.  | Used when we have to achieve a particular result.      | Knowledge is in a format that may be manipulated and analyzed.
 5.  | Knowing how to do something.                           | It is knowledge about something.
 6.  | Simple data types can be used.                         | Large data types can be used.
 7.  | Followed in C++ and COBOL.                             | Followed by SQL.
 8.  | In procedural programming, even a simple task needs    | In SQL, a simple task may take one line of code, i.e. one
     | a program.                                             | SQL statement.
 9.  | The programmer should understand the execution.        | The programmer need not deal with the execution.
 10. | Initially faster, but later it can be slow.            | Initially slower, but possibly faster later.
 11. | Works on the interpreter of the language.              | Works on the data engine of the DBMS.
 12. | Example : "If Manmohan or Monu is older."              | Example : "Manmohan is older than Monu."
Syllabus topic : Propositional Logic (PL)

4.2  PROPOSITIONAL LOGIC (FIRST ORDER PREDICATE LOGIC)

4.2.1  Introduction to Logic

Logic is the discipline that deals with the methods of reasoning. On an elementary level, logic provides rules and techniques for determining whether a given argument is valid.

Commonsense logic

It is deriving conclusions from personal experience or knowledge. A conclusion makes sense, so it is right; or something doesn't make sense, so it is wrong.

Let us consider one classic example : A bear walked one km due South, then turned to the left and walked one km due East. Then it turned to the left again, walked one km due North, and arrived back at its starting point.

What was the colour of the bear ? Now, the bear actually walked 3 sides of a square, as in Fig. 4.2.1. But since it ended up where it started from, its actual path must have been a triangle, as in Fig. 4.2.2.

Fig. 4.2.1
Fig. 4.2.2

The only two places in the world where this can happen are the North Pole and the South Pole. The South Pole is not possible, since it is impossible to travel South from the South Pole. So the bear was at the North Pole, i.e. it was a polar bear. So the bear was white.

This problem is solved by knowledge; i.e., it requires a logical type of mind to apply that knowledge to a particular problem. It can also be called inductive logic, which does not permit conclusions outside the facts available.
Logical reasoning

It is used in mathematics to prove theorems, in computer science to verify the correctness of programs and to prove theorems, and in our everyday lives to solve a multitude of problems.

4.2.2  Logic Language

•  One of the basic difficulties in developing an approach to logic is the limitation of ordinary language when it comes to presenting statements and conclusions. [Exactly the same problem arises with computers. You cannot instruct computers in ordinary language; the input has to be consistent with the input/output capabilities of the computer.]
•  Our aim is now very simple : to give each statement an exact meaning, and manipulate such statements in a logical manner, determined by rules and theorems. Here, we discuss a few of the basic ideas, i.e. rules and theorems.

4.2.3  Syntax

Syntax is the order or arrangement of words and phrases to form proper sentences. The most basic syntax follows a subject + verb + direct object formula.
For example : Ramesh hits the ball.

4.2.4  Semantics

'Semantics' means meanings in language. Semantic technology simulates how people understand language and process information. By approaching the automatic understanding of meanings, semantic technology overcomes the limits of other technologies.

Syllabus topic : Formal Logic - Connectives

4.2.5  Propositions and Logical Operations

Definition : A statement or proposition is a declarative sentence that is either true or false, but not both.

Illustrative Ex. 4.2.1 : Determine which of the following are statements :
(i)   The earth is round.
(ii)  3 + 4 = 7
(iii) 4 + x = 9
(iv)  Do you speak Gujarathi ?
(v)   Take two aspirins.
(vi)  The temperature on the surface of the planet Mars is 500°F.
(vii) The sun will come out tomorrow.

Soln. :
(i) and (ii) are statements which are true.
(iii) It is not a statement, since whether it is true or false depends on the value of x. But we can say that it is a declarative sentence. If we put x = 5, it becomes a true statement; if we take a value of x ≠ 5, it becomes false. Such statements are open statements. Thus, if a mathematical statement is neither true nor false, it is called an open statement.
(iv)  It is a question, not a statement.
(v)   It is a command, but not a statement.
(vi)  It is a statement, because in principle we can determine whether it is true or false.
(vii) It is a statement, since it is true or false but not both.

4.2.6  Compound Propositions

Propositions composed of sub-propositions are called compound propositions. A proposition is said to be primitive if it cannot be broken down into simpler propositions; that is, if it is not composite.
Compound propositions or statements are composed of various logical connectives.

Examples of compound propositions

(i)  'Roses are red and violets are blue' is a compound statement with sub-statements "Roses are red" and "Violets are blue".
(ii) 'John is intelligent and studies every night' is a compound statement with sub-propositions "John is intelligent" and "John studies every night".
4.3  BASIC OPERATIONS

(I)   Conjunction (p ∧ q)
(II)  Disjunction (p ∨ q)
(III) Negation (¬)

(I)  Conjunction : p ∧ q, read "p and q"

Two propositions p and q can be combined by the word 'and' to form a compound proposition, called the conjunction of the original propositions. Symbolically, p ∧ q is a compound proposition.

Definition : If p and q are both true, then p ∧ q is also true; otherwise p ∧ q is false.

We prepare the table for the truth-value of p ∧ q.
Note : T stands for true and F for false.

Table 4.3.1 : Truth table for conjunction

 p | q | p ∧ q
 T | T |   T
 T | F |   F
 F | T |   F
 F | F |   F

(i)  In the first row, if p is true and q is true, then p ∧ q is true.
(ii) In the second row, if p is true and q is false, then p ∧ q is false. And so on.
Remark : p ∧ q is true only when p and q are both true.

Examples based on conjunction

Form the conjunction of p and q for the following :
(i)   p : 3 < 5 and q : – 2 > – 4
      ∴ p ∧ q : 3 < 5 and – 2 > – 4
(ii)  p : it is hot, q : 2 < 4
      ∴ p ∧ q : it is hot and 2 < 4
(iii) p : it is snowing, q : I am cold
      ∴ p ∧ q : it is snowing and I am cold.

Remark on the above examples : Example (ii) shows that we may join two totally unrelated statements by the connective 'and' (∧).

(II) Disjunction

(i)  If p and q are statements, the disjunction of p and q is the compound statement "p or q", denoted by p ∨ q.
(ii) The connective 'or' is denoted by the symbol ∨. The compound statement p ∨ q is true if at least one of p or q is true; it is false when both p and q are false.
The truth table of p ∨ q :

Table 4.3.2 : Truth table of p ∨ q

 p | q | p ∨ q
 T | T |   T
 T | F |   T
 F | T |   T
 F | F |   F

(III) Negation (¬)

If p is any statement, then the negation of p is denoted by ¬p and read as 'not p'. If p is true, then ¬p is false. If p is false, then ¬p is true.

Table 4.3.3 : Truth table for negation

 p | ¬p
 T |  F
 F |  T

Example based on negation

If p : Gopal is good at sports,
then ¬p : Gopal is not good at sports.
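Tables 4.3.1 to 4.3.3 can be generated mechanically by enumerating all combinations of truth values. The helper below is a sketch of our own naming, using Python's built-in and / or / not as the connectives:

```python
from itertools import product

def truth_table(expr, n_vars):
    """Rows of (inputs..., value) for a Boolean function of n_vars variables."""
    return [vals + (expr(*vals),)
            for vals in product([True, False], repeat=n_vars)]

# Table 4.3.1 : conjunction p AND q
print(truth_table(lambda p, q: p and q, 2))
# Table 4.3.2 : disjunction p OR q
print(truth_table(lambda p, q: p or q, 2))
# Table 4.3.3 : negation NOT p
print(truth_table(lambda p: not p, 1))
```

The row order (T T, T F, F T, F F) matches the convention used in the tables above, and the same helper works for any compound proposition built from these connectives.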

'
ll ,J
.
Remark : Negation is also denoted by ∼. Thus if p is true, then ∼p is false.

Syllabus topic : truth tables, tautology, validity, well-formed formula

4.4 PROPOSITIONS AND TRUTH-TABLES

Proposition : A proposition is also called a well-formed formula of logical variables p, q, r, ... and logical connectives (∧, ∨, ¬). We denote such a proposition by P(p, q, r, ...).

Truth-table : The truth-value of a proposition depends upon the truth values of its variables. (Thus the truth value of a proposition is known once the truth values of its variables are known.) This relationship can be shown through a truth-table.

4.4.1 Method of constructing the Truth-table of a Proposition

► Step (i)   : The first columns of the table are for the variables p, q, ...
► Step (ii)  : Allow one row for each possible combination of T and F for these variables. (For 2 variables, 2² = 4 rows are necessary; for 3 variables, 2³ = 8 rows; and, in general, for n variables, 2ⁿ rows are required.)
► Step (iii) : There is a column for each "elementary" stage of the construction of the proposition.
► Step (iv)  : The truth value at each stage is determined from the previous stages by the definitions of the connectives ∧, ∨, ¬.
► Step (v)   : Finally, in the last column, we obtain the truth value of the proposition.

4.4.2 Examples Based on the Proposition

Ex. 4.4.1 : Find the truth-table of the proposition ¬(p ∧ ¬q).

Soln. :

p  q  ¬q  p ∧ ¬q  ¬(p ∧ ¬q)
T  T   F     F        T
T  F   T     T        F
F  T   F     F        T
F  F   T     F        T

Remark : The truth table of the proposition consists of the columns under the variables and the column under the proposition.

Ex. 4.4.2 : Find the truth-table of ¬p ∧ q.

Soln. : We construct the table :

p  q  ¬p  ¬p ∧ q
T  T   F     F
T  F   F     F
F  T   T     T
F  F   T     F

(i)   For the two variables p, q we choose the truth-values in the first two columns as shown.
(ii)  We find the truth value of ¬p using the negation.
(iii) We find the truth value of ¬p ∧ q using the conjunction ∧.

UEx. 4.4.3 (MU - Q. 5(b), May 19, 10 Marks)

1. John likes all kinds of food.
   ∀x : food(x) → likes(John, x)
2. Apples are food.
   food(apple)
3. Chicken is food.
   food(chicken)
4. Anything anyone eats and isn't killed by is food.
   ∀x : (∃y : eats(y, x) ∧ ¬killed_by(y, x)) → food(x)

Artificial Intelligence (MU-AI & DS / Electronics)    (Knowledge and Reasoning)

5. Bill eats peanuts and is still alive.
   eats(Bill, peanuts) ∧ alive(Bill)
6. Sue eats everything Bill eats.
   ∀x : eats(Bill, x) → eats(Sue, x)

Soln. :

1. John likes all kinds of food.
2. Apples and chicken are food.
3. Anything anyone eats and is not killed by is food.
4. Bill eats peanuts and is still alive.
5. Sue eats everything that Bill eats.

► Step I : Converting the given statements into FOPL :

1. ∀x : food(x) → likes(John, x)
2. food(apple) ∧ food(chicken)
3. ∀a, ∀b : eats(a, b) ∧ ¬killed(a) → food(b)
4. eats(Bill, peanuts) ∧ ¬killed(Bill)
5. ∀y : eats(Bill, y) → eats(Sue, y)

► Step II : Converting the FOPL statements into clausal form :

1. ¬food(x) ∨ likes(John, x)
2. food(apple)
3. food(chicken)
4. ¬eats(a, b) ∨ killed(a) ∨ food(b)
5. eats(Bill, peanuts)
6. ¬killed(Bill)
7. ¬eats(Bill, y) ∨ eats(Sue, y)

Conclusion : likes(John, Peanuts). To prove it by resolution, we add its negation ¬likes(John, Peanuts) to the clause set and derive a contradiction.

► Step III : Resolution :

¬likes(John, Peanuts) with ¬food(x) ∨ likes(John, x)    [x/Peanuts, unification]
    gives ¬food(Peanuts)
¬food(Peanuts) with ¬eats(a, b) ∨ killed(a) ∨ food(b)   [b/Peanuts]
    gives ¬eats(a, Peanuts) ∨ killed(a)
¬eats(a, Peanuts) ∨ killed(a) with eats(Bill, peanuts)  [a/Bill]
    gives killed(Bill)
killed(Bill) with ¬killed(Bill)
    gives the empty clause — a contradiction, so likes(John, Peanuts) holds.

Fig. Ex. 4.4.3

4.5 TAUTOLOGIES AND CONTRADICTIONS

Definition 1 : A proposition P(p, q, ...) is a tautology if it contains only T in the last column of its truth table, i.e. if P is true for any truth values of its variables.

Definition 2 : A proposition P(p, q, ...) is a contradiction if it contains only F in the last column of its truth table, i.e. if P is false for any truth values of its variables.

For example, (i) "p or not p", i.e. p ∨ ¬p, is a tautology :

Table 4.5.1 : Truth table of "p or not p is a tautology"

p  ¬p  p ∨ ¬p
T   F     T
F   T     T

(ii) Similarly, 'p ∧ ¬p' is a contradiction, as its truth table shows.
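The refutation in Ex. 4.4.3 can be mechanised. Below is a minimal sketch (our own, not the textbook's) of propositional resolution on ground instances of those clauses; literal encodings such as `food_peanuts` and the helper names are our own assumptions, with the first-order variables instantiated by hand (x = Peanuts, a = Bill, b = Peanuts):

```python
# Ground (variable-free) resolution refutation for the "John likes peanuts"
# example. Literals are strings; a leading "-" marks negation.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses (sets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refute(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True  # empty clause: contradiction found
                    new.add(frozenset(r))
        if new <= clauses:
            return False  # saturated without contradiction
        clauses |= new

kb = [
    {"-food_peanuts", "likes_john_peanuts"},                # clause 1, x=Peanuts
    {"-eats_bill_peanuts", "killed_bill", "food_peanuts"},  # clause 4
    {"eats_bill_peanuts"},                                  # clause 5
    {"-killed_bill"},                                       # clause 6
    {"-likes_john_peanuts"},                                # negated goal
]
print(refute(kb))  # True: empty clause derived, so likes(John, Peanuts) holds
```

Full first-order resolution also needs unification of variables; this sketch sidesteps that by working with the ground instances used in Step III.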

Table 4.5.2 : Truth table of "p and not p is a contradiction"

p  ¬p  p ∧ ¬p
T   F     F
F   T     F

4.5.1 Examples on Tautology

Ex. 4.5.1 : Show that p ∨ ¬(p ∧ q) is a tautology.

Soln. : We prepare the truth-table for p ∨ ¬(p ∧ q) :

p  q  p ∧ q  ¬(p ∧ q)  p ∨ ¬(p ∧ q)
T  T    T        F           T
T  F    F        T           T
F  T    F        T           T
F  F    F        T           T

(i)   We choose the truth-values for p, q in the first two columns.
(ii)  Then the truth value for p ∧ q (conjunction).
(iii) Negation of (p ∧ q).
(iv)  Truth value of p ∨ ¬(p ∧ q), which is a tautology.

4.5.2 Theorems

Theorem (1) : If P(p, q, ...) is a tautology, then ¬P(p, q, ...) is a contradiction, and conversely.

Proof : Since a tautology is always true (i.e., it contains only T in the last column), the negation of a tautology is always false (i.e., it contains only F in the last column).
∴ ¬P(p, q, ...) is a contradiction, and conversely.

Theorem (2) : (Principle of substitution)

Suppose P(p, q, ...) is a tautology. Then P(P₁, P₂, ...) is a tautology for any propositions P₁, P₂, ...

Proof : Since P(p, q, ...) is a tautology, it does not depend on the truth values of its variables p, q, ..., so we can substitute P₁ for p, P₂ for q, ... in the given tautology P(p, q, ...) and it remains a tautology.

Ex. 4.5.2 : Verify that (p ∧ ¬q) ∨ ¬(p ∧ ¬q) is a tautology.

Soln. : Let P = p ∧ ¬q; then the proposition becomes P ∨ ¬P. We have seen that the proposition p ∨ ¬p is a tautology.
∴ (p ∧ ¬q) ∨ ¬(p ∧ ¬q) is a tautology.

4.6 CONDITIONAL CONNECTIVES OR IMPLICATION

In mathematics, we come across statements like "if p then q". Such statements are called conditional statements and are denoted by p → q.

Remark

1. The conditional statement p → q is sometimes read as (i) p implies q, (ii) p is sufficient for q, (iii) q is necessary for p, (iv) p only if q.
2. The conditional p → q is false when the first part p is true and the second part q is false; when p is false, the conditional p → q is true regardless of the truth value of q.
3. (i)   If p is true, q is true, then p → q is true.
   (ii)  If p is true, q is false, then p → q is false.
   (iii) If p is false, q is true, then p → q is true.
   (iv)  If p is false, q is false, then p → q is true.

Table 4.6.1 : Truth table for p → q

p  q  p → q
T  T    T
T  F    F
F  T    T
F  F    T
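The tautology and contradiction checks of Sec. 4.5 can be automated by enumerating all rows, exactly as the definitions prescribe. A short sketch (helper names are our own):

```python
# Brute-force classifier matching Definitions 1 and 2 of Sec. 4.5.
from itertools import product

def classify(prop, n_vars):
    """Return 'tautology', 'contradiction', or 'contingency' for prop."""
    results = [prop(*vals) for vals in product([True, False], repeat=n_vars)]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contingency"

print(classify(lambda p: p or not p, 1))             # tautology (Table 4.5.1)
print(classify(lambda p: p and not p, 1))            # contradiction (Table 4.5.2)
print(classify(lambda p, q: p or not (p and q), 2))  # tautology (Ex. 4.5.1)
```

This runs in 2ⁿ steps for n variables, which is exactly the cost the Precedence Rule section later warns about.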

4. We observe that the truth tables of ¬p ∨ q and p → q are identical (check!); i.e., p → q is logically equivalent to ¬p ∨ q :
   p → q ≡ ¬p ∨ q.

4.6.1 Examples

Ex. 4.6.1 : Determine the truth value of the following statements :
(i)  If Bombay is in India, then 4 + 5 = 9.
(ii) If Bombay is in India, then 4 + 5 = 3.

Soln. :
(i)  Let p = Bombay is in India, q = 4 + 5 = 9.
     Since p is true and q is true, p → q is true.
(ii) Let p = Bombay is in India, q = 4 + 5 = 3.
     Since p is true but q is false, p → q is false.

Ex. 4.6.2 : Rewrite the following statements without using the conditional :
(i)  If it is hot, he wears a hat.
(ii) If F is a field, it is an integral domain.

Soln. :
(i)  Recall that p → q ≡ ¬p ∨ q; i.e., 'if p then q' is equivalent to 'not p or q'.
     ∴ It is not hot or he wears a hat.
(ii) F is not a field or F is an integral domain.

4.6.2 Conditional Statements and Variations

Now we consider other simple conditional propositions containing p and q.

1. Let p → q be a conditional proposition. Then 'q → p' is called the converse of the conditional proposition.
2. '¬p → ¬q' is called the inverse of the original conditional proposition p → q.
3. '¬q → ¬p' is called the contrapositive of the original conditional proposition p → q.

Example of conditional propositions

Which, if any, of the above propositions are logically equivalent to p → q ? We construct the truth-table for the conditional proposition and its variations :

Table 4.6.2 : Truth table for the conditional and its variations

p  q  ¬p  ¬q  conditional p → q  converse q → p  inverse ¬p → ¬q  contrapositive ¬q → ¬p
T  T   F   F          T                T                T                   T
T  F   F   T          F                T                T                   F
F  T   T   F          T                F                F                   T
F  F   T   T          T                T                T                   T

Only the contrapositive '¬q → ¬p' is logically equivalent to the original conditional proposition p → q.

4.6.3 Advantages and Disadvantages of Propositional Logic

Advantages of propositional logic

1. It is used in artificial intelligence for planning, problem-solving, intelligent control and, most importantly, for decision-making.
2. It is about Boolean functions and the systems where there are more than just true and false values, and includes certainty as well as uncertainty.
3. It led to the foundation of machine learning models.
4. It is a useful tool for reasoning.

Disadvantages of propositional logic

1. We cannot represent relations like "all", "some" or "none" with propositional logic.
   Example : All the boys are intelligent.
2. Propositional logic has limited expressive power.
3. In propositional logic, we cannot describe statements in terms of their properties or logical relationships.
4. Propositional logic lacks the syntax for representing objects in the domain of interest.
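The claim of Table 4.6.2 — that only the contrapositive agrees with p → q on every row — can be checked in a few lines (a sketch with our own helper names):

```python
# Compare p -> q against its converse, inverse and contrapositive row by row.
from itertools import product

def implies(a, b):
    return (not a) or b  # p -> q is false only when p is true and q is false

rows = list(product([True, False], repeat=2))
conditional    = [implies(p, q) for p, q in rows]
converse       = [implies(q, p) for p, q in rows]
inverse        = [implies(not p, not q) for p, q in rows]
contrapositive = [implies(not q, not p) for p, q in rows]

print(conditional == converse)        # False
print(conditional == inverse)         # False
print(conditional == contrapositive)  # True
```

Note that the converse and the inverse agree with each other (each is the contrapositive of the other), which is why the text says "if the converse is true, then the inverse is also logically true".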

4.6.4 Theorem of the Contrapositive of the Statement

The contrapositive is a proposition or theorem formed by negating both the hypothesis and the conclusion of a given proposition or theorem and interchanging them : "if not-B then not-A" is the contrapositive of "if A then B".

To form the contrapositive of a conditional statement, we interchange the hypothesis and the conclusion of the inverse statement. The contrapositive of "if it rains, then they cancel school" is "if they do not cancel school, then it does not rain".

A conditional statement p → q and its contrapositive ¬q → ¬p are logically equivalent. If the statement is true, then the contrapositive is also logically true. If the converse is true, then the inverse is also logically true.

Example : If two angles are congruent, then they have the same measure.

Converse, Inverse, Contrapositive

Statement      : If p, then q.
Inverse        : If not p, then not q.
Contrapositive : If not q, then not p.

The contrapositive of a conditional statement of the form "if p then q" is "if ¬q then ¬p". Symbolically, the contrapositive of p → q is ¬q → ¬p.

(i)   Conditional : The conditional of q by p is "if p then q" or "p implies q", and is denoted by 'p → q'.
(ii)  Biconditional (iff) : The biconditional of p and q is "p, if and only if, q", and is denoted by p ↔ q.
(iii) Only if : "p only if q" means "if not q then not p" or, equivalently, "if p then q".
(iv)  Sufficient condition : "p is a sufficient condition for q" means "if p then q".

Ex. 4.6.3 : Determine the contrapositive of the statements :
(i)  If Bhide is a teacher, then he is poor.
(ii) If Thombare studies, he will pass the test.

Soln. :
(i)  The contrapositive of p → q is ¬q → ¬p.
     ∴ The contrapositive of the given statement is : If Bhide is not poor, then he is not a teacher.
(ii) The contrapositive is : If Thombare does not study, then he will not pass the test.

4.6.5 Biconditional : p ↔ q

Another common statement is of the form "p if and only if q". Such statements are called biconditional statements and are denoted by p ↔ q.

Table 4.6.3 : Truth table for p ↔ q

p  q  p ↔ q
T  T    T
T  F    F
F  T    F
F  F    T

The biconditional p ↔ q is true whenever p and q have the same truth values, and false otherwise.

Theorem : The propositions P(p, q, ...) and Q(p, q, ...) are logically equivalent if and only if the proposition P(p, q, ...) ↔ Q(p, q, ...) is a tautology.

Proof :
► Step (I)  : Let P(p, q, ...) ≡ Q(p, q, ...). Then they have the same truth table.
              ∴ P(p, q, ...) ↔ Q(p, q, ...) is true for any values of the variables p, q, ..., which means the proposition is a tautology.
► Step (II) : Since each step is reversible, the converse is also true.
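The theorem of Sec. 4.6.5 gives a direct recipe for testing logical equivalence: check that the biconditional holds on every row. A minimal sketch (helper names are our own):

```python
# P and Q are logically equivalent exactly when P <-> Q is a tautology.
from itertools import product

def iff(a, b):
    return a == b  # the biconditional is true when both sides agree

def equivalent(P, Q, n_vars):
    """True iff P <-> Q holds on every row of the truth table."""
    return all(iff(P(*v), Q(*v))
               for v in product([True, False], repeat=n_vars))

P = lambda p, q: (not p) or q  # p -> q
# Equivalent to its contrapositive ¬q -> ¬p, but not to its converse q -> p:
print(equivalent(P, lambda p, q: q or (not p), 2))   # True
print(equivalent(P, lambda p, q: (not q) or p, 2))   # False
```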
4.7 ARGUMENTS

Definition : An argument is an assertion such that a given set of propositions P₁, P₂, ..., Pₙ, called premises, gives rise to another proposition Q (or a consequence Q), called the conclusion. Such an argument is denoted by P₁, P₂, ..., Pₙ ⊢ Q.

Definition : Valid Argument or Logical Argument :

1. An argument P₁, P₂, ..., Pₙ ⊢ Q is said to be logical or valid if Q is true whenever all the premises P₁, P₂, ..., Pₙ are true.
2. An argument which is not valid is called a fallacy.

Ex. 4.7.1 : Show that the following argument is valid : p, p → q ⊢ q. (This is called the Law of Detachment.)

Soln. : We prepare the truth-table for p → q :

p  q  p → q
T  T    T
T  F    F
F  T    T
F  F    T

(i)  p is true in lines 1 and 2; p → q is true in lines 1, 3, 4.
(ii) p and p → q are both true only in line 1. Since in this case q is also true, the argument is valid.

Remark : For the argument to be valid, it is not necessary that 'p', 'p → q' and 'q' are true in all the rows.

Ex. 4.7.2 : Show that the following argument is a fallacy : p → q, q ⊢ p.

Soln. : We observe that p → q and q are true in case (3) of the above table, but in this case p is false. Hence the argument is a fallacy.

4.7.1 : The argument P₁, P₂, ..., Pₙ ⊢ Q is valid if and only if the proposition (P₁ ∧ P₂ ∧ ... ∧ Pₙ) → Q is a tautology.

We recall that P₁, P₂, ..., Pₙ are true if and only if P₁ ∧ P₂ ∧ ... ∧ Pₙ is true.    ...(i)

Thus the argument P₁, P₂, ..., Pₙ ⊢ Q is valid if Q is true whenever P₁, P₂, ..., Pₙ are all true or, equivalently, whenever P₁ ∧ P₂ ∧ ... ∧ Pₙ is true. That is, the proposition (P₁ ∧ P₂ ∧ ... ∧ Pₙ) → Q is a tautology.

4.7.2 Fundamental Principle of Logical Reasoning

The principle states that : "If p implies q and q implies r, then p implies r." That is, the following argument is valid :
p → q, q → r ⊢ p → r.    (Law of Syllogism)

4.7.3 Verification of the Law of Syllogism

Ex. 4.7.3 : Show that the following argument is a fallacy : p → q, ¬p ⊢ ¬q.

Soln. : We construct the truth table of (i) (p → q) ∧ (¬p), and ¬q, and then [(p → q) ∧ (¬p)] → (¬q), and check whether it is a tautology or not :

p  q  p → q  ¬p  (p → q) ∧ ¬p  ¬q  [(p → q) ∧ ¬p] → ¬q
T  T    T     F        F        F            T
T  F    F     F        F        T            T
F  T    T     T        T        F            F
F  F    T     T        T        T            T

Since the proposition [(p → q) ∧ ¬p] → ¬q is not a tautology (last column), it is a fallacy.
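The validity criterion of Sec. 4.7.1 translates directly into code: an argument is valid iff no truth assignment makes all premises true and the conclusion false. A sketch with our own helper names:

```python
# Brute-force validity test: (premise1 AND ... AND premiseN) -> conclusion
# must be a tautology.
from itertools import product

def valid(premises, conclusion, n_vars):
    """Check all 2**n truth assignments for a counterexample row."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # premises true but conclusion false: fallacy
    return True

imp = lambda a, b: (not a) or b

# Law of Detachment (Ex. 4.7.1): p, p -> q |- q  — valid.
print(valid([lambda p, q: p, lambda p, q: imp(p, q)], lambda p, q: q, 2))

# Ex. 4.7.3: p -> q, not p |- not q  — a fallacy.
print(valid([lambda p, q: imp(p, q), lambda p, q: not p], lambda p, q: not q, 2))
```

The first call prints True, the second False; the counterexample found for Ex. 4.7.3 is exactly line (3) of the truth table (p false, q true).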
Remark : Observe that in line (3) of the truth table, p → q and ¬p are both true but ¬q is false.

Ex. 4.7.4 : Prove that the argument is valid : p ↔ q, q ⊢ p.

Soln. : We can establish the validity of the argument by different methods :

Method 1 : We construct the truth-table for p ↔ q :

p  q  p ↔ q
T  T    T
T  F    F
F  T    F
F  F    T

Now p ↔ q is true in lines 1 and 4, and q in lines 1 and 3.
∴ p ↔ q and q are both true in line 1, where p is also true.
∴ The argument p ↔ q, q ⊢ p is valid.

Method 2 : Now, we construct the truth-table of [(p ↔ q) ∧ q] → p as follows :

p  q  p ↔ q  (p ↔ q) ∧ q  [(p ↔ q) ∧ q] → p
T  T    T         T               T
T  F    F         F               T
F  T    F         F               T
F  F    T         F               T

Since [(p ↔ q) ∧ q] → p is a tautology, the argument is valid.

Ex. 4.7.5 : Determine the validity of the argument p → q, ¬q ⊢ ¬p.

Soln. : We construct the truth table for [(p → q) ∧ ¬q] → ¬p :

p  q  p → q  ¬q  (p → q) ∧ ¬q  ¬p  [(p → q) ∧ ¬q] → ¬p
T  T    T     F        F        F            T
T  F    F     T        F        F            T
F  T    T     F        F        T            T
F  F    T     T        T        T            T

Since the proposition [(p → q) ∧ ¬q] → ¬p is a tautology, the given argument is valid.

Ex. 4.7.6 : Show that (p ∧ q) → (p ∨ q) is a tautology.

Soln. : We prepare the truth table :

p  q  p ∧ q  p ∨ q  (p ∧ q) → (p ∨ q)
T  T    T      T            T
T  F    F      T            T
F  T    F      T            T
F  F    F      F            T

Since the truth value of (p ∧ q) → (p ∨ q) is T for all values of p and q, the proposition is a tautology.

Ex. 4.7.7 : State the converse of each of the following implications :
(i)   If 4 + 4 = 8, then I am not the PM of India.
(ii)  If I am late, then I did not take the train to work.
(iii) If I have enough money, then I will buy a car and I will buy a home.

Soln. :
(i)   If I am not the PM of India, then 4 + 4 = 8.
(ii)  If I did not take the train to work, then I am late.
(iii) If I buy a car and I buy a home, then I have enough money.

Ex. 4.7.8 : Determine the truth value for each of the following statements :
(i)   If 8 is even, then Bombay has a large population.
(ii)  If 8 is even, then Bombay has a small population.
(iii) If 8 is odd, then Bombay has a large population.
(iv)  If 8 is odd, then Bombay has a small population.
Soln. : Let p = "8 is even", q = "Bombay has a large population". We prepare the truth-value table :

Sr. No.  p  q  p → q
(i)      T  T    T
(ii)     T  F    F
(iii)    F  T    T
(iv)     F  F    T

Ex. 4.7.9 : Construct truth tables to determine whether the given statement is a tautology, a contingency or an absurdity :
(i) p → (q → p)    (ii) q → (q → p).

Soln. :
(i) We prepare the truth-table :

p  q  q → p  p → (q → p)
T  T    T         T
T  F    T         T
F  T    F         T
F  F    T         T

Since the truth-value of p → (q → p) is T for all values of p and q, the proposition is a tautology.

(ii) The truth-table is :

p  q  q → p  q → (q → p)
T  T    T         T
T  F    T         T
F  T    F         F
F  F    T         T

Since the truth-value of q → (q → p) is not T for all values of p and q, the proposition is a contingency.

Syllabus topic : Introduction to logic programming (PROLOG)

4.7.4 Logic Programming (PROLOG)

(1) Prolog stands for "programming in logic". In the logic programming paradigm, the Prolog language is the most widely available.
(2) Prolog is a declarative language, which means a program consists of data based on facts and rules (i.e. logical relationships) rather than computing how to find a solution.
(3) A logical relationship describes the relationships which hold for the given application.

4.8 PRECEDENCE RULE

We can always use a truth table to show that an argument form is valid. We do this by showing that whenever the premises are true, the conclusion must also be true.

However, this can be a tedious approach. For example, when an argument form involves 10 different propositional variables, it requires 2¹⁰ = 1024 different rows to show whether the argument form is valid or not.

Fortunately, we do not have to resort to truth tables. Instead we can first establish the rule of precedence, i.e., the order of preference in which the connectives are applied in a formula of propositions that has no brackets :

(i) ¬    (ii) ∧    (iii) ∨    (iv) → and ↔

In this section, we shall consider formulae which contain the connectives ∧, ∨, ¬. We shall see later that any formula containing any other connective can be replaced by an equivalent formula containing only these three connectives.

Definition : Two formulae, A and A*, are said to be duals of each other if either one can be obtained from the other by replacing ∧ by ∨ and ∨ by ∧.

The connectives ∧ and ∨ are called duals of each other. If the formula A contains the special variable T or F, then A*, its dual, is obtained by replacing T by F and F by T in addition to the above-mentioned interchanges.
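The duality definition is purely syntactic — swap ∧ with ∨ and T with F — so it can be sketched as a character substitution (our own helper; it assumes single-character connective symbols and would also swap a variable literally named T or F):

```python
# Compute the dual of a formula written with the symbols ∧, ∨, T, F.
def dual(formula: str) -> str:
    """Return the dual: ∧ <-> ∨ and T <-> F, everything else unchanged."""
    swap = {"∧": "∨", "∨": "∧", "T": "F", "F": "T"}
    return "".join(swap.get(ch, ch) for ch in formula)

print(dual("(p ∨ q) ∧ r"))   # (p ∧ q) ∨ r
print(dual("(p ∧ q) ∨ T"))   # (p ∨ q) ∧ F
```

Applying `dual` twice returns the original formula, which reflects that duality is an involution.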
4.9.1 Illustrative Examples Based on Duals

Ex. 4.9.1 : Write the duals of :
(i) (p ∨ q) ∧ r    (ii) (p ∧ q) ∨ T    (iii) ¬(p ∨ q) ∧ (p ∨ ¬(q ∧ ¬r))

Soln. : The duals are :
(i) (p ∧ q) ∨ r    (ii) (p ∨ q) ∧ F    (iii) ¬(p ∧ q) ∨ (p ∧ ¬(q ∨ ¬r))

4.9.2 Logical Identities

1. De Morgan's Laws
   (i) ¬(p ∨ q) ≡ (¬p ∧ ¬q)    (ii) ¬(p ∧ q) ≡ (¬p ∨ ¬q)
2. Associative Laws
   (i) p ∨ (q ∨ r) ≡ (p ∨ q) ∨ r    Dual : (ii) p ∧ (q ∧ r) ≡ (p ∧ q) ∧ r
3. Commutative Laws
   (i) p ∨ q ≡ q ∨ p    Dual : p ∧ q ≡ q ∧ p
4. Idempotent Laws
   (i) p ∨ p ≡ p    Dual : p ∧ p ≡ p
5. Double Negation (Involution Law)
   ¬(¬p) ≡ p
6. Distributive Laws
   (i) p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)    Dual : p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)
7. Absorption Laws
   (i) p ∨ (p ∧ q) ≡ p    Dual : (ii) p ∧ (p ∨ q) ≡ p
8. Complement Laws
   p ∨ ¬p ≡ T    Dual : p ∧ ¬p ≡ F
9. ¬T ≡ F, ¬F ≡ T

4.9.3 Examples on Logical Equivalency

Ex. 4.9.2 : Show that the propositions ¬(p ∧ q) and ¬p ∨ ¬q are logically equivalent.

Soln. : We prepare truth-tables for both expressions.

Truth table for ¬(p ∧ q)         Truth table for ¬p ∨ ¬q

p  q  p ∧ q  ¬(p ∧ q)           p  q  ¬p  ¬q  ¬p ∨ ¬q
T  T    T       F               T  T   F   F     F
T  F    F       T               T  F   F   T     T
F  T    F       T               F  T   T   F     T
F  F    F       T               F  F   T   T     T

Since the truth tables are the same — both propositions are false in the first case and true in the other three cases — the propositions ¬(p ∧ q) and ¬p ∨ ¬q are logically equivalent; i.e., ¬(p ∧ q) ≡ ¬p ∨ ¬q.

Remark

(i)   Let P(P₁, P₂, ..., Pₙ) be a given statement, where P₁, P₂, ..., Pₙ are variables. To obtain the truth-value of the formula P, we develop a truth-table for P. Such a truth-table has 2ⁿ rows.
      • If P has truth value T for all possible assignments, then P is said to be a tautology.
      • If P has truth value F for all possible assignments, then P is said to be a contradiction.
      • If P has truth value T for at least one combination of truth-values of P₁, P₂, ..., Pₙ, then P is said to be satisfiable.
(ii)  The problem of finding, in a finite number of steps, whether a given statement is a tautology, a contradiction or satisfiable is called a decision problem.
(iii) For decision problems, the construction of truth-tables may not be a practical solution. Therefore we consider an alternate procedure known as reduction to Normal Forms.
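Each identity in Sec. 4.9.2 can be verified mechanically by comparing both sides on every assignment — a brute-force sketch (helper names are our own):

```python
# Verify logical identities by exhaustive comparison of both sides.
from itertools import product

def holds(lhs, rhs, n_vars):
    """True iff lhs and rhs agree on all 2**n assignments."""
    return all(lhs(*v) == rhs(*v)
               for v in product([True, False], repeat=n_vars))

# De Morgan's laws
print(holds(lambda p, q: not (p or q), lambda p, q: (not p) and (not q), 2))
print(holds(lambda p, q: not (p and q), lambda p, q: (not p) or (not q), 2))
# Distributive law: p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)
print(holds(lambda p, q, r: p or (q and r),
            lambda p, q, r: (p or q) and (p or r), 3))
# Absorption law: p ∨ (p ∧ q) ≡ p
print(holds(lambda p, q: p or (p and q), lambda p, q: p, 2))
```

All four calls print True, confirming the corresponding identities.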
There are two such forms :
1. Disjunctive Normal Form (DNF).
2. Conjunctive Normal Form (CNF).

These forms are also called canonical forms.

4.10.1 Disjunctive Normal Form

Disjunctive normal form is a disjunction (∨) of fundamental conjunctions (∧). Now fundamental conjunctions are conjunctions of simple statements, i.e. statements joined by ∧. Thus p, ¬p, ¬p ∧ q, p ∧ q and p ∧ ¬p ∧ q are fundamental conjunctions.

Some examples of DNF are as follows :
(i)   (p ∧ q ∧ r) ∨ (p ∧ r) ∨ (q ∧ r)
(ii)  (p ∧ ¬q) ∨ (p ∧ r)
(iii) (p ∧ q ∧ r) ∨ ¬q
(iv)  (¬p ∧ q) ∨ (p ∧ q) ∨ q
(v)   Remark : q ∧ ¬q and p ∧ ¬p are always false (F).

4.10.2 Examples of DNF

Ex. 4.10.1 : Obtain the d.n.f. of the form (p → q) ∧ (¬p ∧ q).

Soln. :
  (p → q) ∧ (¬p ∧ q)
  ≡ (¬p ∨ q) ∧ (¬p ∧ q)                       [∵ (p → q) ≡ ¬p ∨ q]
Using the distributive law,
  ≡ [¬p ∧ (¬p ∧ q)] ∨ [q ∧ (¬p ∧ q)]
Using the associative and commutative laws,
  ≡ [(¬p ∧ ¬p) ∧ q] ∨ [q ∧ q ∧ ¬p]
Using the idempotent law, i.e. ¬p ∧ ¬p = ¬p and q ∧ q = q,
  ≡ (¬p ∧ q) ∨ (q ∧ ¬p) is d.n.f.

Ex. 4.10.2 : Obtain the d.n.f. of the form p ∧ (p → q).

Soln. : We have
  p ∧ (p → q)
  ≡ p ∧ (¬p ∨ q)                              (∵ p → q ≡ ¬p ∨ q)
Using the distributive law,
  ≡ (p ∧ ¬p) ∨ (p ∧ q)
  ≡ F ∨ (p ∧ q)                               (∵ p ∧ ¬p = F)
  ≡ (p ∧ q) is d.n.f.

4.10.3 Conjunctive Normal Form (CNF)

A statement form which consists of a conjunction (∧) of fundamental disjunctions is called a conjunctive normal form (abbreviated as cnf). Fundamental disjunctions (∨) are disjunctions of simple statements, i.e. statements joined by ∨.

Remark

1. We know that p ∨ ¬p is always true. Hence p ∨ ¬p ∨ r is logically equivalent to a tautology.
2. A cnf is a tautology if and only if every fundamental disjunction contained in it is a tautology.

Some examples of CNF are :
(i) p ∧ r    (ii) ¬p ∧ (p ∨ r)    (iii) (p ∨ q ∨ r) ∧ (¬p ∨ r)

4.10.3(A) Conversion from PL to CNF

UQ : Explain the steps involved in converting the propositional logic statement into CNF with suitable example.

To convert a propositional formula to CNF, we perform the following steps :

► Step 1 : Push negations into the formula, repeatedly applying De Morgan's Laws, until all negations only apply to atoms. We obtain a formula in negation normal form :
  (i)  ¬(p ∨ q) to (¬p) ∧ (¬q)
  (ii) ¬(p ∧ q) to (¬p) ∨ (¬q)

► Step 2 : We apply the distributive law wherever a disjunction occurs over a conjunction. When this is completed, the formula is in CNF :
  (i) p ∨ (q ∧ r) to (p ∨ q) ∧ (p ∨ r)
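The two-step conversion just described can be sketched as a small recursive program (our own, not the textbook's): formulas are nested tuples such as `("and", a, b)`, `("or", a, b)`, `("not", a)`, or a variable name string.

```python
# Step 1 (nnf): push negations inward with De Morgan's laws.
# Step 2 (cnf): distribute OR over AND until no OR sits above an AND.
def nnf(f):
    if isinstance(f, str):
        return f
    op = f[0]
    if op == "not":
        g = f[1]
        if isinstance(g, str):
            return f
        if g[0] == "not":
            return nnf(g[1])  # double negation
        if g[0] == "and":
            return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
        if g[0] == "or":
            return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
    return (op, nnf(f[1]), nnf(f[2]))

def cnf(f):
    if isinstance(f, str) or f[0] == "not":
        return f
    a, b = cnf(f[1]), cnf(f[2])
    if f[0] == "and":
        return ("and", a, b)
    # f is an OR: distribute over any AND child
    if not isinstance(a, str) and a[0] == "and":
        return ("and", cnf(("or", a[1], b)), cnf(("or", a[2], b)))
    if not isinstance(b, str) and b[0] == "and":
        return ("and", cnf(("or", a, b[1])), cnf(("or", a, b[2])))
    return ("or", a, b)

# Example: ¬(p ∨ q) ∨ (p ∧ r), converted to CNF
f = ("or", ("not", ("or", "p", "q")), ("and", "p", "r"))
print(cnf(nnf(f)))
```

Implications and biconditionals would first be rewritten with P → Q ≡ ¬P ∨ Q, as in Step (III) of the UQ solution below.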
Remark

To obtain a formula in disjunctive normal form, simply apply the distribution of ∧ over ∨ in Step (2).

Example : A → (B ∧ C)
  ≡ ¬A ∨ (B ∧ C)
  ≡ (¬A ∨ B) ∧ (¬A ∨ C)

UQ : Convert the following propositional logic statement into CNF : A → (B ↔ C).

► Step (I)   : FOL : A → (B ↔ C)
► Step (II)  : Normalise the given statement :
               A → ((B → C) ∧ (C → B))
               ≡ (A → (B → C)) ∧ (A → (C → B))
► Step (III) : Converting to CNF — we apply the rule P → Q ≡ ¬P ∨ Q :
               (¬A ∨ ¬B ∨ C) ∧ (¬A ∨ ¬C ∨ B)

4.10.4 Examples on Conjunctive Normal Form

Ex. 4.10.3 : Obtain the conjunctive normal form of :
(i) (¬p → r) ∧ (p → q)    (ii) (p ∧ q) ∨ (¬p ∧ q ∧ r).

Soln. :
(i)  (¬p → r) ∧ (p → q)
     ≡ [¬(¬p) ∨ r] ∧ [¬p ∨ q]                  (∵ p → r ≡ ¬p ∨ r)
     ≡ [p ∨ r] ∧ [¬p ∨ q] is cnf.
(ii) (p ∧ q) ∨ (¬p ∧ q ∧ r)
     Using the (left) distributive law :
     ≡ [p ∨ (¬p ∧ q ∧ r)] ∧ [q ∨ (¬p ∧ q ∧ r)]
     ≡ [(p ∨ ¬p) ∧ (p ∨ q) ∧ (p ∨ r)] ∧ [(q ∨ ¬p) ∧ (q ∨ q) ∧ (q ∨ r)]
     ≡ [T ∧ (p ∨ q) ∧ (p ∨ r)] ∧ [(q ∨ ¬p) ∧ q ∧ (q ∨ r)]
     ≡ [(p ∨ q) ∧ (p ∨ r)] ∧ [(q ∨ ¬p) ∧ q ∧ (q ∨ r)] is cnf.

Ex. 4.10.4 : Obtain the cnf of the following : (p → q) ∧ (q → p).

Soln. :
  (p → q) ∧ (q → p)                            (∵ p → q ≡ ¬p ∨ q)
  ≡ (¬p ∨ q) ∧ (¬q ∨ p) is cnf.

Ex. 4.10.5 : Obtain the conjunctive normal form of :
(i) p ∧ (p → q)    (ii) ¬(p ∨ q) ↔ (p ∧ q).

Soln. :
(i)  p ∧ (p → q) ≡ p ∧ (¬p ∨ q) is cnf.        (∵ p → q ≡ ¬p ∨ q)
(ii) ¬(p ∨ q) ↔ (p ∧ q)
     ≡ [¬¬(p ∨ q) ∨ (p ∧ q)] ∧ [¬(p ∧ q) ∨ ¬(p ∨ q)]
                                 [∵ P ↔ Q ≡ (¬P ∨ Q) ∧ (¬Q ∨ P)]
     ≡ [(p ∨ q) ∨ (p ∧ q)] ∧ [(¬p ∨ ¬q) ∨ (¬p ∧ ¬q)]
                                 [∵ ¬(p ∧ q) ≡ ¬p ∨ ¬q; ¬(p ∨ q) ≡ ¬p ∧ ¬q]
     ≡ [(p ∨ q ∨ p) ∧ (p ∨ q ∨ q)] ∧ [(¬p ∨ ¬q ∨ ¬p) ∧ (¬p ∨ ¬q ∨ ¬q)]
     ≡ (p ∨ q) ∧ (p ∨ q) ∧ (¬p ∨ ¬q) ∧ (¬p ∨ ¬q)   [∵ p ∨ p = p and ¬p ∨ ¬p = ¬p]
     ≡ (p ∨ q) ∧ (¬p ∨ ¬q) is cnf.

4.11 DNF from the Truth Table

Let P be a statement form containing n variables p₁, p₂, ..., pₙ. Its dnf can be obtained from the truth-table as follows :

► Step (I)  : Consider the rows where the truth value of P is T.
► Step (II) : For each such row, form the conjunction (∧)
              (p₁ ∧ p₂ ∧ ... ∧ pⱼ ∧ ... ∧ pₙ),
              where if pⱼ is true we take pⱼ, and if pⱼ is false we take ¬pⱼ.
              Such a term is called a minterm.

► Step (III) : The disjunction of the minterms is the dnf of the given form.

4.11.1 Examples on dnf

Ex. 4.11.1 : Find the dnf of (¬p → r) ∧ (p ↔ q).

Soln. :

► Step (I) : First we prepare the truth-table.

p  q  r  ¬p  ¬p → r  p ↔ q  (¬p → r) ∧ (p ↔ q)
T  T  T   F     T       T             T
T  T  F   F     T       T             T
T  F  T   F     T       F             F
T  F  F   F     T       F             F
F  T  T   T     T       F             F
F  T  F   T     F       F             F
F  F  T   T     T       T             T
F  F  F   T     F       T             F

► Step (II) : (i) Consider only the rows with 'T' in the last column and choose the corresponding values of p, q and r. E.g. for the first row, the corresponding p, q and r are all true, so we consider (p ∧ q ∧ r).
(ii)  Similarly for the 2nd row, the corresponding p and q are true, but r is false; so we consider (p ∧ q ∧ ¬r).
(iii) Again for the 7th row, the corresponding p, q are false and r is true, so we consider (¬p ∧ ¬q ∧ r).

► Step (III) : Hence the dnf of (¬p → r) ∧ (p ↔ q) is equal to :
(p ∧ q ∧ r) ∨ (p ∧ q ∧ ¬r) ∨ (¬p ∧ ¬q ∧ r)

Syllabus topic : Predicate Logic : FOPL, Syntax, Semantics

4.12 PREDICATE LOGIC

Propositional logic cannot adequately express the meaning of statements in mathematics and in computer programs — for example, statements involving variables such as :
(i) "x > 3"    (ii) "x = y + 3"    (iii) "x + y = z"
(iv) "Computer x is under attack by an intruder"
(v) "Computer y is functioning properly"

These statements are neither true nor false if the values of the variables are not specified. Here we discuss the ways that propositions can be produced from such statements.

4.12.1 Predicates

• The statement "x is greater than 3" has two parts. The first part, the variable x, is the subject of the statement.
• The second part — called the predicate "is greater than 3" — refers to the property that the subject of the statement can have.
• We denote the statement "x is greater than 3" by P(x), where "P" denotes the predicate "is greater than 3" and 'x' is the variable. The statement P(x) is also said to be the value of P at x.
• Once a value has been assigned to the variable x, the statement P(x) becomes a proposition and has a truth value, and P is called a propositional function.

Ex. 4.12.1 : Let P(x) denote the statement "x > 3". What are the truth values of P(4) and P(2) ?

Soln. : To obtain P(4), we put x = 4 in the statement 'x > 3'. ∴ P(4) is "4 > 3", which is true. But P(2), which is the statement "2 > 3", is false.

Ex. 4.12.2 : Consider the statement "x = y + 3". We can denote the statement by Q(x, y), where x and y are variables and Q is the predicate. What are the truth values of the propositions Q(1, 2) and Q(3, 0) ?

Soln. : To obtain Q(1, 2), we put x = 1, y = 2 in the statement Q(x, y). Hence Q(1, 2) : "1 = 2 + 3", which is false. The statement Q(3, 0) : "3 = 0 + 3" is true.
Ex. 4.12.3 : Let R(x, y, z) denote the statement "x + y = z". What are the truth values of the propositions R(1, 2, 3) and R(0, 0, 1) ?
Soln. : The proposition R(1, 2, 3) is obtained by putting x = 1, y = 2, z = 3 in the statement R(x, y, z). We see that R(1, 2, 3) is "1 + 2 = 3", which is true. Also, note that R(0, 0, 1), which is the statement "0 + 0 = 1", is false.

Remark
1. In general, a statement involving n variables x1, x2, …, xn can be denoted by P(x1, x2, …, xn), and is the value of the propositional function P at the n-tuple (x1, x2, …, xn). P is also called an n-place predicate or an n-ary predicate.
2. A predicate is generally not a proposition, but every proposition is a propositional function or a predicate.

Syllabus topic : Quantification

4.13  UNIVERSAL QUANTIFIER

•  Many mathematical statements assert that a property is true for all values of a variable in a particular domain. The 'domain' of the values of a variable is also called the 'universe of discourse', or 'domain of discourse' (often just referred to as the 'domain').
•  The universal quantification of P(x) for a particular domain is the proposition that P(x) is true for all values of x in the domain.

Definition : The universal quantification of P(x) is the statement "P(x) for all values of x in the domain".
•  The notation ∀x P(x) is read as "for all x, P(x)".
•  An element for which P(x) is false is called a counterexample of ∀x P(x).

Remark
(1) The symbol '∀' is known as the universal quantifier.
(2) The proposition "for all x, P(x)", which is interpreted as "for all values of x, P(x) is true", is a proposition in which the variable x is said to be 'universally quantified' [and '∀' is known as the universal quantifier].

4.13.1  Existential Quantifiers

•  Suppose that for the predicate P(x), ∀x [P(x)] is false, but there exists at least one value of x (or some values of x) for which P(x) is true; then we say that x is bound by existential quantification.
•  The symbol used for 'there exists' is '∃'. Thus ∃x (P(x)) means 'there exists a value of x in the domain for which P(x) is true'.

Remark
(1) The negation of ∀x (P(x)) is "∀x P(x) is not true", i.e. there exists at least one x for which P(x) is not true; in other words, there exists an x for which '¬P(x)' is true.
(2) Observe that the statement ∃x P(x) is false if and only if there is no element x in the domain for which P(x) is true.

We summarise the meaning of universal and existential quantification of P(x) :

Table 4.13.1 : Universal and existential quantification

Statement  | When true?                            | When false?
∀x P(x)    | P(x) is true for every x.             | There is an x for which P(x) is false.
∃x P(x)    | There is an x for which P(x) is true. | P(x) is false for every x.

•  Specifying the domain is mandatory when quantifiers are used. The truth value of a quantified statement often depends on which elements are in the domain.
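Over a finite domain, Table 4.13.1 can be checked mechanically: ∀ corresponds to Python's all() and ∃ to any(). A small sketch (the predicate "x² > 0" and the small finite domain are illustrative choices standing in for an infinite universe of discourse):

```python
# Universal and existential quantification over a finite, listable
# domain: "for all x, P(x)" becomes all(...), "there exists x, P(x)"
# becomes any(...).
def P(x):
    return x * x > 0  # P(x) : 'x^2 > 0'

domain = range(-3, 4)  # finite stand-in for the integers

forall_P = all(P(x) for x in domain)  # False: x = 0 is a counterexample
exists_P = any(P(x) for x in domain)  # True: e.g. x = 1

counterexamples = [x for x in domain if not P(x)]
print(forall_P, exists_P, counterexamples)  # False True [0]
```

Note how the list of counterexamples witnesses why ∀x P(x) fails, exactly as in the definition above.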


4.14  EXAMPLES OF QUANTIFICATION

Ex. 4.14.1 : Let P(x) be the statement "x² > 0", where the universe of discourse consists of all integers. What is the truth value of the quantification ∀x P(x) ?
Soln. : [Note : The 'universe of discourse' is the domain of the statement "x² > 0", consisting of all integers.]
We see that x = 0 is a counterexample, because x² = 0 when x = 0, so that x² is not greater than 0 when x = 0. ∴ ∀x P(x) is false.

Remark
When all the elements in the domain can be listed — say, x1, x2, …, xn — it follows that the universal quantification ∀x P(x) is the same as the conjunction P(x1) ∧ P(x2) ∧ … ∧ P(xn), because this conjunction is true if and only if P(x1), P(x2), …, P(xn) are all true.

Ex. 4.14.2 : What is the truth value of ∀x P(x), where P(x) is the statement "x² < 10" and the domain consists of positive integers not exceeding 4 ?
Soln. : The domain consists of the integers 1, 2, 3, 4. We observe (from the above remark) that the statement ∀x P(x) is the same as the conjunction P(1) ∧ P(2) ∧ P(3) ∧ P(4). The statement P(4) : "4² < 10" is false. ∴ ∀x P(x) is false.

Ex. 4.14.3 : Let Q(x) denote the statement "x = x + 1". What is the truth value of the quantification ∃x Q(x), where the domain consists of all real numbers ?
Soln. : We note that Q(x) is false for every real number x. ∴ The existential quantification of Q(x), which is ∃x Q(x), is false.

Remark
When all elements in the domain can be listed — say, x1, x2, …, xn — the existential quantification ∃x P(x) is the same as the disjunction P(x1) ∨ P(x2) ∨ … ∨ P(xn), because this disjunction is true if and only if at least one of P(x1), P(x2), …, P(xn) is true.

Ex. 4.14.4 : What is the truth value of ∃x P(x), where P(x) is the statement "x² > 10" and the universe of discourse consists of positive integers not exceeding 4 ?
Soln. : The given domain is {1, 2, 3, 4}. From the above remark, the proposition ∃x P(x) is the same as the disjunction P(1) ∨ P(2) ∨ P(3) ∨ P(4). Because P(4), which is the statement "4² > 10", is true, it follows that [P(1) ∨ P(2) ∨ P(3) ∨ P(4)] is true. ∴ ∃x P(x) is true.

Ex. 4.14.5 : What do the statements ∀x < 0 (x² > 0), ∀y ≠ 0 (y³ ≠ 0) and ∃z > 0 (z² = 2) mean, where the domain in each case consists of the real numbers ?
Soln. :
(i) The statement ∀x < 0 (x² > 0) states that for every real number x with x < 0, x² > 0. That is, it states "The square of a negative real number is positive". This statement is the same as ∀x (x < 0 → x² > 0).
(ii) The statement ∀y ≠ 0 (y³ ≠ 0) states that for every real number y with y ≠ 0, y³ ≠ 0. That is, it states that "the cube of every non-zero real number is non-zero". This statement is equivalent to ∀y (y ≠ 0 → y³ ≠ 0).
(iii) The statement ∃z > 0 (z² = 2) states that there exists a real number z with z > 0 such that z² = 2. That is, it states "There is a positive square root of 2". The statement is equivalent to ∃z (z > 0 ∧ z² = 2).

4.14.1  Free and Bound Variables

•  Bound Variables : When a quantifier is used on the variable x, we say that this occurrence of the variable is bound.
•  Free Variables : An occurrence of a variable that is not bound by a quantifier is said to be free. [A variable is free if it is outside the scope of all quantifiers in the formula that specifies this variable, and is not set equal to a particular value.]
•  For example : In the statement ∃x (x + y = 1), the variable x is bound by the existential quantification ∃, but the variable y is free, because it is not bound by a quantifier and no value is assigned to this variable.

4.14.2  Logical Equivalences Involving Quantifiers

We have introduced the notion of logical equivalences of compound propositions. Now we extend this notion to expressions involving predicates and quantifiers.

Definition : Statements involving predicates and quantifiers are logically equivalent if they have the same truth value no matter which predicates are substituted into these statements and which domain of discourse is used for the variables in these propositional functions.

We use the notation S ≡ T to indicate that two statements S and T involving predicates and quantifiers are logically equivalent.

Ex. 4.14.6 : Show that ∀x [P(x) ∧ Q(x)] and ∀x P(x) ∧ ∀x Q(x) are logically equivalent, where the same domain is used throughout.
Soln. :
► Step (I)
•  Suppose that ∀x [P(x) ∧ Q(x)] is true. This means that for any 'a' in the domain, P(a) ∧ Q(a) is true. Hence, P(a) is true and Q(a) is true.
•  Because P(a) is true and Q(a) is true for every element in the domain, we conclude that ∀x P(x) and ∀x Q(x) are both true.
∴ ∀x P(x) ∧ ∀x Q(x) is true.
► Step (II)
•  Now let ∀x P(x) ∧ ∀x Q(x) be true. It follows that ∀x P(x) is true and ∀x Q(x) is true. Hence, if 'a' is in the domain, then P(a) is true and Q(a) is true. [Because P(x) and Q(x) are both true for all elements in the domain.]
•  It follows that for any 'a', P(a) ∧ Q(a) is true. Hence ∀x [P(x) ∧ Q(x)] is true.
∴ ∀x [P(x) ∧ Q(x)] ≡ ∀x P(x) ∧ ∀x Q(x).

4.14.3  Negating Quantified Expressions

Here we consider the negation of a quantified expression.
First we consider the universal quantifier '∀'. We consider an example :
"Every student has taken a course in calculus."
•  This statement is a universal quantification, namely ∀x P(x), where P(x) is the statement "x has taken a course in calculus" and the domain consists of the students in your class.
•  The negation of this statement is : "It is not the case that every student in your class has taken a course in calculus".
This statement is equivalent to : "There is a student in your class who has not taken a course in calculus".
•  This is simply the existential quantification of the negation of the original propositional function, namely ∃x ¬P(x).
•  The above two statements illustrate the following logical equivalence : ¬∀x P(x) ≡ ∃x ¬P(x).

4.14.4  Negating an Existential Quantification

We take an example to negate an existential quantification. Let us consider an example :
"There is a student in this class who has taken a course in calculus."
This is an existential quantification : ∃x Q(x), where Q(x) is the statement "x has taken a course in calculus."
The negation of this statement :
(1) "It is not the case that there is a student in this class who has taken a course in calculus."
This proposition is equivalent to :
(2) "Every student in this class has not taken calculus."
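Both negation equivalences — ¬∀x P(x) ≡ ∃x ¬P(x) and ¬∃x P(x) ≡ ∀x ¬P(x) — can be verified over any finite domain; a Python sketch (the domain and the predicate "x is even" are arbitrary illustrative choices):

```python
# Checking  ¬∀x P(x) ≡ ∃x ¬P(x)  and  ¬∃x P(x) ≡ ∀x ¬P(x)
# on a finite domain.
domain = range(10)

def P(x):
    return x % 2 == 0  # 'x is even' — an illustrative predicate

lhs1 = not all(P(x) for x in domain)  # ¬∀x P(x)
rhs1 = any(not P(x) for x in domain)  # ∃x ¬P(x)

lhs2 = not any(P(x) for x in domain)  # ¬∃x P(x)
rhs2 = all(not P(x) for x in domain)  # ∀x ¬P(x)

print(lhs1, rhs1, lhs2, rhs2)  # True True False False
```

Both pairs agree on every choice of domain and predicate, which is exactly what the logical equivalences assert.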

We phrase the above two statements in the language of quantifiers :
¬∃x Q(x) ≡ ∀x ¬Q(x)
This is a logical equivalence.

4.14.5  De Morgan's Laws for Quantifiers

Table 4.14.1 : De Morgan's laws for quantifiers

Negation   | Equivalent statement | When is the negation true?             | When false?
¬∃x P(x)   | ∀x ¬P(x)             | For every x, P(x) is false.            | There is an x for which P(x) is true.
¬∀x P(x)   | ∃x ¬P(x)             | There is an x for which P(x) is false. | P(x) is true for every x.

4.14.6  Different Inference Rules for FOPL

UQ. Explain different inference rules for FOPL.

Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences. Let us first see some basic terminologies used in FOL.

Substitution : Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference systems in first-order logic. Substitution is complex in the presence of quantifiers in FOL. If we write F[a/x], it refers to substituting a constant 'a' in place of the variable 'x'.

Equality : First-order logic also uses what is called Equality in FOL. For this, we can use equality symbols, which specify that two terms refer to the same object.
Example : Brother(Ramesh) = Ashok
Here, the object referred to by Brother(Ramesh) is the same as the object referred to by Ashok. The equality symbol can also be used with negation to represent that two terms are not the same object.
Example : ¬(x = y), which is equivalent to x ≠ y.

FOL inference rules for quantifiers

As in propositional logic, we also have inference rules in first-order logic. The following are some basic inference rules in FOL :
(i) Universal Generalisation
(ii) Universal Instantiation
(iii) Existential Instantiation
(iv) Existential Introduction

(i) Universal Generalisation
Universal generalisation is a valid inference rule which states that if the premise P(c) is true for any arbitrary element c, then we can have the conclusion ∀x P(x). It can be represented as :
P(c)
∴ ∀x P(x)

(ii) Universal Instantiation (UI)
Universal Instantiation, also called universal elimination, is a valid inference rule. It can be applied multiple times to add new sentences. As per UI, we can infer any sentence obtained by substituting a ground term for the variable. The UI rule states that we can infer any sentence P(c) by substituting a ground term c (a constant within the domain of x) from ∀x P(x), for any object in the universe of discourse.
It can be represented as :
∀x P(x)
∴ P(c)
Example (i) : From "Every person likes ice-cream", ∀x P(x), we can infer that "John likes ice-cream", P(John).
Example (ii) : "All kings who are greedy are evil."
In FOL form : ∀x King(x) ∧ Greedy(x) → Evil(x)

(iii) Existential Instantiation
Existential Instantiation, also called Existential Elimination, is a valid inference rule in first-order logic. It can be applied only once to replace the existential sentence. This rule states that one can infer P(c) from a formula of the form ∃x P(x), for a new constant symbol c.
It is represented as :
∃x P(x)
∴ P(c)

(iv) Existential Introduction
An existential introduction is also known as an existential generalisation, which is a valid inference rule in first-order logic. This rule states that if there is some element c in the universe of discourse which has a property P, then we can infer that there exists something in the universe which has the property P.
It can be expressed as :
P(c)
∴ ∃x P(x)
Example :
"Priyanka got good marks in English."
"Therefore, someone got good marks in English."

Generalised Modus Ponens Rule
For the inference process in FOL, we have a single inference rule which is called 'Generalised Modus Ponens'. Generalised modus ponens can be summarised as, "P implies Q and P is asserted to be true, therefore Q must be true".
According to modus ponens, for atomic sentences pi, pi', q, where there is a substitution θ such that Subst(θ, pi') = Subst(θ, pi), it can be written as :
(p1' ∧ p2' ∧ … ∧ pn'), (p1 ∧ p2 ∧ … ∧ pn → q)
∴ Subst(θ, q)
Example : We will use this rule for "kings are evil" : we will find some x such that x is a king and x is greedy, so we can infer that x is evil.
Here, let p1' be King(John), p1 be King(x), p2' be Greedy(y), p2 be Greedy(x), θ be {x/John, y/John} and q be Evil(x). Then Subst(θ, q) is Evil(John).

4.14.7  Examples

Ex. 4.14.7 : Consider the statements :
"All humming birds are richly coloured."
"No large birds live on honey."
"Birds that do not live on honey are dull in colour."
"Humming birds are small."
Let P(x), Q(x), R(x) and S(x) be the statements "x is a humming bird", "x is large", "x lives on honey" and "x is richly coloured" respectively. Assuming that the domain consists of all birds, express the statements in the argument using quantifiers and P(x), Q(x), R(x) and S(x).
Soln. : We express the statements using quantifiers :
∀x [P(x) → S(x)]
¬∃x [Q(x) ∧ R(x)]
∀x [¬R(x) → ¬S(x)]
∀x [P(x) → ¬Q(x)]
Remarks
(i) Here we assume "small" as "not large", and
(ii) "dull in colour" as "not richly coloured".

Ex. 4.14.8 : For the universe of integers, let p(x), q(x), r(x) and t(x) be the following open statements :
p(x) : x > 0, q(x) : x is even, r(x) : x is a perfect square, t(x) : x is divisible by 5.
Write the following statements in symbolic form :
(i) At least one integer is even.
(ii) There exists a positive integer that is even.
(iii) If x is even, then x is not divisible by 5.
(iv) No even integer is divisible by 5.
(v) There exists an even integer divisible by 5.
Soln. :
(i) ∃x q(x)

(ii) ∃x [p(x) ∧ q(x)]
(iii) ∀x [q(x) → ¬t(x)]
(iv) ∀x ¬[q(x) ∧ t(x)]
(v) ∃x [q(x) ∧ t(x)]

Syllabus topic : Inference rules in FOPL
4.14.8  FOPL

UEx. 4.14.9 (MU - Q. 2(b), Dec. 2016, 5 Marks) : Write first-order logic statements for the following.
Soln. :
(1) If a perfect square is divisible by a prime p, then it is also divisible by the square of p.
∀x ∀p [perfect-square(x) ∧ prime(p) ∧ divisible(x, p) → divisible(x, p²)]
(2) ¬likes(x, Chemistry) ∧ ¬likes(x, History)
(3) If it is Saturday and warm, then Sam is in the park.
(Saturday ∧ Warm) → in(Sam, Park)
(4) Anything anyone eats and is not killed by is food.
∀x ∀y [eats(x, y) ∧ ¬killed(x) → food(y)], i.e. ∀x ∀y [¬(eats(x, y) ∧ ¬killed(x)) ∨ food(y)]

UEx. 4.14.10 : What is FOPL ? Represent the following sentences using FOPL :
(i) John has at least two friends.
(ii) If two people are friends, then they are not enemies.
Soln. :
The First Order Predicate Logic (FOPL) is a method of formal representation of Natural Language (NL) text. The Prolog language for AI programming has its foundations in FOPL. It demonstrates how to translate NL to FOPL in the form of facts and rules, the use of quantifiers and variables, the syntax and semantics of FOPL, and the conversion of predicate expressions to clause forms. This is followed by unification of predicate expressions using instantiations and substitutions, compositions of substitutions, the unification algorithm and its analysis.
(i) ∃y ∃z [friend(John, y) ∧ friend(John, z) ∧ ¬(y = z)]
(ii) ∀x ∀y [friends(x, y) → ¬enemies(x, y)]
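Substitutions such as θ = {x/John, y/John} from the Generalised Modus Ponens example in Section 4.14.6 can be applied mechanically; a minimal Python sketch (the tuple encoding of atoms is an assumption of this illustration, not a standard library API):

```python
# A minimal substitution sketch in the spirit of Generalised Modus
# Ponens: an atom is a tuple (predicate, *arguments) and a
# substitution theta maps variable names to constants.  This is an
# illustrative fragment, not a full unification algorithm.
def subst(theta, atom):
    """Apply substitution theta to an atom (predicate, *args)."""
    pred, *args = atom
    return (pred, *[theta.get(a, a) for a in args])

theta = {'x': 'John', 'y': 'John'}

# Premises King(John), Greedy(y); rule King(x) ∧ Greedy(x) → Evil(x)
print(subst(theta, ('King', 'x')))    # ('King', 'John')
print(subst(theta, ('Greedy', 'y')))  # ('Greedy', 'John')
print(subst(theta, ('Evil', 'x')))    # ('Evil', 'John') = Subst(theta, q)
```

After the substitution, both premise copies and the rule's conclusion agree on the constant John, which is what licenses the inference Evil(John).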

4.14.9  Comparison between Propositional Logic and First Order Logic (Predicate Logic)

UQ. What is the difference between propositional logic (PL) and first-order predicate logic (FOPL) knowledge representation mechanisms ? Take a suitable example for each point of differentiation. (MU - Q. 2(b), May 19, 10 Marks)

Sr. No. | Parameters      | Propositional Logic (PL)                                                                 | Predicate Logic (FOL)
1.      | Definition      | Propositional logic deals with simple declarative propositions.                          | First-order logic additionally covers predicates and quantification.
2.      | Entities        | A proposition is a collection of declarative statements that has either a truth value "true" or a truth value "false". | Predicate logic is an expression of one or more variables defined on some specific domain.
3.      | Boolean values  | Propositional logic is a simple form of logic which is also known as Boolean logic.      | Predicate logic is a collection of formal systems which uses quantified variables over non-logical objects and allows the use of sentences which contain variables.
4.      | Truth values    | A proposition has truth values (0, 1), which means it can have one of the two values, i.e. True or False. It is therefore also known as Boolean logic. | A predicate is an expression consisting of variables with a specific domain; its truth value depends on the values the variables take in that domain.
5.      | Usefulness      | It is the most basic and widely used logic. This logic is used for the development of powerful search algorithms, including implementation methods. | Predicate logic is an extension of propositional logic. Predicate logic deals with infinite structures as well; the quantifiers are the linguistic marks that permit one to treat such infinite structures.
6.      | Nature          | Propositional logic is used in AI for planning, problem-solving, intelligent control and for decision-making. It also includes certainty as well as uncertainty. | A predicate with variables can be made a proposition by either assigning a value to the variable or by quantifying the variable. It consists of objects, functions and relations between the objects.
7.      | Representations | It led to the foundation for machine learning models.                                    | Predicate logic helps analyse the scope of the subject over the predicate.
8.      | Language        | It is a useful tool for reasoning.                                                       | It is different from propositional logic, which lacks quantifiers.
9.      | Level of logic  | It has limitations because it cannot see inside propositions and take advantage of relationships among them. | Predicate logic is undecidable, since universal and existential quantifiers treat with infinite structures.
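The contrast in the first rows of the comparison can be made concrete; a small Python sketch (the Socrates/Plato example is an illustrative assumption, not taken from the text):

```python
# Propositional logic: each whole sentence is an opaque atom — the
# logic cannot see the 'Socrates' inside 'socrates_is_a_man'.
pl = {'socrates_is_a_man': True, 'plato_is_a_man': True}

# Predicate logic: a predicate Man(x) plus one quantified rule covers
# every individual in the domain at once.
domain = {'Socrates', 'Plato', 'Fido'}
man = {'Socrates', 'Plato'}

def Man(x):
    return x in man

# ∀x Man(x) → Mortal(x): derive Mortal(x) for each x with Man(x) true
mortal = {x for x in domain if Man(x)}
print(sorted(mortal))  # ['Plato', 'Socrates']
```

In the propositional encoding, adding a new individual needs a brand-new atom, whereas the predicate encoding reuses the single rule for every member of the domain.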

Syllabus topic : Forward Chaining, Backward Chaining

4.15  FORWARD CHAINING AND BACKWARD CHAINING

UQ. Explain Forward-chaining and Backward-chaining algorithms with the help of an example.

Rule-based system architecture consists of a set of rules, a set of facts, and an inference engine. The need is to find what new facts can be derived. Given a set of rules, there are essentially two ways to generate new knowledge : one, forward chaining, and the other, backward chaining.

4.15.1  Forward Chaining

UQ. Illustrate Forward chaining in propositional logic.

Forward chaining means that the system starts from a set of facts and a set of rules, and tries to find a way of using those rules and facts to deduce a conclusion or come up with a suitable course of action. This is known as data-driven reasoning, because the reasoning starts from a set of data and ends up at the goal, which is the conclusion.

Steps followed in forward chaining

(i) When applying forward chaining, the first step is to take the facts in the fact database and see if any combination of these matches all the antecedents of one of the rules in the rule database.
(ii) When all the antecedents of a rule are matched by facts in the database, then this rule is triggered.
(iii) Usually, when a rule is triggered, it is then fired, which means its conclusion is added to the facts database. If the conclusion of the rule that has been fired is an action or a recommendation, then the system may cause that action to take place or the recommendation to be made.
(iv) For example, consider the following set of rules used to control an elevator in a three-story building :
Rule 1 : IF on first floor AND button is pressed on first floor THEN open door.
Rule 2 : IF on first floor AND button is pressed on second floor THEN go to second floor.
Rule 3 : IF on first floor AND button is pressed on third floor THEN go to third floor.
Rule 4 : IF on second floor AND button is pressed on first floor AND already going to third floor THEN remember to go to first floor later.

4.15.2  Backward Chaining

UQ. Illustrate Forward chaining and backward chaining in propositional logic with example.

In backward chaining, we start from a conclusion, which is the hypothesis we wish to prove, and we aim to show how that conclusion can be reached from the rules and facts in the database.
The conclusion we are aiming to prove is called a goal, and so reasoning in this way is known as goal-driven reasoning.

Backward chaining used in planning

•  A plan is a sequence of actions that a program decides to take to solve a particular problem. Backward chaining can make the process of formulating a plan more efficient than forward chaining.
•  Backward chaining in this way starts with the goal state, which is the set of conditions the agent wishes to achieve in carrying out its plan. It examines this state and sees what actions could lead to it.
•  For example, if the goal state involves a block being on a table, then one possible action would be to place that block on the table. This action might not be possible from the start state, and so further actions need to be added before this action in order to reach it from the start state.
•  In this way, a plan can be formulated starting from the goal and working back toward the start state. The benefit of this method is particularly clear in situations where the first state allows a very large number of possible actions.
•  In this kind of situation, it can be very inefficient to attempt to formulate a plan using forward chaining, because it involves examining every possible action without paying any attention to which action might be the best one to lead to the goal state.
•  Backward chaining ensures that each action that is taken is one that will definitely lead to the goal, and in many cases this will make the planning process far more efficient.

4.15.3  Forward Reasoning

GQ. Explain forward and backward reasoning with examples.
GQ. Explain reasoning with example. Compare forward and backward reasoning with example.
GQ. Differentiate between forward and backward reasoning.

•  Forward reasoning is also called forward chaining in the field of artificial intelligence. It is one of the methods used as a reasoning engine when working with inference-driven entities.
•  Forward reasoning is one of the most popular implementation strategies in the concepts of expert systems and production rule-based systems.
•  With forward chaining, the system makes use of the existing data alongside the inference rules to extract more data from the user until a certain goal is reached.
•  An inference engine will iterate through the process of obtaining new data to eventually satisfy the goal.
•  The working is based on the real-world implementation of the if-then clauses.
•  Forward chaining is a bottom-up process, in which it works from the bottom to the top in the order of the occurrence of data.

4.15.4  Modus Ponens

UQ. Explain modus ponens with suitable example.

•  Forward reasoning (chaining) can be described as repeated application of 'modus ponens'.
•  Forward chaining is a popular implementation strategy for expert systems, business and production rule systems.
•  We consider the following famous example, which we will use in both forward and backward reasoning.
Remark : 'Modus ponens' is a rule of inference, and it states that if P and P → Q are true, then Q is also true.

Ex. 4.15.1 : "As per the law, it is a crime for an American to sell weapons to hostile nations. Country A, an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who is an American citizen."
Prove that "Robert is a criminal".
Soln. : To solve the problem, we convert all the facts into first-order definite clauses, and then we will use a forward-chaining algorithm to reach the goal.
(I) Facts converted into First Order Logic (FOL) :
(a) It is a crime for an American to sell weapons to hostile nations. (Let p, q, r be the variables.)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p)   …(1)
(b) Country A has some missiles : ∃p Owns(A, p) ∧ Missile(p).
It can be written as two definite clauses by using Existential Instantiation, introducing the new constant T1 :
Owns(A, T1)   …(2)
Missile(T1)   …(3)
(c) All of the missiles were sold to country A by Robert :
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A)   …(4)
(d) Missiles are weapons :
Missile(p) → Weapon(p)   …(5)
(e) An enemy of America is known as hostile :
Enemy(p, America) → Hostile(p)   …(6)
(f) Country A is an enemy of America :
Enemy(A, America)   …(7)
(g) Robert is American :
American(Robert)   …(8)
Remark : Existential instantiation (also called existential elimination) is a rule of inference which says that, given a formula of the form (∃x) φ(x), one may infer φ(c) for a new constant symbol c.

4.15.5  Forward Chaining Proof

► Step 1 : We begin with the known facts and choose the sentences which do not have implications, such as : American(Robert), Enemy(A, America), Owns(A, T1) and Missile(T1).
All these facts are represented as :
[Level 1 : American(Robert), Missile(T1), Owns(A, T1), Enemy(A, America)]
► Step 2 : We see which facts can be inferred from the available facts, i.e. which rules have satisfied premises.
Rule (1) does not satisfy its premises, so it will not be added in the first iteration.
Facts (2) and (3) are already added.
Rule (4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is inferred from the conjunction of facts (2) and (3).
Rule (5) is satisfied with the substitution {p/T1}, so Weapon(T1) is added, and Rule (6) is satisfied with the substitution {p/A}, so Hostile(A) is added.
[Level 2 : Weapon(T1), Sells(Robert, T1, A), Hostile(A)]
► Step 3 : As we can check, Rule (1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. And hence we have reached our goal statement.
[Level 3 : Criminal(Robert)]
Hence it is proved that Robert is a criminal, using the forward reasoning (chaining) approach.

4.15.6  Backward Chaining

Backward chaining is also known as a backward deduction or backward reasoning method when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.

4.15.7  Properties of Backward Reasoning (Chaining)

(a) It is known as a top-down approach.
(b) Backward chaining is based on the modus ponens inference rule.
(c) In backward chaining, the goal is broken into sub-goals to prove the facts true.
(d) It is called a goal-driven approach, as a list of goals decides which rules are selected and used.
(e) The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants and various AI applications.
(f) The backward-chaining method mostly uses a depth-first search strategy for proof.

Example : We use the same problem as above in backward chaining.
We rewrite all the rules :
(a) American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p)   …(1)
(b) Owns(A, T1)   …(2)
(c) Missile(T1)   …(3)
(d) ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A)   …(4)
(e) Missile(p) → Weapon(p)   …(5)
(f) Enemy(p, America) → Hostile(p)   …(6)
(g) Enemy(A, America)   …(7)
(h) American(Robert)   …(8)

4.15.8  Backward Chaining Proof

In backward chaining, we begin with our goal predicate, which is Criminal(Robert), and then infer further rules.
► Step 1 : We assume the goal fact, and from the goal fact we infer other facts; at last, we prove those facts true.
So our goal fact is "Robert is a criminal", and the following is the predicate of it :
Criminal(Robert)
► Step 2 : Here we infer other facts from the goal fact which satisfy the rules. As we can see, in Rule (1) the goal predicate Criminal (Robert) is present with the substitution {Robert/p}. So we will add all the conjunctive facts below the first level and replace p with Robert. Here we can see that American (Robert) is a fact, so it is proved here.

[Fig. : Criminal (Robert) at the root, with American (Robert), Weapon (q), Sells (Robert, q, r) and Hostile (r) below it]

► Step 3 : At step 3, we extract the further fact Missile (q), which is inferred from Weapon (q), as it satisfies Rule (5). Weapon (q) is also true with the substitution of the constant T1 at q.

[Fig. : Missile (q) added below Weapon (q), with the substitution {q/T1}]

► Step 4 : Now we can infer the facts Missile (T1) and Owns (A, T1) from Sells (Robert, T1, r), which satisfies Rule (4), with the substitution of A in place of r. So these two statements are proved here.

► Step 5 : Now we can infer the fact Enemy (A, America) from Hostile (A), which satisfies Rule (6). And hence all the statements are proved true using backward chaining.
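The proof in Steps 1-5 can be sketched as a small program. The following is a minimal, illustrative backward chainer with the substitutions already applied by hand (a full first-order prover would compute them by unification); the dictionary encoding below is our own, not part of the text.

```python
# Ground (already-substituted) rules: each conclusion maps to the list
# of premise sets that would establish it.
rules = {
    "Criminal(Robert)": [["American(Robert)", "Weapon(T1)",
                          "Sells(Robert,T1,A)", "Hostile(A)"]],
    "Sells(Robert,T1,A)": [["Missile(T1)", "Owns(A,T1)"]],
    "Weapon(T1)": [["Missile(T1)"]],
    "Hostile(A)": [["Enemy(A,America)"]],
}
facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}

def prove(goal):
    """Work backward from the goal: it holds if it is a known fact or
    if all premises of some rule concluding it can be proved."""
    if goal in facts:
        return True
    return any(all(prove(p) for p in premises)
               for premises in rules.get(goal, []))
```

Calling prove("Criminal(Robert)") walks exactly the tree of Steps 1-5, bottoming out in the four known facts.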

4.15.9 Comparison between Forward and Backward Reasoning

Sr. No. | Forward Reasoning | Backward Reasoning
1. | It is a data-driven task. | It is a goal-driven task.
2. | It begins with new data. | It begins with conclusions that are uncertain.
3. | The objective is to find a conclusion. | The objective is to find the facts that support the conclusions.
4. | It uses an opportunistic type of approach. | It uses a conservative type of approach.
5. | It flows from the incipient to the consequence. | It flows from the consequence to the incipient.
6. | The precedence of these constraints has to match the current state. | It is based on the decision fetched from the initial state. The system helps choose a goal state and reasons in a backward direction. It is also known as a decision-driven or goal-driven inference technique.


Sr. No. | Forward Reasoning | Backward Reasoning
7. | The inference engine searches the knowledge base with the given information, depending on the constraints. | The first step is that the goal state and the rules are selected.
8. | The first step is that the system is given one or more constraints. | Sub-goals are made from the selected rule, which need to be satisfied for the goal state to be true.
9. | The rules are searched for in the knowledge base for every constraint. | The initial conditions are set such that they satisfy all the sub-goals. The established states are matched to the initial state provided.
10. | The rule that fulfils the condition is selected. | If the condition is fulfilled, then the goal is the solution; otherwise the goal is rejected.
11. | Every rule can produce a new condition from the conclusion obtained from the invoked one. | If it has fewer rules to test, it provides a small amount of data.
12. | New conditions can be added and are processed again. The step ends if no new conditions exist. | It contains a small number of initial goals and a large number of rules.
13. | It follows top-down reasoning. | It follows a bottom-up reasoning technique.

In forward reasoning, reasoning proceeds forward, beginning with facts, chaining through rules, and finally establishing the goal. When the left side of a sequence of rules is instantiated first and the rules are executed from left to right, the process is called forward chaining/reasoning. This is also known as data-driven search, since input data are used to guide the direction of the inference process. For example, we can chain forward to show that when a student is encouraged, is healthy, and has goals, the student will succeed :

ENCOURAGED (student) → MOTIVATED (student)
MOTIVATED (student) ∧ HEALTHY (student) → WORKHARD (student)
WORKHARD (student) ∧ HASGOALS (student) → EXCELL (student)
EXCELL (student) → SUCCEED (student)

On the other hand, when the right side of a rule is instantiated first, the left-hand conditions become subgoals. These subgoals may in turn cause sub-subgoals to be established, and so on, until facts are found to match the lowest subgoal conditions. When this form of inference takes place, we say that backward chaining is performed. This form of inference is also known as goal-driven inference, since an initial goal establishes the backward direction of the inferring.

For example, in MYCIN the initial goal in a consultation is "Does the patient have a certain disease ?" This causes subgoals to be established, such as "Are certain bacteria present in the patient ?" Determining if certain bacteria are present may require such things as tests on cultures taken from the patient. This process of setting up subgoals to confirm a goal continues until all the subgoals are eventually satisfied or fail. If satisfied, the backward chain is established, thereby confirming the main goal.
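The four student rules above can be run as a tiny data-driven loop. A sketch (the set-based encoding is ours, not from the text):

```python
# The four student rules, encoded as (premises, conclusion) pairs.
rules = [({"ENCOURAGED"}, "MOTIVATED"),
         ({"MOTIVATED", "HEALTHY"}, "WORKHARD"),
         ({"WORKHARD", "HASGOALS"}, "EXCELL"),
         ({"EXCELL"}, "SUCCEED")]
facts = {"ENCOURAGED", "HEALTHY", "HASGOALS"}

# Data-driven (forward) chaining: keep firing rules until nothing new
# can be added to the fact base.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
```

Starting from ENCOURAGED, HEALTHY and HASGOALS, the loop derives MOTIVATED, WORKHARD, EXCELL and finally SUCCEED — the forward chain described in the paragraph above.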

Some systems use both forward- and backward-chaining reasoning, depending on the type of problem and the information available. Likewise, rules may be tested exhaustively or selectively, depending on the control structure.

4.16 EXAMPLE TO COMPARE FORWARD AND BACKWARD CHAINING

In this case, we will revert to our use of symbols for logical statements, in order to clarify the explanation, but we could equally well be using rules about elevators or the weather.

Rules :
Rule 1 : A ∧ B → C    Rule 2 : A → D
Rule 3 : C ∧ D → E    Rule 4 : B ∧ E ∧ F → G
Rule 5 : A ∧ E → H    Rule 6 : D ∧ E ∧ H → I

Facts :
Fact 1 : A    Fact 2 : B    Fact 3 : F

Goal : Our goal is to prove H.

• First let us use forward chaining. As our conflict resolution strategy, we will fire rules in the order they appear in the database, starting from rule 1.
• In the initial state, rules 1 and 2 are both triggered. We will start by firing rule 1, which means we add C to our fact database. Next, rule 2 is fired, meaning we add D to our fact database.
• We now have the facts A, B, C, D, F, but we have not yet reached our goal, which is H.
• Now rule 3 is triggered and fired, meaning that fact E is added to the database.
• As a result, rules 4 and 5 are triggered. Rule 4 is fired first, resulting in fact G being added to the database, and then rule 5 is fired, and fact H is added to the database.
• We have now proved our goal and do not need to go on any further. This deduction is presented in the following table :

Table 4.16.1 : Deduction representation

Facts | Rules triggered | Rule fired
A, B, F | 1, 2 | 1
A, B, C, F | 2 | 2
A, B, C, D, F | 3 | 3
A, B, C, D, E, F | 4, 5 | 4
A, B, C, D, E, F, G | 5 | 5
A, B, C, D, E, F, G, H | 6 | STOP

• Now we will consider the same problem using backward chaining. To do so, we will use a goals database in addition to the rule and fact databases. In this case, the goals database starts with just the conclusion, H, which we want to prove. We will now see which rules would need to fire to lead to this conclusion. Rule 5 is the only one that has H as a conclusion, so to prove H we must prove the antecedents of rule 5, which are A and E.
• Fact A is already in the database, so we only need to prove the other antecedent, E. Therefore, E is added to the goals database. Once we have proved E, we know that this is sufficient to prove H, so we can remove H from the goals database.
• So now we attempt to prove fact E. Rule 3 has E as its conclusion, so to prove E we must prove the antecedents of rule 3, which are C and D.
• Neither of these facts is in the fact database, so we need to prove both of them. They are both therefore added to the goals database. D is the conclusion of rule 2, and rule 2's antecedent, A, is already in the fact database, so we can conclude D and add it to the fact database.
• Similarly, C is the conclusion of rule 1, and rule 1's antecedents, A and B, are both in the fact database. So we have now proved all the goals in the goals database and have therefore proved H, and can stop.
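The forward-chaining pass recorded in Table 4.16.1 can be reproduced mechanically. A sketch in our own encoding of rules 1-6 (the conflict-resolution strategy is the one stated above — fire the lowest-numbered triggered rule):

```python
# Rules 1-6 of the example as (antecedents, consequent) pairs.
rules = [({"A", "B"}, "C"), ({"A"}, "D"), ({"C", "D"}, "E"),
         ({"B", "E", "F"}, "G"), ({"A", "E"}, "H"), ({"D", "E", "H"}, "I")]
facts = {"A", "B", "F"}
goal = "H"

fired = set()
trace = []                       # (rule number, fact added) per cycle
while goal not in facts:
    # Conflict resolution: fire the lowest-numbered triggered rule
    # whose conclusion is not already a known fact.
    for i, (ante, concl) in enumerate(rules, start=1):
        if i not in fired and ante <= facts and concl not in facts:
            facts.add(concl)
            fired.add(i)
            trace.append((i, concl))
            break
    else:
        break                    # nothing can fire: goal unreachable
```

Running this yields the firing sequence 1, 2, 3, 4, 5 and the growing fact sets of Table 4.16.1, stopping once H is derived (so rule 6 never fires).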
• This process is represented in the table below :

Table 4.16.2 : Process representation

Facts | Goals | Matching rule
A, B, F | H | 5
A, B, F | E | 3
A, B, F | C, D | 1
A, B, C, F | D | 2
A, B, C, D, F | — | STOP

• In this case, backward chaining needed to use one fewer rule. If the rule database had a large number of other rules that had A, B and F as their antecedents, then forward chaining would have been even more inefficient.
• In general, backward chaining is appropriate in cases where there are few possible conclusions (or even just one) and many possible facts, not very many of which are necessarily relevant to the conclusion.
• Forward chaining is more appropriate when there are many possible conclusions. The way in which forward or backward chaining is usually chosen is to consider which way an expert would solve the problem. This is particularly appropriate because rule-based reasoning is often used in expert systems.
..t-,.
4.17 DIFFERENCE BETWEEN FORWARD AND BACKWARD CHAINING

Sr. No. | Forward chaining | Backward chaining
1. | It is a problem-solving technique used by an expert system when it is faced with a scenario and has to give a solution or conclusion for that scenario. | It is a reasoning technique employed by an expert system to take a goal and prove that this goal is founded legitimately according to the rule base it possesses.
2. | The system will work its way through the rules, finding which ones fit and which lead to which goals, using deductive reasoning. | It is a form of reverse engineering, which is very applicable in situations where there are so many rules that could be applied to a single problem that the system could be sifting through rules before it gets anywhere.
3. | Forward chaining is used when a conclusion is not known beforehand and the system has to reason its way through to one. | Backward chaining is more appropriate when the conclusion is already known.
4. | It matches conditions and then generates inferences from those conditions. These conditions can in turn match other rules. Basically, this takes a set of initial conditions and then draws all inferences from those conditions. | Backward chaining is used for interrogative applications (finding items that fulfil certain criteria); one commercial example of a backward-chaining application might be finding which insurance policies are covered by a particular reinsurance contract.
5. | Starts with initial facts. | Starts with some hypothesis or goal.

Sr. No. | Forward chaining | Backward chaining
6. | Asks many questions. | Asks a few questions.
7. | Tests all the rules. | Tests some rules.
8. | Slow, because it tests all the rules. | Fast, because it tests fewer rules.
9. | Provides a huge amount of information from just a small amount of data. | Provides a small amount of information from just a small amount of data.
10. | Attempts to infer everything possible from the available information. | Searches only that part of the knowledge base that is relevant to the current problem.
11. | Primarily data-driven. | Goal-driven.
12. | Uses input; searches rules for an answer. | Begins with a hypothesis; seeks information until the hypothesis is accepted or rejected.
13. | Top-down reasoning. | Bottom-up reasoning.
14. | Works forward to find conclusions from the facts. | Works backward to find facts that support the hypothesis.
15. | Tends to be breadth-first. | Tends to be depth-first.
16. | Suitable for problems that start from data collection, e.g. planning, monitoring, control. | Suitable for problems that start from a hypothesis, e.g. diagnosis.
17. | Non-focused, because it infers all conclusions and may answer unrelated questions. | Focused; the questions are all focused on proving the goal, and it searches only the part of the KB that is related to the problem.
18. | Explanation is not facilitated. | Explanation is facilitated.
19. | All data is available. | Data must be acquired interactively (i.e. on demand).
20. | A small number of initial states but a high number of conclusions. | A small number of initial goals and a large number of rules that match the facts.
21. | Forming a goal is difficult. | Easy to form a goal.

Syllabus topic : Semantic Networks

4.18 SEMANTIC NETWORKS

UQ. What are semantic networks (or semantic nets) and their application ?

• Semantic networks are an alternative to predicate logic as a form of knowledge representation.
• The idea is that we can store our knowledge in the form of a graph, with nodes representing objects in the world, and arcs representing relationships between those objects.
• The physical attributes of a person can be represented as in Fig. 4.18.1.

[Fig. 4.18.1 : A semantic network — Person is_a Mammal and has_part Head; Yuvraj is an instance of Person, with team colours Black/Blue and team PWI]

• These values can also be represented in logic as : is_a (person, mammal), instance (Yuvraj, person), team (Yuvraj, PWI). We have already seen how conventional predicates such as lecturer (Poonam) can be written as instance (Poonam, lecturer).
• But we have a problem : how can we have more than two-place predicates in semantic nets ? For example, score (PWI, India, 20).
• The solution is to create new nodes to represent new objects either contained or alluded to in the knowledge — game and fixture in the current example. Relate information to the nodes and fill up the slots.

[Fig. 4.18.2 : A semantic network for an n-place predicate — a game node with home_team, visiting_team and score arcs]

4.18.1 Advantages, Disadvantages of Semantic Nets

Advantages of semantic nets
1. Semantic networks can represent default values for different categories.
2. Semantic networks are simple and easy to understand.
3. Semantic networks are easy to translate into Prolog.
4. Semantic network arcs represent relationships between nodes.
5. In semantic networks the relationships are handled by pointers.
6. Semantic networks provide good visualization. Being a diagrammatic representation, they are easy to view.

Limitations of semantic networks
1. The lack of link-name standards makes it difficult to understand the meaning of a net.
2. Even the naming of nodes is not standard. If a node is labelled "car", this may mean the class of cars, a specific car, or the concept of a car.
3. Answering a negative query like "is XYZ a ..." takes a very long time.
4. Semantic nets are logically inadequate, because they cannot define knowledge in the way logic can.
5. Semantic nets can be used better for representing binary relations, but not all types of relations.
6. Logic enhancements have been made, and heuristic enhancements have been tried, by attaching procedures to the nodes in the semantic nets. These procedures are executed when the node is activated.
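A semantic net of this kind is easy to hold as a set of (node, arc, node) triples, with relationships followed by lookup, as the pointer remark above suggests. The sketch below also shows the reification trick for the three-place predicate score (PWI, India, 20) — a new game node carries the pieces of the relation. All node names here are illustrative assumptions, not from the text.

```python
# A semantic net stored as (node, arc, node) triples. The 3-place
# predicate score(PWI, India, 20) is reified as the new node "game1".
triples = {
    ("Person", "is_a", "Mammal"),
    ("Person", "has_part", "Head"),
    ("Yuvraj", "instance", "Person"),
    ("Yuvraj", "team", "PWI"),
    ("game1", "instance", "Game"),
    ("game1", "home_team", "PWI"),
    ("game1", "visiting_team", "India"),
    ("game1", "score", "20"),
}

def arcs_from(node):
    """All (arc, target) pairs leaving a node — the outgoing pointers."""
    return {(arc, tgt) for (src, arc, tgt) in triples if src == node}
```

Querying arcs_from("game1") recovers every component of the reified three-place relation through ordinary binary arcs.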

"'-~•:.!·:_ E-7-'~ ~f~~-~ ~kea, r~ (i) (a) Semantic net representation of sentence
,';""' Give semantic representation fo • f-. .:-,- :-<:- t""..· 1' ~ - ,
and Reason!

_. , r ollOWing "John gave book to marry'' will be :


: -• f.-~~ ~ :3~f~~~n_ H!~: ___ • , 1 acts.~{
--- _______,,._,_._:it.· __ ~ gave •I BookJJ to •I MarryJ
RaJll is taller than hari :
Flg.4.18.6
(b) Semantic net representation of sentence
height "Every dog bas bitten a postman" will be :
H1
taller-than Dogs Bite 6>ostma].
Fig. 4.18.3
-------~ ------ -- ls_a is_a ls_a
( ;1' construct semantic net representat-: - - - -- - -,
~ ion for the
1 ""' •
,
,.:rr,.'7';"
• .
following : ~;... ~,

.,_, ._. ·- _ .
'f. ~ ~ ;;~
Ji d b m
"y
Victim
, (i)~ •• Pompeian (Marcus), Blacksmitfi
(
(Marcus) • r: 1
}
_ ___. Assailant - - -
I ,~ .
I ' ' ti ~ • • I
r 00 _.Ram gave t e gree'"! flower~ vase,.to her•fa . •
l
•• f ·1.vinte :f 1 Fig. 4.18.7
' cousin. t .;
!--- ~:. - - - -~- - _.,_,,__.,_ ----- - __.._ --~ •
-- ----,.;:,,r.J
The nodes-dogs, bite and hitings and mail earners
(i) semantic net representation of "Pompeian
nodes d, b and m represent a particular dog. a
(Marcus), Blacksmith (Marcus)" will be
' particular biting and a particular mail earner This fact
~ompelan I• Is a Marcus } ls_a •[01acksm ~ J can easily be represented by a single net with no
partitioning If we represent the fact "Every dog has
Fig.4.18.4
bitten a postman" or the logic
(ti) Semantic net representation of "Marry gave the V(x) Dog(x) ➔ 3 y Postman- artier (y) A Bite
green flowered vase to her favorite cousin" will (x, y)
be,
Green

has colour
I I Batter Hit

r.-.;;.ga_v_e-i Flowered vase


1s_a
to
is Marry's r----. a b IS
t-----"--.1 Favourita
Assanant Victim
Fig. 4.18.S
t - i - . ~ ------ - - - - --- ----,- - - - -...-, - - - - ----- - - - .,
Fig. 4.18.8
~ ' • ,I I
.....'.,1':
~..,_.J• ~':,t~-,~ ('~,l
-
iGQ:.:et ~ ~ "'~•• - t~\.,t,
~ (i) ~ Rep(esentthe"'foliowing sentence in semantic net:~ : (ii) The node 'g' stands for the assertion given above.
lr ~. , (a) ~.:.John gave the book to'marry,""t)..,, ,, ~ ~• Node g is an instance of spedal class of OS of
.., ~ />
...

1 . ..t ~) Every dog has bitten a postman. general statements. Every element GS has at least
:•00 ~ Bharathla~ Universi~ Computer Centre, the ·mini-
.,,()
two attributes, a form which states the relation that
~ ', computer system is. a generic' node because !Tiany is being asserated. For even dog d, there exists a
1 -' mini-c9mputer systems exist' and that nod.! has to
biting event b and a postman m such that d is the
eate; to all of 'them. On the contrary, lndividu~I of
assailant of b and m is the victim.
~ . - instance nodes explicitly state that they are s~c
·"

.__
1'l •
_,,_
Instances of,a generic node. •
-- - - - - - - - - - - - -- - -- - -- --- ---
.J


[Fig. : Semantic net for the Bharathiar University Computer Centre — a generic mini-computer node with the instance HCL Horizon III; has_part arcs to the line printer, hammer-bank and monitor; a has_a arc to Speed; the centre is part_of Bharathiar University, which is_in Coimbatore]
Syllabus topic : Resolution in FOPL

4.19 RESOLUTION

UQ. What do you mean by resolution and unification ? Explain with an example.
OR : Write a short note on : resolution algorithm.

4.19.1 Resolution and Unification

If various statements are given, and we are required to state a conclusion from those statements, then this process is called resolution. Resolution is a single inference rule which can efficiently operate on the conjunctive normal form or clausal form. Unification is a key concept in proofs by resolution.

4.19.2 Resolution Algorithm

Robinson in 1965 introduced the resolution principle, which can be directly applied to any set of clauses. The principle is : "Given any two clauses A and B, if there is a literal P1 in A which has a complementary literal P2 in B, delete P1 and P2 from A and B and construct a disjunction of the remaining clauses. The clause so constructed is called the resolvent of A and B."

For example, consider the following clauses :
A : P ∨ Q ∨ R
B : ¬P ∨ Q ∨ R
C : ¬Q ∨ R

Clause A has the literal P, which is complementary to ¬P in B. Hence both of them are deleted, and a resolvent (the disjunction of A and B after the complementary literals are removed) is generated. That resolvent again has a literal Q whose negation is available in C. Hence, resolving those two, one has the final resolvent :

A : P ∨ Q ∨ R (given in the problem)
B : ¬P ∨ Q ∨ R (given in the problem)
D : Q ∨ R (resolvent of A and B)
C : ¬Q ∨ R (given in the problem)
E : R (resolvent of C and D)

It is possible to picturize the path of the problem using a deduction tree. In fact, it is easier to grasp the flow of the problem using the deduction tree. The deduction tree is :

P ∨ Q ∨ R     ¬P ∨ Q ∨ R
        \     /
        Q ∨ R     ¬Q ∨ R
             \    /
               R

Fig. 4.19.1 : Deduction tree

If A is a formula of predicate calculus, then (x | t). A denotes the formula that results when every occurrence of x in A is substituted by t.

Algorithm Steps
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in 1.
3. Repeat until either a contradiction is found, no progress can be made, or a predetermined amount of effort has been expended :
(i) Select two clauses. Call these the parent clauses.
(ii) Resolve them together. The resolvent will be the disjunction of all the literals of both parent clauses, with appropriate substitutions performed and with the following exception : if there is one pair of literals T1 and ¬T2 such that one of the parent clauses contains T1 and the other contains ¬T2, and if T1 and T2 are unifiable, then neither T1 nor ¬T2 should appear in the resolvent. We call T1 and ¬T2 complementary literals. Use the substitution produced by the unification to create the resolvent. If there is more than one pair of complementary literals, only one pair should be omitted from the resolvent.
(iii) If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses available to the procedure.

In some cases, the resolution might ultimately lead to NIL (the empty clause). The following is such an example.

4.19.3 University Solved Examples

Ex. 4.19.1 : Perform resolution on the set of clauses A : P ∨ Q ∨ R, B : ¬P ∨ R, C : ¬Q, D : ¬R.

Soln. :
A : P ∨ Q ∨ R (given)
B : ¬P ∨ R (given)
X : Q ∨ R (resolvent of A and B)
C : ¬Q (given)
Y : R (resolvent of X and C)
D : ¬R (given)
Z : NIL (resolvent of Y and D)

The deduction tree is :

P ∨ Q ∨ R     ¬P ∨ R
        \     /
        Q ∨ R     ¬Q
            \     /
              R     ¬R
               \    /
                NIL

Fig. Ex. 4.19.1
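The resolution step, and the saturation loop of the algorithm above, can be sketched for the propositional case. This is our own minimal encoding (clauses are frozensets of string literals, with "-P" standing for ¬P), demonstrated on the clauses of Ex. 4.19.1:

```python
def resolve(c1, c2):
    """All resolvents of two propositional clauses (frozensets of
    literals, where '-P' is the complement of 'P')."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("-") else "-" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def refutes(clauses):
    """Saturate the set with resolvents; True when the empty clause
    (NIL) is derived, i.e. the clause set is contradictory."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:            # empty clause reached
                        return True
                    new.add(r)
        if new <= clauses:               # nothing new: no contradiction
            return False
        clauses |= new

# Ex. 4.19.1: A = P v Q v R, B = -P v R, C = -Q, D = -R
ex_4_19_1 = [frozenset({"P", "Q", "R"}), frozenset({"-P", "R"}),
             frozenset({"-Q"}), frozenset({"-R"})]
```

refutes(ex_4_19_1) derives Q ∨ R, then R, then NIL — the same chain as the deduction tree; dropping ¬R from the set makes the contradiction disappear.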
UEx. 4.19.2 : Consider the following facts :
1. It is humid.
2. If it is humid, then it is hot.
3. If it is hot and humid, it will rain.
Prove that it will rain.

Soln. :
► Step I : Propositional symbols
It is humid : H
It is hot : O
It will rain : R
► Step II : Propositional logic
(i) H   (ii) H → O   (iii) H ∧ O → R
► Step III : In CNF form
(i) H   (ii) ¬H ∨ O   (iii) ¬H ∨ ¬O ∨ R
► Step IV : We assume the negation of the conclusion : it is not raining, i.e. ¬R.
► Step V : We form the resolution tree.
¬R     ¬H ∨ ¬O ∨ R
   \     /
  ¬H ∨ ¬O     ¬H ∨ O
         \    /
          ¬H     H
            \   /
             ( )

Fig. Ex. 4.19.2 : Resolution tree

Since the final resolvent is empty, we conclude that it will rain.

UEx. 4.19.3 (MU - Q. 4(a), Dec. 17, 10 Marks)
Consider the following axioms :
1. All people who are graduating are happy.
2. All happy people smile.
3. Someone is graduating.
Explain the following :
(i) Represent these axioms in first-order predicate logic.
(ii) Convert each formula to clause form.
(iii) Prove that "someone is smiling" using the resolution technique. Draw the resolution tree.

Soln. :
► Step I : Symbolic logic
x = people, G = graduating people, H = happy people, S = smiling people.
► Step II : First-order predicate logic
(i) ∀x G(x) → H(x)
(ii) ∀x H(x) → S(x)
(iii) ∃x G(x)
► Step III : In clause form
(i) ¬G(x) ∨ H(x)
(ii) ¬H(x1) ∨ S(x1)
(iii) G(x2)
► Step IV : We negate the conclusion : ¬∃x S(x) ≡ ∀x ¬S(x), i.e. ¬S(x3).

[Fig. Ex. 4.19.3 : resolution tree — ¬S(x3) is resolved with ¬H(x1) ∨ S(x1), then with ¬G(x) ∨ H(x), then with G(x2), yielding the empty clause]

Since resolution yields the empty clause, the negation is contradictory; hence someone is smiling.

UEx. 4.19.4 (MU - Q. 4(a), Dec. 18, 10 Marks)
Consider the statements : mammals drink milk, man is mortal, man is a mammal, Tom is a man. Prove that "Tom drinks milk".

Soln. :
► Step I : We have :
Tom is a man. Man is a mammal. Mammals drink milk.
So we have to establish that Tom drinks milk. First we write down the implication propositions :
M : Mammals drink milk : Mammal (Tom) → drink (Tom, Milk)
A : Man is mortal : Man (Tom) → mortal (Tom)
N : Man is a mammal : Man (Tom) → mammal (Tom)
S : Tom is a man : Man (Tom)
Goal : Tom drinks milk : drink (Tom, Milk)
► Step II : We note that
(i) Mammal (Tom) → drink (Tom, Milk)
(ii) Man (Tom) → mortal (Tom)
(iii) Man (Tom) → mammal (Tom)
Here (i), (ii) and (iii) are propositions, while Man (Tom) is an assertion.
► Step III : Now, in disjunction form :
(i) ¬Mammal (Tom) ∨ drink (Tom, Milk)
(ii) ¬Man (Tom) ∨ mortal (Tom)
(iii) ¬Man (Tom) ∨ mammal (Tom)
► Step IV : Resolution tree :

¬drink (Tom, Milk)     ¬Mammal (Tom) ∨ drink (Tom, Milk)
             \          /
          ¬Mammal (Tom)     ¬Man (Tom) ∨ mammal (Tom)
                     \       /
                    ¬Man (Tom)     Man (Tom)
                           \       /
                             (  )

Fig. Ex. 4.19.4

Since we have arrived at the empty clause, the assumption "Tom does not drink milk" is a contradiction. Therefore, Tom drinks milk.

UEx. 4.19.5 : Consider the following sentences : "It is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American."
(i) Represent these sentences in first-order predicate logic (FOPL).
(ii) Convert them to clause form.
(iii) Prove that "West is a criminal" using the resolution technique.

Soln. :
► Step I : To represent the sentences in FOPL :
(Let x : American, y : weapon, z : hostile nation, M : missile.)
1. It is a crime for an American to sell weapons to hostile nations.
FOPL : American (x) ∧ Weapon (y) ∧ Sells (x, y, z) ∧ Hostile (z) → Criminal (x)
2. Nono has some missiles : ∃x Owns (Nono, x) ∧ Missile (x)
FOPL : Owns (Nono, M) and Missile (M)
3. All of its missiles were sold to it by Colonel West.
FOPL : Missile (x) ∧ Owns (Nono, x) → Sells (West, x, Nono)
4. Missiles are weapons.
FOPL : Missile (x) → Weapon (x)
5. An enemy of America counts as "hostile".
FOPL : Enemy (x, America) → Hostile (x)
6. West is American.
FOPL : American (West)
7. The country Nono is an enemy of America.
FOPL : Enemy (Nono, America)

► Step II : To represent in CNF (using disjunction) :
1. ¬American (x) ∨ ¬Weapon (y) ∨ ¬Sells (x, y, z) ∨ ¬Hostile (z) ∨ Criminal (x)
2. Owns (Nono, M), Missile (M)
3. ¬Missile (x) ∨ ¬Owns (Nono, x) ∨ Sells (West, x, Nono)
4. ¬Missile (x) ∨ Weapon (x)
5. ¬Enemy (x, America) ∨ Hostile (x)
6. American (West)
7. Enemy (Nono, America)

► Step III : Resolution technique (using CNF) :
1. ¬American (x) ∨ ¬Weapon (y) ∨ ¬Sells (x, y, z) ∨ ¬Hostile (z) ∨ Criminal (x)


2. ¬Missile (x) ∨ ¬Owns (Nono, x) ∨ Sells (West, x, Nono)
3. ¬Enemy (x, America) ∨ Hostile (x)
4. ¬Missile (x) ∨ Weapon (x)
5. Owns (Nono, M)
6. Missile (M)
7. American (West)
8. Enemy (Nono, America)
9. ¬Criminal (West) (negated goal)
► Step IV : Conclusion
The resolution of these clauses leads to the empty clause, so we discard the assumption that West is not a criminal. Hence, we conclude that "West is a criminal".

UEx. 4.19.6 : Consider the following axioms :
All people who are graduating are happy. All happy people smile. Someone is graduating.
(i) Represent these axioms in FOL.
(ii) Convert each formula to CNF.
(iii) Prove that someone is smiling using the resolution technique. Draw the resolution tree.

Soln. :
► Step I : Converting the given axioms into first-order logic (FOL) :
(i) Let x stand for people :
(a) ∀x : graduating (x) → happy (x)
(b) ∀x : happy (x) → smile (x)
(c) Someone is graduating : ∃x : graduating (x)
► Step II : Converting first-order logic (FOL) to conjunctive normal form (CNF) :
Note that (x → y) is equivalent to (¬x ∨ y).
(a) ¬graduating (x) ∨ happy (x)
(b) ¬happy (x1) ∨ smile (x1)
(c) graduating (x2)
► Step III : We use the resolution technique. To show that ∃x3 smile (x3), we negate the statement : ¬smile (x3).

Resolution tree :

¬smile(x3)     ¬happy(x1) ∨ smile(x1)
        \       /
        ¬happy(x1)     ¬graduating(x) ∨ happy(x)
                \       /
               ¬graduating(x)     graduating(x2)
                          \        /
                          (Null set)

Fig. Ex. 4.19.6

Since we arrive at the null set, our assumption is wrong. Therefore, someone is smiling.

4.19.4 Unification

1. Unification is the process of finding substitutions, for lifted inference rules, which can make different logical expressions look identical.
2. Unification is a procedure for determining the substitutions needed to make two first-order logic expressions match.
3. Unification is an important component of all first-order logic inference algorithms.
4. The unification algorithm takes two sentences and returns a unifier for them, if one exists.

Unifier : A substitution that makes two clauses resolvable is called a unifier, and the process of identifying such unifiers is carried out by the unification algorithm.

The unification algorithm tries to find the Most General Unifier (MGU) between a given set of atomic formulae. Any substitution that makes two or more expressions equal is called a unifier.
Algorithm : Unify (L1, L2)

1. If L1 or L2 are both variables or constants, then :
(a) If L1 and L2 are identical, then return NIL.
(b) Else, if L1 is a variable : if L1 occurs in L2, then return {FAIL}, else return (L2/L1).
(c) Else, if L2 is a variable : if L2 occurs in L1, then return {FAIL}, else return (L1/L2).
(d) Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2.)
5. For i ← 1 to the number of arguments in L1 :
(a) Call Unify with the i-th argument of L1 and the i-th argument of L2, putting the result in S.
(b) If S contains FAIL, then return {FAIL}.
(c) If S is not equal to NIL, then :
(i) Apply S to the remainder of both L1 and L2.
(ii) SUBST := APPEND (S, SUBST).
6. Return SUBST.
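The algorithm above translates almost line for line into code. The sketch below is our own rendering, under assumed conventions : lowercase strings stand for variables, capitalized strings for constants and predicate symbols, and tuples for compound terms; it returns a substitution dictionary instead of the SUBST list, and None for FAIL.

```python
def is_var(t):
    # Assumed convention for this sketch: variables are lowercase strings.
    return isinstance(t, str) and t.islower()

def subst_apply(term, s):
    """Apply substitution s to a term (step 5(c)(i) of the algorithm)."""
    if is_var(term) and term in s:
        return subst_apply(s[term], s)
    if isinstance(term, tuple):
        return tuple(subst_apply(a, s) for a in term)
    return term

def occurs(var, term):
    """The occurs check behind the FAIL cases of step 1."""
    return var == term or (isinstance(term, tuple)
                           and any(occurs(var, a) for a in term))

def bind(var, term, s):
    if var in s:
        return unify(s[var], term, s)
    if occurs(var, term):
        return None                        # FAIL
    new = dict(s)
    new[var] = term
    return new

def unify(l1, l2, s=None):
    """Return a substitution dict unifying l1 and l2, or None (FAIL)."""
    s = {} if s is None else s
    if l1 == l2:
        return s                           # step 1(a): identical
    if is_var(l1):
        return bind(l1, l2, s)             # step 1(b)
    if is_var(l2):
        return bind(l2, l1, s)             # step 1(c)
    if (isinstance(l1, tuple) and isinstance(l2, tuple)
            and len(l1) == len(l2)):       # steps 3-5: same arity
        for a, b in zip(l1, l2):           # includes the predicate symbol
            s = unify(subst_apply(a, s), subst_apply(b, s), s)
            if s is None:
                return None
        return s
    return None                            # step 2/3: symbol or arity clash
```

For example, unify(('Sells', 'West', 'x', 'Nono'), ('Sells', 'West', 'M', 'Nono')) yields {'x': 'M'}, the kind of MGU used in the resolution proof of UEx. 4.19.5.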
'll. 4. t 9 .5 Conflict Resolution - - - -~ - - - - - - - - - - - - - - - - - a. - - - --- - ~
Refutation is nothing but a technique that a
f; ex;a~n- -:o~;i; ~~~; - ~l;i~ - ;;: -; ~-: resolution procedure used to prove a statemen
t. i.e., an
~ _ refutation. And let's take an example to explain this: : attempt to show that negation of the statemen
t
(f foll~ng statements are assum~ to be true : s i
produces a contradiction with known statements.
lt Steve only likes easy cou{SeS. • We consider an example :
t~
\
Science cQUrses are hard.
tne
, :
1
I
The following statements are assumed to be true:
\ ;. Alf courses in the basket weaving department ~ 1. Steve only likes easy courses.
~ are easy. ,
, i 2. Science courses are hard.
1- 'BK30[ is a basket weaving course. We ask : What I
course would steve like? 3. All the courses in the basket-weaving departm ent
OR • • •
1~~ 'What is conflict ~solution? Ulu_strate WI~ an : 4.
are easy.
BK 101 is a basket-weaving course.
°-·- example In any production system.
-- - - - - -
--------- --- ---
___.,,,._ ___....__..._,,,._
_._ -~~ We ask: What course would Steve like 7

(MS-126)    Tech-Neo Publications...A SACHIN SHAH Venture
Ex. (Steve's likes) : A Resolution Proof

The predicate logic encoding of the premises of the problem is as follows :
1.  ∀(x) easy(x) → likes(steve, x)
2.  ∀(x) science(x) → ¬easy(x)
3.  ∀(x) basketweaving(x) → easy(x)
4.  basketweaving(BK301)
The conclusion is encoded as likes(steve, x).

First we put our premises in clause form and add the negation of the conclusion to our set of clauses (we use numbers in parentheses to number the clauses) :
1.  ¬easy(x) or likes(steve, x)
2.  ¬science(x) or ¬easy(x)
3.  ¬basketweaving(x) or easy(x)
4.  basketweaving(BK301)
5.  ¬likes(steve, x)
A resolution proof is obtained by the following sequence of steps (each step gives a parenthesized number to the resolvent generated in that step; "1 and 5" means that we resolve clauses (1) and (5)) :
6.  1 and 5 yields the resolvent ¬easy(x).
7.  3 and 6 yields the resolvent ¬basketweaving(x).
8.  4 and 7 yields the empty clause; the substitution x/BK301 is produced by the unification algorithm, which says that the only wff of the form likes(steve, x) which follows from the premises is likes(steve, BK301).

Thus, resolution gives us a way to find additional assumptions (in this case x = BK301) which make the conclusion true.
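The refutation above can be checked mechanically at ground level. The sketch below is not the textbook's program: it applies the unifier's substitution x := BK301 up front and then runs plain propositional resolution to saturation, reporting whether the empty clause appears.

```python
from itertools import combinations

# Ground clauses after substituting x := BK301; "~" marks negation.
clauses = {
    frozenset({"~easy(BK301)", "likes(steve,BK301)"}),    # (1)
    frozenset({"~science(BK301)", "~easy(BK301)"}),       # (2)
    frozenset({"~basketweaving(BK301)", "easy(BK301)"}),  # (3)
    frozenset({"basketweaving(BK301)"}),                  # (4)
    frozenset({"~likes(steve,BK301)"}),                   # (5) negated goal
}

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def refutes(clause_set):
    """Saturate under binary resolution; True iff the empty clause appears."""
    known = set(clause_set)
    while True:
        derived = set()
        for c1, c2 in combinations(known, 2):
            for lit in c1:
                if negate(lit) in c2:
                    resolvent = frozenset((c1 - {lit}) | (c2 - {negate(lit)}))
                    if not resolvent:
                        return True          # empty clause: contradiction found
                    derived.add(resolvent)
        if derived <= known:
            return False                     # saturated without contradiction
        known |= derived

print(refutes(clauses))   # True: likes(steve, BK301) follows from the premises
```

Removing the negated goal (5) leaves a satisfiable set, so the same function then returns False, which is exactly what refutation-style proof expects.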

Chapter Ends...
CHAPTER 5 : Reasoning Under Uncertainty

Syllabus :
5.1  Handling Uncertain Knowledge : Random Variables, Prior and Posterior Probability, Inference using Full Joint Distribution.
5.2  Bayes' Rule and its use, Bayesian Belief Networks, Reasoning in Belief Networks.

5.1   Reasoning .......................................................................... 5-2
      5.1.1   Non-monotonic Logics ............................................... 5-2
5.2   Introduction to Random Variables ......................................... 5-2
5.3   Basic Terminology ............................................................... 5-2
5.4   Mathematical (or Classical or 'A Priori') Probability ................ 5-3
5.5   Statistical (or Empirical) Probability ..................................... 5-4
5.6   Limitations of Classical Probability ....................................... 5-4
5.7   Terms used in Axiomatic Theory ........................................... 5-4
5.8   Algebra of Events : For Events A, B, C ... ............................. 5-4
      5.8.1   Table of Probability Terms ........................................ 5-6
      5.8.2   Solved Example on Given Event in Terms of Standard Events ... 5-6
5.9   Axiomatic Definition of Probability ....................................... 5-6
5.10  Theorems on Probability of Events ....................................... 5-6
5.11  Solved Examples on Axiomatic Probability ............................ 5-6
      UEx. 5.11.1 ........................................................................ 5-6
      UEx. 5.11.2 ........................................................................ 5-6
      UEx. 5.11.3 ........................................................................ 5-7
5.12  Some Theorems on Probability ............................................ 5-8
      5.12.1  Solved Examples on Simplification of Given Events ...... 5-9
5.13  Conditional Probability ....................................................... 5-11
      5.13.1  Properties of Conditional Probability ......................... 5-11
5.14  Independent Events ............................................................ 5-12
5.15  Probabilistic Reasoning ...................................................... 5-13
5.16  Inference using Full Joint Distributions ................................ 5-14
5.17  Bayes Theorem .................................................................. 5-16
5.18  Bayesian Belief Networks ................................................... 5-17
      5.18.1  The Semantics of Bayesian Network .......................... 5-19
      5.18.2  Advantages and Disadvantages of Bayesian Belief Network ... 5-20
5.19  Decision Theory
      5.19.1  Types of Decision Making
•     Chapter End ...................................................................... 5-23
Artificial Intelligence (MU - AI & DS / Electronics)        Reasoning Under Uncertainty
Syllabus topic : Uncertain Knowledge and Reasoning - Handling Uncertain Knowledge

5.1  REASONING

•  In a reasoning system, there are several types of uncertainty. Reasoning-under-uncertainty research in AI is focused on uncertainty of truth value, in order to find values other than True and False.
•  To develop a system that reasons with uncertainty means to provide the following :
1.  An explanation of the origin and nature of the uncertainty.
2.  A representation of uncertainty in a formal language.
3.  A set of inference rules that derive uncertain conclusions.
4.  An efficient memory-control mechanism for uncertainty management.

5.1.1  Non-monotonic Logics

•  A reasoning system is monotonic if the truthfulness of a conclusion does not change when new information is added to the system.
•  In contrast, in a system doing non-monotonic reasoning, the set of conclusions may either grow or shrink when new information is obtained.
•  Simply speaking, the truth values of propositions in a non-monotonic logic can be classified into the following types :
1.  Facts that are definitely true, such as 'A crow is a bird'.
2.  Default rules that are normally true, such as 'Birds fly'.
3.  Tentative conclusions that are presumed true, such as 'A crow flies'.
•  Remark : When an inconsistency is recognised, only the truth value of the last type is changed.

Syllabus topic : Random Variables

5.2  INTRODUCTION TO RANDOM VARIABLES

•  We daily come across sentences like :
1.  Possibly, it will rain tonight.
2.  There is a high chance of my getting the job in July.
•  In the above sentences, words like 'possibly' and 'high chance' indicate a degree of uncertainty about the happening of the event.
•  A numerical measure of uncertainty is provided by a very important branch of mathematics called the 'Theory of Probability'.
•  Broadly, there are three possible states of expectation : 'certainty', 'impossibility' and 'uncertainty'.
•  Probability theory describes certainty by 1, impossibility by 0 and the various grades of uncertainty by coefficients ranging between 0 and 1.
•  According to Ya-Lin Chou : "Probability is the science of decision-making with calculated risks in the face of uncertainty."

5.3  BASIC TERMINOLOGY

Here we explain the various terms which are used in the definition of probability :
1. Random experiment            2. Outcome
3. Trial and event              4. Exhaustive events or cases
5. Favourable events or cases   6. Mutually exclusive events
7. Equally likely events        8. Independent events
9. Joint and conditional events

►  1. Random experiment
If in each trial of an experiment conducted under identical conditions the outcome is not unique, but may be any one of the possible outcomes, then such an experiment is called a random experiment, e.g. selecting a card from a pack of playing cards.
►  2. Outcome
The result of a random experiment is called an outcome.

►  3. Trial and event
(i)   Any particular performance of a random experiment is called a trial, and outcomes or combinations of outcomes are called events.
(ii)  If a coin is tossed, we may get head or tail.
(iii) Thus tossing of a coin is a random experiment or trial, and getting a head or a tail is an event.

►  4. Exhaustive events or cases
(i)   The total number of possible outcomes of a random experiment is known as the exhaustive events or cases.
(ii)  For example, in tossing a die, there are 6 exhaustive cases.

►  5. Favourable events or cases
(i)   The number of cases favourable to an event in a trial is the number of outcomes which include the happening of the event.
(ii)  For example, in throwing two dice, the number of cases favourable to getting the sum 5 is (1,4), (4,1), (2,3), (3,2), i.e. 4.

►  6. Mutually exclusive events
(i)   Events are said to be mutually exclusive if the happening of any one of them precludes the happening of all the others, i.e., no two or more can happen simultaneously in the same trial.
(ii)  For example, in tossing a die all the 6 faces numbered 1 to 6 are mutually exclusive, since if any one of the faces comes, the possibility of the others is ruled out.

►  7. Equally likely events
(i)   Events are equally likely if there is no reason to expect one in preference to the others.
(ii)  For example, in throwing an unbiased die, all six faces are equally likely.

►  8. Independent events
UQ. State and explain : Independent events.
(i)   Several events are said to be independent if the happening (or non-happening) of an event is not affected by the happening (or non-happening) of the remaining events.
(ii)  For example, when a die is thrown twice, the result of the first throw does not affect the result of the second throw.

►  9. Joint and conditional events
UQ. State and explain : Joint and conditional events.
□  Definition : Two events X and Y are said to be independent if P(X) ≠ 0, P(Y) ≠ 0, and if P(X | Y) = P(X) and P(Y | X) = P(Y).
□  Joint events : Let X and Y be two events; then the happening of X and Y together is called a joint event and is denoted by (X ∩ Y).
□  Conditional probability : The probability P(X) of an event X represents the likelihood of an outcome in the set X relative to the sample space S. But if we have the information that the outcome of the random experiment lies in a set Y of S, then instead of finding the outcome of X w.r.t. S, we find the outcome of X w.r.t. Y; this is denoted by P(X | Y) and read as the conditional probability of the event X given that Y has already occurred.

Syllabus topic : Prior and Posterior Probability

5.4  MATHEMATICAL (OR CLASSICAL OR 'A PRIORI') PROBABILITY

□  Definition of Probability : If an event E happens in m ways out of n possible equally likely ways, the probability of occurrence of the event E is defined as
        P(E) = p = m/n
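The classical definition P(E) = m/n can be checked by enumerating the exhaustive cases directly. A small illustrative sketch (not from the text), counting the cases favourable to "the sum is 5" with two dice, as in the example under favourable events above:

```python
from itertools import product
from fractions import Fraction

# Exhaustive cases for two dice: 6 x 6 = 36 equally likely outcomes.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 5]   # (1,4), (4,1), (2,3), (3,2)

p = Fraction(len(favourable), len(outcomes))        # P(E) = m / n
print(len(favourable), p)                           # 4 1/9
```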
The probability of non-occurrence is defined by
        P(not E) = q = (n − m)/n = 1 − m/n = 1 − p,  so that  p + q = 1.

5.5  STATISTICAL (OR EMPIRICAL) PROBABILITY

□  Definition : If an experiment is performed repeatedly under identical conditions, then the limit of the ratio of the number of times the event occurs to the number of trials, as the number of trials becomes infinitely large, is called the probability of that event.
        i.e.  P(E) = lim (n → ∞) m/n

5.6  LIMITATIONS OF CLASSICAL PROBABILITY

The definition of classical probability breaks down in the following cases :
(i)   If a person jumps from a running train, then the probability of his survival will not be 50%, since in this case the events survival and death, though exhaustive and mutually exclusive, are not equally likely.
(ii)  To remove the limitations of the classical theory of probability, we put it in axiomatic form.
(iii) First we mention some terms, some axioms and theorems, and then develop the axiomatic probability theory.

5.7  TERMS USED IN AXIOMATIC THEORY

(I)  Sample space
(1)  The set of all possible outcomes of a given random experiment is called the sample space associated with that experiment.
(2)  Each possible outcome or element in a sample space is called a sample point or an elementary event.
(3)  The number of sample points in the sample space is denoted by n(S).
(4)  A sample space is said to be finite (infinite) if the number of elements in S is finite (infinite).
(5)  A sample space is called discrete if it contains only finitely or infinitely many points which can be arranged into a simple sequence w1, w2, ....

(II)  Event
□  Definition : "Every non-empty subset A of S of a random experiment E is called an event."
An event may also be explained as follows : "Of all possible outcomes in the sample space of an experiment, some outcomes satisfy a specified description, which we call an event."
1.  As the empty set φ is a subset of S, φ is also an event, known as the impossible event.
2.  An event A can be a single-element subset of S, in which case it is known as an elementary event.
3.  Events in which we are interested in two or more than two characteristics are called compound events. A compound event can be decomposed into two or more simple events, e.g. the compound event of getting a "red picture card" can be decomposed into the two simple events of getting a red card and getting a picture card.

5.8  ALGEBRA OF EVENTS : FOR EVENTS A, B, C ...

(In what follows, A' denotes the complement of A in S.)
(i)    A ∪ B = {e ∈ S | e ∈ A or e ∈ B}
(ii)   A ∩ B = {e ∈ S | e ∈ A and e ∈ B}
(iii)  A' (A complement) = {e ∈ S | e ∉ A}
(iv)   A − B = {e ∈ S | e ∈ A but e ∉ B}
(v)    A ⊂ B ⇒ every e ∈ A is also in B; A ⊂ B ⇒ B ⊃ A.
(vi)   A = B if and only if A and B have the same elements.
(vii)  A and B disjoint (mutually exclusive) ⇒ A ∩ B = φ (the empty set).
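The limiting ratio m/n of Section 5.5 can be illustrated by simulation. This is a sketch under the assumption of a fair coin (heads with probability 0.5); the relative frequency drifts toward 0.5 as the number of trials grows:

```python
import random

random.seed(0)   # fixed seed so the run is repeatable

def relative_frequency(n_trials):
    """Relative frequency m/n of 'heads' in n_trials tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))   # the ratio settles near 0.5 as n grows
```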
(viii) A ∪ B = A + B if A and B are disjoint.
(ix)   A Δ B denotes those e belonging to exactly one of A and B :
       A Δ B = (A ∩ B') ∪ (A' ∩ B) = (A ∩ B') + (A' ∩ B)   (disjoint events).
(x)    De Morgan's Laws : (A ∪ B)' = A' ∩ B'  and  (A ∩ B)' = A' ∪ B'
(xi)   Laws of distribution :
       A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
       A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

5.8.1  Table of Probability Terms

Statement                                        Meaning
1. At least one of the events A or B occurs      e ∈ A ∪ B
2. Both the events A and B occur                 e ∈ A ∩ B
3. Neither A nor B occurs                        e ∈ A' ∩ B'
4. Event A occurs and B does not occur           e ∈ A ∩ B'
5. Exactly one of the events A or B occurs       e ∈ A Δ B
6. If event A occurs, so does B                  A ⊂ B
7. Events A and B are mutually exclusive         A ∩ B = φ
8. Complementary event of A                      A'
9. Sample space                                  universal set S

5.8.2  Solved Example on Given Event in Terms of Standard Events

To express each of the required events in terms of other given events :

Ex. 5.8.1 : A, B and C are three arbitrary events. Find expressions for the events noted below, in terms of A, B and C.
(i) Only A occurs.  (ii) Both A and B, but not C, occur.  (iii) All three events occur.  (iv) At least one occurs.  (v) At least two occur.  (vi) One and no more occur.  (vii) Two and no more occur.  (viii) None occurs.

Soln. :
(i)    A ∩ B' ∩ C'
(ii)   A ∩ B ∩ C'
(iii)  A ∩ B ∩ C
(iv)   A ∪ B ∪ C
(v)    (A ∩ B ∩ C') ∪ (A ∩ B' ∩ C) ∪ (A' ∩ B ∩ C) ∪ (A ∩ B ∩ C)
(vi)   (A ∩ B' ∩ C') ∪ (A' ∩ B ∩ C') ∪ (A' ∩ B' ∩ C)
(vii)  (A ∩ B ∩ C') ∪ (A ∩ B' ∩ C) ∪ (A' ∩ B ∩ C)
(viii) A' ∩ B' ∩ C', i.e. (A ∪ B ∪ C)'

5.9  AXIOMATIC DEFINITION OF PROBABILITY

□  Definition : Let S be the sample space and let A be an event (subset) of S. Then :
1.  P(A) ≥ 0                                                  (axiom of non-negativity)
2.  P(S) = 1                                                  (axiom of certainty)
3.  P(A ∪ B) = P(A) + P(B), where A and B are disjoint events (axiom of additivity)
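Because events are sets, the identities of Section 5.8 and the answers of Ex. 5.8.1 can be spot-checked with Python's set type. The sample space and events below are arbitrary illustrative choices, with A' written comp(A):

```python
S = set(range(1, 13))                     # sample space
A, B, C = {1, 2, 3, 4}, {3, 4, 5, 6}, {4, 6, 8, 10}
comp = lambda X: S - X                    # complement in S

# (x) De Morgan's laws
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)

# Ex. 5.8.1 (i): "only A occurs" = A intersect B' intersect C'
only_A = A & comp(B) & comp(C)
print(sorted(only_A))          # [1, 2]

# Ex. 5.8.1 (v): "at least two occur"; this pairwise form equals
# the four-term union given in the solution.
at_least_two = (A & B) | (B & C) | (C & A)
print(sorted(at_least_two))    # [3, 4, 6]
```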


~
(¾+%+ 1) p
S.10 THEOREMS ON PROBABILITY Of
EVENTS
l\l

... p = n
► .Theorem (1> liMi'WMi11
(!) Probability of an impossible event cj) is zero,
_.,.._~e.:.:•~P.(t) ='o. · ,;.. • .:... ~ - -
'1' ,. ~1~ ~
_l!
~
2 4
:. P(A) =11, P(B) = 11' P(C) =n5 ,
...
Proof: ._. SUcj,=S UEX, 5.11•2rm\lM€WI
\I ~I
P (SUcj,) = P(S) = l (Axiom 2) A ball is drawn at random from a box containing !
Jed, 18 white, 10 blue and 15 orange balls. F'Uld· ~
P{S) + P{cj)) = I (Axiom 3)
:. P(¢,) = 0
probability that y' I
(i) It is red or blue (ii) white, blue or orang~ ' -~
(2) Complimentary events : The events A and A ~
1
I
@i) Neither white nor orange. ~
where A is the complement of A in S are called
complimentary events. 0soln.:
► Step (I) : Let A,B,C and D be the events o f ~
► Theorem (2) : P(A) = 1 - P(A)
red, white, blue and orange balls respectively. Tota1
Proof: Wehave AU.A = S number of balls = 12 + 18 + 10 + 15 :: ss.
Probability of drawing one red ball is
:. P(AUA) = P(S) = I
12C1 12
= P(A) = 55C1 = 55' Similarly
P(A) + P{A) = 1, ("." A and A are disjoint)
18C1 18 lOC1 10
:. P(A) = 1 - P(A) 55
and P(B) = 55C1 = 55 ' P(C) = 55C1 =
• 1:1 ~SOLVE°irEXAMPLESON' 15C1 15
~;'~ :~•.r • AXIOMAT • • and P(D) = 55C = 55
•• ' .. , .;~ • ~
. . . ,. 4
1
u~i.11,.1 rmmw.mt:Ji'>". - ,~ ~ . •• -- ' ~ ► Step (II) : Probability of a red or blue ball
~B,'C are bidding for a CO.!}~ct. A' has exa~tly h~ lb~ = P(AUC) = P(A) + P(C)
c.Ji~ce that B ha~; ·B._in ~tum. is ~ as likel; as C to ~inl ·: A and C are mutually disjoim
r . .. .. , ..
:tliJ. c~nlra~t. Wh~! js tpelrqbabjlity for each to win'thei 12 10 22 2
' ·' • 'the co actis to.,,kgixen;to one Qf them:_ = 55 + 55 = 55 = 5
0Soln.:
(i) Probability of white, blue or orange
► Step (I) : Since the events A,B and C are exclusive, = P(BUCUD) = P(B) + P(C) + P(D)
:. P(A) + P(B) + P(C) = 1
1 ·: B, C, Dare mutually disi~
Now, P(A) = 2 P(B) and ... (ii) 18 10 15 43
4 = 55 + 55 + 55 = 55
P(B) = -5 P(C)
... (iii) (ii) Probability of neither white nor orange
► Step (ll) : Let p be the probability of C, from (i), = 1 - P (AUB) = 1 - [P(A) + P(B)]
I 4
2 • 5 P(C) + 5 P(C) + P(C)
4
= 1 = I [li 15] 33
- 55 + 55 = l - 55 = 55
22

= ¾ ...~
(MS-126) '
~ Tech-Neo Publications...A $ACHIN SHAH venture
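Exact fractions make it easy to re-check the two solved examples above; a quick verification sketch (not part of the text):

```python
from fractions import Fraction as F

# UEx. 5.11.1: with p = P(C), (1/2)(4/5)p + (4/5)p + p = 1
p = 1 / (F(1, 2) * F(4, 5) + F(4, 5) + 1)
print(p, F(4, 5) * p, F(1, 2) * F(4, 5) * p)          # 5/11 4/11 2/11

# UEx. 5.11.2: 12 red, 18 white, 10 blue, 15 orange balls
red, white, blue, orange = 12, 18, 10, 15
total = red + white + blue + orange                   # 55
assert F(red + blue, total) == F(2, 5)                # (i)  red or blue
assert F(white + blue + orange, total) == F(43, 55)   # (ii) white, blue or orange
assert 1 - F(white + orange, total) == F(2, 5)        # (iii) neither white nor orange
```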
UEx. 5.11.3 : Seven persons including A and B stand in a row at random. Find the probability that there are exactly two persons between A and B.
Soln. :
►  Step (I) : [Note : This is a problem on permutations : nPr = n! / (n − r)!]
   Seven persons can stand in a row in 7P7 = 7!/(7 − 7)! = 7!/0! = 7! ways (∵ 0! = 1).   ...(i)
   If there are two persons between A and B, then there are the following 4 cases :
      A * * B * * *
      * A * * B * *
      * * A * * B *
      * * * A * * B
►  Step (II) : In these 4 ways, A and B can interchange their positions; hence there are 2 × 4 = 8 ways. The remaining 5 persons can occupy their positions in 5P5 = 5! ways.
►  Step (III) : Number of favourable cases = m = 8 × 5!, and total number of cases = n = 7!.
   ∴ Required probability = m/n = (8 × 5!)/7! = 8/(6 × 7) = 4/21.   ...Ans.

Ex. 5.11.4 : If two dice are thrown, what is the probability that the sum is greater than 8 ?
Soln. :
►  Step (I) : Let S denote the sum on the two dice; we want P(S > 8). The required event can happen in the following mutually exclusive ways :
   (i) S = 9,  (ii) S = 10,  (iii) S = 11,  (iv) S = 12.
   ∴ By the addition theorem,
      P(S > 8) = P(S = 9) + P(S = 10) + P(S = 11) + P(S = 12)   ...(i)
►  Step (II) : In a throw of two dice, the sample space contains 6² = 36 points. The numbers of favourable cases are as follows :
   S = 9  : (3,6), (6,3), (4,5), (5,4), i.e. 4 sample points;  ∴ P(S = 9) = 4/36
   S = 10 : (4,6), (6,4), (5,5), i.e. 3 sample points;         ∴ P(S = 10) = 3/36
   S = 11 : (5,6), (6,5), i.e. 2 sample points;                ∴ P(S = 11) = 2/36
   S = 12 : (6,6), i.e. 1 sample point;                        ∴ P(S = 12) = 1/36
   ∴ Required probability = 4/36 + 3/36 + 2/36 + 1/36 = 10/36 = 5/18.

Ex. 5.11.5 : A card is drawn from a pack of 52 cards. Find the probability of getting a king or a heart or a red card.
Soln. :
►  Step (I) : Let the events be :
   A = the card drawn is a king
   B = the card drawn is a heart
   C = the card drawn is a red card
   Note that A, B, C are not mutually exclusive.
   A ∩ B = the card drawn is the king of hearts;     ∴ n(A ∩ B) = 1,   ∴ P(A ∩ B) = 1/52
   B ∩ C = B : the card drawn is a heart (∵ B ⊂ C);  ∴ n(B ∩ C) = 13,  ∴ P(B ∩ C) = 13/52
   C ∩ A = the card drawn is a red king;             ∴ n(C ∩ A) = 2,   ∴ P(C ∩ A) = 2/52
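Both results above can be confirmed by brute-force enumeration; a verification sketch (the labels A..G for the seven persons are hypothetical):

```python
from itertools import permutations, product
from fractions import Fraction

# UEx. 5.11.3: "exactly two persons between A and B" means their
# positions in the row differ by 3.
hits = sum(abs(row.index("A") - row.index("B")) == 3
           for row in permutations("ABCDEFG"))
print(Fraction(hits, 5040))      # 4/21   (5040 = 7!)

# Ex. 5.11.4: sum of two dice greater than 8.
wins = sum(a + b > 8 for a, b in product(range(1, 7), repeat=2))
print(Fraction(wins, 36))        # 5/18
```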
   and A ∩ B ∩ C = A ∩ B : the card drawn is the king of hearts;
   ∴ n(A ∩ B ∩ C) = 1,  ∴ P(A ∩ B ∩ C) = 1/52
►  Step (II) : The required probability of getting a king or a heart or a red card is given by
   P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)
                = 4/52 + 13/52 + 26/52 − 1/52 − 13/52 − 2/52 + 1/52
                = 28/52 = 7/13.   ...Ans.

Ex. 5.11.6 : A fair dice is thrown thrice. Find the probability that the sum of the numbers obtained is 10.
Soln. :
►  Step (I) : When a dice is thrown thrice, the total number of cases = 6 × 6 × 6 = 216;  ∴ n = 216.
   The numbers of favourable cases are as follows :
   (1,4,5) in 3! = 6 ways        (1,6,3) in 3! = 6 ways
   (2,5,3) in 3! = 6 ways        (2,4,4) in 3!/2! = 3 ways
   (2,6,2) in 3!/2! = 3 ways     (3,3,4) in 3!/2! = 3 ways
►  Step (II) : Total number of ways of getting the sum 10 = 6 + 6 + 6 + 3 + 3 + 3 = 27.
   ∴ Probability = 27/216 = 1/8.   ...Ans.

Ex. 5.11.7 : A letter of the English alphabet is chosen at random. Calculate the probability that the letter so chosen
(i) is a vowel,  (ii) precedes m and is a vowel,  (iii) follows m and is a vowel.
Soln. :
►  Step (I) : The sample space S of the experiment is S = {a, b, c, d, ..., x, y, z};  ∴ n(S) = 26.
►  Step (II) :
(i)   Let A be the event that the letter chosen is a vowel. Thus A = {a, e, i, o, u}, n(A) = 5;  ∴ P(A) = 5/26.
(ii)  Let B be the event that the letter precedes m and is a vowel; then B = {a, e, i}, n(B) = 3;  ∴ P(B) = 3/26.
(iii) Let C be the event that the letter follows m and is a vowel; then C = {o, u}, n(C) = 2;  ∴ P(C) = 2/26 = 1/13.

5.12  SOME THEOREMS ON PROBABILITY

Here we prove some theorems which help us evaluate the probabilities of some complicated events in a rather simple way. In proving these theorems we use the axiomatic approach based on the three axioms discussed previously.

►  Theorem (1) : For any two events A and B, we have
   (i)  P(A' ∩ B) = P(B) − P(A ∩ B)
   (ii) P(A ∩ B') = P(A) − P(A ∩ B)
Proof : (i) From the Venn diagram (Fig. 5.12.1 : events A and B in the sample space S),
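The answer 7/13 of Ex. 5.11.5 can be double-checked by enumerating a 52-card deck; an illustrative sketch:

```python
from fractions import Fraction

ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]          # 52 cards

# "king or heart or red card": since every heart is red, this is
# the 26 red cards plus the two black kings.
good = [(r, s) for r, s in deck if r == "K" or s in ("hearts", "diamonds")]
print(len(good), Fraction(len(good), 52))              # 28 7/13
```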
   B = (A ∩ B) ∪ (A' ∩ B), where (A ∩ B) and (A' ∩ B) are disjoint sets.
   ∴ By Axiom (3),  P(B) = P(A ∩ B) + P(A' ∩ B)
   ∴ P(A' ∩ B) = P(B) − P(A ∩ B)
   (ii) Similarly, we have A = (A ∩ B) ∪ (A ∩ B'), where A ∩ B and A ∩ B' are disjoint.
   ∴ P(A) = P(A ∩ B) + P(A ∩ B')
   ∴ P(A ∩ B') = P(A) − P(A ∩ B)

►  Theorem (2) : If B ⊂ A, then (i) P(A ∩ B') = P(A) − P(B),  (ii) P(B) ≤ P(A).
Proof :
   (i)  When B ⊂ A, B and A ∩ B' are mutually exclusive events and A = B ∪ (A ∩ B')  (Fig. 5.12.2 : B ⊂ A in the sample space S).
        ∴ P(A) = P(B) + P(A ∩ B')
        ∴ P(A ∩ B') = P(A) − P(B)
   (ii) ∵ P(A ∩ B') ≥ 0,  P(A) − P(B) ≥ 0;  ∴ P(B) ≤ P(A).
   Thus if B ⊂ A, then P(B) ≤ P(A).

►  Theorem (3) : If A and B are any two events (subsets of the sample space S) and are not disjoint, then
        P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
Proof : From the Venn diagram (Fig. 5.12.3),
        A ∪ B = A ∪ (A' ∩ B), where A and A' ∩ B are mutually disjoint.
   ∴ P(A ∪ B) = P[A ∪ (A' ∩ B)] = P(A) + P(A' ∩ B)
              = P(A) + P(B) − P(A ∩ B)           [Using Theorem (1)]

►  Corollary (1) : If the events A and B are mutually disjoint, then A ∩ B = φ,
   ∴ P(A ∩ B) = P(φ) = 0, and then P(A ∪ B) = P(A) + P(B).
   (This is Axiom 3 of probability.)

►  Corollary (2) : For the non-mutually-exclusive events A, B and C, we have
   P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)
Proof :
   P(A ∪ B ∪ C) = P[A ∪ (B ∪ C)]
                = P(A) + P(B ∪ C) − P[A ∩ (B ∪ C)]
                = P(A) + P(B) + P(C) − P(B ∩ C) − P[(A ∩ B) ∪ (A ∩ C)]
                = P(A) + P(B) + P(C) − P(B ∩ C) − [P(A ∩ B) + P(A ∩ C) − P(A ∩ B ∩ C)]
                = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)

5.12.1  Solved Examples on Simplification of Given Events

We solve these examples using Theorems 1, 2, 3 and the corollaries above.
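The addition theorem and Corollary (2) can be spot-checked on randomly drawn events in a finite, equally likely sample space, where P(X) = n(X)/n(S); an illustrative sketch:

```python
import random
from fractions import Fraction

random.seed(1)
S = set(range(100))                       # finite, equally likely sample space
P = lambda X: Fraction(len(X), len(S))    # P(X) = n(X) / n(S)

for _ in range(50):
    A = set(random.sample(sorted(S), 30))
    B = set(random.sample(sorted(S), 40))
    C = set(random.sample(sorted(S), 20))
    # Theorem (3): addition theorem for two events
    assert P(A | B) == P(A) + P(B) - P(A & B)
    # Corollary (2): three events
    assert P(A | B | C) == (P(A) + P(B) + P(C)
                            - P(A & B) - P(B & C) - P(C & A)
                            + P(A & B & C))
print("addition theorem holds on 50 random event triples")
```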
Ex. 5.12.1 : If A1, A2, ..., Ak are mutually exclusive events such that their union is the whole of the sample space S, then P(A1) + P(A2) + ... + P(Ak) = 1.
Soln. :  ∵ A1 ∪ A2 ∪ ... ∪ Ak = S,
         ∴ P(A1 ∪ A2 ∪ ... ∪ Ak) = P(S) = 1
         ∵ the events are mutually exclusive,
         P(A1) + P(A2) + ... + P(Ak) = 1.   ...Ans.

Ex. 5.12.2 : Show that P(A ∩ B) ≥ P(A) + P(B) − 1.
Soln. :  ∵ P(A ∪ B) ≤ 1,
         P(A) + P(B) − P(A ∩ B) ≤ 1
         ∴ P(A ∩ B) ≥ P(A) + P(B) − 1.

Ex. 5.12.3 : If A ∩ B = φ, then P(A) ≤ P(B').
Soln. :  ∵ P(A ∪ B) = P(A) + P(B) − P(A ∩ B) and A ∩ B = φ,  ∴ P(A ∩ B) = P(φ) = 0.
         ∴ P(A ∪ B) = P(A) + P(B)
         ∵ P(A ∪ B) ≤ 1,  ∴ P(A) + P(B) ≤ 1
         ∴ P(A) ≤ 1 − P(B) = P(B')      [by Theorem (2) of Article 5.10]   ...Ans.

Ex. 5.12.4 : If A ⊆ B, then prove that (i) P(A' ∩ B) = P(B) − P(A),  (ii) P(A) ≤ P(B).
Soln. :
(i)  We have B = A ∪ (A' ∩ B)  (Fig. P. 5.12.4).
     ∵ A and (A' ∩ B) are mutually exclusive events,
     ∴ P(B) = P(A) + P(A' ∩ B)
     ∴ P(A' ∩ B) = P(B) − P(A)
(ii) ∵ P(A' ∩ B) ≥ 0,  ∴ P(B) − P(A) ≥ 0,  ∴ P(A) ≤ P(B).

Ex. 5.12.5 : If A and B are any two events, the probability that exactly one of them will occur is given by
     P[(A ∩ B') ∪ (A' ∩ B)] = P(A) + P(B) − 2 P(A ∩ B)
Soln. :  Since A ∩ B' and A' ∩ B are mutually exclusive (Fig. P. 5.12.5),
     P[(A ∩ B') ∪ (A' ∩ B)] = P(A ∩ B') + P(A' ∩ B)
                            = [P(A) − P(A ∩ B)] + [P(B) − P(A ∩ B)]
                            = P(A) + P(B) − 2 P(A ∩ B)      [Using Theorem 1 of Article 5.12]

Ex. 5.12.6 : Show that P(A' ∩ B') ≥ 1 − P(A) − P(B).
Soln. :  Applying Ex. 5.12.2 to the events A' and B',
     P(A' ∩ B') ≥ P(A') + P(B') − 1
                = [1 − P(A)] + [1 − P(B)] − 1      [∵ P(A') = 1 − P(A)]
                = 1 − P(A) − P(B).   ...Ans.

Ex. 5.12.7 : Two cards are drawn from a pack of cards. Find the probability that they will be both red or both pictures.
Soln. :  Left as an exercise.
Hint : P(A ∪ B) = P(A) + P(B) − P(A ∩ B).   (Ans. 188/663)
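A concrete check of Ex. 5.12.5 (the sample space and events below are arbitrary illustrative choices):

```python
from fractions import Fraction

S = set(range(20))                       # equally likely sample space
A = {0, 1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10}
P = lambda X: Fraction(len(X), len(S))

exactly_one = (A - B) | (B - A)          # (A and not B) or (B and not A)
lhs = P(exactly_one)
rhs = P(A) + P(B) - 2 * P(A & B)
print(lhs, rhs)                          # 2/5 2/5
```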
5.13  CONDITIONAL PROBABILITY

•  Conditional probability is the likelihood of an event or outcome occurring, based on the occurrence of a previous event or outcome.
•  It is calculated by multiplying the probability of the preceding event by the updated probability of the succeeding, or conditional, event.
•  For example :
   Event A : There is an 80% chance that an individual applying for college will be accepted.
   Event B : This individual will be given dormitory housing. Dormitory housing will only be provided for 60% of all the accepted students.
   P(Accepted and dormitory housing) = P(Accepted) × P(Dormitory housing | Accepted) = (0.80)(0.60) = 0.48
   A conditional probability looks at these two events in relationship with one another.
•  Conditional probability can be contrasted with unconditional probability. Unconditional probability refers to the likelihood that an event will take place irrespective of whether any other events have taken place (or any other conditions are present).
•  Conditional probability is used in a variety of fields, such as insurance, politics and many different fields of mathematics.

Remarks
(1) If P(B | A) = P(B), then the events A and B are said to be independent. In this case, knowledge about either event does not alter the likelihood of the other.
(2) P(A | B) (the conditional probability of A given B) differs from P(B | A). For example, if a person has dengue fever, he might have a 90% chance of being tested as positive. Here the probability of A (tested positive) given B (has dengue) is 90% : we write P(A | B) = 90/100 = 0.9.
(3) But if a person is tested positive for dengue fever, there may be only a 20% chance of actually having the disease, i.e. P(B | A) = 20% = 0.20.

Fig. 5.13.1 : Illustration of conditional probabilities with an Euler diagram.
The unconditional probability P(A) = 0.30 + 0.10 + 0.12 = 0.52, but the conditional probabilities are
P(A | B1) = 1,  P(A | B2) = 0.12 / (0.12 + 0.04) = 0.75  and  P(A | B3) = 0.

5.13.1  Properties of Conditional Probability

□  Definition : Let A and B be any two events in a sample space S. The probability that A will occur given that B has already occurred is called the conditional probability of A and is denoted by P(A | B). Similarly, the probability that B will occur given that A has already occurred is called the conditional probability of B, and is denoted by P(B | A).
Let A, B and F be events of the sample space S of a random experiment; then we have the following properties.
•- • • •
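The relations above can be checked numerically. This is an illustrative sketch using the numbers from the college/dormitory example and the Euler-diagram figures:

```python
# Checking the conditional-probability relations with the numbers above.

# P(Accepted and dormitory) = P(Accepted) * P(Dormitory | Accepted)
p_accepted = 0.80
p_dorm_given_accepted = 0.60
p_both = p_accepted * p_dorm_given_accepted
print(round(p_both, 2))  # 0.48

# Fig. 5.13.1: the unconditional P(A) sums the regions of A,
# while P(A | B2) renormalizes within region B2.
p_a = 0.30 + 0.10 + 0.12             # about 0.52
p_a_given_b2 = 0.12 / (0.12 + 0.04)  # about 0.75
```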
(i) If A and B are any two events of a sample space S, and F is an event of S such that P(F) ≠ 0, then
P((A ∪ B)/F) = P(A/F) + P(B/F) − P((A ∩ B)/F)

Proof : We have
P((A ∪ B)/F) = P((A ∪ B) ∩ F) / P(F)
= P((A ∩ F) ∪ (B ∩ F)) / P(F)    ...(using the distributive law)
= [P(A ∩ F) + P(B ∩ F) − P(A ∩ B ∩ F)] / P(F)
= P(A ∩ F)/P(F) + P(B ∩ F)/P(F) − P(A ∩ B ∩ F)/P(F)
= P(A/F) + P(B/F) − P((A ∩ B)/F)

5.14 INDEPENDENT EVENTS

Two or more events are said to be independent if the happening or non-happening of any one of them does not affect the happening of the others.

Independence

• In discussing conditional probability, we considered a pair of events A, B which could be inter-related; one of them could give information about the other.
• It is just possible to carry out trials and define two events in such a way that one event gives no information at all about the other.
• For example : Suppose our trial is a game in which a fair die is thrown once and a fair coin is tossed twice. We define two events as follows :
A : the score with the die is 3
B : both tosses of the coin give tails
• Since A and B refer to the same trial, they can occur together. But the information on what happened to the die is of no help at all in predicting what may have happened to the coin.
• It is obvious in this case that
P(B/A) = P(B) and P(A/B) = P(A)
and we have P(A ∩ B) = P(A) · P(B).

Definition : The events A, B are independent if P(A ∩ B) = P(A) · P(B).

► Theorem (1) : If the events A and B are such that P(A) ≠ 0, P(B) ≠ 0 and A is independent of B, then B is independent of A.
Proof : Let A be independent of B; then P(A/B) = P(A).
Now, P(A ∩ B) = P(A/B) · P(B) = P(A) · P(B)    ...(5.14.1)
Also, P(B/A) = P(A ∩ B)/P(A) = P(A) · P(B)/P(A) = P(B)
∴ P(B/A) = P(B), i.e. B is independent of A.

► Theorem (2) : For any event A in S,
(i) A and the null event φ are independent
(ii) A and S are independent
Proof : (i) P(A ∩ φ) = P(φ) = 0 = P(A) · P(φ)
∴ A and φ are independent.
(ii) P(A ∩ S) = P(A) = P(A) · 1 = P(A) · P(S)
∴ A and S are independent.

► Theorem (3) : Multiplication theorem of probability for independent events :
If A and B are two events with P(A) ≠ 0, P(B) ≠ 0, then A and B are independent if P(A ∩ B) = P(A) · P(B).
Proof : ∵ A and B are independent, ∴ P(A/B) = P(A) and P(B/A) = P(B)
Now, P(A ∩ B) = P(A) · P(B/A) = P(A) · P(B)
And P(A ∩ B) = P(B) · P(A/B) = P(B) · P(A)

► Theorem (4) : For any three events A, B, C defined on the same sample space S such that B ⊂ C and P(A) > 0,
P(B/A) ≤ P(C/A).
Proof : Now,
P(C/A) = P(C ∩ A)/P(A)
= P[(B ∩ C ∩ A) ∪ (B̄ ∩ C ∩ A)] / P(A)    (∵ (B ∩ C ∩ A) ∩ (B̄ ∩ C ∩ A) = φ)
= P(B ∩ C ∩ A)/P(A) + P(B̄ ∩ C ∩ A)/P(A)
= P((B ∩ C)/A) + P((B̄ ∩ C)/A)
= P(B/A) + P((B̄ ∩ C)/A)    (∵ B ⊂ C, so B ∩ C = B)
∵ P((B̄ ∩ C)/A) ≥ 0,
∴ P(C/A) ≥ P(B/A)

► Theorem (5) : If A and B are independent events, then (i) A and B̄, (ii) Ā and B, (iii) Ā and B̄ are also independent.
Proofs :
(i) A and B are independent, so
P(A ∩ B) = P(A) · P(B)    ...(5.14.2)
Now, P(A ∩ B̄) = P(A) − P(A ∩ B)
= P(A) − P(A) · P(B)    ...from (5.14.2)
= P(A) [1 − P(B)]
∴ P(A ∩ B̄) = P(A) · P(B̄), ∴ A and B̄ are independent.
(ii) Again, P(Ā ∩ B) = P(B) − P(A ∩ B)
= P(B) − P(A) · P(B)    ...from (5.14.2)
= P(B) [1 − P(A)] = P(B) · P(Ā) = P(Ā) · P(B)
∴ Ā and B are independent.
(iii) By De Morgan's law, Ā ∩ B̄ is the complement of A ∪ B, so
P(Ā ∩ B̄) = 1 − P(A ∪ B)
= 1 − [P(A) + P(B) − P(A ∩ B)]
= 1 − P(A) − P(B) + P(A) · P(B)    ...from (5.14.2)
= [1 − P(A)] − P(B) [1 − P(A)]
= [1 − P(A)] [1 − P(B)]
= P(Ā) · P(B̄)
∴ Ā and B̄ are also independent.

Syllabus topic : Inference using Full Joint Distribution

5.15 PROBABILISTIC REASONING

• Probabilistic reasoning is a way of knowledge representation where we apply the concept of probability to indicate the uncertainty in knowledge.
• In probabilistic reasoning, we combine probability theory with logic to handle the uncertainty.
• We use probability in probabilistic reasoning because it provides a way to handle the uncertainty that is the result of someone's laziness and ignorance.
• In the real world there are lots of scenarios where the certainty of something is not confirmed, such as "it will rain today", "the behaviour of someone in some situations", or "a match between two teams or two players".
• These are probable sentences, for which we can assume that they will happen but cannot be sure about it; so here we use probabilistic reasoning.
• Need of probabilistic reasoning in AI :
1. When there are unpredictable outcomes.
2. When specifications or possibilities of predicates become too large to handle.
3. When an unknown error occurs during an experiment.
• In probabilistic reasoning, there are two ways to solve problems with uncertain knowledge :
1. Bayes' rule
2. Bayesian statistics
• As probabilistic reasoning uses probability and related terms, before understanding probabilistic reasoning let's understand some common terms :
1. Probability : Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of a probability always remains between 0 and 1.
• 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
• P(A) = 0 indicates total uncertainty in an event A.
• P(A) = 1 indicates total certainty in an event A.
• We can find the probability of an uncertain event by using the formula below :
Probability of occurrence = (Number of desired outcomes) / (Total number of outcomes)
• P(Ā) = probability of event A not happening.
• P(A) + P(Ā) = 1.

2. Event : Each possible outcome of a variable is called an event.
3. Sample space : The collection of all possible events is called the sample space.
4. Random variables : Random variables are used to represent the events and objects in the real world.
5. Prior probability : The prior probability of an event is the probability computed before observing new information. For example, if the prior probability that I have a cavity is 0.2, then we would write P(Cavity = true) = 0.2, or P(cavity) = 0.2.
6. Posterior probability : The probability that is calculated after all evidence or information has been taken into account is called the posterior probability. It is a combination of the prior probability and the new information.
7. Conditional probability : Conditional probability is the probability of an event occurring when another event has already happened. Suppose we want to calculate the probability of event A when event B has already occurred — "the probability of A under the conditions of B". It can be written as :
P(A/B) = P(A ∩ B) / P(B)
where P(A ∩ B) = joint probability of A and B,
P(B) = marginal probability of B, and P(B) > 0.
We can write P(A ∩ B) = P(A/B) P(B).

5.16 INFERENCE USING FULL JOINT DISTRIBUTIONS

• Probabilistic inference means computing, from observed evidence, posterior probabilities of query propositions.
• The knowledge base used in answering such queries is represented as a full joint distribution.
• The probability distribution on a single variable must sum to 1. It is also true that any joint probability distribution on any set of variables must sum to 1.
• Any proposition a is equivalent to the disjunction of all the atomic events in which a holds. Call this set of events e(a).
• Atomic events are mutually exclusive, so the probability of any conjunction of atomic events is zero.
• We have
P(a) = Σ (over ei ∈ e(a)) P(ei)
• Given a full joint distribution that specifies the probabilities of all atomic events, this equation provides a simple method for computing the probability of any proposition. Consider the following full joint distribution :

                  toothache             ¬toothache
              catch     ¬catch      catch     ¬catch
cavity        0.108     0.012       0.072     0.008
¬cavity       0.016     0.064       0.144     0.576

• For example, there are six atomic events for (cavity ∨ toothache) :
P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28
• Extracting the distribution over a single variable (or some subset of variables), called the marginal probability, is done by adding the entries in the corresponding rows or columns. For example,
P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2
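The full joint table above can be queried directly by enumeration. The sketch below is illustrative, using the same eight atomic-event probabilities:

```python
# Inference by enumeration over the full joint distribution of the
# cavity / toothache / catch example.
joint = {  # (cavity, toothache, catch) -> probability
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def prob(event):
    """P(event) = sum over the atomic events in which the event holds."""
    return sum(p for world, p in joint.items() if event(*world))

print(round(prob(lambda c, t, k: c or t), 3))  # P(cavity or toothache) = 0.28
print(round(prob(lambda c, t, k: c), 3))       # P(cavity) = 0.2
```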
• We can write the following general marginalization (summing out) rule for any sets of variables Y and Z :
P(Y) = Σ (over z ∈ Z) P(Y, z)
For example,
P(cavity) = Σ (over z ∈ {catch, toothache}) P(cavity, z)
• A variant of this rule involves conditional probabilities instead of joint probabilities, using the product rule :
P(Y) = Σ (over z ∈ Z) P(Y | z) P(z)
This rule is called conditioning.
• Marginalization and conditioning turn out to be useful rules for all kinds of derivations involving probability expressions.
• Computing a conditional probability :
P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache)
= (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064)
= 0.12 / 0.2 = 0.6
• Similarly,
P(¬cavity | toothache) = (0.016 + 0.064) / 0.2 = 0.4
• The two probabilities sum up to one, as they should. In both cases, 1/P(toothache) = 1/0.2 = 5 remains constant, no matter which value of cavity we calculate. It is a normalization constant ensuring that the distribution P(cavity | toothache) adds up to 1.
• Let α denote the normalization constant :
P(cavity | toothache) = α P(cavity, toothache)
= α [P(cavity, toothache, catch) + P(cavity, toothache, ¬catch)]
= α [(0.108, 0.016) + (0.012, 0.064)]
= α [0.12, 0.08] = [0.6, 0.4]
• In other words, we can calculate the conditional probability distribution without knowing P(toothache), using normalization.

Ex. 5.16.1 : In a class, 80% of the students like English and 30% of the students like both English and Mathematics. What is the percentage of students who like English that also like Mathematics?
Soln. :
Let A be the event that a student likes Mathematics and B the event that a student likes English.
P(A|B) = P(A ∩ B) / P(B) = 0.3 / 0.8 = 0.375
Hence, 37.5% of the students who like English also like Mathematics.

Ex. 5.16.2 : The probability that it will be sunny on Friday is 4/5. The probability that an ice cream shop will sell ice creams on a sunny Friday is 2/3, and the probability that the ice cream shop sells ice creams on a non-sunny Friday is 1/3. Find the probability that it will be sunny and the ice cream shop sells ice creams on Friday.
Soln. :
Let S be the event that Friday is sunny and I the event that the ice cream shop sells ice creams. Then,
P(S) = 4/5
P(I | S) = 2/3
P(I | S̄) = 1/3
We have to find P(S ∩ I). S and I are dependent events, so by the multiplication rule of conditional probability,
P(S ∩ I) = P(I | S) · P(S) = (2/3) · (4/5) = 8/15
Answer : The required probability = 8/15.
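The normalization step and the two worked examples can be checked in a few lines (illustrative):

```python
# Normalization: alpha * [0.12, 0.08] should give the distribution [0.6, 0.4].
unnormalized = [0.108 + 0.012, 0.016 + 0.064]   # cavity, not-cavity
alpha = 1 / sum(unnormalized)                   # 1 / P(toothache) = 5
dist = [alpha * u for u in unnormalized]
print(dist)  # roughly [0.6, 0.4]

# Ex. 5.16.1: P(Maths | English) = P(both) / P(English)
print(round(0.3 / 0.8, 3))  # 0.375

# Ex. 5.16.2: P(sunny and sells) = P(sells | sunny) * P(sunny)
p_sunny_and_sells = (2 / 3) * (4 / 5)  # 8/15
```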
Ex. 5.16.3 : The table below shows the occurrence of diabetes in 100 people. Let D and N be the events that a randomly selected person "has diabetes" and "is not overweight". Find P(D/N).

                      Diabetes (D)    No diabetes (D̄)
Not overweight (N)         5                45
Overweight (N̄)            17                33

Soln. :
From the given table,
P(N) = (5 + 45)/100 = 50/100 = 0.5
P(D ∩ N) = 5/100 = 0.05
By the conditional probability formula,
P(D/N) = P(D ∩ N) / P(N) = 0.05/0.5 = 0.1
Answer : P(D/N) = 0.1

Syllabus topic : Bayes' Rule and its Use

5.17 BAYES THEOREM

• Bayes theorem, named after Reverend Thomas Bayes, helps in determining the probability of an event based on some event that has already occurred.
• Bayes theorem has many applications, such as Bayesian inference in the healthcare sector, to determine the chances of developing health problems with an increase in age, and many others. In finance, Bayes' theorem can be used to rate the risk of lending money to potential borrowers.
• Bayes theorem, in simple words, determines the conditional probability of an event A given that event B has already occurred.
• Bayes theorem is also known as the Bayes rule or Bayes law.
• It is a method to determine the probability of an event based on the occurrences of prior events. It is used to calculate conditional probability.
• Bayes theorem calculates the probability based on a hypothesis. Now, let us state the theorem.
• Bayes theorem states that the conditional probability of an event A, given the occurrence of another event B, is equal to the product of the likelihood of B given A and the probability of A, divided by the probability of B.
• It is given as :
P(A | B) = P(B | A) P(A) / P(B)
• Here P(A) = how likely A happens (prior knowledge), i.e., the probability of a hypothesis being true before any evidence is present.
• P(B) = how likely B happens (marginalization), i.e., the probability of observing the evidence.
• P(A|B) = how likely A happens given that B has happened (posterior), i.e., the probability of the hypothesis being true given the evidence.
• P(B|A) = how likely B happens given that A has happened (likelihood), i.e., the probability of seeing the evidence if the hypothesis is true.

Ex. 5.17.1 : Assume that the chances of a person having a skin disease are 40%. Assume that skin creams and drinking enough water reduce the risk of skin disease by 30%, and prescription of a certain drug reduces its chance by 20%. At a time, a patient can choose any one of the two options with equal probabilities. It is given that after picking one of the options, the patient selected at random has the skin disease. Find the probability that the patient picked the option of skin creams and drinking enough water, using the Bayes theorem.
Soln. :
Assume E1 : the patient uses skin creams and drinks enough water; E2 : the patient uses the drug; A : the selected patient has the skin disease.
P(E1) = P(E2) = 1/2
Using the probabilities known to us, we have
P(A | E1) = 0.4 × (1 − 0.3) = 0.28
P(A | E2) = 0.4 × (1 − 0.2) = 0.32
By Bayes theorem, the probability that the patient uses skin creams and drinks enough water is given by
P(E1 | A) = P(A | E1) P(E1) / [P(A | E1) P(E1) + P(A | E2) P(E2)]
= (0.28 × 0.5) / (0.28 × 0.5 + 0.32 × 0.5) = 0.28/0.60 = 0.47
The probability that the patient picked the first option is 0.47.

Ex. : Three identical boxes contain red and white balls. The first box contains 3 red and 2 white balls, the second box has 4 red and 5 white balls, and the third box has 2 red and 4 white balls. A box is chosen at random and a ball is drawn from it. If the ball that is drawn out is red, what will be the probability that the second box was chosen?
Soln. :
Let A1, A2 and A3 represent the events of choosing the first, second, and third box respectively, and let X be the event of drawing a red ball from the chosen box. Then we are to find the value of P(A2 | X).
Since the boxes are identical,
P(A1) = P(A2) = P(A3) = 1/3
Again, by the problem,
P(X | A1) = 3/(3 + 2) = 3/5
P(X | A2) = 4/(4 + 5) = 4/9
P(X | A3) = 2/(2 + 4) = 1/3
Now, event X occurs if one of the mutually exclusive and exhaustive events A1, A2 and A3 occurs. Therefore, using the Bayes' theorem formula we get,
P(A2 | X) = P(A2) P(X | A2) / [P(A1) P(X | A1) + P(A2) P(X | A2) + P(A3) P(X | A3)]
= (1/3 × 4/9) / (1/3 × 3/5 + 1/3 × 4/9 + 1/3 × 1/3)
= (4/9) / (62/45) = 20/62 = 10/31

Syllabus topic : Bayesian Belief Networks

5.18 BAYESIAN BELIEF NETWORKS

• Bayesian belief networks are key computer technology for dealing with probabilistic events and for solving problems which have uncertainty.
• We can define a Bayesian network as : "A Bayesian network is a probabilistic graphical model which represents a set of variables and their conditional dependencies using a directed acyclic graph."
• It is also called a Bayes network, belief network, decision network, or Bayesian model.
• Bayesian networks are probabilistic, because these networks are built from a probability distribution, and also use probability theory for prediction and anomaly detection.
• Real world applications are probabilistic in nature, and to represent the relationships between multiple events, we need a Bayesian network.
• It can also be used in various tasks including prediction, anomaly detection, diagnostics, automated insight, reasoning, time series prediction, and decision making under uncertainty.
• Bayesian networks can be used for building models from data and experts' opinions, and they consist of two parts :
(a) Directed acyclic graph
(b) Table of conditional probabilities.
• The generalized form of Bayesian network that represents and solves decision problems under uncertain knowledge is known as an influence diagram.
• A Bayesian network graph is made up of nodes and arcs (directed links), where :
(a) Each node corresponds to a random variable, and a variable can be continuous or discrete.
(b) Arcs or directed arrows represent the causal relationships or conditional probabilities between random variables. These directed links connect pairs of nodes in the graph. A link represents that one node directly influences the other node; if there is no directed link, the nodes are independent of each other.
• In Fig. 5.18.1 below, A, B, C and D are random variables represented by the nodes of the network graph. If we consider node B, which is connected with node A by a directed arrow, then node A is called the parent of node B. Node C is independent of node A.

Fig. 5.18.1 : Directed acyclic graph

• The Bayesian network graph does not contain any cycle. Hence, it is known as a directed acyclic graph or DAG.
• The Bayesian network has mainly two components : the causal component and the actual numbers.
• Each node in the Bayesian network has a conditional probability distribution P(Xi | Parents(Xi)), which determines the effect of the parents on that node.
• A Bayesian network is based on the joint probability distribution and conditional probability. So let's first understand the joint probability distribution.
• Let's understand the Bayesian network through an example by creating a directed acyclic graph.
• Example : Harry installed a new burglar alarm at his home to detect burglary. The alarm responds reliably at detecting a burglary, but also responds to minor earthquakes. Harry has two neighbours, John and Mary, who have taken the responsibility to inform Harry at work when they hear the alarm. John always calls Harry when he hears the alarm, but sometimes he gets confused with the phone ringing and calls at that time too. On the other hand, Mary likes to listen to loud music, so sometimes she misses hearing the alarm. Here we would like to compute the probability of the burglary alarm.

Ex. 5.18.1 : Calculate the probability that the alarm has sounded, but neither a burglary nor an earthquake has occurred, and John and Mary both called Harry.
Soln. :
• The Bayesian network for the above problem is given below (Fig. P. 5.18.1).
• The network structure shows that burglary and earthquake are the parent nodes of the alarm, directly affecting the probability of the alarm going off, but John's and Mary's calls depend on the alarm's probability.

Fig. P. 5.18.1

• The network represents our assumptions : the neighbours do not directly perceive the burglary, do not notice the minor earthquake, and do not confer before calling.
• The conditional distributions for each node are given as a conditional probabilities table, or CPT.
• Each row in the CPT must sum to 1, because all the entries in a row represent an exhaustive set of cases for the variable.
• In a CPT, a Boolean variable with k Boolean parents has 2^k rows of probability values. Hence, if there are two parents, the CPT will contain 4 rows of probability values.

CPT for Alarm (A), given Burglary (B) and Earthquake (E) :
B      E      P(A = T)    P(A = F)
T      T      0.95        0.05
T      F      0.94        0.06
F      T      0.29        0.71
F      F      0.001       0.999

CPT for MaryCalls (M), given Alarm (A) :
A      P(M = T)    P(M = F)
T      0.70        0.30
F      0.01        0.99

Similarly, the CPT for JohnCalls (J) gives P(J = T | A = T) = 0.90.
List of all events occurring in this network :
o Burglary (B)
o Earthquake (E)
o Alarm (A)
o John calls (J)
o Mary calls (M)

We can write the event of the problem statement in the form of probability :
P(M, J, A, ¬B, ¬E) = P(M | A) × P(J | A) × P(A | ¬B ∧ ¬E) × P(¬B) × P(¬E)
= 0.70 × 0.90 × 0.001 × 0.999 × 0.998
= 0.0006281
Hence, a Bayesian network can answer any query about the domain by using the joint distribution.

5.18.1 The Semantics of Bayesian Network

There are two ways to understand the semantics of the Bayesian network, given below :

1. To understand the network as the representation of the joint probability distribution. This is helpful in understanding how to construct the network.
• One way to define what the network means is to define how it represents a particular joint distribution over all variables. To do so, we must first retract what we said earlier about the parameters associated with each node.
• We stated that those parameters correspond to conditional probabilities P(Xi | Parents(Xi)); while this is true, we should think of them as numbers θ(Xi | Parents(Xi)) until we assign semantics to the network as a whole.
• A generic entry in the joint distribution is the probability of a conjunction of particular assignments to each variable, such as P(X1 = x1 ∧ ... ∧ Xn = xn). We use the notation P(x1, ..., xn) as an abbreviation for this.
• The value of this entry is given by the formula
P(x1, x2, ..., xn) = Π (i = 1 to n) θ(xi | parents(Xi))
where parents(Xi) denotes the values of Parents(Xi) that appear in x1, ..., xn. Thus, each entry in the joint distribution is represented by the product of the appropriate elements of the conditional probability tables (CPTs) in the Bayesian network.
• We can rewrite the above equation as
P(x1, x2, ..., xn) = Π (i = 1 to n) P(xi | parents(Xi))
• The next step is to explain how to construct a Bayesian network in such a way that the resulting joint distribution is a good representation of a given domain. First, we rewrite the entries in the joint distribution in terms of conditional probability, using the product rule :
P(x1, x2, ..., xn) = P(xn | xn−1, ..., x1) P(xn−1, ..., x1)
• Then we repeat the process, reducing each conjunctive probability to a conditional probability and a smaller conjunction. We end up with one big product :
P(x1, x2, ..., xn) = P(xn | xn−1, ..., x1) P(xn−1 | xn−2, ..., x1) ... P(x2 | x1) P(x1)
= Π (i = 1 to n) P(xi | xi−1, ..., x1)
• This is known as the chain rule. It holds true for any set of random variables.
• For every variable Xi in the network,
P(xi | xi−1, ..., x1) = P(xi | parents(Xi)),
provided that Parents(Xi) ⊆ {Xi−1, ..., X1}.
• The above equation says that the Bayesian network is a correct representation of the domain only if each node is conditionally independent of its other predecessors in the node ordering, given its parents.

2. To understand the network as an encoding of a collection of conditional independence statements. This is helpful in designing inference procedures.
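The joint-distribution computation above can be reproduced mechanically from the CPTs. In this illustrative sketch, the value P(J = T | A = F) = 0.05 is an assumed filler (it is not given in the text); the queried entry does not depend on it:

```python
# Evaluate P(M, J, A, not-B, not-E) for the alarm network by multiplying
# CPT entries, then check that the chain-rule products sum to 1.
from itertools import product

P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}   # P(J=T | A); 0.05 is an assumed value
P_M = {True: 0.70, False: 0.01}   # P(M=T | A)

def pr(p_true, value):
    """Probability that a Boolean variable takes `value`."""
    return p_true if value else 1 - p_true

def joint(b, e, a, j, m):
    return (pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a)
            * pr(P_J[a], j) * pr(P_M[a], m))

print(round(joint(False, False, True, True, True), 7))  # 0.0006281
total = sum(joint(*w) for w in product([True, False], repeat=5))
print(round(total, 6))  # 1.0
```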
--- --• ••• ••• •-- --- --- -o- n-l
n_ U_ n_ d_ er Uncertain
'.
I
11 Reas
... Pa
8
N~

II Artificial lntelll ence MU-Al & OS/ Electronics o. S..~


The Bayesian netw ork fails to defllle
• 0
'a. s .1 B.2 Adv anta ses and Dlsadvantaies of .
relauons hips for exam ple, defle ction of . cyclic
fi
hyes Ltn Belief Necwork . and fluid pres sure 1eld around a1:-OP....
1ilric
wings th • 11. 'h.
Ther e are many Bayesian . ·on depe nds on e
be1·1ef network defl ecU pressure •ric

...
adva ntag es and disadvantages. They are listed
Adv anta ges
below : press
ure is depe nden t on the deflection • anct
'ghtlY coupled prob1em w hich this netwo• 1t.t ts
ti
to define and mak e dec1S. .
1ons.
.tbc
r, fa,1.
.a

Ther e are a few advantages of Bayesian "'13


belief The network is expe nsiv e to build .
netw orks as it visualizes different probabilit
ies of the •
varia bles. Som e of them are: It performs poorly on high dimensional data.

• Grap hica l and visual networks provide a mode It is tough to inter pret and require c
l to •
visu alize the structure of the probabilities
deve lop desig ns for new models as well.
and functions to separate effec ts and causes.
~
Syllabus topic : Reasoning In Bellef Netw
• Relationships detennine the type of relationsh orka ,
ip
and the presence or absence of it betw
een
variables.
~t·. 5'.1
'"' ·~ ~"9 DE~
EJSIO
. :N·THE
- -ORY ~
• Com puta tions calculate complex proba __:_~ ~ II

prob lems efficiently.


bility
• Decision theory is base d on the axiom
s of
I
• Baye sian networks can investigate and tell
you probability and utility.
whe ther a particular feature is taken into a
not~ • Where probability theo ry prov ides a frame
for the decision-making process and can force
it
won
to inclu de that feature if necessary. This for coherent assig nme nt of beliefs
network With
will ensu re that all known features incomplete information, utili ty theory introduces
are a
inve stiga ted for deciding on a problem. set of principles for consistency amon
g
• Baye sian Networks are more extensible than
other
preferences and deci sion s.
netw orks and learning methods. Adding a A decision is an irrev ocab le allocation
new • of
piec e in the network requires only a resources under cont rol of the decision make
few r.
probabilities and a few edges in the graph. So,
it is • Preferences desc ribe a deci sion maker's relativ
an exce llent network for adding a new piece e
of valuations for poss ible state s of the world
data to an existing probabilistic model. ,~
outcomes.
• Toe grap h of a Bayesian Network is useful.
It is •
read able to both computers and humans; both The valuation of an outc ome may be based
can on the
inter pret the information, unlike some netw traditional attributes of mon ey and time, as
orks well
like neur al networks, which humans can't read. as on other dime nsio ns of value includ
ing
s» Disa dva ntag es pleasure, pain, life- year s, and computatio
nal
effo rt
• The mos t significant disadvantage is that there
is •
no universally acknowledged method Utility theory is base d on a set of simple axiom
for s
cons truct ing networks from data. There have or rules concerning choi ces unde r uncertainlY
been ·
man y developments in this regard, but Like the axioms of prob abili ty theory, these
there rules
hasn 't been a conqueror in a long time. are fairly intuitive.
The
desi gn of Bayesian Networks is hard to make
• The first set of axio ms conc erns preference
com pare d to other networks. It needs a lot s for
of outcomes und er cert aint y.
effort. Hence, only the person creating all
the 1.
netw ork can exploit causal influences. Neur The axiom of order ability asserts :
al bY
netw orks are an advantage compared to this outcomes are com para ble, even if descn
as ssible
they learn different patterns and aren't limited many attributes. Thu s, for any two Po
to
only the creator.
or 000
outcomes x and y, eithe r one prefe rs JC to YtbeDl-
(MS-126}
prefers Y to x, or one is indif feren t between
~
~Te ch-N eo Publications...A SACHIN SHAHventul'
-...J~nl=~~~
---:--Reaso ~=~l
Under .,!::,
Uncer " ?,L
tal a:~~ o~~, :~-21
-~~~~M~U·=A~I&~D :S~/~ E~le ct-=r o=n: lcs~~
,~tell.., all ·"s
• o..
'" of transitivity
aJt.l·o..,
asserts that these • The power of this result is that 1t
preferences for complex and uncertain
i 'lb' . gs are consistent; that is, if one prefers x to
~ to z, then one prefers x to z. combinations of outcomes with multiple attributeS
a,11d y
1 eeond set of axioms describes preferences to be computed from preferences expressed for
'lb' 5 uncertainty. They involve the notion of a simple components. Thus, it may be used as a tool
under an uncertain situation with more than one possible outcome (a lottery). Each outcome has an assignable probability of occurrence.
1. The monotonicity axiom says that, when comparing two lotteries, each with the same two alternative outcomes but different probabilities, a decision maker should prefer the lottery that has the higher probability of the preferred outcome.
2. The decomposability axiom says that a decision maker should be indifferent between lotteries that have the same set of eventual outcomes, each with the same probabilities, even if they are reached by different means. For example, a lottery whose outcomes are other lotteries can be decomposed into an equivalent one-stage lottery using the standard rules of probability.
3. The substitutability axiom asserts that if a decision maker is indifferent between a lottery and some certain outcome (the certainty equivalent of the lottery), then substituting one for the other as a possible outcome in some more complex lottery should not affect her preference for that lottery.
4. Finally, the continuity axiom says that if one prefers outcome x to y, and y to z, then there is some probability p at which one is indifferent between getting the intermediate outcome y for sure and a lottery with a p chance of x (the best outcome) and a (1 - p) chance of z (the worst outcome).

The consistency criteria embodied in classical decision theory can be stated as follows : given a set of preferences expressed as a utility function, beliefs expressed as probability distributions, and a set of decision alternatives, a decision maker should choose the course of action that maximizes expected utility.

Decision theory helps people think about complex choices by decomposing them into simpler choices.
• Decision theory provides an appealing approach to analytic tasks, particularly those involving inference and decision making under uncertainty. Consequently, we focus on expert systems for analytic tasks.
• Decision theory can also be relevant to synthetic tasks, because useful alternatives often must be selected from large numbers of options.

5.19.1 Types of Decision Making

1. Decision making under certainty : The outcome of a decision alternative is known (i.e., there is only one state of nature).
2. Decision making under risk : The outcome of a decision alternative is not known, but its probability is known.
3. Decision making under uncertainty : The outcome of a decision alternative is not known, and even its probability is not known.

A few criteria (approaches) are available for decision makers to select from, according to their preferences and personalities, under uncertainty.

1. Maximax Criterion
• An adventurous and aggressive decision maker may choose the act that would result in the maximum payoff possible.
• This is viewed as an optimistic approach, "Best of bests".
► Step 1 : Pick the maximum payoff of each alternative.
► Step 2 : Pick the maximum of those maximums in Step 1; its corresponding alternative is the decision.
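The expected-utility rule stated earlier in this section (choose the action whose probability-weighted utility is largest) can be illustrated with a small numeric sketch. The alternatives, probabilities, and utilities below are invented purely for illustration:

```python
# Illustrative only: the utilities and probabilities are made-up numbers.
# Each decision alternative is a lottery: a list of (probability, utility) pairs.
lotteries = {
    "buy insurance":  [(0.99, 90), (0.01, 80)],      # small certain cost, small loss on a claim
    "skip insurance": [(0.99, 100), (0.01, -2000)],  # no cost, but a rare large loss
}

def expected_utility(lottery):
    """Sum of probability-weighted utilities of a lottery."""
    return sum(p * u for p, u in lottery)

# The consistent decision maker picks the alternative with maximum expected utility.
best = max(lotteries, key=lambda a: expected_utility(lotteries[a]))
for name, lot in lotteries.items():
    print(name, expected_utility(lot))
print("choose:", best)
```

Here buying insurance wins (89.9 versus 79.0) even though skipping has the higher best-case payoff, which is exactly the behaviour the expected-utility criterion is meant to capture.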
(MS-126)    Tech-Neo Publications...A SACHIN SHAH Venture
Artificial Intelligence (MU - AI & DS / Electronics)    Reasoning Under Uncertainty

2. Maximin Criterion
• This is also called the Waldian criterion.
• This criterion of decision making stands for choice between alternative courses of action assuming a pessimistic view of nature.
• This is viewed as a pessimistic approach, "Best of worsts".
► Step 1 : Pick the minimum payoff of each alternative.
► Step 2 : Pick the maximum of those minimums in Step 1; its corresponding alternative is the decision.

3. Minimax Criterion
• Application of the minimax criterion requires a table of losses, or table of regret, instead of gains.
• Regret is the amount you give up by not picking the best alternative in a given state of nature.
► Step 1 : Construct a 'regret table'.
► Step 2 : Pick the maximum regret of each row in the regret table.
► Step 3 : Pick the minimum of those maximums in Step 2; its corresponding alternative is the decision.

4. Laplace Criterion (Equal Likelihood)
• The decision maker makes a simple assumption that each state of nature is equally likely to occur, computes the average payoff for each alternative, and chooses the decision with the highest average payoff.
► Step 1 : Calculate the average payoff for each alternative.
► Step 2 : The alternative with the highest average is the decision.

5. Hurwicz Alpha Criterion (Rationality or Realism)
• This method is a combination of the Maximin and Maximax criteria.
• Also known as the criterion of rationality : neither too optimistic nor too pessimistic.
► Step 1 : Calculate the Hurwicz value for each alternative.
► Step 2 : Pick the alternative with the largest Hurwicz value as the decision.
Hurwicz value of an alternative = (row max)(α) + (row min)(1 - α), where α (0 ≤ α ≤ 1) is called the coefficient of realism.
Chapter Ends...
Module 6

CHAPTER 6 : Planning and Learning

6.1 The planning problem, Partial order planning, Total order planning.
6.2 Learning in AI, Learning Agent, Concepts of Supervised, Unsupervised, Semi-Supervised Learning, Reinforcement Learning, Ensemble Learning.
6.3 Expert Systems, Components of Expert System : Knowledge base, Inference engine, User interface, Working memory, Development of Expert Systems.

6.1 Components of Planning ................ 6-3
UQ. Explain the different components of planning system. OR Given the components of planning system, briefly explain them. OR Explain the various components of planning system. How can you represent a planning action? ................ 6-3
6.1.1 Problem Solving Vs. Planning ................ 6-4
GQ. Compare Problem Solving and Planning ................ 6-4
6.2 Problems Associated with Planning Agent ................ 6-5
UQ. Write short notes on : (i) Planning agent (ii) State, goal and action representation ................ 6-5
6.2.1 Planning Agent ................ 6-5
6.2.2 Three Key Ideas behind Planning ................ 6-5
6.3 Partial Order Planning ................ 6-5
UQ. Explain a partial order planner with an example (MU - Q. 5(c), Dec. 15; Q. 4(b), May 17; May 2019) ................ 6-6
6.3.1 Causal Link in Partial Order Planning ................ 6-6
6.3.2 Working of a Partial Order Planner ................ 6-7
GQ. How does partial order planner work? ................ 6-7
6.3.3 Example for Partial Order Planner ................ 6-7
GQ. Give an example for partial order planner ................ 6-8
6.4 Conditional Planning ................ 6-8
6.4.1 Difference in Hierarchical and Conditional Planning ................ 6-9
6.5 Representation ................ 6-10
6.6 Planning with State-Space Search ................ 6-10
6.6.1 Forward State-Space Search ................ 6-11
6.6.2 Backward State-Space Search ................ 6-11
6.10 Supervised Learning ................ 6-13
UQ. What is supervised learning? Give example of each ................ 6-13
6.10.1 How Supervised Learning Works? ................ 6-14
6.10.2 Advantages of Supervised Learning ................ 6-15
6.10.3 Disadvantages of Supervised Learning ................ 6-15
6.11 Unsupervised Learning ................ 6-16
GQ. What are the types of unsupervised learning? ................ 6-16
GQ. What are the advantages and disadvantages of unsupervised learning? ................ 6-16
UQ. What is unsupervised learning? Give example of each ................ 6-16
6.11.1 Types of Unsupervised Learning Algorithm ................ 6-17
6.11.2 Advantages of Unsupervised Learning ................ 6-17
6.11.3 Disadvantages of Unsupervised Learning ................ 6-17
6.11.4 Difference between Supervised and Unsupervised Learning ................ 6-17
GQ. What is the difference between supervised learning and unsupervised learning? ................ 6-17
6.12 Semi-Supervised Learning ................ 6-18
6.13 Ensemble Learning ................ 6-19
GQ. Explain the concept of Ensemble learning. (5 Marks) ................ 6-19
6.14 Cross-validation in Machine Learning ................ 6-19
6.15 Stumping Ensembles ................ 6-19
6.15.1 For Continuous Features ................ 6-20
6.15.2 Remarks ................ 6-20
6.16 Reinforcement Learning ................ 6-20
GQ. What is Reinforcement Learning? Explain with an example ................ 6-20
6.16.1 Approaches to Implement Reinforcement Learning ................ 6-20
GQ. What are the approaches for Reinforcement Learning? ................ 6-21
6.16.2 Challenges of Reinforcement Learning ................ 6-21
6.16.3 Applications of Reinforcement Learning ................ 6-22
6.16.4 Reinforcement Learning vs. Supervised Learning ................ 6-22
GQ. What is the difference between Reinforcement Learning and Supervised Learning? ................ 6-23
6.17 General Learning Model ................ 6-23
6.18 Techniques used in Learning ................ 6-24
6.18.1 Learning by Memorization (Rote Learning) ................ 6-24
6.18.2 Learning by Direct Instruction ................ 6-24
6.18.3 Learning by Analogy ................ 6-24
6.18.4 Learning by Induction ................ 6-24
6.18.5 Learning by Deduction ................ 6-25
6.18.6 Neural Network ................ 6-25
6.19 Definition of Expert System ................ 6-25
GQ. What is an expert system? ................ 6-25
GQ. Define expert system ................ 6-25
6.20 Techniques of Knowledge Acquisition ................ 6-26
GQ. Explain knowledge acquisition process ................ 6-26
GQ. Explain techniques of knowledge acquisition ................ 6-26
6.22.1 Inference Engine (Rules of Engine) ................ 6-28
• Chapter Ends

Syllabus Topic : The Planning Problem


6.1 COMPONENTS OF PLANNING

UQ. Explain the different components of planning system. OR Given the components of planning system, briefly explain them. OR Explain the various components of planning system. How can you represent a planning action?

Different components of a planning system are as follows :
1. Choose the best rule to apply
2. Apply the chosen rule
3. Detecting when a solution has been found
4. Detecting dead ends
5. Repairing an almost correct solution

► 1. Choose the best rule
Isolate a set of differences between the desired goal state and the current state, and identify those rules that are relevant to reducing those differences (means-ends analysis). If several rules are found, then choose the best using heuristic information.

► 2. Apply rules
In simple systems, applying rules is easy : each rule simply specifies the problem state that would result from its application. In complex systems, we must be able to deal with rules that specify only a small part of the complete problem state. One way is to describe, for each action, each of the changes it makes to the state description.

► 3. Detecting a solution
A planning system has succeeded in finding a solution to a problem when it has found a sequence of operators that transforms the initial problem state into the goal state. In simple problem-solving systems we know the solution by a straightforward match of the state description. But in complex problems, where different reasoning mechanisms can be used to describe the problem states, that reasoning mechanism can be used to discover when a solution has been found.

► 4. Detecting dead ends
A planning system must be able to detect when it is exploring a path that can never lead to a solution. The same reasoning mechanism can be used to detect dead ends. In a search process reasoning forward from the initial state, it can prune any path that leads to a state from which the goal state cannot be reached. Similarly, in backward reasoning, some states can be pruned from the search space.

► 5. Repairing an almost correct solution
In completely decomposable problems we can solve the sub-problems and combine the sub-solutions to yield a solution to the original problem. For nearly decomposable problems, one way is to use the means-ends analysis technique to minimize the difference between the initial state and the goal state. One of the better ways is to represent knowledge about what went wrong and then apply a direct patch.

6.1.1 Problem Solving Vs. Planning

GQ. Compare Problem Solving and Planning.

1. Planning : The task of coming up with a sequence of actions that will achieve a goal is called planning.
   Problem solving : Problem solving is the systematic search through a range of possible actions in order to reach some predefined goal or solution.
2. Planning : Planning is typically viewed as a generic term for problem solving because it deals with search at an abstract level.
   Problem solving : Problem solving comprises a standard search process.
3. Planning : Planning is involved in plan generation.
   Problem solving : Problem solving is more concerned with plan execution.
4. Planning : The planner is viewed as the producer or generator of the solution.
   Problem solving : Problem solving just demonstrates one specific solution.
5. Planning : Backward planning is typical for planning.
   Problem solving : In problem solving, usually a forward approach is adopted.
6. Planning : The pre-requirements are almost the same for a variety of problems in planning.
   Problem solving : Problem solving is of two types - special purpose and general purpose. A special-purpose method is for a specific problem; a general-purpose method is for a variety of problems. The A* algorithm is an example.
7. Planning : STRIPS (Stanford Research Institute Problem Solver) and PDDL (Planning Domain Definition Language) are some examples of planning systems.
I In searches, operators are used simply to genent
1 A simple planning agent is very similar to
I successor states and we cannot look "inside' 111
problem- solving agents in that it constructs plans
operator to see how it's defined. The goal-ft.II
1' that achieve its goals, and then executes them.
, .· The limitations of the problem solving approach predicate also is used as a "black box" to test if a stall
is a goal or not. The search cannot use propettics d
.i motivates the design of planning systems.
how a goal is defined in order to reason about findi.og
q-.. To solve a planning problem using a state- path to that goal.
, space search approach we would let the
Planning is considered different from problelll
l

; 'Initial state = initial situation solving because of the difference in the way tbeY
• Goal-test predicate = goal state description' represent states, goals, actions, and the diffetellCCS i.o
the way they construct action sequences.
• Successor function computed from the set of
operators. G'
Remem ber the search-based probltll'
r; Once a goal is found, solution plan is the solver had four basic eleme nts
sequence of operators in the path from the start • Representations of actions : ProgralllS dial
node to the goal node.
develop successor state descriptions IVhidl
represent actions. ----

(MS-126) , •. ·1 .1,...
(il Tech-Neo Publications...A SACHIN SHAH yt{d!ft
• Representation of states : Every state description is complete. This is because a complete description of the initial state is given, and actions are represented by a program.
• Representation of goals : A problem-solving agent has only information about its goal, which is in terms of a goal test and the heuristic function.
• Representation of plans : In problem solving, the solution is a sequence of actions.

An example drill for a problem-solving exercise needs to specify :
• Initial state : The agent is at home without any objects that he is wanting.
• Operator set : Everything the agent can do.

6.2.1 Planning Agent

• The actual branching factor would be in the thousands or millions. The heuristic evaluation function can only choose states to determine which one is closer to the goal; it cannot eliminate solutions from consideration.
• The agent makes guesses by considering actions, and the evaluation function ranks those guesses. The agent picks the best guess, but then has no idea what to try next and therefore starts guessing again.
• It considers sequences of actions beginning from the initial state. The agent is forced to decide what to do in the initial state first, where the possible choices are to go to any of the next places. Until the agent decides how to acquire the objects, it can't decide where to go. Planning emphasizes what is in the operator and goal representations.

6.2.2 Three Key Ideas behind Planning

There are three key ideas behind planning :
1. To "open up" the representations of states, goals, and operators so that a reasoner can more intelligently select actions when they are needed.
2. The planner is free to add actions to the plan wherever they are needed, rather than in an incremental sequence starting at the initial state.
3. Most parts of the world are independent of most other parts, which makes it feasible to take a conjunctive goal and solve it with a divide-and-conquer strategy.

• An intelligent agent can act independently and has well-defined goals. It can adapt its behavior to its environment. A general-purpose system, like a human, can perform a variety of different tasks under conditions that may not be known a priori. An agent must be aware of its many goals and may have to behave in a changing world.

Syllabus Topic : Partial Order Planning, Total Order Planning

6.3 PARTIAL ORDER PLANNING

UQ. Explain a partial order planner with an example. (MU - Q. 5(c), Dec. 15; Q. 4(b), May 17; May 2019, 10 Marks)

• Partial order planning is an approach to automated planning that maintains a partial ordering between actions and only commits to an ordering between actions when forced to.
• Also, the planning does not specify which action will come out first when two actions are processed.
• A partial order plan, or partial plan, which specifies all actions that need to be taken but specifies the order between actions only when necessary, is the result of a partial order planner.
• This is sometimes also called a non-linear planner, which is a misnomer : such planners often produce a linear plan.
• A Partial Ordering is a less-than relation that is transitive and asymmetric.

Partial order planner components
1. A set of actions (also known as operators).
2. A partial order : A partial-order plan is a set of actions together with a partial ordering, representing a "before" relation on actions, such that any total ordering of the actions consistent with the partial ordering will solve the goal from the initial state. Write act0 < act1 if action act0 is before action act1 in the partial order.
3. A set of causal links : It specifies which actions meet which preconditions of other actions. Alternately, a set of bindings between the variables in actions.
4. A set of open preconditions : For uniformity, treat start as an action that achieves the relations that are true in the initial state, and treat finish as an action whose precondition is the goal to be solved. The pseudo-action start is before every other action, and finish is after every other action. The use of these as actions means that the algorithm does not require special cases for the initial situation and for the goals. When the preconditions of finish hold, the goal is solved.
5. In order to keep the possible orders of the actions as open as possible, the set of other conditions and causal links must be as small as possible. A plan is itself a solution if the set of open preconditions is empty.
6. An action, other than start or finish, will be in a partial-order plan to achieve a precondition of an action in the plan. Each precondition of an action in the plan is either true in the initial state, and so achieved by start, or there will be an action in the plan that achieves it.
7. We must ensure that the actions achieve the conditions they were assigned to achieve. Each precondition P of an action act1 in a plan will have an action act0 associated with it such that act0 achieves precondition P for act1.

• The triple (act0, P, act1) is a causal link. The partial order specifies that action act0 occurs before action act1, which is written as act0 < act1. Any other action A that makes P false must be either before act0 or after act1.

Linearization of partial order plan
A linearization of a partial order plan is a total order plan derived from that particular partial order plan; in other words, both plans consist of the same actions, with the order in the linearization being a linear extension of the partial order in the original partial order plan.

For example, a plan for baking a cake might start as follows :
• Go to store,
• Obtain flour, get milk, etc.,
• Pay for all the goods,
• Go to kitchen.
This is a partial plan because the order for finding flour and milk is not specified; the agent can wander around the store, accumulating items, until its shopping list is complete.

6.3.1 Causal Link in Partial Order Planning

• Each causal link specifies a pair of steps and a proposition, where the proposition is a postcondition of the first step and a precondition of the second step. The first step is ordered before the second step.
• If a precondition of a step is not supported by a causal link, then it is a flaw in a partial-order plan.
• Any planning algorithm that can place two actions into a plan without specifying which should come first is called a partial-order planner.
• POP (Partial-Order Planner) is a regression planner; it uses problem decomposition; it searches plan space rather than state space; it builds partially-ordered plans; and it operates by the principle of least commitment.

6.3.2 Working of a Partial Order Planner

GQ. How does partial order planner work ?

1. Begin with the partial order start < finish. The planner maintains an agenda, which is a set of (P, A) pairs, where A is an action in the plan and P is an atom that is a precondition of A that must be achieved. Initially the agenda contains pairs (G, finish), where G is an atom that must be true in the goal state.
2. At each stage in the planning process, a pair (P, act1) is selected from the agenda, where P is a precondition for action act1.
3. Then an action, act0, is chosen to achieve P. That action is either already in the plan (it could be the start action), or it is a new action that is added to the plan. Action act0 must happen before act1 in the partial order. A causal link is added to record that act0 achieves P for action act1. Any action in the plan that deletes P must happen either before act0 or after act1.
4. If act0 is a new action, its preconditions are added to the agenda, and the process continues until the agenda is empty. This is a non-deterministic procedure : the "choose" and the "either ... or ..." form choices that must be searched over.

A plan in POP (whether it be a finished one or an unfinished one) comprises the following :
1. A set of plan steps : each of these is a STRIPS operator, but with the variables instantiated.
2. A set of ordering constraints : Si < Sj means step Si must occur sometime before Sj (not necessarily immediately before).
3. A set of causal links : Si --c--> Sj means step Si achieves precondition c of step Sj.

So a plan comprises actions (steps) with constraints (ordering and causality) on them. The algorithm needs to start off with an initial plan. This is an unfinished plan, which we will refine until we reach a solution plan. The initial plan comprises two dummy steps, called Start and Finish. Start is a step with no preconditions, only effects : the effects are the initial state of the world. Finish is a step with no effects, only preconditions : the preconditions are the goal.

6.3.3 Example for Partial Order Planner

GQ. Give an example for partial order planner.

Fig. 6.3.1 : Partial order planner (blocks-world initial state and goal state)

The above initial state can be represented in POP as the following initial plan :

Plan(STEPS : {S1 : Op(ACTION : Start, EFFECT : clear(b) /\ clear(c) /\ on(c, a) /\ ONTABLE(a) /\ ONTABLE(b) /\ ARMEMPTY), S2 : Op(ACTION : Finish, PRECOND : on(c, b) /\ on(a, c))}, ORDERINGS : {S1 < S2}, LINKS : {})

This initial plan is refined using POP's plan refinement operators. As we apply them, they will take us from an unfinished plan to a less and less unfinished plan, and ultimately to a solution plan.

There are four operators, falling into two groups :
1. Goal achievement operators :
- Step addition : add a new step Si which has an effect c that can achieve an as yet unachieved precondition of an existing step Sj. Also add the following constraints : Si < Sj, Si --c--> Sj, and Start < Si < Finish.
- Use an effect c of an existing step Si to achieve an as yet unachieved precondition of another existing step Sj, and add just two constraints : Si < Sj and Si --c--> Sj.
Artificial lntelli ence MU • Al & OS / Electronics Plannin and Leamin ... Pa 8 No. ~
It takes place in fully observable envtronrn
2. Causal links: must be protected from threats, i.e. • where the c~ent state of the agent in kno:
steps that delete (or negate or clobber) the
environment 1s fully observable.
protected condition. If S threatens link Si c - -
SJ : - PROMOTE: add the constralnt S < Si; or • Outcome of 8:" ac~on cannot be de~c d so the
- DEMO TE: add the constraint Sj < S the goal environment 1s said to be non-deternurustic.
achievement operators ought to be obvious
~ 6.4.1 Difference in Hlerucblc.11 ~
enough. They find preconditions of steps in the
Condld oul Pwanln s
unfinished plan that is not yet achieved.
The two goal achievement operators remedy this 1. Hierarchical Plannin g
either by adding a new step whose effect achieves the
precondition, or by exploiting one of the effects of a (a) In hierarcnical planning, at each level of
step that is already in the plan. nierarchy the objective functions are reduced
The promotion and demotion operators may be to a small number of activities at the next
less clear. Why are these needed? POP uses problem- lower level.
decomposition: faced with a conjunctive precondition, (b) The computational cost of finding the corrca
it uses goal achievement on each conjunct separately.
But, as we know, this brings the risk that the steps we way to arrange these activities for the CWTcot
add when achieving one part of a precondition might problem is small.
interfere with the achievement of another precondition. (c) Hierarcnical methods can result in linear time.
And the idea of promotion and demotion is to add (d) The initial plan of hierarchical planning
ordering constraints so that the step cannot interfere describes the complete problem which is a
with the achievement of the precondition. Finally, we
very high level description.
have to be able to recognize when we have reached a
solution plan : a finished plan. (e) The plans are refined by applying actioo
decomposition.
Q' A solutio n plan Is one In which
(f) Each action decomposition reduces a high
• Every precondition of every step is achieved by level description to some of the individual
the effect of some other step and all possible
lower level descriptions.
clobberers have been suitably demoted or
promoted. (g) The action decomposers describe how 10
• There are no contradictions in the ordering implement the actions.
constraints, e.g. disallowed is Si < SJ and SJ < Si 2. CondltJonal Plannin g
; also disallowed is Si < SJ , SJ < Sk and Sk < Si (a) It deals with the planning by some
.The solutions may still be partlaJly-ordered.
This retains flexibility for as long as possible. appropriate conditions.
Only immediately prior to execution will the plan (b) The agents plan first and then execute the
need linearization, i.e. the imposition of arbitrary plan that was produced.
orderin g constraints on steps that are not yet
(c) The agents find out which part of the plan to
ordered. (In fact, if there's more than one agent,
or if there's a single agent but it is capable of execute by including sensing actions in the
multitasking, then some linearization can be plan to test for the appropriate conditions.
avoided: steps can be carried out in parallel.) (d) Conditional planning works regardless of the
outcome of an agent
~ ., 6.4. CO~DITIONAL PLANNING
(e) It takes place in fully observable Envir0runen1
Conditional planning has to work regardless of where the current state of the agent in k:nOWll

outcom e of an action. environment is fully observable.

(MS-126)
Iii Tech-Neo Publications...A $ACHIN SHAH vennst
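The "no contradictions in the ordering constraints" condition above can be checked mechanically: a set of ordering constraints is contradiction-free exactly when the constraint graph has no cycle, which a depth-first search can detect. This is an illustrative sketch (the function and step names are assumptions, not part of the POP algorithm as stated in the text):

```python
def has_contradiction(constraints):
    """Return True if ordering constraints like ('Si', 'Sj') for Si < Sj
    contain a cycle, i.e. a contradiction such as Si < Sj and Sj < Si."""
    graph = {}
    for before, after in constraints:
        graph.setdefault(before, []).append(after)
        graph.setdefault(after, [])
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    colour = {node: WHITE for node in graph}

    def visit(node):
        colour[node] = GREY
        for nxt in graph[node]:
            if colour[nxt] == GREY:        # back edge: a cycle exists
                return True
            if colour[nxt] == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in graph)

# Si < Sj, Sj < Sk and Sk < Si is disallowed:
print(has_contradiction([("Si", "Sj"), ("Sj", "Sk"), ("Sk", "Si")]))  # True
print(has_contradiction([("Si", "Sj"), ("Sj", "Sk")]))                # False
```

A consistent partial order can then be linearized (e.g. by topological sort) only immediately before execution, as the text suggests.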
(f) Since the outcome of actions cannot be determined, the environment is said to be non-deterministic.
(g) A "state node" is represented with a "square" and a "chance node" is represented with a "circle".
(h) Here, we can check what is happening in the environment at predetermined points of the plan to deal with ambiguous situations.
(i) The agent needs to take some action at every state and must be able to handle every outcome for the action it takes.

As an alternative to outright refusal, the planning authority can grant permission subject to one or more conditions. Planning conditions sometimes limit the use or occupation of land or premises to a named person or company.

Syllabus Topic : Total Order Planning

6.5 GOAL REPRESENTATION

A goal is most often a partially specified state. A state, or say a proposition, is said to achieve or satisfy the given goal if it consists of all the objects required for the goal, and possibly some others too. As an example, if the goal is kind ∧ hardworking, then a state that has kind ∧ hardworking ∧ pretty fulfils the goal.
Example : Rich ∧ Famous ∧ Miserable satisfies the goal Rich ∧ Famous.

Action Representation

Whenever we decide to do something, we are aware of the state we are in and what the possible effects are. When it comes to the mapping of actions for an agent, or in the case of robots, the pre and post situations need to be specified. These can also be called preconditions, and the after-effects are called post-conditions.

For example, an action to drive from one place to another can be mapped as follows:
• Action : drive(c, from, to)
• Pre-condition : at(c, from) ∧ car(c)
• Post-condition : ¬at(c, from) ∧ at(c, to)

The action representation in this case is an action schema:
➔ Action name
➔ Precondition
➔ Effects

Example:
• Action(Fly(p, from, to))
➔ Pre-condition : At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
➔ Effect : ¬At(p, from) ∧ At(p, to)

Sometimes, the effects are split into an ADD list and a DELETE list:
At(WHI, LNK), Plane(WHI), Airport(LNK), Airport(OHA)
Fly(WHI, LNK, OHA)
At(WHI, OHA), ¬At(WHI, LNK)

Here, in the post-condition we have written ¬at(c, from), which indicates that the state is deleted or is to be removed. In some cases, add and delete lists can also be used.

In the case of a state-variable representation, the state comprises different state variables. The action here is defined as a partial function on the states. The actions can be actually applied using the following criteria:
1. A substitution is to be identified for the variables. That is, for the current state that exists, identify an action whose pre-condition is satisfied by the current state (the pre-condition can be a subset of the current state).
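The drive action above can be sketched in code by representing a state as a set of ground literals and splitting the effects into add and delete lists. The helper names are illustrative assumptions, not a fixed API:

```python
# A minimal, illustrative STRIPS-style action application: an action is
# applicable when its precondition is a subset of the current state;
# applying it removes the deleted literals and adds the positive ones.
def applicable(state, precond):
    return precond <= state            # subset test on literal sets

def apply_action(state, precond, add, delete):
    if not applicable(state, precond):
        raise ValueError("precondition not satisfied")
    return (state - delete) | add

state   = {"at(c, home)", "car(c)"}
precond = {"at(c, home)", "car(c)"}    # pre-condition of drive(c, home, office)
add     = {"at(c, office)"}            # positive post-condition
delete  = {"at(c, home)"}              # negated post-condition

print(sorted(apply_action(state, precond, add, delete)))
# ['at(c, office)', 'car(c)']
```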
2. Apply the substitution (for whatever part of the current state it is applicable).
3. Add the post-condition (effects) to the remaining subset of the current state, if any.

6.6 PLANNING WITH STATE SPACE SEARCH

We highlight the main planning method to solve AI problems.
State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the intention of finding a goal state with the desired property.
For finding the solution one can make use of an explicit search tree that is generated by the initial state and the successor function, which together define the state space.

Construction of State Space

(i) The root of the search tree is a search node corresponding to the initial state. In this state we can check whether the goal is reached.
(ii) If the goal is not reached, we consider another state. This can be done by expanding from the current state by applying the successor function, which generates new states. From this, we get multiple states.
(iii) For each one of these, again we need to check the goal test, or repeat the expansion of each state.
(iv) The choice of which state to expand is determined by the search strategy.
(v) It is possible that some state cannot lead to the goal state; such a state we should not expand.

Example : state-search example : Making coffee
(1) Take some of the boiled water in a cup and add the necessary amount of instant coffee powder to make the decoction.
(2) Add milk powder to the remaining boiling water to make milk.
(3) Mix decoction and milk.
(4) Add a sufficient quantity of sugar and the coffee is ready.

The most straightforward approach is to use state space search. Since the descriptions of actions in a planning problem specify both preconditions and effects, it is possible to search in both directions:
(1) Forward from the initial state, or
(2) Backward from the goal.

6.6.1 Forward State-Space Search

This search is also called progressive planning, because it moves in the forward direction. It is similar to the problem-solving approach. We begin with the problem's initial state, considering sequences of actions until we reach a goal state.
The formulation of the state-space search planning problem is as follows:
(a) Forward search is an algorithm that searches forward from the initial state of the world, to try to find a state that satisfies the goal formula.
(b) The initial state of the search is the initial state from the planning problem. Here, each state will be a set of positive ground literals; literals not appearing are taken as false.
(c) The actions which are applicable to a state are all those whose preconditions are satisfied. The successor state is generated from an action by adding the positive effect literals and deleting the negative effect literals.
(d) The goal test checks whether the state satisfies the goal of the planning problem.
(e) The step cost of each action is taken as 1. Different costs for different actions may be allowed.
(f) Forward state-space search is not very practicable because of a big branching factor. Forward search considers all applicable actions (i.e. all relevant and non-relevant actions are considered).
(g) A forward planner searches the state-space graph from the initial state to the goal description.
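The forward (progressive) formulation above can be sketched as a breadth-first search over states represented as sets of positive ground literals. The two actions and their literals are illustrative assumptions modelled loosely on the Fly/cargo examples in this chapter:

```python
from collections import deque

# Each action: (name, precondition set, add set, delete set).
actions = [
    ("Fly(P, LNK, OHA)", {"At(P, LNK)"}, {"At(P, OHA)"}, {"At(P, LNK)"}),
    ("Unload(C, P, OHA)", {"In(C, P)", "At(P, OHA)"}, {"At(C, OHA)"}, {"In(C, P)"}),
]

def forward_search(initial, goal):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # goal test: all goal literals hold
            return plan
        for name, pre, add, dele in actions:   # expand with applicable actions
            if pre <= state:
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                                # no plan reaches the goal

print(forward_search({"At(P, LNK)", "In(C, P)"}, {"At(C, OHA)"}))
# ['Fly(P, LNK, OHA)', 'Unload(C, P, OHA)']
```

Note how the closed-world assumption appears naturally: anything not in the state set is treated as false.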
State-Space Representation

• A state space essentially consists of a set of nodes representing each state of the problem, arcs between nodes representing the moves from one state to another, an initial state and a goal state.
• Each state space takes the form of a tree or a graph.
• Factors that determine which search algorithm or technique will be used include the type of the problem and how the problem can be represented.

We have presented state-space search as a forward search method, but it is also possible to search backward, from the set of states that satisfy the goal to the initial state.

6.6.2 Backward State-Space Search

(a) In backward state-space search planning we want to generate possible predecessors of a given goal state, working backwards toward the initial state.
(b) If a solution exists, it should be found by a backward search, which allows only relevant actions.
(c) The restriction to relevant actions only means that backward search often has a much lower branching factor than forward search.
(d) Goal states are often incompletely specified. A goal expresses only what is desired in the final state, rather than a complete description of the final state.
(e) It is also called regression planning, because backward state-space search finds the solution from the goal back to the actions.
(f) There are many known states from which the goal state can be reached. Reaching any one of those nodes, we can reach the start node.
(g) In general, backward search works only when we know how to regress from a state description to the predecessor state description. For example, it is hard to search backwards for a solution to the n-queens problem, because there is no easy way to describe the states that are one move away from the goal.
(h) To obtain full advantage of backward search, we need to deal with partially uninstantiated actions.

For example, suppose the goal is to deliver a specific piece of cargo to Delhi. This suggests the action unload(C, P, Delhi):
Action (unload (C, P, Delhi))
Precondition : In (C, P) ∧ At (P, Delhi) ∧ Cargo (C) ∧ Plane (P) ∧ Airport (Delhi)
Effect : At (C, Delhi) ∧ ¬In (C, P)

Syllabus Topic : Learning Agent

6.7 TWO TYPES OF AGENTS

1. Physical agents (usually known as robots)
2. Software agents (sometimes known as softbots)

1. Physical agents

These are physical artefacts and act in a physical environment, e.g. send a physical agent into a dangerous building. It must be able to see, it must know where it is, it must be able to move, plan its goals, execute its goals (physically), re-plan if necessary, and communicate with other (possibly human) agents.

Physical agents consist of some or all of the following:
(a) Computers
- Top-level controller
- Low-level controller, e.g. to manipulate a hand
(b) Sensors
- Establish contact / non-contact with objects in the environment
- To "see"
(c) Effectors
- "Arms", "hands", "feet"
(d) Auxiliary equipment
- Tools, containers to put things in, tables to put things on

Stages of development of physical agents :
• Slave manipulator operated by human master
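Regressing a goal through an action, as backward search does, can be sketched as follows: the predecessor goal keeps the goal literals the action does not add, and includes the action's preconditions. This is an illustrative sketch using the unload example above; the function name is an assumption:

```python
# Illustrative goal regression for backward (regression) planning:
# regressing goal g through action a gives (g - ADD(a)) | PRECOND(a).
# The action is relevant only if it adds some goal literal and
# deletes none of them.
def regress(goal, precond, add, delete):
    if not (goal & add) or (goal & delete):
        return None                      # action irrelevant or conflicting
    return (goal - add) | precond

goal    = {"At(C, Delhi)"}
precond = {"In(C, P)", "At(P, Delhi)", "Cargo(C)", "Plane(P)", "Airport(Delhi)"}
add     = {"At(C, Delhi)"}
delete  = {"In(C, P)"}

print(sorted(regress(goal, precond, add, delete)))
# ['Airport(Delhi)', 'At(P, Delhi)', 'Cargo(C)', 'In(C, P)', 'Plane(P)']
```

Backward search then repeats this regression until the predecessor goal is satisfied by the initial state.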


• Limited sequence manipulator (hard to adjust)
• Teach-replay robot
• Computer-controlled robot
• Intelligent robot

Incentives for development of physical agents
• Job undesirability
• Lethal (radiation)
• Harmful (paint spraying, chemical handling)
• Risky (fire-fighting, combat)
• Strenuous (heavy loads, visual inspection)
• Noisy (riveting, hammering, forging)
• Boring (sorting, assembling)

2. Software agents

These are software programs and act in computers or computer networks. Simple examples are programs that sort your incoming mail according to a given priority, or that store documents in the correct folder (previously done by a human agent, a secretary).

Three Laws of Robotics
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
• A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Agents and goals

Fig. 6.7.1 : Software agents (a goal-based agent: sensors report the state of the world now; the agent considers what it will be like if it does action A, compares this with its goals to decide what action to do now, and acts through its effectors)

(i) Machine translation (MT) is the task of automatically converting one natural language into another, preserving the meaning of the input text and producing fluent text in the output language.

(ii) State, goal and action representation : To plan, one should be able to represent the problem properly. Representation of the planning problem is a mapping of the states, actions and the goals. For the representation, the language used should be concrete, understandable and expressive. In a broader perspective, there are different representation methods or ways that are followed, like propositional, first order, and state variable. Fig. 6.7.2 depicts the high-level diagram for planning.

Fig. 6.7.2 : Planning

Planning essentially needs the representation in terms of time. This is required so that we are able to reason regarding the actions that are to be taken, along with the reactions that we get back.

(iii) State Representation : States are the representation of the facts. A state is represented as a conjunction comprising the positive literals that specify the state.
• A state is represented with a conjunction of positive literals using:
  o Logical propositions : Poor ∧ Unknown
  o FOL literals : At (Plane1, OMA) ∧ At (Plane2, JFK)
  o FOL literals must be ground and function-free
• Not allowed : At (x, y) or At (Father(Fred), Sydney)
• Closed World Assumption : what is not stated is assumed false.
Syllabus Topic : Concepts of Supervised Learning

TYPES OF LEARNING

Learning is one of the fundamental building blocks of Artificial Intelligence solutions. From a conceptual standpoint, learning is a process that improves the knowledge of an Artificial Intelligence program by making observations about its environment.
The learning process can be effected as :
(1) Skill refinement : By repeatedly practicing, one can attain a good, refined skill; e.g. learning to drive a car.
(2) Knowledge acquisition : Knowledge is acquired through learning. One can develop learning based on the type of knowledge representation used (e.g. inductive, predictive logic, concepts, predictions, etc.)

LEARNING ALGORITHMS

With the constant advancements in artificial intelligence, the field has become too big to specialize in altogether. There are countless problems that can be solved with countless methods. The knowledge of an experienced AI researcher specialized in one field may mostly be useless for another field. Understanding the nature of different machine learning problems is very important. Even though the list of machine learning problems is very long, these problems can be grouped into three different learning approaches:
1. Supervised Learning;
2. Unsupervised Learning;
3. Reinforcement Learning.
Top machine learning approaches are categorized depending on the nature of their feedback mechanism for learning. Most machine learning problems may be addressed by adopting one of these approaches.

6.10 SUPERVISED LEARNING

UQ. What is supervised learning? Give example of each.

• Learning that takes place based on a class of examples is referred to as supervised learning. It is learning based on labeled data. In short, while learning, the system has knowledge of a set of labeled data. This is one of the most common and frequently used learning methods.
• The supervised learning method is comprised of a series of algorithms that build mathematical models of certain data sets that contain both the inputs and the desired outputs for that particular machine.
• The data being inputted into the supervised learning method is known as training data, and essentially consists of training examples which contain one or more inputs and typically only one desired output. This output is known as a "supervisory signal."
• In the training examples for the supervised learning method, each training example is represented by an array, also known as a vector or a feature vector, and the training data is represented by a matrix.
• The algorithm uses the iterative optimization of an objective function to predict the output that will be associated with new inputs.
• Ideally, if the supervised learning algorithm is working properly, the machine will be able to correctly determine the output for inputs that were not a part of the training data.
• Supervised learning uses classification and regression techniques to develop predictive models. Classification techniques predict categorical responses.
• Regression techniques predict continuous responses, for example, changes in temperature or fluctuations in power demand. Typical applications include electricity load forecasting and algorithmic trading.


• Let us begin by considering the simplest machine-learning task : supervised learning for classification. Let us take an example of classification of documents. In this particular case a learner learns based on the available documents and their classes. This is also referred to as labeled data.
• The program that can map the input documents to appropriate classes is called a classifier, because it assigns a class (i.e., a document type) to an object (i.e., a document). The task of supervised learning is to construct a classifier given a set of classified training examples. A typical classification is depicted in Fig. 6.10.1.

Fig. 6.10.1 : Supervised learning (scattered points of class A and class B separated by a learned hyperplane)

• Fig. 6.10.1 represents a hyperplane that has been generated after learning, separating two classes, class A and class B, into different parts. Each input point presents an input-output instance from the sample space. In case of document classification, these points are documents.
• Learning computes a separating line or hyperplane among documents. An unknown document's type will be decided by its position with respect to the separator.
• There are a number of challenges in supervised classification, such as generalization, selection of the right data for learning, and dealing with variations. Labeled examples are used for training in case of supervised learning. The set of labeled examples provided to the learning algorithm is called the training set.
• Supervised learning is not just about classification; it is the overall process that, with guidelines, maps to the most appropriate decision.

6.10.1 How Supervised Learning Works?

• In supervised learning, models are trained using a labelled dataset, where the model learns about each type of data. Once the training process is completed, the model is tested on the basis of test data (a subset of the training set), and then it predicts the output.
• The working of supervised learning can be easily understood by the example and diagram below (Fig. 6.10.2).

Fig. 6.10.2 : How supervised learning works (labeled shape data is used for model training; the trained model then labels shapes in the test data, e.g. triangle, hexagon)
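The separating line of Fig. 6.10.1 can be sketched with a tiny perceptron trained on two point classes. The data and names here are illustrative assumptions, not the book's documents example:

```python
# Minimal perceptron sketch: learn a separating line w0*x + w1*y + b = 0
# for two linearly separable classes (illustrative made-up data).
def train_perceptron(points, labels, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), t in zip(points, labels):   # t is +1 (class A) or -1 (class B)
            if t * (w0 * x + w1 * y + b) <= 0:  # misclassified: nudge the line
                w0 += lr * t * x
                w1 += lr * t * y
                b  += lr * t
    return w0, w1, b

points = [(1, 1), (2, 1), (1, 2), (5, 5), (6, 5), (5, 6)]
labels = [1, 1, 1, -1, -1, -1]
w0, w1, b = train_perceptron(points, labels)

def classify(p):
    # An unknown point's class is decided by its side of the separator.
    return "class A" if w0 * p[0] + w1 * p[1] + b > 0 else "class B"

print(classify((1.5, 1.5)))   # class A
print(classify((5.5, 5.5)))   # class B
```

As in the text, an unknown point's label is decided purely by its position with respect to the learned separator.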
• If we have a dataset of different types of shapes, which includes square, rectangle, triangle, and polygon, the first step is that we need to train the model for each shape:
  o If the given shape has four sides, and all the sides are equal, then it will be labelled as a square.
  o If the given shape has three sides, then it will be labelled as a triangle.
  o If the given shape has six equal sides, then it will be labelled as a hexagon.
• Now, after training, we test our model using the test set, and the task of the model is to identify the shape. The machine is already trained on all types of shapes, and when it finds a new shape, it classifies the shape on the basis of its number of sides, and predicts the output.
• Following are the steps involved in supervised learning :
  o First determine the type of training dataset.
  o Collect/gather the labelled training data.
  o Split the training dataset into training dataset, test dataset, and validation dataset.
  o Determine the input features of the training dataset, which should carry enough knowledge that the model can accurately predict the output.
  o Determine the suitable algorithm for the model, such as support vector machine, decision tree, etc.
  o Execute the algorithm on the training dataset. Sometimes we need validation sets as the control parameters; these are a subset of the training dataset.
  o Evaluate the accuracy of the model by providing the test set. If the model predicts the correct output, it means our model is accurate.
• Supervised learning can be further divided into two types of problems : Regression and Classification.

Regression

Regression algorithms are used if there is a relationship between the input variable and the output variable. They are used for the prediction of continuous variables, such as weather forecasting, market trends, etc. Below are some popular regression algorithms which come under supervised learning:
• Linear Regression
• Regression Trees
• Non-Linear Regression
• Bayesian Linear Regression
• Polynomial Regression

Classification

Classification algorithms are used when the output variable is categorical, which means there are two classes, such as Yes-No, Male-Female, True-False, etc. Popular classification algorithms include:
• Random Forest
• Logistic Regression
• Decision Trees
• Support Vector Machines

6.10.2 Advantages of Supervised Learning

1. With the help of supervised learning, the model can predict the output on the basis of prior experiences.
2. In supervised learning, we can have an exact idea about the classes of objects.
3. Supervised learning models help us to solve various real-world problems such as fraud detection, spam filtering, etc.

6.10.3 Disadvantages of Supervised Learning

1. Supervised learning models are not suitable for handling complex tasks.
2. Supervised learning cannot predict the correct output if the test data is different from the training dataset.
3. Training requires lots of computation time.
4. In supervised learning, we need enough knowledge about the classes of objects.
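The shape-labelling steps above can be sketched as a rule-based classifier evaluated on a held-out test set. Here the "trained model" is simply the side-counting rules from the text; the data and helper names are illustrative:

```python
# Illustrative sketch of the train/test workflow on the shapes example.
def classify_shape(num_sides, all_sides_equal):
    if num_sides == 4 and all_sides_equal:
        return "square"
    if num_sides == 3:
        return "triangle"
    if num_sides == 6 and all_sides_equal:
        return "hexagon"
    return "unknown"

# Held-out test set: (number of sides, all sides equal, true label).
test_set = [
    (4, True, "square"),
    (3, False, "triangle"),
    (6, True, "hexagon"),
    (4, False, "unknown"),   # a rectangle-like shape the rules don't cover
]

# Evaluate accuracy on the test set, as in the last step of the list above.
correct = sum(classify_shape(n, eq) == label for n, eq, label in test_set)
accuracy = correct / len(test_set)
print(accuracy)   # 1.0
```

In a real supervised learner the rules would be induced from labelled training data rather than written by hand, but the evaluation step is the same.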


Syllabus Topic : Unsupervised Learning

6.11 UNSUPERVISED LEARNING

GQ. What are the types of unsupervised learning?
GQ. What are the advantages and disadvantages of unsupervised learning?
UQ. What is unsupervised learning? Give example of each.

• Unsupervised learning refers to learning from unlabeled data. It is based more on similarity and differences than on anything else. In this type of learning, all similar items are clustered together in a particular class, where the label of a class is not known.
• It is not possible to learn in a supervised way in the absence of properly labeled data. In these scenarios there is a need to learn in an unsupervised way. Here the learning is based more on similarities and differences that are visible. These differences and similarities are mathematically represented in unsupervised learning.
• Given a large collection of objects, we often want to be able to understand these objects and visualize their relationships. As an example, based on similarities a kid can separate birds from other animals. It may use some property or similarity while separating, such as that birds have wings.
• The criterion in initial stages is the most visible aspects of those objects. Linnaeus devoted much of his life to arranging living organisms into a hierarchy of classes, with the goal of arranging similar organisms together at all levels of the hierarchy. Many unsupervised learning algorithms create similar hierarchical arrangements based on similarity-based mappings.
• The task of hierarchical clustering is to arrange a set of objects into a hierarchy such that similar objects are grouped together. Nonhierarchical clustering seeks to partition the data into some number of disjoint clusters. The process of clustering is depicted in Fig. 6.11.1.
• A learner is fed with a set of scattered points, and it generates two clusters with representative centroids. The clusters show that points with similar properties and closeness are grouped together.

Fig. 6.11.1 : Unsupervised learning (unlabeled data points grouped into clusters)

• Unsupervised learning is a set of algorithms where the only information being uploaded is inputs. The device itself, then, is responsible for grouping the data together and creating ideal outputs based on the data it discovers. Often, unsupervised learning algorithms have certain goals, but they are not controlled in any manner.
• Instead, the developers believe that they have created strong enough inputs to ultimately program the machine to create stronger results than they themselves possibly could. The idea here is that the machine is programmed to run flawlessly to the point where it can be intuitive and inventive in the most effective manner possible.
• The information in the algorithms being run by unsupervised learning methods is not labeled, classified, or categorized by humans. Instead, the unsupervised algorithm rejects responding to feedback in favor of identifying commonalities in the data. It then reacts based on the presence, or absence, of such commonalities in each new piece of data being inputted into the machine itself.
• It is used to draw inferences from datasets consisting of input data without labelled responses. Clustering is the most common unsupervised learning technique. It is used for exploratory data analysis to find hidden patterns or groupings in data. Applications for clustering include gene sequence analysis, market research, and object recognition.
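Generating two clusters with representative centroids, as described above, can be sketched with a small k-means loop. The data and function names are illustrative assumptions (and the sketch assumes neither cluster ever becomes empty):

```python
# Minimal k-means sketch for two clusters of 2-D points (illustrative data).
def dist(p, q):
    # Squared Euclidean distance is enough for nearest-centroid comparison.
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def mean(group):
    n = len(group)
    return (sum(p[0] for p in group) / n, sum(p[1] for p in group) / n)

def kmeans_two_clusters(points, iters=10):
    c0, c1 = points[0], points[-1]            # crude initial centroids
    for _ in range(iters):
        g0 = [p for p in points
              if dist(p, c0) <= dist(p, c1)]  # assign to nearest centroid
        g1 = [p for p in points if p not in g0]
        c0, c1 = mean(g0), mean(g1)           # recompute representative centroids
    return (c0, g0), (c1, g1)

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
(c0, g0), (c1, g1) = kmeans_two_clusters(points)
print(c0, c1)   # the two representative centroids
```

Points with similar properties and closeness end up grouped around the same centroid, exactly as in Fig. 6.11.1.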
6.11.1 Types of Unsupervised Learning Algorithm

The unsupervised learning algorithm can be categorized into two types of problems:
1. Clustering    2. Association

1. Clustering

Clustering is a method of grouping the objects into clusters such that objects with the most similarities remain in a group and have less or no similarities with the objects of another group. Cluster analysis finds the commonalities between the data objects and categorizes them as per the presence and absence of those commonalities.

2. Association

An association rule is an unsupervised learning method which is used for finding the relationships between variables in a large database. It determines the set of items that occur together in the dataset. Association rules make marketing strategy more effective; for example, people who buy item X (suppose bread) also tend to purchase item Y (butter/jam). A typical example of an association rule is Market Basket Analysis.

Below is the list of some popular unsupervised learning algorithms :
• K-means clustering
• KNN (k-nearest neighbours)
• Hierarchical clustering
• Anomaly detection
• Neural Networks
• Principal Component Analysis
• Independent Component Analysis
• Apriori algorithm
• Singular value decomposition

6.11.2 Advantages of Unsupervised Learning

1. Unsupervised learning is used for more complex tasks as compared to supervised learning because, in unsupervised learning, we don't have labeled input data.
2. Unsupervised learning is preferable as it is easy to get unlabeled data in comparison to labeled data.

6.11.3 Disadvantages of Unsupervised Learning

1. Unsupervised learning is intrinsically more difficult than supervised learning, as it does not have corresponding output.
2. The result of the unsupervised learning algorithm might be less accurate, as the input data is not labeled and the algorithms do not know the exact output in advance.

• In practical scenarios there is always a need to learn from both labeled and unlabeled data. Even while learning in an unsupervised way, there is the need to make the best use of labeled data available. This is referred to as semisupervised learning. Semisupervised learning makes the best use of two paradigms of learning, that is, learning based on similarity and learning based on inputs from a teacher. Semisupervised learning tries to get the best of both worlds.

6.11.4 Difference between Supervised and Unsupervised Learning

• Supervised and unsupervised learning are the two techniques of machine learning. But the two techniques are used in different scenarios and with different datasets. Below, an explanation of both learning methods is given, along with their difference table.
• Supervised learning is a machine learning method in which models are trained using labeled data. In supervised learning, models need to find the mapping function to map the input variable (X) to the output variable (Y):
Y = f(X)
• Supervised learning needs supervision to train the model, which is similar to how a student learns things in the presence of a teacher. Supervised learning can be used for two types of problems : Classification and Regression.
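The bread-and-butter association above can be sketched by computing support and confidence for a candidate rule over a toy list of market baskets (illustrative data; real association miners such as Apriori search for all rules above chosen thresholds):

```python
# Illustrative support/confidence computation for the association rule
# "bread -> butter" over a toy set of market baskets.
baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"milk", "jam"},
]

def support(itemset):
    # Fraction of baskets containing every item of the itemset.
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(lhs, rhs):
    # Of the baskets containing lhs, the fraction also containing rhs.
    return support(lhs | rhs) / support(lhs)

print(support({"bread", "butter"}))        # 0.5
print(confidence({"bread"}, {"butter"}))   # 0.666...
```

A confidence of about 0.67 says that two out of three bread buyers also bought butter, which is the kind of pattern Market Basket Analysis looks for.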
Artificial lntell' nee MU • Al & OS/ Electronics Plannln and Leamin ...Pa No. g.
18
• Example : Suppose we have images of different types of fruits. The task of our supervised learning model is to identify the fruits and classify them accordingly. To identify the images in supervised learning, we will give the input data as well as the output for it, which means we will train the model on the shape, size, colour and taste of each fruit. Once the training is completed, we will test the model by giving it a new set of fruits. The model will identify the fruit and predict the output using a suitable algorithm.
• Unsupervised learning is another machine learning method, in which patterns are inferred from unlabeled input data. The goal of unsupervised learning is to find the structure and patterns in the input data. Unsupervised learning does not need any supervision; instead, it finds patterns in the data on its own.
• Unsupervised learning can be used for two types of problems : Clustering and Association.
• Example : To understand unsupervised learning, we will use the example given above. Unlike supervised learning, here we will not provide any supervision to the model. We will just provide the input dataset to the model and allow the model to find the patterns in the data. With the help of a suitable algorithm, the model will train itself and divide the fruits into different groups according to the most similar features between them.

GQ. What is the difference between supervised learning and unsupervised learning?

The main differences between supervised and unsupervised learning are given below :
Supervised Learning vs. Unsupervised Learning
1. Supervised learning algorithms are trained using labeled data. Unsupervised learning algorithms are trained using unlabeled data.
2. A supervised learning model takes direct feedback to check whether it is predicting the correct output or not. An unsupervised learning model does not take any feedback.
3. A supervised learning model predicts the output. An unsupervised learning model finds the hidden patterns in the data.
4. In supervised learning, input data is provided to the model along with the output. In unsupervised learning, only input data is provided to the model.
5. The goal of supervised learning is to train the model so that it can predict the output when it is given new data. The goal of unsupervised learning is to find the hidden patterns and useful insights from the unknown dataset.
6. Supervised learning needs supervision to train the model. Unsupervised learning does not need any supervision to train the model.
7. Supervised learning can be categorised into Classification and Regression problems. Unsupervised learning can be classified into Clustering and Association problems.
8. Supervised learning can be used for those cases where we know the input as well as the corresponding outputs. Unsupervised learning can be used for those cases where we have only input data and no corresponding output data.
9. A supervised learning model produces an accurate result. An unsupervised learning model may give a less accurate result as compared to supervised learning.
10. Supervised learning is not close to true Artificial Intelligence, as we first train the model for each datum, and only then can it predict the correct output. Unsupervised learning is closer to true Artificial Intelligence, as it learns in the way a child learns daily routine things from his experiences.
11. Supervised learning includes various algorithms such as Linear Regression, Logistic Regression, Support Vector Machine, Multi-class Classification, Decision Tree, Bayesian Logic, etc. Unsupervised learning includes various algorithms such as Clustering, KNN, and the Apriori algorithm.
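The fruit example above can be imitated in code. Below is a hypothetical 1-nearest-neighbour classifier over invented (size, colour) feature pairs; the point is only to show labeled (X, Y) training data at work:

```python
def nearest_neighbor(train, query):
    """1-NN: predict the label of the training example closest to `query`.
    `train` is a list of ((features...), label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Supervised: every example carries its label (the Y for each X).
# Features are made up: (size in cm, colour intensity).
train = [((7.0, 0.9), "apple"), ((6.8, 0.8), "apple"),
         ((2.0, 0.2), "grape"), ((1.8, 0.3), "grape")]

print(nearest_neighbor(train, (6.5, 0.85)))  # -> apple
print(nearest_neighbor(train, (2.1, 0.25)))  # -> grape
```

In the unsupervised version of the same task, the label column would simply be absent and a clustering algorithm would have to discover the two fruit groups on its own.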
Syllabus Topic : Semi-Supervised Learning

6.12 SEMI-SUPERVISED LEARNING

• Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data). It is a special instance of weak supervision.
• Unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3-D structure of a protein, or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.
• Formally, a set of l independently, identically distributed examples x1, ..., xl from X with corresponding labels y1, ..., yl from Y, together with u unlabeled examples x(l+1), ..., x(l+u) from X, is processed. Semi-supervised learning combines this information to surpass the classification performance that can be obtained either by discarding the unlabeled data and doing supervised learning, or by discarding the labels and doing unsupervised learning.
• Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data x(l+1), ..., x(l+u) only. The goal of inductive learning is to infer the correct mapping from X to Y.
• Intuitively, the learning problem can be seen as an exam, and the labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam.

Syllabus Topic : Ensemble Learning

6.13 ENSEMBLE LEARNING

GQ. What is ensemble learning?

• An ensemble is a machine learning model that combines the predictions from two or more models.
• The models that contribute to the ensemble are called ensemble members.
• They may be of the same type or of different types.
• They may or may not be trained on the same training data.

Remarks
1. An ensemble method is a technique that uses multiple independent similar or different models to derive an output or make some predictions. For example, a random forest is an ensemble of multiple decision trees.
2. Bagging is an ensemble technique that uses a single machine learning algorithm, typically an unpruned decision tree, and trains each model on a different sample of the same training dataset.
3. The predictions made by the ensemble members are combined using simple statistics, such as voting or averaging.

6.14 CROSS-VALIDATION IN MACHINE LEARNING

• Cross-validation is a technique for validating model efficiency by training the model on a subset of the input data and testing it on a previously unseen subset of the input data. It is a technique for checking how a statistical model generalises to an independent dataset.

• In machi ne learning, we need to test the stability one may build a slWDP which contai·th ns a leaf for cac.h
the
of the model. It means based only on the training possible feature value or a stump w1 two leav
' cs,
dataset, we cannot fit our model on the training one of which corres ponds to some chosen categnn,
eaf · -·,,
dataseL and the other l to all the other categones.
• For this purpose, we test our model on the sample For binarY features these two schemes arc
.
which is not part of the training dataseL And then 1deot1c. al A missing value may be treated as a Yet

we deploy the model on that sample. another category·
• This complete process comes under cross-
validation. a 6.15.1 for Continuous futur es
• The basic steps of cross-validation are : Usually, some threshold feature value is selected,
(i) Reserve a subset of the datase1 as a validation and the stump contains two leaves-for values below
set. and above the threshold. However, rarely, multiple
thresholds may be chosen and the stump therefore
(ii) Provide the training to the model using the contai
ns three or move leaves.
training dataseL
Decision stumps are often used as components
(iii) Evaluate model performance using the (called "weak learners" or "base
learner'') in machine
validation seL learning ensemble techniques such as bagging and
• If the model performs well with the validation set, boosting.
perform the further step, else check for the issues.
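The reserve/train/evaluate steps above generalise to k-fold cross-validation, where each subset is held out as the validation set exactly once. A minimal sketch on toy data (the helper name k_fold_splits is ours, not a standard library function):

```python
def k_fold_splits(data, k):
    """Yield (train, validation) pairs: the data is cut into k folds
    and each fold in turn is reserved as the validation set."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

data = list(range(10))
for train, val in k_fold_splits(data, 5):
    print(len(train), len(val))   # -> 8 2  (printed five times)
```

In practice a model would be fitted on each train split and scored on the matching validation split, and the k scores averaged; libraries such as scikit-learn provide ready-made helpers for exactly this loop.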
6.15 STUMPING ENSEMBLES

• A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves).
• A decision stump makes a prediction based on the value of just a single input feature. Decision stumps are sometimes also called 1-rules.

[Fig. 6.15.1 shows an example of a decision stump that discriminates between two of the three classes of the iris flower data set, iris versicolor and iris virginica : if the petal width (in centimetres) is below 1.75, predict iris versicolor; otherwise, predict iris virginica. This particular stump achieves 94% accuracy on the iris dataset for these two classes.]

• Depending on the type of the input features, several variations are possible. For nominal features, one may build a stump which contains a leaf for each possible feature value, or a stump with two leaves, one of which corresponds to some chosen category and the other leaf to all the other categories. For binary features these two schemes are identical. A missing value may be treated as yet another category.

6.15.1 For Continuous Features

• Usually, some threshold feature value is selected, and the stump contains two leaves, for values below and above the threshold. However, rarely, multiple thresholds may be chosen, and the stump then contains three or more leaves.
• Decision stumps are often used as components (called "weak learners" or "base learners") in machine learning ensemble techniques such as bagging and boosting.

6.15.2 Remarks

(1) Meaning of stump in a decision tree
• A decision stump is a decision tree which uses only a single attribute for splitting.
• For discrete attributes, this means that the tree consists only of a single interior node (i.e., the root has only leaves as successor nodes).
• If the attribute is numerical, the tree may be more complex.
(2) Are decision stumps linear ?
• A decision stump is not a linear model, even though its decision boundary can be a line.
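A threshold-based stump and the bagging-style voting mentioned above can be sketched together. The data is invented 1-D toy data, and the stratified resampling is our simplification to keep both classes present in every bootstrap sample:

```python
import random

def train_stump(data):
    """Learn a decision stump on one numeric feature: choose the
    threshold (and leaf labels) with the fewest training errors."""
    best = None
    values = sorted(x for x, _ in data)
    thresholds = [(a + b) / 2 for a, b in zip(values, values[1:])] or [values[0]]
    for t in thresholds:
        for left, right in ((0, 1), (1, 0)):
            errors = sum((left if x <= t else right) != y for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

def bagged_predict(stumps, x):
    """Ensemble prediction: majority vote over the weak learners."""
    votes = [s(x) for s in stumps]
    return max(set(votes), key=votes.count)

# Toy 1-D dataset: label 1 iff the feature value is large
data = [(1.0, 0), (1.5, 0), (2.0, 0), (3.0, 1), (3.5, 1), (4.0, 1)]
zeros = [d for d in data if d[1] == 0]
ones = [d for d in data if d[1] == 1]

# Train five stumps, each on its own (stratified) bootstrap sample
rnd = random.Random(0)
stumps = [train_stump([rnd.choice(zeros) for _ in range(3)] +
                      [rnd.choice(ones) for _ in range(3)])
          for _ in range(5)]
print(bagged_predict(stumps, 0.5), bagged_predict(stumps, 3.8))  # -> 0 1
```

Each stump alone is a weak learner; the voting combination is the same idea, in miniature, that bagging and boosting build on.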
Syllabus Topic : Reinforcement Learning

6.16 REINFORCEMENT LEARNING

GQ. What is Reinforcement Learning? Explain with an example.

• Reinforcement Learning is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and seeing the results of those actions.
• For each good action, the agent gets positive feedback, and for each bad action, the agent gets negative feedback or a penalty.
• In Reinforcement Learning, the agent learns automatically using these feedbacks, without any labeled data, unlike supervised learning. Since there is no labeled data, the agent is bound to learn by its experience only.
• RL solves a specific type of problem in which decision making is sequential and the goal is long-term, such as game-playing, robotics, etc.
• The agent interacts with the environment and explores it by itself. The primary goal of an agent in reinforcement learning is to improve its performance by getting the maximum positive rewards.
• The agent learns by the process of hit and trial, and based on the experience, it learns to perform the task in a better way. Hence, we can say that "Reinforcement learning is a type of machine learning method where an intelligent agent (computer program) interacts with the environment and learns to act within it." How a robotic dog learns the movement of its arms is an example of reinforcement learning.
• It is a core part of Artificial Intelligence, and all AI agents work on the concept of reinforcement learning. Here we do not need to pre-program the agent, as it learns from its own experience without any human intervention.
• Example : Suppose there is an AI agent present within a maze environment, and its goal is to find the diamond. The agent interacts with the environment by performing some actions; based on those actions, the state of the agent gets changed, and it also receives a reward or penalty as feedback.
• The agent continues doing these three things (take an action, change state or remain in the same state, and get feedback), and by doing these actions, it learns and explores the environment.
• The agent learns which actions lead to positive feedback or rewards and which actions lead to negative feedback or penalty. As a positive reward, the agent gets a positive point, and as a penalty, it gets a negative point.

[Fig. 6.16.1 : The agent performs actions on the environment; the environment returns the new state and a reward to the agent.]

• For machine learning, the environment is typically represented by an "MDP", or Markov Decision Process. These algorithms do not necessarily assume knowledge of an exact model; instead, they are used when exact models are infeasible. In other words, they are not quite as precise or exact, but they still serve as a strong method in various applications across different technology systems.
• The key features of Reinforcement Learning are mentioned below :
o In RL, the agent is not instructed about the environment and what actions need to be taken.
o It is based on the hit and trial process.
o The agent takes the next action and changes states according to the feedback of the previous action.
o The agent may get a delayed reward.
o The environment is stochastic, and the agent needs to explore it in order to get the maximum positive rewards.

6.16.1 Approaches to Implement Reinforcement Learning

GQ. What are the approaches for Reinforcement Learning?

There are mainly three ways to implement reinforcement learning in ML, which are :
1. Value-based : The value-based approach is about finding the optimal value function, which is the maximum value at a state under any policy. The agent therefore expects the long-term return at any state (s) under policy π.

2. Policy-based : The policy-based approach is to find the optimal policy for the maximum future rewards without using the value function. In this approach, the agent tries to apply a policy such that the action performed in each step helps to maximize the future reward. The policy-based approach has mainly two types of policy :
o Deterministic : The same action is produced by the policy (π) at any given state.
o Stochastic : In this policy, probability determines the produced action.

3. Model-based : In the model-based approach, a virtual model is created for the environment, and the agent explores that environment to learn it. There is no particular solution or algorithm for this approach because the model representation is different for each environment.

RL can be used in almost any application. It is a decision-making algorithm based on experience, an algorithm that learns autonomously, an optimization algorithm that over time learns to maximize its reward; the reward can be defined by the engineer so as to reach the objective of the problem.

Here are important characteristics of reinforcement learning :
• There is no supervisor, only a real number or reward signal
• Sequential decision making
• Time plays a crucial role in reinforcement problems
• Feedback is always delayed, not instantaneous
• The agent's actions determine the subsequent data it receives

6.16.2 Challenges of Reinforcement Learning

Here are the major challenges you will face while doing reinforcement learning :
1. Feature/reward design, which can be very involved.
2. Parameters may affect the speed of learning.
3. Realistic environments can have partial observability.
4. Too much reinforcement may lead to an overload of states, which can diminish the results.
5. Realistic environments can be non-stationary.

6.16.3 Applications of Reinforcement Learning

Here are applications of Reinforcement Learning :
1. Robotics for industrial automation.
2. Business strategy planning.
3. Machine learning and data processing.
4. It helps you to create training systems that provide custom instruction and materials according to the requirement of students.
5. Aircraft control and robot motion control.
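The value-based approach of Section 6.16.1 can be illustrated with tabular Q-learning on a toy corridor environment. Everything here (states, actions, constants, reward) is invented purely for illustration:

```python
import random

# Toy corridor: states 0..4, actions -1 (left) / +1 (right);
# reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rnd = random.Random(0)

for _ in range(500):                         # episodes of hit and trial
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the value table, sometimes explore
        if rnd.random() < epsilon:
            a = rnd.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)        # environment transition
        r = 1.0 if s2 == GOAL else 0.0       # delayed reward at the goal
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should move right in every state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Note the three RL ingredients from the text: no labels are ever given, the reward is delayed until the goal, and the agent's own actions determine what experience it collects.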

6.16.4 Reinforcement Learning vs. Supervised Learning

GQ. What is the difference between Reinforcement Learning and Supervised Learning?

Reinforcement Learning vs. Supervised Learning
1. Reinforcement learning helps you to take your decisions sequentially. In supervised learning, a decision is made on the input given at the beginning.
2. Reinforcement learning works by interacting with the environment. Supervised learning works on examples or given sample data.
3. In the RL method, the learning decisions are dependent; therefore, you should give labels to all the dependent decisions. In supervised learning, the decisions are independent of each other, so labels are given for every decision.
4. Reinforcement learning supports and works better in AI settings where human interaction is prevalent. Supervised learning is mostly operated with interactive software systems or applications.
5. Example : a chess game for reinforcement learning; object recognition for supervised learning.
6.17 GENERAL LEARNING MODEL

We develop a general learning model and note the factors affecting learning performance.
(i) Learning can be done in several ways, but learning requires that a new knowledge structure be created from some form of input stimulus.
(ii) This new knowledge must be assimilated into a knowledge base and be tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained. The feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge. We exhibit the general learning model in Fig. 6.17.1.

[Fig. 6.17.1 : General Learning Model. Stimuli and examples flow from the environment to the learner component; the learner component creates and modifies the knowledge base; the performance component uses the knowledge base to carry out tasks and produce a response; the critic (performance evaluator) feeds its evaluation back to the learner component.]

(iii) The figure represents the overall learner system. The environment may be regarded as one which produces random stimuli, or as a training source, such as a teacher, which provides carefully selected training examples to the learner component.
(iv) Some representation language for communication between the environment and the learner component must be used. The language scheme must be the same as that used in the knowledge base; when they are the same, we say that a 'single representation' is used.
(v) Inputs to the learner component may be physical stimuli of some type, or descriptive, symbolic training examples. The information conveyed to the learner component is used to create and modify knowledge structures in the knowledge base. The performance component uses this knowledge to carry out some tasks, such as solving a problem, playing a game, etc.
(vi) When a task is given to the performance component, it carries out its actions in performing the task and produces a response. The critic module then evaluates this response relative to an optimal response.
(vii) The critic module sends its evaluation to the learner component, indicating whether or not the performance is acceptable and, if not, that the structures in the knowledge base require modification.
(viii) If the proper learning behaviour is achieved, the performance of the system will be improved by the knowledge base component.
(ix) The cycle described in the above figure is to be repeated a number of times, (i) until the performance of the system reaches some acceptable level, (ii) or a known learning goal is achieved, or (iii) after some repetitions no change occurs in the knowledge base.

6.18 TECHNIQUES USED IN LEARNING

The most common techniques (methods) used for learning are as follows :
(i) Memorization (rote learning)
(ii) Learning by direct instruction
(iii) Learning by analogy
(iv) Learning by induction
(v) Learning by deduction
(vi) Learning using neural networks

6.18.1 Learning by Memorization (Rote Learning)

• It is the simplest form of learning. It requires the least amount of inference. Here, learning is achieved by simply copying the knowledge into the knowledge base. For example, for memorizing a multiplication table, we use this type of learning.
• When a computer stores a piece of data, it performs a rudimentary form of learning. It is a simple case of 'data caching'. We store computed values so that we do not have to recompute them later.
• When computation is more expensive than storage, this strategy can save a significant amount of time.
• Caching is used in AI programs to produce some surprising performance improvements. Such caching is known as rote learning.

Remark : Data Caching
• Data caching is a technique of storing frequently used data in memory, so that when the same data is asked for the next time, it can be obtained directly from memory instead of being generated again by the application.
• An Android phone's cache, for instance, comprises stores of small bits of information that your apps and web browser use to speed up performance. But cached files can become corrupted and overloaded and cause performance issues. The cache need not be constantly cleared, but a periodic clean-out can be helpful.

6.18.2 Learning by Direct Instruction

• This type of learning is slightly more complex. It requires more inference than rote learning. Here, to integrate the knowledge into the knowledge base, it must be transformed into an operational form.
• When a number of facts are presented to us directly in a well-organised manner, we use this type of learning.

6.18.3 Learning by Analogy

• This is a process of learning a new concept or solution by using similar known concepts or solutions. For example, previously learned examples help one to solve new problems in an examination.
• We make frequent use of analogical learning. This form of learning requires more inferring than either of the previous forms.
• This is because 'difficult transformations' must be made between the known and unknown situations.

6.18.4 Learning by Induction

• This is a powerful form of learning which also requires more inferring than the first two methods. This form of learning is a form of invalid but useful inference.
• Here we formulate a general concept after seeing a number of instances or examples of the concept. For example, we learn the concepts of sweet taste or colour after experiencing the sensations associated with several examples of sweet foods or coloured objects.
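The data-caching idea behind rote learning (Section 6.18.1) is exactly what programmers call memoization: compute and store a value the first time, and answer directly from memory when the same input is asked for again. A small sketch using Python's standard cache decorator:

```python
import functools

calls = 0  # counts how many times the body actually executes

@functools.lru_cache(maxsize=None)
def fib(n):
    """An 'expensive' computation; each distinct n is computed only once."""
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))         # -> 6765
print(calls)           # -> 21   (n = 0..20, each evaluated exactly once)
print(fib(20), calls)  # -> 6765 21  (second call answered from the cache)
```

Without the cache, the same call would execute the body over ten thousand times; with it, repeated requests cost nothing, which is the rote-learning trade of storage for computation.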

6.18.5 Learning by Deduction

• Learning is achieved through a step-by-step procedure of deductive inference. From the known facts, new facts or relationships are derived logically. For example, we can learn deductively that Sita is someone's cousin if we have knowledge of both persons' parents and rules for the cousin relationship.
• Deductive learning requires more inference than the other methods.

6.18.6 Learning using Neural Networks

• Neural networks can be loosely separated into learning methods and learning rules.
• McCulloch and Pitts developed the first network models to explain how signals pass from one neuron to another within the network.

Syllabus Topic : Expert Systems

6.19 DEFINITION OF EXPERT SYSTEM

GQ. What is an expert system?

Following are various definitions of expert systems :
1. An expert system is an artificial intelligence based system that converts the knowledge of an expert in a specific subject into a software code. This code can be merged with other such codes based on the knowledge of other experts and used for answering queries submitted through a computer.
2. An expert system is a piece of software which uses databases of expert knowledge to offer advice or make decisions in such areas as medical diagnosis.
3. An expert system is a computer program that contains a knowledge base and a set of algorithms or rules that infer new facts from knowledge and from data. An expert system is an AI application that uses a knowledge base of human expertise to aid in solving problems.
4. An expert system is a model and associated procedure that exhibits, within a specific domain, a degree of expertise in problem solving that is comparable to a human expert.
5. An expert system is a computing system capable of representing and reasoning about some knowledge-rich domain, which usually requires a human expert, with a view towards solving problems and/or giving advice. Its level of performance makes it expert.

Expert system = Knowledge + Inference engine

All in all, an expert system contains knowledge acquired by interviewing human experts in some domain. An expert system cannot operate in situations that call for common sense. Thus, expert systems perform tasks in limited domains that require human expertise, such as medical diagnosis, fault diagnosis, status monitoring, data interpretation, mineral exploration, tutoring, computer configuration, credit checking etc.

Syllabus Topic : Components of Expert System

6.19.1 Components of Expert System

The major components are :
1. Knowledge base : The core module of any expert system is its knowledge base (KB). It is a warehouse of the domain-specific knowledge captured from the human expert via the knowledge acquisition module. There are many ways of representing the knowledge in the KB. In earlier chapters we threw light on logic, semantic nets, frames, conceptual dependency and scripts. In this section, we shall elaborate on the most commonly used knowledge representation (KR) structure, viz., production rules.
2. Inference engine : Also called a 'rule interpreter', the inference engine (IE) performs the task of matching antecedents against the responses given by the user and firing rules. Firing of a rule causes two major things to happen :
• It triggers another rule, whereby a network of rules is triggered.
• It implies that an action has been carried out. This adds new information to the database of inferred facts.
The major task of the IE is to trace its way through a forest of rules to arrive at a conclusion. Basically there are two approaches : forward chaining and backward chaining.

[Fig. 6.19.1 : Components of an expert system]
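Forward chaining can be sketched as repeatedly firing any rule whose antecedents are all present in the fact base and adding its conclusion as an inferred fact, until nothing new can be derived. The rule base below is a toy invention, not the production-rule syntax of any particular shell:

```python
# Each rule: (set of antecedent facts, conclusion)
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "is_flying_bird"),
    ({"is_flying_bird"}, "can_migrate"),
]

def forward_chain(facts, rules):
    """Fire rules until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)   # firing a rule adds an inferred fact...
                changed = True          # ...which may trigger further rules
    return facts

print(sorted(forward_chain({"has_feathers", "can_fly"}, rules)))
# -> ['can_fly', 'can_migrate', 'has_feathers', 'is_bird', 'is_flying_bird']
```

Note how firing one rule ("is_bird") enables the next, which is exactly the rule-network triggering described above; backward chaining would instead start from a goal fact and work back through the rules that could conclude it.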
system. The basic questJons any user would like to
3. User Interface : User interface provides the needed query the system are WHY and "HOW".
facilities for the user to communicate with the Whenever a user poses the question "HOW", the
system. An user, normally would like to have a answer is available.
consultation with the system for the following The answer to "WHY" is got from the rule it is
aspects: about to fire. Every FACT FRAME has a slot of
• To get remedies for his problem. ''TIIENS-OF' which is exactly the rule the system is
• To know the private knowledge (heuristics) of the firing. The answer to the question "WHY" is obtained
system. from this. Moreover, Genie's Inference Procedure
organization of knowledge and data regarding a
• To get some explanations for specific queries.
problem is explicitly available in the dynamic version
Presenting a real-world problem to the system for of the frames. Hence the explanation facilities are
a solution is what is meant in having a considerably superior.
consultation. Here, the user-interface provides as
much facilities as possible such as menus, Syllabus Topic : Knowledge Base
graphical interface etc., to make the dialogue user-
I
friendly and lively. I~,. 6.20 TECHNIQUES or KNOWLEDGE
4. Knowledge acquisition facility : The major -~ ' ACQUISITION
bottleneck in ES development is knowledge
acquisition. Present day ES do not have a ·------------------------------·
1, '\J
: GQ. Explain knowledge acquisition process.
,• \. I

sophisticated version of a learning system. Hence


these systems perform by 'being told'. To carry : GQ. Explain techniques of knowledge acquisition. :
.. _1._ ,_..., - - - - - - - - - - -~- - - - -- - - - - - - _ ..._ - - •
out the process of 'being told', systems provide Knowledge Acquisition Process The
what is called a Knowledge Acquisition Facility knowledge acquisition process begins in the
(KAF). KAF creates a congenial atmosphere for specification phase and continues into the development
the expert to share the expertise with the system.
phase. There are three kinds of cases the developer
5. External interface This provides the should discuss with expert which are namely, current
communication link between the ES and the cases, historical cases and hypothetical cases.

Observing the current cases the expert performs and reviewing past cases helps the expert to provide knowledge from past experiences and past data. Hypothetical cases serve as dummy situations for the expert to describe the process and how to carry out the task.

The expert system developer looks for different types of knowledge, depending upon his need. The developer requires strategic knowledge, which helps to create the flow chart of the system. The judgmental knowledge helps to define the inference process and describes the reasoning process used by the expert. To describe the characteristics and attributes of the objects in the system, factual knowledge is required.

The developer must learn how the expert performs the task of knowledge acquisition in a variety of cases. The knowledge acquisition process starts in the specification phase and continues into the development phase.

There are basically three kinds of cases the developer should discuss with the expert : current, historical and hypothetical.
• Current : by watching the expert perform a task.
• Historical : by discussing with the expert a task that was performed in the past.
• Hypothetical : by having the expert describe how a task should be performed in a hypothetical situation.

Knowledge acquisition includes the elicitation, collection, analysis, modeling and validation of knowledge.

Issues in Knowledge Acquisition : The important issues in knowledge acquisition are :
• Knowledge is in the head of experts.
• Experts have vast amounts of knowledge.
• Experts have a lot of tacit knowledge.
• They do not know all that they know and use.
• Tacit knowledge is hard (impossible) to describe.
• Experts are very busy and valuable people.
• A single expert does not know everything.
• Knowledge has a shelf life.

Knowledge elicitation : Knowledge elicitation is a type of knowledge acquisition where the only knowledge source is the domain expert. Difficulties in knowledge elicitation :
• The technical nature of specialist fields hinders knowledge elicitation by non-specialist knowledge engineers.
• Experts tend to think less in terms of general principles and more in terms of typical objects and commonly occurring events.
• It is difficult to find a good notation for expressing domain knowledge and a good framework for fitting it all together.

Techniques for knowledge acquisition : The techniques for acquiring, analyzing and modeling knowledge are : protocol-generation techniques, protocol analysis techniques, hierarchy-generation techniques, matrix-based techniques, sorting techniques, limited-information and constrained-processing tasks, and diagram-based techniques.

1. Protocol-generation techniques : This method includes many types of interviews (unstructured, semi-structured and structured), reporting and observational techniques.
2. Protocol analysis techniques : This technique is used with transcripts of interviews or text-based information to identify basic knowledge objects within a protocol, such as goals, decisions, relationships and attributes. These act as a bridge between the use of protocol-based techniques and knowledge modeling techniques.
3. Hierarchy-generation techniques : This technique involves the creation, reviewing and modification of hierarchical knowledge. Hierarchy-generation techniques, such as laddering, are used to build taxonomies or other hierarchical structures such as goal trees and decision networks. The ladders are of various forms, like concept ladders, attribute ladders and composition ladders.
4. Matrix-based techniques : This technique involves the construction and filling-in of a 2-D matrix (grid, table), indicating, for example, the relationships between concepts and properties. The elements within the matrix can contain symbols (ticks, crosses, question marks), colours, numbers or text.
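The matrix-based technique can be sketched as a small concept-by-property grid. The concepts, properties and cell symbols below are invented purely for illustration; this is a minimal sketch of the idea, not code from any tool:

```python
# A minimal sketch of a matrix-based knowledge-acquisition grid:
# rows are concepts, columns are properties, and each cell holds a
# symbol (tick, cross, question mark) as described in the text.
# All concepts and properties here are invented for illustration.

concepts = ["lion", "sparrow", "shark"]
properties = ["is_mammal", "can_fly", "lives_in_water"]

grid = {
    ("lion",    "is_mammal"): "tick",  ("lion",    "can_fly"): "cross", ("lion",    "lives_in_water"): "cross",
    ("sparrow", "is_mammal"): "cross", ("sparrow", "can_fly"): "tick",  ("sparrow", "lives_in_water"): "cross",
    ("shark",   "is_mammal"): "cross", ("shark",   "can_fly"): "cross", ("shark",   "lives_in_water"): "?",
}

def cells_to_review(grid):
    """Question marks flag cells the knowledge engineer must take back
    to the expert - the 'filling-in' step of the technique."""
    return [cell for cell, mark in grid.items() if mark == "?"]

print(cells_to_review(grid))   # [('shark', 'lives_in_water')]
```

A grid like this makes gaps and inconsistencies in the expert's knowledge immediately visible, which is why the technique is popular for eliciting concept-property relationships.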
MOLE : MOLE is a knowledge acquisition system which is used for heuristic classification problems, such as diagnosing diseases. It is used in conjunction with the cover-and-differentiate problem-solving method. An expert system produced by MOLE accepts input data, generates the set of candidate explanations or classifications that cover the data, and then uses differentiating knowledge to determine which one is best. The process is carried out interactively, because explanations need to be justified, till ultimate causes are confirmed.

Syllabus Topic : Inference Engine

6.21 INFERENCE ENGINE

The inference engine is the program part of an expert system. It represents a problem-solving model which uses the rules in the knowledge base and the situation-specific knowledge in the WM to solve a problem.

Given the contents of the WM, the inference engine determines the set of rules which should be considered. These are the rules for which the consequents match the current goal of the system. The set of rules which can be fired is called the conflict set. Out of the rules in the conflict set, the inference engine selects one rule based on some predefined criteria. This process is called conflict resolution. For example, a simple conflict resolution criterion could be to select the first rule in the conflict set.

A rule can be fired if all its antecedents are satisfied. If the value of an antecedent is not known (in the WM), the system checks if there are any other rules with that antecedent as a consequent, thus setting up a sub-goal. If there are no rules for that antecedent, the user is prompted for the value and the value is added to the WM. If a new sub-goal has been set up, a new set of rules will be considered in the next cycle. This process is repeated till, in a given cycle, there are no sub-goals, or alternatively, the goal of the problem-solving has been derived.

This inferencing strategy is called backward chaining (since it reasons backward from the goal to be derived). There is another strategy, called forward chaining, where the system works forward from the information it has in the working memory. In forward chaining, the conflict set will be formed by rules which have their antecedents true in a given cycle. The process continues till the conflict set becomes empty.

Syllabus Topic : User Interface

6.22 USER INTERFACE

With the help of a user interface, the expert system interacts with the user, takes queries as an input in a readable format, and passes them to the inference engine. After getting the response from the inference engine, it displays the output to the user. In other words, it is an interface that helps a non-expert user to communicate with the expert system to find a solution.

6.22.1 Inference Engine (Rules of Engine)

• The inference engine is known as the brain of the expert system, as it is the main processing unit of the system. It applies inference rules to the knowledge base to derive a conclusion or deduce new information. It helps in deriving an error-free solution of queries asked by the user.
• With the help of an inference engine, the system extracts the knowledge from the knowledge base.
• There are two types of inference engine :
  o Deterministic inference engine : The conclusions drawn from this type of inference engine are assumed to be true. It is based on facts and rules.
  o Probabilistic inference engine : This type of inference engine contains uncertainty in conclusions, and is based on probability.
• The inference engine uses the below modes to derive the solutions :
  o Forward chaining : It starts from the known facts and rules, and applies the inference rules to add their conclusions to the known facts.
  o Backward chaining : It is a backward reasoning method that starts from the goal and works backward to prove the known facts.
• Knowledge Base : The knowledge base is a type of storage that stores knowledge acquired from the different experts of the particular domain. It is considered as big storage of knowledge. The more the knowledge base, the more precise will be the expert system.
• It is similar to a database that contains information and rules of a particular domain or subject.
• One can also view the knowledge base as collections of objects and their attributes. For example, a lion is an object and its attributes are : it is a mammal, it is not a domestic animal, etc.

Syllabus Topic : Working Memory

6.23 WORKING MEMORY

The working memory represents the set of facts known about the domain. The elements of the WM reflect the current state of the world. In an expert system, the WM typically contains information about the particular instance of the problem being addressed. For example, in a TV troubleshooting expert system, the WM could contain the details of the particular TV being looked at.

The actual data represented in the WM depends on the type of application. The initial WM, for instance, can contain a priori information known to the system. The inference engine uses this information in conjunction with the rules in the knowledge base to derive additional information about the problem being solved.

Knowledge Base

The knowledge base (also called rule base when if-then rules are used) is a set of rules which represents the knowledge about the domain. The general form of a rule is :

If cond1 and cond2 and cond3 ...
then action1, action2, ...

The conditions cond1, cond2, cond3, etc. (also known as antecedents) are evaluated based on what is currently known about the problem being solved (i.e., the contents of the working memory).

Each antecedent of a rule typically checks if the particular problem instance satisfies some condition. For example, an antecedent in a rule in a TV troubleshooting expert system could be : the picture on the TV display flickers.

The consequents of a rule typically alter the WM, to incorporate the information obtained by application of the rule. This could mean adding more elements to the WM, modifying an existing WM element or even deleting WM elements. They could also include actions such as reading input from a user, printing messages, accessing files, etc. When the consequents of a rule are executed, the rule is said to have been fired.

In this article we will consider rules with only one consequent and one or more antecedents which are combined with the operator and. We will use a representation of the form :

rule id : If antecedent1 and antecedent2 ... then consequent

For instance, to represent the knowledge that if a person has a runny nose, a high temperature and bloodshot eyes, then one has a flu, we could have the following rule :

r1 : If is(nose, runny) and is(temperature, high) and is(eyes, bloodshot)
then disease is flu

This representation, though simple, is often sufficient. The disjunction (ORing) of a set of antecedents can be achieved by having different rules with the same consequent. Similarly, if multiple consequents follow from the conjunction (ANDing) of a set of antecedents, this knowledge can be expressed in the form of a set of rules with one consequent each. Each rule in this set will have the same set of antecedents.

Sometimes the knowledge which is expressed in the form of rules is not known with certainty (for example, our flu rule is not absolutely certain). In such cases, typically, a degree of certainty is attached to the rules. These degrees of certainty are called certainty factors. We will not discuss certainty factors further in this article.
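The r1-style rules above, combined with the sub-goaling behaviour of backward chaining from Section 6.21, can be sketched as follows. The second rule and the (attribute, value) fact encoding are invented around the flu example; this is an illustrative sketch, not the book's code:

```python
# Backward-chaining sketch over r1-style rules. A rule pairs a tuple of
# antecedents (ANDed) with a single consequent; ORing is obtained, as in
# the text, by listing several rules with the same consequent. Facts are
# (attribute, value) pairs in the WM. Antecedents with no supporting rule
# would normally be asked of the user; here they simply fail.
# The rules and facts are invented for illustration.

RULES = [
    # (rule id, antecedents, consequent)
    ("r1", (("nose", "runny"), ("temperature", "high"), ("eyes", "bloodshot")),
           ("disease", "flu")),
    ("r2", (("temperature", "high"), ("neck", "stiff")),
           ("disease", "meningitis_suspected")),
]

def established(goal, wm):
    """Backward chaining: a goal holds if it is already in the WM, or if
    some rule with the goal as consequent has all antecedents established
    (each unknown antecedent becomes a sub-goal)."""
    if goal in wm:
        return True
    for _, antecedents, consequent in RULES:
        if consequent == goal and all(established(a, wm) for a in antecedents):
            wm.add(goal)          # record the derived fact in the WM
            return True
    return False

wm = {("nose", "runny"), ("temperature", "high"), ("eyes", "bloodshot")}
print(established(("disease", "flu"), wm))   # True
```

Because r1 and r2 share the consequent attribute `disease`, adding further rules with the same consequent gives exactly the ORing of antecedent sets described above.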



Syllabus Topic : Development of Expert Systems

6.24 DEVELOPMENT OF AN EXPERT SYSTEM

1. Identification of the problem
2. Decision about the mode of development
3. Development of a prototype
4. Planning for a full-scale system
5. Final implementation, maintenance and evolution

1. Identification of the problem

In this stage, the expert and the knowledge engineer interact to identify the problem. The major points discussed earlier for the characteristics of the problem are studied. The scope and the extent of the problem are pondered. The amount of resources needed, e.g. manpower, computing resources, finance, etc., is identified. The return-on-investment analysis is done. Areas in the problem which can give much trouble are identified, and a conceptual solution for that problem and the overall specification are made.

2. Decision about the mode of development

Once the problem is identified, the immediate step is to decide on the vehicle for development. The knowledge engineer can develop the system from scratch using a programming language like PROLOG or LISP or any conventional language, or adopt a shell for development. In this stage, various shells and tools are identified and analyzed for their suitability. Those tools whose features fit the characteristics of the problem are analyzed in detail.

3. Development of a prototype

Before developing a prototype, the following are the prerequisite activities :

Decide on what concepts are needed to produce the solution. One important factor to be decided here is the level of knowledge (granularity). Starting with coarse granularity, the system development proceeds towards fine granularity.

After this, the task of knowledge acquisition begins. The knowledge engineer and the domain expert interact frequently, and the domain-specific knowledge is extracted.

Once the knowledge is acquired, the knowledge engineer decides on the method of representation. In the identification phase, a conceptual picture of knowledge representation would have emerged. In this stage, that view is either enforced or modified.

When the knowledge representation scheme and the knowledge are available, a prototype is constructed. This prototype undergoes testing for various problems, and revision of the prototype takes place. By this process, knowledge of fine granularity emerges, and this is effectively coded in the knowledge base.

4. Planning for a full-scale system

The success of the prototype provides the needed impetus for the full-scale system. In prototype construction, the area in the problem which can be implemented with relative ease is chosen first. In the full-scale implementation, sub-system development is assigned a group leader and schedules are drawn. Use of Gantt chart, PERT or CPM techniques is welcome.

5. Final implementation, maintenance and evolution

This is the final life-cycle stage of an expert system. The full-scale system developed is implemented at the site. The basic resource requirements at the site are fulfilled, and parallel conversion and testing techniques are adopted. The final system undergoes rigorous testing and is later handed over to the user.



1. Problem identification :
   (a) Study characteristics of the problem
   (b) Study scope and extent of the problem
   (c) Amount of resources needed
   (d) ROI analysis
   (e) Problem area identification
2. Decide on the vehicle for development :
   (a) Whether language or shell
   (b) Characteristics of tools
3. Prototype development :
   (a) Concepts needed for solution
   (b) Task of knowledge acquisition
   (c) Method of knowledge representation
   (d) Testing of the system
4. Plan for full-scale system :
   (a) Additional interaction between multiple experts
   (b) Heavy planning takes place
   (c) Implementation scheme takes form
5. Implementation, maintenance and evaluation of the full system :
   (a) Basic resource requirement at site fulfilled
   (b) Parallel conversion and testing
   (c) Maintenance of knowledge base and historical database
   (d) Security of various subsystems

Fig. 6.24.1 : Life cycle of an expert system

Maintenance of the system implies tuning of the knowledge base, because knowledge, the environment and the types of problems that arrive are never static. The historical database has to be maintained, and the minor modifications made on the inference engine have to be kept track of. Maintenance engulfs security also.

Evaluation is a difficult task for any AI program. As mentioned previously, solutions for AI problems are only satisfactory. However, since the yardstick for evaluation is not available, it is difficult to evaluate. The utmost one can do is to supply a set of problems to the system and to a human expert and compare the results. When this method was adopted for the system MYCIN, it surpassed human experts.

Chapter Ends...
□□□
