AI Sem-4 Textbook

Artificial Intelligence
Semester 5 (CSC503)
Edition: June 2023
Computer Science and Engineering (Data Science)
Computer Science & Engineering (Artificial Intelligence and Machine Learning)
Artificial Intelligence and Data Science
Artificial Intelligence and Machine Learning
Data Engineering
University of Mumbai
Tech-Neo Publications
MS-126
Course Code: CSC503
Course Name: Artificial Intelligence
Credit: 03
Pre-requisite: C Programming
Course Objectives: The course aims
1. To gain a perspective of AI and its foundations.
2. To study different agent architectures and properties of the environment.
3. To understand the basic principles of AI towards problem solving, inference, perception, knowledge representation, and learning.
4. To investigate probabilistic reasoning under uncertain and incomplete information.
5. To explore the current scope, potential, limitations, and implications of intelligent systems.
Course Outcomes
After successful completion of the course, students will be able to
1. Identify the characteristics of the environment and differentiate between various agent architectures.
2. Apply the most suitable search strategy to design problem solving agents.
3. Represent a natural language description of statements in logic and apply the inference rules to design Knowledge Based agents.
4. Apply a probabilistic model for reasoning under uncertainty.
5. Comprehend various learning techniques.
6. Describe the various building blocks of an expert system for a given real-world problem.
Detailed Syllabus
Module  Detailed Content  Hours
1  Introduction to Artificial Intelligence  3
1.1 Artificial Intelligence (AI), AI Perspectives: Acting and Thinking humanly, Acting and Thinking rationally.
1.2 History of AI, Applications of AI, The present state of AI, Ethics in AI. (Refer Chapter 1)
2  Intelligent Agents  4
3  Solving Problems by Searching  12
3.1 Definition, State space representation, Problem as a state space search, Problem formulation, Well-defined problems.
3.2 Solving Problems by Searching, Performance evaluation of search strategies: Time Complexity, Space Complexity, Completeness, Optimality.
3.3 Uninformed Search: Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search.
3.4 Informed Search: Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search, Local Search: Hill Climbing Search, Simulated Annealing Search, Optimization: Genetic Algorithm.
3.5 Game Playing, Adversarial Search Techniques, Mini-max Search, Alpha-Beta Pruning. (Refer Chapter 3)
4.4 Forward Chaining, Backward Chaining and Resolution in FOPL (Refer Chapter 4)
5  Reasoning Under Uncertainty  5
5.1 Handling Uncertain Knowledge, Random Variables, Prior and Posterior Probability,
Inference using Full Joint Distribution.
5.2 Bayes' Rule and its use, Bayesian Belief Networks, Reasoning in Belief Networks.
(Refer Chapter 5)
6  Planning and Learning
6.1 The planning problem, Partial order planning, Total order planning.
6.3 Expert Systems, Components of Expert System: Knowledge base, Inference engine, User interface, Working memory, Development of Expert Systems. (Refer Chapter 6)
Total 39
Assessment
Internal Assessment
Assessment consists of two class tests of 20 marks each. The first class test is to be conducted when approximately 40% of the syllabus is completed, and the second class test when an additional 40% of the syllabus is completed. Duration of each test shall be one hour.
End Semester Theory Examination
1. Question paper will consist of 6 questions, each carrying 20 marks.
2. The students need to solve a total of 4 questions.
3. Question No. 1 will be compulsory and based on the entire syllabus.
4. Remaining questions (Q.2 to Q.6) will be selected from all the modules.
Artificial Intelligence Lab (CSL502)
Lab Code: CSL502
Lab Name: Artificial Intelligence Lab
Credit: 01
Prerequisite: C Programming Language
Lab Objectives
1. To design suitable Agent Architecture for a given real-world AI problem.
2. To implement knowledge representation and reasoning in an AI language.
3. To design a Problem-Solving Agent.
4. To incorporate reasoning under uncertainty for an AI agent.
Lab Outcomes
At the end of the course, students will be able to
1. Identify suitable Agent Architecture for a given real-world AI problem.
2. Implement simple programs using Prolog.
3. Implement various search techniques for a Problem-Solving Agent.
4. Represent natural language description as statements in Logic and apply inference rules to it.
5. Construct a Bayesian Belief Network for a given problem and draw probabilistic inferences from it.
Suggested Experiments: Students are required to complete at least 10 experiments.
Sr. No.  Name of the Experiment
1. Provide the PEAS description and TASK Environment for a given AI problem.
2. Identify suitable Agent Architecture for the problem.
Syllabus
University of Mumbai
Artificial Intelligence (Code: ELDLO7013)
Semester 7: Electronics Engineering

Teaching Scheme: Theory 03 hrs/week; Credits: 03.
Examination Scheme (Theory Marks): Test 1: 20, Test 2: 20, Average: 20; End Semester Exam: 80; Duration: 03 hrs; Total: 100.
Course Objectives
1. To gain a perspective of AI and its foundations.
2. To study different agent architectures and properties of the environment.
3. To understand the basic principles of AI towards problem solving, inference, perception, knowledge representation, and learning.
4. To investigate probabilistic reasoning under uncertain and incomplete information.
5. To explore the current scope, potential, limitations, and implications of intelligent systems.
Course Outcomes
After successful completion of the course, students will be able to
1. Identify the characteristics of the environment and differentiate between various agent architectures.
2. Apply the most suitable search strategy to design problem solving agents.
3. Represent a natural language description of statements in logic and apply the inference rules to design Knowledge Based agents.
4. Apply a probabilistic model for reasoning under uncertainty.
5. Comprehend various learning techniques.
6. Describe the various building blocks of an expert system for a given real-world problem.
Note: The action verbs according to Bloom's taxonomy are highlighted in bold.
Module No.  Unit No.  Detailed Content  Hrs.
1  1.1 Artificial Intelligence (AI), AI Perspectives: Acting and Thinking humanly, Acting and Thinking rationally.  5
   1.2 History of AI, Applications of AI, The present state of AI, Ethics in AI. (Refer Chapter 1)
2  Intelligent Agents  6
   2.2 Types of Agents: Simple Reflex, Model Based, Goal Based, Utility Based Agents.
   2.3 Environment Types: Deterministic, Stochastic, Static, Dynamic, Observable, Semi-observable, Single Agent, Multi Agent. (Refer Chapter 2)
3  3.1 Definition, State space representation, Problem as a state space search, Problem formulation, Well-defined problems.  8
   3.2 Solving Problems by Searching, Performance evaluation of search strategies: Time Complexity, Space Complexity, Completeness, Optimality.
   3.3 Uninformed Search: Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search.
   3.4 Informed Search: Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search, Local Search: Hill Climbing Search, Simulated Annealing Search, Optimization: Genetic Algorithm.
   3.5 Game Playing, Adversarial Search Techniques, Mini-max Search, Alpha-Beta Pruning. (Refer Chapter 3)
4  4.2 Propositional Logic (PL): Syntax, Semantics, Formal logic-connectives, truth tables, tautology, validity, well-formed formula.  8
   4.3 Predicate Logic: FOPL, Syntax, Semantics, Quantification, Inference rules in FOPL, Introduction to logic programming (PROLOG).
5  5.1 Handling Uncertain Knowledge, Random Variables, Prior and Posterior Probability, Inference using Full Joint Distribution.  5
   5.2 Bayes' Rule and its use, Bayesian Belief Networks, Reasoning in Belief Networks. (Refer Chapter 5)
6  6.1 The planning problem, Partial order planning, Total order planning.  7
   6.2 Learning in AI, Learning Agent, Concepts of Supervised, Unsupervised, Semi-Supervised Learning, Reinforcement Learning, Ensemble Learning.
   6.3 Expert Systems, Components of Expert System: Knowledge base, Inference engine, User interface, Working memory, Development of Expert Systems. (Refer Chapter 6)
Total  39
Index
Lab Manual ... L-1 to L-8
Module 1
CHAPTER 1 : Introduction to Artificial Intelligence
1.1 Artificial Intelligence (AI), AI Perspectives: Acting and Thinking humanly, Acting and Thinking rationally.
1.2 History of AI, Applications of AI, The present state of AI, Ethics in AI.
Artificial Intelligence (MU - AI & DS / Electronics) (Introduction to Artificial Intelligence)
1.1 Syllabus Topic: Artificial Intelligence
Intelligence is the ability to think and understand instead of doing things by instinct or automatically. This definition leads us to define what 'Thinking' is: 'Thinking is the activity of using your brain to consider a problem or to create an idea'.
So, in order to think, someone or something has to have a brain, or, in other words, an organ that enables one to learn and understand things, to solve problems, and to make decisions.
So, now we can define intelligence as 'the ability to learn and understand, to solve problems and to make decisions'. Now the question arises whether computers can be intelligent, or whether machines can think and make decisions.
Here, we enter the domain of Artificial Intelligence. Artificial Intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals, including humans.
It is defined as 'the field of study of intelligent agents': any system that perceives its environment and takes actions that maximise its chance of achieving its goals.
AI is also described as machines that perform functions that humans associate with the human mind, such as 'learning' and 'problem solving'.
AI applications include:
1. Advanced web search engines (e.g. Google)
2. Recommendation systems (used by YouTube, Amazon and Netflix)
3. Understanding human speech (such as Siri and Alexa)
4. Self-driving cars (e.g. Tesla)
5. Automated decision-making and competing at the highest level in strategic game systems (such as chess)
As machines become increasingly capable, tasks considered to require 'intelligence' are removed from the definition of AI. This phenomenon is known as the 'AI effect'. For example, optical character recognition is excluded from things considered to be AI, having become a routine technology.
The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it".
This raises philosophical arguments about the mind and the ethics of creating artificial beings endowed with
human-like intelligence.
Tech-Neo Publications...A SACHIN SHAH Venture
Science fiction and futurology have also suggested that, with its enormous potential and power, AI may become an existential risk to humanity.
(1) Humankind has given itself the scientific name homo sapiens ('man the wise') because our mental capacities are so important to our everyday lives and our sense of self. The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves.
(2) But unlike philosophy and psychology, which are also concerned with intelligence, artificial intelligence strives to build intelligent entities as well as understand them. Another reason to study artificial intelligence is that these constructed intelligent entities are interesting and useful in their own right.
(3) Artificial intelligence has produced many significant and impressive products even at this early stage in its development. Although no one can predict the future in detail, it is clear that computers with human-level intelligence (or better) would have a huge impact on our everyday lives and on the future course of civilization.
(4) Artificial intelligence addresses one of the ultimate puzzles. How is it possible for a slow, tiny brain, whether biological or electronic, to perceive, understand, predict, and manipulate a world far larger and more complicated than itself? How do we go about making something with those properties? These are hard questions, but unlike the search for faster-than-light travel or an antigravity device, the researcher in artificial intelligence has solid evidence that the quest is possible.
(5) Artificial intelligence is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. Along with modern genetics, it is regularly cited as the "field I would most like to be in" by scientists in other disciplines. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. Artificial intelligence, on the other hand, still has openings for a full-time Einstein.
(6) Artificial intelligence (AI) is a field that has a long history but is still constantly and actively growing and changing. AI technology is increasingly prevalent in our everyday lives. It has uses in a variety of industries, from gaming, journalism/media, to finance, as well as in state-of-the-art research fields from robotics, medical diagnosis, and quantum science.
Syllabus Topic: AI Perspectives
GQ: Explain Intelligence and Artificial Intelligence. How does conventional computing differ from intelligent computing?
(1) Information
• All data are information. However, there is some part of information that is not considered as data. Such distinguished information can be considered as processed data, which makes decision making easier. Processing involves aggregation of data, calculations on data, corrections to data, etc., in such a way that it generates a flow of messages.
• Information usually has some meaning and purpose; that is, data within a context can be considered information.
(2) Knowledge
Knowledge is a justified true belief. Knowledge is a store of information proven useful for a capacity to act.
(3) Intelligence
• Unlike belief and knowledge, intelligence is not information: it is a process, or an innate capacity to use information in order to respond to ever-changing requirements.
• It is a capacity to acquire, adapt, modify, extend and use information in order to solve problems. Therefore, intelligence is the ability to cope with unpredictable circumstances.
(A) Human Intelligence
The first proposal for success in building a program that acts humanly was the Turing Test. To be considered intelligent, a program must be able to act sufficiently like a human to fool an interrogator. A human interrogates the program and another human via a terminal simultaneously. If, after a reasonable period, the interrogator cannot tell which is which, the program passes. To pass this test requires:
1. Natural language processing
2. Knowledge representation
3. Automated reasoning
4. Machine learning
This test avoids physical contact and concentrates on "higher level" mental faculties. A total Turing test would require the program to also do:
• Computer vision
• Robotics
Thinking Humanly
This requires "getting inside" the human mind to see how it works and then comparing our computer programs to this. This is what cognitive science attempts to do. Another way to do this is to observe human problem solving and argue that one's programs go about problem solving in a similar way.
Example
GPS (General Problem Solver) was an early computer program that attempted to model human thinking. The developers were not so much interested in whether or not GPS solved problems correctly. They were more interested in showing that it solved problems like people, going through the same steps and taking around the same amount of time to perform those steps.
Syllabus Topic : Thinking and Acting Rationally
• Aristotle was one of the first to attempt to codify "thinking". His syllogisms provided patterns of argument structure that always gave correct conclusions, given correct premises.
• Example: All computers use energy. Using energy always generates heat. Therefore, all computers generate heat. This initiated the field of logic. Formal logic was developed in the late nineteenth century. This was the first step toward enabling computer programs to reason logically. By 1965, programs existed that could, given enough time and memory, take a description of a problem in logical notation and find the solution, if one existed. The logicist tradition in AI hopes to build on such programs to create intelligence.
• There are two main obstacles to this approach. First, it is difficult to make informal knowledge precise enough to use the logicist approach, particularly when there is uncertainty in the knowledge. Second, there is a big difference between being able to solve a problem in principle and doing so in practice.
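The syllogism in the example above can be mechanized as a tiny forward-chaining sketch. The proposition names (`is_computer`, `uses_energy`, `generates_heat`) and the rule representation are illustrative assumptions, not part of any standard logic library:

```python
# Forward chaining over propositional rules: repeatedly fire any rule
# whose premises are all known, until no new facts can be derived.
# Fact names and the (premises, conclusion) rule format are illustrative.

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# "All computers use energy; using energy always generates heat."
rules = [
    ({"is_computer"}, "uses_energy"),
    ({"uses_energy"}, "generates_heat"),
]

print(forward_chain({"is_computer"}, rules))
```

Starting from the single fact `is_computer`, the loop derives `uses_energy` and then `generates_heat`, mirroring the two steps of the syllogism.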
Acting Rationally: The rational agent approach
• Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just something that perceives and acts.
• In the logical approach to AI, the emphasis is on correct inferences. This is often part of being a rational agent, because one way to act rationally is to reason logically and then act on one's conclusions. But this is not all of rationality, because agents often find themselves in situations where there is no provably correct thing to do, yet they must do something. There are also ways to act rationally that do not seem to involve inference, e.g., reflex actions.
The study of AI as rational agent design has two advantages:
1. It is more general than the logical approach because correct inference is only a useful mechanism for achieving rationality, not a necessary one.
2. It is more amenable to scientific development than approaches based on human behaviour or human thought, because a standard of rationality can be defined independent of humans.
Achieving perfect rationality in complex environments is not possible because the computational demands are too high. However, we will study perfect rationality as a starting place.
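The idea of an agent that simply "perceives and acts", including the reflex actions mentioned above, can be sketched with the classic two-location vacuum world. This is an assumed illustration (the locations "A"/"B" and action names are conventional textbook choices, not prescribed by this text):

```python
# A simple reflex agent for the two-location vacuum world:
# it maps the current percept (location, status) directly to an action,
# with no internal state and no model of the world.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"  # always clean the current square first
    # otherwise move toward the other square
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```

Note that such a reflex agent is rational only in a fully observable, two-square world; richer environments need agents that keep state or reason about goals.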
1.5 HISTORY OF AI
[Timeline figure: Second AI Winter; IBM Deep Blue, the first computer to beat a world chess champion; AI in the home: Roomba; IBM's Watson wins a quiz show; Google Now; Chatbot Eugene Goostman wins a Turing test; Amazon Echo]
• Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
  o At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and the enthusiasm for AI was very high.
• The golden years - Early enthusiasm (1956-1974)
  o Year 1966: Researchers emphasized developing algorithms which could solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
  o Year 1972: The first intelligent humanoid robot was built in Japan, which was named WABOT-1.
• The first AI winter (1974-1980)
  o The duration between 1974 and 1980 was the first AI winter. AI winter refers to the time period during which computer scientists dealt with a severe shortage of government funding for AI research.
  o During AI winters, public interest in artificial intelligence decreased.
• A boom of AI (1980-1987)
  o Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programmed to emulate the decision-making ability of a human expert.
  o In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
• The second AI winter (1987-1993)
  o The duration between 1987 and 1993 was the second AI winter.
  o Again, investors and the government stopped funding AI research due to high cost and inefficient results. The expert system XCON was very cost effective.
• The emergence of intelligent agents (1993-2011)
  o Year 1997: In 1997, IBM Deep Blue beat world chess champion Garry Kasparov and became the first computer to beat a world chess champion.
  o Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
  o Year 2006: AI came into the business world by the year 2006. Companies like Facebook, Twitter, and Netflix also started using AI.
• Deep learning, big data and artificial general intelligence (2011-present)
  o Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
  o Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.
  o Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test".
  o Year 2018: IBM's "Project Debater" debated on complex topics with two master debaters and performed extremely well.
  o Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over a phone call, and the lady on the other side did not notice that she was talking with a machine.
• Now AI has developed to a remarkable level. The concepts of deep learning, big data, and data science are now trending like a boom. Nowadays, companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of Artificial Intelligence is inspiring and will come with high intelligence.
Syllabus Topic: Applications of AI
1.6 SUB-AREAS AND APPLICATIONS OF AI
• Artificial Intelligence is used to identify defects and nutrient deficiencies in the soil. This is done using computer vision, robotics, and machine learning. AI can also analyze where weeds are growing.
• AI bots can help to harvest crops at a higher volume and faster pace than human laborers.
(3) AI in Gaming
• Another sector where Artificial Intelligence applications have found prominence is the gaming sector.
• AI can also be used to predict human behavior, using which game design and testing can be improved.
(4) AI in Automobiles
• Artificial Intelligence is used to build self-driving vehicles. AI can be used along with the vehicle's camera, radar, cloud services, GPS, and control signals to operate the vehicle.
• AI can improve the in-vehicle experience and provide additional systems like emergency braking, blind-spot monitoring, and driver-assist steering.
(5) AI in Social Media
• Instagram: On Instagram, AI considers your likes and the accounts you follow to determine what posts you are shown on your explore tab.
• Twitter: AI is used by Twitter for fraud detection and for removing propaganda and hateful content. Twitter also uses AI to recommend tweets that users might enjoy, based on what type of tweets they engage with.
• Artificial Intelligence applications are popular in the marketing domain as well.
• Using AI, marketers can deliver highly targeted and personalized ads with the help of behavioral analysis and pattern recognition. It also helps with retargeting audiences at the right time to ensure better results and reduced feelings of distrust and annoyance.
• AI can help with content marketing in a way that matches the brand's style and voice. It can be used to handle routine tasks like performance reports, campaign reports, and much more.
• Chatbots powered by AI, Natural Language Processing, Natural Language Generation, and Natural Language Understanding can analyze the user's language and respond in the ways humans do.
• AI can provide users with real-time personalization based on their behaviour and can be used to edit and optimize marketing campaigns to fit a local market's needs.
• Computer Vision: Face recognition programs in use by banks, government, etc.; handwriting recognition; electronics and manufacturing inspection; photograph interpretation; baggage inspection; reverse engineering to automatically construct a 3D geometric model.
• Expert Systems: Another very important cognitive ability of a human being is decision making. This ability is based on experience and knowledge, which makes one an intelligent expert. Expert systems are required in industries, and especially in organizations where analytics plays an important role; there they act as mediators handling multiple activities to make the system efficient, such as flight tracking systems, medical systems, etc.
• Diagnostic Systems: MYCIN system for diagnosing bacterial infections of the blood and suggesting treatment; Pathfinder medical diagnosis system, which suggests tests and makes diagnoses.
• Financial Decision Making: Credit card companies, mortgage companies, and banks employ AI systems to detect fraud and expedite financial transactions. By considering usage patterns, AI can help reduce the possibility of credit card fraud. Many customers prefer to buy a product or service based on customer reviews; AI can help identify and remove fake reviews.
• Classification Systems: Put information into one of a fixed set of categories using several sources of information, e.g., financial decision making systems. NASA developed a system for classifying very faint areas in astronomical images into either stars or galaxies with very high accuracy, by learning from human experts' classifications.
(7) Scheduling and Planning
• Automatic scheduling for manufacturing.
(8) Artificial Neural Networks
• Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.
While the COVID-19 pandemic impacted many aspects of how we do business, it did not diminish the impact of Artificial Intelligence (AI) on our everyday lives. AI remains a key trend when it comes to technologies and innovations that will fundamentally change how we live, work, and play in the near future.
AI is the force behind many modern technological comforts that are now part of our day-to-day lives. With continuous research, technology has made massive developments in major fields such as healthcare, retail, automotive, manufacturing and finance. AI is one essential component that transforms the digital age with high precision and accuracy. So, here is an overview of what we can expect in the years to come.
(1) Robotic Process Automation (RPA)
• RPA solutions range from producing an automated email response to deploying thousands of bots, each programmed in an ERP system to automate rule-based tasks.
(2) Conversational AI
• Conversational AI increases the customer experience's reach, responsiveness and personalisation.
• To better understand what the human says and needs, conversational AI uses natural language processing (NLP) and machine learning to provide a more natural, near human-level interaction.
(3) The role of AI in healthcare
• Big data has been extensively used to identify COVID patients and critical hot spots.
• AI is already helping the health-care sector to a great degree with high accuracy; besides, researchers have developed thermal cameras and mobile applications to collect data for healthcare organisations.
• By leveraging data analysis and predicting various outcomes, AI can support healthcare facilities in several unique ways.
• AI instruments offer insights into human health and also recommend preventive steps to avoid the spread of diseases.
• AI solutions also help doctors remotely track the health of their patients, thereby advancing teleconsultation and remote care.
(4) Increase in demand for ethical AI
• This demand is at the top of the list of emerging developments in technology.
• Looking at how trends are rapidly changing, values-based customers and workers expect businesses to implement AI responsibly.
• Companies will actively choose to do business with partners committed to data ethics in the next few years.
(5) AI in cybersecurity
• In the coming years, knowledge will grow and be more accessible, and digital data will be at greater risk of being compromised and exposed to hacking. AI will help deter cybercrimes in the future with improved cybersecurity measures.
• Fake digital activity that matches criminal trends will be detected by AI-enabled frameworks.
(6) The Intersection of the Internet of Things with AI (AIoT)
• There is hardly any boundary between AI and IoT. Although both technologies have individual characteristics, when used together, better and more unique possibilities open up.
• The ability of AI to gain insights from data quickly makes IoT solutions more intelligent.
(7) Natural Language Processing (NLP)
• NLP is one of the most widely used applications of AI. NLP is used in Amazon Alexa and Google Home.
• The need for writing on or communicating with a screen has been eliminated by NLP, as now humans can communicate with robots that understand their language.
• The use of NLP for sentiment analysis, machine translation, process description, auto-video caption generation and chatbots is expected to increase.
(8) Reinforcement Learning (RL)
• Some use cases of RL are robotics, planning business strategies, optimising advertisement content, automating industries, controlling aircraft, and making motion control robots.
(9) Quantum AI
• To measure qubits for use in supercomputers, advanced companies will begin using quantum supremacy. Because of quantum bits, quantum computers solve problems at a quicker pace than classical computers do.
,(M5-U6)
li1 Tech-Neo Publications...A SACHIN SHAH Venture
Artificial Intelligence (MU - AI & DS / Electronics) (Introduction to Artificial Intelligence) ... Page No.
• Also, they assist in the interpretation of data and then forecast several unique trends.
• Quantum computers will help multiple organisations identify inaccessible issues and also predict
meaningful solutions. Future computers will also be used in fields like healthcare, finance and
chemistry.
(10) AI-Powered Business Forecasting and Analysis
• AI solutions help in redefining business processing with real-time alerts.
• Content-intelligent technologies, along with AI-supportive practices, will assist digital workers to
develop outstanding abilities.
• Such skills can help them cope with the automation of natural language, judgment, context formation,
reasoning and data-related insights.
(11) Edge computing
• Edge computing brings servers and data storage closer to the devices that generate and use the data, allowing data to be processed in real time. For latency-sensitive tasks it can be more effective than centralised 'cloud computing services'.
• Another instance of edge computing uses nodes : a node is a mini-server located in the vicinity of a local telecommunications provider.
• Nodes help to build a bridge between the customer and the local service provider. This costs less, saves time and provides customers with fast service.
(12) Rise of a hybrid workforce
• After the COVID-19 pandemic, companies will jump on the RPA bandwagon, which means that cognitive AI and RPA will be widely applied to cope with high-volume, repetitive activities.
• As usage grows, the office will move to a hybrid workforce environment.
• The human workforce will work with various digital assistants. The emergence of a hybrid workforce will imply more collaborative experiences with AI.
Syllabus Topic: Ethics In Al
Completing multiple tasks is another aim and objective of artificial intelligence. One of the largest difficulties to overcome has been making it possible for an AI program or a "robot" to do more than one task.
It is very easy to program a system to complete a certain task. For instance, it can bring an item from point A
to point B.
However, if you want the program to understand that it must pick up the item and then either bring it to point A or throw it in the trash, based on arbitrary rules that a human would know, that's a different story. In simpler terms, it might be a while before your housemaid is a robot.
Objective #3 : Artificial Intelligence shapes the future of every company
AI is quickly becoming a crucial tool for all companies. They are using this technology to streamline their
processes. It's no secret that the goal is to continue this trend for as many low-level tasks as possible. It
ultimately saves the companies money in the long run, and it allows them to up productivity in other areas.
Objective #4 : Artificial intelligence prepares for a boom in big data
Big data has already taken the world by storm. Big data is the large-scale, and sometimes even random,
collection of data about people's lives, habits, conversations and more. AI will be able to do much more for
the analysis of this data than humans ever did, so data-driven research, advertisements, and content are going
to explode.
Objective #5 : Artificial intelligence creates synergy between humans and AI
One of the key goals in AI is to develop a strong synergy between AI and humans, so that they can work
together to enhance the capabilities of both.
Objective #6 : Artificial intelligence is good at problem-solving
So far, AI is unable to employ advanced problem-solving abilities. That is, it can tell you a factual answer,
but cannot analyze a specific situation and make a decision based on the very specific context of that
situation.
Objective #7 : Artificial Intelligence helps with planning
One of the most human traits in existence is the ability to plan and make goals and subsequently accomplish
them. And one of the goals for AI is to have AI be able to do these things.
Objective #8 : Artificial Intelligence performs more complex tasks
The key goal is this : to develop AI programs that can complete more and more complex tasks. Already the abilities are shocking, although not yet widespread. However, over time these will develop and ultimately, scientists hope, be able to do basically the same things humans can do.
Chapter Ends...
□□□
Module 2
CHAPTER 2
Intelligent Agents
2.1 Introduction of Agents .......... 2-2
2.1.1 Intelligent Agent .......... 2-2
2.2 Structure of Intelligent Agents .......... 2-3
2.3 Characteristics of Intelligent Agent .......... 2-3
UQ. Define Intelligent Agent. What are the characteristics of Intelligent Agent? .......... 2-3
2.4 Simple Reflex Agent .......... 2-4
2.5 Model-based Reflex Agent .......... 2-5
UQ. Explain Model based Reflex agent with block diagram. (MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 5(b), May 16, 5 Marks) .......... 2-5
2.6 A Goal-based Reflex Agent .......... 2-6
UQ. Explain Goal Based agent with block diagram. (MU - Q. 2(b), Dec. 18; Q. 2(B), Dec. 17, 10 Marks; Q. 1(d), May 17, 4 Marks) .......... 2-6
2.7 An Utility-based Reflex Agent .......... 2-7
UQ. Explain Utility based agent with block diagram. (MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 2(b), Dec. 18; Q. 2(B), Dec. 17, 10 Marks; Q. 1(d), May 17, 4 Marks; Q. 5(b), May 16, 5 Marks) .......... 2-7
2.8 Comparison of Model Based Agent and Utility Based Agent .......... 2-8
UQ. Compare Model based Agent and Utility based Agent. .......... 2-8
2.9 Comparison of Model Based Agent with Goal Based Agent .......... 2-8
UQ. Compare Model Based Agent with Goal Based Agent. .......... 2-8
2.10 Types of Environment .......... 2-9
2.10.1 Complete vs. Incomplete Environments .......... 2-11
2.10.2 Competitive vs. Collaborative Environments .......... 2-11
Chapter Ends .......... 2-11
Syllabus Topic : Introduction of Agents
An agent is just something that acts (agent comes from the Latin agere, to do).
In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes through sensors
and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals.
• Intelligent agents may also learn or use knowledge to achieve their goals.
• They may be very simple or very complex. Example : a reflex machine such as a thermostat is an intelligent agent.
• An agent is anything that can perceive its environment through sensors and acts upon that environment
through effectors.
Agent's structure can be viewed as :
(1) Agent= Architecture+ Agent program. (2) Architecture= the machinery that an agent executes on.
(3) Agent program = an implementation of an agent function.
• IAs like Rahul and Gopal are examples of intelligence as they use sensors to perceive a request made by the user and automatically collect data from the internet without the user's help. They can be used to gather information about the perceived environment, such as weather and time. Thus, an intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators for achieving goals. An intelligent agent may learn from the environment to achieve its goals.
• The term 'percept' means the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything that the agent has perceived. We illustrate this idea in Fig. 2.1.1.
[Fig. 2.1.1 : An agent perceives its environment through sensors and acts on it through actuators]
2.1.1 Intelligent Agent
• An intelligent agent is a programme that can make decisions or perform a service based on its environment, user input and experiences.
• These programs can be used autonomously to gather information on a regular, programmed schedule or when prompted by the user in real time.
• IA may be simple or complex - a thermostat is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm or a state.
2. Goal-oriented : Intelligent agents carry out the particular task provided by the user as a statement of goals. They move around from one machine to another, can react in response to their environment, and take initiative to exhibit goal-directed behaviour.
3. Independent : An intelligent agent is self-dependent, in the sense that it functions on its own without human intervention. It makes decisions on its own and initiates them. It communicates independently with data, information and other agents, and achieves the objectives and tasks on behalf of the user.
4. Intelligent : Intelligent agents can collect data intelligently. They can reason out things intelligently, based on the existing knowledge of their user and environment and on past experiences. To evaluate conditions in the external environment, intelligent agents follow preset rules.
5. Reduce net traffic : Agents communicate and co-operate with other agents quickly. This way, they can perform tasks, such as information searches, quickly and efficiently, and network traffic gets reduced thereby.
(6) This kind of condition-action rule is written as : if hand is in fire, then pull it away. The 'simple reflex agent' has a library of such rules, so that if a certain situation should arise, it is in the set of condition-action rules, and the agent will know how to react with minimal reasoning. These agents are simple to work with but have very limited intelligence, such as picking up 2 rock samples. Refer Fig. 2.4.1.
Rectangles : represent the current internal state of the agent's decision process.
Ovals : represent the background information used in the process.
[Fig. 2.4.1 : Schematic diagram of a simple reflex agent, from sensors through condition-action rules to effectors]
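The condition-action behaviour described above can be sketched in a few lines of Python. This is a minimal illustration only; the vacuum-world percepts and the rule set are assumed for the example, not taken from the textbook:

```python
# Sketch of a simple reflex agent: condition-action rules map the current
# percept directly to an action, with no internal state or planning.
# The vacuum-world percept format and rules below are illustrative assumptions.

def simple_reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. ('A', 'Dirty')."""
    location, status = percept
    # Condition-action rules, checked in order:
    if status == "Dirty":
        return "Suck"      # rule: if the current square is dirty, clean it
    if location == "A":
        return "Right"     # rule: if in square A and it is clean, move right
    return "Left"          # rule: otherwise move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # Right
```

Note that the agent consults only the current percept; this is what limits its intelligence, as the text says.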
UQ. Explain Goal Based agent with block diagram.
[Fig. : Block diagram of a goal-based agent, showing 'what my actions do' and the goals feeding the decision 'what action I should do now']
2.7 AN UTILITY-BASED REFLEX AGENT
UQ. Explain Utility based agent with block diagram. (MU - Q. 2(a), Dec. 19, 10 Marks; Q. 2(c), Dec. 16, 5 Marks; Q. 2(b), Dec. 18; Q. 2(B), Dec. 17, 10 Marks; Q. 1(d), May 17, 4 Marks; Q. 5(b), May 16, 5 Marks)
(1) An utility-based reflex agent is like the goal-based agent, but with a measure of 'how much happy' an action would make it, rather than the goal-based binary feedback [happy, unhappy].
(2) This kind of agent provides the best solution. An example is the route recommendation system, which finds the 'best' route to reach a destination.
(3) The agents which are developed having their end-uses as building blocks are called utility-based agents. When there are multiple possible alternatives, then to decide which one is best, utility-based agents are used. They choose actions based on a preference (utility) for each state. Refer Fig. 2.7.1.
[Fig. 2.7.1 : Schematic diagram of a utility-based agent, from sensors through condition-action rules and utility to action]
(4) Goal-based agents are important as they are used to expand the capabilities of the model-based agent by
having the 'goal' information.
(5) They choose an action in order that they will achieve the goal. Utility-based agents act based not only on goals but also on the best way of achieving the goal.
(6) The utility-based agent is useful when there are multiple possible alternatives, and an agent has to choose in
order to perform the best action. The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
(7) In artificial intelligence, a utility function assigns values to certain actions that the AI can take. An AI
agent's preferences over all possible outcomes can be captured by a function that maps the outcomes to a
utility value; the higher the number, the more that agent likes that outcome.
(8) In Economics, the utility function measures the welfare or satisfaction of a consumer as a function of the
consumption of real goods, such as food or clothing. Utility function is widely used in rational choice
theory to analyze human behavior.
(9) Utility theory bases its beliefs upon individuals' preferences. It is a theory postulated in economics to explain the behavior of individuals, based on the premise that people can consistently rank-order their choices depending upon their preferences. We can state that individuals' preferences are intrinsic.
2.8 COMPARISON OF MODEL BASED AGENT AND UTILITY BASED AGENT
UQ. Compare Model based Agent and Utility based Agent. (MU - Q. 4(b), May 17, 4 Marks)
Sr. No. | Model based Agent | Utility based Agent
1. | Goal-based agents are very important as they are used to expand the model-based agent by having the goal information. | Utility-based agents act based not only on goals but also on the best way of achieving the goal.
2. | They choose an action in order that they will achieve the goal. | A utility-based agent makes decisions based on the maximum utility of its choices.
3. | A model-based reflex agent uses percept history and internal memory to make decisions about the 'model' of the world around it. | It is the usefulness (utility) of the agent that makes it distinct from its counterparts.
4. | Internal memory allows these agents to store some of their navigation history, to help understand things about their current environment even when everything that they need to know cannot be directly observed. | A goal-based agent makes decisions based simply on achieving a set goal. Suppose you want to travel from Pune to Mumbai. Mumbai is the goal, and the goal-based agent will get you there.
5. | A model-based agent uses GPS to understand its location and predict upcoming drivers. | But if you come across a closed road, the utility-based agent will analyse other routes to get you there, and it will select the best option for maximum utility. Hence, the utility-based agent is a step above the goal-based agent.
6. | Model-based reflex agents are made to deal with partial accessibility. They do this by keeping an internal state that depends on what they have seen before. | A utility-based agent is more agile and sophisticated, since it has some decision-making capabilities.
(1) A goal-based agent has an agenda. It operates on a goal in front of it and makes decisions based on how best to reach that goal.
(2) A goal-based agent is capable of thinking beyond the present moment to decide the best actions to take to achieve its goal.
(3) A goal-based agent operates as a search and planning function.
(4) It targets the goal ahead and finds the right action in order to reach it.
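The search-and-planning behaviour in points (3) and (4) can be sketched as follows. The road map, city names and use of breadth-first search are assumed purely for illustration:

```python
# Sketch of a goal-based agent's planning step: given a goal state, search
# the state space for an action sequence that reaches it.
# The road map below is a made-up example, echoing the Pune-Mumbai scenario.

from collections import deque

roads = {
    "Pune": ["Lonavala"],
    "Lonavala": ["Pune", "Mumbai"],
    "Mumbai": [],
}

def plan_route(start, goal, roads):
    """Breadth-first search for a path from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                  # first complete path to the goal
        for city in roads[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None                          # goal unreachable

print(plan_route("Pune", "Mumbai", roads))  # ['Pune', 'Lonavala', 'Mumbai']
```

A utility-based agent would extend this by scoring each candidate route (e.g. by travel time) instead of accepting the first one that reaches the goal.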
Syllabus Topic : Environment Types : Deterministic, Stochastic, Static, Dynamic, Observable, Semi-observable, Single Agent, Multi Agent
The agent environment in artificial intelligence is classified into different types. The environment is categorized based on how the agent deals with it.
Classification is as follows:
1. Fully observable & Partially observable 2. Static & Dynamic
3. Discrete & Continuous 4. Deterministic & Stochastic
5. Single-agent & Multi-agent 6. Episodic & Sequential
7. Known & Unknown 8. Accessible & Inaccessible
• In a card game, the used cards and the cards reserved for the future are not visible to the user, so the states are only partially observed. Refer Fig. 2.10.2.
[Fig. 2.10.2 : Noisy and inaccurate sensors plus missing states give a partially observable environment]
• An environment that consists of only a single agent is called a single-agent environment. All the operations over the environment are performed and controlled by this agent only. If the environment consists of more than one agent, or multiple agents conducting the operations, then such an environment is called a multi-agent environment.
• In a vacuum cleaning environment, the vacuum cleaner is the only agent involved in the environment, and it can be considered as an example of a single-agent environment.
• Multi-Agent Systems (MAS), computer-based environments with multiple interacting agents, are the best example of a multi-agent environment. Computer games are a common MAS application. Biological agents, robotic agents, computational agents, software agents, etc. are some of the agents sharing the environment in a computer game.
6. Episodic & Sequential
• An environment with a series of actions where the current action of an agent does not influence future actions is called an episodic (non-sequential) environment. Sequential or non-episodic environments are those where the current action of the agent will affect future actions.
• For a classification task, the agent receives information from the environment at a given time, and actions are performed only on those pieces of information. The current action doesn't have any influence on the future one, so it can be grouped under an episodic environment.
• But for a chess game, the current action of a particular piece can influence future actions. If a piece takes a step forward now, the next actions depend on where it moved, so chess is sequential.
7. Known & Unknown
• Known & unknown describe an agent's state of knowledge rather than a property of the environment. If all the possible results of all the actions are known to the agent, then it is a known environment. If the agent is not aware of the results of its actions and needs to learn about the environment to make decisions, it is called an unknown environment.
8. Accessible & Inaccessible
• If the sensors of the agent can have complete access to the state of the environment, or the agent can access complete information about the environmental state, then it is called an accessible environment. Else it is inaccessible, i.e. the agent doesn't have complete access to the environmental state.
2.10.1 Complete vs. Incomplete Environments
... incomplete environments, as AI strategies cannot anticipate many moves in advance and, instead, focus on finding a good 'equilibrium' at any given time.
2.10.2 Competitive vs. Collaborative Environments
Competitive AI environments pit AI agents against each other in order to optimize a specific outcome. Games such as GO or Chess are examples of competitive AI environments. Agents interacting to avoid collisions, or smart-home sensor interactions, are examples of collaborative AI environments.
Chapter Ends...
□□□

Module 3
CHAPTER 3
Solving Problems by Searching
Definition, State space representation, Problem as a state space search, Problem formulation, Well-defined problems, Solving Problems by Searching, Performance evaluation of search strategies, Time Complexity, Space Complexity, Completeness, Optimality. Uninformed Search : Depth First Search, Breadth First Search, Depth Limited Search, Iterative Deepening Search, Uniform Cost Search, Bidirectional Search. Informed Search : Heuristic Function, Admissible Heuristic, Informed Search Technique, Greedy Best First Search, A* Search. Local Search : Hill Climbing Search, Simulated Annealing Search. Optimization : Genetic Algorithm. Game Playing, Adversarial Search Techniques, Minimax Search, Alpha-Beta Pruning.
The state space representation forms the basis of most AI methods. Its structure corresponds to the structure of problem solving in two important ways. It allows for a formal definition of a problem, as per the need to convert some given situation into some desired situation using a set of permissible operations.

Syllabus Topic : Problem as a State Space Search

3.2 SEARCHING

GQ. What is searching?

Search plays a major role in solving many artificial intelligence (AI) problems. Search is a universal problem-solving mechanism in AI. In many problems, the sequence of steps required to solve a problem is not known in advance but must be determined by systematic trial-and-error exploration of alternatives.
Search techniques try to "pre-play" the game by evaluating future states (game tree search), and may also use heuristics to prune bad choices or speed things up. They theoretically can make an exact and perfect choice, but are slow.
The problems that are addressed by AI search algorithms fall into three general classes :
1. Single-agent path-finding problems
2. Two-player games
3. Constraint-satisfaction problems

3.2.1 Node Representation in Search Tree

A binary search tree (BST) is a tree in which all the nodes follow the below-mentioned properties :
(i) The value of the key of the left sub-tree is less than the value of its parent (root) node's key.
(ii) The value of the key of the right sub-tree is greater than or equal to the value of its parent (root) node's key.
A BST is a collection of nodes arranged in a way where they maintain the BST properties. Each node has a key and an associated value. While searching, the desired key is compared to the keys in the BST and, if found, the associated value is retrieved. We mention below a pictorial representation of a BST.
[Fig. 3.2.1 : A binary search tree]
Note that the root node key (27) has all lesser-valued keys on the left sub-tree and the higher-valued keys on the right sub-tree.

Basic Operations
The basic operations on a tree :
(i) Search : Searches an element in a tree.
(ii) Insert : Inserts an element in a tree.
(iii) Pre-order Traversal : Traverses a tree in a pre-order manner.
(iv) In-order Traversal : Traverses a tree in an in-order manner.
(v) Post-order Traversal : Traverses a tree in a post-order manner.
Remark : There must be no duplicate nodes.

Let us suppose we want to search for a number :
(i) We start at the root.
(ii) We compare the value to be searched with the value of the root.
(iii) If it is equal, we complete the search.
(iv) If it is lesser, we go to the left sub-tree, since in a binary search tree all the elements in the left subtree are lesser and all the elements in the right subtree are greater.
[Figs. 3.5.3 - 3.5.6 : DFS traversal steps, showing the stack after each visit]
Visit D and mark it as visited and put it onto the stack. Here we have B and C nodes, which are adjacent to D and both are unvisited. But again we choose in alphabetical order.
[Figs. 3.5.7 - 3.5.9 : DFS traversal steps continued]
The only unvisited node adjacent to D is C. So we visit C, mark it and put it onto the stack.
As C does not have any unvisited adjacent node, we keep popping the stack till we find a node that has an unvisited adjacent node. Here there is none, the stack is empty and the program is over.
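The traversal just walked through can be sketched as an iterative DFS with an explicit stack. The small example graph below is assumed for illustration (it is not the graph of the figures), but the alphabetical tie-breaking matches the text:

```python
# Sketch of iterative depth-first search with an explicit stack.
# The example graph is an illustrative assumption.

def dfs(graph, start):
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()               # take the top of the stack
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbours in reverse alphabetical order so that the
        # alphabetically smaller node is popped (visited) first.
        for nbr in sorted(graph[node], reverse=True):
            if nbr not in visited:
                stack.append(nbr)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```

The stack is what gives DFS its "go deep first" behaviour: the most recently discovered node is always expanded next.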
3.5.2 Performance Measures of DFS
The performance measuring factors of an algorithm are as follows :
• The two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc.
• Performance measure is generally defined as the regular measurement of outcomes and results, which generates reliable data on the effectiveness and efficiency of programs.
• There are four ways to measure the performance of an algorithm :
(i) Completeness : DFS is complete if the search tree is finite; it implies that for a given finite search tree, DFS will find a solution if it exists.
(ii) Optimality : DFS is not optimal, meaning that the number of steps in reaching the solution, or the cost spent in reaching it, is high.
(iii) Time complexity : The time complexity of DFS, if the entire tree is traversed, is O(V), where V is the number of nodes. For a directed graph, the sum of the sizes of the adjacency lists of all nodes is E, so the time complexity in this case is O(V) + O(E) = O(V + E). For an undirected graph, each edge appears twice.
(iv) Space complexity : For DFS, which goes along a single 'branch' all the way down and uses a stack implementation, the height of the tree matters. The space complexity for DFS is O(h), where h is the maximum height of the tree.

3.5.3 Advantages and Disadvantages of Depth First Search
Advantages of DFS
1. Memory requirement is linear with respect to nodes.
2. Less time and space complexity than BFS.
3. The solution can be found without much extra search.
Disadvantages of DFS
1. There is no guarantee that it will give you a solution.
2. If the cut-off depth is smaller, the time complexity is more.
3. Determination of the depth up to which the search proceeds is required.
4. The major drawback of depth-first search is the determination of the depth till which the search is to be carried out. This depth is called the cut-off depth. The value of the cut-off depth is essential, because otherwise the search will go on and on. If the cut-off depth is smaller, the solution may not be found, and if the cut-off depth is large, the time complexity will be more.

3.5.4 Applications of DFS
1. Finding connected components.
2. Topological sorting.
3. Finding bridges of a graph.

3.5.5 Solved Example on DFS
UEx. 3.5.1 : Consider the following graph shown in Fig. Ex. 3.5.1. Starting from A, execute DFS; the goal node is G. Show the order in which the nodes are expanded. Assume that the alphabetically smaller node is expanded first to break ties.
[Fig. Ex. 3.5.1 : Graph with nodes A to H]
Soln. :
[Fig. Ex. 3.5.1(a) : Order of node expansion]

UEx. 3.5.2 (MU - Q. 5(b), May 16, 10 Marks) : Consider the graph given in the figure. Assume that the initial state is A and the goal state is G. Find a path from the initial state to the goal state using DFS. Also report the solution cost.
[Fig. Ex. 3.5.2]
0 Soln. : A is given initial state and G is the goal (2) BFS is the core of many graph analysis algorithms
node. and it is used in many problems, such as social
► Step (I) : Place the starting node into the stack network, computer network analysis and data
8= organization.
(3) BFS involves search through a tree one level at a
► Step (II) : Now the stack is not empty and A is time. We traverse through one entire level of
not our goal node. Hence we move to next step. children nodes first, before moving onto traverse
► Step (III) : The neighbours of A are B and C.
| B | C | A |

► Step (IV) : Now B is the top node of the stack. Its neighbours are E and D.
| E | D | B |

► Step (V) : E is the top node of the stack. We look for its neighbours, but E has no neighbour in the graph, so E is removed from the stack.

► Step (VI) : Now D is the top node; its neighbours are F and C, and we push them onto the stack.
| F | C | D |

► Step (VII) : Now G is the top node of the stack, which is our goal node.

► Step (VIII) : The solution is
A → B → E → D → G

Syllabus topic : Breadth First Search

3.6 BREADTH-FIRST SEARCH (BFS)

GQ. Comment upon the statement that breadth-first search is a special case of uniform cost search.
GQ. Explain breadth first search with its algorithm.

(1) BFS stands for Breadth-First Search; it is a vertex-based technique for finding a shortest path in a graph. In BFS, one vertex is selected at a time; when it is visited and marked, its adjacent vertices are visited and stored in the queue. It is slower than DFS.
(4) Breadth-first search is an algorithm for finding a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to nodes at the next depth level.
(5) BFS uses a queue data structure for finding the shortest path. BFS can be used to find the single-source shortest path in an unweighted graph, because in BFS we reach a vertex with the minimum number of edges from the source vertex.

3.6.1 BFS Traversal Algorithm

► Step 1 : Add a node / vertex from the graph to a queue of nodes to be 'visited'.
► Step 2 : Visit the topmost node in the queue, and mark it as such.
► Step 3 : If that node has any neighbours, check to see if they have been 'visited' or not.
► Step 4 : Add any neighbouring nodes that still need to be 'visited' to the queue.

Illustrative Example

Ex. 3.6.1 : Which solution would BFS find to move from node S to node G if run on the graph below?

Fig. Ex. 3.6.1
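The four traversal steps above can be sketched in Python. The adjacency list `G` below is a made-up example; the edges of Fig. Ex. 3.6.1 are not recoverable from the text, so this graph is only an assumption for illustration:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Breadth-first search: explore level by level using a FIFO queue.
    Returns the first (and therefore shallowest) path from start to goal."""
    visited = {start}
    queue = deque([[start]])          # the queue holds whole paths
    while queue:
        path = queue.popleft()        # Step 2: visit the frontmost node
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):   # Steps 3-4
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None                       # goal unreachable

# Hypothetical unweighted graph for illustration
G = {
    'S': ['A', 'B'],
    'A': ['C'],
    'B': ['C', 'G'],
    'C': ['G'],
}
print(bfs_shortest_path(G, 'S', 'G'))
```

Because the queue is first-in first-out, the path returned is the one with the fewest edges, which is exactly the single-source shortest path property mentioned in point (5).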
Time complexity
T(n) = O(b^d) : equivalent to the number of nodes traversed in BFS until the shallowest solution, where b is the branching factor and d the depth of that solution.

Fig. 3.6.2

Space complexity
S(n) = O(b^d) : equivalent to how large the fringe can get.

Completeness
BFS is complete, meaning that for a given search tree, BFS will come up with a solution if one exists.

Optimality
BFS is optimal as long as the costs of all edges are equal.

Algorithm
(1) Mark any node as the starter or initial node.
(2) Explore and traverse un-visited nodes adjacent to the starting node.
(3) Mark the node as completed and move to the next adjacent and un-visited nodes.
(4) Navigation systems : BFS can help find all the neighbouring locations from the main or source location.
(5) Network broadcasting : A broadcasted packet is guided by the BFS algorithm to find and reach all the nodes it has the address for.

• It is possible to run BFS recursively without any data structures, but with higher complexity.
• DFS, as opposed to BFS, uses a stack instead of a queue, so it can be implemented recursively. Note that the code used here is iterative, but it is trivial to make it recursive.
• In BFS, a queue data structure is used. One can mark any node in the graph as root and start traversing the data from it.
• BFS traverses all the nodes in the graph and keeps dropping them as completed.
• BFS visits an adjacent unvisited node, marks it as done, and inserts it into a queue.

3.6.7 Performance Measures of BFS

Time complexity
Breadth-first search, being a brute-force search, generates all the nodes for identifying the goal. The amount of time taken for generating these nodes is proportional to the depth d and the branching factor b, and is given by
1 + b + b² + b³ + ... + b^d ≈ b^d
Hence the time-complexity = O(b^d).

Space complexity
Unlike depth-first search, wherein the search procedure has to remember only the paths it has generated, the breadth-first search procedure has to remember every node it has generated. Since the procedure has to keep track of all the children it has generated, the space-complexity is also a function of the depth d and the branching factor b. Thus the space complexity becomes
1 + b + b² + b³ + ... + b^d ≈ b^d
Hence the space-complexity = O(b^d).

3.6.8 Limitations of Breadth First Search

1. The amount of time needed to generate all the nodes is considerable because of the time-complexity.
2. The memory constraint is also a major hurdle because of the space-complexity.
3. The searching process remembers all unwanted nodes, which is of no practical use for the search.

GQ. Which storage structure is preferably chosen for node representation in the open list while performing best-first search over a state space, and why?

OPEN is a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.

3.6.9 Example of Breadth First Search

GQ. Give an example of a problem for which breadth-first search would work better than depth-first search.

In general, BFS is better for problems related to finding the shortest paths, or somewhat related problems, because here we can go from one node to all nodes that are adjacent to it, and hence we effectively move from path length one to path length two, and so on.

DFS, on the other hand, helps more in connectivity problems and also in finding cycles in a graph (cycles can be found with BFS with a bit of modification). Determining connectivity with DFS is trivial: if we have to call the explore procedure twice from the DFS procedure, then the graph is disconnected (this is for an undirected graph). The strongly connected components algorithm for a directed graph is a modification of DFS. Another application of DFS is topological sorting.
Artificial Intelligence (MU-AI & DS / Electronics) (Solving Problems by Searching)... Page No. (3-15)
Fig. 3.8.1 : IDDFS tree expansion (depth limit = 3)
(MS-126)
Tech-Neo Publications...A SACHIN SHAH Venture
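The depth-limited expansion shown in Fig. 3.8.1 can be sketched as repeated depth-limited DFS. The small tree here is a hypothetical stand-in, since the exact tree in the figure is not recoverable from the text:

```python
def depth_limited_dfs(graph, node, goal, limit):
    """Depth-first search that gives up below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        found = depth_limited_dfs(graph, child, goal, limit - 1)
        if found is not None:
            return [node] + found
    return None

def iddfs(graph, start, goal, max_depth):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(iddfs(tree, 'A', 'G', 3))
```

Each new iteration repeats the shallower work, but because the deepest level dominates the node count, the total cost stays O(b^d), as the advantages below note.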
3.8.2 Advantages of IDDFS

1. The main advantage of IDDFS in game tree searching is that the earlier searches tend to improve the commonly used heuristics, such as the killer heuristic and alpha-beta pruning, so that a more accurate estimate of the score of various nodes at the final depth can be made, and the search completes more quickly since it is done in a better order. For example, alpha-beta pruning is most efficient if it searches the best moves first.

2. A second advantage is the responsiveness of the algorithm. Because early iterations use small values for d, they execute extremely quickly. This allows the algorithm to supply early indications of the result almost immediately, followed by refinements as d increases, when used in an interactive setting. In a chess-playing program, for example, this facility allows the program to play at any time with the current best move found in the search it has completed so far. Each depth of the search produces a better approximation of the solution, though the work done at each step is repeated. This is not possible with a traditional depth-first search, which does not produce intermediate results.

3. The time complexity of IDDFS in well-balanced trees works out to be the same as that of depth-first search: O(b^d).

Syllabus topic : Uniform Cost Search

3.9 UNIFORM COST SEARCH

• Uniform-cost search is an uninformed search algorithm that uses the lowest cumulative cost to find a path from the source to the destination.
• Nodes are expanded, starting from the root, according to the minimum cumulative cost. The uniform-cost search is then implemented using a priority queue.
• The elements in the priority queue have almost the same costs at a given time, and thus the name Uniform Cost Search. It may appear that elements do not have almost the same costs, but when applied on a much larger graph it is certainly so.
• Uniform costing refers to the acceptance of identical costing principles and procedures by all or many units in the same industry by mutual agreement.
• The 'Uniform Cost Search (UCS)' algorithm is mainly used when the step costs are not the same but we need the optimal solution to the goal state. In such cases, we use uniform cost search to find the goal and the path, including the cumulative cost to expand each node from the root node to the goal node.
• Uniform-cost search is optimal. This is because, at every step, the path with the least cost is chosen, and paths never get shorter as nodes are added, ensuring that the search expands nodes in order of increasing path cost.

3.9.1 Algorithm of U.C.S.

• Uniform Cost Search is an algorithm used to move around a directed weighted search space to go from a start node to one of the ending nodes with a minimum cumulative cost.
• This search is an uninformed search algorithm, i.e. it does not take the state of the node or search space into consideration.
• It is used to find the path with the lowest cumulative cost in a weighted graph, where nodes are expanded according to their cost of traversal from the root node. This is implemented using a priority queue where the lower the cost, the higher the priority.

Algorithm of Uniform Cost Search (in AI)

► Step 1 : Insert the root node into the queue.
► Step 2 : Repeat till the queue is not empty.
► Step 3 : Remove the next element with the highest priority from the queue.
► Step 4 : If the node is a destination node, then print the cost and the path and exit; else insert all the children of the removed element into the queue with their cumulative cost as their priorities.

Here the root node is the starting node for the path, and a priority queue is maintained so that the path with the least cost is chosen for the next traversal.

3.9.2 Execution of Algorithm of Uniform Cost Search

• Uniform-cost search is similar to Dijkstra's algorithm. Here, instead of inserting all vertices into a priority queue, we insert only the source; then, one by one, we insert nodes when needed.

Remark
Even if we reach the goal state, we continue searching for other possible paths (if there are multiple goals).

Fig. 3.9.1

Nodes {A, B, C, D, E and F} are the intermediate nodes. Our motive is to find the path from S to any of the destination states with the least cumulative cost. Each directed edge represents the direction of movement allowed through that path, and its label represents the cost if one travels through that path. Thus the overall cost of a path is the sum of the costs of all the edges along it.

For e.g. : A path from S to G1 is {S > A > G1}, whose cost is SA + AG1 = 5 + 9 = 14.

Here we maintain a priority queue, the same as in BFS, with the cost of the path as its priority; the lower the cost, the higher the priority. We use a tree to show all the possible paths, and also maintain a visited list to keep track of all the visited nodes, as we need not visit any node twice.
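Steps 1 to 4 above map naturally onto a priority queue (Python's `heapq`). The weighted graph below is a hypothetical example loosely in the spirit of Fig. 3.9.1, not its exact edges, and the two goal names G1 and G2 are likewise assumed:

```python
import heapq

def uniform_cost_search(graph, start, goals):
    """Expand the cheapest frontier node first (priority = cumulative cost)."""
    frontier = [(0, start, [start])]      # Step 1: insert the root node
    visited = set()
    while frontier:                        # Step 2: repeat till queue empty
        cost, node, path = heapq.heappop(frontier)   # Step 3: cheapest out
        if node in visited:
            continue
        visited.add(node)
        if node in goals:                  # Step 4: destination reached
            return cost, path
        for child, step_cost in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier,
                               (cost + step_cost, child, path + [child]))
    return None

# Hypothetical weighted, directed graph
Gw = {
    'S': [('A', 5), ('B', 9), ('D', 6)],
    'A': [('G1', 9), ('B', 3)],
    'B': [('C', 1)],
    'D': [('C', 2), ('E', 2)],
    'C': [('G2', 5)],
    'E': [('G2', 7)],
}
print(uniform_cost_search(Gw, 'S', {'G1', 'G2'}))
```

Note that a goal node is only accepted when it is popped from the queue, not when it is first generated; this is what makes the returned cumulative cost minimal.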
Let C* be the cost of the optimal solution, and let ε be the cost of each step towards the goal node. Then the number of steps is 1 + ⌊C*/ε⌋.

Fig. Ex. 3.9.1(b)

► Step III : The node A has the minimum distance, so we keep it aside and add the node G.

Fig. Ex. 3.9.1(f)

► Step VIII : Now the minimum cost is at F, so it is removed (alphabetically) and the subnode J is added.

3.10 BIDIRECTIONAL SEARCH

The principle used in a bidirectional heuristic search algorithm is to find the shortest path from the current node to the goal node. The only difference is the two simultaneous searches, from the initial point and from the goal vertex. The main idea behind bidirectional searches is to reduce the time taken for the search drastically.
This takes place when both searches happen simultaneously, forwards from the initial node and backwards from the goal node. They intersect somewhere in between in the graph. The path traverses from the initial node through the intersecting point to the goal vertex, and that is the shortest path found by this search.

We consider an example:

Fig. : Forward search from the start node meeting the backward search from the goal node.

Bidirectional search algorithm

► Step 1 : Let A be the initial node, O the goal node, and H the intersection node.
► Step 2 : We start searching simultaneously, from the start node towards the goal node and backwards from the goal node towards the start node.
► Step 3 : When the forward search and the backward search intersect at one node, then searching stops.

Also observe that bidirectional searches are complete if a breadth-first search is used for both traversals, i.e., for both the path from the start node till the intersection and the path from the goal node till the intersection.

Two main types of bidirectional searches are as follows:

(2) Front-to-front (BFFA) : Here the distance of all nodes is calculated, and h is calculated as the minimum of all heuristic distances from the current node to nodes on the opposing front.

Performance measure

(1) Completeness : Bidirectional search is complete if BFS (breadth-first search) is used in both searches.
(2) Optimality : It is optimal if BFS is used for the search and the paths have uniform cost.
(3) Time and space complexity : Time and space complexity is O(b^(d/2)).

When to use the bidirectional approach

(1) Both initial and goal states are unique and completely defined.
(2) The branching factor is exactly the same in both directions.

Why the bidirectional approach?

(i) In many cases it is faster, and it reduces the amount of required exploration.
(ii) Suppose the branching factor of the tree is b and the distance of the goal vertex from the source is d; then the normal BFS/DFS searching complexity is O(b^d). But for the two searches the complexity is O(b^(d/2)), which is far less than O(b^d).
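The three-step procedure above can be sketched as two alternating breadth-first sweeps that stop when their frontiers meet. The undirected graph `Gb` is a made-up example, with A as the start, O as the goal, and H as the meeting node:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Alternate one BFS level from each side; stop when the frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        for _ in range(len(frontier)):        # expand one whole level
            node = frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:  # the frontiers intersect here
                        return nbr
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            path = []
            n = meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None

Gb = {'A': ['B'], 'B': ['A', 'H'], 'H': ['B', 'O'], 'O': ['H']}
print(bidirectional_search(Gb, 'A', 'O'))
```

Each sweep explores only about b^(d/2) nodes, which is where the O(b^(d/2)) figure quoted above comes from.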
Syllabus topic : Informed search

1. It adds domain-specific information to select the best path along which to continue searching.
2. Define a heuristic function h(n) that estimates the "goodness" of a node n. Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state.
3. The heuristic function is an estimate of how close we are to a goal, based on domain-specific information that is computable from the current state description. Some examples of informed search are best first search, beam search, and the A* and AO* algorithms.

Informed search algorithms

An informed search algorithm contains an array of knowledge that tells us how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently. The informed search algorithm is more useful for large search spaces. Informed search uses the idea of a heuristic, hence it is also called heuristic search.

3.11.1 Example for Informed Search

GQ. Give an example for informed search.

Fig. 3.11.1 shows how beam search proceeds.

The searching process is similar to breadth-first search, wherein searching proceeds level by level. At each level, heuristic functions are applied to reduce the number of paths to be explored. In fact, this is done to keep the width of the beam minimal. The width of the beam is fixed, and whatever the depth of the tree, the number of alternatives to be scanned is the product of the width and the depth.

Fig. 3.11.1 : Beam search procedure

3.11.2 Algorithm for Beam Search

► Step 1 : Let width_of_beam = W.
► Step 2 : Put the initial node on a list START.
► Step 3 : If (START is empty) or (START = GOAL), then terminate search.
► Step 4 : Remove the first node from START. Call this node a.
► Step 5 : If (a = GOAL), then terminate search with success.
► Step 6 : Else if node a has successors, generate all of them and add them at the tail of START.
► Step 7 : Use a heuristic function to rank and sort all the elements of START.
► Step 8 : Determine the nodes to be expanded. The number of nodes should not be greater than W. Name these as START1.
► Step 9 : Replace START with START1.
► Step 10 : Go to Step 2.
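Steps 1 to 10 above can be condensed into a level-by-level loop that keeps at most W candidates. The tree and the heuristic values below are assumptions for illustration, not the ones in Fig. 3.11.1:

```python
def beam_search(graph, h, start, goal, width):
    """Level-by-level search keeping only the `width` best nodes,
    ranked by the heuristic h (lower value = closer to the goal)."""
    beam = [start]                       # Step 2: START holds the initial node
    while beam:                          # Step 3: stop when START is empty
        if goal in beam:                 # Step 5: success
            return True
        successors = []
        for node in beam:                # Step 6: generate all successors
            successors.extend(graph.get(node, []))
        successors.sort(key=h)           # Step 7: rank by the heuristic
        beam = successors[:width]        # Steps 8-9: keep at most W nodes
    return False                         # search exhausted without success

# Hypothetical tree and heuristic values
tree = {'S': ['A', 'B', 'C'], 'A': ['D'], 'B': ['E', 'G'], 'C': ['F']}
h_vals = {'S': 7, 'A': 3, 'B': 2, 'C': 4, 'D': 5, 'E': 2, 'F': 6, 'G': 0}
print(beam_search(tree, h_vals.get, 'S', 'G', width=2))
```

Because the beam is truncated to W nodes at every level, the work per level is bounded, but a goal lying under a pruned node can be missed; beam search trades completeness for memory.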
Syllabus topic : Admissible Heuristic
1. Is the problem decomposable into a set of (nearly) independent, smaller or easier sub-problems?
2. Can solution steps be ignored, or at least undone, if they prove unwise?
3. Is the problem's universe predictable?
4. Is a good solution to the problem obvious without comparison to all other possible solutions?
5. Is the desired solution a state of the world or a path to a state?
6. Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?
7. Can a computer that is simply given the problem return the solution, or will the solution of the problem require interaction between the computer and a person?

3.12.3 Is the Problem Decomposable?

A very large and composite problem can be easily solved if it can be broken into smaller problems and recursion can be used. Suppose we want to solve:

Ex : ∫ (x² + 3x + sin²x cos²x) dx

This can be done by breaking it into three smaller problems and solving each by applying specific rules. On adding the results, the complete solution is obtained.

3.12.4 Can Solution Steps be Ignored or Undone?

• Problems fall under three classes: ignorable, recoverable and irrecoverable. This classification is with reference to the steps of the solution to a problem. For example, consider theorem proving: we may proceed by first proving a lemma, and later find that it is of no help. We can still proceed further, since nothing is lost by this redundant step. This is an example of ignorable solution steps.
• Now consider the 8-puzzle problem, a tray of tiles to be arranged in a specified order. While moving from the start state towards the goal state, we may make some wrong move, but we may backtrack and undo the unwanted move. This only involves additional steps, and the solution steps are recoverable.
• Lastly, consider the game of chess. If a wrong move is made, it can neither be ignored nor be recovered. The thing to do is to make the best use of the current situation and proceed. This is an example of irrecoverable solution steps.

(i) Ignorable problems (Ex : theorem proving), in which solution steps can be ignored.
(ii) Recoverable problems (Ex : 8-puzzle), in which solution steps can be undone.
(iii) Irrecoverable problems (Ex : chess), in which solution steps can't be undone.

A knowledge of these will help in determining the control structure.

Syllabus topic : Informed Search Technique, Greedy Best First Search, A* Search

3.13 BEST FIRST SEARCH

GQ. Explain best first search with its algorithm.
OR When will the best first search algorithm be applicable?
OR With a suitable algorithm and example, explain best first search.

Definition : This search procedure is an evaluation function variant of best-first search. The heuristic function used here, called an evaluation function, is an indicator of how far the node is from the goal node. Goal nodes have an evaluation function value of zero.

Best-first search is explained using the search graph given in the figure:

Fig. : Best-first search graph (start node S, goal node; the numbers beside the nodes are evaluation function values).

1. First, the start node S is expanded. It has three children A, B and C with values 3, 6 and 5 respectively. These values approximately indicate how far they are from the goal node.
2. The child with the minimum value, namely A, is chosen. The children of A are generated. They are D and E, with values 9 and 8.
3. The search process now has four nodes to search, i.e., node D with value 9, node E with value 8, node B with value 6 and node C with value 5. Of them, node C has the minimal value, and it is expanded to give node H with value 7.
4. At this point, the nodes available for search are (D : 9), (E : 8), (B : 6) and (H : 7), where (a : b) indicates that a is the node and b is its evaluation value. Of these, B is minimal, and hence B is expanded to give (F : 12), (G : 14).
5. At this juncture, the nodes available for search are (D : 9), (E : 8), (H : 7), (F : 12) and (G : 14), out of which (H : 7) is minimal, and it is expanded to give (I : 5), (J : 6).
6. The nodes now available for expansion are (D : 9), (E : 8), (F : 12), (G : 14), (I : 5), (J : 6). Of these, the node with the minimal value is (I : 5), which is expanded to give the goal node.

The entire steps of the search process are given in Table 3.13.1.

Table 3.13.1 : Search process of best-first search
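The six-step trace above can be reproduced with a priority queue ordered on the evaluation value. The graph below encodes exactly the values quoted in the trace; the goal-node name `Z` and the value h(S) = 15 are assumptions of this sketch, since the original figure does not name them:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Always expand the open node with the smallest evaluation value h."""
    open_list = [(h[start], start)]       # priority queue keyed on h
    closed = set()
    order = []                            # expansion order, for inspection
    while open_list:
        _, node = heapq.heappop(open_list)
        if node in closed:
            continue
        closed.add(node)
        order.append(node)
        if node == goal:
            return order
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child))
    return None

# Graph and evaluation values taken from the worked trace above
sg = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'C': ['H'],
      'B': ['F', 'G'], 'H': ['I', 'J'], 'I': ['Z']}
ev = {'S': 15, 'A': 3, 'B': 6, 'C': 5, 'D': 9, 'E': 8,
      'F': 12, 'G': 14, 'H': 7, 'I': 5, 'J': 6, 'Z': 0}
print(best_first_search(sg, ev, 'S', 'Z'))
```

The expansion order produced (S, A, C, B, H, I, goal) matches steps 1 to 6 of the trace.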
...experience to know that hSLD is approximately equal to the actual road distance, and hence it is a useful heuristic.

Soln. :

► Step I : We have to perform greedy search; we choose the order of the nodes. Consider the tree:

Fig. Ex. 3.14.1(a)

(i) Cost : F(A) = H(A) = 10, F(D) = H(D) = 8
(ii) We put [S] and [S D] on the closed queue, and at each step expand the open node with the smallest heuristic cost.

Fig. Ex. 3.14.1(d)

(iii) Cost : F(G) = H(G) = 0

Open queue : [S], [A D], [S D E], [S D E B F], [S D E B G]
Closed queue : [S], [A D], [S D E], [S D E B], [S D E B G]

Total cost = 8 + 6.5 + 3 + 0 = 17.5, and the optimal path is S → D → E → F → G.
(1) H(A) = 10, H(B) = 6, H(E) = 6.5, H(F) = 3, H(G) = 0, and the total cost = 10 + 6 + 6.5 + 3 = 25.5.

(2) OR, if we had chosen the path S → D → A → B → E → F → G, then H(D) = 4, H(A) = 3, H(B) = 4, H(E) = 5, H(F) = 4 and H(G) = 0. Then the total cost = 4 + 3 + 4 + 5 + 4 = 20.

(3) For any other path, the total cost would have been greater than 17.5.
∴ S → D → E → F → G is the optimal path, and the optimal cost is 17.5.

3.15 A* AND AO* SEARCH

GQ. Explain the A* algorithm.

1. A* Algorithm : In best-first search, we brought in a heuristic value called the evaluation function value. It is a value that estimates how far a particular node is from the goal. Apart from the evaluation function value, one can also bring in cost functions. Cost functions indicate how much resource (time, energy, money, etc.) has been spent in reaching a particular node from the start. While evaluation function values deal with the future, cost function values deal with the past. Since cost function values have really been expended, they are more concrete than evaluation function values.

2. If it is possible for one to obtain both the evaluation function values and the cost function values, the A* algorithm can be used. The basic principle is to sum the cost and the evaluation function value for a state to get its "goodness" worth, and to use this as a yardstick instead of the evaluation function value alone, as in best-first search. The sum of the evaluation function value and the cost along the path leading to that state is called the fitness number.

3. Consider Fig. 3.15.1 again, with the evaluation function values. Now associated with each node are three numbers: the evaluation function value, the cost function value and the fitness number.

4. The fitness number, as stated earlier, is the total of the evaluation function value and the cost function value. For example, consider node K; its fitness number is 20, which is obtained as follows :
(Evaluation function value of K) + (Cost involved from start node S to node K)
= 1 + (Cost from S to C + Cost from C to H + Cost from H to I + Cost from I to K)
= 1 + 6 + 5 + 7 + 1 = 20.
While best-first search uses the evaluation function value only for expanding the best node, A* uses the fitness number for its computation.

5. Fig. 3.15.1 shows a sample tree with the fitness numbers used for A* search.

Fig. 3.15.1 : Sample tree with fitness numbers used for A* search
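The fitness-number idea (f(n) = g(n) + h(n), cost so far plus evaluation value) can be sketched as follows. The weighted graph and the admissible h values below are invented for illustration; they are not the ones in Fig. 3.15.1:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand by fitness number f(n) = g(n) + h(n), where g is the
    cost spent so far and h the evaluation function value."""
    open_list = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(child, float('inf')):   # cheaper route found
                best_g[child] = g2
                heapq.heappush(open_list,
                               (g2 + h[child], g2, child, path + [child]))
    return None

# Hypothetical weighted graph with admissible h values (h never
# overestimates the true remaining cost)
wg = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('C', 5), ('G', 12)],
      'B': [('C', 2)], 'C': [('G', 3)]}
hv = {'S': 7, 'A': 6, 'B': 2, 'C': 1, 'G': 0}
print(a_star(wg, hv, 'S', 'G'))
```

Because h here is admissible (h(n) ≤ h*(n) for every node), the first time the goal is popped its g value is the optimal cost, which is exactly the proof sketched below.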
• A* is a graph traversal and path search algorithm, which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency. Thus A* is the best solution in many cases.

h(n) ≤ h*(n)   [h is admissible]
∴ g(n) + h(n) ≤ g(n) + h*(n)
∴ f(n) ≤ f(G) < f(G1)   ...From equation (ii)
∴ A* will never select G1 for expansion.

Remarks

(1) A fringe is a data structure used to store all the possible states (nodes) that one can go to from the current state.
(2) The main idea of the proof is that when A* finds a path, it has found a path whose estimate is lower than the estimate of any other possible path.

Syllabus topic : Local Search : Hill Climbing Search, Simulated Annealing Search

3.17 LOCAL SEARCH

GQ. Write a short note on : Local search algorithms. (MU - Q. 1(d), Dec 17, 5 Marks)

• In computer science, local search is a heuristic method for solving computationally hard optimisation problems.
• Local search starts from an initial solution and evolves that single solution, mostly into a better solution.
• At each solution in this path, it evaluates a number of moves on the solution and applies the most suitable move to take the step to the next solution. It continues this process for a large number of iterations until it is terminated.
• Local search uses a single search path and moves around to find a good feasible solution. Hence it is natural to implement.
• Local search algorithms are widely applied to numerous hard computational problems, including problems from artificial intelligence, mathematics, operations research, engineering and bioinformatics.

Some problems where local search is applied are :

(1) The vertex cover problem, in which a solution is a vertex cover of a graph, and the target is to find a solution with a minimal number of nodes.
(2) The travelling salesman problem, in which a solution is a cycle containing all nodes of the graph and the target is to minimise the total length of the cycle.
(3) The Hopfield neural networks problem, for which the target is finding a stable configuration in the Hopfield network.

3.18 HILL CLIMBING ALGORITHM

GQ. Briefly define the hill climbing algorithm.
UQ. Explain hill climbing and its drawbacks in detail.
UQ. Explain the hill-climbing algorithm with an example.

Definition : This algorithm, also called a discrete optimization algorithm, uses a simple heuristic function, viz., the amount of distance the node is from the goal. The ordering of choices is a heuristic measure of the remaining distance one has to traverse to reach the goal node.

In fact, there is practically no difference between hill-climbing and depth-first search, except that the children of the node that has been expanded are sorted by the remaining distance.

3.18.1 Algorithm for Hill-Climbing Procedure

► Step 1 : Put the initial node on a list START.
► Step 2 : If (START is empty) or (START = GOAL), then terminate search.
► Step 3 : Remove the first node from START. Call this node a.
► Step 4 : If (a = GOAL), then terminate search with success.
► Step 5 : Else if node a has successors, generate all of them. Find out how far they are from the goal node, sort them by the remaining distance from the goal, and add them at the beginning of START.
► Step 6 : Go to Step 2.
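Algorithm 3.18.1 can be sketched directly: it is a depth-first loop whose successors are sorted by the heuristic distance before being placed at the front of START. The state space and the heuristic distances below are assumed for illustration:

```python
def hill_climbing(graph, h, start, goal):
    """Algorithm 3.18.1: depth-first search whose successors are sorted
    by the heuristic distance h remaining to the goal (nearest first)."""
    start_list = [start]                    # Step 1
    while start_list:                       # Step 2
        node = start_list.pop(0)            # Step 3
        if node == goal:                    # Step 4
            return node
        children = sorted(graph.get(node, []), key=h)   # Step 5
        start_list = children + start_list  # add at the beginning of START
    return None                             # START empty: no goal found

# Hypothetical state space with heuristic distances to the goal G
space = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': []}
dist = {'S': 4, 'A': 2, 'B': 1, 'C': 3, 'G': 0}
print(hill_climbing(space, dist.get, 'S', 'G'))
```

From S the sorted children are [B, A] (B looks closer), so the search descends through B straight to G; with a misleading heuristic it would instead wander, which is where the local-maximum, plateau and ridge problems discussed below arise.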
GQ. Explain the simple hill climbing algorithm with its limitations.
GQ. Explain the steepest-ascent hill climbing algorithm with its limitations.

The simplest way to implement hill climbing is as follows :

1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in the current state :
(i) Select an operator that has not yet been applied to the current state and apply it to produce a new state.

Steepest-ascent hill climbing : A useful variation on simple hill climbing considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing, or gradient search. Notice that this contrasts with the basic method, in which the first state that is better than the current state is selected. The algorithm works as follows :
2. Loop until a solution is found or until a complete iteration produces no change to the current state :
... If it is not better, leave SUCC alone.
(c) If SUCC is better than the current state, then set the current state to SUCC.

2. To apply steepest-ascent hill climbing to the coloured blocks problem, we must consider all perturbations of the initial state and choose the best. For this problem, this is difficult since there are so many possible moves.

3. There is a trade-off between the time required to select a move (usually longer for steepest-ascent hill climbing) and the number of moves required to get to a solution (usually longer for basic hill climbing) that must be considered when deciding which method will work better for a particular problem.

UQ. What are the problems/frustrations that occur in hill climbing?

Problems associated with hill climbing are : (i) local maxima, (ii) plateaus, (iii) ridges.

1. Local maxima are particularly frustrating because they often occur almost within sight of a solution. In this case, they are called foothills.
2. A plateau is a flat area of the search space in which a whole set of neighbouring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.
3. A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas and that itself has a slope (which one would like to climb). But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse a ridge by single moves.

GQ. What are the ways of dealing with the local maxima, plateau and ridge problems which arise in hill climbing?

There are some ways of dealing with these problems, although these methods are by no means guaranteed :

• Some problem spaces are great for hill climbing and others are terrible.
• Random restart : Keep restarting the search from random locations until a goal is found.
• Problem reformulation : Reformulate the search space to eliminate these problematic features.

Hill climbing decides what to do next by local choice rather than by exhaustively exploring all the consequences. It shares with other local methods the advantage of being less combinatorially explosive than comparable global methods. But it also shares with other local methods a lack of a guarantee that it will be effective. Although it is true that the hill-climbing procedure itself looks only one move ahead and not any farther, that examination may in fact exploit an arbitrary amount of global information if that information is encoded in the heuristic function.

3.20 SIMULATED ANNEALING (SA)

UQ. Define the term simulated annealing. Explain simulated annealing with a suitable example.
gradually cooled until some solid state is reached. The goal of this process is to produce a minimal-energy final state. Thus this process is one of valley descending, in which the objective function is the energy level.
• Simulated annealing is an effective and general form of optimisation. Annealing refers to an analogy with thermodynamics, specifically with the way that metals cool and anneal. Simulated annealing uses the objective function of an optimisation problem instead of the energy of a material.

Uses of SA
SA is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimisation in a large search space for an optimisation problem.

3.20.1 Types and Use of Simulated Annealing
• Simulated annealing algorithms are essentially random search methods in which the new solutions, generated according to a sequence of probability distributions (for example, the Boltzmann distribution) or by a random procedure (e.g. a hit-and-run algorithm), may be accepted even if they do not lead to an improvement in the objective function.
• Simulated annealing is a process where the temperature is reduced slowly, starting from a random search at high temperature and eventually becoming purely greedy descent as it approaches zero temperature. S.A. maintains a current assignment of values to variables.
• Simulated annealing is useful in finding global optima in the presence of a large number of local optima. The parameter T is analogous to the temperature in an annealing system. At higher values of T, uphill moves are more likely to occur.
• S.A. will accept an increase in the cost function with some probability based on the annealing schedule. Simulated annealing is based on an analogy to a physical system which is first melted and then cooled, or annealed, into a low-energy state.

3.20.2 Simulated Annealing in Machine Learning
• S.A. is a technique that is used to find the best solution for either a global minimum or maximum without having to check every single possible solution that exists. This is very helpful when addressing massive optimisation problems like the one previously stated.
• S.A. is a stochastic global search optimisation algorithm. The algorithm is inspired by annealing in metallurgy, where metal is heated to a high temperature quickly, then cooled slowly, which increases its strength and makes it easier to work with. S.A. executes the search in the same way.
• Annealing is a heat treatment process that changes the physical (and sometimes also the chemical) properties of a material, to increase ductility and reduce hardness and so make the material more workable.
• If configured correctly, and under certain conditions, S.A. can guarantee finding the global optimum, whereas such a guarantee is available to Hill Climbing / Descent only if all local optima in the search space have equal scores / costs.
• S.A. has been widely used in the solution of optimisation problems. As known by many researchers, the global optimum cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used.

3.21 PARAMETERS FOR S.A.
1. The choice of parameters depends on the expected variation in the performance measure over the search space.
2. A good rule of thumb is that the initial temperature should be set to accept roughly 98% of the moves, and that the final temperature should be low enough that the solution does not improve much, if at all. To improve simulated annealing we have to do the following :
(i) Improve the accuracy.
(ii) Alter the parameters of the algorithm.
(iii) Run a meta-optimisation over the parameters of the problem.
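The acceptance criterion and cooling schedule described above can be sketched in a few lines. This is a minimal illustration, not a fixed implementation: the parameter names (t0, alpha, t_min) and the toy objective are assumptions chosen for the example.

```python
import math
import random

def simulated_annealing(cost, neighbour, start, t0=100.0, alpha=0.95, t_min=1e-3):
    """Minimise `cost` starting from `start`, with geometric cooling."""
    current, best, t = start, start, t0
    while t > t_min:
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept uphill moves with probability
        # exp(-delta / t), the Boltzmann-style criterion mentioned above.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):   # remember the best state seen so far
            best = current
        t *= alpha                       # cool slowly (geometric schedule)
    return best

# Toy problem: minimise f(x) = (x - 3)^2 starting from a random point.
f = lambda x: (x - 3) ** 2
step = lambda x: x + random.uniform(-1.0, 1.0)
result = simulated_annealing(f, step, start=random.uniform(-10, 10))
```

At high temperature almost every move is accepted (random search); as t falls toward t_min the algorithm behaves like greedy descent, matching the description above.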
(i) Population : The population in a GA is analogous to a population of human beings, except that instead of human beings we have candidate solutions.
(ii) Chromosomes : A chromosome is regarded as one solution to the given problem.
(iii) Gene : A gene is one element position (or part) of a chromosome.
(iv) Allele : An allele is the value taken by a gene for a particular chromosome.

[Fig. 3.22.1 : Population (a set of chromosomes), with a gene and an allele marked]
[Fig. 3.22.2 : Encoding]
(v) Genotype : Genotype is the population in the computation space. In the computation space, solutions are represented in a way that can be easily understood and manipulated using a computing system.
(vi) Phenotype : Phenotype is the population in the actual real-world solution space, in which solutions are represented in the way they appear in real-world situations.
(vii) Decoding and Encoding : Decoding is the process of transforming a solution from the genotype space to the phenotype space, while encoding is the process of transforming from the phenotype space to the genotype space. Decoding has to be fast, as it is carried out repeatedly in a GA during the fitness value calculation.
Remark : For simple problems the phenotype and genotype spaces are the same.
Fitness Function : A fitness function is a function which takes a candidate solution as input and produces the suitability of the solution as output. In some cases the fitness function and the objective function are the same; depending on the problem, they may be different. The fitness function is the function we want to optimise.
• The solutions may be 'seeded' in areas where optimal solutions are likely to be found.
6. The genetic algorithm utilizes payoff (objective function) information, not derivatives.
7. The genetic algorithm works well on mixed discrete functions.
8. The genetic algorithm concept is modular, separate from the application.
9. In the genetic algorithm concept, the answer gets better with time.
10. The genetic algorithm concept is inherently parallel and easily distributed.
11. The genetic algorithms work on the chromosome, which is an encoded version of the potential solutions' parameters, rather than the parameters themselves.
12. The genetic algorithms use a fitness score, which is obtained from objective functions, without other derivative or auxiliary information.

3.22.8 Limitations of Genetic Algorithm
1. The Genetic Algorithms might be costly in computational terms, since the evaluation of each individual requires the training of a model.
2. These algorithms can take a long time to converge, since they have a stochastic nature.
3. The language used to represent candidate solutions must be robust. It must be able to endure random changes such that fatal errors do not occur.
4. A wrong choice of the fitness function may lead to significant consequences.
5. A small population size will not give the genetic algorithm enough solutions to produce precise results.
6. A high frequency of genetic change or a poor selection scheme will result in disrupting the beneficial schema.
7. Though Genetic algorithms can find exact solutions to analytical sorts of problems, traditional analytic techniques can find the same solutions in a shorter time with less computation.

3.22.9 Applications of Genetic Algorithm
1. Genetic Algorithm in Robotics
• As we know, Robotics is one of the most discussed fields in the computer industry today.
• It is used in various industries in order to increase profitability, efficiency and accuracy.
• As the environment in which robots work changes with time, it becomes very tough for developers to figure out every possible behaviour of the robot in order to cope with the changes.
• This is the place where the Genetic Algorithm plays a vital role.
• Hence a suitable method is required which will lead the robot to its objective and will make it adaptive to new situations as it encounters them.
• Genetic Algorithms are adaptive search techniques that are used to learn high-performance knowledge structures.
2. Genetic Algorithm in Financial Planning
• Genetic algorithms are extremely efficient for financial modelling applications, as they are driven by adjustments that can be used to improve the efficiency of predictions and return over the benchmark set.
• In addition, these methods are robust, permitting a greater range of extensions and constraints which may not be accommodated in traditional techniques.

Syllabus topic : Game Playing, Adversarial Search Techniques

3.23 ADVERSARIAL SEARCH
• Adversarial search is a search when there is an 'enemy' or 'opponent' changing the state of the problem at every step in a direction which we do not want.
• Each agent needs to consider the action of the other agent and the effect of that action on its own performance. So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as Games.
• Examples : Chess, business, trading, war. You change the state, but then you do not control the next state. The opponent will change the next state in a way that is (1) unpredictable and (2) hostile to you. You get to change only every alternate state.
In adversarial search we examine the problem which arises when we try to plan ahead of the world while other agents are planning against us.
(i) We study situations where more than one agent is searching for the solution in the same search space; this situation usually occurs in game playing.
(ii) The environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them. Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
(iii) Searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as 'Games'.
(iv) Games are modelled as a search problem with a heuristic evaluation function, and these are the two main factors which help to model and solve games in AI.

3.23.1 Types of Games in AI
(i) Perfect information : Agents have all the information about the game, and they can also see each other's moves. Examples : Chess, Go etc.
(ii) Imperfect information : If in a game agents do not have all the information about the game and are not aware of what is going on, such games are called games with imperfect information, such as Battleship, bridge etc.
(iii) Deterministic games : Deterministic games follow a strict pattern and set of rules for the games. There is no randomness associated with them. Examples : Chess, Tic-Tac-Toe etc.
(iv) Non-deterministic games : These are the games which have various unpredictable events and a factor of chance or luck. These are random, and each action's response is not fixed. Such games are also called stochastic games. Example : Poker etc.
(v) Zero-sum games : Zero-sum games are adversarial searches which involve pure competition. In a zero-sum game each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of another agent. One player of the game tries to maximise one single value, while the other player tries to minimise it. Each move by one player in the game is called a 'ply'. Chess and Tic-Tac-Toe are examples of a zero-sum game.

Zero-sum game : Embedded thinking
The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out :
(i) What to do
(ii) How to decide the move
(iii) He needs to think about his opponent as well
(iv) The opponent also thinks what to do.
Each of the players is trying to find out the response of the opponent to their actions. This requires embedded thinking or backward reasoning to solve game problems in AI.

Formalization of the problem
A game can be defined as a type of search in AI which can be formalised with the following elements :
(i) Initial state : It specifies how the game is set up at the start.
(ii) PLAYER (s) : It specifies which player has the move in a state.
(iii) ACTIONS (s) : It returns the set of legal moves in a state.
1. Adversarial search is a search where we examine the problem which arises when we try to plan ahead of the world while other agents are planning against us. Searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as Games.
2. To find the optimal solution, heuristic techniques are used. Games are modelled as a search problem with a heuristic evaluation function, and these are the two main factors which help to model games in AI. In adversarial search, the result depends on the players, who will decide the result of the game.
3. Pruning is a technique which allows ignoring the unwanted portions of a search tree which make no difference in its final result.
Perfect information : A game with perfect information is one in which the agents can look at the complete board. Examples are chess, Go etc.
To play a game, we use a game tree to know all the possible choices and to pick the best one out. There are the following elements of game-playing :
• S0 : It is the initial state from where a game begins.
• PLAYER (s) : It defines which player is having the current turn.
• TERMINAL-TEST (s) : It defines that the game has ended, and returns true.
• UTILITY (s, p) : It defines the final value of the game, also known as the Objective function or Payoff function; the price which the winner will get is :
(+1) : If the PLAYER wins.
(-1) : If the PLAYER loses.
(0) : If there is a draw between the PLAYERS.
For example, in chess and tic-tac-toe, we have two players.
Let us understand the working of these elements with the help of the game tree designed in Fig. 3.25.1 for tic-tac-toe. Here, a node represents a game state and the edges represent the moves taken by the players.
[Fig. 3.25.1 : Game tree for tic-tac-toe, alternating MAX (x) and MIN (o) levels down to the terminal states]
• PLAYER (s) : There are two players, MAX and MIN. MAX begins the game by picking the best move.
• RESULT (s, a) : The moves made by MIN and MAX will decide the outcome of the game.
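The game elements above (initial state, PLAYER, ACTIONS, RESULT, TERMINAL-TEST, UTILITY) can be sketched for a tiny two-player zero-sum game. The game chosen here (a Nim variant) and every function name are illustrative assumptions, not from the text.

```python
# Toy game: Nim with 7 tokens; a move removes 1 or 2 tokens, and the player
# who takes the last token wins. State = (tokens_left, player_to_move).
def s0():             return (7, "MAX")                 # initial state
def player(s):        return s[1]                       # whose move it is
def actions(s):       return [a for a in (1, 2) if a <= s[0]]
def result(s, a):     return (s[0] - a, "MIN" if s[1] == "MAX" else "MAX")
def terminal_test(s): return s[0] == 0                  # true when game over
def utility(s):
    # At a terminal state the named player did NOT make the last move, so if
    # it is MAX's "turn" on an empty board, MIN took the last token and won.
    return -1 if s[1] == "MAX" else 1

def minimax_value(s):
    if terminal_test(s):
        return utility(s)
    values = [minimax_value(result(s, a)) for a in actions(s)]
    return max(values) if player(s) == "MAX" else min(values)

value = minimax_value(s0())   # +1 means the first player (MAX) can force a win
```

Because 7 is not a multiple of 3, the first player can always leave a multiple of 3 tokens for the opponent, so the game value from S0 is +1.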
for artificial intelligence in scenarios that do not require real-time decision making and have a relatively low number of possible choices per play.
• The most commonly cited example is chess, but game trees are applicable to many situations. They are generally used in board games to determine the best possible move. For the purpose of this section, Tic-Tac-Toe will be used as an example.
• The idea is to start at the current board position and check all the possible moves the computer can make. Then, from each of those possible moves, it looks at what moves the opponent may make, and then back at the computer's possible replies. Ideally, the computer will flip back and forth, making moves for itself and its opponent, until the game's completion. It will do this for every possible outcome, effectively playing thousands (often more) of games. From the winners and losers of these games, it tries to determine the outcome that gives it the best chance of success.

GQ. How is the AI technique used to solve the tic-tac-toe problem ?

Heuristic function for the Tic-Tac-Toe problem
The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a 3 x 3 matrix. The game is played by 2 players and either of them can start. Each of the two players is assigned a unique symbol (generally O and X). Each player alternately gets a turn to make a move. Making a move is compulsory and cannot be deferred. In each move a player places the symbol assigned to him/her in a blank cell.
Seven classes of moves have been designed using the available heuristics. Each class of moves represents a set of functionally cohesive moves meant to achieve a certain objective during a game. These classes of moves are defined and their roles in playing the game are discussed below :
1. Prioritized selection (PS) : The PS class of moves selects the blank cell with the maximum priority. If there exists more than one cell with the maximum priority, then any one of them can be selected. The PS class of moves makes sure that the player has control over the most important cells on the board.
2. Motion (M) : The M class of moves finds all tracks with only one cell filled with the symbol assigned to the player and the other two cells blank. Then one of these tracks with the highest priority is chosen. After that, the blank cell with the higher priority in the chosen track is selected. The M class of moves makes sure that the player continues filling a track in which there is still a chance to win.
3. Definitive offense (DO) : This class of moves finds a track with exactly two cells filled with the symbol assigned to the player and the third cell blank. This blank cell is selected. This move is meant to provide an immediate win to the player.
4. Definitive defense (DD) : This class of moves finds a track with exactly two cells filled with the symbol assigned to the opponent player and the third cell blank. This blank cell is selected. It is meant to prevent the player from an immediate loss. If this move is not used, then the opponent definitely gets a chance to win in the subsequent move.
5. Tentative offense (TO) : This class of moves finds all pairs of intersecting tracks in which both tracks have exactly one cell filled with the symbol assigned to the player and the other two cells, including the common one, blank. All such common cells of the intersecting tracks are identified and the one with the maximum priority is selected. This move tries to lay the foundations of victory simultaneously on two tracks. If the TO class of moves can be applied, then the player can win in a subsequent move.
6. Tentative defense (TD) : This class of moves finds all pairs of intersecting tracks in which both tracks have exactly one cell filled with the symbol assigned to the opponent player and the other two cells, including the common one, blank. All such common cells of the intersecting tracks are identified and the one with the maximum
priority is selected. It tries to undo the effect of the tentative offense class of moves applied by the opponent. If this move is not applied, then the player can lose in a subsequent move.
7. Diagonal correction (DC) : The above six classes of moves may be insufficient to prevent a loss for the player moving second if either of the two diagonal tracks is filled in the first three moves. This may happen even if the losing player controls the more important cells. This class is used to save the player from losing in such conditions.
In Tic-Tac-Toe, players alternate putting marks in a 3 x 3 array, one mark (X) and the other mark (O).
Let the evaluation function e (P) of a position P be given simply by the following, if P is not a winning position for either player :
e (P) = (number of complete rows, columns, or diagonals that are still open for X) - (number of complete rows, columns, or diagonals that are still open for O)
If P is a win for X, e (P) = infinity (a very large value)
If P is a win for O, e (P) = - infinity
Thus, for the position P shown, we have e (P) = 6 - 4 = 2.
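The evaluation function e (P) defined above can be sketched directly: count the lines (rows, columns, diagonals) still open for X and subtract those still open for O. Board encoding and helper names below are illustrative assumptions.

```python
# The 8 winning lines of the 3x3 board, as lists of (row, col) cells.
LINES = ([[(r, c) for c in range(3)] for r in range(3)] +               # rows
         [[(r, c) for r in range(3)] for c in range(3)] +               # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals

def e(board):
    """e(P) = lines still open for X minus lines still open for O."""
    def open_for(p):
        opponent = 'O' if p == 'X' else 'X'
        # A line is "open" for p if the opponent has no mark on it.
        return sum(all(board[r][c] != opponent for r, c in line) for line in LINES)
    return open_for('X') - open_for('O')

board = [['X', ' ', ' '],
         [' ', 'O', ' '],
         [' ', ' ', ' ']]
value = e(board)   # 4 lines open for X, 5 open for O, so e(P) = -1 here
```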
[Fig. 3.26.1 : First state of search in Tic-Tac-Toe, showing the static evaluation value for each of Max's possible first moves]
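The "definitive offense" and "definitive defense" classes described earlier both reduce to the same check: find a track with two cells holding a given symbol and the third cell blank. A minimal sketch (board encoding and names are illustrative assumptions):

```python
# The 8 winning tracks of the 3x3 board, as lists of (row, col) cells.
TRACKS = ([[(r, c) for c in range(3)] for r in range(3)] +               # rows
          [[(r, c) for r in range(3)] for c in range(3)] +               # columns
          [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals

def winning_cell(board, symbol):
    """Return the blank cell completing a track for `symbol`, else None."""
    for track in TRACKS:
        cells = [board[r][c] for r, c in track]
        if cells.count(symbol) == 2 and cells.count(' ') == 1:
            return track[cells.index(' ')]
    return None

board = [['X', 'X', ' '],
         ['O', 'O', ' '],
         [' ', ' ', ' ']]
x_move = winning_cell(board, 'X')   # definitive offense for X: (0, 2)
o_threat = winning_cell(board, 'O') # definitive defense target: (1, 2)
```

Applied for the player's own symbol this implements DO (take the win); applied for the opponent's symbol it implements DD (block the loss).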
3.26.1 Tic-Tac-Toe Problem

GQ. How is the AI technique used to solve the tic-tac-toe problem ?

[Fig. 3.26.2 : A game tree expanded by two levels and the associated static evaluation function values]

1. As mentioned above, game trees are rarely used in real-time scenarios (when the computer isn't given very much time to think).
2. The method requires a lot of processing by the computer, and that takes time.
3. For the above reason (and others) they work best in turn-based games.
4. They require complete knowledge of how to move.
5. Games with uncertainty generally do not mix well with game trees.
6. They are ineffective at accurately ascertaining the best choices in scenarios with many possible choices.

Here, the maximizer has to play first, followed by the minimizer. The minimizer assigns - 6 at B, which is passed back to A; this is then replaced by 3, the value passed by C, as A has the maximizer move.
8. If A moves to C, then the minimizer will move to K (static evaluation function value = 0), which is the minimum of 3, + 5, 7 and 0. So the value 0 is backed up at C. On similar lines, the value that is backed up at D is 2. The tree with the backed-up values is given in Fig. 3.26.3.

[Fig. 3.26.3 : The game tree with backed-up values (leaf values - 2, + 3, + 5, + 7, 0, + 4, + 3)]
The plausible move generator generates three children for that move, and the static evaluation function generator assigns the values given along with each of the states.
2. It is assumed that the static evaluation function generator returns a value from - 20 to + 20, wherein a value of + 20 indicates a win for the maximizer and a value of - 20 a win for the minimizer. A value of 0 indicates a tie or draw.
3. It is also assumed that the maximizer makes the first move. (This is not essential; even the minimizer can make the first move.) The maximizer always tries to go to a position where the static evaluation function value is the maximum positive value.

[Fig. 3.27.2 : Initial state of the game]

4. The maximizer, being the player to make the first move, will move to node D because the static evaluation function value is the maximum there.
7. If A moves to B, it is the minimizer who gets to play next. The minimizer always tries to give the minimum benefit to the other player and hence will move to G (static evaluation function value - 6). This value is backed up at B.

3.27.1 Properties of Mini-Max Algorithm
1. Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory.
2. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
3. Mini-max algorithm uses recursion to search through the game-tree.
4. Mini-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, go and various two-player games.
5. This algorithm computes the mini-max decision for the current state.
► Step III : Similarly, for the same layer, we evaluate max {8, 2} = 8.

[Fig. Ex. 3.27.1(a)]

► Step IV : Here there are only 3 layers, so we immediately reach the root. At the topmost point, min has to choose the minimum value. So, we evaluate min {4, 5, 8} = 4. The best opening move for min is the left node.
Note that this move is called the min-max decision, as it maximises the utility under the assumption that the opponent is playing optimally to minimise it.
Min-max decision = min {max (4, 3, 1), max (5, 2), max (8, 2)} = min {4, 5, 8} = 4
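The backing-up of values computed in Steps III and IV above can be sketched as a short recursion over an explicit tree. The tree literal mirrors the example's leaf values; the function names are illustrative.

```python
def minimax(node, is_max):
    """Back up values through a game tree given as nested lists of leaf values."""
    if isinstance(node, (int, float)):   # terminal node: static evaluation value
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Root is a MIN layer over three MAX subtrees, exactly as in
# min {max (4, 3, 1), max (5, 2), max (8, 2)} = min {4, 5, 8} = 4.
tree = [[4, 3, 1], [5, 2], [8, 2]]
value = minimax(tree, is_max=False)   # backed-up value at the root: 4
```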
Module 4
CHAPTER 4 : Knowledge and Reasoning

Syllabus
Definition and importance of Knowledge, Issues in Knowledge Representation, Knowledge Representation Systems, Properties of Knowledge Representation Systems. Propositional Logic (PL) : Syntax, Semantics, Formal logic-connectives, truth tables, tautology, validity, well-formed-formula, Introduction to logic programming (PROLOG). Predicate Logic : FOPL, Syntax, Semantics, Quantification, Inference rules in FOPL, Forward Chaining, Backward Chaining and Resolution in FOPL.
4.6 Conditional Connectives or Implication
 4.6.1 Examples
 4.6.2 Conditional Statements and Variations
 4.6.3 Advantages and Disadvantages of Propositional Logic
 4.6.4 Theorem of Contra-Positive of the Statements
 4.6.5 Biconditional : p <-> q
4.7 Arguments
 4.7.1 Theorem on Tautology
 4.7.2 Fundamental Principle of Logical Reasoning
 4.7.3 Verification of Law of Syllogism
 4.9.3 Examples on Logical Equivalency
4.10 Normal Forms
 4.10.1 Disjunctive Normal Form
 4.10.2 Examples of DNF
 4.10.3 Conjunctive Normal Form (CNF)
 4.10.3(A) Conversion from PL to CNF
  UQ. Explain the steps involved in converting the propositional logic statement into CNF with a suitable example.
  UQ. Convert the following propositional logic statement into CNF : A -> (I ~ C)
4.1.1 Knowledge Progression

GQ. What is knowledge ?

• Definition : Knowledge is a progression that starts with data, which is of limited utility. By organizing and analyzing the data, we understand what the data means, and this becomes information.
• The interpretation or evaluation of information yields knowledge. An understanding of the principles embodied within the knowledge is wisdom.

Data -> (Organizing, Analyzing) -> Information -> (Interpretation, Evaluation) -> Knowledge -> (Understanding of Principles) -> Wisdom
4.1.4 Levels of Knowledge Representation

GQ. Write a note on "knowledge representation". Or : What are the different levels of knowledge representation ? Or : What are the methods of knowledge representation ?

Consider the sequence of numbers 1 1 2 3 4 7. A change of base in the number from 10 to 2 transforms the number to 011011011011011011.

4.1.5 Various Levels of Knowledge-based Agent
4.1.6 Knowledge Level
(a) Knowledge-based agents are those agents which have the capability of maintaining an internal state of knowledge, reasoning over that knowledge, updating their knowledge after observations, and taking actions as per the knowledge. These agents can represent the world with some formal representation and act intelligently.
(b) Knowledge-based agents are composed of two main parts :
(I) Knowledge-base and
(II) Inference system.
A knowledge-based agent must be able to do the following :
(i) An agent should be able to represent states, actions, etc.
(ii) An agent should be able to incorporate new percepts.
(iii) An agent can update the internal representation of the world.
(iv) An agent can deduce the internal representation of the world.
(v) An agent can deduce appropriate actions.

(I) Knowledge-base
A knowledge-base is required for updating knowledge, so that an agent can learn with experience and take action as per the knowledge.

(II) Inference system
Inference means deriving new sentences from old ones.
• The inference system allows us to add a new sentence to the knowledge base.
• The inference system applies logical rules to the KB to deduce new information.
• The inference system generates new facts so that the agent can update the KB.
• An inference system works mainly in two ways, which are given as :
(i) Forward chaining (ii) Backward chaining

Operations performed by KBA
Following are the three operations which are performed by a KBA in order to show intelligent behaviour :
1. Tell : This operation tells the knowledge base what it perceives from the environment.
2. Ask : This operation asks the knowledge base what action it should perform.
3. Perform : It performs the selected action.

The architecture of a knowledge-based agent is shown in Fig. 4.1.3.

[Fig. 4.1.3 : Architecture of a knowledge-based agent (input from the environment, inference engine, learning element updating the knowledge base, output)]

Fig. 4.1.3 represents a generalised architecture for a knowledge-based agent (KBA). The KBA takes input from the environment by perceiving the environment. The input is taken by the inference engine of the agent, which communicates with the KB. The learning element of the KBA regularly updates the KB by learning new knowledge.

4.1.7 Logical Level
• At this level, we observe how the representation of knowledge is stored.
• At this level, sentences are encoded into different logics. At the logical level, an encoding of knowledge into logical sentences occurs.

4.1.8 Implementation Level
(1) This is the physical representation of logic and knowledge. At the implementation level, the agent performs actions as per the logical and knowledge levels.
(2) At this level, an automated taxi agent would actually implement its knowledge and logic so that it can reach the destination.
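The Tell / Ask operations and the forward-chaining inference described above can be sketched in a few lines. This is an illustrative toy, not a fixed API: the class name, the rule format, and the taxi-style facts are all assumptions.

```python
class KnowledgeBasedAgent:
    def __init__(self):
        self.kb = set()      # knowledge base: a set of known facts
        self.rules = []      # rules as (set_of_premises, conclusion) pairs

    def tell(self, fact):
        """TELL: store what the agent perceives from the environment."""
        self.kb.add(fact)

    def infer(self):
        """Forward chaining: repeatedly apply rules whose premises hold."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.kb and conclusion not in self.kb:
                    self.kb.add(conclusion)
                    changed = True

    def ask(self, query):
        """ASK: is the query entailed by the KB after inference?"""
        self.infer()
        return query in self.kb

agent = KnowledgeBasedAgent()
agent.rules.append(({"obstacle_ahead"}, "brake"))   # obstacle_ahead -> brake
agent.tell("obstacle_ahead")                        # percept from environment
action_needed = agent.ask("brake")                  # True: KBA derives the action
```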
fifteen years has shown this to be the single most time-consuming and costly part of the building process.
(8) This has led to the development of some sophisticated acquisition tools, including a variety of intelligent editors; editors which provide much assistance to the knowledge engineers and system users.
(9) The acquisition problem has also stimulated much research in machine learning systems, that is, systems which can learn new knowledge autonomously without the aid of humans.
(10) Since knowledge-based systems depend on large quantities of high-quality knowledge for their success, it is essential that better methods of acquisition, refinement, and validation be developed.
(11) The ultimate goal is to develop techniques that permit systems to learn new knowledge autonomously and continually improve the quality of the knowledge they possess.

A knowledge representation system should provide ways of representing complex knowledge and should possess the following characteristics :
1. The representation scheme should have a well-defined syntax and semantics. This helps in representing various kinds of knowledge.
2. The knowledge representation scheme should have a good expressive capacity. A rich expressive capability will catalyze the inference mechanism in its reasoning process.
3. From the computer system point of view, the representation must be efficient. By this we mean that it should use only limited resources without compromising on the expressive power.

4.1.12(A) Knowledge Representation Schemes
The various knowledge representation schemes are as follows :
(i) Semantic networks (ii) Frames
(iii) Conceptual dependency (iv) Scripts
(MS-126) Tech-Neo Publications...A SACHIN SHAH Venture
For example : a set of logical assertions can be combined with a resolution theorem prover to give a complete program for solving problems. There is a different way, though, in which logical assertions can be viewed, namely as a program, rather than data to a program.
For the reason that there is more than one value that satisfies the predicate, but only one value is needed, the answer to the question will depend on the order in which the assertions are examined during the search for a response.
Procedural Knowledge vs. Declarative Knowledge

Sr. | Procedural Knowledge | Declarative Knowledge
3.  | Possible to achieve faster usage. | Based on knowledge used in the process of system design.
4.  | Used when we have to achieve a particular result. | A knowledge format that may be manipulated and analyzed.
5.  | Knowing how to do something. | It is knowledge about something.
6.  | Simple data types can be used. | Large data types can be used.
7.  | Followed in C++ and Cobol. | Followed by SQL.
8.  | In procedural programming, even a simple task needs a program. | In SQL, a simple task needs one line of code, i.e. one SQL statement.
9.  | The programmer should understand the execution. | The programmer need not interact with the execution.
10. | Initially faster, but later it can be slow. | Initially slower, but possibly faster later.
11. | Works on the interpreter of the language. | Works on the data engine, with the DBMS.
12. | Example : If Manmohan or Monu is older. | Example : Manmohan is older than Monu.
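The contrast in the table can be made concrete with a small program. Here the same question ("who is the oldest person?") is answered procedurally, by spelling out each step, and declaratively, by stating what is wanted in SQL and letting the engine decide how to compute it. The names and data below are invented for the example; the SQL runs on Python's standard `sqlite3` module.

```python
# Illustrative contrast (not from the textbook): procedural Python
# vs. declarative SQL via the standard-library sqlite3 module.
import sqlite3

people = [("Manmohan", 62), ("Monu", 35)]   # made-up sample data

# Procedural: spell out HOW to find the oldest person, step by step.
oldest = people[0]
for name, age in people[1:]:
    if age > oldest[1]:
        oldest = (name, age)

# Declarative: state WHAT is wanted; the engine decides how to compute it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?)", people)
row = conn.execute(
    "SELECT name FROM person ORDER BY age DESC LIMIT 1").fetchone()

print(oldest[0], row[0])   # both approaches name the same person
```

Note how the SQL version is one statement (row 8 of the table), while the procedural version must manage the loop and the comparison itself.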
Syllabus topic : Propositional Logic (PL)

4.2 PROPOSITIONAL LOGIC AND FIRST ORDER LOGIC

4.2.1 Logic
Logic is the discipline that deals with the methods of reasoning. On an elementary level, logic provides rules and techniques for determining whether a given argument is valid.

Commonsense logic
It is deriving conclusions from personal experience or knowledge. A conclusion makes sense, so it is right, or something doesn't make sense, so it is wrong.
Let us consider one classic example : A bear walked one km due south, then turned to the left and walked one km due East. Then it turned to the left again, walked one km due North, and arrived back at its starting point. What was the colour of the bear?
Now, actually the bear walked 3 sides of a square, like in Fig. 4.2.1. But since it ended up where it started from (Fig. 4.2.2), the only two places in the world where this can happen are either the North-pole or the South-pole. The South-pole is not possible, since it is impossible to travel south from the south pole. So the bear was at the North pole, i.e. it was a polar bear. So the bear was white.
[Fig. 4.2.1 and Fig. 4.2.2 : The bear's path, three sides of a square that returns to its starting point]
This problem is solved by knowledge, i.e., it requires a logical type of mind to apply that knowledge to a particular problem. It can also be called deductive logic, which does not permit conclusions outside the facts available.
Artificial Intelligence (MU-AI & DS / Electronics) (Knowledge and Reasoning)...Page No. (4-11)
Logical reasoning
It is used in mathematics to prove theorems, in computer science to verify the correctness of programs and to prove theorems, and in our everyday lives to solve a multitude of problems.

Illustrative Ex. 4.2.1 : Which of the following are statements?
(i) The earth is round.
(ii) 3 + 4 = 7
(iii) 4 + x = 9
(iv) Do you speak Gujarathi ?
(v) Take two aspirins.
(vi) The temperature on the surface of the planet Mars is 500°F.
(vii) The sun will come out tomorrow.
Soln. :
(i) and (ii) are statements which are true.
(iii) It is not a statement, since whether it is true or false depends on the value of x. But we can say that it is a declarative sentence. If we put x = 5, it becomes a true statement; if we take a value of x ≠ 5, it becomes false. Such statements are open statements. Thus, if a mathematical statement is neither true nor false, it is called an open statement.
(iv) It is a question, not a statement.
(v) It is a command, but not a statement.
(vi) It is a statement, because in principle we can determine if it is true or false.
(vii) It is a statement, since it is true or false but not both.

4.2.2 Logic Language
- One of the basic difficulties in developing an approach to logic is the limitation of ordinary language when it comes to presenting statements and conclusions. [Exactly the same problem arises with computers. You cannot instruct computers in ordinary language; instruction has to be consistent with the input/output capabilities of the computer.]
- Our aim is now very simple : to give each statement an exact meaning and manipulate such a statement in a logical manner, determined by the rules and theorems. Here, we discuss a few of the basic ideas, i.e. rules and theorems.

4.2.3 Syntax
Syntax is the order or arrangement of words and phrases to form proper sentences. The most basic syntax follows the formula :
subject + verb + direct object
For example : Ramesh hits the ball.
4.2.4 Semantics
'Semantics' means meanings in language. Semantic technology simulates how people understand language and process information. By approaching the automatic understanding of meanings, semantic technology overcomes the limits of other technologies.

4.2.6 Compound Propositions
Propositions composed of sub-propositions are called compound propositions. A proposition is said to be primitive if it cannot be broken down into simpler propositions; that is, if it is not composite.
Compound propositions or statements are composed of various logical connectives.
(i) Conjunction : (∧)
Definition : If p and q are true, then p ∧ q is also true; otherwise p ∧ q is false.
We prepare the table for the truth-value of p ∧ q.
Table 4.3.1 : Truth table for Conjunction

p | q | p ∧ q
T | T | T
T | F | F
F | T | F
F | F | F

(i) In the first row, if p is true and q is true, then p ∧ q is true.
(ii) In the second row, if p is true and q is false, then p ∧ q is false. And so on.
Remark : p ∧ q is true only when p and q both are true.

(ii) Disjunction : (∨)
p ∨ q is true when one of p or q (or both) is true; it is false when both p and q are false.
The truth table of p ∨ q :

p | q | p ∨ q
T | T | T
T | F | T
F | T | T
F | F | F

(iii) Negation : (¬)
If p is any statement, then the negation of p is denoted by ¬p and is read as 'not p'.
If p is true then ¬p is false. If p is false then ¬p is true.
Table 4.3.3 : Truth table for Negation

p | ¬p
T | F
F | T

Remark : Negation is also denoted by ~. Thus, if p is true, then ~p is false.
Syllabus topic : truth tables, tautology, validity, well-formed formula

Truth-table : The truth-value of a proposition depends upon the truth values of its variables. (Thus the truth value of a proposition is known once the truth values of its variables are known.) This relationship can be shown through a truth-table.

4.4.1 Method of constructing the Truth-table of a Proposition
Step (i) : The first columns of the table are for the variables p, q, ...
Step (ii) : Allow rows for all possible combinations of T and F for these variables. (For 2 variables, 2² = 4 rows are necessary; for 3 variables, 2³ = 8 rows are necessary; and, in general, for n variables, 2ⁿ rows are required.)
Step (iii) : There is a column for each "elementary" stage of the construction of the truth-value of the proposition.
Step (iv) : The truth value of each step is determined from the previous stages by the definitions of the connectives ∧, ∨, ¬.
Step (v) : Finally, in the last column, we obtain the truth value of the proposition.

4.4.2 Examples Based on the Proposition
Ex. 4.4.1 : Find the truth-table of the proposition ¬(p ∧ ¬q).
Soln. :

p | q | ¬q | p ∧ ¬q | ¬(p ∧ ¬q)
T | T | F | F | T
T | F | T | T | F
F | T | F | F | T
F | F | T | F | T

The truth table of the proposition consists of the columns under the variables and the column under the proposition.

Ex. 4.4.2 : Find the truth-table of ¬p ∧ q.
Soln. : We construct the table :

p | q | ¬p | ¬p ∧ q
T | T | F | F
T | F | F | F
F | T | T | T
F | F | T | F

(i) For the two variables p, q we choose the truth-values in the first two columns as shown.
(ii) We find the truth value of ¬p using negation.
(iii) We find the truth value of ¬p ∧ q using the conjunction ∧.

UEx. 4.4.3 (MU - Q. 5(b), May 19, 10 Marks)
Represent the following sentences in first-order logic :
1. John likes all kinds of food.
   ∀x : food(x) → likes(John, x)
2. Apples are food.
   food(apple)
3. Chicken is food.
   food(chicken)
4. Anything anyone eats and isn't killed by is food.
   ∀x : (∃y : eats(y, x) ∧ ¬killed_by(y, x)) → food(x)
Soln. : The remaining sentences, in FOL :
∀x : eats(Bill, x) → eats(Sue, x)
∀x : ∀y : alive(x) → ¬killed_by(x, y)
In clausal form (for resolution) : ¬food(Peanuts) ; ¬eats(a, b) ∨ killed_by(a, b) ∨ food(b)
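Steps (i)-(v) of Sec. 4.4.1 can be automated directly. The sketch below is our own helper, not part of the syllabus: `itertools.product` enumerates the 2ⁿ rows of Step (ii), and a Python lambda plays the role of the proposition. It reproduces the final column of Ex. 4.4.1.

```python
# A short truth-table generator following Steps (i)-(v) above.
# The helper name and formula encoding are our own conventions.
from itertools import product

def truth_table(variables, formula):
    """Return (assignment, value) rows for every combination of T/F."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))   # e.g. {'p': True, 'q': False}
        rows.append((values, formula(env)))
    return rows

# Ex. 4.4.1 : the proposition ¬(p ∧ ¬q)
table = truth_table(["p", "q"], lambda e: not (e["p"] and not e["q"]))
for values, result in table:
    print(values, result)
# Final column, top to bottom: T, F, T, T -- as in the table above.
```

The same function handles three or more variables; the row count grows as 2ⁿ, exactly as Step (ii) states.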
4.6.1 Examples
Ex. 4.6.1 : Determine the truth value of the following statements :
(i) If Bombay is in India, then 4 + 5 = 9.
(ii) If Bombay is in India, then 4 + 5 = 3.
Soln. :
(i) Let p = Bombay is in India, q = 4 + 5 = 9.
Since p is true and q is true, p → q is true.
(ii) Let p = Bombay is in India, q = 4 + 5 = 3.
Since p is true but q is false, p → q is false.

p | q | ¬p | ¬q | p → q | ¬q → ¬p
T | T | F | F | T | T
T | F | F | T | F | F
F | T | T | F | T | T
F | F | T | T | T | T

Only the contrapositive '¬q → ¬p' is logically equivalent to the original conditional proposition p → q.

4.6.2 Advantages and Disadvantages of Propositional Logic
Advantages of propositional Logic
Ex. 4.6.2 : Rewrite the following statements without ...
Statement : If p, then q.
Converse : If q, then p.
Inverse : If not p, then not q.
Contrapositive : If not q, then not p.

The biconditional p ↔ q is true whenever p and q have the same truth values, and false otherwise.
The contrapositive of a conditional statement of the form "if p then q" is "if ¬q then ¬p". Symbolically, the contrapositive of p → q is ¬q → ¬p.
(i) Conditional : The conditional of q by p is "If p then q" or "p implies q", and is denoted by 'p → q'.
(ii) Biconditional (iff) : The biconditional of p and q is "p, if and only if, q", and is denoted by p ↔ q.
(iii) Only if : p only if q means "if not q then not p" or, equivalently, "if p then q".
(iv) Sufficient condition : p is a sufficient condition for q means "if p then q".

Theorem
The propositions P(p, q, ...) and Q(p, q, ...) are logically equivalent if and only if the proposition P(p, q, ...) ↔ Q(p, q, ...) is a tautology.
Proof
Step (I) : Let P(p, q, ...) ≡ Q(p, q, ...). Then they have the same truth table.
∴ P(p, q, ...) ↔ Q(p, q, ...) is true for any values of the variables p, q, ... It means that the proposition is a tautology.
Step (II) : Since each step is reversible, the converse is also true.

Ex. 4.6.3 : Determine the contrapositive of the statements :
(i) If Bhide is a teacher, then he is poor.
(ii) If Thombare studies, he will pass the test.
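The claim that only the contrapositive is equivalent to p → q can be verified mechanically by comparing the four truth-table columns. The helper below is an illustration of that check, not part of the original text.

```python
# Verifying by exhaustive truth tables that the contrapositive ¬q → ¬p
# is equivalent to p → q, while the converse and inverse are not.
from itertools import product

def implies(a, b):
    return (not a) or b

rows = list(product([True, False], repeat=2))

conditional    = [implies(p, q)         for p, q in rows]
converse       = [implies(q, p)         for p, q in rows]
inverse        = [implies(not p, not q) for p, q in rows]
contrapositive = [implies(not q, not p) for p, q in rows]

print(conditional == contrapositive)   # True
print(conditional == converse)         # False
print(converse == inverse)             # True: converse and inverse match
```

So the converse and inverse are equivalent to each other, but neither is equivalent to the original conditional.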
4.7 ARGUMENTS
Definition : An argument is an assertion such that a given set of propositions P₁, P₂, ..., Pₙ, called premises, gives rise to another proposition Q (or a consequence Q), called the conclusion. Such an argument is denoted by P₁, P₂, ..., Pₙ ⊢ Q.
Definition : Valid Argument or Logical Argument :
1. An argument P₁, P₂, ..., Pₙ ⊢ Q is said to be logical or valid if Q is true whenever all the premises P₁, P₂, ..., Pₙ are true.
2. An argument which is not valid is called a fallacy.

4.7.1
The argument P₁, P₂, ..., Pₙ ⊢ Q is valid if and only if the proposition (P₁ ∧ P₂ ∧ ... ∧ Pₙ) → Q is a tautology.
We recall that P₁, P₂, ..., Pₙ are true if and only if P₁ ∧ P₂ ∧ ... ∧ Pₙ is true.   ...(i)
Thus the argument P₁, P₂, ..., Pₙ ⊢ Q is valid if Q is true whenever P₁ ∧ P₂ ∧ ... ∧ Pₙ is true or, equivalently, if the proposition (P₁ ∧ P₂ ∧ ... ∧ Pₙ) → Q is a tautology.

4.7.2 Fundamental Principle of Logical Reasoning
The principle states that : "If p implies q and q implies r, then p implies r."

Ex. 4.7.1 : Show that the following argument is valid : p, p → q ⊢ q (This is called the Law of Detachment).
Soln. : We prepare the truth-table for p → q :

p | q | p → q
T | T | T
T | F | F
F | T | T
F | F | T

p and p → q are both true only in the first row, where q is also true; hence the argument is valid.

Since the proposition [(p → q) ∧ ¬p] → ¬q is not a tautology (last column), the argument p → q, ¬p ⊢ ¬q is a fallacy.
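The test of Sec. 4.7.1 (an argument is valid iff the conjunction of the premises implies the conclusion in every row) translates directly into code. The function name and the encoding of premises as Python functions are assumptions made for this sketch.

```python
# Checking validity as in Sec. 4.7.1: P1, ..., Pn ⊢ Q is valid iff
# (P1 ∧ ... ∧ Pn) → Q is a tautology. Names are illustrative.
from itertools import product

def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion, n_vars=2):
    """Enumerate every assignment; whenever all premises hold, Q must hold."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False   # a row with true premises but a false conclusion
    return True

# Law of Detachment (Ex. 4.7.1): p, p → q ⊢ q -- valid
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q))                       # True

# p → q, ¬p ⊢ ¬q -- a fallacy
print(is_valid([lambda p, q: implies(p, q), lambda p, q: not p],
               lambda p, q: not q))                   # False
```

The second call fails precisely on the row p = F, q = T, the same counterexample row noted in the truth-table discussion.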
Observe that in line (3) of the truth table, p → q and ¬p are true, but ¬q is false.

Ex. 4.7.4 : Prove that the argument p ↔ q, q ⊢ p is valid.
Soln. : We can establish the validity of the argument by different methods.
Method 1
We construct the truth-table for p ↔ q :

p | q | p ↔ q
T | T | T
T | F | F
F | T | F
F | F | T

Now, p ↔ q is true in lines 1 and 4, and q in lines 1 and 3.
∴ p ↔ q and q are both true in line 1, where p is also true.
∴ The argument p ↔ q, q ⊢ p is valid.
Method 2
Now, we construct the truth-table of [(p ↔ q) ∧ q] → p as follows :

p | q | p ↔ q | (p ↔ q) ∧ q | [(p ↔ q) ∧ q] → p
T | T | T | T | T
T | F | F | F | T
F | T | F | F | T
F | F | T | F | T

Since [(p ↔ q) ∧ q] → p is a tautology, the argument is valid.

Ex. 4.7.5 : Determine the validity of the argument p → q, ¬q ⊢ ¬p.
Soln. : We construct the truth table for [(p → q) ∧ ¬q] → ¬p :

p | q | p → q | ¬q | (p → q) ∧ ¬q | ¬p | [(p → q) ∧ ¬q] → ¬p
T | T | T | F | F | F | T
T | F | F | T | F | F | T
F | T | T | F | F | T | T
F | F | T | T | T | T | T

Since the proposition [(p → q) ∧ ¬q] → ¬p is a tautology, the given argument is valid.

Ex. 4.7.6 : Show that (p ∧ q) → (p ∨ q) is a tautology.
Soln. : We prepare the truth table :

p | q | p ∧ q | p ∨ q | (p ∧ q) → (p ∨ q)
T | T | T | T | T
T | F | F | T | T
F | T | F | T | T
F | F | F | F | T

Since the truth value of (p ∧ q) → (p ∨ q) is T for all values of p and q, the proposition is a tautology.

Ex. 4.7.7 : State the converse of each of the following implications :
(i) If 4 + 4 = 8, then I am not the PM of India.
(ii) If I am late, then I did not take the train to work.
(iii) If I have enough money, then I will buy a car and I will buy a house.
Soln. :
(i) If I am not the PM of India, then 4 + 4 = 8.
(ii) If I did not take the train to work, then I am late.
(iii) If I buy a car and I buy a house, then I have enough money.

Ex. 4.7.8 : Determine the truth value for each of the following statements :
(i) If 8 is even, then Bombay has a large population.
(ii) If 8 is even, then Bombay has a small population.
(iii) If 8 is odd, then Bombay has a large population.
(iv) If 8 is odd, then Bombay has a small population.
Soln. : Let p = 8 is even, q = Bombay has a large population. We prepare the truth-value table :

Sr. No. | p | q | p → q
(i) | T | T | T
(ii) | T | F | F
(iii) | F | T | T
(iv) | F | F | T

Ex. 4.7.9 : Construct truth tables to determine whether the given statement is a tautology, a contingency or an absurdity :
(i) p → (q → p) (ii) q → (q → p).
Soln. :
(i) We prepare the truth-table :

p | q | q → p | p → (q → p)
T | T | T | T
T | F | T | T
F | T | F | T
F | F | T | T

Since the truth-value of p → (q → p) is T for all values of p and q, the proposition is a tautology.
(ii) The truth-table is :

p | q | q → p | q → (q → p)
T | T | T | T
T | F | T | T
F | T | F | F
F | F | T | T

Since the truth-value of q → (q → p) is not T for all values of p and q, the proposition is a contingency.

Remarks on Prolog :
(2) Prolog is a declarative language, which means a Prolog program consists of data based on facts and logical relationships rather than computing rules, i.e. how to find a solution.
(3) A logical relationship describes the relationships which hold for the given application.

4.8 PRECEDENCE RULE
- We can always use a truth table to show that an argument form is valid. We do this by showing that whenever the premises are true, the conclusion must also be true.
- However, this can be a tedious approach. For example, when an argument form involves 10 different propositional variables, it requires 2¹⁰ = 1024 different rows to show whether the argument form is valid or not.
- Fortunately, we do not have to resort to truth tables. Instead, we can first establish a rule of precedence, i.e., the order of preference in which the connectives are applied in a formula of propositions that has no brackets :
(i) ¬ (ii) ∧ (iii) ∨ (iv) → and ↔

4.9 DUALITY
In this section, we shall consider formulae which contain the connectives ∧, ∨, ¬. We shall see later that any formula containing any other connective can be replaced by an equivalent formula containing only these three connectives.
Definition : Two formulae, A and A*, are said to be duals of each other if either one can be obtained from the other by replacing ∧ by ∨ and ∨ by ∧ (and T by F, F by T).
►
Oj tl~ ic a)
-z,.. 4.9 .1 fflasndve E. .... ....
•
-- --
e,c. 4.9 -- -- -- -- based Oft Daals oi (Knowledoe and R9810nlnp) ... Pape No.
.1 : Wr ite the dua ls of. (4-21)
... ILi So ln. :
(i) -
(p V q) /\ r ; (ii) (p I\ q) • Mota ,ne'='n,·r...
(iii) show ·•·-"
that - n ell'Of ln' thi
"'a • ..,.,
--- "'-
(p VT ), (iv ) -, (p V q) I\ (p .._,-, wet haw
'-·'
lo
V V r) -, (p A q} and (-, p V -, q))
(ti So ln. : Du als arc 'C. • ..,..
are loglcaDy
-, (q "- , r) ••
~
,t: suitable example. t ,
Using distributive Jaw, [(p ➔ q) =,p V q] ..... "
' I
L- - - _ ___....;;.i_..__.._ - - - - - - - - - - - - - - - _..._ - J_.,I
• [-,p /\ (-,p /\ q)] V [q /\,p /\q)
II
To convert propositional formula to CNF, wt
Using associativ e and commutative laws ; perform the following steps.
• [(-, p /\-, p) /\ q) V [q /\ q /\-, p] ► Step 1 : Push negations into the foonula,
Using idempoten t law ; i.e. -, p /\ -, p = -, p and q " q repeatedly applying De Morgan's Law, until all
=q negations only apply to atoms. We obtain •
= [-, p /\ q] V [q /\-, p] is dnf. formula in negation normal form : J •
(i) -, (p V q) to(-, p) /\ (-, q)
Ex. 4.10.2 : Obtain the d.n.f. of the form: p " (p ➔ q)
(ii) -, (p V q) to(-, p) V (-, q)
0 Soln. : We have
► Step 2 : We apply distributive law ~
pA(p ➔ q) where a disjunction occurs over a conj~
=pA (-,p vq) (": p ➔ q•-,pvq) When it is completed, the formula is in CNF. tJ
(i) P V (q Ar) to (p v q) /\ (p v r) 1r
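The two CNF steps can be sketched as a pair of small recursive functions. The tuple encoding of formulas below, ('and', a, b), ('or', a, b), ('not', a), or a bare variable string, is our own choice, made only to keep the example short; it is not a standard from the text.

```python
# A sketch of the two CNF steps above, on formulas encoded as tuples:
# ('and', a, b), ('or', a, b), ('not', a), or a variable name string.

def nnf(f):
    """Step 1: push negations inward with De Morgan's laws."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'not':
        g = f[1]
        if isinstance(g, str):
            return f                          # negation already on an atom
        if g[0] == 'not':
            return nnf(g[1])                  # double negation: ¬¬a = a
        if g[0] == 'and':                     # ¬(a ∧ b) = ¬a ∨ ¬b
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':                      # ¬(a ∨ b) = ¬a ∧ ¬b
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
    return (op, nnf(f[1]), nnf(f[2]))

def dist(f):
    """Step 2: distribute ∨ over ∧ until the formula is in CNF.
    Assumes the input is already in negation normal form."""
    if isinstance(f, str) or f[0] == 'not':
        return f
    a, b = dist(f[1]), dist(f[2])
    if f[0] == 'or':
        if not isinstance(a, str) and a[0] == 'and':   # (x ∧ y) ∨ b
            return ('and', dist(('or', a[1], b)), dist(('or', a[2], b)))
        if not isinstance(b, str) and b[0] == 'and':   # a ∨ (x ∧ y)
            return ('and', dist(('or', a, b[1])), dist(('or', a, b[2])))
    return (f[0], a, b)

# ¬(p ∨ q) ∨ r  -->  (¬p ∨ r) ∧ (¬q ∨ r)
print(dist(nnf(('or', ('not', ('or', 'p', 'q')), 'r'))))
```

Step 1 here applies exactly rules (i) and (ii) above, and Step 2 applies the distributive rule on both sides of a disjunction.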
The proposition R(1, 2, 3) is obtained by putting x = 1, y = 2, z = 3 in the statement R(x, y, z). We see that R(1, 2, 3) is "1 + 2 = 3", which is true. Also, note that R(0, 0, 1), which is the statement "0 + 0 = 1", is false.

Remark
1. In general, a statement involving n variables x₁, x₂, ..., xₙ can be denoted by P(x₁, x₂, ..., xₙ), and is the value of the propositional function P at the n-tuple (x₁, x₂, ..., xₙ). P is also called an n-place predicate or an n-ary predicate.
2. A predicate is generally not a proposition, but every proposition is a propositional function or a predicate.

Syllabus topic : Quantification

4.13 UNIVERSAL QUANTIFIER
- Many mathematical statements assert that a property is true for all values of a variable in a particular domain. The 'domain' of the values of a variable is also called the 'universe of discourse', or 'domain of discourse' (often just referred to as the 'domain').
- The universal quantification of P(x) for a particular domain is the proposition that tells that P(x) is true for all values of x in the domain.
Definition : The universal quantification of P(x) is the statement "P(x) for all values of x in the domain".
- The notation ∀x P(x) is read as "for all x, P(x)".
- An element for which P(x) is false is called a counterexample of ∀x P(x).
(2) The proposition "for all x, P(x)", which is interpreted as "for all values of x, P(x) is true", is a proposition in which the variable x is said to be 'universally quantified' [and '∀' is known as the universal quantifier].

4.13.1 Existential Quantifiers
- Suppose for the predicate P(x), ∀x [P(x)] is false, but there exists at least one value of x (or some values of x) for which P(x) is true; then we say that x is bound by existential quantification.
- The symbol used for 'there exists' is '∃'. Thus ∃x (P(x)) means 'there exists a value of x in the domain for which P(x) is true'.

Remark
(1) The negation of ∀x (P(x)) is "∀x, P(x) is not true", i.e. there exists at least one x for which P(x) is not true or, in other words, there exists an x for which '¬P(x)' is true.
(2) Observe that the statement ∃x P(x) is false if and only if there is no element x in the domain for which P(x) is true.

We summarise the meaning of the quantifications of P(x) :
Table 4.13.1 : Universal and existential quantification

Statement | When true? | When false?
∀x P(x) | P(x) is true for every x | There is an x for which P(x) is false
∃x P(x) | There is an x for which P(x) is true | P(x) is false for every x

Specifying the domain is mandatory when quantifiers are used. The truth value of a quantified statement often depends on which elements are in the domain.
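Over a finite domain, the two rows of Table 4.13.1 correspond exactly to Python's built-ins `all()` and `any()`. The domain and predicate below are invented for illustration; the predicate is the "x² > 0" example used in the worked problems that follow.

```python
# Quantifiers over a finite domain: ∀ is all(), ∃ is any().
# The domain and the predicate P are example choices of our own.

domain = range(-3, 4)            # a small finite set of integers
P = lambda x: x * x > 0          # P(x): "x^2 > 0"

print(all(P(x) for x in domain))        # ∀x P(x): False
print(any(P(x) for x in domain))        # ∃x P(x): True
print([x for x in domain if not P(x)])  # counterexamples: [0]
```

The list comprehension in the last line produces exactly the counterexample x = 0 that makes ∀x P(x) false, matching the definition of a counterexample above.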
4.14 EXAMPLES OF QUANTIFICATION
When all the elements in the domain can be listed, ∃x P(x) is the same as the disjunction P(x₁) ∨ P(x₂) ∨ ... ∨ P(xₙ), because this disjunction is true if and only if at least one of P(x₁), P(x₂), ..., P(xₙ) is true.
Ex. 4.14.1 : Let P(x) be the statement "x² > 0", where the universe of discourse consists of all integers. What is the truth-value of the quantification ∀x P(x) ?
Soln. :
[Note : The 'universe of discourse' is the domain of the statement "x² > 0", consisting of all integers.]
We see that x = 0 is a counter-example, because x² = 0 when x = 0, so that x² is not greater than 0 when x = 0. Hence ∀x P(x) is false.

Remark
When all the elements in the domain can be listed, say x₁, x₂, ..., xₙ, it follows that the universal quantification ∀x P(x) is the same as the conjunction P(x₁) ∧ P(x₂) ∧ ... ∧ P(xₙ), because this conjunction is true if and only if P(x₁), P(x₂), ..., P(xₙ) are all true.

Ex. 4.14.2 : What is the truth-value of ∀x P(x), where P(x) is the statement "x² < 10" and the domain consists of positive integers not exceeding 4 ?
Soln. : The domain consists of the integers 1, 2, 3, 4. We observe that (from the above remark) the statement ∀x P(x) is the same as the conjunction P(1) ∧ P(2) ∧ P(3) ∧ P(4). The statement P(4), "4² < 10", is false.
∴ ∀x P(x) is false.

Ex. 4.14.3 : Let Q(x) denote the statement "x = x + 1". What is the truth value of the quantification ∃x Q(x), where the domain consists of all real numbers ?
Soln. : We note that Q(x) is false for every real number x.
∴ The existential quantification of Q(x), which is ∃x Q(x), is false.

Ex. 4.14.4 : What is the truth value of ∃x P(x), where P(x) is the statement "x² > 10" and the universe of discourse consists of positive integers not exceeding 4 ?
Soln. : The given domain is {1, 2, 3, 4}. From the problem, we say that the proposition ∃x P(x) is the same as the disjunction P(1) ∨ P(2) ∨ P(3) ∨ P(4). Because P(4), which is the statement "4² > 10", is true, it follows that [P(1) ∨ P(2) ∨ P(3) ∨ P(4)] is true.
∴ ∃x P(x) is true.

Ex. 4.14.5 : What do the statements ∀x < 0 (x² > 0), ∀y ≠ 0 (y³ ≠ 0) and ∃z > 0 (z² = 2) mean, where the domain in each case consists of the real numbers ?
Soln. :
(i) The statement ∀x < 0 (x² > 0) states that for every real number x with x < 0, x² > 0. That is, it states "The square of a negative real number is positive". This statement is the same as ∀x (x < 0 → x² > 0).
(ii) The statement ∀y ≠ 0 (y³ ≠ 0) states that for every real number y with y ≠ 0, y³ ≠ 0. That is, it states that "the cube of every non-zero real number is non-zero". This statement is equivalent to ∀y (y ≠ 0 → y³ ≠ 0).
(iii) The statement ∃z > 0 (z² = 2) states that there exists a real number z with z > 0 such that z² = 2. That is, it states "There is a positive square root of 2". The statement is equivalent to ∃z (z > 0 ∧ z² = 2).
Table 4.14.1 : Negation of quantified statements

Negation | Equivalent statement | When true? | When false?
¬∃x P(x) | ∀x ¬P(x) | For every x, P(x) is false | There is an x for which P(x) is true
¬∀x P(x) | ∃x ¬P(x) | There is an x for which P(x) is false | P(x) is true for every x

4.14.6 Different Inference Rules for FOPL
UQ. Explain different inference rules for FOPL.
- Inference in First-Order Logic is used to deduce new facts or sentences from existing sentences.
- Let us first see some basic terminologies used in FOL.
Substitution : Substitution is a fundamental operation performed on terms and formulas. It occurs in all inference systems in first-order logic. The substitution is complex in the presence of quantifiers in FOL. If we write F[a/x], it refers to substituting a constant 'a' in place of the variable 'x'.
Equality : First-order logic also uses what is called Equality in FOL. For this, we can use equality symbols, which specify that two terms refer to the same subject.
Example : Brother(Ramesh) = Ashok
The inference rules for quantifiers are :
(i) Universal Generalisation (ii) Universal Instantiation
(iii) Existential Instantiation (iv) Existential Introduction

(i) Universal Generalisation
Universal generalisation is a valid inference rule which states that if the premise P(c) is true for any arbitrary element c, then we can have the conclusion ∀x P(x). It can be represented as :
P(c)
∴ ∀x P(x)

(ii) Universal Instantiation (UI)
Universal instantiation, also called universal elimination, is a valid inference rule. It can be applied multiple times to add new sentences. As per UI, we can infer any sentence obtained by substituting a ground term for the variable. The UI rule states that we can infer any sentence P(c) by substituting a ground term c (a constant within the domain of x) from ∀x P(x), for any object in the universe of discourse. It can be represented as :
∀x P(x)
∴ P(c)
Example (I) : From "Every person likes ice-cream", ∀x P(x), we can infer that "John likes ice-cream", P(John).
Example (II) : "All kings who are greedy are Evil". In FOL form :
∀x king(x) ∧ greedy(x) → Evil(x)
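Universal Instantiation can be sketched over a tiny finite universe: the quantified rule of Example (II) is instantiated with each ground term in turn, and the instantiated rule fires wherever its antecedent holds. The universe, sets, and names below are invented for the example.

```python
# Universal Instantiation sketched over a tiny finite universe of
# discourse. The rule ∀x: king(x) ∧ greedy(x) → Evil(x) is
# instantiated with each ground term; the data is illustrative.

universe = ["John", "Richard"]
kings  = {"John", "Richard"}
greedy = {"John"}

def antecedent(x):                # king(x) ∧ greedy(x)
    return x in kings and x in greedy

evil = set()
for c in universe:                # UI: substitute each ground term c for x
    if antecedent(c):             # instantiated rule: king(c) ∧ greedy(c) → Evil(c)
        evil.add(c)

print(evil)   # {'John'}: Richard is a king but not greedy
```

Each pass of the loop is one application of UI, producing the ground sentence king(c) ∧ greedy(c) → Evil(c) for a specific constant c.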
UEx. 4.14.10 (MU - Q. 2(b), May 19, 10 Marks)
What is FOPL ? Represent the following sentences using FOPL.
UEx. 4.14.9 (MU - Q. 2(b), 2016, 5 Marks)
Write first-order logic statements for :
(i) John has at least two friends.
(ii) If two people are friends then they are ...
Comparison of Propositional Logic (PL) and Predicate Logic (FOL) :

4. Truth values
   PL : A proposition has truth values (0, 1), which means it can have one of the two values, i.e. True or False.
   FOL : Predicate logic is an expression consisting of variables with a specific domain.
5. Usefulness
   PL : It is the most basic and widely used logic; it is also known as Boolean logic. This logic is used for the development of powerful search algorithms, including implementation methods.
   FOL : Predicate logic is an extension of propositional logic. Predicate logic deals with infinite structures as well; the quantifiers are the linguistic marks that permit one to treat such infinite structures.
6. Nature
   PL : Propositional logic is used in AI for planning, problem-solving, intelligent control and for decision-making. It also includes certainty as well as uncertainty.
   FOL : A predicate with variables can be made a proposition by either assigning a value to the variable or by quantifying the variable. It consists of objects, functions, and relations between the objects.
7. Representations
   PL : It laid the foundation for machine learning models.
   FOL : Predicate logic helps analyse the scope of the subject over the predicate.
8. Language
   PL : It is a useful tool for reasoning.
   FOL : It is different from propositional logic, which lacks quantifiers.
9. Level of logic
   PL : It has limitations because it cannot see inside propositions and take advantage of relationships among them.
   FOL : Predicate logic is undecidable, since universal and existential quantifiers deal with infinite structures.
UQ. Explain Forward chaining and Backward chaining algorithms with the help of an example.

Rule-based system architecture consists of a set of rules, a set of facts, and an inference engine. The need is to find what new facts can be derived.
In forward chaining, the system starts from a set of facts and a set of rules, and tries to find a way of using those rules and facts to deduce a conclusion or come up with a suitable course of action.
This is known as data-driven reasoning, because the reasoning starts from a set of data and ends up at the goal, which is the conclusion.
The steps followed in forward chaining are :

(i)   When applying forward chaining, the first step is to take the facts in the fact database and see if any combination of these matches all the antecedents of one of the rules in the rule database.
(ii)  When all the antecedents of a rule are matched by facts in the database, this rule is triggered.
(iii) Usually, when a rule is triggered, it is then fired, which means its conclusion is added to the facts database. If the conclusion of the rule that has been fired is an action or a recommendation, then the system may cause that action to take place or the recommendation to be made.
(iv)  For example, consider the following set of rules used to control an elevator in a three-story building :

Rule 1 : IF on first floor AND button is pressed on first floor THEN open door.
Rule 2 : IF on first floor AND button is pressed on second floor THEN go to second floor.
Rule 3 : IF on first floor AND button is pressed on third floor THEN go to third floor.
Rule 4 : IF on second floor AND button is pressed on first floor AND already going to third floor THEN remember to go to first floor later.

4.15.2 Backward Chaining

UQ : Illustrate forward chaining and backward chaining in propositional logic with an example.

•  A plan is a sequence of actions that an agent decides to take to solve a particular problem. Backward chaining can make the process of formulating a plan more efficient than forward chaining.
•  Backward chaining in this context starts with the goal state, which is the set of conditions the agent wishes to achieve in carrying out its plan. It examines this state and sees what actions could lead to it.
•  For example, if the goal state involves a block being on a table, then one possible action would be to place that block on the table. This action might not be possible from the start state, and so further actions need to be added before it in order to reach it from the start state.
•  In this way, a plan can be formulated starting from the goal and working back toward the start state. The benefit of this method is particularly clear in situations where the start state allows a very large number of possible actions.
•  In this kind of situation, it can be very inefficient to attempt to formulate a plan using forward chaining, because it involves examining every possible action without paying any attention to which action might be the best one to lead to the goal state.
•  Backward chaining ensures that each action that is taken is one that will definitely lead to the goal, and in many cases this will make the planning process far more efficient.
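The trigger/fire loop of steps (i)-(iii) can be sketched in Python. This is a minimal illustration, not the book's implementation: the two elevator-style rules are simplified stand-ins, and conclusions are treated as plain facts rather than physical actions.

```python
# A minimal sketch of forward chaining: a rule whose antecedents are all
# present in the fact database is triggered, and firing it adds its
# conclusion to the database.

def forward_chain(facts, rules):
    """Repeatedly fire triggered rules until no new conclusion appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            # A rule is triggered when every antecedent is in the fact database.
            if set(antecedents) <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires
                changed = True
    return facts

rules = [
    (["on first floor", "button pressed on first floor"], "open door"),
    (["on first floor", "button pressed on second floor"], "go to second floor"),
]

facts = forward_chain(["on first floor", "button pressed on first floor"], rules)
print("open door" in facts)    # True
```

The loop is data-driven: it never looks at a goal, it simply derives everything the facts support.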
•  In backward chaining, we start from a conclusion, which is the hypothesis we wish to prove, and we aim to show how that conclusion can be reached from the rules and facts in the database.
•  The conclusion we are aiming to prove is called a goal, and so reasoning in this way is known as goal-driven reasoning.

4.15.3 Forward Reasoning

GQ : Explain forward and backward reasoning with examples.
GQ : Explain reasoning with an example. Compare forward and backward reasoning with an example.
GQ : Differentiate between forward and backward reasoning.
4.15.6 Backward Chaining

Backward chaining is also known as a backward deduction or backward reasoning method when using an inference engine. A backward chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.

4.15.8 Backward Chaining Proof

In backward chaining, we begin with our goal predicate, which is Criminal (Robert), and then infer further rules.

Step 1 : We assume the goal fact, and from the goal fact we infer other facts; at last, we prove those facts true. So our goal fact is "Robert is Criminal", and the following is the predicate of it : Criminal (Robert).

Step 3 : At this step, we extract the further fact Missile (q), which is inferred from Weapon (q), as it satisfies Rule (5). Weapon (q) is also true with the substitution of the constant T1 at q.

Step 5 : Now, we can infer the fact Enemy (A, America) from Hostile (A), which satisfies Rule (6). Hence all the statements are proved true using backward chaining.
4.15.9 Comparison between Forward and Backward Reasoning

2.  Forward reasoning : It begins with new data.
    Backward reasoning : It begins with conclusions that are uncertain.
3.  Forward reasoning : The objective is to find a conclusion.
    Backward reasoning : The objective is to find the facts that support the conclusions.
7.  Forward reasoning : The inference engine searches the knowledge base with the given information, depending on the constraints.
    Backward reasoning : The first step is that the goal state and the rules are selected.
8.  Forward reasoning : The first step is that the system is given one or more constraints.
    Backward reasoning : Sub-goals are made from the selected rule which need to be satisfied for the goal state to be true.
9.  Forward reasoning : The rules are searched for in the knowledge base for every constraint.
    Backward reasoning : The initial conditions are set such that they satisfy all the sub-goals. The established states are matched to the initial state provided.
10. Forward reasoning : The rule that fulfils the condition is selected.
    Backward reasoning : If the condition is fulfilled, then the goal is the solution; otherwise the goal is rejected.
11. Forward reasoning : Every rule can produce a new condition from the conclusion obtained from the invoked one.
    Backward reasoning : If it tests a smaller number of rules, it provides a small amount of data.
12. Forward reasoning : New conditions can be added and are processed again. The step ends if no new conditions exist.
    Backward reasoning : It contains a smaller number of initial goals and a large number of rules.
13. Forward reasoning : It follows top-down reasoning.
    Backward reasoning : It follows a bottom-up reasoning technique.
In forward reasoning, reasoning proceeds forward, beginning with facts, chaining through rules, and finally establishing the goal. When the left side of a sequence of rules is instantiated first and the rules are executed from left to right, the process is called forward chaining/reasoning. This is also known as data-driven search, since input data are used to guide the direction of the inference process. For example, we can chain forward to show that when a student is encouraged, is healthy, and has goals, the student will succeed :

ENCOURAGED (student) → MOTIVATED (student)
MOTIVATED (student) & HEALTHY (student) → WORKHARD (student)
WORKHARD (student) & HASGOALS (student) → EXCELL (student)
EXCELL (student) → SUCCEED (student)

On the other hand, when the right side of the rule is instantiated first, the left-hand conditions become subgoals. These subgoals may in turn cause sub-subgoals to be established, and so on until facts are found to match the lowest subgoal conditions. When this form of inference takes place, we say that backward chaining is performed. This form of inference is also known as goal-driven inference, since an initial goal establishes the backward direction of the inferring.

For example, in MYCIN the initial goal in a consultation is "Does the patient have a certain disease?" This causes subgoals to be established, such as "Are certain bacteria present in the patient?" Determining if certain bacteria are present may require such things as tests on cultures taken from the patient. This process of setting up subgoals to confirm a goal continues until all the subgoals are eventually satisfied or fail. If satisfied, the backward chain is established, thereby confirming the main goal.
Some systems use both forward and backward chaining reasoning, depending on the type of problem and the information available. Likewise, rules may be tested exhaustively or selectively, depending on the control structure.

4.16 EXAMPLE TO COMPARE FORWARD AND BACKWARD CHAINING

In this case, we will revert to our use of symbols for logical statements, in order to clarify the explanation, but we could equally well be using rules about elevators or the weather.

Rules :
Rule 1 : A ∧ B → C          Rule 2 : A → D
Rule 3 : C ∧ D → E          Rule 4 : B ∧ E ∧ F → G
Rule 5 : A ∧ E → H          Rule 6 : D ∧ E ∧ H → I

Facts :
Fact 1 : A        Fact 2 : B        Fact 3 : F

Goal : Our goal is to prove H.

•  First let us use forward chaining. As our conflict resolution strategy, we will fire rules in the order they appear in the database, starting from rule 1.
•  In the initial state, rules 1 and 2 are both triggered. We will start by firing rule 1, which means we add C to our fact database. Next, rule 2 is fired, meaning we add D to our fact database.
•  We now have the facts A, B, C, D, F, but we have not yet reached our goal, which is H.
•  Now rule 3 is triggered and fired, meaning that fact E is added to the database.
•  As a result, rules 4 and 5 are triggered. Rule 4 is fired first, resulting in fact G being added to the database, and then rule 5 is fired, and fact H is added to the database.
•  We have now proved our goal and do not need to go on any further. This deduction is presented in the following table :

Table 4.16.1 : Deduction representation

Facts                     Rules triggered    Rule fired
A, B, F                   1, 2               1
A, B, C, F                2                  2
A, B, C, D, F             3                  3
A, B, C, D, E, F          4, 5               4
A, B, C, D, E, F, G       5                  5
A, B, C, D, E, F, G, H    6                  STOP

•  Now we will consider the same problem using backward chaining. To do so, we will use a goals database in addition to the rule and fact databases. In this case, the goals database starts with just the conclusion, H, which we want to prove. We will now see which rules would need to fire to lead to this conclusion. Rule 5 is the only one that has H as a conclusion, so to prove H, we must prove the antecedents of rule 5, which are A and E.
•  Fact A is already in the database, so we only need to prove the other antecedent, E. Therefore, E is added to the goal database. Once we have proved E, we know that this is sufficient to prove H, so we can remove H from the goals database.
•  So now we attempt to prove fact E. Rule 3 has E as its conclusion, so to prove E, we must prove the antecedents of rule 3, which are C and D.
•  Neither of these facts is in the fact database, so we need to prove both of them. They are both therefore added to the goals database. D is the conclusion of rule 2, and rule 2's antecedent, A, is already in the fact database, so we can conclude D and add it to the fact database.
•  Similarly, C is the conclusion of rule 1, and rule 1's antecedents, A and B, are both in the fact database. So we have now proved all the goals in the goal database and have therefore proved H, and can stop.
•  This process is represented in the table below :

Table 4.16.2 : Process representation

Facts             Goals    Matching rule
A, B, F           H        5
A, B, F           E        3
A, B, F           C, D     1
A, B, C, F        D        2
A, B, C, D, F     —        STOP

•  In this case, backward chaining needed to use one fewer rule. If the rule database had a large number of other rules that had A, B and F as their antecedents, then forward chaining would have been even more inefficient.
•  In general, backward chaining is appropriate in cases where there are few possible conclusions (or even just one) and many possible facts, not very many of which are necessarily relevant to the conclusion.
•  Forward chaining is more appropriate when there are many possible conclusions. The way in which forward or backward chaining is usually chosen is to consider which way an expert would solve the problem. This is particularly appropriate because rule-based reasoning is often used in expert systems.
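The comparison above can be checked mechanically. The sketch below (our own illustration, assuming the first-matching-rule conflict resolution used in the text) runs both strategies on Rules 1-6 with facts A, B, F and goal H, and reports which rules each strategy actually uses.

```python
# Rules 1-6 from Section 4.16: (antecedents, conclusion).
RULES = [({"A", "B"}, "C"), ({"A"}, "D"), ({"C", "D"}, "E"),
         ({"B", "E", "F"}, "G"), ({"A", "E"}, "H"), ({"D", "E", "H"}, "I")]

def forward(facts, goal):
    """Fire the first triggered rule each cycle until the goal is derived."""
    facts, fired = set(facts), []
    while goal not in facts:
        for i, (ante, concl) in enumerate(RULES, 1):
            if ante <= facts and concl not in facts:
                facts.add(concl)
                fired.append(i)
                break          # fire one rule, then rescan from rule 1
        else:
            raise ValueError("goal not derivable")
    return fired

def backward(facts, goal, used):
    """Recursively prove only the antecedents the goal actually needs."""
    if goal in facts:
        return True
    for i, (ante, concl) in enumerate(RULES, 1):
        if concl == goal and all(backward(facts, a, used) for a in ante):
            used.append(i)
            return True
    return False

print(forward({"A", "B", "F"}, "H"))   # rules fired on the way to H
used = []
backward({"A", "B", "F"}, "H", used)
print(sorted(set(used)))               # rules needed by backward chaining
```

Forward chaining fires rules 1-5 (including rule 4, which is irrelevant to H), while backward chaining needs only rules 1, 2, 3 and 5: one fewer rule, as the table shows.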
4.17 DIFFERENCE BETWEEN FORWARD AND BACKWARD CHAINING

1.  Forward chaining : It is a problem-solving technique used by an expert system when it is faced with a scenario and has to give a solution or conclusion for this scenario.
    Backward chaining : It is a reasoning technique employed by an expert system to take a goal and prove that this goal is founded legitimately according to the rule base it possesses.
2.  Forward chaining : The system will work its way through the rules, finding which ones fit and which lead to which goals, using deductive reasoning.
    Backward chaining : It is a form of reverse engineering, which is very applicable in situations where there are so many rules that could be applied to a single problem that the system could be sifting through rules before it gets anywhere.
3.  Forward chaining : It is used when a conclusion is not known beforehand and the system has to reason its way through to one.
    Backward chaining : It is more appropriate when the conclusion is already known.
4.  Forward chaining : It matches conditions and then generates inferences from those conditions. These inferences can in turn match other rules. Basically, it takes a set of initial conditions and then draws all inferences from those conditions.
    Backward chaining : It is used for interrogative applications (finding items that fulfil certain criteria); one commercial example of a backward chaining application might be finding which insurance policies are covered by a particular reinsurance contract.
5.  Forward chaining : Starts with initial facts.
    Backward chaining : Starts with some hypothesis or goal.
6.  Forward chaining : Asks many questions.
    Backward chaining : Asks a few questions.
7.  Forward chaining : Tests all the rules.
    Backward chaining : Tests some rules.
8.  Forward chaining : Slow, because it tests all the rules.
    Backward chaining : Fast, because it tests fewer rules.
9.  Forward chaining : Provides a huge amount of information from just a small amount of data.
    Backward chaining : Provides a small amount of information from just a small amount of data.
10. Forward chaining : Attempts to infer everything possible from the available information.
    Backward chaining : Searches only that part of the knowledge base that is relevant to the current problem.
11. Forward chaining : Primarily data-driven.
    Backward chaining : Goal-driven.
12. Forward chaining : Uses input; searches rules for an answer.
    Backward chaining : Begins with a hypothesis; seeks information until the hypothesis is accepted or rejected.
13. Forward chaining : Top-down reasoning.
    Backward chaining : Bottom-up reasoning.
14. Forward chaining : Works forward to find conclusions from the facts.
    Backward chaining : Works backward to find facts that support the conclusions.
15. Forward chaining : Tends to be breadth-first.
    Backward chaining : Tends to be depth-first.
Syllabus topic : Semantic Networks

4.18 SEMANTIC NETWORKS

UQ : What are semantic networks (or semantic nets) and their applications?

•  Semantic networks are an alternative to predicate logic as a form of knowledge representation.
•  The idea is that we can store our knowledge in the form of a graph, with nodes representing objects in the world, and arcs representing relationships between those objects.
•  The physical attributes of a person can be represented as in Fig. 4.18.1.

[Fig. 4.18.1 : A semantic network — Person is_a Mammal and has_part Head; Yuvraj is an instance of Person, plays for team PWI, whose team colours are Black/Blue.]

•  These values can also be represented in logic as : is_a (person, mammal), instance (Yuvraj, person), team (Yuvraj, PWI). We have already seen how conventional predicates such as lecturer (Poonam) can be written as instance (Poonam, lecturer).
•  But we have a problem : how can we have more-than-2-place predicates in semantic nets? For example, score (PWI, India, 20).
•  The solution is to create new nodes to represent new objects either contained in or alluded to in the knowledge — the game and the fixture in the current example. Relate information to nodes and fill up slots.

[Fig. 4.18.2 : A semantic network for an n-place predicate.]

4.18.1 Advantages and Disadvantages of Semantic Nets

Advantages of semantic nets

1. Semantic networks can represent default values for different categories.
2. Semantic networks are simple and easy to understand.
3. Semantic networks are easy to translate into Prolog.
4. Semantic network arcs represent relationships between nodes.
5. In semantic networks the relationships are handled by pointers.
6. Semantic networks provide good visualization. Being diagrammatic representations, they are easy to view.

Limitations of semantic networks

1. The lack of standards for link names makes it difficult to understand the net's meaning.
2. Even the naming of nodes is not standard. If a node is labelled "car", this may mean the class of cars, a specific car, or the concept of a car.
3. Answering a negative query like "is XYZ a …" takes a very long time.
4. Semantic nets are logically inadequate, because they cannot define knowledge in the way logic can.
5. Semantic nets are better at representing binary relations, but not all types of relations.
6. Logic enhancements have been made, and heuristic enhancements have been tried, by attaching procedures to the nodes in the semantic nets. These procedures are executed when the node is activated.
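The graph idea above can be sketched as a set of labelled-arc triples. The node name `game1` is our own invented reification node for the 3-place predicate score (PWI, India, 20), following the "create new nodes" solution described in the text; the other names follow the figure's example.

```python
# A semantic network stored as (subject, relation, object) triples.
# The 3-place score(PWI, India, 20) is reified as a new "game1" node
# with one binary relation per slot.

triples = {
    ("person", "is_a", "mammal"),
    ("person", "has_part", "head"),
    ("Yuvraj", "instance", "person"),
    ("Yuvraj", "team", "PWI"),
    ("game1", "home_team", "PWI"),      # reified n-place predicate
    ("game1", "visiting_team", "India"),
    ("game1", "score", "20"),
}

def objects(subject, relation):
    """Follow all arcs with the given label leaving a node."""
    return {o for s, r, o in triples if s == subject and r == relation}

print(objects("Yuvraj", "team"))    # {'PWI'}
print(objects("game1", "score"))    # {'20'}
```

Because every relation is now binary (one arc per slot), the network stays a plain labelled graph even though the original predicate had three arguments.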
UQ : Give the semantic net representation for the following sentences :
(a) … (b) Every dog has bitten a postman.

[Fig. : Semantic net representation of the given sentences, with nodes joined by labelled arcs such as has_colour.]

•  "Every dog has bitten a postman" is a general statement. Every element of a general statement has at least two attributes : a form, which states the relation that is being asserted. For every dog d, there exists a biting event b and a postman m such that d is the assailant of b and m is the victim.
•  In the Bharathiar University Computer Centre example, the mini-computer system is a generic node, because many mini-computer systems exist and that node has to cater to all of them. On the contrary, individual or instance nodes explicitly state that they are specific instances of a generic node.
[Fig. : A semantic network for the Bharathiar University Computer Centre — the centre is_in Bharathiar University, which is_in Coimbatore; it has_a HCL Horizon III system and a line printer, with has_part links to devices such as a monitor and attributes such as speed.]
UQ : Write a short note on the resolution algorithm.

4.19.1 Resolution and Unification

If various statements are given, and we are required to state a conclusion of those statements, then this process is called Resolution. Resolution is a single inference rule which can efficiently operate on the conjunctive normal form or clausal form. Unification is a key concept in proofs by resolution.

4.19.2 Resolution Algorithm

Robinson in 1965 introduced the resolution principle, which can be directly applied to any set of clauses. The principle is : "Given any two clauses A and B, if there is a literal P1 in A which has a complementary literal P2 in B, delete P1 and P2 from A and B and construct a disjunction of the remaining clauses. The clause so constructed is called the resolvent of A and B."

For example, consider the following :

A : P ∨ Q ∨ R      (given in the problem)
B : ¬P ∨ Q ∨ R     (given in the problem)
D : Q ∨ R          (resolvent of A and B — the disjunction of A and B after the complementary literals are removed)
C : ¬Q ∨ R         (given in the problem)
E : R              (resolvent of C and D)

The resolvent D again has a literal Q whose negation is available in C; hence, resolving those two, one has the final resolvent E.

It is possible to picture the path of the problem using a deduction tree; in fact, it is easier to grasp the flow of the problem using the deduction tree. The deduction tree is :

P ∨ Q ∨ R      ¬P ∨ Q ∨ R
       \        /
        Q ∨ R      ¬Q ∨ R
            \       /
                R

Fig. 4.19.1 : Deduction tree
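Robinson's resolution step can be sketched as follows, with a clause represented as a frozenset of literals and `~` marking negation (a minimal sketch of the principle, not a full prover; the encoding is our own). It reproduces the derivation above: A and B resolve on P to give D = Q ∨ R, and D with C gives E = R.

```python
# One resolution step: find a complementary pair of literals, drop it,
# and return the disjunction (set union) of what remains.

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvent(a, b):
    """Return the resolvent on the first complementary pair found, else None."""
    for lit in a:
        if complement(lit) in b:
            # drop the complementary pair, keep the rest of both clauses
            return (a - {lit}) | (b - {complement(lit)})
    return None

A = frozenset({"P", "Q", "R"})
B = frozenset({"~P", "Q", "R"})
C = frozenset({"~Q", "R"})
D = resolvent(A, B)
print(sorted(D))         # ['Q', 'R']
E = resolvent(C, D)
print(sorted(E))         # ['R']
```

Resolving R with ¬R would return the empty set, i.e. the empty clause that signals a contradiction.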
If A is a formula of predicate calculus, then (x | t) A denotes the formula that results when every occurrence of x in A is substituted by t.

Algorithm Steps

1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in 1.
3. Repeat until either a contradiction is found, no progress can be made, or a predetermined amount of effort has been expended :
   (i)  Select two clauses : call these the parent clauses.
   (ii) Resolve them together : the resolvent will be the disjunction of all the literals of both parent clauses, with the appropriate substitutions performed, and with the following exception. If there is one pair of literals T1 and ¬T2 such that one of the parent clauses contains T1 and the other contains T2, and if T1 and T2 are unifiable, then neither T1 nor T2 should appear in the resolvent. We call T1 and T2 complementary literals. Use the substitution produced by the unification to create the resolvent. If there is more than one pair of complementary literals, only one pair should be omitted from the resolvent.
   (iii) If the resolvent is the empty clause, then a contradiction has been found. If it is not, then add it to the set of clauses available to the procedure.

4.19.3 University Solved Examples

UEx. 4.19.1 : Perform resolution on the set of clauses A : P ∨ Q ∨ R, B : ¬P ∨ R, C : ¬Q, D : ¬R.

Soln. :
A : P ∨ Q ∨ R      (given)
B : ¬P ∨ R         (given)
X : Q ∨ R          (resolvent of A and B)
C : ¬Q             (given)
Y : R              (resolvent of X and C)
D : ¬R             (given)
Z : NIL            (resolvent of Y and D)

The deduction tree is :

P ∨ Q ∨ R      ¬P ∨ R
       \        /
        Q ∨ R      ¬Q
            \      /
              R      ¬R
               \    /
                NIL

Fig. Ex. 4.19.1

UEx. 4.19.2 : Consider the following facts :
1. It is humid.
2. If it is humid, then it is hot.
3. If it is hot and humid, then it will rain.
Prove that it will rain.

Soln. :

Step I : Propositional symbols
It is humid : H
It is hot : O
It will rain : R

Step II : Propositional logic
(i) H      (ii) H → O      (iii) H ∧ O → R

Step III : In CNF form
(i) H      (ii) ¬H ∨ O     (iii) ¬H ∨ ¬O ∨ R

Step IV : We assume the negation
It is not raining, i.e. ¬R

Step V : We form the resolution tree :

¬R      ¬H ∨ ¬O ∨ R
    \       /
    ¬H ∨ ¬O      ¬H ∨ O
          \       /
            ¬H      H
             \     /
               NIL

We conclude that it will rain.
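The refutation procedure of the algorithm above (resolve pairs of clauses until the empty clause appears, or until no progress can be made) can be sketched and applied to UEx. 4.19.2. The clause encoding is our own; deriving the empty clause from H, ¬H ∨ O, ¬H ∨ ¬O ∨ R and the negated goal ¬R confirms that it will rain.

```python
from itertools import combinations

def complement(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(a, b):
    """All clauses obtained by resolving a and b on one complementary pair."""
    out = []
    for lit in a:
        if complement(lit) in b:
            out.append((a - {lit}) | (b - {complement(lit)}))
    return out

def refute(clauses):
    """Return True iff the empty clause is derivable (a contradiction)."""
    clauses = set(clauses)
    while True:
        new = set()
        for a, b in combinations(clauses, 2):
            for r in resolvents(a, b):
                if not r:
                    return True          # empty clause: contradiction found
                new.add(frozenset(r))
        if new <= clauses:
            return False                 # no progress can be made
        clauses |= new

# H, ~H v O, ~H v ~O v R, plus the negated goal ~R (Step IV above).
kb = [frozenset({"H"}), frozenset({"~H", "O"}),
      frozenset({"~H", "~O", "R"}), frozenset({"~R"})]
print(refute(kb))    # True — so "it will rain" follows
```

Since the set of possible clauses over finitely many propositional literals is finite, the loop always terminates with one of the two answers.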
UEx. 4.19.3 (MU - Q. 4(a), Dec. 17, 10 Marks)
Consider the following axioms :
All people who are graduating are happy.
All happy people smile.
Someone is graduating.
(i)   Represent these axioms in first-order predicate logic.
(ii)  Convert each formula to clause form.
(iii) Prove that "someone is smiling" using the resolution technique. Draw the resolution tree.

Soln. :

Step I : Symbolic logic
x = people, G = graduating people, H = happy people, S = smiling people

Step II : First-order predicate logic
(i)   ∀x : G (x) → H (x)
(ii)  ∀x : H (x) → S (x)
(iii) ∃x : G (x)

Step III : In clause form
(i)   ¬G (x) ∨ H (x)
(ii)  ¬H (x) ∨ S (x)
(iii) G (A)      (A is a Skolem constant replacing the existentially quantified variable)

Step IV : To prove that someone is smiling, we negate the conclusion, adding ¬S (x), and resolve :

¬S (x)      ¬H (x) ∨ S (x)
     \          /
      ¬H (x)      ¬G (x) ∨ H (x)
           \          /
            ¬G (x)      G (A)
                \        /
                  NIL

Fig. Ex. 4.19.3

We reach the empty clause, so the assumption is wrong; hence, someone is smiling.

UEx. 4.19.4 (MU - Q. 4(a), Dec. 18, 10 Marks)
Consider the statements : Mammals drink milk. Man is mortal. Man is a mammal. Tom is a man. Prove that Tom drinks milk.

Soln. :

Step I : We have :
Tom is a man. Man is a mammal. Mammals drink milk.
So we have to establish that Tom drinks milk. First we write down the implication propositions :
M : Mammals drink milk.      Mammal (Tom) → drink (Tom, Milk)
A : Man is mortal.           Man (Tom) → mortal (Tom)
N : Man is a mammal.         Man (Tom) → mammal (Tom)
S : Tom is a man.            Man (Tom)
Goal : Tom drinks milk, i.e. drink (Tom, Milk).

Step II : We note that
(i)   Mammal (Tom) → drink (Tom, Milk)
(ii)  Man (Tom) → mortal (Tom)
(iii) Man (Tom) → mammal (Tom)
are propositions, and Man (Tom) is an assertion.

Step III : Now, in disjunction form :
(i)   ¬mammal (Tom) ∨ drink (Tom, Milk)
(ii)  ¬man (Tom) ∨ mortal (Tom)
(iii) ¬man (Tom) ∨ mammal (Tom)

Step IV : Resolution tree. We negate the goal, adding ¬drink (Tom, Milk) :

¬drink (Tom, Milk)      ¬mammal (Tom) ∨ drink (Tom, Milk)
            \                  /
            ¬mammal (Tom)      ¬man (Tom) ∨ mammal (Tom)
                    \               /
                    ¬man (Tom)      man (Tom)
                          \           /
                             NIL

Fig. Ex. 4.19.4

We have arrived at the empty clause; hence, Tom drinks milk.

Consider next the example of proving that "West is criminal".

Step I : Converting the statements to FOPL
1. It is a crime for an American to sell weapons to hostile nations :
   FOPL : American (x) ∧ Weapon (y) ∧ Sells (x, y, z) ∧ Hostile (z) → Criminal (x)
2. Nono has some missiles : ∃x Owns (Nono, x) ∧ Missile (x)
   FOPL : Owns (Nono, M) and Missile (M)
3. All of its missiles were sold to Nono by Colonel West :
   FOPL : Missile (x) ∧ Owns (Nono, x) → Sells (West, x, Nono)
4. Missiles are weapons :
   FOPL : Missile (x) → Weapon (x)
5. An enemy of America counts as hostile :
   FOPL : Enemy (x, America) → Hostile (x)
6. West is American :
   FOPL : American (West)
7. The country Nono is an enemy of America :
   FOPL : Enemy (Nono, America)

Step II : To represent in CNF (using disjunction) :
3. ¬Enemy (x, America) ∨ Hostile (x)
4. ¬Missile (x) ∨ Weapon (x)
5. Owns (Nono, M)
6. Missile (M)
7. American (West)
8. Enemy (Nono, America)
9. ¬Criminal (West)      (negation of the conclusion)

Step IV : Conclusion
We discard the assumption that West is not criminal; hence, we conclude that "West is criminal".

UEx. 4.19.6
Consider the following axioms :
All people who are graduating are happy.
All happy people smile.
Someone is graduating.
(i)   Represent these axioms in FOL.
(ii)  Convert each formula to CNF.
(iii) Prove that someone is smiling using the resolution technique. Draw the resolution tree.

Soln. :

Step I : Converting the given axioms into First Order Logic (F.O.L.)
Let x stand for people :
(a) ∀x : graduating (x) → happy (x)
(b) ∀x : happy (x) → smile (x)
(c) Someone is graduating : ∃x : graduating (x)

Step II : Converting First Order Logic (F.O.L.) to conjunctive normal form (C.N.F.)
Note that (x → y) is equivalent to (¬x ∨ y).
(a) ¬graduating (x) ∨ happy (x)
(b) ¬happy (x1) ∨ smile (x1)
(c) graduating (x2)

Step III : To show that ∃x3 smile (x3), we negate the statement, adding ¬smile (x3), and form the resolution tree :

¬smile (x3)      ¬happy (x1) ∨ smile (x1)
        \               /
        ¬happy (x1)      ¬graduating (x) ∨ happy (x)
               \               /
               ¬graduating (x)      graduating (x2)
                        \              /
                         NIL (null set)

Fig. Ex. 4.19.6

Our assumption is wrong; ∴ someone is smiling.
• Pr~ve·tha~~~ m~e ~ sm.ilirig us· .resolutio~ 4.19.4 Ualfkadoa a. r
~~hnig ue. Ql:a"Y.,.the resolution ~=--........................ J
@ Soln.: 1. It is the process of finding substitutions for liftm
inference rules, which can make different logi:al
► Step I : Converting the given axioms in First expression to look similar (identical)
Order Logic (F.O.L)
2. Unification is a procedure for determfla& I
(i) Let x stand for people :
substitutions needed to make two fust order logic
(a) Vx : graduati ng (x) ➔ happy (x) , •r, r expressions match.
(b) Vx : happy (x) ➔ smile (x) .. 3. Unification is important component of all ftnf
I
(c) Someon e is graduati ng: 3x: graduating (x)
►
order logic Inference algorithms.
I
Step II : 4. The unification algorithm takes two sentenceS and
Convert ing First Order Logic (F.O.L.) to
returns a unifier for them, if one exists. I
conjucti ve normal form (C. N. F.) Unifier : A substitution that make two cJ4lllll
I
Note that (x ➔ y) is equivalent to (-,x vy) resolvable is calkd
a unifier and the procdl (
identifyin g au.ch unifiers is carried out 1,y,,_tJ,,. I
.•. (a) ., graduating (x) v happy (x)
unification algorithm.
(b) . , happy (x 1) v smile (x1) ,1~1111 r
The unification algorithm tries to find out ~ I
(c) graduating (x:z) most General Unltler (MGU) between a giv~ set rJ
r ' I
, II
atomic formulae. Any substitution that makes 2 .-1
more ex ression ual is called as ve • • lineal:
;I
(MS-126)
Iii Tech-Neo Publications...A SACHIN SHAH I/_,,. J
I
I
_J
Algorithm : Unify (L1, L2)

1. If L1 or L2 are both variables or constants, then :
   (a) If L1 and L2 are identical, then return NIL.
   (b) Else, if L1 is a variable, then if L1 occurs in L2 return {FAIL}, else return (L2 / L1).
   (c) Else, if L2 is a variable, then if L2 occurs in L1 return {FAIL}, else return (L1 / L2).
   (d) Else return {FAIL}.
2. If the initial predicate symbols in L1 and L2 are not identical, then return {FAIL}.
3. If L1 and L2 have a different number of arguments, then return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the substitutions used to unify L1 and L2.)
5. For i ← 1 to the number of arguments in L1 :
   (a) Call Unify with the ith argument of L1 and the ith argument of L2, putting the result in S.
   (b) If S contains FAIL, then return {FAIL}.
   (c) If S is not equal to NIL, then :
       (i)  Apply S to the remainder of both L1 and L2.
       (ii) SUBST := APPEND (S, SUBST).
6. Return SUBST.

4.19.5 Conflict Resolution

Definition : The conflict set is the set of rules that have their conditions satisfied by working memory elements. Conflict resolution normally selects a single rule to fire.

The popular conflict resolution mechanisms are :
1. Refractoriness    2. Recency    3. Specificity

1. Refractoriness : A rule should not be allowed to fire more than once on the same data. Executed rules are discarded from the conflict set. This prevents undesired loops.
2. Recency : Rank instantiations in terms of the recency of the elements in the premise of the rule. Rules which use more recent data are preferred. Working memory elements are time-tagged, indicating at what cycle each fact was added to working memory.
3. Specificity : Rules which have a greater number of conditions, and are therefore more difficult to satisfy, are preferred to more general rules with fewer conditions. More specific rules are 'better' because they take more of the data into account.

4.19.6 Refutation

UQ : Explain resolution by refutation with a suitable example.
UQ : Explain conflict resolution and resolution by refutation with an example. OR What is conflict resolution? Illustrate with an example in any production system.

Refutation is a technique that a resolution procedure uses to prove a statement, i.e., an attempt to show that the negation of the statement produces a contradiction with known statements.

We consider an example. The following statements are assumed to be true :
1. Steve only likes easy courses.
2. Science courses are hard.
3. All the courses in the basket-weaving department are easy.
4. BK301 is a basket-weaving course.
We ask : What course would Steve like?
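The Unify algorithm above can be sketched in Python. This is an illustrative simplification, not the text's exact procedure: variables are lowercase strings, a term is a (predicate, arguments...) tuple, the substitution is returned as a dict rather than the SUBST list, and the occurs check of steps 1(b)-(c) is omitted since the example terms are flat.

```python
# A minimal sketch of Unify(L1, L2) returning an MGU as a dict, or None
# in place of the text's {FAIL}.

def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def unify(l1, l2, subst=None):
    subst = {} if subst is None else subst
    l1, l2 = subst.get(l1, l1), subst.get(l2, l2)
    if l1 == l2:
        return subst                      # step 1(a): identical, nothing to do
    if is_var(l1):
        subst[l1] = l2                    # step 1(b): bind variable to term
        return subst
    if is_var(l2):
        subst[l2] = l1                    # step 1(c)
        return subst
    if isinstance(l1, tuple) and isinstance(l2, tuple):
        if l1[0] != l2[0] or len(l1) != len(l2):
            return None                   # steps 2-3: predicate/arity mismatch
        for a1, a2 in zip(l1[1:], l2[1:]):
            subst = unify(a1, a2, subst)  # step 5: unify arguments in turn
            if subst is None:
                return None               # FAIL propagates
        return subst
    return None                           # step 1(d): distinct constants

# score(PWI, India, x) unified with score(y, India, 20):
mgu = unify(("score", "PWI", "India", "x"), ("score", "y", "India", "20"))
print(mgu)    # {'y': 'PWI', 'x': '20'}
```

Applying the returned substitution to either expression yields score (PWI, India, 20), which is what makes the two clauses resolvable.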
~ lntell 'R:e IMU-AI & DS I Elecironlell)
. ,!5'.??M$e and Reasonl?81, ..Pape No_t ~
A Resolution Proof :

The predicate logic encoding of the premises of the problem is as follows :
1. ∀(x) easy(x) → likes(steve, x)
2. ∀(x) science(x) → ¬easy(x)
3. ∀(x) basketweaving(x) → easy(x)
4. basketweaving(BK301)
The conclusion is encoded as likes(steve, x).

First we put our premises in clause form and add the negation of the conclusion to our set of clauses (we use numbers in parentheses to number the clauses) :
1. ¬easy(x) or likes(steve, x)
2. ¬science(x) or ¬easy(x)
3. ¬basketweaving(x) or easy(x)
4. basketweaving(BK301)
5. ¬likes(steve, x)

The proof is obtained by the following sequence of resolutions (each step is given the number of the resolvent generated in that step; "1 and 5" means that we resolve clauses (1) and (5)) :
6. 1 and 5 yield the resolvent ¬easy(x).
7. 3 and 6 yield the resolvent ¬basketweaving(x).
8. 4 and 7 yield the empty clause; the substitution x/BK301 is produced by the unification algorithm, which says that the only wff of the form likes(steve, x) which follows from the premises is likes(steve, BK301).

• Thus, resolution gives us a way to find additional assumptions (in this case x = BK301) which make the conclusion true.
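The derivation above can be traced mechanically. The sketch below is a minimal resolution-by-refutation prover, just expressive enough for this example: literals are (sign, predicate, argument) triples, 'x' is the only variable, and the constant steve is folded into the predicate name for brevity. It is illustrative, not a general first-order theorem prover.

```python
# Minimal resolution sketch for the Steve / basket-weaving example.
# A literal is (sign, predicate, argument); sign False means negated.

def unify_arg(a, b):
    """Return a substitution dict unifying two arguments, or None."""
    if a == b:
        return {}
    if a == "x":               # variable vs constant
        return {"x": b}
    if b == "x":
        return {"x": a}
    return None                # two distinct constants: no unifier

def substitute(clause, s):
    return frozenset((sign, pred, s.get(arg, arg)) for sign, pred, arg in clause)

def resolve(c1, c2):
    """Yield (resolvent, substitution) for each complementary literal pair."""
    for lit1 in c1:
        for lit2 in c2:
            sign1, pred1, arg1 = lit1
            sign2, pred2, arg2 = lit2
            if pred1 == pred2 and sign1 != sign2:
                s = unify_arg(arg1, arg2)
                if s is not None:
                    yield substitute((c1 - {lit1}) | (c2 - {lit2}), s), s

# Clause form of the premises plus the negated conclusion ¬likes(steve, x).
clauses = [
    frozenset({(False, "easy", "x"), (True, "likes", "x")}),         # (1)
    frozenset({(False, "science", "x"), (False, "easy", "x")}),      # (2)
    frozenset({(False, "basketweaving", "x"), (True, "easy", "x")}), # (3)
    frozenset({(True, "basketweaving", "BK301")}),                   # (4)
    frozenset({(False, "likes", "x")}),                              # (5)
]

# Follow the derivation in the text: (1,5) -> ¬easy(x);
# (3,6) -> ¬basketweaving(x); (4,7) -> empty clause.
c6, _ = next(resolve(clauses[0], clauses[4]))
c7, _ = next(resolve(clauses[2], c6))
c8, s = next(resolve(clauses[3], c7))
print(c8 == frozenset(), s)    # True {'x': 'BK301'}
```

The empty clause with binding x/BK301 confirms the refutation: the premises together with ¬likes(steve, x) are contradictory exactly when x = BK301.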
CHAPTER 5

Reasoning Under Uncertainty
Syllabus topic : Uncertain Knowledge and Reasoning; Handling Uncertain Knowledge

5.1 REASONING

• In a reasoning system, there are several types of uncertainty. Reasoning-under-uncertainty research in AI is focused on uncertainty of truth value, in order to find values other than True and False.
• To develop a system that reasons with uncertainty means to provide the following :
1. An explanation of the origin and nature of the uncertainty.
2. A formal language in which to represent uncertainty.
3. A set of inference rules that derive uncertain conclusions.
4. An efficient memory-control mechanism for uncertainty management.

5.1.1 Non-monotonic Logics

• A reasoning system is monotonic if the truthfulness of a conclusion does not change when new information is added to the system.
• In contrast, in a system doing non-monotonic reasoning the set of conclusions may either grow or shrink when new information is obtained.
• Simply speaking, the truth values of propositions in a non-monotonic logic can be classified into the following types :
1. Facts that are definitely true, such as 'A crow is a bird'.
2. Default rules that are normally true, such as 'Birds fly'.
3. Tentative conclusions that are true, such as 'The crow flies'.
• Remark : When an inconsistency is recognised, only the truth value of the last type is changed.

INTRODUCTION TO RANDOM VARIABLES

• We daily come across sentences like :
1. Possibly, it will rain tonight.
2. There is a high chance of my getting the job in July.
In the above sentences, words like 'possibly' and 'high chance' indicate a degree of uncertainty about the happening of the event.
• A numerical measure of uncertainty is provided by a very important branch of mathematics called the 'theory of probability'.
• Broadly, there are three possible states of expectation : 'certainty', 'impossibility' and 'uncertainty'.
• The probability theory describes certainty by 1, impossibility by 0 and the various grades of uncertainty by coefficients ranging between 0 and 1.
• According to Ya-Lin Chou, "Probability is the science of decision-making with calculated risks in the face of uncertainty."

5.2 BASIC TERMINOLOGY

Here we explain the various terms which are used in the definition of probability :
1. Random experiment
2. Outcome
3. Trial and event
4. Exhaustive events or cases
5. Favourable events or cases
6. Mutually exclusive events
7. Equally likely events
8. Independent events
9. Joint and conditional events

► 1. Random experiment
If, in each trial of an experiment conducted under identical conditions, the outcome is not unique but may be any one of the possible outcomes, then such an experiment is called a random experiment, e.g., selecting a card from a pack of playing cards.
Artificial Intelligence (MU-AI & DS / Electronics)    Reasoning Under Uncertainty ... Page No. 5-3
► 2. Outcome
The result of a random experiment is called an outcome.

► 3. Trial and event
(i) Any particular performance of a random experiment is called a trial, and an outcome or a combination of outcomes is called an event.
(ii) If a coin is tossed, we may get head or tail. Thus tossing of a coin is a random experiment or trial, and getting head or tail is an event.

► 4. Exhaustive events or cases
(i) The total number of possible outcomes of a random experiment is known as the exhaustive events or cases.
(ii) For example, in tossing a die, there are 6 exhaustive cases.

► 5. Favourable events or cases

► 8. Independent events

UQ. State and explain : Independent events.

(i) Several events are said to be independent if the happening (or non-happening) of an event is not affected by the happening (or non-happening) of the remaining events.
(ii) For example, when a die is thrown twice, the result of the first throw does not affect the result of the second throw.

► 9. Joint and conditional events

UQ. State and explain : Joint and conditional events.

□ Definition : Two events X and Y are said to be independent if P(X) ≠ 0, P(Y) ≠ 0, and if
P(X | Y) = P(X) and P(Y | X) = P(Y)

☞ Joint events :
Let X and Y be two events; then the happening of
LIMITATIONS OF CLASSICAL PROBABILITY THEORY

5.7 TERMS USED IN AXIOMATIC THEORY

(1) Sample space
(i) The set of all possible outcomes of a given random experiment is called the sample space associated with that experiment.
(ii) Each possible outcome or element in a sample space is called a sample point or an elementary event.
(iii) The number of sample points in the sample space is denoted by n(S).
• As the empty set φ is a subset of S, φ is also an event, known as the impossible event.

5.8 ALGEBRA OF EVENTS A, B, C

(i) A∪B = {c ∈ S | c ∈ A or c ∈ B}
(ii) A∩B = {c ∈ S | c ∈ A and c ∈ B}
(iii) Ā (A complement) = {c ∈ S | c ∉ A}
(iv) A − B = {c ∈ S | c ∈ A but c ∉ B}
(v) A ⊂ B ⇒ every c ∈ A is also in B; A ⊂ B ⇒ B ⊃ A
(vi) A = B if and only if A and B have the same elements.
(vii) A and B disjoint (mutually exclusive) ⇒ A∩B = φ (empty set)
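The event-algebra identities above, together with the De Morgan and distribution laws that follow, can be checked directly with Python sets; here S is a small sample space and A, B, C are events (the particular sets are illustrative).

```python
# Checking the algebra of events with Python set operations.
S = set(range(1, 11))            # sample space {1, ..., 10}
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {6, 7, 8}

comp = lambda X: S - X           # complement relative to S

print(A | B)                                   # union A∪B
print(A & B)                                   # intersection A∩B
print(A - B)                                   # difference: in A but not in B
print(comp(A | B) == comp(A) & comp(B))        # De Morgan's law: True
print(comp(A & B) == comp(A) | comp(B))        # De Morgan's law: True
print((A | (B & C)) == ((A | B) & (A | C)))    # distribution law: True
print((A & (B | C)) == ((A & B) | (A & C)))    # distribution law: True
```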
(viii) A∪B = A + B if A and B are disjoint.
(ix) A△B denotes those c belonging to exactly one of A and B :
A△B = AB̄ ∪ ĀB = AB̄ + ĀB (disjoint events)
(x) De Morgan's Laws : the complement of A∪B is Ā∩B̄, and the complement of A∩B is Ā∪B̄.
(xi) Laws of distribution :
A∪(B∩C) = (A∪B) ∩ (A∪C)
A∩(B∪C) = (A∩B) ∪ (A∩C)

Ex. 5.8.1 : A, B and C are three arbitrary events. Find expressions for the events noted below, in the context of A, B and C.
(i) Only A occurs.
(ii) Both A and B, but not C, occur.
(iii) All three events occur.
(iv) At least one occurs.
(v) At least two occur.
(vi) One and no more occurs.
(vii) Two and no more occur.
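The standard set expressions for these events (e.g. "only A occurs" is A∩B̄∩C̄) can be checked by brute force: the sketch below enumerates the 8 possible membership patterns of an outcome with respect to A, B and C and counts the patterns satisfying each description.

```python
# Brute-force check of the event expressions in Ex. 5.8.1.
from itertools import product

# Each outcome is characterised by (in_A, in_B, in_C).
patterns = list(product([False, True], repeat=3))   # 8 membership patterns

def count(pred):
    return [p for p in patterns if pred(*p)]

only_A        = count(lambda a, b, c: a and not b and not c)  # (i)   A ∩ B̄ ∩ C̄
A_B_not_C     = count(lambda a, b, c: a and b and not c)      # (ii)  A ∩ B ∩ C̄
all_three     = count(lambda a, b, c: a and b and c)          # (iii) A ∩ B ∩ C
at_least_one  = count(lambda a, b, c: a or b or c)            # (iv)  A ∪ B ∪ C
at_least_two  = count(lambda a, b, c: (a and b) or (b and c) or (a and c))  # (v)
exactly_one   = count(lambda a, b, c: (a + b + c) == 1)       # (vi)
exactly_two   = count(lambda a, b, c: (a + b + c) == 2)       # (vii)
none          = count(lambda a, b, c: not (a or b or c))      # (viii) Ā ∩ B̄ ∩ C̄

print(len(at_least_one), len(exactly_one), len(exactly_two), len(none))  # 7 3 3 1
```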
5.8.1 Table of Probability Terms
5.10 THEOREMS ON PROBABILITY OF EVENTS

► Theorem (1) : Probability of an impossible event φ is zero, i.e., P(φ) = 0.
Proof : ∵ S∪φ = S,
P(S∪φ) = P(S) = 1 (Axiom 2)
P(S) + P(φ) = 1 (Axiom 3)
∴ P(φ) = 0

(2) Complementary events : The events A and Ā, where Ā is the complement of A in S, are called complementary events.

► Theorem (2) : P(Ā) = 1 − P(A)
Proof : We have A∪Ā = S
∴ P(A∪Ā) = P(S) = 1
P(A) + P(Ā) = 1 (∵ A and Ā are disjoint)
∴ P(Ā) = 1 − P(A)

5.11 SOLVED EXAMPLES ON AXIOMATIC PROBABILITY

Ex. 5.11.1 : A, B, C are bidding for a contract. A has exactly half the chance that B has; B, in turn, is 4/5 as likely as C to win the contract. What is the probability for each to win, if the contract is to be given to one of them?

Soln. :
► Step (I) : Since the events A, B and C are exclusive,
P(A) + P(B) + P(C) = 1 ...(i)
Now, P(A) = (1/2) P(B) ...(ii)
and P(B) = (4/5) P(C) ...(iii)
► Step (II) : Let p be the probability of C. From (i),
(1/2)(4/5) p + (4/5) p + p = 1
(2/5 + 4/5 + 1) p = 1
(11/5) p = 1
∴ p = 5/11
∴ P(A) = 2/11, P(B) = 4/11, P(C) = 5/11 ...Ans.

Ex. 5.11.2 : A ball is drawn at random from a box containing 12 red, 18 white, 10 blue and 15 orange balls. Find the probability that (i) it is red or blue, (ii) it is white, blue or orange, (iii) it is neither white nor orange.

Soln. :
► Step (I) : Let A, B, C and D be the events of drawing red, white, blue and orange balls respectively. Total number of balls = 12 + 18 + 10 + 15 = 55.
Probability of drawing one red ball is
P(A) = 12C1 / 55C1 = 12/55
Similarly,
P(B) = 18C1 / 55C1 = 18/55, P(C) = 10C1 / 55C1 = 10/55 and P(D) = 15C1 / 55C1 = 15/55
► Step (II) :
(i) Probability of a red or blue ball
= P(A∪C) = P(A) + P(C) (∵ A and C are mutually disjoint)
= 12/55 + 10/55 = 22/55 = 2/5
(ii) Probability of white, blue or orange
= P(B∪C∪D) = P(B) + P(C) + P(D) (∵ B, C, D are mutually disjoint)
= 18/55 + 10/55 + 15/55 = 43/55
(iii) Probability of neither white nor orange
= 1 − P(B∪D) = 1 − [P(B) + P(D)]
= 1 − [18/55 + 15/55] = 1 − 33/55 = 22/55 = 2/5 ...Ans.
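The answers of Ex. 5.11.2 can be verified by enumerating the box contents with exact fractions:

```python
# Verifying Ex. 5.11.2 with exact rational arithmetic.
from fractions import Fraction

box = {"red": 12, "white": 18, "blue": 10, "orange": 15}
total = sum(box.values())                      # 55 balls in all

def p(*colours):
    """Probability that the drawn ball has one of the given colours."""
    return Fraction(sum(box[c] for c in colours), total)

print(p("red", "blue"))                        # 2/5
print(p("white", "blue", "orange"))            # 43/55
print(1 - p("white", "orange"))                # 2/5  (neither white nor orange)
```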
Ex. 5.11.3 : Seven persons, including A and B, stand in a row at random. Find the probability that there are exactly two persons between A and B.

Soln. :
► Step (I) : [Note : This is a problem on permutations.]
nPr = n! / (n − r)!
Seven persons can stand in a row in
7P7 = 7! / (7 − 7)! = 7!/0! = 7! ways (∵ 0! = 1) ...(i)
If there are two persons between A and B, then there are the following 4 cases :
A * * B * * *
* A * * B * *
* * A * * B *
* * * A * * B
► Step (II) : In these 4 ways, A and B can interchange their positions. Hence there are 2 × 4 = 8 ways, and the remaining five persons can stand in 5! ways. The total number of cases is n = 7!.
∴ Required probability = m/n = (8 × 5!)/7! = 8/(7 × 6) = 4/21 ...Ans.

Ex. 5.11.4 : If two dice are thrown, what is the probability that the sum is greater than 8?

Soln. :
► Step (I) : Let S denote the sum on the two dice; then we want P(S > 8). The required event can happen in the following mutually exclusive ways :
(i) S = 9, (ii) S = 10, (iii) S = 11, (iv) S = 12.
∴ By the addition theorem,
P(S > 8) = P(S = 9) + P(S = 10) + P(S = 11) + P(S = 12) ...(i)
► Step (II) : In a throw of two dice, the sample space contains 6² = 36 points.
The numbers of favourable cases are as follows :
S = 9 : (3,6), (6,3), (4,5), (5,4), i.e. 4 sample points. ∴ P(S = 9) = 4/36
S = 10 : (4,6), (6,4), (5,5), i.e. 3 sample points. ∴ P(S = 10) = 3/36
S = 11 : (5,6), (6,5), i.e. 2 sample points. ∴ P(S = 11) = 2/36
S = 12 : (6,6), i.e. 1 sample point. ∴ P(S = 12) = 1/36
Required probability = 4/36 + 3/36 + 2/36 + 1/36 = 10/36 = 5/18 ...Ans.

Ex. 5.11.5 : A card is drawn from a pack of 52 cards. Find the probability of getting a king or a heart or a red card.

Soln. : Let
A = the card drawn is a king
B = the card drawn is a heart
C = the card drawn is a red card
Note that A, B, C are not mutually exclusive.
A∩B = the card drawn is the king of hearts; ∴ n(A∩B) = 1, ∴ P(A∩B) = 1/52
B∩C = B : the card drawn is a heart (∵ B ⊂ C); ∴ n(B∩C) = 13, ∴ P(B∩C) = 13/52
C∩A : the card drawn is a red king; n(C∩A) = 2, ∴ P(C∩A) = 2/52
∴ By the addition theorem,
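Ex. 5.11.4 can also be checked by enumerating the 36 equally likely outcomes of two dice:

```python
# Brute-force check of the two-dice probability P(sum > 8) = 5/18.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))       # 36 sample points
favourable = [o for o in outcomes if sum(o) > 8]
print(len(favourable), Fraction(len(favourable), len(outcomes)))   # 10 5/18
```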
► Step (II) : Total number of ways of getting the sum 10 is 6 + 6 + 6 + 3 + 3 + 3 = 27 ways.
∴ Probability = 27/216 = 1/8 ...Ans.

Proof : P(A∪B∪C) = P[A∪(B∪C)]
= P(A) + P(B∪C) − P[A∩(B∪C)]
= P(A) + P(B) + P(C) − P(B∩C) − P[(A∩B)∪(A∩C)]

Soln. :
∵ A1∪A2∪…∪Ak = S,
∴ P(A1∪A2∪…∪Ak) = P(S) = 1
∵ the events are mutually exclusive,
P(A1) + P(A2) + … + P(Ak) = 1 ...Ans.

Ex. 5.12.2 : P(A∩B) ≥ P(A) + P(B) − 1.
Soln. :
∵ P(A∪B) ≤ 1,
P(A) + P(B) − P(A∩B) ≤ 1
∴ P(A∩B) ≥ P(A) + P(B) − 1

Ex. 5.12.4 : If A ⊆ B, then prove that
(i) P(Ā∩B) = P(B) − P(A) (ii) P(A) ≤ P(B)
Soln. :
(i) From the Venn diagram, B = A ∪ (Ā∩B), where A and Ā∩B are disjoint, so P(B) = P(A) + P(Ā∩B), i.e., P(Ā∩B) = P(B) − P(A).
(ii) Since P(Ā∩B) ≥ 0,
∴ P(B) − P(A) ≥ 0
∴ P(B) ≥ P(A)

Ex. 5.12.5 : If A and B are any two events, the probability that exactly one of them will occur is given by
P((A∩B̄) ∪ (Ā∩B)) = P(A) + P(B) − 2P(A∩B)
Soln. : Since A∩B̄ and Ā∩B are mutually exclusive,
P((A∩B̄) ∪ (Ā∩B)) = P(A∩B̄) + P(Ā∩B) = [P(A) − P(A∩B)] + [P(B) − P(A∩B)] = P(A) + P(B) − 2P(A∩B)

Ex. 5.12.6 : Show that P(A∩B) ≥ 1 − P(Ā) − P(B̄).
Soln. : We have
P(A∩B) ≥ P(A) + P(B) − 1 = [1 − P(Ā)] + [1 − P(B̄)] − 1 = 1 − P(Ā) − P(B̄)
Remarks :
(1) If P(B|A) = P(B), then events A and B are said to be independent. In this case, knowledge about either event does not alter the likelihood of the other.
(2) P(A|B) (the conditional probability of A given B) differs from P(B|A).
• Conditional probability is the likelihood of an event or outcome occurring, based on the occurrence of a previous event or outcome. It is calculated by multiplying the probability of the preceding event by the updated probability of the succeeding, or conditional, event.
• For example : Event A : There is an 80% chance that an individual applying for college will be admitted.
• For example : If a person has viral fever, he might have a 90% chance of being tested as positive. Here the probability of A (tested positive), given that B (has viral fever) has occurred, is 90%. We write
P(A|B) = 90/100 = 0.9
(i) If A and B are any two events of a sample space S, and F is an event of S such that P(F) ≠ 0, then
P((A∪B) | F) = P(A|F) + P(B|F) − P((A∩B) | F)
Proof : We have
P[(A∪B) | F] = P[(A∪B) ∩ F] / P(F)
= P[(A∩F) ∪ (B∩F)] / P(F) (using the distributive law)
= [P(A∩F) + P(B∩F) − P(A∩B∩F)] / P(F)
= P(A∩F)/P(F) + P(B∩F)/P(F) − P(A∩B∩F)/P(F)
= P(A|F) + P(B|F) − P((A∩B) | F)

5.14 INDEPENDENT EVENTS

Two or more events are said to be independent if the happening or non-happening of any one of them does not affect the happening of the others.
□ Definition : The events A, B are independent if
P(A∩B) = P(A) · P(B)

► Theorem (1) : If the events A, B are such that P(A) ≠ 0, P(B) ≠ 0 and A is independent of B, then B is independent of A.
Proof : Let A be independent of B; then P(A|B) = P(A).
Now, P(A∩B) = P(A|B) · P(B) = P(A) · P(B) ...(5.14.1)
Also, P(B|A) = P(A∩B)/P(A) = P(A) · P(B)/P(A) (∵ A∩B = B∩A)
∴ P(B|A) = P(B)
∴ B is independent of A.

► Theorem (2) : For any event A in S,
(i) A and the null event φ are independent
(ii) A and S are independent
Proof : (i) P(A∩φ) = P(φ) = 0 = P(A) · P(φ)
∴ A and φ are independent.
(ii) P(A∩S) = P(A) = P(A) · 1 = P(A) · P(S)
∴ A and S are independent.

► Theorem (3) : Multiplication theorem of probability for independent events :
If A and B are two events with P(A) ≠ 0, P(B) ≠ 0, then A and B are independent if P(A∩B) = P(A) · P(B).
Proof : ∵ A and B are independent,
∴ P(A|B) = P(A) and P(B|A) = P(B)
Now, P(A∩B) = P(A) · P(B|A) = P(A) · P(B)
And P(A∩B) = P(B) · P(A|B) = P(B) · P(A)

► Theorem (4) : For any three events A, B, C defined on the sample space S such that B ⊂ C and P(A) > 0,
P(B|A) ≤ P(C|A)

☞ Independence
• In discussing conditional probability, we considered a pair of events A, B which could be inter-related; one of them could give information about the other.
• It is just possible to carry out trials and define two events in such a way that one event gives no information at all about the other.
• For example : Suppose our trial is a game in which a fair die is thrown once and a fair coin is tossed twice. We define two events as follows :
A : the score with the die is 3
B : both tosses of the coin give tails
• Since A and B refer to the same trial, they occur together. But the information on what happened to the die is of no help at all in predicting what may have happened to the coin.
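The die-and-coin example above can be checked by enumerating the joint sample space: A (die shows 3) and B (both coin tosses are tails) satisfy P(A∩B) = P(A)·P(B), so they are independent.

```python
# Independence of the die event and the coin event, by enumeration.
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), "HT", "HT"))     # 6 * 2 * 2 = 24 outcomes

def prob(pred):
    return Fraction(sum(1 for o in space if pred(o)), len(space))

pA  = prob(lambda o: o[0] == 3)                    # die shows 3 -> 1/6
pB  = prob(lambda o: o[1] == "T" and o[2] == "T")  # both tails  -> 1/4
pAB = prob(lambda o: o[0] == 3 and o[1] == "T" and o[2] == "T")
print(pA, pB, pAB, pAB == pA * pB)                 # 1/6 1/4 1/24 True
```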
P(A∩B) = P(A) · P(B) ...(5.14.2)
(i) Now, P(A∩B̄) = P(A) − P(A∩B)
= P(A) − P(A) · P(B) ...from (5.14.2)
= P(A) [1 − P(B)]
∴ P(A∩B̄) = P(A) · P(B̄), ∴ A and B̄ are independent.
(ii) Again, P(Ā∩B) = P(B) − P(A∩B)
= P(B) − P(A) · P(B) ...from (5.14.2)
= P(B) [1 − P(A)] = P(B) · P(Ā) = P(Ā) · P(B)
(iii) Since Ā∩B̄ is the complement of A∪B,
P(Ā∩B̄) = 1 − P(A∪B)
= 1 − [P(A) + P(B) − P(A∩B)]
= 1 − P(A) − P(B) + P(A) · P(B) ...from (5.14.2)
= [1 − P(B)] − P(A)[1 − P(B)]
= [1 − P(B)][1 − P(A)]
= P(B̄) · P(Ā) = P(Ā) · P(B̄)
∴ Ā and B̄ are also independent.

5.15 PROBABILISTIC REASONING

• Probable sentences are those for which we can assume that they will happen but are not sure about them; for such sentences we use probabilistic reasoning.
• Need of probabilistic reasoning in AI :
1. When there are unpredictable outcomes.
2. When specifications or possibilities of predicates become too large to handle.
3. When an unknown error occurs during an experiment.
• In probabilistic reasoning, there are two ways to solve problems with uncertain knowledge :
1. Bayes' rule
2. Bayesian statistics
• As probabilistic reasoning uses probability and related terms, let us first understand some common terms :
1. Probability : Probability can be defined as the chance that an uncertain event will occur. It is the numerical measure of the likelihood that an event will occur. The value of probability always remains between 0 and 1.
• 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
• P(A) = 0 indicates total uncertainty in an event A.
• P(A) = 1 indicates total certainty in an event A.
• We can find the probability of an uncertain event by using the formula :
Probability of occurrence = (Number of desired outcomes) / (Total number of outcomes)
• P(Ā) = probability of event A not happening.
• P(A) + P(Ā) = 1.
2. Event : Each possible outcome of a variable is called an event.
3. Sample space : The collection of all possible events is called the sample space.
4. Random variables : Random variables are used to represent the events and objects in the real world.
5. Prior probability : The prior probability of an event is the probability computed before observing new information. For example, if the prior probability that I have a cavity is 0.2, then we would write
P(cavity = true) = 0.2 or P(cavity) = 0.2
6. Posterior probability : The probability that is calculated after all evidence or information has been taken into account is called the posterior probability. It is a combination of the prior probability and the new information.
7. Conditional probability : Conditional probability is the probability of an event occurring when another event has already happened.
Suppose we want to calculate the probability of event A when event B has already occurred, "the probability of A under the conditions of B". It can be written as :
P(A|B) = P(A∩B) / P(B)
where P(A∩B) = joint probability of A and B, P(B) = marginal probability of B, and P(B) > 0.
We can write, P(A∩B) = P(A|B) P(B).

5.16 INFERENCE USING FULL JOINT DISTRIBUTIONS

• Probabilistic inference means computing posterior probabilities from observed evidence.
• The knowledge base for answering queries is represented as a full joint distribution.
• The probability distribution on a single variable must sum to 1.
• It is also true that any joint probability distribution on any set of variables must sum to 1.
• Any proposition 'a' is equivalent to the disjunction of all the atomic events in which 'a' holds. Call this set of events e(a).
• Atomic events are mutually exclusive, so the probability of any conjunction of distinct atomic events is zero.
• We have
P(a) = Σ_{ei ∈ e(a)} P(ei)
• Given a full joint distribution that specifies the probabilities of all atomic events, this equation provides a simple method for computing the probability of any proposition.

Full joint distribution for cavity, toothache and catch :

                 toothache              ¬toothache
             catch     ¬catch       catch     ¬catch
cavity       0.108     0.012        0.072     0.008
¬cavity      0.016     0.064        0.144     0.576

• For example, there are six atomic events for (cavity ∨ toothache) :
0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28
• Extracting the distribution over a variable (or some subset of variables), known as the marginal probability, is attained by adding the entries in the corresponding rows or columns.
• For example,
P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2
• In general, we can write the following marginalization (summing-out) rule for any sets of variables Y and Z :
P(Y) = Σ_{z ∈ Z} P(Y, z)
For example,
P(cavity) = Σ_{z ∈ {toothache, catch}} P(cavity, z)
• A variant of this rule involves conditional probabilities instead of joint probabilities, using the product rule :
P(Y) = Σ_{z ∈ Z} P(Y | z) P(z)
This rule is called conditioning.
• Marginalization and conditioning turn out to be useful rules for all kinds of derivations involving probability expressions.
• Computing a conditional probability :
P(cavity | toothache) = P(cavity ∧ toothache) / P(toothache)
= (0.108 + 0.012) / (0.108 + 0.012 + 0.016 + 0.064) = 0.12/0.2 = 0.6
Similarly,
P(¬cavity | toothache) = (0.016 + 0.064) / 0.2 = 0.4
• The two probabilities sum up to one, as they should. In both cases the term 1/P(toothache) = 1/0.2 = 5 remains constant, no matter which value of cavity we calculate. It is a normalization constant ensuring that the distribution P(cavity | toothache) adds up to 1.
• Let α denote the normalization constant.
P(cavity | toothache) = α P(cavity, toothache)
= α [P(cavity, toothache, catch) + P(cavity, toothache, ¬catch)]
= α [⟨0.108, 0.016⟩ + ⟨0.012, 0.064⟩] = α ⟨0.12, 0.08⟩ = ⟨0.6, 0.4⟩
• In other words, we can calculate the conditional probability distribution without knowing P(toothache), using normalization.

Ex. 5.16.1 : In a class, 80% of the students like English and 30% of the students like both English and Mathematics. What is the percentage of students who like English who also like Mathematics?
Soln. :
Let A be the event that a student likes Mathematics, and B the event that a student likes English.
P(A|B) = P(A∩B) / P(B) = 0.3/0.8 = 0.375
Hence, 37.5% of the students who like English also like Mathematics.

Ex. 5.16.2 : The probability that it will be sunny on Friday is 4/5. The probability that an ice cream shop will sell ice creams on a sunny Friday is 2/3, and the probability that the ice cream shop sells ice creams on a non-sunny Friday is 1/3. Find the probability that it will be sunny and the ice cream shop sells ice creams on Friday.
Soln. :
Let S and I denote the events that Friday is sunny and that the shop sells ice creams, respectively. Then
P(S) = 4/5
P(I|S) = 2/3
P(I|S̄) = 1/3
We have to find P(S∩I). Since S and I are dependent events, by the conditional probability formula,
P(S∩I) = P(I|S) · P(S) = (2/3) · (4/5) = 8/15
Answer : The required probability = 8/15.
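Ex. 5.16.2 also illustrates the conditioning rule P(I) = Σ_s P(I|s) P(s): summing over the two weather states gives the unconditional probability that the shop sells ice creams.

```python
# Ex. 5.16.2 with exact fractions, plus the conditioning (total
# probability) rule over the weather.
from fractions import Fraction as F

p_sunny      = F(4, 5)
p_ice_sunny  = F(2, 3)   # P(I | S)
p_ice_cloudy = F(1, 3)   # P(I | not S)

p_sunny_and_ice = p_ice_sunny * p_sunny
print(p_sunny_and_ice)                                   # 8/15

# Conditioning: P(I) = P(I|S) P(S) + P(I|¬S) P(¬S)
p_ice = p_ice_sunny * p_sunny + p_ice_cloudy * (1 - p_sunny)
print(p_ice)                                             # 3/5
```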
Ex. 5.16.3 : The table below shows the occurrence of diabetes in 100 people. Let D and N be the events that a randomly selected person "has diabetes" and "is not overweight". Then find P(D|N).

                       Diabetes (D)    No Diabetes (D̄)
Not overweight (N)          5                45
Overweight (N̄)             17                33

Soln. : Out of the 100 people, n(N) = 5 + 45 = 50 and n(D∩N) = 5.
∴ P(D|N) = n(D∩N) / n(N) = 5/50 = 0.1

• Bayes theorem calculates the probability based on the hypothesis. Now, let us state the theorem and its proof.
• Bayes theorem states that the conditional probability of an event A, given the occurrence of another event B, is equal to the product of the likelihood of B given A and the probability of A, divided by the probability of B.
• It is given as :
P(A|B) = P(B|A) P(A) / P(B)
• Arcs or directed arrows represent the causal relationships or conditional probabilities between random variables. These directed links or arrows connect the pairs of nodes in the graph. These links represent that one node directly influences the other node, and if there is no directed link, that means the nodes are independent of each other.
• In Fig. 5.18.1 below, A, B, C and D are random variables represented by the nodes of the network graph. If we are considering node B, which is connected with node A by a directed arrow, then node A is called the parent of node B. Node C is independent of node A.
• The Bayesian network graph does not contain any cycle. Hence, it is known as a directed acyclic graph or DAG.
• The Bayesian network has mainly two components : the causal component and the actual numbers.
• Each node in the Bayesian network has a conditional probability distribution P(Xi | Parent(Xi)), which determines the effect of the parent on that node.
• A Bayesian network is based on the joint probability distribution and conditional probability. So let's first understand the joint probability distribution.

(1A1) Fig. 5.18.1 : Directed Acyclic Graph

• Let's understand the Bayesian network through an example, by creating a directed acyclic graph.
• Example : Harry installed a new burglar alarm at his home. John always calls Harry when he hears the alarm, but sometimes he gets confused with the phone ringing and calls at that time too. On the other hand, Mary likes to listen to loud music, so sometimes she misses hearing the alarm. Here we would like to compute the probability of the burglary alarm.

Ex. 5.18.1 : Calculate the probability that the alarm has sounded, but neither a burglary nor an earthquake has occurred, and John and Mary both call Harry.

Soln. :
• The Bayesian network for the above problem is given below.
• The network structure is showing that burglary and earthquake are the parent nodes of the alarm, directly affecting the probability of the alarm's going off, but John's and Mary's calls depend on the alarm probability.
• The network represents our assumptions : John and Mary do not directly perceive the burglary, do not notice the minor earthquake, and do not confer before calling.
• The conditional distributions for each node are given as a conditional probability table, or CPT. Each row in the CPT must sum to 1. Hence, if there are two parents, the CPT will contain four rows of probability values.

CPT for Alarm (parents : Burglary B, Earthquake E) :

B     E     P(A = T)    P(A = F)
T     T     0.95        0.05
T     F     0.94        0.06
F     T     0.29        0.71
F     F     0.001       0.999

CPT for MaryCalls (parent : Alarm A) :

A     P(M = T)    P(M = F)
T     0.7         0.3
F     0.01        0.99

(1A2) Fig. P. 5.18.1
• List of all events occurring in this network :
o Burglary (B)
o Earthquake (E)
o Alarm (A)
o John calls (J)
o Mary calls (M)
• We can write the events of the problem statement in the form of probability :
P(M, J, A, ¬B, ¬E) = P(M|A) × P(J|A) × P(A | ¬B ∧ ¬E) × P(¬B) × P(¬E)
= 0.70 × 0.90 × 0.001 × 0.999 × 0.998
= 0.0006281
• Hence, a Bayesian network can answer any query about the domain by using the joint distribution.
joint distribution is a good representation of a
i,. 5, 18.1 The semantics of Bayeslu Network given domain. First, we rewrite the entries in the
joint distribution in terms of conditional
Tbere are two ways to understand the semantics of probability, using the product rule.
tile Bayesian network, which is given below:
p (x1, X2, ..., Xn) = p (Xn I Xn- I• ..., x,) p <Xn- I• •••, X1)
I. To understand the network as the representat ion of • Then we repeat the process, reducing each
the Joint probability distribution.
conjunctive probability to a conditional
, It is helpful to understand how to construct the probability and a smaller conjunction. We end up
network. with one big product:
, One way to define what the network means is to P (xi, X2, ... , xJ = P (x0 I Xn-1 • ... ,xi)
p (Xn- I I Xn-2• ...,Xi) ... p (x2 I Xi) p (x1)
define how it represents a particular joint
D
distribution over all variables. To do so, we must
= TI P (X; I Xi- t. ... , x 1)
first retract what we said earlier about the i= 1
parameters associated with each node.
• This is known as the chain rule. It holds true for
• We stated that those parameters correspond to any set of random variables.
conditional probabilities P(X; I Parents (X; ));
• For every variable X1 in the network,
While this is true, we should think of them as p (Xi I xi - I• •••• X1 ) = p (Xi I Parents (XJ ),
numbers provided that Parents (X1) !:: {Xi_ 1, ..., Xi}.
9 (X; I Parents (X; ) ) until we assign semantics to
• The above equation says that the Bayesian
, the network as a whole. network is a correct representation of the domain
• A generic entry in the joint distribution is the only if each node is conditionally independent of
probability of a conjunction of particular its other predecessors in the node ordering, given
its parents.
ilSsi&nments to each variable, such as P (Xi = x, 11
2. To understand the network as an encoding of a
"· II xn = Xn ). We use the notation p (x, ,..., Xn)
collection of conditional independence statements.
as an abbreviation for this.
It is helpful in designing inference procedure.
...
advantages and disadvantages. They are listed below :

Advantages
• It is able to define, and make decisions for, a tightly coupled problem (e.g., one in which pressure is dependent on the deflection).

1. The monotonicity axiom says that, when comparing two lotteries, each with the same two alternative outcomes but different probabilities, a decision maker should prefer the lottery that has the higher probability of the preferred outcome.
2. The decomposability axiom says that a decision maker should be indifferent between lotteries that have the same set of eventual outcomes, each with the same probabilities, even if they are reached by different means. For example, a lottery whose outcomes are other lotteries can be decomposed into an equivalent one-stage lottery using the standard rules of probability.
3. The substitutability axiom asserts that if a decision maker is indifferent between a lottery and some certain outcome (the certainty equivalent of the lottery), then substituting one for the other as a possible outcome in some more complex lottery should not affect her preference for that lottery.
4. Finally, the continuity axiom says that if one prefers outcome x to y, and y to z, then there is some probability p at which one is indifferent between getting the intermediate outcome y for sure and a lottery with a p chance of x (the best outcome) and a (1 − p) chance of z (the worst outcome).
• The consistency criteria embodied in classical decision theory can be stated as follows : given a set of preferences expressed as a utility function, beliefs expressed as probability distributions, and a set of decision alternatives, a decision maker should choose the course of action that maximizes expected utility.
• Consequently, we focus on expert systems for analytic tasks.
• Decision theory also can be relevant to synthetic tasks, because useful alternatives often must be selected from large numbers of options.

5.19.1 Types of Decision Making

1. Decision making under certainty : The outcome of a decision alternative is known (i.e., there is only one state of nature).
2. Decision making under risk : The outcome of a decision alternative is not known, but its probability is known.
3. Decision making under uncertainty : The outcome of a decision alternative is not known, and even its probability is not known.
A few criteria (approaches) are available for decision makers to select from, according to their preferences and personalities, under uncertainty.

1. Maximax Criterion
• An adventurous and aggressive decision maker may choose the act that would result in the maximum payoff possible.
• This is viewed as an optimistic approach, "best of bests".
► Step 1 : Pick the maximum payoff of each alternative.
► Step 2 : Pick the maximum of those maximums in Step 1; its corresponding alternative is the decision.
(MS-126)  Tech-Neo Publications...A SACHIN SHAH Venture

Artificial Intelligence (MU - AI & DS / Electronics)  Reasoning Under Uncertainty

2. Maximin Criterion
• This is also called the Waldian criterion.
• This criterion of decision making stands for a choice between alternative courses of action assuming a pessimistic view of nature.
• This is viewed as a pessimistic approach, "Best of worsts".
► Step 1 : Pick the minimum payoff of each alternative.
► Step 2 : Pick the maximum of those minimums in Step 1; its corresponding alternative is the decision.

3. Minimax (Regret) Criterion
• Application of the minimax criterion requires a table of losses or a table of regret instead of gains.
• Regret is the amount you give up due to not picking the best alternative in a given state of nature.
► Step 1 : Construct a 'regret table'.
► Step 2 : Pick the maximum regret of each row in the regret table; the alternative whose maximum regret is smallest is the decision.

4. Laplace Criterion (Equal Likelihood)
• The decision maker makes a simple assumption that each state of nature is equally likely to occur, and computes the average payoff for each alternative.
• Choose the decision with the highest average payoff.
► Step 1 : Calculate the average payoff for each alternative.
► Step 2 : The alternative with the highest average is the decision.

5. Hurwicz Alpha Criterion (Rationality or Realism)
• This method is a combination of the Maximin and Maximax criteria.
• Also known as the criterion of rationality: neither too optimistic nor too pessimistic.
► Step 1 : Calculate the Hurwicz value for each alternative.
► Step 2 : Pick the alternative with the largest Hurwicz value as the decision.
Hurwicz value of an alternative = (row max)(α) + (row min)(1 - α), where α (0 ≤ α ≤ 1) is called the coefficient of realism.
Module 6
CHAPTER 6 : Planning and Learning

6.1 The planning problem, Partial order planning, Total order planning.
6.2 Learning in AI, Learning Agent, Concepts of Supervised, Unsupervised, Semi-Supervised and Reinforcement Learning, Ensemble Learning.
6.3 Expert Systems, Components of Expert System : Knowledge base, Inference engine, User interface, Working memory, Development of Expert Systems.
6.1 Components of Planning .......... 6-3
UQ. Explain the different components of a planning system. OR Given the components of a planning system, briefly explain them. OR Explain the various components of a planning system. How can you represent a planning action? .......... 6-3
6.1.1 Problem Solving Vs. Planning .......... 6-4
GQ. Compare Problem Solving and Planning .......... 6-4
6.2 Problems Associated with Planning Agent .......... 6-5
GQ. Write short notes on : (i) Planning agent (ii) State, goal and action representation .......... 6-5
6.2.1 Planning Agent .......... 6-5
6.2.2 Three Key Ideas behind Planning .......... 6-5
6.3 Partial Order Planning .......... 6-5
UQ. Explain a partial order planner with an example (MU - Q. 5(c) Dec. 15; Q. 4(b) May 17; 2019) .......... 6-6
6.3.1 Causal Link in Partial Order Planning .......... 6-6
6.3.2 Working of a Partial Order Planner .......... 6-7
GQ. How does a partial order planner work? .......... 6-7
6.3.3 Example for Partial Order Planner .......... 6-7
GQ. Give an example for a partial order planner .......... 6-8
6.4 Conditional Planning .......... 6-8
6.4.1 Difference in Hierarchical and Conditional Planning .......... 6-9
6.5 Goal Representation .......... 6-10
6.6 Planning with State Space Search .......... 6-10
6.6.1 Forward State-Space Search .......... 6-11
6.6.2 Backward State-Space Search .......... 6-11
6.22.1 Inference Engine (Rules of Engine) .......... 6-28
• Initial state = initial situation.
• Goal-test predicate = goal state description.
• Successor function computed from the set of operators.
• Once a goal is found, the solution plan is the sequence of operators in the path from the start node to the goal node.

Planning differs from problem solving because of the difference in the way they represent states, goals and actions, and the differences in the way they construct action sequences.

Remember that the search-based problem solver had four basic elements:
• Representations of actions : Programs that develop successor state descriptions to represent actions.
• Representation of states : Every state description is complete. This is because a complete description of the initial state is given.
• Initial state : The agent is at home without any of the objects that it is wanting.
• Operator set : Everything the agent can do.
The agent decides how to acquire the objects; it does not decide where to go. Planning emphasizes what is in the operator and goal representations.

6.2.2 Three Key Ideas behind Planning
There are three key ideas behind planning:
1. An intelligent agent can act independently and has well-defined goals. It can adapt its behavior to its environment: "a general-purpose system that, like a human, can perform a variety of different tasks under conditions that may not be known a priori." An agent must be aware of its many goals and may have to behave in a changing world.

Syllabus Topic : Partial Order Planning, Total Order Planning

6.3 Partial Order Planning
• This is sometimes also called a non-linear planner, which is a misnomer because such planners often produce a linear plan.
Artificial Intelligence (MU - AI & DS / Electronics)  Planning and Learning
• A partial ordering is a less-than relation that is transitive and asymmetric. The partial order specifies that action act0 occurs before action act1, which is written as act0 < act1.
• The triple (act0, P, act1) is a causal link. Any other action A that makes P false must be either before act0 or after act1.

Partial order planner components
1. A set of actions (also known as operators).
2. A partial order : A partial-order plan is a set of actions together with a partial ordering, representing a "before" relation on actions, such that any total ordering of the actions consistent with the partial ordering will solve the goal from the initial state.
3. A set of causal links : It specifies which actions meet which preconditions of other actions. Alternately, a set of bindings between the variables in actions. Write act0 < act1 if action act0 occurs before action act1 in the partial order.
4. A set of open preconditions : For uniformity, treat start as an action that achieves the relations that are true in the initial state, and treat finish as an action whose precondition is the goal to be solved. The pseudo-action start is before every other action, and finish is after every other action. The use of these as actions means that the algorithm does not require special cases for the initial situation and for the goals. When the preconditions of finish hold, the goal is solved.
5. In order to keep the possible orders of the actions as open as possible, the set of ordering constraints and causal links must be as small as possible. A plan is itself a solution if the set of open preconditions is empty.
6. An action, other than start or finish, will be in a partial-order plan to achieve a precondition of an action in the plan. Each precondition of an action in the plan is either true in the initial state, and so achieved by start, or there is an action in the plan that achieves it.
7. We must ensure that the actions achieve the conditions they were assigned to achieve. Each precondition P of an action act1 in a plan will have an action act0 associated with it such that act0 achieves precondition P for act1.

Linearization of a partial order plan
A linearization of a partial order plan is a total order plan derived from that particular partial order plan; in other words, both plans consist of the same actions, with the order in the linearization being a linear extension of the partial order in the original partial order plan.

For example, a plan for baking a cake might start as follows:
• Go to the store,
• Obtain flour, get milk, etc.,
• Pay for all the goods,
• Go to the kitchen.
This is a partial plan because the order for finding flour and milk is not specified; the agent can wander around the store, accumulating all the items until its shopping list is complete.

6.3.1 Causal Link in Partial Order Planning
• Each causal link specifies a pair of steps and a proposition, where the proposition is a postcondition of the first step and a precondition of the second step. The first step is ordered before the second step.
• If a precondition of a step is not supported by a causal link, then it is a flaw in the partial-order plan.
• Any planning algorithm that can place two actions into a plan without specifying which should come first is called a partial-order planner.
• POP (Partial-Order Planner) is a regression planner; it uses problem decomposition; it searches plan space rather than state space; it builds partially-ordered plans; and it operates by the principle of least commitment.
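A linearization as defined above is just a topological sort of the actions that respects the ordering constraints. A minimal sketch, using made-up action names from the shopping example and Python's standard-library graphlib:

```python
# Sketch: linearizing a partial-order plan by topological sort.
# Actions and orderings are illustrative (a fragment of the shopping example).
from graphlib import TopologicalSorter  # standard library, Python 3.9+

orderings = {            # maps each action to the set of actions that must precede it
    "go_to_store": {"start"},
    "get_flour": {"go_to_store"},
    "get_milk": {"go_to_store"},
    "pay": {"get_flour", "get_milk"},
    "go_to_kitchen": {"pay"},
    "finish": {"go_to_kitchen"},
}

# static_order() yields one total order consistent with the partial order.
linear = list(TopologicalSorter(orderings).static_order())
print(linear)
```

Note that get_flour and get_milk are unordered relative to each other, so either of the two linear extensions is an acceptable total order plan.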
6.3.2 Working of a Partial Order Planner
GQ. How does a partial order planner work ?

A plan in POP (whether it be a finished one or an unfinished one) comprises the following:
1. A set of plan steps : Each of these is a STRIPS operator, but with the variables instantiated.
2. A set of ordering constraints : Si < Sj means step Si must occur sometime before Sj (not necessarily immediately before).
3. A set of causal links : Si -c→ Sj means step Si achieves precondition c of step Sj.
So a plan comprises actions (steps) with constraints (ordering and causality) on them.

The algorithm needs to start off with an initial plan. This is an unfinished plan, which we will refine until we reach a solution plan. The initial plan comprises two dummy steps, called Start and Finish. Start is a step with no preconditions, only effects: the effects are the initial state of the world. Finish is a step with no effects, only preconditions: the preconditions are the goal.
1. Begin with the partial order start < finish.
2. The planner maintains an agenda, which is a set of (P, A) pairs, where A is an action in the plan and P is an atom that is a precondition of A that must be achieved. Initially the agenda contains pairs (G, finish), where G is an atom that must be true in the goal state.
3. At each stage in the planning process, a pair (P, act1) is selected from the agenda, where P is a precondition for action act1.
4. Then an action, act0, is chosen to achieve P. That action is either already in the plan (it could be the start action), or it is a new action that is added to the plan. Action act0 must happen before act1 in the partial order. The planner adds a causal link that records that act0 achieves P for action act1. Any action in the plan that deletes P must happen either before act0 or after act1.
5. If act0 is a new action, its preconditions are added to the agenda, and the process continues until the agenda is empty. This is a non-deterministic procedure: the "choose" and the "either ... or ..." form choices that must be searched over.

6.3.3 Example for Partial Order Planner
GQ. Give an example for a partial order planner.

Fig. 6.3.1 : Partial order planner (initial state and goal state, blocks on a table)

The above initial state can be represented in POP as the following initial plan:
Plan(STEPS: {S1: Op(ACTION: Start, EFFECT: clear(b) ∧ clear(c) ∧ on(c, a) ∧ ONTABLE(a) ∧ ONTABLE(b) ∧ ARMEMPTY), S2: Op(ACTION: Finish, PRECOND: on(c, b) ∧ on(a, c))}, ORDERINGS: {S1 < S2}, LINKS: {})
This initial plan is refined using POP's plan refinement operators. As we apply them, they will take us from an unfinished plan to a less and less unfinished plan, and ultimately to a solution plan.

There are four operators, falling into two groups:
1. Goal achievement operators
- Step addition : Add a new step Si which has an effect c that can achieve an as yet unachieved precondition of an existing step Sj. Also add the following constraints: Si < Sj and Si -c→ Sj and Start < Si < Finish.
- Use an effect c of an existing step Si to achieve an as yet unachieved precondition of another existing step Sj, and add just two constraints: Si < Sj and Si -c→ Sj.
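The Plan(STEPS, ORDERINGS, LINKS) notation and the initial agenda above can be mirrored directly as data structures. This is only a sketch of the representation, not a full planner:

```python
# Sketch: the initial unfinished plan and agenda for the blocks-world example,
# mirroring the Plan(STEPS, ORDERINGS, LINKS) notation in the text.
start_effects = {"clear(b)", "clear(c)", "on(c,a)", "ontable(a)",
                 "ontable(b)", "armempty"}
finish_preconds = {"on(c,b)", "on(a,c)"}

plan = {
    "steps": {"S1": ("Start", set(), start_effects),    # (name, preconds, effects)
              "S2": ("Finish", finish_preconds, set())},
    "orderings": {("S1", "S2")},                        # S1 < S2
    "links": set(),                                     # no causal links yet
}

# Agenda: (precondition, step) pairs still to be achieved; initially the goals.
agenda = [(g, "S2") for g in sorted(finish_preconds)]
print(agenda)
```

Refinement would repeatedly pop a pair from this agenda and either link an existing step's effect to it or add a new step, exactly as steps 3 to 5 above describe.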
2. Causal links : These must be protected from threats, i.e. steps that delete (or negate or clobber) the protected condition. If S threatens link Si -c→ Sj :
- PROMOTE : add the constraint S < Si; or
- DEMOTE : add the constraint Sj < S.

The goal achievement operators ought to be obvious enough. They find preconditions of steps in the unfinished plan that are not yet achieved. The two goal achievement operators remedy this either by adding a new step whose effect achieves the precondition, or by exploiting one of the effects of a step that is already in the plan.

The promotion and demotion operators may be less clear. Why are these needed? POP uses problem decomposition: faced with a conjunctive precondition, it uses goal achievement on each conjunct separately. But, as we know, this brings the risk that the steps we add when achieving one part of a precondition might interfere with the achievement of another precondition. The idea of promotion and demotion is to add ordering constraints so that a step cannot interfere with the achievement of a precondition. Finally, we have to be able to recognize when we have reached a solution plan: a finished plan.

A solution plan is one in which:
• Every precondition of every step is achieved by the effect of some other step, and all possible clobberers have been suitably demoted or promoted.
• There are no contradictions in the ordering constraints; e.g. disallowed is Si < Sj and Sj < Si; also disallowed is Si < Sj, Sj < Sk and Sk < Si. The solutions may still be partially ordered. This retains flexibility for as long as possible. Only immediately prior to execution will the plan need linearization, i.e. the imposition of arbitrary ordering constraints on steps that are not yet ordered. (In fact, if there is more than one agent, or if there is a single agent capable of multitasking, then some linearization can be avoided: steps can be carried out in parallel.)

6.4 Conditional Planning
• Conditional planning has to work regardless of the outcome of an action.
• It takes place in a fully observable environment, where the current state of the agent is known.
• The outcome of an action cannot be determined, so the environment is said to be non-deterministic.

6.4.1 Difference in Hierarchical and Conditional Planning
1. Hierarchical Planning
(a) In hierarchical planning, at each level of the hierarchy the objective functions are reduced to a small number of activities at the next lower level.
(b) The computational cost of finding the correct way to arrange these activities for the current problem is small.
(c) Hierarchical methods can result in linear time.
(d) The initial plan of hierarchical planning describes the complete problem, which is a very high level description.
(e) The plans are refined by applying action decomposition.
(f) Each action decomposition reduces a high level description to some of the individual lower level descriptions.
(g) The action decomposers describe how to implement the actions.
2. Conditional Planning
(a) It deals with planning by some appropriate conditions.
(b) The agents plan first and then execute the plan that was produced.
(c) The agents find out which part of the plan to execute by including sensing actions in the plan to test for the appropriate conditions.
(d) Conditional planning works regardless of the outcome of an action.
(e) It takes place in a fully observable environment, where the current state of the agent is known.
Conditional planning (continued) :
(f) The outcome of actions cannot be determined, so the environment is said to be non-deterministic.
(g) A "state node" is represented with a square, and a "chance node" is represented with a circle.
(h) Here, we can check what is happening in the environment at predetermined points of the plan.
(i) The plan needs to deal with ambiguous actions: it needs to take some action at every state and must be able to handle every outcome for the action it takes.

Whenever we decide to do something, we are aware of the state we are in and what the effects possibly are. When it comes to the mapping of actions for an agent, or in the case of robots, the pre and post situations need to be specified. These can also be called preconditions, and the after-effects are called post-conditions.
For example, an action to drive from one place to another can be mapped as follows:
• Action : drive(c, from, to)
• Pre-condition : at(c, from) ∧ car(c)
• Post-condition : ¬at(c, from) ∧ at(c, to)
Action representation in this case is:
• Action Schema
→ Action name
→ Precondition
→ Effects
Example :
• Action(Fly(p, from, to))
→ Pre-condition : At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
→ Effect : ¬At(p, from) ∧ At(p, to)
Here, in the post-condition we have written ¬at(c, from), which indicates that the state is deleted or is to be removed. In some cases, add and delete lists can also be used.
• Sometimes, effects are split into an ADD list and a DELETE list:
At(WH1, LNK) ∧ Plane(WH1) ∧ Airport(LNK) ∧ Airport(OHA)
Fly(WH1, LNK, OHA)
At(WH1, OHA) ∧ ¬At(WH1, LNK)

As an alternative to outright refusal, the planning authority can grant permission subject to one or more conditions. Planning conditions sometimes limit the use or occupation of land or premises to a named person or company.

Syllabus Topic : Total Order Planning; Learning in AI

6.5 Goal Representation
A goal is most often a partially specified state. A state (or say a proposition) is said to achieve or satisfy the given goal if it consists of all the objects required for the goal, and maybe some others too. As an example, if the goal is kind ∧ hardworking, then a state that has kind ∧ hardworking ∧ pretty fulfils the goal.
Example : Rich ∧ Famous ∧ Miserable satisfies the goal Rich ∧ Famous.

In the case of a state variable representation, the state comprises different state variables. The action here is defined as a partial function on the states. The actions can be actually applied using the following criteria:
1. A substitution is to be identified for the variables. That is, for the current state that exists, identify an action with a precondition that satisfies the current state (the instantiated precondition can be a subset of the current state).
2. Apply this substitution (for whatever part of the current state it is applicable).
3. Add the post-condition (effects) to the remaining subset of the current state, if any.
Since the descriptions of actions in
~ ...._,.~ • PlANMIMG WITH STATE SPACE spec
a platini
pro blem ify both pre con diti ons and effects -t~g
RE.SEARCH .
just possible to each along both the dire . ' L LS
ct10ns :
(l) Forward from the initial state or
We highlight the main planning met
AI pro ble ms hod to solve (2) backward from the goal
Sta te space search is a process used in ~ 6.6 .1
the field of Forward State-Space Sur cb
com put er science, including artificia
l intelligence (AI) ,
in whi ch successive configurations This search is also called as p~ogr~ssive
or states of an planning,
inst anc e are considered, with the inte because it moves in the forward dtrectio
ntion of tlndlng n. It is similar
a goa l sta te with the desired proper to the problem-solving approach.
ty.
For finding the solution one can We begin with the problem's initi
make use of al state,
exp lici t search tree that is generated considering sequences of actions until
by the initial state we reach a goal
and the successor function that together state.
define the state
spa ce
The formulation of state-space sear
ch planning
Con stru ctio n of Sta te Spa ce problem is as follows
(i) The root of search tree is (a) Forward search is an algorithm
a search node that searches
corresponding to initial state. In this forward from the initial state of the
state we can world to
che ck whether the goal is reached. try to tlnd a state that satisfie
s the goal
(ii) If goal is not reached we conside formula.
r another state.
Thi s can be done by expanding from (b) The initial state of the search is
the current the initial state
stat e by applying successor func from the planning problem here, each
tion which state will be
generates new state. And from this set of position ground literals; not
, we get appearing
multiple states. literals are taken as false.
(iii) For eac h one of these, again we nee (c) The actions which are applicable
d to check goal to a state are all
test or repeat expansion of each state. those whose preconditions are satisfie
d. Adding
(iv) Toe choice of which state to exp the positive effect literals and dele
and is determined ting the
by the search strategy. negative effect literals, the success
or sate is
generated from the action.
(v) It is possible that some state can
not lead to goal (d) The goal test chocks whether the stat
stat e such a state we should not expand e satisfies the
. goal of the planning problem.
Exa mp le : sta te-s ear ch exa mp
le : (e) The step cost of each action
is taken as l.
Ma kin g coffee Different cost for different actions
may be
(1) Take som e of the boiled water in a cup and allowed.
add
necessary amount of instant coffee (f) Forward state-space search is not
powder to very practicab;
mak e decoction, It is because of a big branching fact
or. F~rw
(2) Add mil k powder to the remaining boiling water search considers all applicable actions
, (i.e. all
to mak e milk. relevant and non-relevant actions are con
sidered).
(3) Mix decoction and milk.
(g) A forward planner searches the stat
e-space graPh
. from the initial state to the oal-descri
lion.
~ Toch-Neo Publkatlon,..A SACHIN S H A H ~
(MS-1.26) • I ,
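Points (a) to (d) above can be sketched as a breadth-first forward search over sets of ground literals. The mini coffee domain below is a made-up encoding of the earlier example:

```python
# Sketch: breadth-first forward state-space search over STRIPS-style actions.
from collections import deque

# name -> (preconditions, add list, delete list); all are sets of literals.
ACTIONS = {
    "make_decoction": ({"water"}, {"decoction"}, set()),
    "make_milk":      ({"water"}, {"milk"}, set()),
    "mix":            ({"decoction", "milk"}, {"mixture"}, {"decoction", "milk"}),
    "add_sugar":      ({"mixture"}, {"coffee"}, set()),
}

def forward_search(init, goal):
    """BFS from the initial state; states are frozensets of positive literals."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # goal test: all goal literals hold
            return plan
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # applicable: preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

solution = forward_search({"water"}, {"coffee"})
print(solution)
```

Even in this four-action domain every applicable action is expanded at every state, which illustrates point (f): the branching factor, not the plan length, is what makes naive forward search impractical.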
Representation
• State space essentially consists of a set of nodes representing each state of the problem; arcs between nodes representing the moves from one state to another; an initial state; and a goal state.
• Each state space takes the form of a tree or a graph.
• Factors that determine which search algorithm or technique will be used include the type of the problem and how the problem can be represented.
We have presented state-space search as a forward search method, but it is also possible to search backward from the set of states that satisfy the goal to the initial state.

6.6.2 Backward State-Space Search
(a) In backward state-space search planning, we want to generate possible predecessors of a given goal state, working backwards toward the initial state.
(b) If a solution exists, it will be found by a backward search, which allows only relevant actions.
(c) The restriction to relevant actions means that backward search often has a much lower branching factor than forward search.
(d) Goal states are often incompletely specified. A goal expresses only what is desired in the final state, rather than a complete description of the final state.
(e) It is also called regression planning, because backward state-space search finds the solution from the goal back to the actions.
(f) There may be many known states from which the goal state can be reached; reaching any one of them suffices.
(g) To obtain full advantage of backward search, we need to deal with partially uninstantiated actions and states.

For example, suppose the goal is to deliver a specific piece of cargo to Delhi. This suggests the action Unload(C, P, Delhi):
Action(Unload(C, P, Delhi))
Precondition : In(C, P) ∧ At(P, Delhi) ∧ Cargo(C) ∧ Plane(P) ∧ Airport(Delhi)
Effect : At(C, Delhi) ∧ ¬In(C, P)

Syllabus Topic : Learning Agent

6.7 Two Types of Agents
1. Physical agents (usually known as robots)
2. Software agents (sometimes known as softbots)
1. Physical agents
These are physical artefacts that act in a physical environment, e.g. a physical agent sent into a dangerous building. It must be able to see, it must know where it is, it must be able to move, plan its goals, execute its goals (physically), re-plan if necessary, and communicate with other (possibly human) agents.
They consist of some or all of the following:
(a) Computers : a top-level controller and low-level controllers, e.g. to manipulate a hand.
(b) Sensors : to establish contact / non-contact with objects in the environment, and to "see".
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
• A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Planning essentially needs a representation in terms of time. This is required so that we are able to reason about the actions that are to be taken, along with the reactions that we get back.
(iii) State representation : States are representations of the facts. States are represented as conjunctions comprising the positive literals that specify the state. A state is represented with a conjunction of positive literals using:
• Logical propositions : Poor ∧ Unknown
• FOL literals : At(Plane1, OMA) ∧ At(Plane2, JFK)
FOL literals must be ground and function-free.

Agents and goals
[Figure : the agent senses the state of the world now, asks "What will it be like if I do action A?", consults its goals to choose "What action I should do now", and acts on the environment through its effectors.]
classification of documents. In this particular case a learner learns based on the available documents and their classes. This is also referred to as labeled data.
• The program that can map the input documents to appropriate classes is called a classifier, because it assigns a class (i.e., a document type) to an object (i.e., a document). The task of supervised learning is to construct a classifier given a set of classified training examples. A typical classification is depicted in Fig. 6.10.1.
Fig. 6.10.1 : Supervised learning
• Fig. 6.10.1 represents a hyperplane that has been generated after learning, separating two classes, class A and class B, into different parts. Each input point represents an input-output instance from the sample space. In the case of document classification, these points are documents.
• Learning computes a separating line or hyperplane among documents. The type of an unknown document is decided by its position with respect to the separator.
• There are a number of challenges in supervised classification, such as generalization, selection of the right data for learning, and dealing with variations. Labeled examples are used for training in the case of supervised learning. The set of labeled examples provided to the learning algorithm is called the training set.
• Supervised learning is not just about classification; it is the overall process that, with guidelines, maps to the most appropriate decision.
6.10.1 How Supervised Learning Works?
[Fig.: test data containing shapes such as hexagons and triangles.]
(MS-126) Tech-Neo Publications...A SACHIN SHAH Venture
MU - AI & DS / Electronics, Planning and Learning, Page No. 6-15
• Suppose we have a dataset of different types of shapes, which includes squares, rectangles, triangles, and polygons. The first step is that we need to train the model for each shape:
o If the given shape has four sides, and all the sides are equal, then it will be labelled as a square.
o If the given shape has three sides, then it will be labelled as a triangle.
o If the given shape has six equal sides, then it will be labelled as a hexagon.
• Now, after training, we test our model using the test set, and the task of the model is to identify the shape. The machine is already trained on all types of shapes, and when it finds a new shape, it classifies the shape on the basis of its number of sides and predicts the output.
• Following are the steps involved in supervised learning:
o First determine the type of training dataset.
o Collect/gather the labelled training data.
o Split the training dataset into a training dataset, a test dataset, and a validation dataset.
o Determine the input features of the training dataset, which should have enough knowledge so that the model can accurately predict the output.
o Determine the suitable algorithm for the model, such as a support vector machine, a decision tree, etc.
o Execute the algorithm on the training dataset. Sometimes we need validation sets as the control parameters, which are a subset of the training dataset.
o Evaluate the accuracy of the model by providing the test set. If the model predicts the correct output, it means our model is accurate.
• Supervised learning can be further divided into two types of problems: Regression and Classification.
Regression
Regression algorithms are used if there is a relationship between the input variable and the output variable. They are used for the prediction of continuous variables, such as weather forecasting, market trends, etc. Below are some popular regression algorithms which come under supervised learning:
• Linear Regression
• Regression Trees
• Non-Linear Regression
• Bayesian Linear Regression
• Polynomial Regression
Classification
Classification algorithms are used when the output variable is categorical, which means there are two classes, such as Yes-No, Male-Female, True-False, etc. Popular classification algorithms include:
• Random Forest
• Logistic Regression
• Decision Trees
• Support Vector Machines
6.10.2 Advantages of Supervised Learning
1. With the help of supervised learning, the model can predict the output on the basis of prior experiences.
2. In supervised learning, we can have an exact idea about the classes of objects.
3. A supervised learning model helps us to solve various real-world problems, such as fraud detection, spam filtering, etc.
6.10.3 Disadvantages of Supervised Learning
1. Supervised learning models are not suitable for handling complex tasks.
2. Supervised learning cannot predict the correct output if the test data is different from the training dataset.
3. Training requires a lot of computation time.
4. In supervised learning, we need enough knowledge about the classes of objects.
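The supervised workflow described above (train on labelled examples, then predict labels for new inputs) can be sketched in a few lines of Python. The snippet below is an illustrative nearest-neighbour classifier for the shape example; the training pairs and the single "number of sides" feature are invented for the sketch.

```python
# Minimal supervised classifier for the shape example:
# each training pair is (number of sides, label). Data is made up.
train = [(3, "triangle"), (4, "square"), (6, "hexagon")]

def predict(sides):
    """1-nearest-neighbour prediction on the single 'sides' feature."""
    return min(train, key=lambda example: abs(example[0] - sides))[1]

print(predict(3))  # → triangle
print(predict(6))  # → hexagon
```

A new shape is classified purely by its position relative to the stored labelled examples, which is the essence of the separator idea in Fig. 6.10.1.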
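Of the regression algorithms listed above, plain linear regression is the easiest to write out. The sketch below fits y ≈ a·x + b by ordinary least squares; the data points are invented for illustration.

```python
# Ordinary least squares for one input variable: fit y ≈ a*x + b.
xs = [1.0, 2.0, 3.0, 4.0]    # invented inputs
ys = [2.1, 3.9, 6.1, 8.0]    # invented, roughly linear outputs

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope a = covariance(x, y) / variance(x); intercept b from the means.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # → 1.99 0.05
```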
accordingly. So, to identify the image in supervised learning, we will give the input data as well as the output for it, which means we will train the model by the shape, size, colour, and taste of each fruit. Once the training is completed, we will test the model by giving it a new set of fruits. The model will identify the fruit and predict the output using a suitable algorithm.
• Unsupervised learning is another machine learning method in which patterns are inferred from unlabeled input data. The goal of unsupervised learning is to find the structure and patterns in the input data.
• Unsupervised learning can be used for two types of problems: Clustering and Association.
• Example : To understand unsupervised learning, we will use the example given above. Unlike supervised learning, here we will not provide any supervision to the model. We will just provide the input dataset to the model and allow the model to find the patterns in the data. With the help of a suitable algorithm, the model will train itself and divide the fruits into different groups according to the most similar features between them.
GQ. What is the difference between supervised learning and unsupervised learning?

The main differences between supervised and unsupervised learning are given below:
• Supervised learning needs supervision to train the model; unsupervised learning does not need supervision to train the model.
• Supervised learning can be used for those cases where we know the input as well as the corresponding outputs; unsupervised learning can be used for those cases where we have only input data and no corresponding output data.
• Supervised learning can be classified into Classification and Regression problems; unsupervised learning can be classified into Clustering and Association problems.
• Unsupervised learning aims to find the hidden patterns and useful insights from the unknown dataset.
• A supervised learning model produces an accurate result; an unsupervised learning model may give a less accurate result as compared to supervised learning.
• Supervised learning is not close to true Artificial Intelligence, as in this we first train the model for each data item, and then only can it predict the correct output; unsupervised learning is closer to true Artificial Intelligence, as it learns similarly to how a child learns daily routine things from his experiences.
• Supervised learning includes various algorithms such as Linear Regression, Logistic Regression, Support Vector Machine, Multi-class Classification, Decision tree, Bayesian Logic, etc.; unsupervised learning includes various algorithms such as Clustering, KNN, and the Apriori algorithm.
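The clustering side of the comparison can be sketched with a tiny one-dimensional k-means loop, in the spirit of the fruit-grouping example. All numbers here are invented, and k is fixed at 2.

```python
# Minimal 1-D k-means (k = 2): group unlabeled values into two clusters.
values = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]   # invented, two obvious groups
centers = [values[0], values[3]]           # crude initialisation

for _ in range(10):                        # a few refinement passes
    clusters = [[], []]
    for v in values:
        # Assign each value to its nearest centre (no labels involved).
        nearest = min((0, 1), key=lambda i: abs(v - centers[i]))
        clusters[nearest].append(v)
    # Move each centre to the mean of its assigned values.
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 2) for c in centers))  # → [1.0, 5.07]
```

The groups emerge purely from similarity between the inputs, with no supervision, which is exactly the contrast drawn in the table above.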
Syllabus Topic : Semi-Supervised Learning

6.12 SEMI-SUPERVISED LEARNING

Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data). It is a special instance of weak supervision.
Unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.
A set of l independently identically distributed examples x_1, ..., x_l ∈ X with corresponding labels y_1, ..., y_l ∈ Y, and u unlabeled examples x_{l+1}, ..., x_{l+u} ∈ X, are processed. Semi-supervised learning combines this information to surpass the classification performance that can be obtained either by discarding the unlabeled data and doing supervised learning, or by discarding the labels and doing unsupervised learning.
Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data x_{l+1}, ..., x_{l+u} only. The goal of inductive learning is to infer the correct mapping from X to Y.
Intuitively, the learning problem can be seen as an exam, and the labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam.

Syllabus Topic : Ensemble Learning

6.13 ENSEMBLE LEARNING

• An ensemble is a machine learning model that combines the predictions from two or more models.
• The models that contribute to the ensemble are called ensemble members.
• They may be of the same type or of different types.
• They may or may not be trained on the same training data.
Remarks
1. An ensemble method is a technique that uses multiple independent similar or different models to derive an output or make some predictions. For example, a random forest is an ensemble of multiple decision trees.
2. Ensemble learning can also use a single machine learning algorithm, e.g. an unpruned decision tree, and train each model on a different sample of the same training dataset.
3. The predictions made by the ensemble members are combined using simple statistics, such as voting or averaging.

6.14 CROSS-VALIDATION IN MACHINE LEARNING

Cross-validation is a technique for validating model efficiency by training the model on a subset of the input data and testing it on a previously unseen subset of the input data. It is a technique to assess how a statistical model generalises to an independent dataset.
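As a sketch of the idea, the data can be split into k folds so that each fold serves once as the unseen validation subset. The helper below is a hand-written illustration; the dataset and fold count are arbitrary placeholders.

```python
# k-fold cross-validation skeleton: each fold is the validation set once.
def k_fold_splits(data, k):
    """Yield (training_part, validation_part) pairs for k folds."""
    fold_size = len(data) // k
    for i in range(k):
        val = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        yield train, val

data = list(range(12))                     # placeholder dataset
splits = list(k_fold_splits(data, k=4))

print(len(splits))        # → 4
print(splits[1][1])       # → [3, 4, 5]   (second validation fold)
```

In practice the model would be trained on each training part and scored on the matching validation part, and the k scores averaged to estimate how the model generalises.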
• In machine learning, we need to test the stability of the model; we cannot judge the model based only on the training dataset.
• For this purpose, we test our model on a sample which is not part of the training dataset, and then we deploy the model on that sample.
• This complete process comes under cross-validation.
• The basic steps of cross-validation are:
(i) Reserve a subset of the dataset as a validation set.
(ii) Provide the training to the model using the training dataset.
(iii) Evaluate model performance using the validation set.
• If the model performs well with the validation set, perform the further steps, else check for the issues.

6.15 STUMPING ENSEMBLES

A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature. Sometimes they are also called 1-rules.
[An example of a decision stump that discriminates between two of the three classes of the iris flower data set, Iris versicolor and Iris virginica: if the petal width (in centimetres) is below the threshold 1.75, predict Iris versicolor; otherwise predict Iris virginica. This particular stump achieves 94% accuracy on the iris dataset for these two classes.]
Fig. 6.15.1
Depending on the type of the input features, several variations are possible. For nominal features, one may build a stump which contains a leaf for each possible feature value, or a stump with two leaves, one of which corresponds to some chosen category and the other leaf to all the other categories. For binary features these two schemes are identical. A missing value may be treated as yet another category.

6.15.1 For Continuous Features

Usually, some threshold feature value is selected, and the stump contains two leaves, one for values below and one for values above the threshold. However, rarely, multiple thresholds may be chosen, and the stump then contains three or more leaves.
Decision stumps are often used as components (called "weak learners" or "base learners") in machine learning ensemble techniques such as bagging and boosting.

6.15.2 Remarks

(1) Meaning of stump in a decision tree
• A decision stump is a decision tree which uses only a single attribute for splitting.
• For discrete attributes, this means that the tree consists only of a single interior node (i.e., the root has only leaves as successor nodes).
• If the attribute is numerical, the tree may be more complex.
(2) Are decision stumps linear?
A decision stump is not a linear model. The decision boundary can be a line, even if the model is not linear.

Syllabus Topic : Reinforcement Learning

6.16 REINFORCEMENT LEARNING

GQ. What is Reinforcement Learning? Explain with an example.

• Reinforcement Learning is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and seeing the results of those actions.
• For each good action, the agent gets positive feedback, and for each bad action, the agent gets negative feedback or a penalty.
• In Reinforcement Learning, the agent learns automatically using feedback, without any labeled data, unlike supervised learning. Since there is no labeled data, the agent is bound to learn by its experience only.
• RL solves a specific type of problem where decision making is sequential and the goal is long-term, such as game playing, robotics, etc.
• The agent interacts with the environment and explores it by itself. The primary goal of an agent in reinforcement learning is to improve its performance by getting the maximum positive rewards.

[Fig. 6.16.1 : The agent performs actions on the environment; the environment returns a reward and the resulting state to the agent.]

• The agent learns by the process of hit and trial, and based on the experience, it learns to perform the task in a better way. Hence, we can say that "Reinforcement learning is a type of machine learning method where an intelligent agent (computer program) interacts with the environment and learns to act within it." How a robotic dog learns the movement of its arms is an example of Reinforcement Learning.
• It is a core part of Artificial Intelligence, and all AI agents work on the concept of reinforcement learning. Here we do not need to pre-program the agent, as it learns from its own experience without any human intervention.
• For machine learning, the environment is typically represented by an "MDP", or Markov Decision Process. These algorithms do not necessarily assume knowledge of an exact model, but instead are used when exact models are infeasible. In other words, they are not quite as precise or exact, but they will still serve as a strong method in various applications throughout different technology systems.
• Example : Suppose there is an AI agent present within a maze environment, and its goal is to find the diamond. The agent interacts with the environment by performing some actions; based on those actions, the state of the agent gets changed, and it also receives a reward or a penalty as feedback.
• The agent continues doing these three things (take an action, change state or remain in the same state, and get feedback), and by doing these actions, it learns and explores the environment.
• The agent learns which actions lead to positive feedback or rewards and which actions lead to negative feedback or penalty. As a positive reward, the agent gets a positive point, and as a penalty, it gets a negative point.
• The agent's actions determine the subsequent data it receives.
• The key features of Reinforcement Learning are mentioned below.
o In RL, the agent is not instructed about the environment and what actions need to be taken.
o It is based on the hit and trial process.
o The agent takes the next action and changes states according to the feedback of the previous action.
o The agent may get a delayed reward.
o The environment is stochastic, and the agent needs to explore it to get the maximum positive rewards.
• Applications of reinforcement learning include aircraft control and robot motion control.

6.16.1 Approaches to Implement Reinforcement Learning

GQ. What are the approaches for Reinforcement Learning?

There are mainly three ways to implement reinforcement learning in ML.
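The hit-and-trial loop described above can be sketched as a tiny two-action agent that learns from numeric rewards. This is a bandit-style illustration rather than a full MDP solver; the reward values and the 10% exploration rate are invented for the sketch.

```python
import random

random.seed(0)  # make the sketch deterministic

def reward(action):
    """Environment feedback: action 1 pays more on average (made up)."""
    return {0: 1.0, 1: 2.0}[action] + random.uniform(-0.1, 0.1)

values = [0.0, 0.0]   # the agent's estimated value of each action
counts = [0, 0]

for step in range(500):
    # Hit and trial: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: values[a])
    r = reward(action)
    counts[action] += 1
    # Incremental mean: nudge the estimate toward the observed reward.
    values[action] += (r - values[action]) / counts[action]

print(values[1] > values[0])  # the agent learns that action 1 is better
```

No one tells the agent which action is good; the estimates emerge purely from the reward feedback, which is the key feature listed above.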
(Further examples of reinforcement learning applications: the game of chess, object recognition, and software systems or applications.)

6.17 GENERAL LEARNING MODEL

We develop a general learning model and note the factors affecting learning performance.
(i) Learning can be done in several ways, but learning requires that a new knowledge structure be created from some form of input stimulus.
(ii) This new knowledge must be assimilated into a knowledge base and be tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained. The feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge. We exhibit the general learning model in the figure.

[Fig. 6.17.1 : General Learning Model. Stimuli (examples) flow into the learner component, which creates and modifies structures in the knowledge base. The performance component uses this knowledge to carry out tasks and produces a response; the critic (performance evaluator) evaluates the response and feeds the result back to the learner component.]

(iii) In the figure, the environment represents the surroundings of the overall learner system. The environment may be regarded as one which produces random stimuli, or as a training source, such as a teacher, which provides carefully selected training examples to the learner component.
(iv) Some representation language for communication between the environment and the learner component must be used. The language scheme must be the same as that used in the knowledge base; when they are the same, we say that a 'single representation' is used.
(v) Inputs to the learner component may be physical stimuli of some type, or descriptive, symbolic training examples. The information conveyed to the learner component is used to create and modify knowledge structures in the knowledge base.
(vi) The performance component uses this knowledge to carry out some task, such as solving a problem, playing a game, etc. When a task is given to the performance component, it describes its actions in performing the task and produces a response. Then the critic module evaluates this response relative to an optimal response.
(vii) The critic module sends the response to the learner component to check whether or not the performance is acceptable and, if required, to modify the structures in the knowledge base.
(viii) The cycle described in the above figure is to be repeated a number of times, (i) until the performance of the system reaches some acceptable level, (ii) or a known learning goal is achieved, or (iii) after some repetitions no change occurs in the knowledge base.
If the proper learning is achieved, then the performance of the system will be improved by the knowledge base component.

6.18 TECHNIQUES USED IN LEARNING

The most common techniques (methods) used for learning are as follows:
(i) Memorization (Rote learning)
(ii) Learning by direct instruction
(iii) Learning by analogy
(iv) Learning by induction
(v) Learning by deduction
(vi) Learning using neural networks

6.18.1 Learning by Memorization (Rote Learning)
• It is the simplest form of learning. It requires the least amount of inference. Here, learning is achieved by simply copying the knowledge that is used into the knowledge base. For example, for memorizing a multiplication table, we use this type of learning.
• When a computer stores a piece of data, it performs a rudimentary form of learning. It is a simple case of 'data caching'. We store computed values so that we do not have to recompute them later.
• When computation is more expensive than recall, this strategy can save a significant amount of time.
• Caching is used in AI programs to produce some surprising performance improvements. Such caching is known as rote learning.
Remark : Data Caching
• Data caching is a technique of storing frequently used data in memory, so that when the same data is asked for next time, it can be directly obtained from memory instead of being generated again by the application.
• An Android phone's cache comprises stores of small bits of information that your apps and web browser use to speed up performance. But cached files can become corrupted and overloaded and cause performance issues. The cache need not be constantly cleared, but a periodic clean-out can be helpful.

6.18.2 Learning by Direct Instruction
• This type of learning is slightly more complex. It requires more inference than rote learning.
• Here, to integrate knowledge into the knowledge base, it must be transformed into an operational form.
• When a number of facts are presented to us directly in a well-organised manner, we use this type of learning.

6.18.3 Learning by Analogy
• This is a process of learning a new concept or solution by using similar known concepts or solutions. For example, in an examination, previously learned examples help one to solve new problems.
• We make frequent use of analogical learning. This form of learning requires more inferring than either of the previous forms.
• It is because 'difficult transformations' must be made between the known and unknown situations.

6.18.4 Learning by Induction
• This is a powerful form of learning which also requires more inferring than the first two methods. This form of learning is a form of invalid but useful inference.
• Here we formulate a general concept after seeing a number of instances or examples of the concept. For example, we learn the concepts of sweet taste or colour after experiencing the sensations associated with several examples of sweet foods or coloured objects.
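The caching view of rote learning in §6.18.1 can be sketched as memoization: a computed value is stored the first time and simply recalled afterwards. The "expensive" function here is a stand-in for any costly computation.

```python
# Rote learning as data caching: store computed results for reuse.
cache = {}
calls = {"count": 0}

def slow_square(n):
    """Stand-in for an expensive computation."""
    calls["count"] += 1
    return n * n

def square(n):
    # Cache hit: the stored value is recalled instead of recomputed.
    if n not in cache:
        cache[n] = slow_square(n)
    return cache[n]

print(square(12), square(12))  # → 144 144
print(calls["count"])          # → 1   (computed once, recalled once)
```

The second call involves no inference or computation at all, which is exactly why rote learning is described as the simplest form of learning.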
6.18.5 Learning by Deduction
• Learning is achieved through a procedure of deductive inference steps using known facts. From the known facts, new facts or relationships are derived logically. For example, we can learn deductively that Sita is the cousin of Ritesh, if we have knowledge of Sita's and Ritesh's parents and the rules for the cousin relationship.
• Deductive learning requires more inference than the other methods.

6.18.6 Learning using Neural Networks
• Neural networks can be loosely separated into ...

• An expert system is an application that uses a knowledge base of human expertise to aid in solving problems.
4. An expert system is a model and associated procedure that exhibits, within a specific domain, a degree of expertise in problem solving that is comparable to a human expert.
5. An expert system is a computing system capable of representing and reasoning about some knowledge-rich domain, which usually requires a human expert, with a view towards solving problems and/or giving advice. Its level of performance makes it expert.
Expert system = Knowledge + Inference engine
All in all, an expert system contains knowledge acquired by interviewing human experts in some domain.
• Firing a rule implies that an action has been carried out. This adds new information to the database of inferred facts.
The major task of the IE (inference engine) is to trace its way through a forest of rules to arrive at a conclusion. Basically there are two approaches: forward chaining and backward chaining.

[Fig. 6.19.1]

3. User Interface : The user interface provides the needed facilities for the user to communicate with the system. A user normally would like to have a consultation with the system for the following aspects:
• To get remedies for his problem.
• To know the private knowledge (heuristics) of the system.
• To get some explanations for specific queries.
Presenting a real-world problem to the system for a solution is what is meant by having a consultation. Here, the user interface provides as many facilities as possible, such as menus, a graphical interface, etc., to make the dialogue user-friendly and lively.
4. Knowledge acquisition facility : The major bottleneck in ES development is knowledge acquisition. Present day ES do not have a type of knowledge acquisition where the only ...
External interface : When there is a formal consultation, it is done via the user interface. In real-time expert systems, where they form a part of the closed loop system, it is not proper to expect human intervention every time to feed in the prevailing conditions and get remedies. Moreover, the time-gap is too narrow in real-time systems. The external interface with its sensors gets the minute-by-minute information about the situation and acts accordingly. Such a real-time ES will be of tremendous value in industrial process controls, in nuclear plants, in supersonic jet fighters, etc.
6. Explanation facility : The method by which an expert system reaches a conclusion may not be obvious to a human user, so many expert systems will include a method for explaining the reasoning process that leads to the final answer of the system. The basic questions any user would like to query the system with are "WHY" and "HOW". Whenever a user poses the question "HOW", the answer is available. The answer to "WHY" is got from the rule the system is about to fire. Every FACT FRAME has a slot "THENS-OF", which is exactly the rule the system is firing; the answer to the question "WHY" is obtained from this. Moreover, in Genie's inference procedure the organization of knowledge and data regarding a problem is explicitly available in the dynamic version of the frames. Hence the explanation facilities are considerably superior.

Syllabus Topic : Knowledge Base

6.20 TECHNIQUES OF KNOWLEDGE ACQUISITION

GQ. Explain knowledge acquisition process.
MOLE : MOLE is a knowledge acquisition system which is used for heuristic classification problems, such as diagnosing diseases. It is used in conjunction with the cover-and-differentiate problem-solving method. An expert system produced by MOLE accepts input data, generates the set of candidate explanations or classifications that cover the data, and then uses differentiating knowledge to determine which one is best. The process is carried out interactively, because explanations need to be justified, till the ultimate causes are confirmed.

Syllabus Topic : Inference Engine

6.21 INFERENCE ENGINE

The inference engine is the program part of an expert system. It represents a problem-solving model which uses the rules in the knowledge base and the situation-specific knowledge in the WM (working memory) to solve a problem.
Given the contents of the WM, the inference engine determines the set of rules which should be considered. These are the rules whose consequents match the current goal of the system. The set of rules which can be fired is called the conflict set. Out of the rules in the conflict set, the inference engine selects one rule based on some predefined criteria. This process is called conflict resolution. For example, a simple conflict resolution criterion could be to select the first rule in the conflict set.
A rule can be fired if all its antecedents are satisfied. If the value of an antecedent is not known (in the WM), the system checks if there are any other rules with that antecedent as a consequent, thus setting up a sub-goal. If there are no rules for that antecedent, the user is prompted for the value, and the value is added to the WM. If a new sub-goal has been set up, a new set of rules will be considered in the next cycle. This process is repeated till, in a given cycle, there are no sub-goals or, alternatively, the goal of the problem-solving has been derived.
This inferencing strategy is called backward chaining (since it reasons backward from the goal to be derived). There is another strategy, called forward chaining, where the system works forward from the information it has in the working memory. In forward chaining, the conflict set will be formed by the rules which have their antecedents true in a given cycle. The process continues till the conflict set becomes empty.

Syllabus Topic : User Interface

6.22 USER INTERFACE

With the help of a user interface, the expert system interacts with the user, takes queries as an input in a readable format, and passes them to the inference engine. After getting the response from the inference engine, it displays the output to the user. In other words, it is an interface that helps a non-expert user to communicate with the expert system to find a solution.

6.22.1 Inference Engine (Brain of the System)

• The inference engine is known as the brain of the expert system, as it is the main processing unit of the system. It applies inference rules to the knowledge base to derive a conclusion or deduce new information. It helps in deriving an error-free solution to the queries asked by the user.
• With the help of an inference engine, the system extracts the knowledge from the knowledge base.
• There are two types of inference engine:
o Deterministic inference engine : The conclusions drawn from this type of inference engine are assumed to be true. It is based on facts and rules.
o Probabilistic inference engine : This type of inference engine contains uncertainty in its conclusions, and is based on probability.
• The inference engine uses the below modes to derive the solutions:
o Forward chaining : It starts from the known facts and rules, and applies the inference rules to add their conclusions to the known facts.
o Backward chaining : It is a backward reasoning method that starts from the goal and works backward to prove the known facts.
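The forward-chaining strategy just described can be sketched directly: keep firing rules whose antecedents all hold in working memory until nothing new is added. The rule set below is a toy illustration built around the flu rule used later in the text; the second rule (about advice) is invented here.

```python
# Tiny forward-chaining engine: each rule is (antecedents, consequent).
rules = [
    ({"is(nose, runny)", "is(temperature, high)", "is(eyes, bloodshot)"},
     "disease is flu"),
    ({"disease is flu"}, "advice is rest"),   # invented follow-on rule
]

def forward_chain(facts):
    """Fire rules whose antecedents hold in WM until a fixed point."""
    wm = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= wm and consequent not in wm:
                wm.add(consequent)        # the rule "fires"
                changed = True
    return wm

facts = {"is(nose, runny)", "is(temperature, high)", "is(eyes, bloodshot)"}
result = forward_chain(facts)
print("disease is flu" in result, "advice is rest" in result)  # → True True
```

Note how the conclusion of the first rule becomes an antecedent of the second, so the working memory grows until the conflict set is effectively empty, exactly as described above.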
• Knowledge base: The knowledge base stores the knowledge acquired from the different experts of the particular domain. It is considered as a big storage of knowledge. The larger the knowledge base, the more precise the Expert System will be. It is similar to a database that contains information and rules of a particular domain or subject. One can also view the knowledge base as a collection of objects and their attributes. For example, a lion is an object and its attributes are: it is a mammal, it is not a domestic animal, etc.

Syllabus Topic : Working Memory

WORKING MEMORY

The working memory represents the set of facts known about the domain. The elements of the WM reflect the current state of the world. In an expert system, the WM typically contains information about the particular instance of the problem being addressed. For example, in a TV troubleshooting expert system, the WM could contain the details of the particular TV being looked at.

The actual data represented in the WM depends on the type of application. The initial WM, for instance, can contain a priori information known to the system. The inference engine uses this information in conjunction with the rules in the knowledge base to derive additional information about the problem being solved.

Knowledge Base

The knowledge base (also called rule base when if-then rules are used) is a set of rules which represents the knowledge about the domain. The general form of a rule is:

If cond1 and cond2 and cond3 ...
then action1, action2, ...

The conditions cond1, cond2, cond3, etc. (also known as antecedents) are evaluated based on what is currently known about the problem being solved (i.e., the contents of the working memory).

Each antecedent of a rule typically checks if the particular problem instance satisfies some condition. For example, an antecedent in a rule in a TV troubleshooting expert system could be: the picture on the TV display flickers.

The consequents of a rule typically alter the WM, to incorporate the information obtained by application of the rule. This could mean adding more elements to the WM, modifying an existing WM element or even deleting WM elements. They could also include actions such as reading input from a user, printing messages, accessing files, etc. When the consequents of a rule are executed, the rule is said to have been fired.

In this article we will consider rules with only one consequent and one or more antecedents which are combined with the operator and. We will use a representation of the form:

rule_id: If antecedent1 and antecedent2 ... then consequent

For instance, to represent the knowledge that if a person has a runny nose, a high temperature and bloodshot eyes, then one has a flu, we could have the following rule:

r1: If is(nose, runny) and is(temperature, high) and is(eyes, bloodshot)
then disease is flu

This representation, though simple, is often sufficient. The disjunction (ORing) of a set of antecedents can be achieved by having different rules with the same consequent. Similarly, if multiple consequents follow from the conjunction (ANDing) of a set of antecedents, this knowledge can be expressed in the form of a set of rules with one consequent each. Each rule in this set will have the same set of antecedents.

Sometimes the knowledge which is expressed in the form of rules is not known with certainty (for example, our flu rule is not absolutely certain). In such cases, typically, a degree of certainty is attached to the rules. These degrees of certainty are called certainty factors. We will not discuss certainty factors further in this article.
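As a rough illustration, the flu rule above and the first-rule conflict-resolution criterion mentioned earlier can be encoded as a forward-chaining cycle in Python. The fact names (e.g. `nose_runny`) and rule `r2` are invented for this sketch; a real shell would use a richer attribute-value representation:

```python
# Sketch of the "rule_id: If ... and ... then ..." representation, with a
# forward-chaining cycle that fires the first applicable rule in the
# conflict set. Rule and fact names are hypothetical.

rules = {
    "r1": (["nose_runny", "temperature_high", "eyes_bloodshot"], "disease_flu"),
    "r2": (["disease_flu"], "advice_rest"),
}

def forward_chain(wm):
    """Repeatedly fire the first rule whose antecedents are all in the WM."""
    fired = set()
    while True:
        conflict_set = [
            rid for rid, (ants, cons) in rules.items()
            if rid not in fired                     # don't re-fire a rule
            and all(a in wm for a in ants)          # all antecedents satisfied
            and cons not in wm                      # conclusion not yet known
        ]
        if not conflict_set:
            return wm
        rid = conflict_set[0]        # conflict resolution: pick the first rule
        wm.add(rules[rid][1])        # execute the consequent: extend the WM
        fired.add(rid)

wm = forward_chain({"nose_runny", "temperature_high", "eyes_bloodshot"})
print(sorted(wm))
```

Running this fires r1 (adding the flu diagnosis to the WM) and then r2, after which the conflict set is empty and the cycle stops.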
Syllabus Topic : Development of Expert Systems

6.24 DEVELOPMENT OF AN EXPERT SYSTEM

1. Identification of the problem
2. Decision about the mode of development
3. Development of a prototype
4. Planning for a full-scale system
5. Final implementation, maintenance and evolution

1. Identification of the problem

In this stage, the expert and the knowledge engineer interact to identify the problem. The major points discussed before for the characteristics of the problem are studied. The scope and the extent are pondered. The amount of resources needed, e.g. manpower, computing resources, finance, etc., is identified. The return-on-investment analysis is done. Areas in the problem which can give much trouble are identified, and a conceptual solution for that problem and the overall specification are made.

2. Decision about the mode of development

Once the problem is identified, the immediate step would be to decide on the vehicle for development. The knowledge engineer can develop the system from scratch using a programming language like PROLOG or LISP or any conventional language, or adopt a shell for development. In this stage, various shells and tools are identified and analyzed for their suitability. Those tools whose features fit the characteristics of the problem are analyzed in detail.

3. Development of a prototype

Before developing a prototype, the following are the prerequisite activities:

Decide on what concepts are needed to produce the solution. One important factor to be decided here is the level of knowledge (granularity). Starting with coarse granularity, the system development proceeds towards fine granularity.

After this, the task of knowledge acquisition begins. The knowledge engineer and the domain expert interact frequently and the domain-specific knowledge is extracted.

Once the knowledge is acquired, the knowledge engineer decides on the method of representation. In the identification phase, a conceptual picture of knowledge representation would have emerged. In this stage, that view is either enforced or modified.

When the knowledge representation scheme and the knowledge are available, a prototype is constructed. This prototype undergoes the process of testing for various problems, and revision of the prototype takes place. By this process, knowledge of fine granularity emerges and this is effectively coded in the knowledge base.

4. Planning for a full-scale system

The success of the prototype provides the needed impetus for the full-scale system. In prototype construction, the area in the problem which can be implemented with relative ease is chosen first. In the full-scale implementation, sub-system development is assigned a group leader and schedules are drawn. Use of Gantt chart, PERT or CPM techniques is welcome.

5. Final implementation, maintenance and evolution

This is the final life-cycle stage of an expert system. The full-scale system developed is implemented at the site. The basic resource requirements at the site are fulfilled, and parallel conversion and testing techniques are adopted. The final system undergoes rigorous testing and is later handed over to the user.