Intelligent System Sem-VII Lab-Manual

1. Experiment 2 implemented uninformed and informed searches to solve problems. Uninformed searches included breadth-first search (BFS) and depth-first search (DFS). BFS explores the neighbour nodes of the starting node in layers, while DFS explores each branch as far as possible before backtracking.
2. The informed search implemented was A*, which uses a heuristic function to estimate the cost of the shortest path to the goal and guide the search towards the optimal path.
3. The experiments aimed to understand and implement different graph search algorithms for problem solving.

Uploaded by

Farru Gh
Copyright © All Rights Reserved

Xavier Institute of Engineering

Department of Information Technology

Sem:VII Course code: ITL703 Course name: Intelligent System Lab

Faculty name: Prof. Jaychand U. Academic Year: 2020-21


Programming Tools Used : Python, Java, Prolog

Laboratory Manual

Experiment No | Detailed Content of Experiment | Hours | LO Mapping
1 | Tutorial exercise for: a) Design of Intelligent System using PEAS; b) Problem Definition with State Space Representation | 2 | LO 1, LO 2
2 | Implementation of Uninformed and Informed searches: a) Uninformed Search (BFS & DFS); b) Informed Search (A*) | 6 | LO 2
3 | Implementation of Constraint Satisfaction Problem and Game playing: a) N-Queen Problem; b) 8-puzzle game | 4 | LO 3
4 | Assignment on: a) Predicate Logic for forward/backward chaining and Resolution; b) Design of a planning system using STRIPS | 4 | LO 4
5 | Implementation of Bayes' Belief Network | 2 | LO 5
6 | Mini project on: a) Animal Identification System; b) Medical Diagnosis System | 8 | LO 6
Experiment No:1

A) Design of Intelligent System Using PEAS

Automated Taxi Driver


Vacuum Cleaner Agent

Aim: To understand the concept of PEAS.

PEAS: PEAS stands for Performance measure, Environment, Actuators, Sensors.

a. Automated Taxi Driver

In designing an agent, the first step must always be to specify the task environment as fully as possible.
To understand PEAS in a better way, let us try to analyse the complex problem of an automated taxi
driver, which is currently beyond the capabilities of existing technology. We will consider the
characteristics of PEAS for the description of the taxi's task environment.

Performance measure: First, consider the measures to which we would like an automated driver to
aspire. Desirable measures include getting to the correct destination, minimising fuel consumption
and wear and tear, minimising trip time and cost, minimising violations of traffic laws and
disturbance to other drivers, maximising safety and passenger comfort, and maximising profit. In
this scenario some of the goals conflict, so there will be trade-offs involved.

Environment: The basic question that comes to mind is: what driving environment will the taxi
face? A taxi driver will face a variety of roads, from rural lanes and urban alleys to 12-lane
freeways. The roads contain other traffic, pedestrians, stray animals, road works, police cars
and potholes. A taxi must also interact with potential and actual passengers. There might be
some restrictions on driving, such as left-hand-side driving as in India, Japan, etc., or
right-hand-side driving. The roads may also run through soaring-temperature desert areas or
snowfall regions like Kashmir. Thus, the more restricted the environment, the easier the design problem.

Actuators: The actuators available to an automated taxi will be more or less the same as those
available to a human driver (i.e., control over the engine through the accelerator, and control
over steering and braking). In addition, it will need output to a display screen or voice
synthesizer to talk back to the passengers, and perhaps some way to communicate with other drivers
or vehicles, politely or otherwise.

Sensors: The sensors will play a crucial role in determining where the taxi actually is, what else
is on the road and how fast it is going. The basic sensors should therefore include one or more
TV cameras, the tachometer and the odometer. To control the vehicle properly, especially on
curves, it will also need to know the mechanical state of the vehicle, so it will need the usual
array of engine and electrical system sensors. It might have instruments that are not available to
the average human driver: a satellite global positioning system (GPS) to give accurate position
information with respect to an electronic map, and infrared or sonar sensors to detect distances to
other cars and obstacles. Finally, it will require a keyboard or microphone for the passenger to
request a destination.

b. Vacuum Cleaner Agent

Performance: Cleanness, efficiency: distance travelled to clean, battery life, security.


Environment: Room, table, wood floor, carpet, different obstacles.
Actuators: Wheels, different brushes, vacuum extractor.
Sensors: Camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall sensors.

Conclusion: Thus, we have successfully designed the Taxi Driver and Vacuum Cleaner agents using PEAS.
Exp. 1 B) Problem Definition with State Space Representation

Aim: Implement Water Jug Problem Using Problem Formulation

Theory:

Problem Statement

In the water jug problem in Artificial Intelligence, we are provided with two jugs: one with the
capacity to hold 3 gallons of water and the other with the capacity to hold 4 gallons of water.
There is no other measuring equipment available, and the jugs do not have any kind of
marking on them. So, the agent's task here is to fill the 4-gallon jug with 2 gallons of water by
using only these two jugs and no other material. Initially, both jugs are empty.

So, to solve this problem, the following set of production rules was proposed:

Production rules for solving the water jug problem

Here, let x denote the 4-gallon jug and y denote the 3-gallon jug.

S.No. | Initial State | Condition | Final State | Description of action taken

1.  | (x, y) | if x < 4 | (4, y) | Fill the 4-gallon jug completely
2.  | (x, y) | if y < 3 | (x, 3) | Fill the 3-gallon jug completely
3.  | (x, y) | if x > 0 | (x-d, y) | Pour some water d out of the 4-gallon jug
4.  | (x, y) | if y > 0 | (x, y-d) | Pour some water d out of the 3-gallon jug
5.  | (x, y) | if x > 0 | (0, y) | Empty the 4-gallon jug
6.  | (x, y) | if y > 0 | (x, 0) | Empty the 3-gallon jug
7.  | (x, y) | if x+y >= 4 and y > 0 | (4, y-(4-x)) | Pour water from the 3-gallon jug into the 4-gallon jug until it is full
8.  | (x, y) | if x+y >= 3 and x > 0 | (x-(3-y), 3) | Pour water from the 4-gallon jug into the 3-gallon jug until it is full
9.  | (x, y) | if x+y <= 4 and y > 0 | (x+y, 0) | Pour all the water from the 3-gallon jug into the 4-gallon jug
10. | (x, y) | if x+y <= 3 and x > 0 | (0, x+y) | Pour all the water from the 4-gallon jug into the 3-gallon jug

The listed production rules contain all the actions that could be performed by the agent in
transferring the contents of jugs. But, to solve the water jug problem in a minimum number of
moves, following set of rules in the given sequence should be performed:

Solution of water jug problem according to the production rules:

S.No. | 4-gallon jug contents | 3-gallon jug contents | Rule followed
1. | 0 gallons | 0 gallons | Initial state
2. | 0 gallons | 3 gallons | Rule no. 2
3. | 3 gallons | 0 gallons | Rule no. 9
4. | 3 gallons | 3 gallons | Rule no. 2
5. | 4 gallons | 2 gallons | Rule no. 7
6. | 0 gallons | 2 gallons | Rule no. 5
7. | 2 gallons | 0 gallons | Rule no. 9

At the 7th step we reach (2, 0), i.e. 2 gallons in the 4-gallon jug, which is our goal state.
Therefore, at this state, our problem is solved.
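The production rules above can also be searched mechanically. The following is a minimal sketch of a breadth-first search over (x, y) states; rules 3 and 4, which pour an arbitrary amount onto the ground, are omitted here, since the unmarked jugs give no way to measure a partial pour.

```python
# Breadth-first search over water-jug states, sketching the production
# rules above. State is (x, y): x = 4-gallon jug, y = 3-gallon jug.
from collections import deque

def successors(x, y):
    """All states reachable from (x, y) by one production rule."""
    return {
        (4, y),                                    # rule 1: fill the 4-gallon jug
        (x, 3),                                    # rule 2: fill the 3-gallon jug
        (0, y),                                    # rule 5: empty the 4-gallon jug
        (x, 0),                                    # rule 6: empty the 3-gallon jug
        (min(4, x + y), y - (min(4, x + y) - x)),  # rules 7/9: pour 3 -> 4
        (x - (min(3, x + y) - y), min(3, x + y)),  # rules 8/10: pour 4 -> 3
    }

def solve(start=(0, 0), goal_x=2):
    """Shortest sequence of states putting goal_x gallons in the 4-gallon jug."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        x, y = path[-1]
        if x == goal_x:
            return path
        for nxt in successors(x, y):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(solve())  # a 7-state (6-move) solution, matching the table above
```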

Conclusion: Thus, we have successfully implemented water jug problem.


Experiment No:2

A) Implementation of Uninformed Search

Breadth first Search:

Theory: There are many ways to traverse graphs; BFS is the most commonly used approach.

BFS is a traversal algorithm: you start traversing from a selected node (the source or starting node)
and traverse the graph layer-wise, first exploring the neighbour nodes (nodes directly connected to
the source node), then moving on to the next-level neighbours.

As the name BFS suggests, you are required to traverse the graph breadthwise as follows:

1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer

Algorithm:

BFS (G, s)                          // Where G is the graph and s is the source node
    let Q be a queue
    Q.enqueue(s)                    // insert s in the queue
    mark s as visited
    while Q is not empty
        v = Q.dequeue()             // remove the vertex whose neighbours will be visited now
        for all neighbours w of v in Graph G    // process all the neighbours of v
            if w is not visited
                Q.enqueue(w)        // store w in Q to visit its neighbours later
                mark w as visited
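The pseudocode above can be sketched directly in Python; the small adjacency-list graph here is an illustrative assumption, not part of the lab exercise.

```python
# A minimal Python sketch of the BFS pseudocode above.
from collections import deque

def bfs(graph, source):
    """Return nodes in the order BFS visits them from source."""
    visited = {source}
    order = []
    queue = deque([source])
    while queue:
        v = queue.popleft()          # vertex whose neighbours are visited now
        order.append(v)
        for w in graph[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)      # store w to visit its neighbours later
    return order

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E'],
}
print(bfs(graph, 'A'))  # visits layer by layer: A, then B, C, then D, E, F
```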
Depth-First Search:

Theory:
1. Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures.
2. One starts at the root (selecting some arbitrary node as the root in the case of a graph) and
explores as far as possible along each branch before backtracking.

Algorithm:

DFS-iterative (G, s)                // Where G is the graph and s is the source vertex
    let S be a stack
    S.push(s)                       // insert s in the stack
    mark s as visited
    while S is not empty
        v = S.pop()                 // pop a vertex from the stack to visit next
        for all neighbours w of v in Graph G    // push the unvisited neighbours of v
            if w is not visited
                S.push(w)
                mark w as visited

DFS-recursive (G, s)
    mark s as visited
    for all neighbours w of s in Graph G
        if w is not visited
            DFS-recursive(G, w)
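The recursive variant above translates almost line for line into Python; the small example graph is again an illustrative assumption.

```python
# A minimal Python sketch of the recursive DFS pseudocode above.
def dfs(graph, s, visited=None, order=None):
    """Return nodes in the order recursive DFS visits them from s."""
    if visited is None:
        visited, order = set(), []
    visited.add(s)
    order.append(s)
    for w in graph[s]:
        if w not in visited:
            dfs(graph, w, visited, order)   # explore this branch fully first
    return order

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C'],
}
print(dfs(graph, 'A'))  # explores each branch as far as possible: A, B, D, C
```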

Conclusion: Thus in this way, we can implement Uninformed search method.


Exp. 2 B) Implementation of Informed Search

A* Search

Theory: A* is an informed best-first search that explores cheaper-looking paths first rather than the
longer paths. A* is optimal as well as a complete algorithm.

What do I mean by Optimal and Complete? Optimal means that A* is sure to find the least-cost path from the
source to the destination, and Complete means that it will always find a path to the destination if one
exists.

So that makes A* the best algorithm, right? Well, in most cases, yes. But A* is slow and the space it
requires is a lot, as it saves all the possible paths that are available to us. This gives other, faster
algorithms an upper hand over A*, but it is nevertheless one of the best algorithms out there. Every node
estimates itself using the following method:

f(n) = g(n) + h(n)

where g(n) is the cost to reach the current node from the start, and h(n) is the estimated cost to reach the goal from the current node.

Algorithm:

A* Algorithm():

 Add the start node to the open list

 While the open list is not empty, find the least-cost (lowest f) node on it
 Switch that node to the closed list
o For the 8 nodes adjacent to the current node:
o If the node is not reachable, ignore it. Else:
 If the node is not on the open list, move it to the open list and calculate f, g, h.
 If the node is already on the open list, check if the new path offers a lower cost than the
current one, and change to it if it does.
 Stop working when:
o You find the destination
o You cannot find the destination through all possible points
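As an illustration of f(n) = g(n) + h(n), here is a hedged sketch of A* on a small grid. The grid, the 4-connected moves and the Manhattan-distance heuristic are assumptions for this example (the steps above speak of 8 adjacent nodes).

```python
# A minimal A* sketch on a grid: 0 = free cell, 1 = wall.
import heapq

def a_star(grid, start, goal):
    """Return the least-cost path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan h(n)
    open_heap = [(h(start), 0, start, [start])]              # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)          # lowest f first
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1                                   # new g(n)
                if ng < best_g.get((r, c), float('inf')):    # better path found
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
print(path)  # goes around the wall row
```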

Conclusion: Thus in this way, we can implement Informed search method.


Experiment No:3

A) Implement Constraint Satisfaction problem : N-Queen problem

Theory:
This problem is to find an arrangement of N queens on a chess board, such that no queen can attack any other queen on the board.

A chess queen can attack horizontally, vertically and diagonally.

A binary matrix is used to display the positions of the N queens, where no queen can attack another queen.

Input:
The size of the chess board. Generally, it is 8, as 8 x 8 is the size of a
normal chess board.
Output:
The matrix that represents in which row and column the N queens can be placed.
If a solution does not exist, it will return false.

1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0
0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 1
0 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0
0 0 0 0 0 1 0 0
0 0 1 0 0 0 0 0

In this output, the value 1 indicates the correct place for the queens.
The 0 denotes the blank spaces on the chess board.

Algorithm:

isValid(board, row, col)

Input: The chess board, and the row and column to check.

Output: true if placing a queen at (row, col) is valid, false otherwise.

Begin
   if there is a queen to the left of the current column in the same row, then
      return false
   if there is a queen on the upper-left diagonal, then
      return false
   if there is a queen on the lower-left diagonal, then
      return false
   return true   // otherwise it is a valid place
End
solveNQueen(board, col)

Input: The chess board, and the column col where a queen is to be placed.

Output: The position matrix where the queens are placed.

Begin
   if all columns are filled, then
      return true
   for each row i of the board, do
      if isValid(board, i, col), then
         set a queen at place (i, col) on the board
         if solveNQueen(board, col+1) = true, then
            return true
         otherwise remove the queen from place (i, col) on the board
   done
   return false
End
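The isValid/solveNQueen pseudocode above can be sketched directly as a Python backtracking solver (a minimal sketch; the permutation-based program below takes a different, brute-force route).

```python
# Backtracking N-queens following the pseudocode above; board[r][c] == 1 marks a queen.
def is_valid(board, row, col):
    n = len(board)
    for c in range(col):                       # queen to the left in this row?
        if board[row][c]:
            return False
    for r, c in zip(range(row - 1, -1, -1), range(col - 1, -1, -1)):
        if board[r][c]:                        # upper-left diagonal
            return False
    for r, c in zip(range(row + 1, n), range(col - 1, -1, -1)):
        if board[r][c]:                        # lower-left diagonal
            return False
    return True

def solve_n_queen(board, col=0):
    n = len(board)
    if col == n:                               # all columns filled
        return True
    for row in range(n):
        if is_valid(board, row, col):
            board[row][col] = 1                # place a queen
            if solve_n_queen(board, col + 1):
                return True
            board[row][col] = 0                # backtrack
    return False

n = 8
board = [[0] * n for _ in range(n)]
if solve_n_queen(board):
    for row in board:
        print(*row)
```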

Program:

from itertools import permutations, combinations

n = int(input('How big is your chess board? '))
x = range(1, n + 1)

def is_diagonal(point1, point2):
    # Two queens attack diagonally when the line joining them has slope 1 or -1.
    x1, y1 = point1
    x2, y2 = point2
    gradient = (y2 - y1) / (x2 - x1)
    return gradient == 1 or gradient == -1

# Every permutation of 1..n places the queens on distinct rows and columns,
# so only diagonal attacks remain to be checked.
list_of_permutations = []
for permutation in permutations(range(1, n + 1)):
    list_of_permutations.append(list(zip(x, permutation)))

for possible_solution in list_of_permutations:
    clashes = []
    for piece1, piece2 in combinations(possible_solution, 2):
        clashes.append(is_diagonal(piece1, piece2))
    if True not in clashes:
        print(possible_solution)

Conclusion: Thus in this way, we can implement N-Queen problem.


Exp 3 B) Implement 8-puzzle problem game with heuristic function using hill climbing (informed search)

Theory:
In an 8-puzzle game, we need to rearrange some tiles to reach a predefined goal state.
Consider the following 8-puzzle board.

This is the goal state, where each tile is in its correct place. In this game, you will be given a
board where the tiles aren't in their correct places. You need to move the tiles using the gap to
reach the goal state.

Suppose f (n) can be defined as: the number of misplaced tiles.

In the above figure, tiles 6, 7 and 8 are misplaced. So f (n) = 3 for this case.

For solving this problem with hill climbing search, we need to set a value for the heuristic.
Suppose the heuristic function h(n) is the lowest f(n) reachable from a given state. First, we
need to know all the possible moves from the current state. Then we have to calculate f(n)
(the number of misplaced tiles) for each possible move. Finally, we choose the move with the
lowest f(n) (which gives our h(n), the heuristic).

Consider the figure above. Here, 3 moves are possible from the current state. For each resulting
state we have calculated f(n). From the current state, it is optimal to move to the state with
f(n) = 3, as it is closer to the goal state. So we have h(n) = 3.

However, can we really guarantee that this will reach the goal state? What will you do if you
reach a state (not the goal state) from which there are no better neighbour states? This
condition is called a local maximum, and it is the main problem of hill climbing search:
we may get stuck in a local maximum. In this scenario, you need to backtrack to a
previous state and perform the search again to get rid of the path containing the local maximum.

What will happen if we reach a state where all the f(n) values are equal? This condition is
called a plateau. You need to select a state at random and perform the hill climbing search
again!
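Before the full search program below, the hill-climbing idea itself can be sketched in a few lines of Python. This is a minimal sketch under some assumptions: the board is a row-major tuple with 0 as the blank (the program below stores it column-major), and this simple version just stops at a local maximum or plateau instead of backtracking or restarting.

```python
# Hill climbing on the 8-puzzle with the misplaced-tiles heuristic f(n).
GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)

def f(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def neighbours(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    result = []
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            result.append(tuple(s))
    return result

def hill_climb(state):
    """Repeatedly move to the neighbour with the lowest f; stop when stuck."""
    while f(state) > 0:
        best = min(neighbours(state), key=f)
        if f(best) >= f(state):      # local maximum or plateau: give up
            return state
        state = best
    return state

start = (1, 2, 3, 8, 4, 5, 7, 6, 0)  # two moves away from the goal
print(hill_climb(start))             # reaches the goal state here
```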

Program: Eight Puzzle Problem Using Python

# The state of the board is stored in a list, column by column. The list
# stores values for the board in the following positions:
#
# -------------
# | 0 | 3 | 6 |
# -------------
# | 1 | 4 | 7 |
# -------------
# | 2 | 5 | 8 |
# -------------
#
# The goal is defined as:
#
# -------------
# | 1 | 2 | 3 |
# -------------
# | 8 | 0 | 4 |
# -------------
# | 7 | 6 | 5 |
# -------------
#
# Where 0 denotes the blank tile or space.
goal_state = [1, 8, 7, 2, 0, 6, 3, 4, 5]
#
# The code will read the start state from a file called "state.txt" where the
# format is as above but space separated, i.e. the content for the goal state
# would be: 1 8 7 2 0 6 3 4 5

### Code begins.

def display_board(state):
    print("------------")
    print("| %i | %i | %i |" % (state[0], state[3], state[6]))
    print("------------")
    print("| %i | %i | %i |" % (state[1], state[4], state[7]))
    print("------------")
    print("| %i | %i | %i |" % (state[2], state[5], state[8]))
    print("------------")

def move_up(state):
    """Moves the blank tile up on the board. Returns a new state as a list."""
    new_state = state[:]  # perform an object copy
    index = new_state.index(0)
    if index not in [0, 3, 6]:  # sanity check: blank is not in the top row
        # Swap the values.
        new_state[index - 1], new_state[index] = new_state[index], new_state[index - 1]
        return new_state
    else:
        # Can't move, return None.
        return None

def move_down(state):
    """Moves the blank tile down on the board. Returns a new state as a list."""
    new_state = state[:]
    index = new_state.index(0)
    if index not in [2, 5, 8]:  # sanity check: blank is not in the bottom row
        new_state[index + 1], new_state[index] = new_state[index], new_state[index + 1]
        return new_state
    else:
        return None

def move_left(state):
    """Moves the blank tile left on the board. Returns a new state as a list."""
    new_state = state[:]
    index = new_state.index(0)
    if index not in [0, 1, 2]:  # sanity check: blank is not in the left column
        new_state[index - 3], new_state[index] = new_state[index], new_state[index - 3]
        return new_state
    else:
        return None

def move_right(state):
    """Moves the blank tile right on the board. Returns a new state as a list."""
    new_state = state[:]
    index = new_state.index(0)
    if index not in [6, 7, 8]:  # sanity check: blank is not in the right column
        new_state[index + 3], new_state[index] = new_state[index], new_state[index + 3]
        return new_state
    else:
        return None

def create_node(state, parent, operator, depth, cost):
    return Node(state, parent, operator, depth, cost)

def expand_node(node, nodes):
    """Returns a list of expanded nodes."""
    expanded_nodes = [
        create_node(move_up(node.state), node, "u", node.depth + 1, 0),
        create_node(move_down(node.state), node, "d", node.depth + 1, 0),
        create_node(move_left(node.state), node, "l", node.depth + 1, 0),
        create_node(move_right(node.state), node, "r", node.depth + 1, 0),
    ]
    # Filter the list and remove the impossible moves (the move function
    # returned None) -- a list comprehension!
    expanded_nodes = [n for n in expanded_nodes if n.state is not None]
    return expanded_nodes

def bfs(start, goal):
    """Performs a breadth first search from the start state to the goal."""
    nodes = []  # a list can act as a queue for the nodes
    # Create the queue with the root node in it.
    nodes.append(create_node(start, None, None, 0, 0))
    while True:
        if len(nodes) == 0:
            return None  # we've run out of states, no solution
        node = nodes.pop(0)  # take the node from the front of the queue
        if node.state == goal:
            # This node is the goal: return the moves it took to get here.
            moves = []
            temp = node
            while True:
                moves.insert(0, temp.operator)
                if temp.depth <= 1:
                    break
                temp = temp.parent
            return moves
        # Expand the node and add all the expansions to the back of the queue.
        nodes.extend(expand_node(node, nodes))

def dfs(start, goal, depth=10):
    """Performs a depth-limited depth first search from the start state to the
    goal. The depth parameter is optional."""
    # NOTE: the limit stops the search from repeating moves forever in this
    # infinite search space. ids() below calls this function with increasing
    # limits, which avoids the problem.
    depth_limit = depth
    nodes = []  # a list can act as a stack too
    nodes.append(create_node(start, None, None, 0, 0))
    while True:
        if len(nodes) == 0:
            return None  # we've run out of states, no solution
        node = nodes.pop(0)  # take the node from the front of the stack
        if node.state == goal:
            moves = []
            temp = node
            while True:
                moves.insert(0, temp.operator)
                if temp.depth <= 1:
                    break
                temp = temp.parent
            return moves
        # Add all the expansions to the beginning of the stack if we are
        # under the depth limit.
        if node.depth < depth_limit:
            expanded_nodes = expand_node(node, nodes)
            expanded_nodes.extend(nodes)
            nodes = expanded_nodes

def ids(start, goal, depth=50):
    """Performs an iterative deepening search from the start state to the
    goal. The depth parameter is optional."""
    for i in range(depth):
        result = dfs(start, goal, i)
        if result is not None:
            return result

def a_star(start, goal):
    """Performs an A* heuristic search."""
    # NOTE: a tree search without a visited set, so it can be slow.
    nodes = []
    nodes.append(create_node(start, None, None, 0, 0))
    while True:
        if len(nodes) == 0:
            return None  # we've run out of states, no solution
        # Sort the nodes by f(n) = g(n) + h(n); depth (the number of moves)
        # serves as g(n).
        nodes.sort(key=lambda n: n.depth + h(n.state, goal))
        node = nodes.pop(0)  # take the node with the lowest f
        print("Trying state", node.state, " and move: ", node.operator)
        if node.state == goal:
            moves = []
            temp = node
            while True:
                moves.insert(0, temp.operator)
                if temp.depth <= 1:
                    break
                temp = temp.parent
            return moves
        # Expand the node and add all expansions to the end of the queue.
        nodes.extend(expand_node(node, nodes))

def h(state, goal):
    """Heuristic for the A* search. Returns an integer based on the number of
    out-of-place tiles."""
    score = 0
    for i in range(len(state)):
        if state[i] != goal[i]:
            score = score + 1
    return score

# Node data structure
class Node:
    def __init__(self, state, parent, operator, depth, cost):
        self.state = state        # the state of the node
        self.parent = parent      # the node that generated this node
        self.operator = operator  # the operation that generated this node
        self.depth = depth        # the depth of this node (parent.depth + 1)
        self.cost = cost          # path cost; not used for depth/breadth first

def readfile(filename):
    with open(filename) as f:
        data = f.read()
    data = data.strip("\n")  # get rid of the newlines
    # Break the string into a list using a space as a separator.
    data = data.split(" ")
    state = []
    for element in data:
        state.append(int(element))
    return state

# Main method
def main():
    starting_state = readfile("state.txt")
    ### CHANGE THIS LINE TO USE bfs, dfs, ids or a_star
    result = ids(starting_state, goal_state)
    if result is None:
        print("No solution found")
    elif result == [None]:
        print("Start node was the goal!")
    else:
        print(result)
        print(len(result), " moves")

# A python-ism: run main() only when the file is executed directly.
if __name__ == "__main__":
    main()

Conclusion: Thus in this way, we can implement 8 puzzle game.


Experiment No:4
A) Assignment on Predicate Logic for
forward/backward chaining and Resolution

Theory: Inference is the act or method of deriving logical conclusions from premises known
or assumed to be true; the conclusion drawn is itself called an inference. The laws
of valid inference are studied within the field of logic.
Human inference (i.e. how humans draw conclusions) is traditionally studied within the
field of cognitive psychology; artificial intelligence researchers develop automated
inference systems to emulate human inference. Statistical inference permits inference from
quantitative data.
The process by which a conclusion is inferred from multiple observations is called
inductive reasoning. The conclusion may be correct or incorrect, or correct to within a certain
degree of accuracy, or correct in certain situations. Conclusions inferred from multiple
observations may be tested by additional observations. An inference is a conclusion reached on the
basis of evidence and reasoning, or the process of reaching such a conclusion: "order, health,
and by inference cleanliness".
The validity of an inference depends on the form of the inference. That is, the word
"valid" does not refer to the truth of the premises or the conclusion, but rather to the form of
the inference. An inference may be valid even if its parts are false, and may be invalid
even if its parts are true. However, a valid form with true premises will always have a
true conclusion.
For example,
All fruits are sweet.
A banana is a fruit.
Therefore, a banana is sweet.
For the conclusion to be necessarily true, the premises need to be true.
To show that this form is invalid, we demonstrate how it can lead from true premises to a false
conclusion.
All apples are fruit. (Correct)
Bananas are fruit. (Correct)
Therefore, bananas are apples. (Wrong)
A valid argument with false premises may lead to a false conclusion:
All tall people are Greek.
John Lennon was tall.
Therefore, John Lennon was Greek.
When a valid argument is used to derive a false conclusion from false premises, the inference
is valid because it follows the form of a correct inference. A valid argument can also be used
to derive a true conclusion from false premises:
All tall people are musicians.
John Lennon was tall.
Therefore, John Lennon was a musician.
In this case we have two false premises that imply a true conclusion.
In mathematical logic and automated theorem proving, resolution is a rule of inference
leading to a refutation theorem-proving technique for sentences in propositional logic and
first-order logic. In other words, iteratively applying the resolution rule in a suitable
way allows for telling whether a propositional formula is satisfiable, and for proving that a
first-order formula is unsatisfiable; this method may prove the satisfiability of a first-order
satisfiable formula, but not always, as is the case for all methods for first-order logic.
Resolution was introduced by John Alan Robinson in 1965.

Forward Chaining:
If P is true and P → Q, then Q is true.
E.g. Rani is hungry. If Rani is hungry then she barks. If Rani barks then Raja gets angry.
Prove that Raja is angry using forward chaining.

Assume P = Rani is hungry, Q = Rani barks, R = Raja is angry.

Given P = True
P → Q
----------------
Q = True ......... (using forward chaining)
Q → R
----------------
R = True ......... (using forward chaining)
Hence Raja is angry.

Backward Chaining:
To prove that Q is true, find a rule P → Q and then prove that P is true.
E.g. Rani is hungry. If Rani is hungry then she barks. If Rani barks then Raja gets angry.
Prove that Raja is angry using backward chaining.

Assume P = Rani is hungry, Q = Rani barks, R = Raja is angry.

Goal: R
Q → R
----------------
so it is enough to prove Q ......... (using backward chaining)
P → Q
----------------
so it is enough to prove P ......... (using backward chaining)
P is given as true; hence Raja is angry.
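The forward-chaining steps above can be sketched in Python. This is a minimal sketch; the encoding of facts as symbols and rules as (premise, conclusion) pairs is an illustrative assumption.

```python
# Forward chaining over single-premise Horn rules: fire rules (modus ponens)
# until no new fact can be derived.
def forward_chain(facts, rules):
    """facts: set of known-true symbols; rules: list of (premise, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)   # premise and premise -> conclusion
                changed = True
    return facts

rules = [("P", "Q"),   # if Rani is hungry then Rani barks
         ("Q", "R")]   # if Rani barks then Raja gets angry
derived = forward_chain({"P"}, rules)   # given: P, Rani is hungry
print("R" in derived)  # True: Raja is angry
```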

Resolution Rule: The resolution rule in propositional logic is a single valid inference rule
that produces a new clause implied by two clauses containing complementary literals. A
literal is a propositional variable or the negation of a propositional variable. Two literals are
said to be complements if one is the negation of the other (in the following, ¬c is taken to be the
complement of c). The resulting clause contains all the literals that do not have complements.
Formally:

    a1 ∨ ... ∨ ai ∨ c        b1 ∨ ... ∨ bj ∨ ¬c
    -------------------------------------------
    a1 ∨ ... ∨ ai ∨ b1 ∨ ... ∨ bj

where
    all the a's and b's are literals,
    ¬c is the complement of c, and
    the dividing line stands for "entails".
The clause produced by the resolution rule is called the resolvent of the two input clauses.
When the two clauses contain more than one pair of complementary literals, the
resolution rule can be applied (independently) for each such pair; however, the result is
always a tautology.
Modus ponens can be seen as a special case of resolution of a one-literal clause and a
two-literal clause.
A Resolution Technique: When coupled with a complete search algorithm, the resolution
rule yields a sound and complete algorithm for deciding the satisfiability of a propositional
formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any
sentence in propositional logic can be transformed into an equivalent sentence in conjunctive
normal form. The steps are as follows.
All sentences in the knowledge base and the negation of the sentence to be proved
(the conjecture) are conjunctively connected.
The resulting sentence is transformed into a conjunctive normal form with the
conjuncts viewed as elements in a set, S, of clauses.

Algorithm: The resolution rule is applied to all possible pairs of clauses that contain
complementary literals. After each application of the resolution rule, the resulting sentence is
simplified by removing repeated literals. If the sentence contains complementary literals, it is
discarded (as a tautology). If not, and if it is not yet present in the clause set S, it is added to
S, and is considered for further resolution inferences.
If after applying a resolution rule the empty clause is derived, the original formula is
unsatisfiable (or contradictory), and hence, it can be concluded that the initial conjecture
follows from the axioms.
If, on the other hand, the empty clause cannot be derived, and the resolution rule
cannot be applied to derive any more new clauses, the conjecture is not a theorem of the
original knowledge base.
One instance of this algorithm is the original Davis–Putnam algorithm that was later
refined into the DPLL algorithm that removed the need for explicit representation of the
resolvents.
This description of the resolution technique uses a set S as the underlying data structure
to represent resolution derivations. Lists, trees and directed acyclic graphs are
other possible and common alternatives. Tree representations are more faithful to the fact that
the resolution rule is binary. Together with a sequent notation for clauses, a tree
representation also makes it easy to see how the resolution rule is related to a special case of
the cut-rule, restricted to atomic cut-formulas. However, tree representations are not as
compact as set or list representations, because they explicitly show redundant subderivations
of clauses that are used more than once in the derivation of the empty clause. Graph
representations can be as compact in the number of clauses as list representations, and they
also store structural information regarding which clauses were resolved to derive each
resolvent.
A simple example
From the premises a ∨ b and ¬a ∨ c, resolution derives the conclusion b ∨ c.
In plain language: Suppose a is false. In order for the premise a ∨ b to be true, b must
be true. Alternatively, suppose a is true. In order for the premise ¬a ∨ c to be true, c must
be true. Therefore, regardless of the falsehood or veracity of a, if both premises hold, then the
conclusion b ∨ c is true.
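For instance, resolving the premises a ∨ b and ¬a ∨ c on the literal a yields b ∨ c; a quick exhaustive check over all eight truth assignments confirms that the resolvent holds in every model of the premises:

```python
from itertools import product

# premises: (a or b) and ((not a) or c); resolvent: (b or c)
ok = all(
    b or c                              # resolvent
    for a, b, c in product([False, True], repeat=3)
    if (a or b) and ((not a) or c)      # consider only models of the premises
)
print(ok)  # True
```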
Resolution in First-Order Logic: In first-order logic, resolution condenses the traditional
syllogisms of logical inference down to a single rule.
To understand how resolution works, consider the following example syllogism of term
logic:
All Greeks are Europeans.
Homer is a Greek.
Therefore, Homer is a European.
∀X Greek(X) → European(X)
Greek(homer)
Therefore, European(homer)
Or, more generally:
∀X P(X) → Q(X)
P(a)
Therefore, Q(a)
To recast the reasoning using the resolution technique, first the clauses must be
converted to conjunctive normal form. In this form, all quantification becomes
implicit: universal quantifiers on variables (X, Y, …) are simply omitted as understood,
while existentially quantified variables are replaced by Skolem functions.
¬P(X) ∨ Q(X)
P(a)
Therefore, Q(a)
So, the question is, how does the resolution technique derive the last clause from the
first two? The rule is simple:
Find two clauses containing the same predicate, where it is negated in one clause but
not in the other.
Perform unification on the two predicates. (If the unification fails, you made a bad
choice of predicates. Go back to the previous step and try again.)
If any unbound variables which were bound in the unified predicates also occur in
other predicates in the two clauses, replace them with their bound values (terms) there as well.
Discard the unified predicates, and combine the remaining ones from the two clauses
into a new clause, also joined by the "∨" operator.
To apply this rule to the above example, we find the predicate P occurs in negated form
¬P(X)
in the first clause, and in non-negated form
P(a)
in the second clause. X is an unbound variable, while a is a bound value (term). Unifying the
two produces the substitution
X↦a
Discarding the unified predicates, and applying this substitution to the remaining
predicates (just Q(X), in this case), produces the conclusion:
Q(a)
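The unification step of the rule can be sketched in Python for this simple case. As an assumption of this sketch, terms are plain strings and names starting with an uppercase letter are treated as variables, following the Prolog convention used above:

```python
def is_var(t):
    """Uppercase-initial strings are variables (Prolog convention)."""
    return isinstance(t, str) and t[0].isupper()

def unify(args1, args2):
    """Unify two argument lists of the same predicate; return a
    substitution dict, or None if unification fails."""
    subst = {}
    for t1, t2 in zip(args1, args2):
        t1, t2 = subst.get(t1, t1), subst.get(t2, t2)  # apply bindings so far
        if t1 == t2:
            continue
        if is_var(t1):
            subst[t1] = t2
        elif is_var(t2):
            subst[t2] = t1
        else:
            return None          # two distinct constants: failure
    return subst

# unify P(X) with P(a): the substitution X |-> a
subst = unify(["X"], ["a"])
print(subst)                     # {'X': 'a'}
# applying it to the remaining literal Q(X) yields the conclusion Q(a)
print("Q(%s)" % subst.get("X"))  # Q(a)
```

A full first-order resolution prover also needs compound terms and an occurs check; this sketch only covers the variable-against-constant case the example uses.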
For another example, consider the syllogistic form
All Cretans are islanders.
All islanders are liars.
Therefore, all Cretans are liars.
Or more generally,
∀X P(X) → Q(X)
∀X Q(X) → R(X)
Therefore, ∀X P(X) → R(X)
In CNF, the antecedents become:
¬P(X) ∨ Q(X)
¬Q(Y) ∨ R(Y)
(Note that the variable in the second clause was renamed to make it clear that variables in
different clauses are distinct.)
Now, unifying Q(X) in the first clause with ¬Q(Y) in the second clause means
that X and Y become the same variable anyway. Substituting this into the remaining clauses
and combining them gives the conclusion:
¬P(X) ∨ R(X)
The resolution rule, as defined by Robinson, also incorporated factoring, which
unifies two literals in the same clause, before or during the application of resolution as
defined above. The resulting inference rule is refutation complete, in that a set of clauses is
unsatisfiable if and only if there exists a derivation of the empty clause using resolution
alone.

B) Assignment on Design of a Planning System Using STRIPS (Blocks World Problem)

Aim: To study a planning agent.


Theory:

Language of Planning Problem:

What is STRIPS?

The Stanford Research Institute Problem Solver (STRIPS) is an automated planning
technique that searches the states defined by a domain and a problem to find a goal. With
STRIPS, you first describe the world by providing objects, actions, preconditions, and effects.
These are all the types of things you can do in the game world.

Once the world is described, you then provide a problem set. A problem consists of an initial
state and a goal condition. STRIPS can then search all possible states, starting from the initial
one, executing various actions, until it reaches the goal.

A common language for writing STRIPS domain and problem sets is the Planning Domain
Definition Language (PDDL). PDDL lets you write most of the code with English words, so
that it can be clearly read and (hopefully) well understood. It’s a relatively easy approach to
writing simple AI planning problems.

Problem statement
Design a planning agent for a Blocks World problem. Assume suitable initial state and final
state for the problem.
 Representation of goal/intention to achieve
 Representation of actions it can perform; and
 Representation of the environment;
Then have the agent generate a plan to achieve the goal.
The plan is generated entirely by the planning system, without human intervention.
Assume start & goal states as below:

a. STRIPS : A planning system – has rules, each with a precondition list, a deletion list and
an addition list.

R1 : pickup(x)
1. Precondition & Deletion List : hand empty, on(x,table), clear(x)
2. Add List : holding(x)

R2 : putdown(x)
1. Precondition & Deletion List : holding(x)
2. Add List : hand empty, on(x,table), clear(x)

R3 : stack(x,y)
1. Precondition & Deletion List : holding(x), clear(y)
2. Add List : on(x,y), clear(x), hand empty

R4 : unstack(x,y)
1. Precondition & Deletion List : on(x,y), clear(x), hand empty
2. Add List : holding(x), clear(y)

Plan for the assumed blocks world problem


For the given problem, Start → Goal can be achieved by the following sequence:
1. Unstack(C,A)
2. Putdown(C)
3. Pickup(B)
4. Stack(B,C)
5. Pickup(A)
6. Stack(A,B)
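The plan can be checked mechanically by applying each rule's deletion and addition lists to a state. A minimal sketch follows, assuming the initial state on(C,A), on(A,table), on(B,table) with C and B clear; the string encoding of predicates is an illustrative choice, not part of STRIPS itself:

```python
# STRIPS operators as (precondition list, delete list, add list).
def ops(x, y=None):
    return {
        "pickup":  ({f"on({x},table)", f"clear({x})", "handempty"},
                    {f"on({x},table)", f"clear({x})", "handempty"},
                    {f"holding({x})"}),
        "putdown": ({f"holding({x})"},
                    {f"holding({x})"},
                    {f"on({x},table)", f"clear({x})", "handempty"}),
        "stack":   ({f"holding({x})", f"clear({y})"},
                    {f"holding({x})", f"clear({y})"},
                    {f"on({x},{y})", f"clear({x})", "handempty"}),
        "unstack": ({f"on({x},{y})", f"clear({x})", "handempty"},
                    {f"on({x},{y})", f"clear({x})", "handempty"},
                    {f"holding({x})", f"clear({y})"}),
    }

def apply_rule(state, name, *args):
    pre, dele, add = ops(*args)[name]
    assert pre <= state, f"{name}{args}: precondition not met"
    return (state - dele) | add    # delete, then add

state = {"on(C,A)", "on(A,table)", "on(B,table)",
         "clear(C)", "clear(B)", "handempty"}
plan = [("unstack", "C", "A"), ("putdown", "C"),
        ("pickup", "B"), ("stack", "B", "C"),
        ("pickup", "A"), ("stack", "A", "B")]
for name, *args in plan:
    state = apply_rule(state, name, *args)

assert {"on(A,B)", "on(B,C)", "on(C,table)"} <= state
print("plan achieves the goal")
```

Each step's precondition is verified before the delete/add lists are applied, so an invalid plan fails loudly at the offending step.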
Experiment No:5
Implementation of Bayes' belief network (probabilistic reasoning in an uncertain
domain)

Aim: To implement a Bayes' belief network (probabilistic reasoning in an uncertain domain).

Theory:
Probabilistic reasoning
The aim of probabilistic reasoning is to combine the capacity of probability theory to handle uncertainty
with the capacity of deductive logic to exploit structure. The result is a richer and more
expressive formalism with a broad range of possible application areas. Probabilistic logics
attempt to find a natural extension of traditional logic truth tables: the results they define are
derived through probabilistic expressions instead. A difficulty with probabilistic logics is that
they tend to multiply the computational complexities of their probabilistic and logical
components. Other difficulties include the possibility of counter-intuitive results, such as
those of Dempster-Shafer theory. The need to deal with a broad variety of contexts and issues
has led to many different proposals.
Probabilistic Reasoning Using Bayesian Learning: The idea of Bayesian learning is to
compute the posterior probability distribution of the target features of a new example
conditioned on its input features and all of the training examples.

Suppose a new case has inputs X=x and has target features, Y; the aim is to compute
P(Y|X=x∧e), where e is the set of training examples. This is the probability distribution of the
target variables given the particular inputs and the examples. The role of a model is to be the
assumed generator of the examples. If we let M be a set of disjoint and covering models, then
reasoning by cases and the chain rule give

P(Y|x∧e) = ∑m∈M P(Y ∧m |x∧e)


= ∑m∈M P(Y | m ∧x∧e) ×P(m|x∧e)
= ∑m∈M P(Y | m ∧x) ×P(m|e) .

The first two equalities are theorems from the definition of probability. The last
equality makes two assumptions: the model includes all of the information about the
examples that is necessary for a particular prediction [i.e., P(Y | m ∧x∧e)= P(Y | m ∧x) ], and
the model does not change depending on the inputs of the new example [i.e., P(m|x∧e)= P(m|
e)]. This formula says that we average over the prediction of all of the models, where each
model is weighted by its posterior probability given the examples.

P(m|e) can be computed using Bayes' rule:

P(m|e) = (P(e|m)×P(m))/(P(e)) .

Thus, the weight of each model depends on how well it predicts the data (the
likelihood) and its prior probability. The denominator, P(e), is a normalizing constant to
make sure the
posterior probabilities of the models sum to 1. Computing P(e) can be very
difficult when there are many models.
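A tiny numeric illustration of this averaging and of the Bayes' rule computation, using two hypothetical models of a coin and made-up priors:

```python
# Two models of a coin: m1 says P(heads)=0.5, m2 says P(heads)=0.9.
models = {"m1": 0.5, "m2": 0.9}
prior  = {"m1": 0.5, "m2": 0.5}

# Evidence e: 3 heads observed in 3 independent tosses.
likelihood = {m: p**3 for m, p in models.items()}

# Bayes' rule: P(m|e) = P(e|m) x P(m) / P(e), with P(e) as normalizer.
p_e = sum(likelihood[m] * prior[m] for m in models)
posterior = {m: likelihood[m] * prior[m] / p_e for m in models}

# Averaged prediction: P(heads|e) = sum over m of P(heads|m) x P(m|e).
p_heads = sum(models[m] * posterior[m] for m in models)
print(round(p_heads, 3))   # 0.841
```

After three heads, the biased model m2 dominates the posterior, so the averaged prediction moves well above 0.5 without committing to a single model.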

A set {e1,...,ek} of examples is IID (independent and identically distributed), where the
distribution is given by model m, if for all i and j, examples ei and ej are independent
given m, which means P(ei∧ej|m)=P(ei|m)×P(ej|m). We usually assume that the examples
are IID.

Suppose the set of training examples e is {e1,...,ek}. That is, e is the conjunction of
the ei, because all of the examples have been observed to be true. The assumption that the
examples are IID implies

P(e|m) = ∏i=1..k P(ei|m)

The set of models may include structurally different models in addition to models that
differ in the values of the parameters. One of the techniques of Bayesian learning is to make
the parameters of the model explicit and to determine the distribution over the parameters.

Example: Consider the simplest learning task under uncertainty. Suppose there is a
single Boolean random variable, Y. One of two outcomes, a and ¬a, occurs for each example.
We want to learn the probability distribution of Y given some examples.

There is a single parameter, φ, that determines the set of all models. Suppose that φ
represents the probability of Y=true. We treat this parameter as a real-valued random
variable on the interval [0,1]. Thus, by definition of φ, P(a|φ)=φ and P(¬a|φ)=1-φ.

Suppose an agent has no prior information about the probability of Boolean variable Y
and no knowledge beyond the training examples. This ignorance can be modelled by having
the prior probability distribution of the variable φ as a uniform distribution over the
interval [0,1]. This is the probability density function labeled n0=0, n1=0.

We can update the probability distribution of φ given some examples. Assume that the
examples, obtained by running a number of independent experiments, are a particular
sequence of outcomes that consists of n0 cases where Y is false and n1 cases where Y is true.
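This update can be sketched numerically: starting from the uniform prior, the posterior over φ after n1 true and n0 false observations is proportional to φ^n1 × (1-φ)^n0 (a Beta(n1+1, n0+1) density), whose mean is (n1+1)/(n0+n1+2), Laplace's rule of succession. A grid approximation:

```python
# Posterior over phi on a discrete grid, starting from a uniform prior.
N = 10001
grid = [i / (N - 1) for i in range(N)]

def posterior_mean(n0, n1):
    # unnormalized posterior: uniform prior x likelihood phi^n1 (1-phi)^n0
    w = [p**n1 * (1 - p)**n0 for p in grid]
    return sum(p * wi for p, wi in zip(grid, w)) / sum(w)

# With no data the mean of phi is 0.5; after 2 trues and 1 false it is
# (n1+1)/(n0+n1+2) = 3/5.
print(round(posterior_mean(0, 0), 3))   # 0.5
print(round(posterior_mean(1, 2), 3))   # 0.6
```

The grid stands in for the continuous density; a closed-form Beta update gives the same answers exactly.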
Experiment No:6
Mini project on
a) Animal Identification System
/* animal.pro
animal identification game.

start with ?- go. */

go :- hypothesize(Animal),
write('I guess that the animal is: '),
write(Animal),
nl,
undo.

/* hypotheses to be tested */
hypothesize(cheetah) :- cheetah, !.
hypothesize(tiger) :- tiger, !.
hypothesize(giraffe) :- giraffe, !.
hypothesize(zebra) :- zebra, !.
hypothesize(ostrich) :- ostrich, !.
hypothesize(penguin) :- penguin, !.
hypothesize(albatross) :- albatross, !.
hypothesize(unknown). /* no diagnosis */

/* animal identification rules */


cheetah :- mammal,
carnivore,
verify(has_tawny_color),
verify(has_dark_spots).
tiger :- mammal,
carnivore,
verify(has_tawny_color),
verify(has_black_stripes).
giraffe :- ungulate,
verify(has_long_neck),
verify(has_long_legs).
zebra :- ungulate,
verify(has_black_stripes).

ostrich :- bird,
verify(does_not_fly),
verify(has_long_neck).
penguin :- bird,
verify(does_not_fly),
verify(swims),
verify(is_black_and_white).
albatross :- bird,
verify(appears_in_story_Ancient_Mariner),
verify(flys_well).

/* classification rules */
mammal :- verify(has_hair), !.
mammal :- verify(gives_milk).
bird :- verify(has_feathers), !.
bird :- verify(flys),
verify(lays_eggs).
carnivore :- verify(eats_meat), !.
carnivore :- verify(has_pointed_teeth),
verify(has_claws),
verify(has_forward_eyes).
ungulate :- mammal,
verify(has_hooves), !.
ungulate :- mammal,
verify(chews_cud).

/* how to ask questions */


ask(Question) :-
write('Does the animal have the following attribute: '),
write(Question),
write('? '),
read(Response),
nl,
( (Response == yes ; Response == y)
->
assert(yes(Question)) ;
assert(no(Question)), fail).

:- dynamic yes/1,no/1.

/* How to verify something */


verify(S) :-
(yes(S)
->
true ;
(no(S)
->
fail ;
ask(S))).

/* undo all yes/no assertions */


undo :- retract(yes(_)),fail.
undo :- retract(no(_)),fail.
undo.
Exp.6 B) Medical Diagnosis System

domains
disease,indication = symbol
Patient,name = string

predicates
hypothesis(string,disease)
symptom(name,indication)
response(char)
go
clauses
go :-
write("What is the patient's name? "),
readln(Patient),
hypothesis(Patient,Disease),
write(Patient," probably has ",Disease,"."),nl.

go :-
write("Sorry, I don't seem to be able to"),nl,
write("diagnose the disease."),nl.

symptom(Patient,fever) :-
write("Does ",Patient," have a fever (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,rash) :-
write("Does ",Patient," have a rash (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,headache) :-
write("Does ",Patient," have a headache (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,runny_nose) :-
write("Does ",Patient," have a runny_nose (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,conjunctivitis) :-
write("Does ",Patient," have conjunctivitis (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,cough) :-
write("Does ",Patient," have a cough (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,body_ache) :-
write("Does ",Patient," have a body_ache (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,chills) :-
write("Does ",Patient," have chills (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,sore_throat) :-
write("Does ",Patient," have a sore_throat (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,sneezing) :-
write("Does ",Patient," have sneezing (y/n) ?"),
response(Reply),
Reply='y'.

symptom(Patient,swollen_glands) :-
write("Does ",Patient," have swollen_glands (y/n) ?"),
response(Reply),
Reply='y'.

hypothesis(Patient,measles) :-
symptom(Patient,fever),
symptom(Patient,cough),
symptom(Patient,conjunctivitis),
symptom(Patient,runny_nose),
symptom(Patient,rash).

hypothesis(Patient,german_measles) :-
symptom(Patient,fever),
symptom(Patient,headache),
symptom(Patient,runny_nose),
symptom(Patient,rash).

hypothesis(Patient,flu) :-
symptom(Patient,fever),
symptom(Patient,headache),
symptom(Patient,body_ache),
symptom(Patient,conjunctivitis),
symptom(Patient,chills),
symptom(Patient,sore_throat),
symptom(Patient,runny_nose),
symptom(Patient,cough).

hypothesis(Patient,common_cold) :-
symptom(Patient,headache),
symptom(Patient,sneezing),
symptom(Patient,sore_throat),
symptom(Patient,runny_nose),
symptom(Patient,chills).

hypothesis(Patient,mumps) :-
symptom(Patient,fever),
symptom(Patient,swollen_glands).

hypothesis(Patient,chicken_pox) :-
symptom(Patient,fever),
symptom(Patient,chills),
symptom(Patient,body_ache),
symptom(Patient,rash).

hypothesis(Patient,measles) :-
symptom(Patient,cough),
symptom(Patient,sneezing),
symptom(Patient,runny_nose).
response(Reply) :-
readchar(Reply),
write(Reply),nl.
