Module 5
Syllabus:
• Inference in First Order Logic: Backward Chaining, Resolution
• Classical Planning: Definition of Classical Planning, Algorithms for Planning as State-Space
Search, Planning Graphs
• Chapter 9-9.4, 9.5
• Chapter 10- 10.1,10.2,10.3
• Text book: Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3rd Edition, Pearson, 2015
BACKWARD CHAINING
These algorithms work backward from the goal, chaining through rules to find known facts that
support the proof.
Steps:
1. Unification
2. Substitution
3. Standardization of Variables
Prof. Salma Itagi,Dept. of CSE,SVIT
MODULE 5 ARTIFICIAL INTELLIGENCE(BCS515B)
Example (the crime problem):
• Rule 1: American(x) ∧ Weapon(y) ∧ Sells(x,y,z) ∧ Hostile(z) ⇒ Criminal(x)
• Rule 2: Missile(x) ∧ Owns(Nono,x) ⇒ Sells(West,x,Nono)
• Fact 1: American(West)
• Fact 2: Missile(M1) (and hence Weapon(M1), since missiles are weapons)
• Fact 3: Hostile(Nono)
Query: Is West a criminal? (i.e., Criminal(West))
Step 1 (Start with the query):
• The query is Criminal(West)
• We apply FOL-BC-OR(KB, Criminal(West), {}).
Step 2 (Look for applicable rules):
• Rule 1 applies because its conclusion Criminal(x) unifies with the goal.
• Now we need to check if all the premises of Rule 1 are satisfied for West:
• American(West) (True, from Fact 1).
• Weapon(M1) (True, from Fact 2).
• Sells(West,M1,Nono) (We need to check this).
• Hostile(Nono) (True, from Fact 3).
Step 3 (Check Sells(West,M1,Nono)):
• Rule 2 applies to this sub-goal.
• We check the premises:
• Missile(M1) (True, from Fact 2).
• Owns(Nono,M1) (Assume this is true).
• If both are true, we can conclude that Sells(West,M1,Nono) is true.
Step 4 (Conclude the query):
• Since all premises of Rule 1 are satisfied, we can conclude Criminal(West) is true.
Step 5 (Return the substitution):
• The final result is a substitution under which the proof succeeds, showing that West is a criminal.
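The steps above can be sketched in code. This is a minimal ground (propositional) sketch rather than the full FOL-BC-ASK with unification; the rule and fact encoding is assumed here just for illustration:

```python
# Backward chaining over the crime example: rules are (conclusion, premises)
# pairs, already ground for simplicity (no variables or unification).
rules = [
    ("Criminal(West)", ["American(West)", "Weapon(M1)",
                        "Sells(West,M1,Nono)", "Hostile(Nono)"]),
    ("Sells(West,M1,Nono)", ["Missile(M1)", "Owns(Nono,M1)"]),
    ("Weapon(M1)", ["Missile(M1)"]),
]
facts = {"American(West)", "Missile(M1)", "Owns(Nono,M1)", "Hostile(Nono)"}

def bc_ask(goal):
    """Prove goal by working backward through the rules to known facts."""
    if goal in facts:
        return True
    return any(all(bc_ask(p) for p in premises)
               for conclusion, premises in rules if conclusion == goal)

print(bc_ask("Criminal(West)"))  # True
```

The recursion mirrors FOL-BC-OR (try each rule whose conclusion matches the goal) and FOL-BC-AND (prove every premise of the chosen rule).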
• The execution of a Prolog program can happen in two modes: interpreted and compiled.
• Interpretation essentially amounts to running the FOL-BC-ASK algorithm with the program as the
knowledge base.
• First, Prolog interpreters have a global data structure, a stack of choice points, to keep track of the multiple possibilities.
• Second, our simple implementation of FOL-BC-ASK spends a good deal of time generating
substitutions.
• Instead of explicitly constructing substitutions, Prolog has logic variables that remember their
current binding.
• When a path in the search fails, Prolog will back up to a previous choice point, and then it might
have to unbind some variables.
• This is done by keeping track of all the variables that have been bound in a stack called the trail.
• A compiled Prolog program, on the other hand, is an inference procedure for a specific set of
clauses, so it knows what clauses match the goal.
• The instruction sets of today’s computers give a poor match with Prolog’s semantics, so most
Prolog compilers compile into an intermediate language rather than directly into machine language.
• The most popular intermediate language is the Warren Abstract Machine, or WAM, named after
David H. D. Warren, one of the implementers of the first Prolog compiler.
• The WAM is an abstract instruction set that is suitable for Prolog and can be either interpreted or
translated into machine language.
• The definition of the Append predicate, for example, can be compiled into such code.
• Unification: This is the central operation in logic programming; it attempts to make two terms equal by finding a substitution for their variables.
• Backtracking: If any unification fails, the procedure reverts to the previous choice point, using the trail to unbind variables.
a) path(X,Z) :- link(X,Z).
   path(X,Z) :- path(X,Y), link(Y,Z).
b) path(X,Z) :- path(X,Y), link(Y,Z).
   path(X,Z) :- link(X,Z).   % fails
• Version (b) fails because Prolog's depth-first, left-to-right strategy tries the recursive clause first, generating the subgoal path(X,Y) and recursing on it indefinitely before ever consulting a link fact: an infinite loop.
RESOLUTION
• Two clauses, which are assumed to be standardized apart so that they share no variables, can be
resolved if they contain complementary literals.
• Propositional literals are complementary if one is the negation of the other; first-order literals are
complementary if one unifies with the negation of the other.
• For example, we can resolve the two clauses
• [Animal (F(x)) ∨ Loves(G(x), x)] and [¬Loves(u, v) ∨ ¬Kills(u, v)]
• by eliminating the complementary literals Loves(G(x), x) and ¬Loves(u, v), with unifier
θ={u/G(x), v/x}, to produce the resolvent clause
• [Animal (F(x)) ∨ ¬Kills(G(x), x)] .
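A minimal unification sketch shows how the unifier θ = {u/G(x), v/x} arises. The term encoding (nested tuples) and the explicit variable set are assumptions for illustration, and the occurs check is omitted for brevity:

```python
# Terms are nested tuples, e.g. ('Loves', ('G', 'x'), 'x');
# variables are the strings listed in VARS.
VARS = {"x", "u", "v"}

def unify(a, b, theta):
    """Return a substitution making a and b equal, or None (no occurs check)."""
    if theta is None:
        return None
    if a == b:
        return theta
    if isinstance(b, str) and b in VARS:
        return unify_var(b, a, theta)
    if isinstance(a, str) and a in VARS:
        return unify_var(a, b, theta)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for ai, bi in zip(a, b):
            theta = unify(ai, bi, theta)
        return theta
    return None

def unify_var(var, term, theta):
    if var in theta:
        return unify(theta[var], term, theta)
    return {**theta, var: term}

theta = unify(("Loves", ("G", "x"), "x"), ("Loves", "u", "v"), {})
print(theta)  # {'u': ('G', 'x'), 'v': 'x'}, i.e. θ = {u/G(x), v/x}
```

Resolving the two clauses then amounts to applying θ and taking the union of the remaining literals.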
Resolution Proofs
Example:
• Everyone who loves all animals is loved by someone.
• Anyone who kills an animal is loved by no one.
• Jack loves all animals.
• Either Jack or Curiosity killed the cat, who is named Tuna.
• Did Curiosity kill the cat?
• First, we express the original sentences, some background knowledge, and the negated goal G in first-order logic:
A. ∀x [∀y Animal(y) ⇒ Loves(x, y)] ⇒ [∃y Loves(y, x)]
B. ∀x [∃z Animal(z) ∧ Kills(x, z)] ⇒ [∀y ¬Loves(y, x)]
C. ∀x Animal(x) ⇒ Loves(Jack, x)
D. Kills(Jack, Tuna) ∨ Kills(Curiosity, Tuna)
E. Cat(Tuna)
F. ∀x Cat(x) ⇒ Animal(x)
¬G. ¬Kills(Curiosity, Tuna)
Suppose Curiosity did not kill Tuna. We know that either Jack or Curiosity did; thus Jack must have. Now, Tuna is a cat and cats are animals, so Tuna is an animal. Because anyone who kills an animal is loved by no one, we know that no one loves Jack. On the other hand, Jack loves all animals, so someone loves him; so we have a contradiction. Therefore, Curiosity killed the cat.
Completeness of Resolution
1. First, we observe that if S is unsatisfiable, then there exists a particular set of ground instances of
the clauses of S such that this set is also unsatisfiable (Herbrand’s theorem).
2. We then appeal to the ground resolution theorem, which states that propositional resolution is complete for ground sentences.
3. We then use a lifting lemma to show that, for any propositional resolution proof using the set
of ground sentences, there is a corresponding first-order resolution proof using the first-order
sentences from which the ground sentences were obtained.
• To carry out the first step, we need three new concepts.
• Herbrand universe: If S is a set of clauses, then HS, the Herbrand universe of S, is the set of all ground terms constructable from the following:
a. The function symbols in S, if any.
b. The constant symbols in S, if any; if none, then the constant symbol A.
For example, if S contains just the clause ¬P(x, F(x,A))∨¬Q(x,A)∨R(x,B), then
HS is the following infinite set of ground terms: {A,B, F(A,A), F(A,B), F(B,A), F(B,B),
F(A,F(A,A)), . . .} .
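The level-by-level growth of this Herbrand universe can be sketched as follows. The helper and its encoding are assumptions for illustration; since the universe is infinite, we only enumerate up to a fixed nesting depth:

```python
from itertools import product

def herbrand_levels(constants, functions, depth):
    """Ground terms buildable from the constants and function symbols,
    applying functions at most `depth` times (the full universe is infinite)."""
    terms = set(constants)
    for _ in range(depth):
        new = set(terms)
        for name, arity in functions:
            for args in product(terms, repeat=arity):
                new.add((name,) + args)  # e.g. ('F', 'A', 'B') stands for F(A,B)
        terms = new
    return terms

H1 = herbrand_levels({"A", "B"}, [("F", 2)], 1)
# One round gives: A, B, F(A,A), F(A,B), F(B,A), F(B,B)
print(len(H1))  # 6
```

Each extra round nests F one level deeper, matching the infinite set {A, B, F(A,A), F(A,B), ..., F(A,F(A,A)), ...} in the example.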
• Saturation: If S is a set of clauses and P is a set of ground terms, then P(S), the saturation of S with respect to P, is the set of all ground clauses obtained by applying all possible consistent substitutions of ground terms in P for the variables in S.
• Herbrand base: The saturation of a set S of clauses with respect to its Herbrand universe is called
the Herbrand base of S, written as HS(S). For example, if S contains solely the clause just given,
then HS(S) is the infinite set of clauses
• {¬P(A, F(A,A)) ∨ ¬Q(A,A) ∨ R(A,B),
• ¬P(B,F(B,A)) ∨¬Q(B,A) ∨ R(B,B),
• ¬P(F(A,A), F(F(A,A),A)) ∨¬Q(F(A,A),A) ∨ R(F(A,A),B), . . . }
CLASSICAL PLANNING
Chapter 10.1,10.2,10.3
Definition of Classical Planning, Algorithms for Planning as State-Space Search, Planning
Graphs
• The system is unable to infer the correct behavior of the XOR gate for certain input combinations,
like 1 and 0.
• This failure to infer is due to the lack of knowledge about the relationship between those inputs
(i.e., that 1 and 0 are different).
• By examining the axiom for the XOR gate and testing for the output at each gate, it becomes clear
that the system needs to be explicitly told about the condition 1 ≠ 0 in order to deduce the correct
output.
• Once this information is provided, the system can correctly infer that Signal (Out(1, X1)) = 1 when
the inputs are 1 and 0.
In essence, the problem is a missing or forgotten assertion that would allow the system to properly
deduce the XOR gate's output.
• We use a language called PDDL, the Planning Domain Definition Language, that allows us to express all 4Tn² actions with one action schema. There have been several versions of PDDL.
• We now show how PDDL describes the four things we need to define a search problem: the initial state, the actions that are available in a state, the result of applying an action, and the goal test.
• Each state is represented as a conjunction of fluents that are ground, functionless atoms.
• For example, Poor ∧ Unknown might represent the state of a hapless agent, and a state in a
package delivery problem might be At(Truck 1, Melbourne) ∧ At(Truck 2, Sydney).
• The representation of states is carefully designed so that a state can be treated either as a
conjunction of fluents, which can be manipulated by logical inference, or as a set of fluents,
which can be manipulated with set operations.
• Actions are described by a set of action schemas that implicitly define the ACTIONS(s) and
RESULT(s, a) functions needed to do a problem-solving search.
• Classical planning concentrates on problems where most actions leave most things
unchanged.
• A set of ground (variable-free) actions can be represented by a single action schema.
• The schema is a lifted representation—it lifts the level of reasoning from propositional logic
to a restricted subset of first-order logic.
• For example, here is an action schema for flying a plane from one location to another:
Action(Fly(p, from, to),
  PRECOND: At(p, from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to)
  EFFECT: ¬At(p, from) ∧ At(p, to))
• The schema consists of the action name, a list of all the variables used in the schema, a precondition and an effect.
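In code, such a schema can be represented as data plus an applicability test and a result function. The dictionary-of-fluent-tuples encoding below is an assumption for illustration, not PDDL syntax:

```python
# The Fly schema from the text, grounded by substituting objects for
# the variables p, from, to. Fluents are tuples like ('At', 'P1', 'SFO').
def fly_schema(p, frm, to):
    return {
        "name": ("Fly", p, frm, to),
        "precond": {("At", p, frm), ("Plane", p),
                    ("Airport", frm), ("Airport", to)},
        "add": {("At", p, to)},          # positive effects
        "delete": {("At", p, frm)},      # negative effects
    }

def applicable(action, state):
    """ACTIONS(s): an action is applicable if its preconditions hold in s."""
    return action["precond"] <= state

def result(action, state):
    """RESULT(s, a): remove the delete list, then add the add list."""
    return (state - action["delete"]) | action["add"]

state = {("At", "P1", "SFO"), ("Plane", "P1"),
         ("Airport", "SFO"), ("Airport", "JFK")}
a = fly_schema("P1", "SFO", "JFK")
print(applicable(a, state))                      # True
print(("At", "P1", "JFK") in result(a, state))   # True
```

Treating the state as a set of fluents makes both the logical view (conjunction) and the set-operation view from the text direct to implement.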
Summary of the plan (air cargo transport problem: get C1 to JFK and C2 to SFO):
1. Load C1 onto P1 at SFO.
2. Fly P1 from SFO to JFK.
3. Unload C1 at JFK.
4. Load C2 onto P2 at JFK.
5. Fly P2 from JFK to SFO.
6. Unload C2 at SFO.
After completing these actions,
C1 will be at JFK, and C2 will be at SFO, achieving the goal state.
Summary of the Plan:
• Remove(Flat, Axle): Move the flat tire off the axle to the ground.
• PutOn(Spare, Axle): Place the spare tire on the axle.
• Once these actions are performed, the goal will be achieved, with the spare tire placed on the axle.
• 1. Move Block C from A to the Table:
• PlanSAT is the question of whether there exists any plan that solves a planning problem.
• Bounded PlanSAT asks whether there is a solution of length k or less; this can be used to find an
optimal plan.
• The first result is that both decision problems are decidable for classical planning.
• The proof follows from the fact that the number of states is finite. But if we add function symbols
to the language, then the number of states becomes infinite, and PlanSAT becomes only
semidecidable: an algorithm exists that will terminate with the correct answer for any solvable
problem, but may not terminate on unsolvable problems.
• The Bounded PlanSAT problem remains decidable even in the presence of function symbols.
• In forward (progression) state-space search, we start in the initial state and expand the search in the forward direction by applying actions, generating new states.
Steps in Forward Search:
1. Start from the initial state (the given configuration of the world).
2. Generate possible successor states by applying available actions to the current state.
3. Repeat the process for each successor state until a state satisfying the goal condition is found.
4. Use a search strategy (such as breadth-first search, depth-first search, or heuristic search) to decide
which state to expand next.
5. For the previous block-stacking problem, a forward search would start with the initial state where Block A is on the table, Block B is on the table, and Block C is on Block A. It would then generate successor states by moving blocks around until it reaches the goal state where Block A is on Block B and Block B is on Block C.
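Steps 1–4 above can be sketched with breadth-first search over sets of fluents. The spare-tire-flavoured actions and their simplified preconditions below are assumptions for illustration:

```python
from collections import deque

def forward_search(initial, goal, actions):
    """BFS from the initial state, applying actions forward until
    a state satisfying the goal is found. Returns the action sequence."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # goal test
            return plan
        for name, pre, add, dele in actions:    # generate successors
            if pre <= state:
                nxt = frozenset((state - dele) | add)
                if nxt not in explored:
                    explored.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Toy tire problem: (name, preconditions, add effects, delete effects).
actions = [
    ("Remove(Flat,Axle)", {"At(Flat,Axle)"},
     {"At(Flat,Ground)"}, {"At(Flat,Axle)"}),
    ("Remove(Spare,Trunk)", {"At(Spare,Trunk)"},
     {"At(Spare,Ground)"}, {"At(Spare,Trunk)"}),
    ("PutOn(Spare,Axle)", {"At(Spare,Ground)", "At(Flat,Ground)"},
     {"At(Spare,Axle)"}, {"At(Spare,Ground)"}),
]
plan = forward_search({"At(Flat,Axle)", "At(Spare,Trunk)"},
                      {"At(Spare,Axle)"}, actions)
print(plan)
```

Swapping the `deque` for a priority queue keyed on a heuristic turns this into the heuristic search mentioned in step 4.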
• In backward state-space search, we start from the goal state and work backward toward the initial state. The idea is to reverse-engineer the solution by considering which actions could have led to the goal and then working backward to identify the previous states.
• Steps in Backward Search:
1. Start from the goal state (the desired configuration of the world).
2. Identify possible predecessor states by determining which actions could have resulted in the goal
state.
3. Repeat the process for each predecessor state, working backward until the initial state is reached.
4. Like forward search, use a search strategy to decide which state to explore next.
5. For the block-stacking problem, backward search would begin with the goal state (Block A on Block B, and Block B on Block C). It would then look at actions that could have resulted in this goal and work backward: it determines that Block A must have been moved onto Block B and Block B onto Block C, and eventually the search backtracks to the initial state where Block A is on the table, Block B is on the table, and Block C is on Block A.
Heuristics for Planning
• We look first at heuristics that add edges to the graph. For example, the ignore-preconditions heuristic drops all preconditions from actions.
• Every action becomes applicable in every state, and any single goal fluent can be achieved in one step (if there is an applicable action; if not, the problem is impossible).
• This almost implies that the number of steps required to solve the relaxed problem is the number of unsatisfied goals: almost but not quite, because (1) some action may achieve multiple goals and (2) some actions may undo the effects of others.
• A key idea in defining heuristics is decomposition: dividing a problem into parts, solving each part
independently, and then combining the parts.
• The subgoal independence assumption is that the cost of solving a conjunction of subgoals is approximated by the sum of the costs of solving each subgoal independently.
• The subgoal independence assumption can be optimistic or pessimistic. It is optimistic when there are negative interactions between the subplans for each subgoal; for example, when an action in one subplan deletes a goal achieved by another subplan.
• It is pessimistic, and therefore inadmissible, when subplans contain redundant actions—for
instance, two actions that could be replaced by a single action in the merged plan.
• It is clear that there is great potential for cutting down the search space by forming abstractions.
• The trick is choosing the right abstractions and using them in a way that makes the total cost (defining an abstraction, doing an abstract search, and mapping the abstraction back to the original problem) less than the cost of solving the original problem.
• Pattern databases can be useful here.
PLANNING GRAPHS
• A special data structure called a planning graph can be used to give better heuristic estimates.
• These heuristics can be applied to any of the search techniques we have seen so far.
• Alternatively, we can search for a solution over the space formed by the planning graph, using an
algorithm called GRAPHPLAN.
• A planning problem asks if we can reach a goal state from the initial state.
• Suppose we are given a tree of all possible actions from the initial state to successor states, and
their successors,
• and so on.
• If we indexed this tree appropriately, we could answer the planning question “can we reach state
G from state S0” immediately, just by looking it up.
• Of course, the tree is of exponential size, so this approach is impractical.
• A planning graph is a polynomial-size approximation to this tree that can be constructed quickly.
• The planning graph can’t answer definitively whether G is reachable from S0, but it can estimate
how many steps it takes to reach G.
• The estimate is always correct when it reports the goal is not reachable, and it never overestimates
the number of steps, so it is an admissible heuristic.
• A planning graph is a directed graph organized into levels: first a level S0 for the initial state,
consisting of nodes representing each fluent that holds in S0; then a level A0 consisting of nodes
for each ground action that might be applicable in S0; then alternating levels Si followed by Ai;
until we reach a termination condition.
• Figure 10.7 shows a simple planning problem, and Figure 10.8 shows its planning graph.
• Each action at level Ai is connected to its preconditions at Si and its effects at Si+1.
• So a literal appears because an action caused it, but we also want to say that a literal can persist if no action negates it. This is represented by a persistence action.
• For every literal C, we add to the problem a persistence action with precondition C and effect C.
• Level A0 in Figure 10.8 shows one “real” action, Eat (Cake), along with two persistence actions
drawn as small square boxes.
• Level A0 contains all the actions that could occur in state S0, but just as important it records
conflicts between actions that would prevent them from occurring together.
• The gray lines in Figure 10.8 indicate mutual exclusion (or mutex) links.
• For example, Eat(Cake) is mutually exclusive with the persistence of either Have(Cake) or ¬Eaten(Cake).
• Level S1 contains all the literals that could result from picking any subset of the actions in A0, as
well as mutex links (gray lines) indicating literals that could not appear together, regardless of the
choice of actions. For example, Have(Cake) and Eaten(Cake) are mutex:
• depending on the choice of actions in A0, either, but not both, could be the result.
• In other words, S1 represents a belief state: a set of possible states.
• The members of this set are all subsets of the literals such that there is no mutex link between any
members of the subset.
• We continue in this way, alternating between state level Si and action level Ai until we reach a
point where two consecutive levels are identical. At this point, we say that the graph has leveled
off.
• The graph in Figure 10.8 levels off at S2.
• What we end up with is a structure where every Ai level contains all the actions that are applicable
in Si, along with constraints saying that two actions cannot both be executed at the same level.
• Every Si level contains all the literals that could result from any possible choice of actions in Ai−1,
along with constraints saying which pairs of literals are not possible.
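The alternation of state and action levels and the level-off test can be sketched for the cake example. Mutex bookkeeping is omitted here, so only the literal levels are tracked; the encodings are assumptions for illustration:

```python
def build_levels(s0, actions):
    """Grow literal levels S0, S1, ... using actions plus implicit
    persistence, stopping when two consecutive levels are identical."""
    levels = [frozenset(s0)]
    while True:
        cur = levels[-1]
        nxt = set(cur)  # persistence actions carry every literal forward
        for name, pre, effects in actions:
            if pre <= cur:  # action could be applicable at this level
                nxt |= effects
        nxt = frozenset(nxt)
        if nxt == cur:      # graph has "leveled off"
            return levels
        levels.append(nxt)

# Cake example: the negative literal ¬L is written ('not', L).
actions = [
    ("Eat(Cake)", {"Have(Cake)"},
     {("not", "Have(Cake)"), "Eaten(Cake)"}),
    ("Bake(Cake)", {("not", "Have(Cake)")}, {"Have(Cake)"}),
]
levels = build_levels({"Have(Cake)", ("not", "Eaten(Cake)")}, actions)
print(len(levels))  # 2: literal sets stabilize at S1 here; the full graph
                    # with mutex links levels off one level later, at S2
```

All four literals appear at S1, which matches the monotonic-growth property discussed later: literals only ever accumulate.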
• It is important to note that the process of constructing the planning graph does not require choosing
among actions, which would entail combinatorial search. Instead, it just records the impossibility
of certain choices using mutex links.
• A mutex relation holds between two actions at a given level if any of the following three conditions
holds:
• Inconsistent effects: one action negates an effect of the other.
• For example, Eat (Cake) and the persistence of Have(Cake) have inconsistent effects because they
disagree on the effect Have(Cake).
• Interference: one of the effects of one action is the negation of a precondition of the other. For example, Eat(Cake) interferes with the persistence of Have(Cake) by negating its precondition.
• Competing needs: one of the preconditions of one action is mutually exclusive with a precondition
of the other. For example, Bake(Cake) and Eat (Cake) are mutex because they compete on the
value of the Have(Cake) precondition.
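The three conditions can be sketched as a predicate over a simple action encoding. The dict encoding, the `('not', L)` literal convention, and the explicit literal-mutex set are assumptions for illustration:

```python
def neg(lit):
    """Negation of a literal: L <-> ('not', L)."""
    return lit[1] if isinstance(lit, tuple) and lit[0] == "not" else ("not", lit)

def mutex(a, b, literal_mutexes=frozenset()):
    """True if actions a and b are mutex at a level, per the three conditions."""
    effects_a = a["add"] | {neg(l) for l in a["del"]}
    effects_b = b["add"] | {neg(l) for l in b["del"]}
    # 1. Inconsistent effects: one action negates an effect of the other.
    inconsistent = any(neg(e) in effects_b for e in effects_a)
    # 2. Interference: an effect of one negates a precondition of the other.
    interference = (any(neg(e) in b["pre"] for e in effects_a) or
                    any(neg(e) in a["pre"] for e in effects_b))
    # 3. Competing needs: mutually exclusive preconditions (needs the
    # literal-mutex links from the previous state level).
    competing = any((p, q) in literal_mutexes or (q, p) in literal_mutexes
                    for p in a["pre"] for q in b["pre"])
    return inconsistent or interference or competing

eat = {"pre": {"Have(Cake)"}, "add": {"Eaten(Cake)"}, "del": {"Have(Cake)"}}
persist_have = {"pre": {"Have(Cake)"}, "add": {"Have(Cake)"}, "del": set()}
print(mutex(eat, persist_have))  # True: inconsistent effects on Have(Cake)
```

Running this pairwise over a level Ai is exactly how EXPAND-GRAPH records the gray mutex links without doing any combinatorial search.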
• The purpose of GRAPHPLAN is to generate a sequence of actions that lead from the initial state
to a goal state, while respecting the constraints and conditions of the problem.
• The algorithm builds a planning graph incrementally and uses a backward search to extract the
solution.
1. Initial Planning Graph
• The first step is to construct an initial planning graph from the given problem. This graph will encode both the actions and propositions (conditions) that are relevant to the problem. The planning graph will be expanded in subsequent steps.
2. Extracting the Goal
7. Terminating Condition
If the planning graph and nogoods hash table have "leveled off," meaning no further expansion is possible,
the algorithm terminates and returns failure because no solution can be found.
• Planning Graph: A layered structure that encodes the relationships between actions and
propositions at different time steps.
• Mutex: Constraints that indicate which actions or propositions cannot coexist at the same level in
the graph.
• No-Goods: A table that helps avoid searching for impossible plans by storing sets of conditions
that cannot be satisfied.
• Backward Search: Once the graph is expanded, the algorithm searches backward from the goals
to find the sequence of actions that can achieve them.
• GRAPHPLAN is efficient because it constructs a planning graph that captures the necessary
dependencies and constraints, allowing it to search for solutions systematically while avoiding
redundant computations.
• The first line of GRAPHPLAN initializes the planning graph to a one-level (S0) graph representing
the initial state.
• The positive fluents from the problem description’s initial state are shown, as are the relevant
negative fluents.
• Not shown are the unchanging positive literals (such as Tire(Spare)) and the irrelevant negative
literals.
• The goal At(Spare, Axle) is not present in S0, so we need not call EXTRACT-SOLUTION— we
are certain that there is no solution yet. Instead, EXPAND-GRAPH adds into A0 the three actions
whose preconditions exist at level S0 (i.e., all the actions except PutOn(Spare, Axle)), along with
persistence actions for all the literals in S0. The effects of the actions are added at level S1.
EXPAND-GRAPH then looks for mutex relations and adds them to the graph.
• At(Spare, Axle) is still not present in S1, so again we do not call EXTRACT-SOLUTION.
• We call EXPAND-GRAPH again, adding A1 and S2 and giving us the planning graph shown in Figure 10.10.
• Now that we have the full complement of actions, it is worthwhile to look at some of the examples
of mutex relations and their causes:
• Inconsistent effects: Remove(Spare, Trunk ) is mutex with LeaveOvernight because one has the
effect At(Spare, Ground) and the other has its negation.
• Interference: Remove(Flat, Axle) is mutex with LeaveOvernight because one has the precondition At(Flat, Axle) and the other has its negation as an effect.
1. Literals increase monotonically: Once a literal appears at a given level, it will appear at all
subsequent levels. This is because of the persistence actions; once a literal shows up, persistence
actions cause it to stay forever.
2. Actions increase monotonically: Once an action appears at a given level, it will appear at all
subsequent levels. This is a consequence of the monotonic increase of literals; if the preconditions
of an action appear at one level, they will appear at subsequent levels, and thus so will the action.
3. Mutexes decrease monotonically: If two actions are mutex at a given level Ai, then they will also
be mutex for all previous levels at which they both appear. The same holds for mutexes between
literals. It might not always appear that way in the figures, because the figures have a simplification:
they display neither literals that cannot hold at level Si nor actions that cannot be executed at level
Ai. We can see that “mutexes decrease monotonically” is true if you consider that these invisible
literals and actions are mutex with everything.
4. No-goods decrease monotonically: If a set of goals is not achievable at a given level, then they are not
achievable in any previous level. The proof is by contradiction: if they were achievable at some previous
level, then we could just add persistence actions to make them achievable at a subsequent level.