
Artificial Intelligence Module-IV

Module-4

GAME PLAYING
OVERVIEW

Games hold an inexplicable fascination for many people, and the notion that computers might play
games has existed at least as long as computers. Charles Babbage, the nineteenth-century computer
architect, thought about programming his Analytical Engine to play chess and later of building a
machine to play tic-tac-toe. Two of the pioneers of the science of information and computing
contributed to the fledgling computer game-playing literature: Claude Shannon wrote a paper describing
mechanisms that could be used in a chess-playing program, and a few years later Alan Turing described
a chess-playing program of his own, although he never built it. (Read, just to know)

✓ A game is defined as a sequence of choices where each choice is made from a number of
discrete alternatives. Each sequence ends in a certain outcome and every outcome has a
definite value for the opening player. In case of two player games, both the players have
opposite goals.
✓ There were two reasons that games appeared to be a good domain in which to explore machine
intelligence:
They provide a structured task in which it is very easy to measure success or failure.
They did not obviously require large amounts of knowledge. They were thought to be
solvable by straightforward search from the starting state to a winning position.
✓ The first of these reasons remains valid and accounts for continued interest in the area of
game playing by machine. Unfortunately, the second is not true for any but the simplest games.
For example, consider chess.
The average branching factor is around 35.
In an average game, each player might make 50 moves.
So in order to examine the complete game tree, we would have to examine 35^100 positions.
✓ Thus it is clear that a program that simply does a straightforward search of the game tree will
not be able to select even its first move during the lifetime of its opponent. Some kind of
heuristic search procedure is necessary.
✓ To improve the effectiveness of a search-based problem-solving program two things can be
done:
Improve the generate procedure so that only good moves (or paths) are generated.
Improve the test procedure so that the best moves (or paths) will be recognized and
explored first.
✓ The ideal way to use a search procedure to find a solution to a problem is to generate moves
through the problem space until a goal state is reached. In the context of game-playing
programs, a goal state is one in which we win.
✓ In order to choose the best moves (ply), these methods use a static evaluation function. This
function is similar to that of the heuristic function h' in the A* algorithm.



✓ Unfortunately, deciding which moves have contributed to wins and which to losses is not
always easy. The problem of deciding which of a series of actions is actually responsible
for a particular outcome is called the credit assignment problem.

➢ THE MINIMAX SEARCH PROCEDURE


• Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-
making and game theory. It provides an optimal move for the player assuming that
opponent is also playing optimally.
• Mini-Max algorithm uses recursion to search through the game-tree.
• Min-Max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-
tac-toe, Go, and various other two-player games. This algorithm computes the minimax
decision for the current state.
• In this algorithm two players play the game, one is called MAX and other is called MIN.
• The two players play against each other: each tries to gain the maximum benefit while
leaving the opponent with the minimum benefit.
• Both Players of the game are opponent of each other, where MAX will select the
maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search algorithm for the exploration of the
complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree and then
backs the values up the tree as the recursion unwinds.

➢ Working of Min-Max Algorithm:


• The working of the minimax algorithm can be easily described using an example. Below
we have taken an example of game-tree which is representing the two-player game.
• In this example, there are two players one is called Maximizer and other is called
Minimizer.
• Maximizer will try to get the Maximum possible score, and Minimizer will try to get the
minimum possible score.
• This algorithm applies DFS, so in this game-tree, we have to go all the way through the
leaves to reach the terminal nodes.
• At the terminal nodes, the terminal values are given, so we compare those values and
back them up the tree until the initial state is reached. Following are the main steps involved in
solving the two-player game tree:
Step-1: In the first step, the algorithm generates the entire game-tree and applies the utility
function to get the utility values for the terminal states. In the tree diagram below, let A be
the initial state of the tree. Suppose the maximizer takes the first turn, with a worst-case initial
value of -∞, and the minimizer takes the next turn, with a worst-case initial value of +∞.

• Step 2: Now we first find the utility values for the Maximizer. Its initial value is -∞, so
we compare each terminal value with this initial value and keep the larger one; in other
words, each MAX node takes the maximum of its children:
• For node D: max(-1, -∞) => max(-1, 4) = 4
• For node E: max(2, -∞) => max(2, 6) = 6
• For node F: max(-3, -∞) => max(-3, -5) = -3
• For node G: max(0, -∞) => max(0, 7) = 7

Step 3: In the next step, it is the minimizer's turn, so it will compare the backed-up node values with +∞ and
determine the 3rd-layer node values.
• For node B= min(4,6) = 4
• For node C= min (-3, 7) = -3

Step 4: Now it is the Maximizer's turn again; it chooses the maximum of all node values and finds the
maximum value for the root node. In this game tree there are only 4 layers, so we reach the root node
immediately, but in real games there will be more than 4 layers.

o For node A max(4, -3)= 4

That was the complete workflow of the minimax two player game.
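The backed-up values in Steps 1-4 can be checked with a few lines of Python. The sketch below is an illustration, not part of the original notes; it encodes the example tree as nested lists, with the leaf utilities -1, 4, 2, 6, -3, -5, 0, 7 taken from the diagram above.

def minimax(node, maximizing):
    """Return the backed-up minimax value of a node in a nested-list game tree."""
    if not isinstance(node, list):        # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A -> (B -> (D, E), C -> (F, G)); the innermost lists hold the terminal values.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, maximizing=True))     # prints 4, matching Step 4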

Properties of Mini-Max algorithm:


o Complete- The Min-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite
search tree.
o Optimal- The Min-Max algorithm is optimal if both opponents are playing optimally.
o Time complexity- As it performs DFS on the game-tree, the time complexity of the Min-Max
algorithm is O(b^m), where b is the branching factor of the game-tree and m is the maximum depth of the
tree.
o Space Complexity- The space complexity of the Mini-max algorithm is also similar to DFS, which is O(b*m).


Limitation of the minimax Algorithm:

o The main drawback of the minimax algorithm is that it gets really slow for complex games such as
chess, Go, etc. These games have a huge branching factor, and the player has many choices to
decide among. This limitation of the minimax algorithm can be improved upon with alpha-beta pruning,
which is discussed in the next topic.

Algorithm: MINIMAX (Position, Depth, Player)


1. If DEEP-ENOUGH(Position, Depth),then return the structure
VALUE = STATIC (Position, Player)
PATH = nil
This indicates that there is no path from this node and that its value is that determined
by the static evaluation function.
2. Otherwise, generate one more ply of the tree by calling the function MOVE-GEN and setting
SUCCESSORS to the list it returns.
3. If SUCCESSORS is empty, then there are no moves to be made, so return the same
structure that would have been returned if DEEP-ENOUGH had returned true.
4. If SUCCESSORS is not empty, then examine each element in turn and keep track of the
best one. This is done as follows.
Initialize BEST-SCORE to the minimum value that STATIC can return. It will be
updated to reflect the best score that can be achieved by an element of SUCCESSORS.
For each element SUCC of SUCCESSORS, do the following:
(a) Set RESULT-SUCC to MINIMAX (SUCC, Depth + 1, OPPOSITE (Player))
This recursive call to MINIMAX will actually carry out the exploration of
SUCC.
(b) Set NEW-VALUE to -VALUE (RESULT-SUCC). This will cause it to reflect the
merits of the position from the perspective opposite to that of the next lower level.
(c) If NEW-VALUE > BEST-SCORE, then we have found a successor that is better
than any that have been examined so far. Record this by doing the following:
(i) Set BEST-SCORE to NEW-VALUE.
(ii) The best known path is now from CURRENT to SUCC and then on to the
appropriate path down from SUCC as determined by the recursive call to
MINIMAX. So set BEST-PATH to the result of attaching SUCC to the front of
PATH (RESULT-SUCC).
5. Now that all the successors have been examined, we know the value of Position as well as
which path to take from it. So return the structure
VALUE = BEST-SCORE; PATH = BEST-PATH
When the initial call to MINIMAX returns, the best move from CURRENT is the first
element on PATH.
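The listing above can be translated almost line for line into Python. The sketch below is one possible rendering, not the textbook's own code: it assumes the caller supplies STATIC, MOVE-GEN, DEEP-ENOUGH and OPPOSITE as functions, and it returns the backed-up value together with the best path, as in steps 4 and 5. STATIC is assumed to score a position from the point of view of the player who is about to move, which is why the returned value is negated.

def minimax(position, depth, player, *, static, move_gen, deep_enough, opposite):
    """Sketch of the MINIMAX procedure above; returns (value, path)."""
    if deep_enough(position, depth):                      # step 1
        return static(position, player), []
    successors = move_gen(position, player)               # step 2
    if not successors:                                    # step 3
        return static(position, player), []
    best_score, best_path = float("-inf"), []             # step 4: minimum possible score
    for succ in successors:
        value, path = minimax(succ, depth + 1, opposite(player),
                              static=static, move_gen=move_gen,
                              deep_enough=deep_enough, opposite=opposite)
        new_value = -value                                 # step 4(b): opponent's view, negated
        if new_value > best_score:                         # step 4(c)
            best_score, best_path = new_value, [succ] + path
    return best_score, best_path                           # step 5

When the top-level call returns, the first element of the returned path is the move to make from the current position.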


• Problems with minimax search


✓ Number of game states to be examined is exponential.
✓ We need to examine all the possible nodes in a game.
✓ We cannot reduce the number of tree branches to be explored.

Minimax algorithm performs a complete depth-first exploration of the game tree. If the maximum
depth of the tree is 'm' and there are 'b' legal moves at each point (branching factor), then the
time complexity of the algorithm is O(b^m). The space complexity is O(b*m). This algorithm serves
as a basis for the mathematical analysis of games.
➢ ADDING ALPHA-BETA CUTOFFS/ALPHA BETA PRUNING

✓ The alpha-beta pruning strategy is used to reduce the number of tree branches explored,
thereby eliminating large parts of the tree from consideration.
✓ Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique
for the minimax algorithm.
✓ As we have seen, the number of game states the minimax search algorithm has to examine
is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half.
There is a technique by which we can compute the correct minimax decision without checking each node
of the game tree, and this technique is called pruning. It involves two threshold
parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called
the Alpha-Beta Algorithm.
✓ Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree
leaves but also entire sub-trees.
✓ The two-parameter can be defined as:
o Alpha: The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.
o Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
✓ Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard
algorithm does, but it removes all the nodes that do not really affect the final decision and only
make the algorithm slow. By pruning these nodes, it makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is: α >= β

Key points about alpha-beta pruning:


o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.


o While backtracking the tree, the node values will be passed to upper nodes instead of values of alpha
and beta.
o We will only pass the alpha, beta values to the child nodes.

Working of Alpha-Beta Pruning:

Let's take an example of two-player search tree to understand the working of Alpha-beta pruning

Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These
values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same
values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with
2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β changes, as this is Min's turn. Now
β = +∞ is compared with the available successor node value, i.e. min(∞, 3) = 3; hence at node B now α = -∞
and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞
and β = 3 are passed down as well.

Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value of alpha will
be compared with 5, so max (-∞, 5) = 5, hence at node E α= 5 and β= 3, where α>=β, so the right successor
of E will be pruned, and algorithm will not traverse it, and the value at node E will be 5.


Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A the value of
alpha is changed; the maximum available value is 3, as max(-∞, 3) = 3, and β = +∞. These two values are now
passed to the right successor of A, which is node C. At node C, α = 3 and β = +∞, and the same values are
passed on to node F.

Step 6: At node F, again the value of α will be compared with left child which is 0, and max(3,0)= 3, and
then compared with right child which is 1, and max(3,1)= 3 still α remains 3, but the node value of F will
become 1.

Step 7: Node F returns the node value 1 to node C, at C α= 3 and β= +∞, here the value of beta will be
changed, it will compare with 1 so min (∞, 1) = 1. Now at C, α=3 and β= 1, and again it satisfies the
condition α>=β, so the next child of C which is G will be pruned, and the algorithm will not compute the
entire sub-tree G.


Step 8: C now returns the value 1 to A; the best value for A is max(3, 1) = 3. The final
game tree shows the nodes that were computed and the nodes that were never computed. Hence
the optimal value for the maximizer is 3 for this example.
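The same walkthrough can be reproduced with a short Python sketch (again an illustration, not from the original notes). The leaf values 2, 3, 5, 0 and 1 are the ones examined in Steps 1-8; the remaining leaves were pruned in the example, so their values are unknown and the 9s below are arbitrary placeholders. The visited list confirms that the pruned leaves are never evaluated.

import math

visited = []                                    # leaves that actually get evaluated

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta cut-offs on a nested-list game tree."""
    if not isinstance(node, list):              # terminal node
        visited.append(node)
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                   # cut-off: Min will never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                   # cut-off: Max already has something better
                break
        return value

# A -> (B -> (D=[2,3], E=[5,?]), C -> (F=[0,1], G=[?,?])); 9s are placeholder leaves.
tree = [[[2, 3], [5, 9]], [[0, 1], [9, 9]]]
print(alphabeta(tree, True))                    # 3, as found in Step 8
print(visited)                                  # [2, 3, 5, 0, 1]: the pruned leaves are skipped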

Move Ordering in Alpha-Beta pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined.
Move order is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and
works exactly like the minimax algorithm. In this case it also consumes more time because of the alpha-beta
bookkeeping; such an ordering is called worst ordering. Here the best move occurs on the right side of the tree.
The time complexity for such an order is O(b^m).

o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and
the best moves occur on the left side of the tree. Since we apply DFS, the left side of the tree is searched first, and the
algorithm can go about twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Algorithm: MINIMAX-A-B (Position, Depth, Player, Use-Thresh, Pass-Thresh)


1. If DEEP-ENOUGH(Position, Depth),then return the
structure VALUE = STATIC (Position, Player);
PATH = nil
2. Otherwise, generate one more ply of the tree by calling the function MOVE-GEN
(Position, Player) and setting SUCCESSORS to the list it returns.
3. If SUCCESSORS is empty, there are no moves to be made, return the same structure that
would have been returned if DEEP-ENOUGH had returned TRUE.
4. If SUCCESSORS is not empty, then go through it, examining each element and keeping
track of the best one. This is done as follows.
For each element SUCC of SUCCESSORS:
a) Set RESULT-SUCC to
MINIMAX-A-B (SUCC, Depth + 1, OPPOSITE (Player), -Pass-Thresh, -Use-Thresh)
b) Set NEW-VALUE to -VALUE (RESULT-SUCC).
c) If NEW-VALUE > Pass-Thresh, then we have found a successor that is better
than any that have been examined so far. Record this by doing the following.
i. Set Pass-Thresh to NEW-VALUE.
ii. The best known path is now from CURRENT to SUCC and then on to the
appropriate path from SUCC as determined by the recursive call to MINIMAX-A-B.
So set BEST-PATH to the result of attaching SUCC to the front of PATH
(RESULT-SUCC).
d) If Pass-Thresh (reflecting the current best value) is not better than Use-Thresh,
then we should stop examining this branch. But both thresholds and values have
been inverted. So if Pass-Thresh >= Use-Thresh, then return immediately with the
value
VALUE = Pass-Thresh
PATH = BEST-PATH
5. Return the structure VALUE = Pass-Thresh; PATH = BEST-PATH
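As with plain MINIMAX, the listing above can be sketched in Python. The version below is an interpretation of the procedure, not textbook code: it keeps the single pair of inverted thresholds from steps 4(a)-4(d) and assumes the same caller-supplied helper functions (STATIC, MOVE-GEN, DEEP-ENOUGH, OPPOSITE) as before. A top-level driver would call it with Use-Thresh = +∞ and Pass-Thresh = -∞.

def minimax_ab(position, depth, player, use_thresh, pass_thresh,
               *, static, move_gen, deep_enough, opposite):
    """Sketch of MINIMAX-A-B: minimax with alpha-beta cut-offs, negamax style."""
    if deep_enough(position, depth):                         # step 1
        return static(position, player), []
    successors = move_gen(position, player)                  # step 2
    if not successors:                                       # step 3
        return static(position, player), []
    best_path = []
    for succ in successors:                                  # step 4
        value, path = minimax_ab(succ, depth + 1, opposite(player),
                                 -pass_thresh, -use_thresh,  # step 4(a): thresholds swap and flip sign
                                 static=static, move_gen=move_gen,
                                 deep_enough=deep_enough, opposite=opposite)
        new_value = -value                                   # step 4(b)
        if new_value > pass_thresh:                          # step 4(c)
            pass_thresh, best_path = new_value, [succ] + path
        if pass_thresh >= use_thresh:                        # step 4(d): stop examining this branch
            return pass_thresh, best_path
    return pass_thresh, best_path                            # step 5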



➢ ADDITIONAL REFINEMENTS

✓ In addition to alpha-beta pruning, there are a variety of other modifications to the
minimax procedure that can also improve its performance:
Waiting for Quiescence
Secondary Search
Using Book Moves
Alternatives to Minimax

➢ ITERATIVE DEEPENING

✓ A number of ideas for searching two-player game trees have led to new algorithms for
single- agent heuristic search. One such idea is iterative deepening, originally used in a
program called CHESS 4.5. Rather than searching to a fixed depth in the game tree,
CHESS 4.5 first searched only a single ply, applying its static evaluation function to the
result of each of its possible moves. It then initiated a new minimax search, this time to a
depth of two ply. This was followed by a three ply search, then a four-ply search, etc. The
name “iterative deepening” derives from the fact that on each iteration, the tree is searched
one level deeper. Figure below depicts this process.
Figure: Iterative Deepening

✓ First, game-playing programs are subject to time constraints. For example, a chess
program may be required to complete all its moves within two hours. Since it is impossible
to know in advance how long a fixed-depth tree search will take, a program may find itself
running out of time.
✓ With iterative deepening, the current search can be aborted at any time and the best



move found by the previous iteration can be played. With effective ordering, the alpha-
beta procedure can prune many more branches, and total search time can be decreased
drastically. This allows more time for deeper iterations.
✓ Depth-first search was efficient in terms of space but required some cutoff depth in order to
force backtracking when a solution was not found. Breadth-first search was guaranteed to
find the shortest solution path but required inordinate amounts of space because all leaf
nodes have to be kept in memory. An algorithm called depth-first iterative deepening
(DFID) combines the best aspects of depth-first and breadth-first search.

Algorithm: Depth-First Iterative Deepening

1. Set SEARCH-DEPTH = 1
2. Conduct a depth-first search to a depth of SEARCH-DEPTH. If a solution path is found,
then return it.
3. Otherwise, increment SEARCH-DEPTH by 1 and go to step 2.
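A minimal Python sketch of the three steps above is shown below. It is an illustration only: goal_test and successors are assumed to be supplied by the caller, and a depth cap is added so the loop always terminates.

def depth_limited_dfs(state, goal_test, successors, limit, path=None):
    """Depth-first search that is forced to backtrack below a fixed depth limit."""
    path = (path or []) + [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        result = depth_limited_dfs(nxt, goal_test, successors, limit - 1, path)
        if result is not None:
            return result
    return None

def dfid(start, goal_test, successors, max_depth=50):
    """Depth-First Iterative Deepening, following steps 1-3 above."""
    depth = 1                                                            # step 1
    while depth <= max_depth:
        result = depth_limited_dfs(start, goal_test, successors, depth)  # step 2
        if result is not None:
            return result
        depth += 1                                                       # step 3
    return None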

✓ Clearly, DFID will find the shortest solution path to the goal state. Moreover, the
maximum amount of memory used by DFID is proportional to the number of nodes in
that solution path. The DFID is only slower than depth-first search by a constant factor.
DFID avoids the problem of choosing cutoffs without sacrificing efficiency, and DFID is
the optimal algorithm (in terms of space and time) for uninformed search.

Algorithm: Iterative- Deepening-A*


1. Set THRESHOLD = the heuristic evaluation of the start state.
2. Conduct a depth-first search, pruning any branch when its total cost function (g + h')
exceeds THRESHOLD. If a solution path is found during the search, return it.
3. Otherwise, increment THRESHOLD by the minimum amount it was exceeded during
the previous step, and then go to Step 2.
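The corresponding IDA* sketch is below (again only an illustration: successors(state) is assumed to yield (next_state, step_cost) pairs, h plays the role of the heuristic h', and cycle checking is omitted for brevity).

import math

def ida_star(start, h, goal_test, successors):
    """Sketch of Iterative-Deepening-A* following steps 1-3 above."""
    threshold = h(start)                                  # step 1

    def search(state, g, bound, path):
        f = g + h(state)
        if f > bound:                                     # step 2: prune this branch
            return f, None                                #   and report how far it overshot
        if goal_test(state):
            return f, path
        minimum = math.inf
        for nxt, cost in successors(state):
            overshoot, found = search(nxt, g + cost, bound, path + [nxt])
            if found is not None:
                return overshoot, found
            minimum = min(minimum, overshoot)
        return minimum, None

    while True:
        overshoot, found = search(start, 0, threshold, [start])
        if found is not None:
            return found
        if overshoot == math.inf:                         # nothing left to expand
            return None
        threshold = overshoot                             # step 3: smallest amount exceeded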

✓ Like A*, Iterative-Deepening-A* (IDA*) is guaranteed to find an optimal solution,


provided that h' is an admissible heuristic. Because of its depth-first search technique,
IDA* is very efficient with respect to space. IDA* was the first heuristic search algorithm to
find optimal solution paths for the 15-puzzle (a 4x4 version of the 8-puzzle) within
reasonable time and space constraints.

*******************


Constraint Satisfaction Problem


 A constraint satisfaction problem is one of the standard search problems where instead
of saying that state is a black box, we say that state is defined by variables and values.
 Each state has a certain set of variables, each variable has a certain set of values, and a
complete assignment to all the variables creates a final state.
 A problem is solved when each variable has a value that satisfies all the constraints on
the variable. A problem described this way is called a constraint satisfaction
problem, or CSP.
 This is a simple example of a formal representation language and it allows for general
purpose algorithms with more power than standard search algorithms.
Defining Constraint Satisfaction Problems


 Consider the map of Australia, and we try to solve the graph coloring problem or the
map coloring problem.
 Here, there are seven states in the map and given with three colors, red, blue and
green.
 The task is to color the map such that no two adjacent states have the same color.
 This is a very standard graph theory problem, and if we want to pose it as a
constraint satisfaction problem, we have seven variables.
 Assign one variable to each state; the domains would be red, blue and green.
 These are the three colors that we are allowed to use for coloring each variable and
then the constraint would be that Western Australia cannot be equal to Northern
Territory and so on.

 This is the solution, the solution is a specific assignment to each variable such that all
constraints are satisfied.
 It can be helpful to visualize a CSP as a constraint graph, as shown in Fig (b).
 In a constraint graph, each node is a variable and each edge determines whether
there is a constraint between those two variables or not.
 This kind of constraint graph is a binary constraint graph, where each constraint
relates at most two variables, and such CSPs are called binary CSPs.
 A state has many variables, which we call state variables, and each state variable is a node.
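The map-colouring CSP above can be written down directly as data. The Python sketch below is one way to do it (the state abbreviations WA, NT, SA, Q, NSW, V, T are the usual ones and are an assumption of this sketch, not notation from the notes):

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {                        # the edges of the constraint graph
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],                       # Tasmania is adjacent to no other state
}

def consistent(var, value, assignment):
    """A value is consistent if no already-assigned neighbour has the same colour."""
    return all(assignment.get(n) != value for n in neighbors[var])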


Variations on the CSP


i. Discrete variables
Finite domains
 The simplest kind of CSP involves variables that are discrete and have finite domains.
 Map coloring problems are of this kind.
 The 8-queens problem can also be viewed as a finite-domain CSP, where the variables
Q1, Q2, ..., Q8 are the positions of the queens in each column and each variable has the
domain {1,2,3,4,5,6,7,8}.
 If the maximum domain size of any variable in a CSP is d and there are n variables, then the number of
possible complete assignments is O(d^n) - that is, exponential in the number of variables.
Infinite domains
 Discrete variables can also have infinite domains - for example, the set of integers or
the set of strings.
 With infinite domains, it is no longer possible to describe constraints by enumerating
all allowed combinations of values. Instead, a constraint language is needed that can
express such constraints directly, for example linear inequalities such as T1 + d1 ≤ T2.
ii. Continuous variables
 CSPs with continuous domains are very common in the real world.
 For example, in the field of operations research, the scheduling of experiments on the
Hubble Telescope requires very precise timing of observations; the start and finish
of each observation are continuous-valued variables that must obey a variety of
astronomical, precedence and power constraints.
 The best known category of continuous-domain CSPs is that of linear
programming problems, where the constraints must be linear inequalities
forming a convex region.
 Linear programming problems can be solved in time polynomial in the
number of variables.

Varieties of CSPs


Preference constraints indicate which solutions are preferred. Example: red is better than green.

 Preferences are often represented by a cost for each variable assignment; the resulting
problem is called a constraint optimization problem.
Cryptarithmetic
 Another example is provided by cryptarithmetic puzzles
 Each letter in a cryptarithmetic puzzle represents a different digit.
 This "all different" requirement would be represented as the global constraint Alldiff over all the letter variables.


 Each letter stands for a distinct digit; the aim is to find a substitution of digits for
letters such that the resulting sum is arithmetically correct, with the added restriction
that no leading zeros are allowed.
 The constraint hypergraph for the cryptarithmetic problem shows the Alldiff
constraint as well as the column addition constraints.
 Each constraint is a square box connected to the variables it contains.


Backtracking search
• Variable assignments are commutative, i.e.,
[ WA = red then NT = green ] same as [ NT = green then WA = red ]

• => Only need to consider assignments to a single variable at each node

• Depth-first search for CSPs with single-variable assignments is called backtracking search

• Can solve n-queens for n ≈ 25
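A minimal sketch of backtracking search in Python is given below (no heuristics yet: it simply assigns one variable per level of the depth-first search and undoes the assignment on failure). The domains and consistent arguments are assumed to have the shape of the map-colouring sketch given earlier.

def backtracking_search(assignment, variables, domains, consistent):
    """Plain depth-first backtracking search for a CSP."""
    if len(assignment) == len(variables):                     # every variable assigned: solution
        return assignment
    var = next(v for v in variables if v not in assignment)   # pick one unassigned variable
    for value in domains[var]:
        if consistent(var, value, assignment):                # check constraints with earlier choices
            assignment[var] = value
            result = backtracking_search(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]                               # undo and try the next value
    return None                                               # no value works: backtrack

# e.g. backtracking_search({}, variables, domains, consistent) on the map-colouring data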

Backtracking example

Improving backtracking efficiency

• General-purpose methods can give huge gains in speed:
– Which variable should be assigned next?
– In what order should its values be tried?
– Can we detect inevitable failure early?


Most constrained variable


• Most constrained variable:
choose the variable with the fewest legal values

• a.k.a. the minimum remaining values (MRV) heuristic

Most constraining variable


• Most constraining variable:
– choose the variable with the most constraints on remaining variables
• A good idea is to use it as a tie-breaker among most constrained variables


Least constraining value


• Given a variable to assign, choose the least constraining value:
– the one that rules out the fewest values in the remaining variables

• Combining these heuristics makes 1000-queens feasible
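The two variable-ordering heuristics and the value-ordering heuristic can be sketched as follows. This is an illustration only; it assumes the domains dictionary is kept pruned as assignments are made (for example by forward checking), so that len(domains[v]) really is the number of remaining legal values.

def select_unassigned_variable(assignment, variables, domains, neighbors):
    """MRV: fewest remaining legal values; degree heuristic breaks ties."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned,
               key=lambda v: (len(domains[v]),                     # minimum remaining values
                              -len([n for n in neighbors[v]        # most constraints on the
                                    if n not in assignment])))     # remaining variables

def order_domain_values(var, assignment, domains, neighbors):
    """LCV: try first the values that rule out the fewest choices for the neighbours."""
    def ruled_out(value):
        return sum(value in domains[n]
                   for n in neighbors[var] if n not in assignment)
    return sorted(domains[var], key=ruled_out)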

Forward checking
• Idea:
– Keep track of remaining legal values for unassigned variables
– Terminate search when any variable has no legal values
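A sketch of forward checking for binary "not equal" constraints such as map colouring is shown below (an illustration; it copies the domain table so the caller can simply discard the copy when backtracking):

import copy

def forward_check(var, value, domains, neighbors, assignment):
    """Prune `value` from the domains of unassigned neighbours of `var`.
    Returns the pruned domain table, or None if some variable is left with no legal value."""
    pruned = copy.deepcopy(domains)
    pruned[var] = [value]
    for n in neighbors[var]:
        if n not in assignment and value in pruned[n]:
            pruned[n].remove(value)
            if not pruned[n]:             # a neighbour has run out of values
                return None               # fail early and force backtracking
    return pruned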


Constraint propagation
• Forward checking propagates information from assigned to
unassigned variables, but doesn't provide early detection
for all failures:

• NT and SA cannot both be blue!


• Constraint propagation algorithms repeatedly enforce
constraints locally…


Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
for every value x of X there is some allowed y

• If X loses a value, neighbors of X need to be rechecked

• Arc consistency detects failure earlier than forward checking
• Can be run as a preprocessor or after each assignment


Arc consistency algorithm AC-3

• Time complexity: O(#constraints × |domain|³)

Checking consistency of an arc is O(|domain|²)
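A compact sketch of AC-3 is given below. It is an illustration: allowed(xi, x, xj, y) is assumed to return True when the pair of values satisfies the constraint on the arc (for map colouring, x != y).

from collections import deque

def revise(domains, xi, xj, allowed):
    """Delete values of xi with no supporting value in xj; return True if anything was removed."""
    removed = False
    for x in domains[xi][:]:
        if not any(allowed(xi, x, xj, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, neighbors, allowed):
    """Make every arc (xi, xj) consistent; return False if some domain becomes empty."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, allowed):
            if not domains[xi]:
                return False                         # inconsistency detected early
            for xk in neighbors[xi]:                 # if xi lost a value, recheck its neighbours
                if xk != xj:
                    queue.append((xk, xi))
    return True

# For map colouring: allowed = lambda xi, x, xj, y: x != y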

k-consistency
• A CSP is k-consistent if, for any set of k-1 variables, and for any consistent
assignment to those variables, a consistent value can always be assigned to
any kth variable
• 1-consistency is node consistency
• 2-consistency is arc consistency
• For binary constraint networks, 3-consistency is the same as path consistency
• Getting k-consistency requires time and space exponential in k
• Strong k-consistency means k’-consistency for all k’ from 1 to k
– Once strong k-consistency for k=#variables has been obtained, solution
can be constructed trivially
• Tradeoff between propagation and branching
• Practitioners usually use 2-consistency and less commonly 3-consistency


A KNOWLEDGE-BASED AGENT
• An intelligent agent needs knowledge about the real world for taking
decisions and reasoning to act efficiently.
• A knowledge-based system draws upon the knowledge of human experts, captured in a
knowledge base, to solve problems that normally require human expertise.
• A knowledge-based agent includes a knowledge base and an inference system.
• A knowledge base is a set of representations of facts of the world.
• Each individual representation is called a sentence.
• The sentences are expressed in a knowledge representation language.
• The agent operates as follows:
1. It TELLs the knowledge base what it perceives.
2. It ASKs the knowledge base what action it should perform.
3. It performs the chosen action.

A KNOWLEDGE-BASED AGENT
• Knowledge-based agents are composed of two main parts:
• Knowledge-base and
• Inference system.


A KNOWLEDGE-BASED AGENT
A knowledge-based agent must be able to do the following:
• An agent should be able to represent states , actions, etc.
• An agent should be able to incorporate new percepts
• An agent can update the internal representation of the world
• An agent can deduce hidden properties of the world from its internal representation
• An agent can deduce appropriate actions.

A KNOWLEDGE-BASED AGENT ARCHITECTURE

The knowledge-based agent (KBA) takes input from the environment by perceiving it. The input is passed
to the inference engine of the agent, which also communicates with the KB to decide on an action
according to the knowledge stored in the KB.

 The learning element of KBA regularly updates the KB by learning new knowledge.


A KNOWLEDGE-BASED AGENT ARCHITECTURE


Knowledge base (KB):
• Knowledge-base is a central component of a knowledge-based agent
• It is a collection of sentences
• These sentences are expressed in a language which is called a
knowledge representation language.
• The Knowledge-base of KBA stores fact about the world.

A KNOWLEDGE-BASED AGENT ARCHITECTURE


Inference Engine:
• Inference means deriving new sentences from old.
• Inference system allows us to add a new sentence to the knowledge
base.
• A sentence is a proposition about the world.
• Example of a proposition: "Grass is green",
and "2 + 5 = 5"
• The first proposition has the truth value of "true" and the second "false".
• Inference system applies logical rules to the KB to deduce new
information.


OPERATIONS PERFORMED BY KBA


Following are three operations which are performed by KBA in order to show the
intelligent behavior:
• TELL: This operation tells the knowledge base what it perceives from
the environment.
• ASK: This operation asks the knowledge base what action it should
perform.
• Perform: It executes the selected action.

OPERATIONS PERFORMED BY KBA


function KB-AGENT(percept) returns an action
persistent: KB, a knowledge base
t, a counter, initially 0, indicating time
TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
action = ASK(KB, MAKE-ACTION-QUERY(t))
TELL(KB, MAKE-ACTION-SENTENCE(action, t))
t = t + 1
return action

A generic knowledge-based agent.


Given a percept,
• the agent adds the percept to its knowledge base,
•asks the knowledge base for the best action,
•and tells the knowledge base that it has in fact taken that action
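The generic agent above can be sketched as a small Python class. This is an illustration only: the knowledge base is assumed to be any object offering tell and ask, and the sentence constructors are stand-ins, not a real knowledge representation language.

class KnowledgeBasedAgent:
    """Skeleton of the generic knowledge-based agent above."""

    def __init__(self, kb):
        self.kb = kb                  # assumed to offer tell(sentence) and ask(query)
        self.t = 0                    # time counter, initially 0

    def step(self, percept):
        self.kb.tell(self.make_percept_sentence(percept, self.t))   # TELL what it perceives
        action = self.kb.ask(self.make_action_query(self.t))        # ASK what to do
        self.kb.tell(self.make_action_sentence(action, self.t))     # TELL that it took the action
        self.t += 1
        return action

    # placeholder sentence constructors (stand-ins for MAKE-...-SENTENCE above)
    def make_percept_sentence(self, percept, t):
        return ("percept", percept, t)

    def make_action_query(self, t):
        return ("action?", t)

    def make_action_sentence(self, action, t):
        return ("did", action, t)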


A TYPICAL WUMPUS WORLD

• The agent always starts in the field [1,1].


• The task of the agent is to find the gold, return to the field [1,1] and
climb out of the cave.

THE WUMPUS WORLD ENVIRONMENT


• The Wumpus world is a simple world example to illustrate the worth of a
knowledge-based agent and to represent knowledge representation.
• It was inspired by a video game Hunt the Wumpus by Gregory Yob in 1973.
• The agent explores a cave consisting of rooms connected by passageways.
• Lurking somewhere in the cave is the Wumpus, a beast that eats any agent
that enters its room.
• Some rooms contain bottomless pits that trap any agent that wanders into
the room.
• Occasionally, there is a heap of gold in a room.
• The goal is to collect the gold and exit the world without being eaten



A TYPICAL WUMPUS WORLD


• The Wumpus world is a cave which has 4×4 rooms connected with passageways.
• There are total 16 rooms which are connected with each other.
• We have a knowledge-based agent who will go forward in this world.
• The cave has a room with a beast which is called Wumpus, who eats anyone who
enters the room.
• A beast is an animal — and usually not a gentle or attractive one. You can also call a person a
beast when they're behaving in a crude, savage, or horrible way.
• The Wumpus can be shot by the agent, but the agent has a single arrow.
• In the Wumpus world, there are some Pits rooms which are bottomless, and if agent
falls in Pits, then he will be stuck there forever.
• The exciting thing with this cave is that in one room there is a possibility of finding a
heap of gold.
• So the agent's goal is to find the gold and climb out of the cave without falling into a pit or
being eaten by the Wumpus.
• The agent will get a reward if he comes out with gold, and he will get a penalty if
eaten by Wumpus or falls in the pit.

AGENT IN A WUMPUS WORLD: PERCEPTS


• The agent perceives
• a stench in the square containing the Wumpus and in the adjacent
squares (not diagonally)
• a breeze in the squares adjacent to a pit
• a glitter in the square where the gold is
• a bump, if it walks into a wall
• a woeful scream everywhere in the cave, if the wumpus is killed
• The percepts are given as a five-symbol list. If there is a stench and a
breeze, but no glitter, no bump, and no scream, the percept is
[Stench, Breeze, None, None, None]



WUMPUS WORLD ACTIONS


• go forward
• turn right 90 degrees
• turn left 90 degrees
• grab: Pick up an object that is in the same square as the agent
• shoot: Fire an arrow in a straight line in the direction the agent is
facing. The arrow continues until it either hits and kills the wumpus or
hits the outer wall. The agent has only one arrow, so only the first
Shoot action has any effect
• climb is used to leave the cave. This action is only effective in the start
square
• die: This action automatically and irretrievably happens if the agent
enters a square with a pit or a live wumpus

ILLUSTRATIVE EXAMPLE: WUMPUS WORLD


•Performance measure
• gold +1000,
• death -1000
(falling into a pit or being eaten by the wumpus)
• -1 per step, -10 for using the arrow
•Environment
• Rooms / squares connected by doors.
• Squares adjacent to wumpus are smelly
• Squares adjacent to pit are breezy
• Glitter iff gold is in the same square
• Shooting kills wumpus if you are facing it
• Shooting uses up the only arrow
• Grabbing picks up gold if in same square
• Releasing drops the gold in same square
• Randomly generated at start of game. Wumpus only senses current room.
•Sensors: Stench, Breeze, Glitter, Bump, Scream [perceptual inputs]
•Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot


WUMPUS WORLD CHARACTERIZATION

Fully Observable No – only local perception

Deterministic Yes – outcomes exactly specified

Static Yes – Wumpus and Pits do not move

Discrete Yes

Single-agent? Yes – Wumpus is essentially a “natural feature.”

Exploring the Wumpus world:


Now we will explore the Wumpus world and will determine how the agent will find its goal by applying
logical reasoning.
Agent's First step:
• Initially, the agent is in the first room or on the square [1,1], and we already know that this room is
safe for the agent,
• At Room [1,1] agent does not feel any breeze or any Stench which means the adjacent squares are also OK.


Exploring the Wumpus world:


Agent's second Step:
 Now agent needs to move forward, so it will either move to [1, 2], or [2,1].
 Let's suppose agent moves to the room [2, 1], at this room agent perceives some breeze which means
Pit is around this room.
 The pit can be in [3, 1] or [2, 2], so we will add the symbol P? to mark a room that may contain a pit.

•Now agent will stop and think and will not make any harmful
move. The agent will go back to the [1, 1] room.
•The room [1,1], and [2,1] are visited by the agent, so we will use
symbol V to represent the visited squares.

Exploring the Wumpus world:


Agent's third step:
• At the third step, now agent will move to the room
[1,2] which is OK.
• In the room [1,2] agent perceives a stench which
means there must be a Wumpus nearby.

• But Wumpus cannot be in the room [1,1] as by


rules of the game, and also not in
[2,2] (Agent had not detected any
stench when he was at [2,1]).

• Therefore agent infers that Wumpus is in the


room [1,3], and in current state, there is no breeze
which means in [2,2] there is no Pit and no
Wumpus.

• So it is safe, and we will mark it OK,


• and the agent moves further in [2,2].


Exploring the Wumpus world:


Agent's fourth step:
• At room [2,2], here no stench and
no breezes present
• so let's suppose agent decides to
move to [2,3].
• At room [2,3] agent perceives
glitter,
• so it should grab the gold and
climb out of the cave.

NO INDEPENDENT ACCESS TO THE WORLD


• The reasoning agent often gets its knowledge about the facts of
the world as a sequence of logical sentences and must draw
conclusions only from them without independent access to the
world.
• Thus it is very important that the agent’s reasoning is sound!



SUMMARY OF KNOWLEDGE BASED AGENTS


• Intelligent agents need knowledge about the world for making good
decisions.
• The knowledge of an agent is stored in a knowledge base in the form
of sentences in a knowledge representation language.
• A knowledge-based agent needs a knowledge base and an
inference mechanism. It operates by storing sentences in its
knowledge base, inferring new sentences with the inference
mechanism, and using them to deduce which actions to take.
• A representation language is defined by its syntax and semantics,
which specify the structure of sentences and how they relate to the
facts of the world.
• The interpretation of a sentence is the fact to which it refers. If this
fact is part of the actual world, then the sentence is true.

Types of knowledge
Declarative Knowledge: (descriptive knowledge )
• Declarative knowledge is to know about something.
• It includes concepts, facts, and objects.
• expressed in declarative sentences.
Procedural Knowledge (imperative knowledge.)
• knowing how to do something.
• It can be directly applied to any task.
• It includes rules, strategies, procedures, agendas, etc.
• It depends on the task on which it can be applied.
Heuristic knowledge: Heuristic knowledge is rules of thumb based on previous
experiences, awareness of approaches, and which are good to work but not
guaranteed.
Structural knowledge:
• Structural knowledge is basic knowledge to problem-solving.
• It describes relationships between various concepts such as kind of, part of, and grouping
of something.
• It describes the relationship that exists between concepts or objects.


Techniques of knowledge representation

There are mainly four ways of knowledge representation which are given
as follows:
• Logical Representation
• Semantic Network Representation
• Frame Representation
• Production Rules

Logical Representation


Basic of Logical Representation


• Logical representation is a language with standard rules which deals with propositions
and has no ambiguity in representation.
• It consists of precisely defined syntax and semantics which supports the sound
inference.
Syntax:
• Syntaxes are the rules which decide how we can construct legal
sentences in the logic.
• It determines which symbol we can use in knowledge representation.
• How to write those symbols.
Semantics:
• Semantics are the rules by which we can interpret the sentence in the
logic.
• Semantic involves assigning a meaning to each sentence.

Logical Representation

Logical representation can be categorized into mainly two


logics:
• Propositional Logics
• Predicate logics


Propositional logic

Propositional logic

• Propositional logic (PL) is the simplest form of logic where all the
statements are made by propositions.
• A proposition is a declarative statement which is either true or false.
• It is a technique of knowledge representation in logical and
mathematical form.
• Example:
• a) today is Wednesday
• b) The Sun rises from West (False proposition)
• c) 3+3= 6(true proposition)


Propositional logic
• In propositional logic, we use symbolic variables to represent the logic, and we can use any
symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
• The atomic sentences consist of a single proposition symbol.
• Complex sentences are constructed from simpler sentences.
• Propositions can be either true or false, but it cannot be both.
• Propositional logic consists of an object, relations or function, and logical
connectives.

Syntax of Propositional logic


The syntax of propositional logic defines the allowable sentences for the
knowledge representation.
There are two types of Propositions:
• Atomic Propositions
• Compound propositions


Atomic & Compound Proposition


• Atomic propositions are the simple propositions.
• It consists of a single proposition symbol.
• These are the sentences which must be either true or false.
• Examples :
• 2+2 is 4, it is an atomic proposition as it is a true fact.
• "The Sun is cold" is also a proposition as it is a false fact.

• Compound propositions are constructed by combining simpler or atomic


propositions, using parenthesis and logical connectives.
• Examples:
• "It is raining today, and street is wet."
• "Ankit is a doctor, and his clinic is in Mumbai."

Logical Connectives:
• Logical connectives are used to connect two simpler propositions or
representing a sentence logically.
• We can create compound propositions with the help of logical
connectives.
• There are mainly five connectives, which are given as follows:


Logical Connectives: Example


Sentences:
X : It is hot
Y: It is Humid
Z: It is raining

• If it is humid then it is hot: Y → X
• If it is hot and humid then it is not raining: (X ∧ Y) → ¬Z

Logical Connectives:
Conjunction :
A sentence which has ∧ connective
• P ∧ Q is called a conjunction.
Example: Rohan is intelligent and hardworking.
Disjunction:
A sentence which has ∨ connective.
• P ∨ Q. is called disjunction.
Example: "Raju is a doctor or surgeon",
Negation:
A sentence such as ¬ P is called negation of P.
A literal can be either Positive literal or negative literal.


Logical Connectives:
Implication: A sentence such as P → Q, is called an implication.
 Implications are also known as if-then rules.

examples: If it is raining, then the street is wet.


Let P= It is raining, and
Q= Street is wet
so it is represented as P → Q

Biconditional: A sentence such as P⇔ Q is a Biconditional sentence,


• Ex:- If and only if I am breathing, then I am alive
P= I am breathing,
Q= I am alive,
it can be represented as P ⇔ Q.

Logical Connectives: Truth Tables


• In propositional logic, we need to know the truth values of propositions in all
possible scenarios.
• We can combine all the possible combination with logical connectives, and the
representation of these combinations in a tabular format is called Truth table.
OR (∨) − The OR operation of two propositions A and B (written as A ∨ B) is true if at least one
of the propositional variables A or B is true. The truth table is as follows −

A B A∨B
True True True
True False True
False True True
False False False

Eg: You should eat or watch TV at a time

AND (∧) − The AND operation of two propositions A and B (written as A ∧ B) is true if
both the propositional variables A and B are true. The truth table is as follows −

A B A∧B
True True True
True False False
False True False
False False False

Eg: Please like my video and subscribe my channel


Logical Connectives: Truth Tables


Negation (¬) − The negation of a proposition A (written as ¬A) is false when A is true and is true
when A is false. The truth table is as follows −

A ¬A
True False
False True

Eg: Today is not Tuesday

If-then (→) − An implication A → B is the proposition "if A, then B". It is false if A is true and
B is false; the rest of the cases are true. The truth table is as follows −

A B A→B
True True True
True False False
False True True
False False True

Eg: If there is rain then the roads are wet

Logical Connectives: Truth Tables


If and only if (⇔) (reverse of Exclusive OR)
− (A⇔B) is bi-conditional logical connective which is true when A and B are same, i.e.
both are false or both are true.
The truth table is as follows −
A B A⇔B

True True True

True False False

False True False

False False True

Eg: I will go to mall iff I have to do shopping

Eg: You can access the internet from campus only if you are CSE student or you are not freshman


Logical Connectives: Precedence


• Just like arithmetic operators, there is a precedence order for propositional
connectors or logical operators.
• This order should be followed while evaluating a propositional problem.
• Following is the list of the precedence order for operators:

Precedence Operators

First Precedence Parenthesis


Second Precedence Negation
Third Precedence Conjunction(AND)
Fourth Precedence Disjunction(OR)
Fifth Precedence Implication
Sixth Precedence Biconditional

Logical equivalence:
• Logical equivalence is one of the features of propositional logic.
• Two propositions are said to be logically equivalent if and only if the
columns in the truth table are identical to each other.
(P ∧ Q) ≡ (Q ∧ P)                        commutativity of ∧
(P ∨ Q) ≡ (Q ∨ P)                        commutativity of ∨
((P ∧ Q) ∧ R) ≡ (P ∧ (Q ∧ R))            associativity of ∧
((P ∨ Q) ∨ R) ≡ (P ∨ (Q ∨ R))            associativity of ∨
¬(¬P) ≡ P                                double-negation elimination
(P ⇒ Q) ≡ (¬Q ⇒ ¬P)                      contraposition
(P ⇒ Q) ≡ (¬P ∨ Q)                       implication elimination
(P ⇔ Q) ≡ ((P ⇒ Q) ∧ (Q ⇒ P))            biconditional elimination
¬(P ∧ Q) ≡ (¬P ∨ ¬Q)                     De Morgan
¬(P ∨ Q) ≡ (¬P ∧ ¬Q)                     De Morgan
(P ∧ (Q ∨ R)) ≡ ((P ∧ Q) ∨ (P ∧ R))      distributivity of ∧ over ∨
(P ∨ (Q ∧ R)) ≡ ((P ∨ Q) ∧ (P ∨ R))      distributivity of ∨ over ∧
The symbols P, Q, and R stand for arbitrary sentences of propositional logic.
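Truth tables and logical equivalence can both be checked mechanically. The short Python sketch below is only an illustration: it uses Python's own and/or/not as the connectives (an implication has to be spelled as (not P) or Q), prints a truth table, and verifies one of De Morgan's laws by comparing the two columns row by row.

from itertools import product

def truth_table(expr, symbols):
    """Print the truth table of a boolean expression over the given symbols."""
    print(*symbols, expr, sep="\t")
    for values in product([True, False], repeat=len(symbols)):
        env = dict(zip(symbols, values))
        print(*values, eval(expr, {}, env), sep="\t")

def equivalent(expr1, expr2, symbols):
    """Two sentences are equivalent iff their truth-table columns are identical."""
    return all(eval(expr1, {}, dict(zip(symbols, v))) == eval(expr2, {}, dict(zip(symbols, v)))
               for v in product([True, False], repeat=len(symbols)))

truth_table("A and B", ["A", "B"])
print(equivalent("not (P and Q)", "(not P) or (not Q)", ["P", "Q"]))   # De Morgan: True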


Rules of inferences
• To deduce new statements from the statements whose truth value that we already
know, Rules of Inference are used.
Rules of Inference
• Mathematical logic is often used for logical proofs.
• Proofs are valid arguments that determine the truth values of mathematical statements.
• An argument is a sequence of statements.
• The last statement is the conclusion and all its preceding statements are called
premises (or hypothesis).
• The symbol “∴” is placed before the conclusion.
• A valid argument is one where the conclusion follows from the truth values of
the premises.
• Rules of Inference provide the templates or guidelines for constructing valid
arguments from the statements that we already have.

Rules of inferences
Modus Ponens:
• it states that if P and P → Q is true, then we can infer that Q will be true.

Example:
• Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.

• Hence, we can say that, if P→ Q is true and P is true then Q will be true.

Proof by Truth table:


Rules of inferences
Modus Tollens:
• It states that if P → Q is true and ¬Q is true, then ¬P will also be true.

• EX:-
• Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I do not go to the bed."==> ~Q
Conclusion: Which infers that "I am not sleepy" => ~P

• Proof by Truth table:

Rules of inferences
Addition:
The Addition rule states that
• If P is true, then P∨Q will be true.

Example:
Statement-1: I have a vanilla ice-cream. ==> P
Statement-2: I have Chocolate ice-cream.==>Q
Conclusion: I have vanilla or chocolate ice-cream. ==> (P∨Q)
Proof by Truth-Table:


Rules of inferences
Simplification:
• The simplification rule states that if P ∧ Q is true, then Q or P will also be true.

It can be represented as:

Proof by Truth-Table:

Rules of inferences
Resolution:
• The Resolution rule states that
• if P ∨ Q and ¬P ∨ R are true, then Q ∨ R will also be true.
• It can be represented as

Proof by Truth-Table:
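Since the truth-table proof is referenced above, the rule can also be checked programmatically. The sketch below (an illustration) enumerates all eight truth assignments and confirms that whenever P ∨ Q and ¬P ∨ R are both true, Q ∨ R is true as well.

from itertools import product

valid = all(
    (Q or R)                                  # the conclusion
    for P, Q, R in product([True, False], repeat=3)
    if (P or Q) and ((not P) or R)            # rows where both premises hold
)
print(valid)                                  # True: the resolution rule is sound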


Rules of inferences

proposition variable for Wumpus world:


• Let Pi,j be true if there is a Pit in the room [i, j].
• Let Bi,j be true if agent perceives breeze in [i, j].
• Let Wi,j be true if there is wumpus in the square[i, j].
• Let Si,j be true if agent perceives stench in the square [i, j].
• Let Vi,j be true if that square[i, j] is visited.
• Let Gi,j be true if there is gold (and glitter) in the square [i, j].
• Let OKi,j be true if the room is safe.


Some Propositional Rules for the wumpus world:


(B2,1 ∧ B4,1 ∧ B3,2) → P3,1
(B2,3 ∧ B3,2 ∧ B3,4 ∧ B4,3) → P3,3
(B4,3 ∧ B3,4) → P4,4

(Figure: the 4×4 grid of Wumpus-world rooms, labelled [column, row].)

Representation of Knowledgebase for Wumpus world:


Following is the Simple KB for wumpus world when an agent moves from room [1, 1],
to room [2,1]:


• Examples for use of inference rules and equivalences in the wumpus


world.
• We can prove that wumpus is in the room (1, 3) using propositional rules which we
have derived for the wumpus world and using inference rule.


• Apply Modus Ponens with ¬S11 and R1: this gives ¬W11 ∧ ¬W12 ∧ ¬W21.

• Apply the And-Elimination rule to ¬W11 ∧ ¬W12 ∧ ¬W21: this gives ¬W11, ¬W12, and ¬W21.

• Apply Modus Ponens to ¬S21 and R2: this gives ¬W21 ∧ ¬W22 ∧ ¬W31.

• Apply the And-Elimination rule to ¬W21 ∧ ¬W22 ∧ ¬W31: this gives ¬W21, ¬W22, and ¬W31.


• Apply Modus Ponens to S1,2 and R4: this gives W13 ∨ W12 ∨ W22 ∨ W11.

• Apply Unit Resolution on W13 ∨ W12 ∨ W22 ∨ W11 and ¬W11: this gives W13 ∨ W12 ∨ W22.

• Apply Unit Resolution on W13 ∨ W12 ∨ W22 and ¬W22: this gives W13 ∨ W12.

• Apply Unit Resolution on W13 ∨ W12 and ¬W12: this gives W13, i.e., the Wumpus is in room [1,3].


Generalized Modus Ponens (GMP)


• Apply modus ponens reasoning to generalized rules
• Combines And-Introduction, Universal-Elimination, and Modus Ponens
• From P(c) and Q(c) and (∀x)(P(x) ∧ Q(x)) → R(x), derive R(c)
• General case: Given
• atomic sentences P1, P2, ..., PN
• implication sentence (Q1 ∧ Q2 ∧ ... ∧ QN) → R
• Q1, ..., QN and R are atomic sentences
• substitution subst(θ, Pi) = subst(θ, Qi) for i=1,...,N
• Derive new sentence: subst(θ, R)
• Substitutions
• subst(θ, α) denotes the result of applying a set of substitutions defined by θ to the sentence α
• A substitution list θ = {v1/t1, v2/t2, ..., vn/tn} means to replace all occurrences of variable symbol vi by
term ti
• Substitutions are made in left-to-right order in the list
• subst({x/IceCream, y/Ziggy}, eats(y,x)) = eats(Ziggy, IceCream)
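The substitution operation itself is easy to sketch in Python. This is an illustration only: sentences are represented here as nested tuples, which is an assumption of the sketch, not the notation used above.

def subst(theta, sentence):
    """Apply a substitution list theta = {variable: term} to a tuple-encoded sentence."""
    if isinstance(sentence, tuple):
        return tuple(subst(theta, part) for part in sentence)
    return theta.get(sentence, sentence)      # replace a variable, leave constants alone

theta = {"x": "IceCream", "y": "Ziggy"}
print(subst(theta, ("eats", "y", "x")))       # ('eats', 'Ziggy', 'IceCream')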

Horn clauses
• A Horn clause is a sentence of the form:
(∀x) P1(x) ∧ P2(x) ∧ ... ∧ Pn(x) → Q(x)
where
• there are 0 or more Pis and 0 or 1 Q
• the Pis and Q are positive (i.e., non-negated) literals
• Equivalently: a disjunction of literals ¬P1(x) ∨ ¬P2(x) ∨ … ∨ ¬Pn(x) ∨ Q(x), where the literals
are atomic and at most one of them is positive
• Prolog is based on Horn clauses
• Horn clauses represent a subset of the set of sentences
representable in FOL
