
2 – Problems, State Space Search & Heuristic Search Techniques

Introduction
• Problem solving is the major area of concern in Artificial Intelligence.
• It is the process of generating a solution from given observed data.
• To solve a particular problem, we need to build a system or a method which can generate the
required solution.
• The following four things are required for building such a system.
1. Define the problem precisely.
➢ This definition must precisely specify the initial situation (input).
➢ It must also specify what final situation (output) will constitute an acceptable
solution to the problem.
2. Analyze the problem.
➢ Identify those important features which can have an immense impact on the
appropriateness of various possible techniques for solving the problem.
3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem solving technique and apply it to the particular
problem.

Defining the Problem as a State Space Search


1. Defining Problem & Search
• A problem is described formally as:
1. Define a state space that contains all the possible configurations of relevant objects.
2. Specify one or more states within that space that describe possible situations from
which the problem solving process may start. These states are called initial states.
3. Specify one or more states that would be acceptable as solutions to the problem. These
states are called goal states.
4. Specify a set of rules that describe the actions available.
• The problem can then be solved by using the rules, in combination with an appropriate
control strategy, to move through the problem space until a path from an initial state to a
goal state is found.
• This process is known as search.
• Search is fundamental to the problem-solving process.
• Search is a general mechanism that can be used when a more direct method is not known.
• Search also provides the framework into which more direct methods for solving subparts of
a problem can be embedded.

2. Defining State & State Space


• A state is a representation of problem elements at a given moment.
• A State space is the set of all states reachable from the initial state.
• A state space forms a graph in which the nodes are states and the arcs between nodes
are actions.
• In state space, a path is a sequence of states connected by a sequence of actions.
• The solution of a problem is part of the graph formed by the state space.
• The state space representation forms the basis of most of the AI methods.
• Its structure corresponds to the structure of problem solving in two important ways:
1. It allows for a formal definition of a problem as the need to convert some given
situation into some desired situation using a set of permissible operations.
2. It permits the problem to be solved with the help of known techniques and control
strategies to move through the problem space until a goal state is found.

3. Define the Problem as State Space Search


Ex.1:- Consider the problem of Playing Chess
• To build a program that could play chess, we have to specify:
o The starting position of the chess board,
o The rules that define legal moves, and
o The board position that represents a win.
• The starting position can be described by an 8 × 8 array square, in which each element
square(x, y), with x varying from 1 to 8 and y varying from 1 to 8, describes the board position
of an appropriate piece in the official chess opening position.
• The goal is any board position in which the opponent does not have a legal move and his
or her “king” is under attack.
• The legal moves provide the way of getting from the initial state to the final state.
• The legal moves can be described as a set of rules consisting of two parts: A left side that
gives the current position and the right side that describes the change to be made to the
board position.
• An example is shown in the following figure.
Current position:
White pawn at Square(5, 2), AND Square(5, 3) is empty, AND Square(5, 4) is empty.
Change in board position:
Move pawn from Square(5, 2) to Square(5, 4).
• The current position of a chess piece on the board is its state and the set of all possible
states is the state space.
• One or more states where the problem terminates are goal states.
• Chess has approximately 10^120 game paths. These positions comprise the problem
search space.
• Using the above formulation, the problem of playing chess is defined as a problem of
moving around in a state space, where each state corresponds to a legal position of the
board.
• The state space representation seems natural for the chess problem because the set of
states, which corresponds to the set of board positions, is well organized.

Ex.2:- Consider Water Jug problem


• A Water Jug Problem: You are given two jugs, a 4-gallon one and a 3-gallon one, a pump
which has unlimited water which you can use to fill the jug, and the ground on which
water may be poured. Neither jug has any measuring markings on it. How can you get
exactly 2 gallons of water in the 4-gallon jug?
• Here the initial state is (0, 0). The goal state is (2, n) for any value of n.
• State Space Representation: we will represent a state of the problem as a tuple (x, y)
where x represents the amount of water in the 4-gallon jug and y represents the amount
of water in the 3-gallon jug. Note that 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.
• To solve this we have to make some assumptions not mentioned in the problem. They
are:
o We can fill a jug from the pump.
o We can pour water out of a jug to the ground.
o We can pour water from one jug to another.
o There is no measuring device available.
• Operators – we must define a set of operators that will take us from one state to
another.
Sr.   Current state                      Next state          Description
1     (x, y) if x < 4                    (4, y)              Fill the 4-gallon jug
2     (x, y) if y < 3                    (x, 3)              Fill the 3-gallon jug
3     (x, y) if x > 0                    (x − d, y)          Pour some water (d) out of the 4-gallon jug
4     (x, y) if y > 0                    (x, y − d)          Pour some water (d) out of the 3-gallon jug
5     (x, y) if x > 0                    (0, y)              Empty the 4-gallon jug on the ground
6     (x, y) if y > 0                    (x, 0)              Empty the 3-gallon jug on the ground
7     (x, y) if x + y ≥ 4 and y > 0      (4, y − (4 − x))    Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8     (x, y) if x + y ≥ 3 and x > 0      (x − (3 − y), 3)    Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9     (x, y) if x + y ≤ 4 and y > 0      (x + y, 0)          Pour all the water from the 3-gallon jug into the 4-gallon jug
10    (x, y) if x + y ≤ 3 and x > 0      (0, x + y)          Pour all the water from the 4-gallon jug into the 3-gallon jug
11    (0, 2)                             (2, 0)              Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12    (2, y)                             (0, y)              Empty the 2 gallons in the 4-gallon jug on the ground

• There are several sequences of operators that will solve the problem.
• One of the possible solutions is given as:
Gallons in the 4-gallon jug    Gallons in the 3-gallon jug    Rule applied
0                              0                              2
0                              3                              9
3                              0                              2
3                              3                              7
4                              2                              5 or 12
0                              2                              9 or 11
2                              0                              --
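• The production rules above can be turned directly into a small search program. The following
Python sketch (not part of the original notes; the names are my own) applies the rules with a
breadth-first control strategy to find a path from (0, 0) to a state with 2 gallons in the 4-gallon
jug. Rules 3 and 4 are left out because the amount d is unspecified, and rules 11 and 12 are
special cases of rules 9 and 5.

```python
from collections import deque

def successors(state):
    """Apply the water jug production rules to (x, y) and yield (next_state, rule_no)."""
    x, y = state
    if x < 4: yield (4, y), 1                        # fill the 4-gallon jug
    if y < 3: yield (x, 3), 2                        # fill the 3-gallon jug
    if x > 0: yield (0, y), 5                        # empty the 4-gallon jug
    if y > 0: yield (x, 0), 6                        # empty the 3-gallon jug
    if x + y >= 4 and y > 0:                         # pour 3 -> 4 until the 4-gallon jug is full
        yield (4, y - (4 - x)), 7
    if x + y >= 3 and x > 0:                         # pour 4 -> 3 until the 3-gallon jug is full
        yield (x - (3 - y), 3), 8
    if x + y <= 4 and y > 0:                         # pour all of the 3-gallon jug into the 4-gallon jug
        yield (x + y, 0), 9
    if x + y <= 3 and x > 0:                         # pour all of the 4-gallon jug into the 3-gallon jug
        yield (0, x + y), 10

def solve(start=(0, 0)):
    """Breadth-first search over the state space; returns the list of (rule, state) steps."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == 2:                            # goal: 2 gallons in the 4-gallon jug
            return path
        for nxt, rule in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [(rule, nxt)]))
    return None

print(solve())   # prints one sequence of (rule, state) steps ending with 2 gallons in the 4-gallon jug
```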

Ex.3:- Consider 8 puzzle problem


• The 8-puzzle consists of eight numbered, movable tiles set in a 3×3 frame. One cell of the
frame is always empty, thus making it possible to move an adjacent numbered tile into
the empty cell. Such a puzzle is illustrated in the following diagram.

• The program is to change the initial configuration into the goal configuration.
• A solution to the problem is an appropriate sequence of moves, such as “move tile 5 to
the right, move tile 7 to the left, move tile 6 down”, etc.

• To solve a problem, we must specify the global database, the rules, and the control
strategy.
• For the 8-puzzle problem these correspond to three elements: the problem states, the
moves and the goal.
• In this problem each tile configuration is a state.
• The set of all possible configurations in the problem space consists of 3,62,880 (i.e. 9!)
different configurations of the 8 tiles and the blank space.
• For the 8-puzzle, a straightforward description of a state is a 3×3 array (matrix) of numbers.
The initial global database is this description of the initial problem state. Virtually any kind
of data structure can be used to describe states.
• A move transforms one problem state into another state.

Figure 1: Solution of 8 Puzzle problem

• The 8-puzzle is conveniently interpreted as having the following four moves.


o Move empty space (blank) to the left, move blank up, move blank to the right
and move blank down.
o These moves are modeled by production rules that operate on the state
descriptions in the appropriate manner.
• The goal condition forms the basis for the termination.
• The control strategy repeatedly applies rules to state descriptions until a description of a
goal state is produced.
• It also keeps track of the rules that have been applied so that it can compose them into a
sequence representing the problem solution.
• A solution to the 8-puzzle problem is given in fig. 1.
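• As an illustration (not from the original notes), the four blank-moving rules can be written as a
successor function in Python over a 3×3 tuple-of-tuples state; the names used are my own.

```python
def find_blank(state):
    """Locate the blank (represented by 0) in a 3x3 tuple-of-tuples state."""
    for r in range(3):
        for c in range(3):
            if state[r][c] == 0:
                return r, c

def successors(state):
    """Generate the states reachable by moving the blank left, up, right or down."""
    r, c = find_blank(state)
    moves = {"left": (0, -1), "up": (-1, 0), "right": (0, 1), "down": (1, 0)}
    for name, (dr, dc) in moves.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:                           # rule applies only inside the frame
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]   # slide the adjacent tile into the blank
            yield name, tuple(tuple(row) for row in grid)

start = ((1, 2, 3),
         (4, 0, 5),
         (6, 7, 8))
for move, nxt in successors(start):
    print(move, nxt)
```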

Production System
• The search process forms the core of many intelligent processes.
• So, it is useful to structure AI programs in a way that facilitates describing and performing
the search process.
• A production system provides such a structure.
• A production system consists of:
1. A set of rules, each consisting of a left side that determines the applicability of the rule
and a right side that describes the operation to be performed if that rule is applied.
2. One or more knowledge/databases that contain whatever information is appropriate
for the particular task. Some parts of the database may be permanent, while other parts
of it may pertain only to the solution of the current problem.
3. A control strategy that specifies the order in which the rules will be compared to the
database and a way of resolving the conflicts that arise when several rules match at
once.
4. A rule applier which is the computational system that implements the control strategy
and applies the rules.
• In order to solve a problem:
o We must first reduce it to a form for which a precise statement can be given. This
can be done by defining the problem’s state space (start and goal states) and a set of
operators for moving through that space.
o The problem can then be solved by searching for a path through the space from an
initial state to a goal state.
o The process of solving the problem can usefully be modeled as a production system,
as the sketch below illustrates.
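• A minimal sketch of the components in Python (my own illustration, not from the notes): rules
as (condition, action) pairs over a simple string database, a control strategy that picks the first
applicable rule, and a rule applier that loops until a goal test succeeds.

```python
# Rules: a left side (condition) and a right side (action) over a string database.
rules = [
    (lambda db: "ab" in db, lambda db: db.replace("ab", "", 1)),   # ab -> (empty)
    (lambda db: "ba" in db, lambda db: db.replace("ba", "", 1)),   # ba -> (empty)
    (lambda db: "aa" in db, lambda db: db.replace("aa", "", 1)),   # aa -> (empty)
    (lambda db: "bb" in db, lambda db: db.replace("bb", "", 1)),   # bb -> (empty)
]

def run(database, goal_test, max_steps=100):
    """Rule applier: repeatedly apply the first matching rule (a very simple
    control strategy with no conflict resolution or backtracking)."""
    for _ in range(max_steps):
        if goal_test(database):
            return database
        applicable = [action for condition, action in rules if condition(database)]
        if not applicable:                    # no rule matches: the system halts
            return None
        database = applicable[0](database)
    return None

print(repr(run("abba", lambda db: db == "")))   # -> '' (the string is reduced to empty)
```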
Benefits of Production System
1. Production systems provide an excellent tool for structuring AI programs.
2. Production Systems are highly modular because the individual rules can be added,
removed or modified independently.
3. The production rules are expressed in a natural form, so the statements contained in the
knowledge base should be easily understandable.
Production System Characteristics
1. Monotonic Production System: the application of a rule never prevents the later
application of another rule that could also have been applied at the time the first rule
was selected. i.e., rules are independent.
2. Non-Monotonic Production system is one in which this is not true.
3. Partially commutative Production system: a production system with the property that if the
application of a particular sequence of rules transforms state x into state y, then any allowable
permutation of those rules also transforms state x into state y.
4. Commutative Production system: A Commutative production system is a production
system that is both monotonic and partially commutative.

Control Strategies
• Control strategies help us decide which rule to apply next during the process of searching
for a solution to a problem.
• A good control strategy should:
1. Cause motion.
2. Be systematic.
• Control strategies are classified as:
1. Uninformed/blind search control strategy:
o Do not have additional information about states beyond problem definition.
o Total search space is looked for solution.
o Example: Breadth First Search (BFS), Depth First Search (DFS), Depth Limited
Search (DLS).
2. Informed/Directed Search Control Strategy:
o Some information about problem space is used to compute preference
among the various possibilities for exploration and expansion.
o Examples: Best First Search, Problem Decomposition, A*, Means-Ends Analysis

Breadth-First Search Strategy (BFS)


• This is an exhaustive search technique.
• The search generates all nodes at a particular level before proceeding to the next level of
the tree.
• The search systematically proceeds testing each node that is reachable from a parent node
before it expands to any child of those nodes.
• Search terminates when a solution is found and the test returns true.
Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty do:

i. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.
ii. For each way that each rule can match the state described in E do:
a. Apply the rule to generate a new state.
b. If the new state is a goal state, quit and return this state.
c. Otherwise, add the new state to the end of NODE-LIST.
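• An illustrative Python sketch of the NODE-LIST algorithm (my own names and example graph,
not from the notes). NODE-LIST becomes a FIFO queue, and, like the simple algorithm above,
duplicate states are not filtered out.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """NODE-LIST is a FIFO queue; each element carries the path used to reach it."""
    node_list = deque([(start, [start])])
    while node_list:                          # quit when NODE-LIST is empty
        state, path = node_list.popleft()     # remove the first element, call it E
        for new_state in successors(state):   # apply every applicable rule to E
            if goal_test(new_state):          # goal found: return the solution path
                return path + [new_state]
            node_list.append((new_state, path + [new_state]))
    return None

# A small example graph, given as an adjacency dictionary (hypothetical data).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(breadth_first_search("A", lambda s: s == "E", lambda s: graph[s]))
# -> ['A', 'C', 'E']
```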

Depth-First Search Strategy (DFS)


• Here, the search systematically proceeds to some depth d, before another path is
considered.
• If the maximum depth of the search tree is reached and the solution has not been found, then
the search backtracks to the previous level and explores any remaining alternatives at this
level, and so on.
Algorithm:
1. If the initial state is a goal state, quit and return success
2. Otherwise, do the following until success or failure is signaled:
a. Generate a successor, E, of the initial state. If there are no more successors, signal
failure.
b. Call Depth-First Search, with E as the initial state
c. If success is returned, signal success. Otherwise continue in this loop.
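• An illustrative recursive Python sketch of the algorithm (my own names and example graph; it
assumes an acyclic state space, since no record of visited states is kept).

```python
def depth_first_search(state, goal_test, successors, path=None):
    """Return a path to a goal state, exploring one branch fully before backtracking."""
    path = [state] if path is None else path + [state]
    if goal_test(state):                       # step 1: the current state is a goal state
        return path
    for successor in successors(state):        # step 2a: generate successors one at a time
        result = depth_first_search(successor, goal_test, successors, path)
        if result is not None:                 # step 2c: success is propagated upward
            return result
    return None                                # no more successors: signal failure (backtrack)

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(depth_first_search("A", lambda s: s == "E", lambda s: graph[s]))
# -> ['A', 'C', 'E']
```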

Comparison: DFS & BFS


• DFS requires less memory, since only the nodes on the current path are stored; BFS
systematically examines the whole space of possible moves, which requires considerable
memory resources.
• By chance, DFS may find a solution without examining much of the search space at all, and
so may find a solution faster; BFS tests each node reachable from a parent node before it
expands any child of those nodes.
• If the selected path does not lead to a solution node, DFS gets stuck in a blind alley; BFS will
not get trapped exploring a blind alley.
• DFS does not guarantee that a solution will be found, and backtracking is required if a wrong
path is selected; if there is a solution, BFS is guaranteed to find it.


Iterative Deepening Search


• Depth first search is incomplete if there is an infinite branch in the search tree.
• Infinite branches can happen if:
o paths contain loops
o there is an infinite number of states and/or operators.
• For problems with infinite (or just very large) state spaces, several variants of depth-first
search have been developed:
o depth limited search
o iterative deepening search
• Depth limited search (DLS) is a form of depth-first search.
• It expands the search tree depth-first up to a maximum depth 𝑙
• The nodes at depth 𝑙 are treated as if they had no successors
• If the search reaches a node at depth 𝑙 where the path is not a solution, we backtrack to the
next choice point at 𝑑𝑒𝑝𝑡ℎ < 𝑙
• Depth-first search can be viewed as a special case of DLS with 𝑙 = ∞
• The depth bound can sometimes be chosen based on knowledge of the problem
• For example, in the route planning problem, the longest route has length 𝑠 – 1, where 𝑠 is the
number of cities (states), so we can set 𝑙 = 𝑠 – 1.
• For most problems, however, the solution depth 𝑑 is unknown.
• Iterative deepening (depth-first) search (IDS) is a form of depth limited search which
progressively increases the bound.
• It first tries 𝑙 = 1, then 𝑙 = 2, then 𝑙 = 3, etc. until a solution is found
• Solution will be found when 𝑙 = 𝑑
• IDDFS combines depth-first search’s space-efficiency and breadth-first search’s fast search
(for nodes closer to root).
• IDDFS calls DFS for different depths starting from an initial value. In every call, DFS is
restricted from going beyond given depth. So basically we do DFS in a BFS fashion.
• Iterative-deepening depth-first search proceeds with the current depth limit 𝑙 starting at 1
and incrementing each time the search at the previous limit fails to find a solution.
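• An illustrative Python sketch (my own names and example graph) of depth-limited search
wrapped in an iterative-deepening loop:

```python
def depth_limited_search(state, goal_test, successors, limit, path=None):
    """Depth-first search that treats nodes at the depth bound as having no successors."""
    path = [state] if path is None else path + [state]
    if goal_test(state):
        return path
    if limit == 0:                             # depth bound reached: backtrack
        return None
    for successor in successors(state):
        result = depth_limited_search(successor, goal_test, successors, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening_search(start, goal_test, successors, max_limit=50):
    """Call depth-limited search with l = 1, 2, 3, ... until a solution is found."""
    for limit in range(1, max_limit + 1):
        result = depth_limited_search(start, goal_test, successors, limit)
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(iterative_deepening_search("A", lambda s: s == "E", lambda s: graph[s]))
# -> ['A', 'C', 'E']
```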


Problem Characteristics
• In order to choose the most appropriate problem solving method, it is necessary to analyze the
problem along various key dimensions.
• These dimensions are referred to as problem characteristics discussed below.
1. Is the problem decomposable into a set of independent smaller or easier sub-problems?
➢ A very large and composite problem can be easily solved if it can be broken into smaller
problems and recursion could be used.
➢ For example, suppose we want to evaluate the integral ∫ (x² + 3x + sin²x·cos²x) dx.
➢ This can be done by breaking it into three smaller problems and solving each by applying
specific rules. Adding the results, we get the complete solution.
➢ But there are certain problems which cannot be decomposed into sub-problems.
➢ For example, consider the blocks world problem, in which the start and goal states are given.

➢ Here, a solution can only be achieved by moving the blocks in a sequence such that the goal
state is derived.
➢ The solution steps are interdependent and cannot be decomposed into sub-problems.
➢ These two examples, symbolic integration and the blocks world illustrate the difference
between decomposable and non-decomposable problems.
2. Can solution steps be ignored or at least undone if they prove unwise?
➢ Problem fall under three classes, (i) ignorable, (ii) recoverable and (iii) irrecoverable.
➢ This classification is with reference to the steps of the solution to a problem.
➢ Consider theorem proving: we may proceed by first proving a lemma that we think will be
useful, and later find that the lemma is of no use. We can still proceed further, since nothing
is lost by this redundant step. This is an example of ignorable solution steps.
➢ Now consider the 8-puzzle, in which tiles must be arranged in the tray in a specified order.
➢ While moving from the start state towards the goal state, we may make a wrong move, but
we can backtrack and undo the unwanted move. This only involves additional steps, so the
solution steps are recoverable.
➢ Lastly, consider the game of chess. If a wrong move is made, it can neither be ignored nor
recovered. The thing to do is to make the best use of the current situation and proceed. This
is an example of irrecoverable solution steps.
➢ Knowledge of these will help in determining the control structure.
o Ignorable problems can be solved using a simple control structure that never
backtracks.
o Recoverable problems can be solved by a slightly more complicated control strategy
that allows backtracking.
o Irrecoverable problems will need to be solved by a system that expends a great deal
of effort in making each decision, since each decision must be final.
3. Is the problem’s universe predictable?
➢ Problems can be classified into those with certain outcome (eight puzzle and water jug
problems) and those with uncertain outcome (playing cards).
➢ In certain – outcome problems, planning could be done to generate a sequence of
operators that guarantees to lead to a solution.
➢ Planning helps to avoid unwanted solution steps.
➢ For uncertain outcome problems, planning can at best generate a sequence of operators
that has a good probability of leading to a solution.
➢ Uncertain-outcome problems do not guarantee a solution, and solving them is often very
expensive since the number of solution paths to be explored increases exponentially
with the number of points at which the outcome cannot be predicted.
➢ Thus one of the hardest types of problems to solve is the irrecoverable, uncertain –
outcome problems (Ex:- Playing cards).
4. Is a good solution to the problem obvious without comparison to all other possible
solutions?
➢ There are two categories of problems - Any path problem and Best path problem.
➢ In any path problem, like the water jug and 8 puzzle problems, we are satisfied with the
solution, irrespective of the solution path taken.
➢ In the other category, not just any solution is acceptable; we want the best-path solution,
as in the travelling salesman problem, which is a shortest path problem.
➢ In any-path problems, by heuristic methods we obtain a solution and do not
explore alternatives.
➢ Any path problems can often be solved in a reasonable amount of time by using
heuristics that suggest good paths to explore.
➢ For the best-path problems all possible paths are explored using an exhaustive search
until the best path is obtained.
➢ Best path problems are computationally harder.
5. Is the desired solution a state of the world or a path to a state?
➢ Consider the problem of natural language processing.
➢ Finding a consistent interpretation for the sentence “The bank president ate a dish of
pasta salad with the fork”.
➢ We need to find the interpretation but not the record of the processing by which the
interpretation is found.
➢ Contrast this with the water jug problem.
➢ In the water jug problem, it is not sufficient to report that we have solved the problem; we
must also report the path we found to the state (2, 0). Thus the statement of a solution to
this problem must be a sequence of operations that produces the final state.
6. What is the role of knowledge?
➢ Though one could have unlimited computing power, the size of the knowledge base
available for solving the problem does matter in arriving at a good solution.
➢ Take, for example, the game of chess: just the rules for determining legal moves and a
simple control mechanism are sufficient, in principle, to arrive at a solution.
➢ But additional knowledge about good strategy and tactics could help to constrain the
search and speed up the execution of the program, making the solution realistic.
➢ Now consider the problem of predicting political trends. This would require an enormous
amount of knowledge even to be able to recognize a solution, let alone find the best one.
7. Does the task require interaction with a person?
The problems can again be categorized under two heads.
i. Solitary, in which the computer is given a problem description and produces
an answer, with no intermediate communication and no demand for an
explanation of the reasoning process. Simple theorem proving falls under this
category: given the basic rules and laws, the theorem can be proved, if a proof exists.
ii. Conversational, in which there is intermediate communication between a
person and the computer, either to provide additional assistance to the computer or
to provide additional information to the user, or both. Problems such as medical
diagnosis fall under this category, where people will be unwilling to accept the
verdict of the program if they cannot follow its reasoning.
Problem Classification
➢ When actual problems are examined from the point of view of all these questions, it becomes
apparent that there are several broad classes into which problems fall.


Issues in the design of search programs


1. The direction in which to conduct the search (forward versus backward reasoning). If the
search proceeds from the start state towards a goal state, it is a forward search; alternatively,
we can search backward from the goal.
2. How to select applicable rules (Matching). Production systems typically spend most of their
time looking for rules to apply. So, it is critical to have efficient procedures for matching
rules against states.
3. How to represent each node of the search process (knowledge representation problem).

Heuristic Search Techniques


• In order to solve many hard problems efficiently, it is often necessary to compromise the
requirements of mobility and systematicity and to construct a control structure that is no
longer guaranteed to find the best answer but will always find a very good answer.
• Usually very hard problems tend to have very large search spaces. Heuristics can be used to
limit search process.
• There are good general purpose heuristics that are useful in a wide variety of problem
domains.
• Special purpose heuristics exploit domain specific knowledge.
• For example, the nearest neighbour heuristic for the shortest path problem works by
selecting the locally superior alternative at each step.
• Applying nearest neighbor heuristics to Travelling Salesman Problem:
1. Arbitrarily select a starting city
2. To select the next city, look at all cities not yet visited and select the one closest
to the current city. Go to next step.
3. Repeat step 2 until all cities have been visited.
• This procedure executes in time proportional to N², where N is the number of cities to be
visited.
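• An illustrative Python sketch of the nearest neighbour procedure applied to the Travelling
Salesman Problem (my own names; the city coordinates are made-up data used only for
illustration):

```python
import math

# Hypothetical city coordinates used only for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4)}

def distance(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def nearest_neighbour_tour(start):
    """Step 1: start anywhere; step 2: repeatedly visit the closest unvisited city."""
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: distance(tour[-1], c))  # locally best choice
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour("A"))   # -> ['A', 'C', 'D', 'B']
```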

Heuristic Function
• Heuristic function maps from problem state descriptions to measures of desirability,
usually represented as numbers.
• Which aspects of the problem state are considered, how those aspects are evaluated, and
the weights given to individual aspects are chosen in such a way that the value of the
heuristic function at a given node in the search process gives as good an estimate as
possible of whether that node is on the desired path to a solution.
• Well-designed heuristic functions can play an important part in efficiently guiding a search
process toward a solution.
• Every search process can be viewed as a traversal of a directed graph, in which the nodes
represent problem states and the arcs represent relationships between states.

• The search process must find a path through this graph, starting at an initial state and
ending in one or more final states.
• Domain-specific knowledge must be added to improve search efficiency. Information about
the problem includes the nature of states, cost of transforming from one state to another,
and characteristics of the goals.
• This information can often be expressed in the form of heuristic evaluation function.
• In general, heuristic search improves the quality of the paths that are explored.
• Using good heuristics we can hope to get good solutions to hard problems such as the
traveling salesman problem in less than exponential time.
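• As an illustration (not from the notes), a heuristic function for the 8-puzzle can map a state to
the sum of the Manhattan distances of the tiles from their goal positions; here a lower value
indicates a state estimated to be closer to the goal (the convention later used for h' in A*). The
state representation matches the tuple-of-tuples sketch given earlier.

```python
GOAL = ((1, 2, 3),
        (4, 5, 6),
        (7, 8, 0))          # 0 denotes the blank

# Pre-compute where each tile belongs in the goal configuration.
goal_pos = {GOAL[r][c]: (r, c) for r in range(3) for c in range(3)}

def manhattan(state):
    """Sum of the horizontal + vertical distances of each tile from its goal cell."""
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:                     # the blank is not counted
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

state = ((1, 2, 3),
         (4, 0, 6),
         (7, 5, 8))
print(manhattan(state))   # -> 2: tiles 5 and 8 are each one move away
```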
Heuristic Search Techniques
I. Generate-and-Test
• Generate-and-test search algorithm is a very simple algorithm that guarantees to find a
solution if done systematically and there exists a solution.
Algorithm:
1. Generate a possible solution. For some problems, this means generating a particular
point in the problem space. For others it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint
of the chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise return to step 1.
• It is a depth first search procedure since complete solutions must be generated before
they can be tested.
• In its most systematic form, it is simply an exhaustive search of the problem space.
• It operates by generating solutions randomly.

II. Simple Hill Climbing


• Hill climbing is a variant of generate-and test in which feedback from the test procedure
is used to help the generator decide which direction to move in search space.
• The test function is augmented with a heuristic function that provides an estimate of
how close a given state is to the goal state.
• Hill climbing is often used when a good heuristic function is available for evaluating
states but when no other useful knowledge is available.
• The key difference between Simple Hill climbing and Generate-and-test is the use of
evaluation function as a way to inject task specific knowledge into the control process.
Algorithm:
1. Evaluate the initial state. If it is also goal state, then return it and quit. Otherwise
continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators left to be applied in
the current state:
a. Select an operator that has not yet been applied to the current state and
apply it to produce a new state.
b. Evaluate the new state
i. If it is the goal state, then return it and quit.
ii. If it is not a goal state but it is better than the current state, then make
it the current state.
iii. If it is not better than the current state, then continue in the loop.
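• An illustrative Python sketch of simple hill climbing (my own, not from the notes), maximizing
a heuristic evaluation over integer states whose only operators are "add 1" and "subtract 1";
the first successor that improves on the current state is accepted.

```python
def simple_hill_climbing(start, evaluate, operators, max_steps=1000):
    """Move to the first better successor found; stop when no operator improves the state."""
    current = start
    for _ in range(max_steps):
        improved = False
        for op in operators:                             # select an as-yet-untried operator
            new_state = op(current)
            if evaluate(new_state) > evaluate(current):  # better: make it the current state
                current = new_state
                improved = True
                break
        if not improved:                                 # no operator helps: local maximum reached
            return current
    return current

# Hypothetical objective with a single peak at x = 7.
evaluate = lambda x: -(x - 7) ** 2
operators = [lambda x: x + 1, lambda x: x - 1]
print(simple_hill_climbing(0, evaluate, operators))   # -> 7
```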

III. Steepest-Ascent Hill Climbing


• This is a variation of simple hill climbing which considers all the moves from the current
state and selects the best one as the next state.
• At each current state we select a transition, evaluate the resulting state, and if the
resulting state is an improvement we move there, otherwise we try a new transition
from where we were.
• We repeat this until we reach a goal state, or have no more transitions to try.
• The transitions explored can be selected at random, or according to some problem
specific heuristics.
Algorithm
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise,
continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to current
state:
a. Let S be a state such that any possible successor of the current state will be better
than S.
b. For each operator that applies to the current state do:
i. Apply the operator and generate a new state
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not,
compare it to S. If it is better, then set S to this state. If it is not better,
leave S alone.
c. If S is better than the current state, then set the current state to S.

• Hill Climbing has three well-known drawbacks:


i. Local Maxima: a local maximum is a state that is better than all its neighbors but
is not better than some other states further away.
ii. Plateau: a plateau is a flat area of the search space in which, a whole set of
neighboring states have the same values.
iii. Ridge: a ridge is a special kind of local maximum. It is an area of the search space that is
higher than the surrounding areas and that itself has a slope.


(Figures: Local Maxima, Plateau, Ridge)

• In each of the previous cases (local maxima, plateaus & ridge), the algorithm reaches a
point at which no progress is being made.
• A solution is,
i. Backtrack to some earlier node and try going in a different direction.
ii. Make a big jump to try to get into a new section of the search space.
iii. Move in several directions at once.

IV. Simulated Annealing (SA)


• Motivated by the physical annealing process.
• Material is heated and slowly cooled into a uniform structure. Simulated annealing mimics
this process.
• Compared to hill climbing the main difference is that SA allows downwards steps.
• Simulated annealing also differs from hill climbing in that a move is selected at random and
the algorithm then decides whether to accept it.
• To accept or not to accept?
o The law of thermodynamics states that at temperature, t, the probability of an increase
in energy of magnitude, 𝛿𝐸, is given by,
𝑃(𝛿𝐸) = 𝑒𝑥𝑝(−𝛿𝐸 /𝑘𝑡)
o Where k is a constant known as Boltzmann’s constant and it is incorporated into T.
o So the revised probability formula is,
𝑃′ (𝛿𝐸) = 𝑒𝑥𝑝(−𝛿𝐸 /𝑇)
o 𝛿𝐸 is the positive change in the objective function and T is the current temperature.
o The probability of accepting a worse state is a function of both the temperature of the
system and the change in the cost function.
o As the temperature decreases, the probability of accepting worse moves decreases.
o If t=0, no worse moves are accepted (i.e. hill climbing).
Algorithm
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise,
continue with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the
current state:
a. Select an operator that has not yet been applied to the current state and apply it
to produce a new state.
b. Evaluate the new state. Compute
𝛿𝐸 = (value of current) – (value of new state)
o If the new state is a goal state, then return it and quit.
o If it is not a goal state but is better than the current state, then make it
the current state. Also set BEST-SO-FAR to this new state.
o If it is not better than the current state, then make it the current state
with probability 𝑃′ as defined above. This step is usually implemented by
generating a random number between [0, 1]. If the number is less than
𝑃′ , then the move is accepted otherwise do nothing.
c. Revise T as necessary according to the annealing schedule.
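• An illustrative Python sketch of the algorithm (my own; the objective function, operators and
annealing schedule are made-up assumptions). It keeps the convention above that higher
evaluation values are better, so 𝛿𝐸 = value(current) – value(new state) is positive for a
worse move.

```python
import math
import random

def simulated_annealing(start, evaluate, operators, schedule, steps=10000):
    """Maximize evaluate(); worse moves are accepted with probability exp(-dE / T)."""
    current = start
    best_so_far = start
    for step in range(steps):
        T = schedule(step)                            # revise T per the annealing schedule
        if T <= 0:
            break
        op = random.choice(operators)                 # a move is selected at random
        new_state = op(current)
        dE = evaluate(current) - evaluate(new_state)  # positive if the new state is worse
        if dE <= 0:                                   # better (or equal): always accept
            current = new_state
            if evaluate(current) > evaluate(best_so_far):
                best_so_far = current
        elif random.random() < math.exp(-dE / T):     # worse: accept with probability P'
            current = new_state
    return best_so_far

# Hypothetical bumpy objective with many local maxima; simple geometric cooling schedule.
evaluate = lambda x: 10 * math.sin(x / 2.0) - 0.2 * abs(x - 30)
operators = [lambda x: x + 1, lambda x: x - 1]
schedule = lambda step: 10.0 * (0.999 ** step)
print(simulated_annealing(0, evaluate, operators, schedule))  # result varies between runs
```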

V. Best First Search


• DFS is good because it allows a solution to be found without expanding all competing
branches. BFS is good because it does not get trapped on dead end paths.
• Best first search combines the advantages of both DFS and BFS into a single method.
• One way of combining BFS and DFS is to follow a single path at a time, but switch paths
whenever some competing path looks more promising than the current one does.
• At each step of the best first search process, we select the most promising of the nodes we
have generated so far.
• This is done by applying an appropriate heuristic function to each of them.
• We then expand the chosen node by using the rules to generate its successors.
• If one of them is a solution, we can quit. If not, all those new nodes are added to the set of
nodes generated so far.
OR Graphs
• It is sometimes important to search graphs so that duplicate paths will not be pursued.
• An algorithm to do this will operate by searching a directed graph in which each node
represents a point in problem space.
• Each node will contain:
o Description of problem state it represents
o Indication of how promising it is
o Parent link that points back to the best node from which it came
o List of nodes that were generated from it
• Parent link will make it possible to recover the path to the goal, once the goal is found.
• The list of successors will make it possible, if a better path is found to an already existing
node, to propagate the improvement down to its successors.
• This is called an OR graph, since each of its branches represents an alternative problem-solving
path.
Implementation of OR graphs
We need two lists of nodes:
• OPEN – nodes that have been generated and have had the heuristic function applied to
them but which have not yet been examined. OPEN is actually a priority queue in which
the elements with the highest priority are those with the most promising value of the
heuristic function.
• CLOSED – nodes that have already been examined. We need to keep these nodes in
memory if we want to search a graph rather than a tree, since whenever a new node is
generated, we need to check whether it has been generated before.

Algorithm: Best First Search


1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a. Pick the best node on OPEN
b. Generate its successors
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record
its parent.
ii. If it has been generated before, change the parent if this new path is
better than the previous one. In that case, update the cost of getting to
this node and to any successors that this node may already have.
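• An illustrative Python sketch of the algorithm (my own names; the graph and heuristic values
are made up). OPEN is a priority queue keyed on the heuristic value h', and, as a simplification,
already-expanded nodes are skipped rather than re-parented.

```python
import heapq

def best_first_search(start, goal_test, successors, h):
    """Always expand the OPEN node with the most promising (lowest) heuristic value."""
    open_list = [(h(start), start, [start])]      # OPEN as a priority queue
    closed = set()                                # nodes already examined
    while open_list:
        _, state, path = heapq.heappop(open_list) # pick the best node on OPEN
        if goal_test(state):
            return path
        if state in closed:                       # already expanded earlier
            continue
        closed.add(state)
        for succ in successors(state):            # generate its successors
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ, path + [succ]))
    return None

# Hypothetical graph and heuristic estimates of distance to the goal G.
graph = {"S": ["A", "B"], "A": ["C", "G"], "B": ["C"], "C": ["G"], "G": []}
h_values = {"S": 6, "A": 2, "B": 4, "C": 3, "G": 0}
print(best_first_search("S", lambda s: s == "G",
                        lambda s: graph[s], lambda s: h_values[s]))
# -> ['S', 'A', 'G']
```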
Best First Search example

The A* Algorithm
• Best First Search is a simplification of A* Algorithm.
• This algorithm uses the following functions:

1. f’: Heuristic function that estimates the merit of each node we generate. f’ = g + h’.
f’ represents an estimate of the cost of getting from the initial state to a goal state
along the path that generated the current node.
2. g: The function g is a measure of the cost of getting from initial state to the current
node.
3. h’: The function h’ is an estimate of the additional cost of getting from the current
node to a goal state.
• The algorithm also uses the lists: OPEN and CLOSED
Algorithm: A*
1. Start with OPEN containing only initial node. Set that node’s g value to 0, its h’ value to
whatever it is, and its f’ value to h’+0 or h’. Set CLOSED to empty list.
2. Until a goal node is found, repeat the following procedure: If there are no nodes on
OPEN, report failure. Otherwise select the node on OPEN with the lowest f’ value. Call it
BESTNODE. Remove it from OPEN. Place it in CLOSED. See if the BESTNODE is a goal
state. If so exit and report a solution. Otherwise, generate the successors of BESTNODE
but do not set the BESTNODE to point to them yet. For each of the SUCCESSOR, do the
following:
a. Set SUCCESSOR to point back to BESTNODE. These backwards links will make it
possible to recover the path once a solution is found.
b. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to
SUCCESSOR
c. See if SUCCESSOR is the same as any node on OPEN. If so call the node OLD.
i. Check whether it is cheaper to get to OLD via its current parent or to
SUCCESSOR via BESTNODE by comparing their g values.
ii. If OLD is cheaper, then do nothing. If SUCCESSOR is cheaper then reset
OLD’s parent link to point to BESTNODE.
iii. Record the new cheaper path in g(OLD) and update f ‘(OLD).
d. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on
CLOSED OLD and add OLD to the list of BESTNODE’s successors.
e. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN
and add it to the list of BESTNODE’s successors. Compute f’(SUCCESSOR) =
g(SUCCESSOR) + h’(SUCCESSOR).
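• An illustrative Python sketch of A* (my own; it uses a priority queue keyed on f’ = g + h’ and a
simplified re-opening test instead of the explicit OPEN/CLOSED re-parenting described above;
the weighted graph and h’ values are made up).

```python
import heapq

def a_star(start, goal_test, successors, h):
    """successors(state) yields (next_state, step_cost); h estimates the cost to a goal."""
    open_list = [(h(start), 0, start, [start])]      # entries are (f', g, state, path)
    best_g = {start: 0}                              # cheapest known cost to each node
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):      # an outdated, more expensive entry
            continue
        for succ, cost in successors(state):
            new_g = g + cost                         # g(SUCCESSOR) = g(BESTNODE) + step cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g                 # record the new, cheaper path
                heapq.heappush(open_list, (new_g + h(succ), new_g, succ, path + [succ]))
    return None

# Hypothetical weighted graph and admissible heuristic estimates.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("C", 5), ("G", 12)],
         "B": [("C", 2)], "C": [("G", 3)], "G": []}
h_values = {"S": 7, "A": 6, "B": 4, "C": 2, "G": 0}
print(a_star("S", lambda s: s == "G",
             lambda s: graph[s], lambda s: h_values[s]))
# -> (['S', 'A', 'B', 'C', 'G'], 8)
```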

Observations about A*
o Role of the g function: this lets us choose which node to expand next on the basis not
only of how good the node itself looks, but also of how good the path to the
node was.
o Role of the h’ function: h’ estimates the distance of a node from the goal. If h’ is a
perfect estimator of h, then A* will converge immediately to the goal with no search.
Admissibility of A*

o A heuristic function h’(n) is said to be admissible if it never overestimates the cost of
getting to a goal state.
o i.e. if the true minimum cost of getting from node n to a goal state is C, then h’ must
satisfy: h’(n) ≤ C
o If h’ is a perfect estimator of h, then A* will converge immediately to the goal state with
no search.
o If h’ never overestimates h, then A* algorithm is guaranteed to find an optimal path if
one exists.
VI. Problem Reduction
AND-OR graphs
o AND-OR graph (or tree) is useful for representing the solution of problems that
can be solved by decomposing them into a set of smaller problems, all of which
must then be solved.
o This decomposition or reduction generates arcs that we call AND arcs.
o One AND arc may point to any number of successor nodes, all of which must
then be solved in order for the arc to point to a solution.
o In order to find solution in an AND-OR graph we need an algorithm similar to
best –first search but with the ability to handle the AND arcs appropriately.
o We define a constant FUTILITY: if the estimated cost of a solution becomes greater than
the value of FUTILITY, then we abandon the search. FUTILITY should be chosen to
correspond to a threshold above which any solution is too expensive to be practical.
o In following figure AND arcs are indicated with a line connecting all the
components.

The AO* Algorithm


• Rather than the two lists, OPEN and CLOSED, that were used in the A* algorithm, the
AO* algorithm will use a single structure GRAPH, representing the part of the search
graph that has been explicitly generated so far.
• Each node in the graph will point both down to its immediate successors and up to
its immediate predecessors.
• Each node in the graph will also have associated with it an h' value, an estimate of
the cost of a path from itself to a set of solution nodes.
• We will not store g (the cost of getting from the start node to the current node) as
we did in the A* algorithm.
• Such a value is not necessary because the top-down traversal of the best-known path
guarantees that only nodes that are on the best path will ever be considered
for expansion.
Algorithm: AO*
1. Let GRAPH consist only of the node representing the initial state. Call this node INIT.
Compute h’(INIT).
2. Until INIT is labeled SOLVED or until INIT's h' value becomes greater than FUTILITY,
repeat the following procedure:
a. Trace the labeled arcs from INIT and select for expansion one of the as yet
unexpanded nodes that occurs on this path. Call the selected node NODE.
b. Generate the successors of NODE. If there are none, then assign FUTILITY as
the h' value of NODE. This is equivalent to saying that NODE is not solvable. If
there are successors, then for each one (called SUCCESSOR) that is not also
an ancestor of NODE do the following:
i. Add SUCCESSOR to GRAPH
ii. If SUCCESSOR is a terminal node, label it SOLVED and assign it an h'
value of 0
iii. If SUCCESSOR is not a terminal node, compute its h' value
c. Propagate the newly discovered information up the graph by doing the
following: Let S be a set of nodes that have been labeled SOLVED or whose h'
values have been changed and so need to have values propagated back to
their parents. Initialize S to NODE. Until S is empty, repeat the, following
procedure:
i. If possible, select from S a node none of whose descendants in GRAPH
occurs in S. If there is no such node, select any node from S. Call this
node CURRENT, and remove it from S.
ii. Compute the cost of each of the arcs emerging from CURRENT. The
cost of each arc is equal to the sum of the h' values of each of the
nodes at the end of the arc plus whatever the cost of the arc itself is.
Assign as CURRENT'S new h' value the minimum of the costs just
computed for the arcs emerging from it.
iii. Mark the best path out of CURRENT by marking the arc that had the
minimum cost as computed in the previous step.
iv. Mark CURRENT SOLVED if all of the nodes connected to it through the
new labeled arc have been labeled SOLVED.
v. If CURRENT has been labeled SOLVED or if the cost of CURRENT was
just changed, then its new status must be propagated back up the
graph. So add all of the ancestors of CURRENT to S.

VII. Constraint Satisfaction


• Constraint satisfaction is a search procedure that operates in a space of constraint sets. The
initial state contains the constraints that are originally given in the problem description.
• A goal state is any state that has been constrained “enough” where “enough” must be
defined for each problem.
• For example, in cryptarithmetic problems, enough means that each letter has been assigned
a unique numeric value.
• Constraint Satisfaction problems in AI have goal of discovering some problem state that
satisfies a given set of constraints.
• Design tasks can be viewed as constraint satisfaction problems in which a design must be
created within fixed limits on time, cost, and materials.
• Constraint Satisfaction is a two-step process:
1. First constraints are discovered and propagated as far as possible throughout the
system.
2. Then if there is still not a solution, search begins. A guess about something is made
and added as a new constraint.
Example: Cryptarithmetic Problem
Constraints:
• No two letters have the same value
• The sums of the digits must be as shown in the problem
Goal State:
• All letters have been assigned a digit in such a way that all the initial constraints are satisfied
Input State


• The solution process proceeds in cycles. At each cycle, two significant things are done:
1. Constraints are propagated by using rules that correspond to the properties of
arithmetic.
2. A value is guessed for some letter whose value is not yet determined.
Solution:

Algorithm: Constraint Satisfaction


1. Propagate available constraints. To do this, first set OPEN to the set of all objects that must
have values assigned to them in a complete solution. Then do the following until an
inconsistency is detected or until OPEN is empty:
a. Select an object OB from OPEN. Strengthen as much as possible the set of
constraints that apply to OB.
b. If this set is different from the set that was assigned the last time OB was
examined or if this is the first time OB has been examined, then add to OPEN
all objects that share any constraints with OB.
c. Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report
the solution.
3. If the union of the constraints discovered above defines a contradiction, then report
failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order
to proceed. To do this loop until a solution is found or all possible solutions have been
eliminated:
a. Select an object whose value is not yet determined and select a way of
strengthening the constraints on that object.
b. Recursively invoke constraint satisfaction with the current set of constraints
augmented by strengthening constraint just selected.
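• The constraint-propagation algorithm above is involved; as a simpler illustration of the
constraints and the goal test (each letter bound to a unique digit so that the sum holds), here is
a brute-force Python sketch for the classic SEND + MORE = MONEY puzzle (my own example;
it searches digit assignments directly instead of propagating constraints).

```python
from itertools import permutations

def solve_cryptarithm(words, result):
    """Try digit assignments until the arithmetic constraint is satisfied (brute force: slow but simple)."""
    letters = sorted(set("".join(words) + result))
    assert len(letters) <= 10, "at most ten distinct letters"
    leading = {w[0] for w in words + [result]}            # no number may have a leading zero
    for digits in permutations(range(10), len(letters)):
        assign = dict(zip(letters, digits))               # no two letters share a value
        if any(assign[l] == 0 for l in leading):
            continue
        value = lambda w: int("".join(str(assign[ch]) for ch in w))
        if sum(value(w) for w in words) == value(result): # the sum must be as shown
            return assign
    return None

print(solve_cryptarithm(["SEND", "MORE"], "MONEY"))
# -> {'D': 7, 'E': 5, 'M': 1, 'N': 6, 'O': 0, 'R': 8, 'S': 9, 'Y': 2}
```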

VIII. Means-Ends Analysis


• The collection of strategies presented so far can reason either forward or backward, but for
a given problem, one direction or the other must be chosen.
• Often, however, a mixture of the two directions is appropriate. Such a mixed strategy makes it
possible to solve the major parts of a problem first and then go back and solve the small
problems that arise in “gluing” the big pieces together.
• The technique of Means-Ends Analysis (MEA) allows us to do that.
• MEA process centers around the detection of differences between the current state and
the goal state.
• Once such a difference is isolated, an operator that can reduce the difference must be
found.
• If the operator cannot be applied to the current state, we set up a sub-problem of
getting to a state in which it can be applied.
• The kind of backward chaining in which operators are selected and then sub-goals are
set up to establish the preconditions of the operators is called operator sub-goaling.
Algorithm: Means-Ends Analysis
1. Compare CURRENT to GOAL. If there are no differences between them then return.
2. Otherwise, select the most important difference and reduce it by doing the following
until success or failure is signaled:
a. Select an as yet untried operator O that is applicable to the current difference.
If there are no such operators, then signal failure.
b. Attempt to apply O to CURRENT. Generate descriptions of two states: O-
START, a state in which O’s preconditions are satisfied and O-RESULT, the
state that would result if O were applied in O-START.
c. If
(FIRST-PART ← MEA(CURRENT, O-START))
and
(LAST-PART ← MEA(O-RESULT, GOAL))
are successful, then signal success and return the result of concatenating
FIRST-PART, O, and LAST-PART.
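• An illustrative Python sketch of the MEA loop (my own, not the notes' formulation): states are
sets of facts, operators have preconditions and add/delete effects, and the recursion mirrors
steps 2b–2c by solving the FIRST-PART and LAST-PART sub-problems. The door/key domain is a
made-up example.

```python
from collections import namedtuple

# An operator has preconditions that must hold, facts it adds and facts it deletes.
Operator = namedtuple("Operator", ["name", "preconds", "adds", "deletes"])

def apply_op(state, op):
    return (state - op.deletes) | op.adds

def mea(current, goal, operators, depth=10):
    """Return a list of operator names transforming `current` (a frozenset of facts)
    into a superset of `goal`, or None if no plan is found within the depth bound."""
    if goal <= current:
        return []                                  # no difference between CURRENT and GOAL
    if depth == 0:
        return None
    for diff in goal - current:                    # select a difference to reduce
        for op in operators:
            if diff not in op.adds:                # the operator must reduce this difference
                continue
            # FIRST-PART: reach a state (O-START) where op's preconditions are satisfied.
            first = mea(current, op.preconds, operators, depth - 1)
            if first is None:
                continue
            mid = current
            for name in first:                     # replay FIRST-PART, then apply op
                mid = apply_op(mid, next(o for o in operators if o.name == name))
            mid = apply_op(mid, op)
            # LAST-PART: from O-RESULT, achieve the remaining goal.
            last = mea(mid, goal, operators, depth - 1)
            if last is not None:
                return first + [op.name] + last    # concatenate FIRST-PART, O, LAST-PART
    return None

operators = [
    Operator("walk-to-door", frozenset(), frozenset({"at-door"}), frozenset()),
    Operator("pick-up-key", frozenset({"at-door"}), frozenset({"have-key"}), frozenset()),
    Operator("open-door", frozenset({"have-key", "at-door"}), frozenset({"door-open"}), frozenset()),
]
print(mea(frozenset(), frozenset({"door-open"}), operators))
# -> ['walk-to-door', 'pick-up-key', 'open-door']
```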
