AI Unit-II Notes
UNIT – II
Searching: Searching for solutions,
uninformed search strategies – Breadth first search, Depth limited search.
Search with partial information (Heuristic search)
Greedy best first search,
A* search,
Memory -bounded heuristic search
Local search algorithms- Hill climbing, Simulated annealing search, Local beam
search, Genetic algorithms
Introduction to Problem Solving, General problem solving
Informed vs Uninformed Search:
Parameter                | Informed Search                                  | Uninformed Search
Utilizing Knowledge      | Uses knowledge during the process of searching.  | Does not use any knowledge during the process of searching.
Speed                    | Finding the solution is quicker.                 | Finding the solution is comparatively much slower.
Cost Incurred            | The expenses are much lower.                     | The expenses are comparatively higher.
Length of Implementation | The implementation is shorter.                   | The implementation is lengthier.
Breadth First Search (BFS)
Step 1: Initially, fringe contains only one node corresponding to the source state A.
FRINGE: A
Step 2: A is removed from fringe. The node is expanded, and its children B and C are
generated. They are placed at the back of fringe.
FRINGE: B C
Step 3: Node B is removed from fringe and is expanded. Its children D, E
are generated and put at the back of fringe.
FRINGE: C D E
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added
to the back of fringe.
FRINGE: D E D G
Step 5: Node D is removed from fringe. Its children C and F are generated and added to
the back of fringe.
FRINGE: E D G C F
Step 6: Node E is removed from fringe. It has no children.
FRINGE: D G C F
Step 7: Node D is removed from fringe and is expanded. Its children B and F are added to the back of fringe.
FRINGE: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm
returns the path A C G by following the parent pointers of the node corresponding to G.
The algorithm terminates.
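The trace above can be reproduced in code. Below is a minimal Python sketch of breadth-first search over an explicit graph; the adjacency list is an assumption inferred from the trace (an undirected graph with edges A-B, A-C, B-D, B-E, C-D, C-G, D-F). Unlike the tree-search trace above, where D appears on fringe twice, this sketch does graph search with a seen-check, and it reaches the same path A C G.

from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns the path from start to goal, or None."""
    fringe = deque([start])              # FIFO queue, the FRINGE of the trace
    parent = {start: None}               # parent pointers for path reconstruction
    while fringe:
        node = fringe.popleft()          # remove the node at the front of fringe
        if node == goal:
            path = []
            while node is not None:      # follow parent pointers back to the source
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for child in graph[node]:
            if child not in parent:      # graph search: skip already-seen states
                parent[child] = node
                fringe.append(child)     # children go to the back of fringe
    return None

# Adjacency list inferred from the trace above (an illustrative assumption).
graph = {'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'D', 'G'],
         'D': ['B', 'C', 'F'], 'E': ['B'], 'F': ['D'], 'G': ['C']}
print(bfs(graph, 'A', 'G'))              # ['A', 'C', 'G']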
Breadth first search is:
One of the simplest search strategies
Complete. If there is a solution, BFS is guaranteed to find it.
If there are multiple solutions, then a minimal (shallowest) solution will be found.
The algorithm is optimal (i.e., admissible) if all operators have the same cost; in general, breadth first search finds the solution with the shortest path length (fewest steps), which is not necessarily the cheapest.
Time complexity: O(b^d)
Space complexity: O(b^d)
Optimality: Yes (when all step costs are equal)
b – branching factor (maximum number of successors of any node), d – depth of the shallowest goal node, m – maximum length of any path in the search space.
Depth Limited Search (DLS)
Completeness: The DLS algorithm is complete if the shallowest solution lies within the depth limit ℓ.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
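A minimal recursive sketch of depth-limited search in Python, reusing the example graph from the BFS sketch above (the function name and data are illustrative assumptions):

def dls(graph, node, goal, limit):
    """Depth-limited DFS: returns a path to goal, or None if cut off or not found."""
    if node == goal:
        return [node]
    if limit == 0:                        # depth limit reached: cut this branch off
        return None
    for child in graph[node]:
        path = dls(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

graph = {'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'D', 'G'],
         'D': ['B', 'C', 'F'], 'E': ['B'], 'F': ['D'], 'G': ['C']}
print(dls(graph, 'A', 'G', 2))            # ['A', 'C', 'G']: the goal is within the limit
print(dls(graph, 'A', 'G', 1))            # None: the limit cuts the search off too early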
SEARCHING WITH PARTIAL INFORMATION
So far we have assumed a fully observable, deterministic environment, in which:
The agent can calculate exactly which state results from any sequence of actions and always knows which state it is in.
Its percepts provide no new information after each action.
What happens when knowledge of the states or actions is incomplete? We find that
different types of incompleteness lead to three distinct problem types.
1. Sensorless problems (also called conformant problems):
If the agent has no sensors at all, then (as far as it knows) it could be in one of several
possible initial states, and each action might therefore lead to one of several
possible successor states.
2. Contingency problems: If the environment is partially observable or if actions are uncertain, then the agent's percepts provide new information after each action. Each possible percept defines a contingency that must be planned for. A problem is called adversarial if the uncertainty is caused by the actions of another agent.
3. Exploration problems:
When the states and actions of the environment are unknown, the agent must act to
discover them.
Exploration problems can be viewed as an extreme case of contingency problems.
In this vacuum world environment example, the state space has 8 states, as shown in Fig 3.20. There are three actions (Left, Right, and Suck), and the goal is to clean up all the dirt (states 7 and 8). If the environment is observable, deterministic, and completely known, then the problem is trivially solvable by any of the algorithms we have described.
For example, if the initial state is 5, then the action sequence [Right, Suck] will reach a goal state, 8.
Sensorless problems :
Suppose that the vacuum agent knows all the effects of its actions, but has no sensors.
Then it knows only that its initial state is one of the set {1,2,3,4,5,6,7,8}.
One might suppose that the agent's predicament is hopeless, but in fact it can do quite
well.
Because it knows what its actions do, it can, for example, calculate that the action Right will cause it to be in one of the states {2,4,6,8}, and the action sequence [Right, Suck] will always end up in one of the states {4,8}.
Finally, the sequence [Right, Suck, Left, Suck] is guaranteed to reach the goal state 7 no matter what the start state is: the agent coerces the world into a known state.
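This belief-state reasoning can be checked mechanically. The following Python sketch is a hypothetical model of the 8-state vacuum world; the transition table is reconstructed here from the standard numbering of Fig 3.20 (odd states have the agent in the left square, even states in the right; 7 and 8 are the all-clean states), so treat it as an assumption for illustration.

# Transition table: action -> {state: successor state}.
T = {
    'Right': {1: 2, 2: 2, 3: 4, 4: 4, 5: 6, 6: 6, 7: 8, 8: 8},
    'Left':  {1: 1, 2: 1, 3: 3, 4: 3, 5: 5, 6: 5, 7: 7, 8: 7},
    'Suck':  {1: 5, 2: 4, 3: 7, 4: 4, 5: 5, 6: 8, 7: 7, 8: 8},
}

def predict(belief, action):
    """Sensorless belief-state update: apply the action to every possible state."""
    return {T[action][s] for s in belief}

belief = {1, 2, 3, 4, 5, 6, 7, 8}        # no sensors: the agent could be anywhere
for action in ['Right', 'Suck', 'Left', 'Suck']:
    belief = predict(belief, action)
    print(action, sorted(belief))
# Right -> [2, 4, 6, 8], Suck -> [4, 8], Left -> [3, 7], Suck -> [7]:
# the sequence coerces the world into state 7 from any start state.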
Greedy best first search
Greedy best-first search tries to expand the node that is closest to the goal, on the
grounds that this is likely to lead to a solution quickly.
Thus, it evaluates nodes by using just the heuristic function; that is,
f(n) = h(n).
This works for route-finding problems in Romania; we use the straightline distance
heuristic, which we will call hSLD .
If the goal is Bucharest, we need to know the straight-line distances to Bucharest,
which are shown in Figure 3.22.
For example, hSLD (In(Arad)) = 366.
Notice that the values of hSLD cannot be computed from the problem description
itself.
Moreover, it takes a certain amount of experience to know that hSLD is correlated
with actual road distances and is, therefore, a useful heuristic.
Figure 3.23 shows the progress of a greedy best-first search using hSLD to find a path
from Arad to Bucharest.
The first node to be expanded from Arad will be Sibiu because it is closer to Bucharest
than either Zerind or Timisoara. The next node to be expanded will be Fagaras because
it is closest.
Fagaras in turn generates Bucharest, which is the goal.
For this particular problem, greedy best-first search using hSLD finds a solution without ever expanding a node that is not on the solution path; hence, its search cost is minimal. The solution it finds is not optimal, however: the route via Sibiu and Fagaras is longer than the route through Rimnicu Vilcea and Pitesti.
Greedy best-first tree search is also incomplete even in a finite state space, much like depth-first search.
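A minimal Python sketch of greedy best-first graph search, ordering the frontier by h(n) alone. The road-map fragment and hSLD values below are taken from the standard Romania example (only the cities needed for the Arad-Bucharest run are included):

import heapq

# Straight-line distances to Bucharest (hSLD).
h_sld = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Oradea': 380, 'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100,
         'Bucharest': 0}

# Road map fragment: city -> {neighbor: road distance}.
romania = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti': {'Rimnicu': 97, 'Bucharest': 101},
    'Zerind': {'Arad': 75}, 'Timisoara': {'Arad': 118},
    'Oradea': {'Sibiu': 151}, 'Bucharest': {},
}

def greedy_best_first(graph, h, start, goal):
    """Expand the node that looks closest to the goal: f(n) = h(n)."""
    frontier = [(h[start], start, [start])]    # priority queue ordered by h
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for succ in graph[node]:
            if succ not in explored:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

print(greedy_best_first(romania, h_sld, 'Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- the expansion order of Figure 3.23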
A* search: Minimizing the total estimated solution cost
The most widely known form of best-first search is called A∗ search (pronounced "A-star search").
It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal:
f(n) = g(n) + h(n)
Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated
cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the
cheapest solution through n .
Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the
node with the lowest value of g(n) + h(n).
A∗ search is both complete and optimal.
The algorithm is identical to UNIFORM-COST-SEARCH except that A∗ uses g + h instead of g.
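The same machinery gives A∗ by ordering the frontier on f(n) = g(n) + h(n) instead of h(n) alone. A sketch reusing heapq, romania, and h_sld from the greedy best-first example above:

def a_star(graph, h, start, goal):
    """Expand the node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {}                                  # cheapest g found so far per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # a cheaper path reached node first
        best_g[node] = g
        for succ, step in graph[node].items():
            g2 = g + step                        # path cost g(n') = g(n) + c(n, a, n')
            heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))
    return None

print(a_star(romania, h_sld, 'Arad', 'Bucharest'))
# (418, ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']) -- the cheapest route,
# unlike the longer Fagaras route that greedy best-first search returned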
Conditions for optimality: Admissibility and consistency
The first condition we require for optimality is that h(n) be an admissible heuristic.
An admissible heuristic is one that never overestimates the cost to reach the goal.
Because g(n) is the actual cost to reach n along the current path, and f(n) = g(n) + h(n), we have as an immediate consequence that f(n) never overestimates the true cost of a solution along the current path through n.
Admissible heuristics are by nature optimistic because they think the cost of solving
the problem is less than it actually is.
An obvious example of an admissible heuristic is the straight-line distance hSLD
that we used in getting to Bucharest.
Straight-line distance is admissible because the shortest path between any two
points is a straight line, so the straight line cannot be an overestimate.
A second, slightly stronger condition called consistency (or sometimes
monotonicity) is required only for applications of A∗ to graph search.
A heuristic h(n) is consistent if, for every node n and every successor n’ of n
generated by any action a, the estimated cost of reaching the goal from n is no
greater than the step cost of getting to n’ plus the estimated cost of reaching the
goal from n’
h(n) ≤ c(n, a, n’ ) + h(n’ ) .
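Both conditions can be tested directly on map data. The short sketch below (reusing romania and h_sld from the earlier examples) checks the consistency inequality h(n) ≤ c(n, a, n') + h(n') on every edge; since the map is only a fragment, this is an illustrative check rather than a proof:

def is_consistent(graph, h):
    """Verify h(n) <= c(n, a, n') + h(n') for every edge (n, n') in the graph."""
    return all(h[n] <= cost + h[n2]
               for n in graph
               for n2, cost in graph[n].items())

print(is_consistent(romania, h_sld))   # True: hSLD obeys the triangle inequality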
Optimality of A*
A∗ has the following properties:
The tree-search version of A∗ is optimal if h(n) is admissible, while the graph-search version is optimal if h(n) is consistent.
Among optimal algorithms of this type—algorithms that extend search
paths from the root and use the same heuristic information—A∗ is
optimally efficient for any given consistent heuristic.
Memory-bounded heuristic search
The simplest way to reduce memory requirements for A∗ is to adapt the idea of iterative
deepening to the heuristic search context, resulting in the iterative-deepening A∗ (IDA∗)
Algorithm.
Two other memory-bounded algorithms are recursive best-first search (RBFS) and memory-bounded A∗ (MA∗).
Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to mimic the operation of standard best-first search, but using only linear space.
Its structure is similar to that of a recursive depth-first search, but rather than continuing indefinitely down the current path, it uses the f_limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node.
If the current node exceeds this limit, the recursion unwinds back to the alternative path. As
the recursion unwinds, RBFS replaces the f-value of each node along the path with a backed-up
value—the best f-value of its children. In this way, RBFS remembers the f-value of the best leaf
in the forgotten subtree.
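A compact Python sketch of RBFS along the lines described above: linear space, an f_limit threaded through the recursion, and backed-up f-values recorded for forgotten subtrees. It reuses romania and h_sld from the earlier examples; the variable names are illustrative.

import math

def rbfs(graph, h, node, goal, g, f_node, f_limit):
    """Recursive best-first search: returns (path or None, backed-up f-value)."""
    if node == goal:
        return [node], f_node
    succs = []
    for s, step in graph[node].items():
        g2 = g + step
        # A child's f is at least its parent's backed-up f (path-max).
        succs.append([max(g2 + h[s], f_node), g2, s])
    if not succs:
        return None, math.inf
    while True:
        succs.sort()                         # best (lowest f) successor first
        best_f, best_g, best = succs[0]
        if best_f > f_limit:                 # best option is worse than some
            return None, best_f              # ancestor's alternative: unwind
        alternative = succs[1][0] if len(succs) > 1 else math.inf
        result, succs[0][0] = rbfs(graph, h, best, goal, best_g,
                                   best_f, min(f_limit, alternative))
        if result is not None:
            return [node] + result, succs[0][0]

path, _ = rbfs(romania, h_sld, 'Arad', 'Bucharest', 0, h_sld['Arad'], math.inf)
print(path)   # ['Arad', 'Sibiu', 'Rimnicu', 'Pitesti', 'Bucharest']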
Local Search Algorithms: Hill Climbing Algorithm
Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a peak value where no neighbor
has a higher value.
It is also called greedy local search as it only looks to its good immediate neighbor state
and not beyond that.
A node of hill climbing algorithm has two components which are state and value.
In this algorithm, we don't need to maintain and handle the search tree or graph as it
only keeps a single current state.
We will assume we are trying to maximize a function. That is, we are trying to find a
point in the search space that is better than all the others. And by "better" we mean
that the evaluation is higher. We might also say that the solution is of better quality than
all the others.
The idea behind Hill climbing is as follows.
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
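A minimal Python sketch of these steps in their steepest-ascent form (always move to the best neighbor). The objective function and neighborhood are toy assumptions: we maximize f(x) = -(x - 3)^2 over the integers, whose single peak is at x = 3.

import random

def hill_climb(f, neighbors, start):
    """Steepest-ascent hill climbing: move to the best neighbor until none is better."""
    current = start
    while True:
        best = max(neighbors(current), key=f)   # the best-quality neighbor
        if f(best) <= f(current):               # no neighbor is better: a peak
            return current
        current = best

f = lambda x: -(x - 3) ** 2                     # toy objective, peak at x = 3
neighbors = lambda x: [x - 1, x + 1]
start = random.randint(-10, 10)                 # step 1: pick a random point
print(hill_climb(f, neighbors, start))          # 3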
Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:
Generate and Test variant: Hill Climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
Greedy approach: Hill-climbing algorithm search moves in the direction which
optimizes the cost.
No backtracking: It does not backtrack the search space, as it does not remember
the previous states.
State-space Diagram for Hill Climbing: (figure omitted)
Algorithm for Simple Hill Climbing:
Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state:
a. If it is goal state, then return success and quit.
b. Else if it is better than the current state then assign new state as a
current state.
c. Else if it is not better than the current state, then return to step 2.
Step 5: Exit.
Types of Hill Climbing Algorithm:
Simple hill Climbing:
Steepest-Ascent hill-climbing:
Stochastic hill Climbing:
1. Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It only
evaluates the neighbor node state at a time and selects the first one which optimizes
current cost and set it as a current state. It only checks it's one successor state, and if it
finds better than the current state, then move else be in the same state. This algorithm has
the following features:
Less time consuming
Less optimal solution and the solution is not guaranteed
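For contrast with the steepest-ascent sketch earlier, here is a first-choice version matching this description: it accepts the first neighbor found to be better than the current state instead of examining all of them (reusing the toy f and neighbors defined above):

def simple_hill_climb(f, neighbors, start):
    """Simple hill climbing: move to the first neighbor that improves on the current state."""
    current = start
    improved = True
    while improved:
        improved = False
        for n in neighbors(current):
            if f(n) > f(current):       # take the first improving neighbor...
                current = n
                improved = True
                break                   # ...without checking the remaining ones
    return current

print(simple_hill_climb(f, neighbors, -7))   # 3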
2. Steepest-Ascent Hill Climbing:
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state. This algorithm consumes more time, as it searches for multiple neighbors.
3. Stochastic Hill Climbing:
Stochastic hill climbing does not examine all neighbors before moving. Instead, it selects one neighbor node at random and decides whether to move to it or to examine another state.
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there exists another state which is higher still.
Solution: Backtracking can help with the local maximum: maintain a list of promising paths so that the algorithm can backtrack and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find a best direction in which to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for a plateau is to take big steps (or very little steps) while searching. Randomly select a state which is far away from the current state, so that the algorithm may find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It has an area which is higher than its surrounding areas, but which itself has a slope and cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.
Simulated Annealing Search