Artificial Intelligence Notes

Module-2

MODULE-II

Solving problems by searching


When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that form a path to a goal state. Such an agent is called a problem-solving
agent, and the computational process it undertakes is called search.

Problem-solving agents use atomic representations, that is, states of the world are considered as
wholes, with no internal structure visible to the problem-solving algorithms.

Agents that use factored or structured representations of states are called planning agents.

This module covers both informed algorithms, in which the agent can estimate how far it is from the goal, and
uninformed algorithms, where no such estimate is available.

Problem-solving agents:


The agent can follow this four-phase problem-solving process:

GOAL FORMULATION: The agent adopts the goal of reaching the destination. Goals
organize behavior by limiting the objectives and hence the actions to be considered.

PROBLEM FORMULATION: The agent devises a description of the states and actions
necessary to reach the goal.

SEARCH: Before taking any action in the real world, the agent simulates sequences of actions
in its model, searching until it finds a sequence of actions that reaches the goal. Such a sequence
is called a solution.

EXECUTION: The agent can now execute the actions in the solution, one at a time.
1. Search problems and solutions:
A search problem can be defined formally as follows:

A set of possible states that the environment can be in. We call this the state space.

The initial state that the agent starts in. For example: Arad.

A set of one or more goal states.

The actions available to the agent. Given a state s, ACTIONS(s) returns a finite set of actions that
can be executed in s. We say that each of these actions is applicable in s. An example:

ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}.

A transition model, which describes what each action does. RESULT(s, a) returns the state that results
from doing action a in state s. For example,
RESULT(Arad, ToZerind) = Zerind

An action cost function, which gives the numeric cost of applying an action in a state. A problem-solving agent should use a cost function that reflects its own
performance measure.

A sequence of actions forms a path, and a solution is a path from the initial state to a goal state.
We assume that action costs are additive; that is, the total cost of a path is the sum of the individual
action costs. An optimal solution has the lowest path cost among all solutions.
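The components above can be gathered into a small problem class. The sketch below is our own framing (the class name, method names, and the map fragment are assumptions, with the standard Romania road-map distances used for the Arad example):

```python
class RouteProblem:
    """A search problem over a road map: states are city names."""
    def __init__(self, initial, goal, road_map):
        self.initial = initial          # initial state, e.g. 'Arad'
        self.goal = goal                # goal state
        self.road_map = road_map        # {city: {neighbor: cost}}

    def actions(self, s):
        # ACTIONS(s): here, the cities directly reachable from s
        return list(self.road_map[s])

    def result(self, s, a):
        # RESULT(s, a): driving toward city a leads to state a
        return a

    def action_cost(self, s, a, s2):
        # cost of applying action a in state s to reach s2
        return self.road_map[s][a]

    def is_goal(self, s):
        return s == self.goal

# A fragment of the Romania map used in the notes
romania = {'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
           'Sibiu': {}, 'Timisoara': {}, 'Zerind': {}}
problem = RouteProblem('Arad', 'Bucharest', romania)
print(problem.actions('Arad'))   # -> ['Sibiu', 'Timisoara', 'Zerind']
```

A path cost is then just the sum of `action_cost` values along the path, matching the additivity assumption above.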

2. Formulating Problems
Our formulation of the problem of getting to a destination is a model—an abstract mathematical
description—and not the real thing.
The process of removing detail from a representation is called abstraction.

EXAMPLE PROBLEMS
The problem-solving approach has been applied to a vast array of task environments.

A standardized problem is intended to illustrate or exercise various problem-solving methods. It
can be given a concise, exact description and hence is suitable as a benchmark for researchers to
compare the performance of algorithms.

A real-world problem, such as robot navigation, is one whose solutions people actually use, and
whose formulation is idiosyncratic, not standardized, because, for example, each robot has
different sensors that produce different data.

1. Standardized problem
In a sliding-tile puzzle, a number of tiles (sometimes called blocks or pieces) are arranged in a
grid with one or more blank spaces so that some of the tiles can slide into the blank space. The best-
known variant is the 8-puzzle, which consists of a 3x3 grid with eight numbered tiles and one blank
space; the 15-puzzle is played on a 4x4 grid. The object is to reach a specified goal state.

The standard formulation of the 8 puzzle is as follows:

STATES: A state description specifies the location of each of the tiles.

INITIAL STATE: Any state can be designated as the initial state.

ACTIONS: While in the physical world it is a tile that slides, the simplest way of describing an
action is to think of the blank space moving Left, Right, Up, or Down. If the blank is at an edge or
corner then not all actions will be applicable.

TRANSITION MODEL: Maps a state and action to a resulting state; for example, if we apply
Left to the start state in figure, the resulting state has the 5 and the blank switched.

GOAL STATE: Although any state could be the goal, we typically specify a state with the
numbers in order, as in figure .

ACTION COST: Each action costs 1.
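The ACTIONS and TRANSITION MODEL above can be sketched in Python. This is our own encoding (a state is a tuple of nine entries read row by row, with 0 standing for the blank), not the textbook's code:

```python
def actions(state):
    """Applicable moves of the blank space in a 3x3 sliding-tile state."""
    i = state.index(0)                     # position of the blank
    moves = []
    if i % 3 > 0: moves.append('Left')     # blank not in the left column
    if i % 3 < 2: moves.append('Right')    # blank not in the right column
    if i >= 3:    moves.append('Up')       # blank not in the top row
    if i < 6:     moves.append('Down')     # blank not in the bottom row
    return moves

def result(state, action):
    """RESULT(s, a): swap the blank with the neighboring tile."""
    i = state.index(0)
    j = i + {'Left': -1, 'Right': 1, 'Up': -3, 'Down': 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

# With the blank in a corner, not all actions are applicable:
print(actions((0, 1, 2, 3, 4, 5, 6, 7, 8)))   # -> ['Right', 'Down']
```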

Real-world problems

Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-car
systems that provide driving directions, are relatively straightforward. Others, such as routing video streams in computer
networks, military operations planning, and airline travel-planning systems, involve much more
complex specifications.
Consider the airline travel problems that must be solved by a travel-planning Web site:

STATES: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous segments,
their fare bases, and their status as domestic or international, the state must record extra
information about these “historical” aspects.

INITIAL STATE: The user’s home airport.

ACTIONS: Take any flight from the current location, in any seat class, leaving after the current
time, leaving enough time for within-airport transfer if needed.

TRANSITION MODEL: The state resulting from taking a flight will have the flight’s destination
as the new location and the flight’s arrival time as the new time.

GOAL STATE: A destination city. Sometimes the goal can be more complex, such as “arrive at
the destination on a nonstop flight.”

ACTION COST: A combination of monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane, frequent-flyer reward
points, and so on.

Search Algorithms

A search algorithm takes a search problem as input and returns a solution, or an indication of
failure.

Tree: A tree data structure is a hierarchical structure that is used to represent and organize data
in a way that is easy to navigate and search.

Distinction between the state space and the search tree.


The state space describes the (possibly infinite) set of states in the world, and the actions that allow
transitions from one state to another. The search tree describes paths between these states, reaching
towards the goal. The search tree may have multiple paths to (and thus multiple nodes for) any
given state, but each node in the tree has a unique path back to the root (as in all trees).
Measuring problem-solving performance

We can evaluate an algorithm’s performance in four ways:


COMPLETENESS: Is the algorithm guaranteed to find a solution when there is one, and to
correctly report failure when there is not?
COST OPTIMALITY: Does it find a solution with the lowest path cost of all solutions?
TIME COMPLEXITY: How long does it take to find a solution? This can be measured in
seconds, or more abstractly by the number of states and actions considered.
SPACE COMPLEXITY: How much memory is needed to perform the search?

Uninformed Search Strategies:

An uninformed search algorithm is given no clue about how close a state is to the goal(s).

Breadth-first search

When all actions have the same cost, an appropriate strategy is breadth-first search, in which the
root node is expanded first, then all the successors of the root node are expanded next, then their
successors, and so on.

This is a systematic search strategy that is therefore complete even on infinite state spaces.
A first-in-first-out queue will be faster than a priority queue, and will give us the correct order of
nodes: new nodes (which are always deeper than their parents) go to the back of the queue, and
old nodes, which are shallower than the new nodes, get expanded first.

In addition, reached can be a set of states rather than a mapping from states to nodes, because once
we’ve reached a state, we can never find a better path to the state.

That also means we can do an early goal test, checking whether a node is a solution as soon as it
is generated, rather than doing a late goal test that waits until a node is popped off the queue.
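As a sketch of the ideas above (a FIFO queue, a reached set of states, and the early goal test), here is a minimal breadth-first search in Python; the function and graph names are our own:

```python
from collections import deque

def breadth_first_search(start, goal, graph):
    """Return a shallowest path from start to goal, or None on failure."""
    if start == goal:
        return [start]
    frontier = deque([[start]])       # FIFO queue of paths
    reached = {start}                 # a set of states, not a map to nodes
    while frontier:
        path = frontier.popleft()
        for child in graph.get(path[-1], []):
            if child not in reached:
                new_path = path + [child]
                if child == goal:     # early goal test, on generation
                    return new_path
                reached.add(child)
                frontier.append(new_path)
    return None                       # failure

# Example: D is reachable two ways; BFS returns a shallowest path.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D']}
print(breadth_first_search('A', 'D', graph))   # -> ['A', 'B', 'D']
```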

Advantage:

 The main advantage is that it is very simple to implement and understand.


 It is guaranteed to find the shortest path from the starting point to the goal.
 Breadth First Search tends to find paths with fewer steps than other algorithms, such as
Depth First Search.
 Breadth First Search can be easily parallelized, which means that it can take advantage of
multiple processors to speed up the search.

Disadvantage:

 It can be very memory intensive since it needs to keep track of all the nodes in the search
tree.
 It can be slow since it expands all the nodes at each level before moving on to the next
level.
 It can find sub-optimal solutions when action costs differ, since it minimizes the number
of steps rather than the total path cost.

Exercise-1: Find a path for the given tree to reach the goal node M from root node A
using the Breadth-First Search algorithm.

Starting node: A

Target node: M

Data structures used: Queue and List

Solution:

Step 1: Add node A into the Queue; the List is empty.

Queue A
List
Step 2: A is not the target, so move A from the Queue to the List and add the children of A to the Queue.

Queue B (Front) C (Rear)


List A
Step 3: Node B is not target in this case, so move B to list and add children of B into queue.

Queue C D E F
List A B

Step 4: Since node C appears at the front of the Queue, it is checked to find whether it is the target.
As C is not the target, it is moved to the List and its children are added to the Queue.

Queue D E F G H I
List A B C

Step 5: D is not the target, so it is moved to the List; it has no children, so the Queue is not
updated.

Queue E F G H I
List A B C D

Step 6: E is not target and it is shifted to list and its children J and K are added in queue

Queue F G H I J K
List A B C D E

Step 7: F is not target and it is shifted to list. Nothing is added to queue as F does not have
children.

Queue G H I J K
List A B C D E F

Step 8: G to list and its child L to queue

Queue H I J K L
List A B C D E F G

Step 9: H to list and no children

Queue I J K L
List A B C D E F G H

Step 10: I to list and no children.
Queue J K L
List A B C D E F G H I

Step 11: J to list; nothing is added to the queue as J has no children.

Queue K L
List A B C D E F G H I J

Step 12: K is moved to list and it does not have children.

Queue L
List A B C D E F G H I J K

Step 13: L is switched to list and M is added in queue

Queue M
List A B C D E F G H I J K L

Step 14: M is the target node, so it is put in the list. The process of searching ends here.

Queue
List A B C D E F G H I J K L M

Order in which nodes are visited before reaching M:

A->B->C->D->E->F->G->H->I->J->K->L->M

(The actual path from A to M in the tree is A -> C -> G -> L -> M.)
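The trace above can be replayed in Python; the tree edges here are our reconstruction from the Queue/List tables (the figure itself is not reproduced in the notes):

```python
from collections import deque

# The exercise tree, reconstructed from the Queue/List tables above.
tree = {'A': ['B', 'C'], 'B': ['D', 'E', 'F'], 'C': ['G', 'H', 'I'],
        'E': ['J', 'K'], 'G': ['L'], 'L': ['M']}

def bfs_order(root, goal):
    """Order in which nodes move from the Queue to the List."""
    queue, visited = deque([root]), []
    while queue:
        node = queue.popleft()
        visited.append(node)
        if node == goal:
            return visited
        queue.extend(tree.get(node, []))
    return visited

print(bfs_order('A', 'M'))   # visits A through M in level order
```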
2. Find path from root node A to G using Breadth First Search algorithm.

DEPTH FIRST SEARCH

Depth-first search always expands the deepest node in the frontier first. It could be implemented
as a call to BEST-FIRST-SEARCH where the evaluation function is the negative of the depth.
However, it is usually implemented not as a graph search but as a tree-like search that does not
keep a table of reached states.

Depth-first search is not cost-optimal; it returns the first solution it finds, even if it is not cheapest.
For finite state spaces that are trees it is efficient and complete; for acyclic state spaces it may
end up expanding the same state many times via different paths, but will (eventually)
systematically explore the entire space.

In cyclic state spaces it can get stuck in an infinite loop; therefore some implementations of depth-
first search check each new node for cycles. Finally, in infinite state spaces, depth-first search is
not systematic: it can get stuck going down an infinite path, even if there are no cycles. Thus,
depth-first search is incomplete.

With all this bad news, why would anyone consider using depth-first search rather than breadth-
first or best-first? The answer is that for problems where a tree-like search is feasible, depth-first
search has much smaller needs for memory. We don’t keep a reached table at all, and the frontier
is very small: think of the frontier in breadth-first search as the surface of an ever-expanding
sphere, while the frontier in depth-first search is just a radius of the sphere.
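Tree-like depth-first search, as described, can be sketched with an explicit stack of paths and no reached table; the example tree below is hypothetical:

```python
def depth_first_search(start, goal, tree):
    """Expand the deepest node first; return the first solution path found."""
    stack = [[start]]                      # LIFO stack of paths
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path                    # first solution, not necessarily cheapest
        for child in reversed(tree.get(node, [])):
            stack.append(path + [child])   # push in reverse: leftmost expanded first
    return None                            # failure

# A hypothetical tree: DFS dives into B's subtree before touching C.
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'E': ['I', 'J']}
print(depth_first_search('A', 'J', tree))  # -> ['A', 'B', 'E', 'J']
```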

Exercise 1: Find the path for the goal node J in a given tree using the Depth-First Search
algorithm.

Step 1: Push the initial node A onto the stack. Since node A is not the goal, pop it and place it in the list. Push
the children of A onto the stack.

Step 2: Pop node B from the stack and place it in the list, as it is not the goal, and push all children of B
onto the stack.

Step 3: Node D is added to the list; nothing is pushed onto the stack, as node D has no children.

Step 4: Next is node E, which is not the goal and has two children I and J. So, E is popped
from the stack and moved to the list, and nodes I and J are pushed onto the stack.

Step 5: Node I is popped from the stack and appended to the list.

Step 6: J is popped from the stack and placed in the list. As J is the goal node, the algorithm stops its
search here.

Therefore, the order of nodes visited to reach J from A is A->B->D->E->I->J (the path in the tree from A to J is A -> B -> E -> J).


ITERATIVE DEEPENING DEPTH FIRST SEARCH
The iterativeDeepeningSearch function performs iterative deepening search on the graph, using a
root node and a goal node as inputs, until the goal is attained or the search space is exhausted. This is
accomplished by repeatedly calling the depthLimitedSearch function, which applies a depth
limit to DFS. The search ends and returns the goal node if the goal is located at any depth.
The search yields None if the search space is exhausted (all nodes up to the depth limit have been
investigated).

The depthLimitedSearch function conducts DFS on the graph with the specified depth limit,
taking as inputs a node, a goal node, and a depth limit. The search returns FOUND if the
goal node is located within the current depth limit. The search returns NOT FOUND if the depth limit is
reached but the goal node cannot be located. If neither condition holds, the search recursively moves
on to the node's children.
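The two functions described above can be sketched as follows; the function names mirror the text, while the implementation details (including the `max_depth` cap used to stop on finite graphs) are our assumptions:

```python
FOUND, NOT_FOUND = 'FOUND', 'NOT_FOUND'    # result flags, as in the text

def depthLimitedSearch(node, goal, limit, graph):
    """DFS from node that does not descend below the given depth limit."""
    if node == goal:
        return FOUND
    if limit == 0:
        return NOT_FOUND
    for child in graph.get(node, []):      # recurse on the node's children
        if depthLimitedSearch(child, goal, limit - 1, graph) == FOUND:
            return FOUND
    return NOT_FOUND

def iterativeDeepeningSearch(root, goal, graph, max_depth=20):
    """Repeatedly apply depth-limited DFS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        if depthLimitedSearch(root, goal, limit, graph) == FOUND:
            return goal                    # goal located at some depth
    return None                            # search space used up

graph = {'A': ['B', 'C'], 'B': ['D']}
print(iterativeDeepeningSearch('A', 'D', graph))   # -> D
```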
INFORMED SEARCH STRATEGIES

An informed search strategy uses domain-specific hints about the location of goals, and can find solutions more efficiently
than an uninformed strategy.

The hints come in the form of a heuristic function, denoted h(n).

h(n) = estimated cost of the cheapest path from the state at node n to a goal state

For example, in route-finding problems, we can estimate the distance from the current state to a
goal by computing the straight-line distance on the map between the two points.

Greedy best-first search:


Greedy best-first search is a form of best-first search that expands first the node with the lowest
h(n) value—the node that appears to be closest to the goal—on the grounds that this is likely to
lead to a solution quickly. So the evaluation function f(n)=h(n).

Example: Find path from Arad to Bucharest in the given map with heuristic values from
table.
Solution:

Following figures show the progress of a greedy best-first search using hSLD to find a path from
Arad to Bucharest. The first node to be expanded from Arad will be Sibiu because the heuristic
says it is closer to Bucharest than is either Zerind or Timisoara. The next node to be expanded will
be Fagaras because it is now closest according to the heuristic. Fagaras in turn generates Bucharest,
which is the goal. For this particular problem, greedy best-first search using hSLD finds a solution
without ever expanding a node that is not on the solution path. The solution it found does not have
optimal cost, however: the path via Sibiu and Fagaras to Bucharest is 32 miles longer than the path
through Rimnicu Vilcea and Pitesti. This is why the algorithm is called “greedy”—on each iteration
it tries to get as close to a goal as it can, but greediness can lead to worse results than being careful.
Exercise 1: Find goal node using greedy best first search algorithm.

(Note: The value beside each node indicates the heuristic value of that node. Sometimes
heuristic values are given in a table.)

Solution:
A priority queue data structure is used to implement the greedy best-first search algorithm. In this problem, the priority queue
is implemented by an OPEN list, and a CLOSE list is used to record the explored path.

Step 1: Add root node with its heuristic value in OPEN list.

A -- OPEN List


10

-- CLOSE List

Step 2: Since there is only one node in OPEN, node A is moved to the CLOSE list and
explored, and the children of A are inserted into the OPEN list.

B C
2 3

Step 3: Higher priority is given to B as its heuristic value is lower than C's, so B is moved to the
CLOSE list and its children are added to the OPEN list.

C D E
3 5 6

A B

Step 4: Node C gets higher priority than D and E in the OPEN list of step 3. Hence, C
is moved to CLOSE and its children are explored and inserted into the OPEN list.

D E F G
5 6 4 2

A B C

Step 5: In step 4, the least heuristic value is 2, for G. Therefore, G is moved to CLOSE and G's
children F and L are inserted into the OPEN list.

D E F L
5 6 4 2
A B C G

Step 6: L has the least value and is moved into the CLOSE list. L does not have children.

D E F
5 6 4

A B C G L

Step 7: Node F has the least heuristic value among the nodes in the OPEN list, so F is moved to
CLOSE and its children J and K are added to OPEN.

D E J K
5 6 1 0

A B C G L F

Step 8: Node K is moved to the CLOSE list and the search stops here, as the heuristic
value of K is zero, which indicates the goal node.

D E J
5 6 1

A B C G L F K

Hence, the route to reach goal K from A is:

A -> B -> C -> G -> L -> F -> K
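The OPEN/CLOSE trace above can be replayed with Python's `heapq`. The tree edges and heuristic values are read off the step tables (our reconstruction); duplicate OPEN entries, as with F in step 5, are simply left in the queue:

```python
import heapq

h = {'A': 10, 'B': 2, 'C': 3, 'D': 5, 'E': 6, 'F': 4, 'G': 2, 'L': 2,
     'J': 1, 'K': 0}
children = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
            'G': ['F', 'L'], 'F': ['J', 'K']}

def greedy_close_order(root, goal):
    """Return the CLOSE list built by greedy best-first search, f(n) = h(n)."""
    open_list = [(h[root], root)]          # OPEN: priority queue keyed on h
    close = []
    while open_list:
        _, node = heapq.heappop(open_list) # lowest heuristic value first
        close.append(node)
        if node == goal:                   # h = 0 marks the goal node
            return close
        for c in children.get(node, []):
            heapq.heappush(open_list, (h[c], c))
    return close

print(greedy_close_order('A', 'K'))   # -> ['A', 'B', 'C', 'G', 'L', 'F', 'K']
```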

A* search:
The most common informed search algorithm is A* search, a best-first search that uses the
evaluation function

f(n) = g(n) + h(n)

where g(n) is the path cost from the initial state to node n, and h(n) is the estimated cost of
the shortest path from n to a goal state, so we have

f(n) = estimated cost of the best path that continues from n to a goal.

Example: Find path from Arad to Bucharest in the given map with heuristic values from
table.

Solution:
Features of A*:

A* search is complete. Whether A* is cost-optimal depends on certain properties of the heuristic.

A key property is admissibility: an admissible heuristic is one that never overestimates the cost
to reach a goal.

A slightly stronger property is called consistency.


Exercise 2:

Find cheapest path from A to F using A* algorithm for the given graph.

f(n) = g(n) + h(n)

g(n): sum of path costs from the root node to n

h(n): heuristic value of n

Step 1: A -> F = (13) + 0 = 13 (hold)
A -> B = (1) + 3 = 4 (explore)

Step 2: A -> F = (13) + 0 = 13 (hold)
A -> B -> C = (1 + 1) + 4 = 6 (hold)
A -> B -> D = (1 + 2) + 2 = 5 (explore)

Step 3: A -> F = (13) + 0 = 13 (hold)
A -> B -> C = (1 + 1) + 4 = 6 (explore)
A -> B -> D -> E = (1 + 2 + 5) + 6 = 14 (hold)

Step 4: A -> F = (13) + 0 = 13 (hold)
A -> B -> D -> E = (1 + 2 + 5) + 6 = 14 (hold)
A -> B -> C -> E = (1 + 1 + 3) + 6 = 11 (explore)

Step 5: A -> F = (13) + 0 = 13 (hold)
A -> B -> D -> E = (1 + 2 + 5) + 6 = 14 (hold)
A -> B -> C -> E -> F = (1 + 1 + 3 + 2) + 0 = 7 (cheapest-cost path to reach the target)

Cheapest path to reach Target : A -> B -> C -> E -> F
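The whole trace can be checked mechanically. This sketch reconstructs the edge costs and heuristic values from the f = g + h computations above (h(A) is never used, so it is set to 0) and runs a simple A* over them:

```python
import heapq

# Edge costs and heuristic values read off the f = g + h steps above.
graph = {'A': {'F': 13, 'B': 1}, 'B': {'C': 1, 'D': 2},
         'C': {'E': 3}, 'D': {'E': 5}, 'E': {'F': 2}}
h = {'A': 0, 'B': 3, 'C': 4, 'D': 2, 'E': 6, 'F': 0}

def a_star(start, goal):
    """Always expand the held path with the lowest f = g + h."""
    frontier = [(h[start], 0, [start])]    # entries are (f, g, path)
    while frontier:
        f, g, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return path, g
        for child, cost in graph.get(path[-1], {}).items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[child], g2, path + [child]))
    return None, None

path, cost = a_star('A', 'F')
print(path, cost)   # -> ['A', 'B', 'C', 'E', 'F'] 7
```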

Exercise:

Step 1: S -> A = (1) + 6 = 7 (hold)
S -> B = (4) + 2 = 6 (explore)

Step 2: S -> A = (1) + 6 = 7 (explore)
S -> B -> C = (4 + 2) + 1 = 7 (explore)

Step 3: S -> A -> B = (1 + 2) + 2 = 5 (explore)
S -> A -> C = (1 + 5) + 1 = 7 (hold)
S -> A -> D = (1 + 12) + 0 = 13 (hold)
S -> B -> C -> D = (4 + 2 + 3) + 0 = 9 (hold)

Step 4: S -> A -> C = (1 + 5) + 1 = 7 (hold)
S -> A -> D = (1 + 12) + 0 = 13 (hold)
S -> B -> C -> D = (4 + 2 + 3) + 0 = 9 (hold)
S -> A -> B -> C = (1 + 2 + 2) + 1 = 6 (explore)
Step 5: S -> A -> C = (1 + 5) + 1 = 7 (explore)
S -> A -> D = (1 + 12) + 0 = 13 (hold)
S -> B -> C -> D = (4 + 2 + 3) + 0 = 9 (hold)
S -> A -> B -> C -> D = (1 + 2 + 2 + 3) + 0 = 8 (hold)

Step 6: S -> A -> D = (1 + 12) + 0 = 13 (hold)
S -> B -> C -> D = (4 + 2 + 3) + 0 = 9 (hold)
S -> A -> C -> D = (1 + 5 + 3) + 0 = 9 (hold)
S -> A -> B -> C -> D = (1 + 2 + 2 + 3) + 0 = 8 (Target)

Cheapest path to reach Target : S -> A -> B -> C -> D
