
1

PROBLEM SOLVING BY SEARCH

Dr. Nguyen Quoc Tuan

Department of Network and Information Systems
Faculty of Information Technology
Contents
2

 Formulating problems as search problems


 Searching for Solutions
Formulating problems as search problems
3

 Example 1: Romania
Formulating problems as search problems
4

 Example 1: You’re in Arad, Romania, and you need to get to Bucharest as quickly as possible to catch your flight.
 Formulate problem
◼ States: Various cities
◼ Operators: Drive between cities
 Formulate goal
◼ Be in Bucharest before flight leaves
 Find solution
◼ Actual sequence of cities from Arad to Bucharest
◼ Minimize driving distance/time
Formulating problems as search problems
5

 Problem description <{S}, S0, {SG}, {O}, {g}>


 {S} – cities (ci)
 S0 – Arad
 SG – Bucharest
◼ G(S) – Is the current state (S) Bucharest?
 {O}: { ci → cj, for some i and j }
 g_ij – the cost of the step ci → cj (see the sketch below):
◼ Driving distance between ci and cj?
◼ Time to drive from ci to cj?
◼ 1?
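As an illustration, this formulation can be written down in a few lines of Python. The sketch below is only an example: the road distances are the commonly quoted figures for the AIMA Romania map, only a subset of roads is included, and all names are illustrative.

# Sketch of the Romania problem as < {S}, S0, {SG}, {O}, {g} > (subset of roads only).
ROADS = {
    ("Arad", "Zerind"): 75,            ("Arad", "Sibiu"): 140,
    ("Arad", "Timisoara"): 118,        ("Sibiu", "Fagaras"): 99,
    ("Sibiu", "Rimnicu Vilcea"): 80,   ("Fagaras", "Bucharest"): 211,
    ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
}

S0 = "Arad"                                   # initial state

def G(s):                                     # goal test G(S)
    return s == "Bucharest"

def O(s):                                     # operators: cities reachable by one drive
    return [b if a == s else a for (a, b) in ROADS if s in (a, b)]

def g(ci, cj):                                # step cost: driving distance ci -> cj
    return ROADS.get((ci, cj)) or ROADS.get((cj, ci))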
Formulating problems as search problems
6

 State-space description < {S}, S0, {SG}, {O}, {g} >
 S: Possible states
 S0: Initial state of the agent
 SG: Goal state(s)
◼ Or equivalently, a goal test G(S)
 O: Operators, O: {S} → {S}
◼ Describes the possible actions of the agent
 g: Path cost function, assigns a cost to a path/action (a generic interface is sketched below)
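As a rough sketch, the five components can also be written as a small Python interface; the class and method names below are illustrative, not part of the slides.

class SearchProblem:
    """Abstract state-space description < {S}, S0, {SG}, {O}, {g} >."""
    def initial_state(self):            # S0
        raise NotImplementedError
    def is_goal(self, s):               # goal test G(S)
        raise NotImplementedError
    def successors(self, s):            # operators O: yields (action, next state) pairs
        raise NotImplementedError
    def step_cost(self, s, a, s2):      # path cost g of a single action
        return 1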
Formulating problems as search problems
7

 State-space diagrams
 State-space description can be represented by a state-space diagram, which shows
◼ States (incl. initial and goal)
◼ Operators/actions (state transitions)
◼ Path costs
Formulating problems as search problems
8

 Example 2: 8-Puzzle

 States: Various configurations of the puzzle


 Operators: Movements of the blank (see the sketch below)
 Goal test: Goal configuration
 Path cost: Each move costs 1
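A minimal sketch of the 8-puzzle operators, assuming a state is stored as a 9-tuple in row-major order with 0 marking the blank (this encoding is an assumption, not from the slides):

def moves(state):
    """Yield the successor states obtained by sliding a tile into the blank."""
    i = state.index(0)                       # position of the blank (row-major)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]          # swap the blank with the adjacent tile
            yield tuple(s)                   # each such move has path cost 1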
Formulating problems as search problems
9

 Example 3: Missionaries and Cannibals


Problem: Three missionaries and three cannibals are on
one side of a river, along with a boat that can hold one
or two people. Find a way to get everyone to the other
side, without ever leaving a group of missionaries in one
place outnumbered by the cannibals in that place.

 States:
 Operators:
 Goal test:
 Path cost: ?
Formulating problems as search problems
10

 Example 3: Missionaries and Cannibals


Formulating problems as search problems
11

 Example 3: Missionaries and Cannibals


 State-space description < {S}, S0, {SG}, {O}, {g} >
◼ S: { (m c b) : m, c ∈ {0,1,2,3}, b ∈ {0,1} } – missionaries, cannibals and boat on the starting bank
◼ S0: {3 3 1}
◼ SG: {0 0 0}
◼ O: { (x y b) → (x’ y’ b’) } – one boat crossing (see the sketch below)
◼ g: 1
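A minimal sketch of these operators under the encoding above, where a state (m, c, b) counts the missionaries, cannibals and boat on the starting bank (the helper names are illustrative):

BOAT_LOADS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]     # (missionaries, cannibals) in the boat

def safe(m, c):
    """No bank may have its missionaries outnumbered by cannibals."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    m, c, b = state                                        # b = 1: boat on the starting bank
    for dm, dc in BOAT_LOADS:
        m2, c2 = (m - dm, c - dc) if b == 1 else (m + dm, c + dc)
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2):
            yield (m2, c2, 1 - b)                          # each crossing has cost g = 1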
Formulating problems as search problems
12

 Example 3: Missionaries and Cannibals


Searching for Solutions
13

 Finding a solution is done by searching through the state space
 While maintaining a set of partial solution sequences
 The search strategy determines which states should be expanded first
 Expand a state = Applying the operators to the current state and thereby generating a new set of successor states
 Conceptually, the search process builds up a search tree that is superimposed over the state space
 Root node of the tree = the initial state
 Leaves of the tree = states waiting to be expanded (or states whose expansion produced no successors)
 At each step, the search algorithm chooses a leaf to expand (a generic loop is sketched below)
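A rough sketch of this generic loop, reusing the SearchProblem-style interface sketched earlier; the frontier holds the leaves, and the strategy is simply the rule for choosing which leaf to expand next.

from collections import deque

def tree_search(problem, pop):
    """Generic tree search; `pop` picks the leaf to expand next."""
    frontier = deque([(problem.initial_state(), [])])      # leaves: (state, path so far)
    while frontier:
        state, path = pop(frontier)                        # the strategy chooses a leaf
        if problem.is_goal(state):
            return path + [state]
        for _, nxt in problem.successors(state):           # expand: apply the operators
            frontier.append((nxt, path + [state]))
    return None                                            # no solution found

# tree_search(p, lambda f: f.popleft())   expands the oldest leaves first (breadth-first)
# tree_search(p, lambda f: f.pop())       expands the newest leaves first (depth-first)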
Search strategy
14

 Uninformed (blind) search


 Can only distinguish goal state from non-goal state
 Informed (heuristic) search
 Can evaluate states
Search strategy
15

 Blind search strategies


 Breadth-first search
 Depth-first search
 Depth-limited search
 …
Breadth-first search
16

function Breadth_First_Search
{
  1. Initialize list L with the initial state of the problem;
  2. loop do
       if L is empty then return failure;
       remove the node u at the front of L;
       if u is a goal state then return the corresponding solution
       else for each node v adjacent to u do
            { insert v at the rear of L (L is used as a queue); father(v) = u };
}
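A minimal Python rendering of this pseudocode, assuming the state space is given as an adjacency dict; the visited check is a small addition not present in the pseudocode above.

from collections import deque

def breadth_first_search(graph, start, goal):
    L = deque([start])                  # list L, used as a FIFO queue
    father = {start: None}
    while L:
        u = L.popleft()                 # node at the front of L
        if u == goal:                   # goal test: rebuild the solution via father links
            path = []
            while u is not None:
                path.append(u)
                u = father[u]
            return path[::-1]
        for v in graph.get(u, []):      # expand u
            if v not in father:         # visited check (addition)
                father[v] = u
                L.append(v)             # insert at the rear of the queue
    return None                         # L empty: failure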
Breadth-first search
17

 Example 1:
 Initial state: A; Goal state: H
Breadth-first search
18

 Example 1:
Depth-first search
19

function Depth_First_Search
{
  1. Initialize list L with the initial state of the problem;
  2. loop do
       if L is empty then return failure;
       remove the node u at the front of L;
       if u is a goal state then return the corresponding solution
       else for each node v adjacent to u do
            { insert v at the front of L (L is used as a stack); father(v) = u };
}
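The corresponding Python sketch for depth-first search differs only in where new nodes are inserted (again assuming an adjacency dict and adding a visited check):

def depth_first_search(graph, start, goal):
    L = [start]                         # list L, used as a LIFO stack
    father = {start: None}
    while L:
        u = L.pop()                     # node at the front of L (most recently inserted)
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = father[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in father:         # visited check (addition)
                father[v] = u
                L.append(v)             # insert at the front of L
    return None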
Depth-first search
20

 Example 1:
Blind search
21

 Example 2:
 Initial state: A; Goal state: G
Blind search
22

 Example 2:
Blind search
23

 Example 3:
 Initial state: A; Goal state: G
Search strategy
24

 Informed (heuristic) search


 Best-First Search
◼ Breadth-First Search + heuristic function
 Hill-Climbing Search
◼ Depth-First Search + heuristic function
Heuristic function
25

 The heuristic function h(N) ≥ 0 estimates the cost to go from STATE(N) to a goal state.
 Example

      STATE(N)          Goal state
      5  _  8           1  2  3
      4  2  1           4  5  6
      7  3  6           7  8  _
 h1(N) = number of misplaced numbered tiles = 6
 h2(N) = sum of the (Manhattan) distance of every numbered tile to its goal position.
h2(N) = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
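A small sketch that reproduces the two values above, assuming the same row-major 9-tuple encoding with 0 for the blank:

STATE_N = (5, 0, 8,
           4, 2, 1,
           7, 3, 6)
GOAL    = (1, 2, 3,
           4, 5, 6,
           7, 8, 0)

def h1(state, goal=GOAL):
    """Number of misplaced numbered tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of every numbered tile to its goal position."""
    return sum(abs(i // 3 - goal.index(t) // 3) + abs(i % 3 - goal.index(t) % 3)
               for i, t in enumerate(state) if t != 0)

# h1(STATE_N) == 6 and h2(STATE_N) == 13, matching the values above.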
Best-First Search
26

 Example 1: Start State : A; Goal State: B


Best-First Search
27

function Best_First_Search
{
  1. Initialize list L with the initial state of the problem;
  2. loop do
       if L is empty then return failure;
       remove the node u at the front of L;
       if u is a goal state then return the corresponding solution
       else for each node v adjacent to u do
            { insert v into L so that L stays sorted in ascending order of
              the evaluation function; father(v) = u };
}
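A minimal Python rendering of best-first search, keeping L ordered by an evaluation function h(v) with a heap (the adjacency-dict representation and the visited check are assumptions, as before):

import heapq

def best_first_search(graph, start, goal, h):
    L = [(h(start), start)]             # list L kept sorted by the evaluation function
    father = {start: None}
    while L:
        _, u = heapq.heappop(L)         # node at the front of L (smallest h)
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = father[u]
            return path[::-1]
        for v in graph.get(u, []):
            if v not in father:         # visited check (addition)
                father[v] = u
                heapq.heappush(L, (h(v), v))
    return None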
Best-First Search
28
Hill-Climbing Search
29

function Hill_Climbing_Search
{
  1. Initialize list L with the initial state of the problem;
  2. loop do
       if L is empty then return failure;
       remove the node u at the front of L;
       if u is a goal state then return the corresponding solution
       else { for each node v adjacent to u do
                 { insert v into a local list L1 so that L1 is sorted in
                   ascending order of the evaluation function; father(v) = u };
              move list L1 to the beginning of list L };
}
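A minimal Python rendering of this variant: the successors of u are collected into a local list L1, sorted by the evaluation function h, and placed at the front of L (same assumed graph representation and visited check as before).

def hill_climbing_search(graph, start, goal, h):
    L = [start]                                      # list L
    father = {start: None}
    while L:
        u = L.pop(0)                                 # node at the front of L
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = father[u]
            return path[::-1]
        L1 = sorted((v for v in graph.get(u, []) if v not in father), key=h)
        for v in L1:
            father[v] = u
        L = L1 + L                                   # move L1 to the beginning of L
    return None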
Hill-Climbing Search
30
Exercise
31