Lecture 2

Chapter 2 discusses problem-solving through searching, highlighting the differences between reflex agents and goal-based agents. It outlines the components of well-defined problems, including initial state, actions, transition models, goal tests, and path costs, and emphasizes the importance of search strategies in finding optimal solutions. The chapter also covers the evaluation criteria for search strategies, including completeness, optimality, time complexity, and space complexity.


Chapter 2

Solving Problems by Searching


A reflex agent is simple
⚫ it bases its actions on
⚫ a direct mapping from states to actions

⚫ but it cannot work well in environments
⚫ in which this mapping would be too large to store
⚫ and would take too long to learn

Hence, a goal-based agent is used


Problem-solving agent
⚫ A kind of goal-based agent
⚫ It solves problems by
⚫ finding sequences of actions that lead to
desirable states (goals)
⚫ To solve a problem,
⚫ the first step is the goal formulation, based on
the current situation
Goal formulation
The goal is formulated
⚫ as a set of world states, in which the goal is
satisfied
Reaching from initial state → goal state
⚫ Actions are required
Actions are the operators
⚫ causing transitions between world states
⚫ Actions should be abstract to a certain
degree, rather than very detailed
⚫ E.g., "turn left" vs. "turn left 30 degrees", etc.
Problem formulation
The process of deciding
⚫ what actions and states to consider
E.g., driving Amman → Zarqa
⚫ the in-between states and actions must be defined
⚫ States: some places in Amman & Zarqa
⚫ Actions: turn left, turn right, go straight,
accelerate & brake, etc.
Search
Because there are many ways to achieve
the same goal
⚫ Those ways can together be expressed as a tree
⚫ When there are multiple options of unknown value at a point,
⚫ the agent can examine the different possible
sequences of actions, and choose the best one
⚫ This process of looking for the best sequence
is called search
⚫ The best sequence is then a list of actions,
called the solution
Search algorithm
Defined as an algorithm that
⚫ takes a problem
⚫ and returns a solution

Once a solution is found


⚫ the agent follows the solution
⚫ and carries out the list of actions –
execution phase
Design of an agent
⚫ “Formulate, search, execute”
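
A minimal Python sketch of this "formulate, search, execute" design; the helper functions formulate_goal, formulate_problem and search are hypothetical placeholders, not defined on these slides:

def simple_problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    """Sketch of the formulate-search-execute loop of a problem-solving agent."""
    state = percept                               # the current situation
    goal = formulate_goal(state)                  # 1. formulate the goal
    problem = formulate_problem(state, goal)      # 2. formulate the problem
    solution = search(problem)                    # 3. search: problem -> list of actions
    return solution                               # 4. execution phase: carry out the actions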
Well-defined problems and solutions
A problem is defined by 5 components:
Initial state
Actions
Transition model
(successor function)
Goal test
Path cost
Well-defined problems and solutions
A problem is defined by 5 components:
⚫ The initial state
⚫ that the agent starts in
⚫ The set of possible actions
⚫ Transition model: a description of what each action
does
(successor function): any state reachable from a
given state by a single action
⚫ The initial state, actions and transition model define the
state space
⚫ the set of all states reachable from the initial state by any
sequence of actions
⚫ A path in the state space:
⚫ any sequence of states connected by a sequence of actions.
Well-defined problems and solutions
The goal test
⚫ applied to the current state to test
⚫ whether the agent is in a goal state
⚫ Sometimes there is an explicit set of possible goal states
(example: being in Amman)
⚫ Sometimes the goal is described by properties
⚫ instead of stating the set of states explicitly
⚫ Example: chess
⚫ the agent wins if it can capture the opponent's KING on the
next move (checkmate)
⚫ no matter what the opponent does
Well-defined problems and solutions
A path cost function,
⚫ assigns a numeric cost to each path
⚫ = performance measure
⚫ denoted by g
⚫ to distinguish the best path from others

Usually the path cost is


⚫ the sum of the step costs of the individual
actions (in the action list)
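
As a small illustration (the step costs below are made up), the path cost g is simply the sum of the individual step costs:

step_costs = [40, 25, 60]      # hypothetical costs of three actions on a path
g = sum(step_costs)            # path cost g = 125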
Well-defined problems and solutions
Together a problem is defined by
⚫ Initial state
⚫ Actions
⚫ Successor function
⚫ Goal test
⚫ Path cost function
The solution of a problem is then
⚫ a path from the initial state to a state satisfying the goal
test
Optimal solution
⚫ the solution with lowest path cost among all solutions
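
A sketch of these five components as a Python class; the method names (actions, result, goal_test, step_cost, path_cost) are illustrative, not taken from the slides:

class Problem:
    """Five components of a well-defined problem (a sketch)."""

    def __init__(self, initial_state, goal_states):
        self.initial_state = initial_state            # 1. initial state
        self.goal_states = goal_states

    def actions(self, state):                         # 2. actions available in a state
        raise NotImplementedError

    def result(self, state, action):                  # 3. transition model / successor function
        raise NotImplementedError

    def goal_test(self, state):                       # 4. goal test
        return state in self.goal_states

    def step_cost(self, state, action, next_state):   # building block of 5. path cost
        return 1

    def path_cost(self, g, state, action, next_state):
        return g + self.step_cost(state, action, next_state)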
Evaluation Criteria
formulation of a problem as a search task
basic search strategies
important properties of search strategies
selection of search strategies for specific tasks
(The ordering of the nodes in the FRINGE
defines the search strategy)
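
To make the last point concrete, a sketch in Python (assuming the fringe holds search nodes): the same search code becomes breadth-first or depth-first depending only on how nodes are taken from the fringe.

from collections import deque

fringe = deque()                 # nodes that have not been expanded yet
fringe.append("some node")

node = fringe.popleft()          # FIFO ordering -> breadth-first search
# node = fringe.pop()            # LIFO ordering -> depth-first search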
Example
From our Example
1. Formulate Goal

- Be in Amman

2. Formulate Problem

- States: cities
- Actions: drive between cities

3. Find Solution

- Sequence of cities: Ajlun – Jarash – Amman


Our Example

1. Problem: to go from Ajlun to Amman

2. Initial State: Ajlun

3. Operator: go from one city to another

4. State Space: {Jarash, Salt, Irbid, …}

5. Goal Test: is the agent in Amman?

6. Path Cost Function: get the cost from the map

7. Solution: { {Aj → Ja → Ir → Ma → Za → Am}, {Aj → Ir → Ma → Za → Am}, …, {Aj → Ja → Am} }


8. State Set Space: {Ajlun → Jarash → Amman}
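
One possible Python encoding of this formulation; the road map and distances below are hypothetical and only for illustration:

road_map = {                     # hypothetical distances, not from a real map
    "Ajlun":  {"Jarash": 20, "Irbid": 30},
    "Jarash": {"Ajlun": 20, "Zarqa": 40, "Amman": 50},
    "Irbid":  {"Ajlun": 30, "Zarqa": 60},
    "Zarqa":  {"Jarash": 40, "Irbid": 60, "Amman": 25},
    "Amman":  {"Jarash": 50, "Zarqa": 25},
}

initial_state = "Ajlun"
actions = lambda state: list(road_map[state])        # drive to a neighbouring city
result = lambda state, action: action                # the action names the next city
goal_test = lambda state: state == "Amman"
step_cost = lambda s, a, s2: road_map[s][s2]         # path cost = sum of step costs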
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest

Formulate goal:
⚫ be in Bucharest

Formulate problem:
⚫ states: various cities
⚫ actions: drive between cities

Find solution:
⚫ sequence of cities, e.g., Arad, Sibiu, Fagaras,
Bucharest
Example: Romania
Single-state problem formulation
A problem is defined by four items:
1. initial state, e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
⚫ e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }

3. goal test, which can be
⚫ explicit, e.g., x = "at Bucharest"
⚫ implicit, e.g., Checkmate(x)

4. path cost (additive)
⚫ e.g., sum of distances, number of actions executed, etc.

A solution is a sequence of actions leading from the initial state


to a goal state
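
A fragment of this formulation in Python; only the successors of Arad are shown, and the pairs follow the <action, state> form above:

successors = {
    "Arad": [("Arad -> Zerind", "Zerind"),
             ("Arad -> Sibiu", "Sibiu"),
             ("Arad -> Timisoara", "Timisoara")],
}

def S(x):
    """Successor function: set of <action, state> pairs reachable from x."""
    return successors.get(x, [])

goal_test = lambda x: x == "Bucharest"    # explicit goal test
# an implicit goal test would instead check a property, e.g. checkmate(x)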
Example: River Crossing
Items: man, wolf, corn, chicken
The man wants to cross the river with all items
⚫ the wolf will eat the chicken
⚫ the chicken will eat the corn

⚫ the boat will take a maximum of two
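
One possible state representation for this puzzle, sketched in Python: record which bank (left "L" or right "R") each of the man, wolf, chicken and corn is on, and reject the unsafe states.

def is_safe(state):
    """state = (man, wolf, chicken, corn), each "L" or "R"."""
    man, wolf, chicken, corn = state
    if wolf == chicken and man != wolf:        # wolf left alone with chicken
        return False
    if chicken == corn and man != chicken:     # chicken left alone with corn
        return False
    return True

initial_state = ("L", "L", "L", "L")           # everything on the left bank
goal_test = lambda s: s == ("R", "R", "R", "R")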


3.3 Searching for solutions
Finding a solution is done by
⚫ searching through the state space
Each problem is transformed
⚫ into a search tree
⚫ generated by the initial state and the
successor function
Search tree
Initial state
⚫ The root of the search tree is a search node
Expanding
⚫ applying successor function to the current state
⚫ thereby generating a new set of states

Leaf nodes
⚫ the states having no successors
Fringe: the set of search nodes that have not been
expanded yet
Refer to next figure
Tree search example
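
A sketch of generic tree search in Python, assuming the Problem interface sketched earlier; here a node is simply the list of states on the path from the root, and a FIFO fringe is used:

from collections import deque

def tree_search(problem):
    fringe = deque([[problem.initial_state]])      # fringe: nodes not yet expanded
    while fringe:
        path = fringe.popleft()                    # the strategy = how we pick from the fringe
        state = path[-1]
        if problem.goal_test(state):
            return path                            # solution: sequence of states to the goal
        for action in problem.actions(state):      # expanding: apply the successor function
            fringe.append(path + [problem.result(state, action)])
    return None                                    # no solution found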
Search tree
The essence of searching
⚫ choosing one option and keeping the others for later
inspection
⚫ in case the first choice turns out not to be correct
Hence we have the search strategy
⚫ which determines the choice of which state to
expand
⚫ good choice → less work → faster

Important:
⚫ state space ≠ search tree
Search tree
State space
⚫ has unique states {A, B}
⚫ while a search tree may have cyclic paths:
A-B-A-B-A-B- …
A good search strategy should avoid
such paths
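
One common way to avoid such cyclic paths (a sketch, not stated on the slide) is to keep a set of already expanded states, turning tree search into graph search:

from collections import deque

def graph_search(problem):
    fringe = deque([[problem.initial_state]])
    explored = set()                               # states already expanded
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if problem.goal_test(state):
            return path
        if state in explored:
            continue                               # never expand the same state twice
        explored.add(state)
        for action in problem.actions(state):
            fringe.append(path + [problem.result(state, action)])
    return None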
Search tree
A node has five components:
⚫ STATE: which state it is in the state space
⚫ PARENT-NODE: from which node it is generated

⚫ ACTION: which action applied to its parent-node


to generate it
⚫ PATH-COST: the cost, g(n), from initial state to
the node n itself
⚫ DEPTH: number of steps along the path from the
initial state
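
The five components of a node, sketched as a Python dataclass; the helper child_node is illustrative and assumes the Problem interface from earlier:

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                          # STATE: the state in the state space
    parent: Optional["Node"] = None     # PARENT-NODE: the node that generated it
    action: Any = None                  # ACTION: action applied to the parent node
    path_cost: float = 0.0              # PATH-COST: g(n) from the initial state
    depth: int = 0                      # DEPTH: number of steps from the initial state

def child_node(problem, parent, action):
    next_state = problem.result(parent.state, action)
    return Node(state=next_state,
                parent=parent,
                action=action,
                path_cost=problem.path_cost(parent.path_cost, parent.state,
                                            action, next_state),
                depth=parent.depth + 1)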
Measuring problem-solving performance
The evaluation of a search strategy
⚫ Completeness:
⚫ is the strategy guaranteed to find a solution when
there is one?
⚫ Optimality:
⚫ does the strategy find the highest-quality solution
when there are several different solutions?
⚫ Time complexity:
⚫ how long does it take to find a solution?
⚫ Space complexity:
⚫ how much memory is needed to perform the search?
Measuring problem-solving performance
In AI, complexity is expressed in
⚫ b, branching factor, maximum number of
successors of any node
⚫ d, the depth of the shallowest goal node.
(depth of the least-cost solution)
⚫ m, the maximum length of any path in the state
space
Time and space are measured in
⚫ number of nodes generated during the search
⚫ maximum number of nodes stored in memory
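
As a rough illustration (the numbers are made up): with branching factor b and shallowest goal depth d, a breadth-first search may generate on the order of 1 + b + b^2 + … + b^d nodes.

b, d = 10, 5
nodes_generated = sum(b**i for i in range(d + 1))   # 111,111 nodes for b = 10, d = 5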
Measuring problem-solving performance

For the effectiveness of a search algorithm
⚫ we can just consider the total cost
⚫ the total cost = path cost (g) of the solution
found + search cost
⚫ search cost = the time necessary to find the solution
