
CST-5115

Lecture-3
Solving problems by searching
Content
• Problem Solving Agents
• Example Problems
• Searching for Solutions
• Summary



Solving Problems by Searching
• Reflex agents are simple
– they base their actions on
– a direct mapping from states to actions
– but they cannot work well in environments
• where this mapping would be too large to store
• and would take too long to learn
• Hence, a goal-based agent is used
Problem-solving agent

• Problem-solving agent
– A kind of goal-based agent
– It solves problems by
• finding sequences of actions that lead to desirable states
(goals)
– To solve a problem,
• the first step is the goal formulation, based on the current
situation
• Goals help organize behavior by limiting the objectives
that the agent is trying to achieve and hence the actions it
needs to consider
Goal formulation

• The goal is formulated
– as a set of world states in which the goal is satisfied
• Reaching from the initial state → the goal state
– requires actions
• Actions are the operators
– causing transitions between world states
– Actions should be abstract to a certain degree, rather than
very detailed
– E.g., "turn left" vs. "turn left by 30 degrees", etc.
Problem formulation

• The process of deciding
– what actions and states to consider
• E.g., driving Yangon → Mandalay
– the in-between states and actions are defined
– States: some places in Yangon and Mandalay
– Actions: turn left, turn right, go straight, accelerate and brake,
etc.
Search

• Because there are many ways to achieve the same goal
– those ways can be expressed together as a tree
– with multiple options of unknown value at a point,
• the agent can examine the different possible sequences of
actions and choose the best
– this process of looking for the best sequence is called
search
– the best sequence is then a list of actions, called a solution
Search algorithm

• Defined as
– taking a problem as input
– and returning a solution
• Once a solution is found
– the agent follows the solution
– and carries out the list of actions – the execution phase
• Design of an agent
– "Formulate, search, execute" (sketched below)
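
A minimal Python sketch of this "formulate, search, execute" design. The function and the callables it takes (formulate_goal, formulate_problem, search) are illustrative stand-ins for the agent's internals, not a fixed API.

# Hedged sketch of the problem-solving agent design.
def problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    """Formulate a goal and a problem from the current percept, then search."""
    goal = formulate_goal(percept)               # goal formulation
    problem = formulate_problem(percept, goal)   # problem formulation
    solution = search(problem)                   # search returns a list of actions
    return solution or []                        # execution phase carries these out in order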
Well-defined problems and solutions
A problem is defined by 5 components:
• Initial state – the state that the agent starts in
• Actions – the actions available to the agent
• Transition model (or successor function)
– a description of what each action does
– a successor is any state reachable from a given state by a
single action
• Goal test – determines whether a given state is a goal state
• Path cost – the sum of the costs of the individual actions along the path
– the path cost function assigns a numeric cost to each path
– the cost function reflects the agent's own performance measure

• The solution of a problem is
– a path from the initial state to a state satisfying the goal test
• Optimal solution
– the solution with the lowest path cost among all solutions
E.g., route-finding from Arad to Bucharest (a sketch follows):
• Initial state: In(Arad)
• Actions: ACTIONS(In(Arad)) = {Go(Sibiu), Go(Timisoara), Go(Zerind)}
• Transition model (successor function): RESULT(s, a)
– e.g., RESULT(In(Arad), Go(Zerind)) = In(Zerind)
• Goal test: {In(Bucharest)}
• Path cost: e.g., the sum of road distances along the path
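
To make the five components concrete, here is a minimal Python sketch of the route-finding problem above. The class name, method names, and the small map fragment are illustrative assumptions; the road distances follow the Romania example quoted later in this lecture (Arad–Sibiu 140 km, Arad–Zerind–Oradea–Sibiu 297 km).

# Hedged sketch: the five problem components as a small Python class.
ROADS = {
    'Arad':      {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Zerind':    {'Arad': 75, 'Oradea': 71},
    'Oradea':    {'Zerind': 71, 'Sibiu': 151},
    'Sibiu':     {'Arad': 140, 'Oradea': 151},
    'Timisoara': {'Arad': 118},
}

class RouteProblem:
    def __init__(self, initial, goal, roads=ROADS):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):
        # ACTIONS(s): one Go(...) action per neighbouring city
        return [f'Go({city})' for city in self.roads[state]]

    def result(self, state, action):
        # Transition model RESULT(s, a): 'Go(Zerind)' -> 'Zerind'
        return action[3:-1]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        # Cost of one action; the path cost is the sum of these
        return self.roads[state][next_state]

problem = RouteProblem('Arad', 'Bucharest')
print(problem.actions('Arad'))               # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(problem.result('Arad', 'Go(Zerind)'))  # Zerind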



• While the agent is executing the solution sequence, it ignores its
percepts when choosing an action, because it knows in advance
what they will be.
• Open-loop system – ignoring the percepts breaks the loop
between agent and environment

• Abstraction
– the process of removing irrelevant information
– leaving only the most essential parts in the description of the
states (i.e., removing detail from the representation)
– Conclusion: only the most important parts that contribute to
searching are used
Example problems

• Toy problems
– those intended to illustrate or exercise various problem-
solving methods
– E.g., puzzle, chess, etc.
• Real-world problems
– tend to be more difficult, and people actually care about their
solutions
– E.g., design, planning, etc.
Toy problems
• Example: vacuum world

• Number of states: 8
• Initial state: any state
• Number of actions: 4
– Left, Right, Suck, NoOp
• Goal: clean up all dirt
– goal states: {7, 8}
• Path cost: each step costs 1 (see the sketch below)
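
A minimal Python sketch of this vacuum world, representing a state as (agent location, dirt in A, dirt in B), which gives the 8 states above. The tuple encoding and the function names are illustrative assumptions.

# Hedged sketch of the two-square vacuum world (2 locations x dirty/clean each = 8 states).
ACTIONS = ['Left', 'Right', 'Suck', 'NoOp']

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == 'Left':
        return ('A', dirt_a, dirt_b)
    if action == 'Right':
        return ('B', dirt_a, dirt_b)
    if action == 'Suck':                      # cleans only the current square
        return (loc,
                False if loc == 'A' else dirt_a,
                False if loc == 'B' else dirt_b)
    return state                              # NoOp leaves the state unchanged

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b          # goal: all dirt removed

# Example: Suck, Right, Suck from a fully dirty world reaches the goal (path cost 3).
s = ('A', True, True)
for a in ['Suck', 'Right', 'Suck']:
    s = result(s, a)
print(goal_test(s))                           # True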
The 8-puzzle

• States:
– a state description specifies the location of each of the
eight tiles and the blank in one of the nine squares
• Initial State:
– Any state in state space
• Successor function:
– the blank moves Left, Right, Up, or Down
• Goal test:
– current state matches the goal configuration
• Path cost:
– each step costs 1, so the path cost is just the length of the
path (a sketch follows)
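
A minimal Python sketch of this formulation, with a state encoded as a tuple of nine entries read row by row and 0 standing for the blank. The goal configuration and the function names are illustrative assumptions.

# Hedged sketch of the 8-puzzle successor function (3x3 board, 0 = blank).
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)            # assumed goal configuration
OFFSETS = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}

def actions(state):
    # Legal moves of the blank that stay on the board
    i = state.index(0)
    moves = []
    if i >= 3:      moves.append('Up')
    if i < 6:       moves.append('Down')
    if i % 3 != 0:  moves.append('Left')
    if i % 3 != 2:  moves.append('Right')
    return moves

def result(state, action):
    # Swap the blank with the neighbouring tile in the chosen direction
    i = state.index(0)
    j = i + OFFSETS[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL                      # path cost: 1 per move, i.e. path length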
8-queens problem

Solution?



• An incremental formulation involves operators that augment
the state description, starting with an empty state
– for the 8-queens problem, this means that each action adds a
queen to the state
• A complete-state formulation starts with all 8 queens on
the board and moves them around
The incremental formulation (a sketch follows):
• States: any arrangement of 0 to 8 queens on the board is a state
• Initial state: no queens on the board
• Actions: add a queen to any empty square
• Transition model: returns the board with a queen added to the
specified square
• Goal test: 8 queens are on the board, none attacking another
• Path cost: of no interest, because only the final state counts
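
A minimal Python sketch of an incremental formulation. As a simplification (an assumption, not the slide's exact wording), it places the next queen in its own row rather than on any empty square; the helper names are illustrative.

# Hedged sketch: a state is a tuple of column indices, one per queen placed so far
# (the k-th queen sits in row k).
N = 8

def actions(state):
    # Incremental formulation: add a queen to any column of the next row
    return [] if len(state) == N else list(range(N))

def result(state, column):
    return state + (column,)

def conflicted(state):
    # True if any two queens attack each other (same column or same diagonal)
    for r1, c1 in enumerate(state):
        for r2, c2 in enumerate(state):
            if r1 < r2 and (c1 == c2 or abs(c1 - c2) == r2 - r1):
                return True
    return False

def goal_test(state):
    return len(state) == N and not conflicted(state)   # 8 queens, none attacked

# Example: one known solution passes the goal test.
print(goal_test((0, 4, 7, 5, 2, 6, 1, 3)))             # True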
Real-world problems
• Route-finding problem
– e.g., airline travel problems that must be solved by a travel-planning
Web site
– States: each state includes a location (e.g., an airport) and the
current time
– Initial state: this is specified by the user's query
– Actions: take any flight from the current location, in any seat class,
leaving after the current time, leaving enough time for within-airport
transfer if needed
– Transition model: the state resulting from taking a flight has the
flight's destination as the current location and the flight's arrival
time as the current time
– Goal test: are we at the final destination specified by the user?
– Path cost: this depends on monetary cost, waiting time, flight time,
customs and immigration procedures, seat quality, time of day, type
of airplane, frequent-flyer mileage awards, and so on
Real-world problems
• Traveling salesperson problem
– a touring problem in which each city must be visited exactly once;
the aim is to find the shortest tour
• VLSI layout problem
– positioning millions of components and connections on a chip to
minimize area, minimize circuit delays, minimize stray capacitances,
and maximize manufacturing yield
• Robot navigation
– a generalization of the route-finding problem
– a robot can move in a continuous space with (in principle) an
infinite set of possible actions and states
• Automatic assembly sequencing
• Protein design
Searching for solutions
• After formulating a problem, the agent must solve it
• A solution is an action sequence
• Search algorithms work by considering various possible action
sequences
• The possible action sequences starting at the initial state form a
search tree
– in the search tree, the root corresponds to the initial state, each
branch to an action, and each node to a state


• Loopy paths lead to repeated states
• A state is expanded by applying each legal action to the current
state, thereby generating a new set of states
• Redundant paths – whenever there is more than one way to get
from one state to another
• E.g.,
– Arad–Sibiu (140 km long)
– Arad–Zerind–Oradea–Sibiu (297 km long) – redundant



Search algorithms vary primarily according to how
they choose which state to expand next—the so-called
search strategy.
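
A generic tree-search sketch in Python. The frontier discipline used here (FIFO) is just one strategy; swapping it for LIFO or a priority queue gives the others. The function name is illustrative, it assumes a problem object like the RouteProblem sketch earlier, and, like plain tree search, it can revisit states forever on loopy or redundant paths.

# Hedged sketch of generic tree search; the frontier order is the strategy.
from collections import deque

def tree_search(problem):
    frontier = deque([(problem.initial, [])])      # entries: (state, actions so far)
    while frontier:
        state, path = frontier.popleft()           # FIFO choice here = breadth-first
        if problem.goal_test(state):
            return path                            # solution: a list of actions
        for action in problem.actions(state):      # expand the current state
            frontier.append((problem.result(state, action), path + [action]))
    return None                                    # no solution found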



Infrastructure for search algorithms
Four components of the data structure kept for each node n of the tree
(a sketch follows):
• n.STATE: the state in the state space to which the node
corresponds
• n.PARENT: the node in the search tree that generated this node
• n.ACTION: the action that was applied to the parent to generate
the node
• n.PATH-COST: the cost, traditionally denoted by g(n), of the
path from the initial state to the node, as indicated by the parent
pointers.
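
A minimal Python sketch of this node structure, plus a child_node helper for building the tree and a solution helper that follows the parent pointers. The names and the step_cost call (as in the earlier route sketch) are illustrative assumptions.

# Hedged sketch of the search-tree node and two common helpers.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # n.STATE
        self.parent = parent          # n.PARENT
        self.action = action          # n.ACTION
        self.path_cost = path_cost    # n.PATH-COST, i.e. g(n)

def child_node(problem, parent, action):
    # Apply an action to a parent node and return the resulting child node
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    # Follow parent pointers back to the root to recover the action sequence
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))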



Measuring problem-solving performance
• Completeness: Is the algorithm guaranteed to find a solution
when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform
the search?



Reference

• Artificial Intelligence: A Modern Approach, Third Edition, by Stuart J. Russell and Peter Norvig



CST-31105

Lecture-4
Search, Games and Problem Solving
• The students have already studied some algorithms in the 2nd-year
Data Structures course.
• This lecture is a reading assignment for the students.
• The students have to give group presentations.
Reference

• Introduction to Artificial Intelligence (Undergraduate Topics in Computer Science) by Wolfgang Ertel, Nathanael T. Black, Springer; 2011 edition (March 15, 2011)

