
AI Unit 2: Problem Solving

Problem Solving in AI
Problem Solving in AI is a process where an agent tries to achieve a goal from a given initial
state by applying a sequence of actions.
Problem Formulation is the process of deciding what actions and states to consider, given a goal.
A problem can be defined by four components:
Components of a Problem
1. States: All possible configurations of the problem (the state space).
2. Initial State: The starting point of the problem.
3. Successor Function (Operators): Possible moves or transformations.
4. Goal Test: A check for whether a state is the desired outcome.
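As a sketch, the four components above can be bundled into a small Python class. The names and the toy counting problem here are illustrative assumptions, not from any standard library:

```python
class Problem:
    """Bundles the components of a search problem (illustrative sketch)."""
    def __init__(self, initial_state, successors, goal_test):
        self.initial_state = initial_state  # the starting configuration
        self.successors = successors        # state -> list of (action, next_state)
        self.goal_test = goal_test          # state -> True if state is a goal

# Toy example: reach the number 5 starting from 0 by adding 1 or 2.
counting = Problem(
    initial_state=0,
    successors=lambda s: [("add1", s + 1), ("add2", s + 2)],
    goal_test=lambda s: s == 5,
)
print(counting.goal_test(5))  # True
```

The state space here is implicit: it is whatever states the successor function can generate from the initial state.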
Problem Formulation Types:
1. Incremental State Formulation
• In Incremental Formulation, the state includes only the necessary information to
reach the current step from the initial state.
• It builds the solution step by step by adding one piece at a time.
• Focuses on partial solutions that will eventually lead to a complete solution.
Example:
• 8-Queens Problem: Place queens one by one on the board. At each step, the state is
a partial arrangement of queens.
2. Complete State Formulation
• In Complete Formulation, the state describes a complete configuration of the
problem at any given time, whether it is a solution or not.
• The entire environment or problem condition is represented in each state.
Example:
• 8-Puzzle Problem: A complete arrangement of tiles at any point is a state. Even if not
solved, it shows the full board configuration.
Problem Solving Agent:
A Problem-Solving Agent is a type of intelligent agent that decides what to do by searching
for a sequence of actions that lead to the desired goal.
It is goal-based and takes actions that maximize its performance measure by solving problems
using search algorithms.
Structure of a Problem-Solving Agent:

1. Goal Formulation:
• Define the goal based on the current situation.
• What does the agent want to achieve?
2. Problem Formulation:
• Define the problem as a set of states and actions to reach the goal.
• Components:
o States
o Initial state
o Successor function (Operators)
o Goal test
3. Search for Solution:
• Search the state space to find a sequence of actions leading to the goal.
• Can use search algorithms (BFS, DFS, A*, etc.).
4. Execute Solution:
• Perform the sequence of actions that solve the problem.
Example: Pathfinding Robot

Problem:
A robot is at point A and needs to reach point G in a maze.
Components of the Problem:
• Initial State: Robot is at A.
• Actions: Move North, South, East, West.
• Goal State: Reach G.
• Transition Model: Moving from one point to another if the path is clear.
• Path Cost: Each move costs 1 unit.

Steps Followed by Problem-Solving Agent:


1. Goal Formulation:
o Goal: Reach point G.
2. Problem Formulation:
o Define map as graph nodes (A, B, C, ..., G) and connections (edges).
o Determine which moves are possible.
3. Search for Solution:
o Using BFS or A* to find the shortest path from A to G.
o For example, path found: A → B → E → G
4. Execute Solution:
o Robot follows the path and reaches G.
5. Observe Effects:
o Check if goal is achieved.
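The search step above can be sketched with a plain breadth-first search over a hand-written graph. The maze layout below is an assumption chosen so that the path A → B → E → G from the example exists:

```python
from collections import deque

# Hypothetical maze graph: nodes are points, edges are clear paths.
maze = {
    "A": ["B", "C"],
    "B": ["A", "E"],
    "C": ["A", "D"],
    "D": ["C"],
    "E": ["B", "G"],
    "G": ["E"],
}

def bfs_path(graph, start, goal):
    """Breadth-first search: returns the path with the fewest moves, or None."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(bfs_path(maze, "A", "G"))  # ['A', 'B', 'E', 'G']
```

Executing the solution then amounts to iterating over the returned path and issuing one move per step.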

Steps Involved in Simple Problem-Solving Technique


1. Define the Problem (Problem Formulation)
• Understand and clearly state what needs to be solved.
• Identify initial state, goal state, actions, and constraints.

2. Analyze the Problem


• Understand the complexity, requirements, and limitations.
• Determine whether it is solvable and what information is needed.
3. Generate Possible Solutions (Search for Solutions)
• Create a list of possible actions or steps to move towards the goal.
• Explore different states and transitions.

4. Select the Best Solution (Choose a Path/Plan)


• Evaluate the generated solutions based on efficiency, cost, and feasibility.
• Select the optimal or most effective solution.

5. Implement the Solution (Action Execution)


• Perform the actions/steps chosen in the solution to reach the goal.

6. Evaluate the Results


• Check if the goal is achieved.
• If not successful, revisit previous steps and try alternative solutions.

Example Recap: Maze Solving by a Robot


• Define: Find path from start to exit.
• Analyze: Check maze layout and obstacles.
• Generate: List all possible movements.
• Select: Choose shortest path using search strategy (like A*).
• Implement: Follow the path step by step.
• Evaluate: Check if the robot has reached the exit.

Evaluation of Search Strategy


Search strategies are methods used by AI agents to explore possible solutions to a problem.
The effectiveness of a search strategy is evaluated based on the following criteria:
1. Completeness
• Definition: Whether the strategy guarantees finding a solution if one exists.

• Example: Breadth First Search (BFS) is complete if the branching factor is finite.
2. Optimality
• Definition: Whether the strategy finds the best solution (lowest cost solution).

• Example: Uniform Cost Search (UCS) is optimal if path costs are positive.
3. Time Complexity
• Definition: The amount of time (number of nodes generated/expanded) required to
find a solution.

• Example: BFS has time complexity O(b^d), where b is branching factor, d is depth
of solution.
4. Space Complexity
• Definition: The amount of memory required to perform the search (number of
nodes stored in memory).

• Example: DFS uses O(bm) space, where m is maximum depth.


Evaluating search strategies based on these criteria helps in selecting the most appropriate
algorithm for a specific AI problem depending on the need for speed, memory, and quality
of the solution.
Difference between Blind Search and Heuristic Search
• Blind (uninformed) search uses no knowledge beyond the problem definition; heuristic (informed) search uses a heuristic function h(n) to guide the search.
• Blind search explores the state space systematically; heuristic search expands the most promising nodes first.
• Blind search generally expands more nodes and takes longer; heuristic search is usually faster and more efficient.
• Examples of blind search: BFS, DFS, Uniform Cost Search. Examples of heuristic search: A*, Greedy Best-First Search, Hill Climbing.
Uniform Cost Search (UCS)
Uniform-cost search (also known as Branch & Bound) is an uninformed search algorithm in Artificial Intelligence.
The UCS algorithm uses the lowest cumulative cost to find a path from the source node to the goal node.
Nodes are expanded, starting from the root, in order of minimum cumulative cost.
Uniform-cost search is implemented using a Priority Queue.
Working:
1) Insert the root node into the priority queue with cost 0.
2) Remove the element with the highest priority (i.e., the lowest cumulative cost).
3) If the removed node is the goal node, print the total cost and stop the algorithm.
4) Else, add the removed node to the visited list and enqueue all its children into the priority queue, using their cumulative cost from the root as the priority.
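A minimal sketch of these steps using Python's heapq module as the priority queue. The weighted graph is made up for illustration:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand nodes in order of lowest cumulative cost from the start."""
    frontier = [(0, start, [start])]       # (cumulative cost, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # lowest cost = highest priority
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for child, step_cost in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    return None

# Hypothetical weighted graph: node -> list of (neighbor, edge cost).
g = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 1)]}
print(uniform_cost_search(g, "S", "G"))  # (4, ['S', 'A', 'B', 'G'])
```

Note that the cheaper route S → A → B → G (cost 4) is found even though the direct edges S → B and A → G look tempting locally.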

Performance Evaluation:
1) Completeness:
• Complete — UCS always finds a solution if one exists, even in an infinite search space, as long as the cost of each action is greater than some positive minimum value ε.
2) Optimality:
• Optimal — UCS always finds the least-cost (optimal) solution because it expands nodes in order of their cumulative path cost from the start node.
• This ensures that the first time a goal node is expanded, it has been reached with the minimum possible cost.
3) Time and Space Complexity:
• Both are O(b^(1 + ⌊C*/ε⌋)), where b is the branching factor, C* is the cost of the optimal solution, and ε is the minimum action cost.
Heuristic Function
1) A Heuristic Function, denoted as h(n), is a problem-specific function that estimates the
cost or distance from the current node (n) to the goal in a search algorithm.
2) It provides guidance to informed search algorithms to choose the most promising paths,
making the search process faster and more efficient.
Key Points about Heuristic Function:
1. Estimation Function: It estimates how close a state is to the goal.
2. Used in Informed Search: Essential for algorithms like A*, Greedy Best-First Search,
and Hill Climbing.
3. Not Guaranteed to be Accurate: It may not give exact distances but provides a useful
approximation.
4. Domain-Specific Knowledge: Depends on the problem; better heuristics give better
performance.
5. Guides Search: Helps in reducing search time by focusing on promising paths.
Mathematical Representation:
h(n): Estimated cost from node n to the goal
in A* search
f(n)= g(n) + h(n)
Where:
• f(n) = Estimated total cost of path through node n.
• g(n) = Actual cost from the start to n.
• h(n) = Heuristic estimated cost from n to goal.
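A common concrete heuristic for grid pathfinding is the Manhattan distance. This sketch (the coordinates and the value of g(n) are invented for illustration) shows h(n) and the A* combination f(n) = g(n) + h(n):

```python
def manhattan(node, goal):
    """h(n): estimated remaining cost as grid distance, ignoring obstacles."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

goal = (4, 4)
n = (1, 2)
g_n = 3                      # assumed actual cost already paid to reach n
h_n = manhattan(n, goal)     # |1-4| + |2-4| = 5
f_n = g_n + h_n
print(h_n, f_n)  # 5 8
```

Manhattan distance never overestimates the true cost on a grid with unit moves and no diagonals, which is what makes it a useful (admissible) heuristic.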
Hill Climbing Search Algorithm
Hill Climbing Search is an informed (heuristic-based) search algorithm that tries to improve the current state iteratively until no further improvement is possible.
• It is a local search algorithm that starts with an arbitrary solution and makes incremental changes to improve it based on a heuristic function.
• It moves towards higher-value (better) states until it reaches a peak where no neighboring state is better; this peak may be only a local maximum rather than the global optimum.
Working Principle:
1. Start with an initial state.
2. Evaluate neighboring states using a heuristic function h(n).
3. If a neighbor has a better heuristic value, move to that neighbor.
4. Repeat until no neighbor is better (local maximum).
5. Stop when the goal is reached or no better move is possible.
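The loop above can be sketched in a few lines, here maximizing a toy one-dimensional objective over integer states (the objective and neighborhood are illustrative assumptions):

```python
def hill_climb(start, value, neighbors):
    """Move to the best neighbor while it improves the current value."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current        # local maximum: no neighbor is better
        current = best

# Toy objective with a single peak at x = 3.
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, value, neighbors))  # 3
```

With a single-peaked objective like this one, hill climbing reaches the global maximum; with multiple peaks, it stops at whichever local maximum the start state leads to.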
A* Algorithm
The A* (A-Star) algorithm is a popular and powerful informed search algorithm used for
finding the shortest path from a start node to a goal node in a graph.
It combines the benefits of Uniform Cost Search and Greedy Best-First Search by using both
the actual cost to reach a node and the estimated cost to reach the goal.

Working Steps of A* Algorithm:


1. Initialize the open list (priority queue) with the start node.
2. Create a closed list to keep track of visited nodes.
3. Loop until the open list is empty:
o Pick the node n with the lowest f(n) from the open list.
o If n is the goal node, return the path (solution found).
o Else:
▪ Generate all possible successors of n.
▪ For each successor:
▪ Calculate g(successor), h(successor) and f(successor).
▪ If this path to successor is better, update the open list.
▪ Move n to the closed list.
4. If open list becomes empty without finding goal → No solution.
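The loop above can be sketched as follows. The graph, edge costs, and heuristic values are invented for illustration (the heuristic is admissible for this graph):

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand nodes in order of f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for child, cost in graph.get(node, []):
            if child not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[child], g2, child, path + [child]))
    return None

# Hypothetical graph (node -> (neighbor, cost)) and heuristic estimates to G.
graph = {"A": [("B", 1), ("C", 3)], "B": [("C", 1), ("G", 5)], "C": [("G", 2)]}
h = {"A": 4, "B": 3, "C": 2, "G": 0}
print(a_star(graph, h, "A", "G"))  # (4, ['A', 'B', 'C', 'G'])
```

Compared with the UCS sketch earlier, the only change is that the priority is g(n) + h(n) instead of g(n) alone, which lets the heuristic steer expansion towards the goal.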
Example: (graph figure omitted) a small search graph with start node A and goal node B.
Advantages of A*:
• Finds the optimal (least-cost) path, provided the heuristic is admissible (never overestimates).
• More efficient than uninformed search because the heuristic focuses exploration on promising nodes.
Disadvantages of A*:
• Can be memory-intensive, as it stores all generated nodes.
• Performance highly depends on the quality of heuristic function.
Describe the local search and optimization problem.
Local Search is a search technique used for optimization problems where the goal is to find
the best (optimal) solution according to some objective (fitness) function.
Unlike classical search algorithms (like BFS or DFS), local search does not explore all possible
states. Instead, it iteratively moves from one solution to a neighboring solution that is
"better" according to some criterion.
Key Points:
• Focuses on improving current solution, not building paths.
• Works well for large search spaces.
• Memory efficient — usually keeps only one current state.
• Can get stuck in local optima without reaching global optimum.
An Optimization Problem involves finding the best solution from a set of possible solutions
based on a given objective function (maximize or minimize a value).

Characteristics of Optimization Problems:


• Objective function to evaluate solutions.
• Feasible solutions set (possible states).
• Constraints that solutions must satisfy.
• Goal is to maximize or minimize the objective function.
How Local Search Solves Optimization Problems?

Working Steps of Local Search:


1. Start with a randomly generated solution.
2. Evaluate the objective function for the current solution.
3. Look at neighboring solutions.
4. Move to a neighbor if it improves the objective function.
5. Repeat until a stopping criterion is met (e.g., no better neighbors, time limit).
Example: Traveling Salesman Problem (TSP)
• Problem: Find the shortest possible route that visits a set of cities and returns to the
start.
• Objective function: Total distance of the route (minimize).
• Local Search Strategy:
o Start with an initial route.
o Swap two cities (neighboring solution).
o If the new route is shorter, move to that.
o Repeat until no improvement is possible.
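The swap-based strategy above can be sketched on a tiny instance. The city coordinates are made up; note that this finds a local optimum, which for larger instances is not necessarily the global one:

```python
import itertools

# Hypothetical city coordinates (a unit square).
cities = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}

def tour_length(tour):
    """Total Euclidean length of the route, returning to the start city."""
    total = 0.0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return total

def local_search(tour):
    """Repeatedly swap two cities while the swap shortens the route."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            candidate = tour[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if tour_length(candidate) < tour_length(tour):
                tour, improved = candidate, True
    return tour

best = local_search(["A", "C", "B", "D"])  # start with a crossing route
print(round(tour_length(best), 2))  # 4.0
```

The initial route crosses the square's diagonals (length ≈ 4.83); one swap untangles it into the perimeter tour of length 4, after which no single swap improves it, so the search stops.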
Advantages of Local Search:
• Memory-efficient — needs to store only current state.
• Works well in large or infinite state spaces.