AI Unit 2 Notes
Problem Solving in AI
Problem Solving in AI is a process where an agent tries to achieve a goal from a given initial
state by applying a sequence of actions.
Problem Formulation is the process of deciding what actions and states to consider, given a goal.
A problem can be defined by four components:
Components of a Problem
1. States: The set of all possible configurations of the problem (the state space).
2. Initial State: The starting point of the problem.
3. Successor Function (Operators): The possible moves or transformations from a state.
4. Goal Test: The desired outcome.
Problem Formulation Types:
1. Incremental State Formulation
• In Incremental Formulation, the state includes only the necessary information to
reach the current step from the initial state.
• It builds the solution step by step by adding one piece at a time.
• Focuses on partial solutions that will eventually lead to a complete solution.
Example:
• 8-Queens Problem: Place queens one by one on the board. At each step, the state is
a partial arrangement of queens.
2. Complete State Formulation
• In Complete Formulation, the state describes a complete configuration of the
problem at any given time, whether it is a solution or not.
• The entire environment or problem condition is represented in each state.
Example:
• 8-Puzzle Problem: A complete arrangement of tiles at any point is a state. Even if not
solved, it shows the full board configuration.
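The contrast between the two formulations can be sketched in code. This is a minimal illustration, assuming a list-of-columns encoding for 8-Queens and a flat tuple encoding for the 8-Puzzle; the helper names are illustrative, not standard.

```python
# Incremental formulation (8-Queens): a state is a *partial* solution -
# here, a list of column positions for the queens placed so far, one per row.
partial_state = [0, 4, 7]  # queens placed in rows 0-2 only

def successors_incremental(state, n=8):
    """Successor function: extend the partial state by one queen in the next row."""
    return [state + [col] for col in range(n)]

# Complete-state formulation (8-Puzzle): a state is always a *full* board
# configuration, solved or not (0 marks the blank tile).
full_state = (1, 2, 3,
              4, 0, 6,
              7, 5, 8)

def is_goal(state):
    """Goal test: tiles in order with the blank last."""
    return state == (1, 2, 3, 4, 5, 6, 7, 8, 0)

print(len(successors_incremental(partial_state)))  # 8 candidate placements
print(is_goal(full_state))                         # False: full board, not yet solved
```

Note that the incremental state grows toward a solution, while the complete state is always a whole configuration that the goal test can check directly.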
Problem Solving Agent:
A Problem-Solving Agent is a type of intelligent agent that decides what to do by searching
for a sequence of actions that lead to the desired goal.
It is goal-based and takes actions that maximize its performance measure by solving problems
using search algorithms.
Structure of a Problem-Solving Agent:
1. Goal Formulation:
• Define the goal based on the current situation.
• What does the agent want to achieve?
2. Problem Formulation:
• Define the problem as a set of states and actions to reach the goal.
• Components:
o States
o Initial state
o Successor function (Operators)
o Goal test
3. Search for Solution:
• Search the state space to find a sequence of actions leading to the goal.
• Can use search algorithms (BFS, DFS, A*, etc.).
4. Execute Solution:
• Perform the sequence of actions that solve the problem.
Example: Pathfinding Robot
Problem:
A robot is at point A and needs to reach point G in a maze.
Components of the Problem:
• Initial State: Robot is at A.
• Actions: Move North, South, East, West.
• Goal State: Reach G.
• Transition Model: Moving from one point to another if the path is clear.
• Path Cost: Each move costs 1 unit.
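The robot problem above can be sketched end to end with BFS. This is a minimal sketch assuming a hypothetical 3x3 maze encoded as strings, where '#' marks a wall, 'A' the start, and 'G' the goal; the grid itself is an illustrative assumption.

```python
from collections import deque

MAZE = ["A.#",
        ".#.",
        "..G"]

# Actions: move North, South, East, West (row/column deltas).
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def bfs(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "A")
    frontier = deque([(start, [])])   # (state, action sequence so far)
    explored = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if maze[r][c] == "G":         # goal test
            return path
        for action, (dr, dc) in MOVES.items():
            nr, nc = r + dr, c + dc   # transition model: move if path is clear
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in explored):
                explored.add((nr, nc))
                frontier.append(((nr, nc), path + [action]))
    return None                       # no solution exists

print(bfs(MAZE))  # ['S', 'S', 'E', 'E'] - four moves, path cost 4
```

Because BFS expands states in order of depth and every move costs 1 unit, the first path it returns is also a cheapest one.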
Properties of Search Strategies
1. Completeness
• Definition: Whether the strategy is guaranteed to find a solution whenever one exists.
• Example: Breadth First Search (BFS) is complete if the branching factor is finite.
2. Optimality
• Definition: Whether the strategy finds the best solution (lowest cost solution).
• Example: Uniform Cost Search (UCS) is optimal if path costs are positive.
3. Time Complexity
• Definition: The amount of time (number of nodes generated/expanded) required to
find a solution.
• Example: BFS has time complexity O(b^d), where b is branching factor, d is depth
of solution.
4. Space Complexity
• Definition: The amount of memory required to perform the search (number of
nodes stored in memory).
• Example: BFS stores O(b^d) nodes in memory, whereas DFS needs only O(b·m),
where m is the maximum depth of the search tree.
3) Performance Evaluation of Uniform Cost Search (UCS):
3.1) Completeness:
• Complete — UCS finds a solution whenever one exists, provided every step cost is
positive (bounded below by some ε > 0).
3.2) Optimality:
• Optimal — UCS always finds the least-cost (optimal) solution because it expands
nodes in the order of their cumulative path cost from the start node.
• Ensures that the first time a goal node is expanded, it has been reached with
minimum possible cost.
3.3) Time and Space Complexity:
• Both are O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution and ε is the
smallest step cost.
Heuristic Function
1) A Heuristic Function, denoted as h(n), is a problem-specific function that estimates the
cost or distance from the current node (n) to the goal in a search algorithm.
2) It provides guidance to informed search algorithms to choose the most promising paths,
making the search process faster and more efficient.
Key Points about Heuristic Function:
1. Estimation Function: It estimates how close a state is to the goal.
2. Used in Informed Search: Essential for algorithms like A*, Greedy Best-First Search,
and Hill Climbing.
3. Not Guaranteed to be Accurate: It may not give exact distances but provides a useful
approximation.
4. Domain-Specific Knowledge: Depends on the problem; better heuristics give better
performance.
5. Guides Search: Helps in reducing search time by focusing on promising paths.
Mathematical Representation:
h(n): Estimated cost from node n to the goal
In A* search:
f(n) = g(n) + h(n)
Where:
• f(n) = Estimated total cost of path through node n.
• g(n) = Actual cost from the start to n.
• h(n) = Heuristic estimated cost from n to goal.
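The f(n) = g(n) + h(n) decomposition can be made concrete with a small worked example. This is a minimal sketch using Manhattan distance on a grid as the heuristic; the coordinates and costs below are illustrative assumptions, not from a specific problem.

```python
def h(node, goal):
    """Heuristic: Manhattan distance from node to goal (an estimate, not exact)."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

def f(g_cost, node, goal):
    """Estimated total cost of the cheapest path through node: f(n) = g(n) + h(n)."""
    return g_cost + h(node, goal)

goal = (4, 4)
# Two frontier nodes: A* expands the one with the lower f-value first.
print(f(3, (2, 2), goal))  # g=3, h=4 -> f=7
print(f(5, (3, 4), goal))  # g=5, h=1 -> f=6  (expanded first despite larger g)
```

The second node is preferred even though it has the larger actual cost g(n), because the heuristic estimates it is much closer to the goal.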
Hill Climbing Search Algorithm
Hill Climbing Search is an informed (heuristic-based) search algorithm that seeks a good
solution by iteratively improving the current state; it may settle on a local optimum rather
than the global one.
• It is a local search algorithm that starts with an arbitrary solution and makes
incremental changes to improve the solution based on a heuristic function.
• It moves towards higher value (better) states until it reaches a peak (goal) where no
better neighboring state is found.
Working Principle:
1. Start with an initial state.
2. Evaluate neighboring states using a heuristic function h(n).
3. If a neighbor has a better heuristic value, move to that neighbor.
4. Repeat until no neighbor is better (local maximum).
5. Stop when the goal is reached or no better move is possible.
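The five steps above can be sketched as steepest-ascent hill climbing on a simple objective. This is a minimal sketch, assuming a hypothetical one-dimensional objective with a single peak and a fixed step size; both are illustrative choices.

```python
def objective(x):
    # Smooth function with a single peak at x = 3 (plays the role of h(n)).
    return -(x - 3) ** 2

def hill_climb(start, step=0.1, max_iters=1000):
    current = start                                   # step 1: initial state
    for _ in range(max_iters):
        neighbors = [current - step, current + step]  # step 2: evaluate neighbors
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):     # step 4: no better neighbor
            break                                     # step 5: stop at (local) peak
        current = best                                # step 3: move to better neighbor
    return current

peak = hill_climb(start=0.0)
print(round(peak, 1))  # converges to the peak near x = 3.0
```

With a single-peaked objective this reaches the global maximum; on multi-peaked objectives the same loop stops at whichever local maximum is uphill from the start.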
A* Algorithm
The A* (A-Star) algorithm is a popular and powerful informed search algorithm used for
finding the shortest path from a start node to a goal node in a graph.
It combines the benefits of Uniform Cost Search and Greedy Best-First Search by using both
the actual cost to reach a node and the estimated cost to reach the goal.
Advantages of A*:
• Finds the optimal (shortest) path, provided the heuristic is admissible (never
overestimates the true cost).
• More efficient due to heuristics.
Disadvantages of A*:
• Can be memory-intensive, as it stores all generated nodes.
• Performance highly depends on the quality of heuristic function.
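The combination of g(n) and h(n) described above can be sketched on a small graph. This is a minimal sketch assuming a hypothetical weighted graph and hand-picked admissible heuristic values; the node names and costs are illustrative only.

```python
import heapq

GRAPH = {  # node -> [(neighbor, step cost), ...]
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 2, "G": 0}  # admissible estimates of cost to G

def a_star(start, goal):
    # Priority queue ordered by f(n) = g(n) + h(n).
    frontier = [(H[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f_val, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in GRAPH[node]:
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):   # keep the cheapest route
                best_g[nxt] = new_g
                heapq.heappush(frontier,
                               (new_g + H[nxt], new_g, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("S", "G"))  # (['S', 'A', 'B', 'G'], 5) - the least-cost path
```

Storing every generated node in `best_g` and the frontier is exactly the memory cost noted in the disadvantages above.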
Describe the local search and optimization problem.
Local Search is a search technique used for optimization problems where the goal is to find
the best (optimal) solution according to some objective (fitness) function.
Unlike classical search algorithms (like BFS or DFS), local search does not explore all possible
states. Instead, it iteratively moves from one solution to a neighboring solution that is
"better" according to some criterion.
Key Points:
• Focuses on improving current solution, not building paths.
• Works well for large search spaces.
• Memory efficient — usually keeps only one current state.
• Can get stuck in local optima without reaching global optimum.
An Optimization Problem involves finding the best solution from a set of possible solutions
based on a given objective function (maximize or minimize a value).
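Local search on an optimization problem can be sketched with a small example. This is a minimal sketch in the style of min-conflicts, assuming a 4-Queens board encoded as one row index per column and an objective that counts attacking pairs (to be minimized); the board size and step budget are illustrative choices.

```python
import random

def conflicts(board):
    """Objective function: number of attacking queen pairs (0 = solved)."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def local_search(n=4, max_steps=200, seed=0):
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]  # keeps only one current state
    for _ in range(max_steps):
        if conflicts(board) == 0:                 # global optimum reached
            break
        # Neighbor move: re-place one queen in the row that minimizes conflicts.
        col = rng.randrange(n)
        board[col] = min(range(n),
                         key=lambda row: conflicts(board[:col] + [row] + board[col + 1:]))
    return board

solution = local_search()
print(solution, conflicts(solution))
```

Note the memory efficiency mentioned above: only the single current board is kept, never a search tree, and the loop can still stall on a local optimum if every one-queen move is worse or equal.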