Unit2d Local Search

Local search algorithms are optimization techniques in AI that focus on incremental improvements to find high-quality solutions in complex problem spaces. Hill climbing is a specific local search method that iteratively improves a solution but can get stuck in local maxima, while simulated annealing introduces randomness to escape such traps. Other methods like local beam search and genetic algorithms also offer diverse strategies for optimization by exploring multiple solutions or combining parent states.


Introduction

• Local search algorithms are used in artificial intelligence and optimization to find high-quality solutions in large and complex problem spaces.

• Unlike global search methods that explore the entire solution space, local search algorithms focus on making incremental changes to improve a current solution until they reach a locally optimal or satisfactory solution.

• Advantages: They use very little memory and can often find reasonable solutions in large state spaces.
• In the state-space landscape, each point has a location (the state) and an elevation (the value of the heuristic or objective function).
• If elevation corresponds to cost, the aim is to find the lowest valley (a global minimum); if elevation corresponds to an objective function, the aim is to find the highest peak (a global maximum).
• A complete local search algorithm always finds a goal if one exists; an
optimal algorithm always finds a global minimum/maximum.
Working
• Initialization: Start with an initial solution, which can be generated
randomly or through some heuristic method.
• Evaluation: Evaluate the quality of the initial solution using an
objective function or a fitness measure. This function quantifies how
close the solution is to the desired outcome.
• Neighbor Generation: Generate a set of neighboring solutions by
making minor changes to the current solution. These changes are
typically referred to as "moves."
• Selection: Choose one of the neighboring solutions based on a
criterion. This step determines the direction in which the search
proceeds.
• Termination: Continue the process iteratively, moving to the selected
neighboring solution, and repeating steps 2 to 4 until a termination
condition is met.
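The five steps above can be sketched as a single loop. The toy objective f(x) = -(x - 3)^2, the ±1 integer neighborhood, and the function names below are illustrative assumptions, not a standard API:

```python
def local_search(initial, evaluate, moves, select, max_iters=100):
    # Illustrative sketch of the five steps; names are hypothetical.
    current = initial                                  # 1. initialization
    for _ in range(max_iters):                         # 5. termination bound
        score = evaluate(current)                      # 2. evaluation
        neighbors = moves(current)                     # 3. neighbor generation
        chosen = select(neighbors, evaluate)           # 4. selection
        if evaluate(chosen) <= score:                  # no improvement: stop
            return current
        current = chosen
    return current

# Toy run: maximize f(x) = -(x - 3)**2 with greedy selection.
f = lambda x: -(x - 3) ** 2
best = local_search(10, f, lambda x: [x - 1, x + 1],
                    select=lambda ns, g: max(ns, key=g))
# best is 3, the global maximum of this toy objective
```

Swapping out the `select` callback is exactly what distinguishes the hill-climbing variants.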
Hill climbing
• Its primary goal is to find the best solution within a given search space
by iteratively improving the current solution.
• Terminates when it reaches a peak, i.e., a state in which no neighbor has a higher value.
 Initialization: Begin with an initial solution, often generated randomly
or using a heuristic method.
 Evaluation: Calculate the quality of the initial solution using an
objective function or fitness measure.
 Neighbor Generation: Generate neighboring solutions by making
small changes (moves) to the current solution.
 Selection: Choose the neighboring solution that results in the most
significant improvement in the objective function.
 Termination: Continue this process until a termination condition is
met
Features of hill climbing
• Generate and Test Approach: This feature involves generating
neighboring solutions and evaluating their effectiveness, always aiming
for an upward move in the solution space.

• Greedy Local Search: The algorithm uses a cheap strategy, opting for
immediate beneficial moves that promise local improvements.

• No Backtracking: Unlike other algorithms, the Hill Climbing Algorithm in AI does not revisit or reconsider previous decisions, persistently moving forward in the quest for the optimal solution.
Hill climbing analysis
• Local maximum: A state that is better than each of its neighboring states but lower than the global maximum.
• Global maximum: The highest state in the state space, i.e., the state with the best value of the objective function.
• Current state: The state where the agent is currently present.
• Flat local maximum: A local maximum where all neighboring states have the same value.
• Shoulder: A flat region that has an uphill edge, so progress is still possible.
Problems in hill climbing regions
Local maximum: The algorithm terminates when the current node becomes a local maximum, i.e., it is better than all its neighbors, even though a global maximum with a higher objective value exists elsewhere.
Solution: Backtracking to an earlier state and exploring a different path.

Ridge: An area higher than its surroundings in which every single-step move leads downhill; multiple local maxima with the same value may also exist.

Solution: Moving in several directions at the same time.


Problems in hill climbing regions
Plateau: All neighboring nodes have the same value of the objective function, so there is no uphill direction to follow.

Solution: Making a big jump from the current state, e.g., a large random move or a random restart.


Types of hill climbing
1) Simple hill climbing: Chooses the first neighbor that improves the
solution.
Step 1: Start with an initial state.
Step 2: Check whether the initial state is the goal state. If yes, return success and exit.
Step 3: Enter a loop to search for a better state:
• Select a neighboring state
• Evaluate this new state
 if it is the goal state, return success and exit
 if it is better than the current state, make it the new current state
 if it is not better, discard it and continue the loop
Step 4: End the process if no better state is found and the goal is not achieved.
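The steps above can be sketched as follows; the toy objective and ±1 integer neighborhood are illustrative assumptions, and the function names are hypothetical:

```python
def simple_hill_climbing(start, value, neighbors):
    """Move to the FIRST neighbor that improves on the current state."""
    current = start
    while True:
        for n in neighbors(current):
            if value(n) > value(current):   # first improving neighbor wins
                current = n
                break                       # restart search from the new state
        else:
            return current                  # no better neighbor: local maximum

value = lambda x: -(x - 3) ** 2
best = simple_hill_climbing(10, value, lambda x: [x - 1, x + 1])
# best is 3
```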
Types of hill climbing
2) Steepest-ascent hill climbing: Evaluates all neighbors and selects the best
one.
Step 1: Evaluate the initial state. If it is the goal state, return success; otherwise make it the current state.
Step 2: Repeat until solution is found.
• Initialize best_successor as best potential improvement over the current
state
• For each operator, apply to current state and evaluate the new state
 if it is goal state, return success and exit
 if better than best_successor, update best_successor to this new state
• If best_successor is an improvement, update the current state
Step 3: Stop if a solution is found or no further improvement is possible.
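Under the same toy assumptions (illustrative objective, ±1 integer neighborhood, hypothetical names), steepest-ascent differs from simple hill climbing only in evaluating every neighbor before moving:

```python
def steepest_ascent(start, value, neighbors):
    """Evaluate ALL neighbors and move to the best one, if it improves."""
    current = start
    while True:
        best_successor = max(neighbors(current), key=value)
        if value(best_successor) <= value(current):
            return current                  # no improving successor: stop
        current = best_successor

value = lambda x: -(x - 3) ** 2
best = steepest_ascent(-7, value, lambda x: [x - 1, x + 1])
# best is 3
```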
Types of hill climbing
3) Stochastic hill climbing: Randomly selects neighbors to explore

Rather than examining all neighbors to find one better than the current node, this algorithm randomly selects a single neighboring node and, based on a predefined criterion, decides whether to move to that node or to pick an alternative.
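A sketch of this idea with a simple "accept only if better" criterion; the toy objective, neighborhood, and fixed random seed (used only to make the illustrative run reproducible) are assumptions:

```python
import random

def stochastic_hill_climbing(start, value, neighbors, max_iters=1000):
    """Examine ONE random neighbor per step; move only if it is better."""
    current = start
    for _ in range(max_iters):
        candidate = random.choice(neighbors(current))   # one random neighbor
        if value(candidate) > value(current):           # acceptance criterion
            current = candidate
    return current

random.seed(0)                                          # reproducible demo run
value = lambda x: -(x - 3) ** 2
best = stochastic_hill_climbing(20, value, lambda x: [x - 1, x + 1])
```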
Applications of hill climbing
The Hill Climbing technique can be used to solve many problems where the current state allows for an accurate evaluation function, such as Network Flow, the Travelling Salesman Problem, the 8-Queens problem, integrated circuit design, job scheduling, game theory, etc.
Simulated Annealing
• A probabilistic local search algorithm inspired by the annealing process
in metallurgy.
• It allows the algorithm to accept worse solutions with a certain
probability, which decreases over time. This randomness introduces
exploration into the search process, helping the algorithm escape local
optima and potentially find global optima.

• SA has been successfully applied to a wide range of optimization problems, such as TSP, protein folding, graph partitioning, and job-shop scheduling. The main advantage of SA is its ability to escape from local minima and converge to a global minimum. SA is also relatively easy to implement and does not require a priori knowledge of the search space.
Simulated Annealing
The algorithm begins by setting the temperature and creating an initial
solution. It then iteratively performs the steps below:
Perturbation: A neighboring solution is created by making a minor
random alteration to the existing one. This disturbance adds exploration
to the search process.
Evaluation: The new solution's energy is computed using the energy function. The new solution is accepted if it has lower energy (i.e., is of higher quality) than the present solution. Otherwise, it may still be accepted probabilistically, based on the temperature and the energy difference.
Temperature Update: The temperature is updated based on the cooling
schedule, progressively lowering its value over iterations. This lowers
the likelihood of accepting poorer alternatives as the search advances.
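The three steps can be sketched as follows; the geometric cooling schedule, starting temperature, and toy energy function (x - 5)^2 are illustrative choices, not prescribed by the algorithm:

```python
import math
import random

def simulated_annealing(start, energy, neighbor,
                        t0=10.0, cooling=0.995, t_min=1e-3):
    """Accept worse moves with probability exp(-dE / T); T shrinks each step."""
    current, t = start, t0
    while t > t_min:
        candidate = neighbor(current)                 # perturbation
        d_e = energy(candidate) - energy(current)     # evaluation
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            current = candidate                       # possibly a worse move
        t *= cooling                                  # temperature update
    return current

random.seed(0)                                        # reproducible demo run
energy = lambda x: (x - 5) ** 2                       # global minimum at x = 5
best = simulated_annealing(0, energy, lambda x: x + random.choice([-1, 1]))
```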
Local beam search
• Instead of starting with a single initial solution, local beam search begins
with multiple solutions, maintaining a fixed number (the "beam width")
simultaneously.
• The algorithm explores the neighbors of all these solutions and keeps only the best k among them.
Initialization: Start with multiple initial solutions.
Evaluation: Evaluate the quality of each initial solution.
Neighbor Generation: Generate neighboring solutions for all the current
solutions.
Selection: Choose the top solutions based on the improvement in the
objective function.
Termination: Continue iterating until a termination condition is met.
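A sketch of the loop above with beam width k = 3; the toy objective and ±1 integer neighborhood are assumptions for illustration:

```python
def local_beam_search(starts, value, neighbors, k=3, max_iters=100):
    """Keep the k best states each round; expand all of them together."""
    beam = sorted(starts, key=value, reverse=True)[:k]
    for _ in range(max_iters):
        # Pool = current beam plus every neighbor of every beam member.
        pool = set(beam) | {n for s in beam for n in neighbors(s)}
        new_beam = sorted(pool, key=value, reverse=True)[:k]
        if new_beam == beam:                 # beam is stable: stop
            break
        beam = new_beam
    return beam[0]                           # best state found

value = lambda x: -(x - 3) ** 2
best = local_beam_search([-15, 0, 12], value, lambda x: [x - 1, x + 1])
# best is 3
```

Note that the states in the beam share one candidate pool, so a promising region can attract the whole beam.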
Local beam search
• Local beam search helps avoid local optima because it maintains diversity among the solutions it explores. However, it requires more memory, since multiple solutions must be stored simultaneously.
Genetic algorithm
• In this algorithm, successor states are generated by combining two parent states.
• GA begins with a set of k randomly generated states called the population. Each state is represented by a string of 0s and 1s.
• Each state is evaluated on the basis of a fitness function, and pairs of parents are selected for reproduction according to their fitness.
• For each pair, a crossover point is selected, and offspring are generated by crossing over the parent strings at this crossover point.
• After crossover, each location is subjected to a random mutation, i.e., a small change in the chromosome, to produce a new solution.
• This is done to maintain and introduce diversity in the genetic population.
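A compact bit-string GA along these lines; the "one-max" fitness (count of 1 bits), population size, mutation rate, and the +1 selection weight are illustrative choices:

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=100,
                      mutation_rate=0.05):
    """Bit-string GA: fitness-weighted parents, one-point crossover, mutation."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Weight parents by fitness (+1 so all-zero strings can still breed).
        weights = [fitness(ind) + 1 for ind in population]
        next_gen = []
        for _ in range(pop_size):
            mom, dad = random.choices(population, weights=weights, k=2)
            point = random.randrange(1, length)            # crossover point
            child = mom[:point] + dad[point:]              # one-point crossover
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]                       # random mutation
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

random.seed(0)                             # reproducible demo run
best = genetic_algorithm(fitness=sum)      # "one-max": maximize the 1 bits
```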