L05 Local Search Algorithms

COL333/671: Introduction to AI

Semester I, 2024-25

Local Search Algorithms

Rohan Paul

Outline
• Last Class
• Informed Search
• This Class
• Local Search Algorithms
• Reference Material
• AIMA Ch. 4.1

Acknowledgement
These slides are intended for teaching purposes only. Some material
has been used/adapted from web sources and from slides by Doina
Precup, Dorsa Sadigh, Percy Liang, Mausam, Dan Klein, Nicholas Roy
and others.

Search Methods for Discrete Optimization

• Setting
• A set of discrete states, X.
• An objective/evaluation function, Eval(X), assigns a “goodness” value to each state.
• The problem is to search the state space for the state X* that maximizes the objective.

• Searching for the optimal solution can be challenging. Why?
• The number of states is very large.
• We cannot simply enumerate all states and find the optimal one.
• We can only evaluate the function; we cannot write it down analytically and optimize it directly.

Key Idea
- Searching for “the optimal” solution is very difficult.
- The question is whether we can search for a reasonably good solution.
Example – Windmill Placements
Problem: Optimizing the locations of windmills in a
wind farm
• An area to place windmills.
• The location of each windmill affects the others: efficiency is reduced for those in the wake of others.
• Grid the area into bins.
• A large number of configurations of windmills
possible.
• Given a configuration we can evaluate the total
efficiency of the farm.
• Can neither enumerate all configurations nor
optimize the power efficiency function analytically.
• Goal is to search for the configuration that
maximizes the efficiency.

Inspired from this example: https://www.shell.in/energy-and-innovation/ai-hackathon/_jcr_content/par/textimage_1834506119_1619963074.stream/1612943059963/4b0a86b7cc0fe7179148284ffed9ef33524c2816/windfarm-layout-optimisation-challenge.pdf
Example: Conference Scheduling

Assign similar papers to the same session. Avoid conflicts between sessions.
Example

4-Queens Problem
• Discrete set of states: 4 queens in 4 columns (4^4 = 256 states)
• Goal is to find a configuration such that
there are no attacks.
• Moving a piece will change the configuration.
• Any configuration can be evaluated using a
function
• h(x) = number of attacks (number of violated
binary constraints)
• Search for an optimal configuration, i.e., one with h = 0.
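The evaluation function h can be written down directly. A minimal Python sketch, assuming the common encoding in which a state is a list `rows` with `rows[c]` giving the row of the queen in column c (this encoding is an assumption for illustration, not from the slide):

```python
def num_attacks(rows):
    """h(x): number of attacking queen pairs; rows[c] is the row of the queen in column c."""
    n = len(rows)
    attacks = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = rows[i] == rows[j]
            same_diag = abs(rows[i] - rows[j]) == j - i  # queens attack along diagonals
            if same_row or same_diag:
                attacks += 1
    return attacks
```

For example, four queens on one diagonal give h = 6 (every pair attacks), while a solved board gives h = 0.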
Example (figure): a configuration with 4 attacks.

Formally:
Local Search Methods

• Keep track of a single "current" state


• We need a principled way to search/explore the state space, hoping to find the state with the optimal evaluation.
• Do not maintain a search tree: we need the solution, not the path that led to it.
• Only maintain a single current state.

• Perform local improvements

• Look for alternatives in the vicinity of the current solution.
• Try to move towards better solutions.
Hill-climbing Search
Let c be a node and let h(c) be an evaluation function giving the value of a node.

Hill climbing:

Let c be the start node
Loop
    Let c' = the highest-valued neighbor of c
    If h(c) ≥ h(c') then return c
    c = c'

Start at a configuration. Evaluate the neighbors. Move to the highest-valued neighbor if its value is higher than the current state's; else stay.
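The pseudocode above translates to a short Python sketch; `neighbors` and `eval_fn` are problem-specific stand-ins, not names from the slide:

```python
def hill_climb(start, neighbors, eval_fn):
    """Greedy ascent: repeatedly move to the best neighbor while it improves the value."""
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=eval_fn)
        if eval_fn(best) <= eval_fn(current):
            return current  # no neighbor is strictly better: local maximum (or plateau)
        current = best
```

For instance, maximizing f(x) = -(x - 3)^2 over the integers with neighbors x ± 1, starting from 0, climbs to 3 and stops.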
Hill climbing for 4-queens
• Select a column and move the queen to the square with the fewest conflicts.
• Perform local modifications to the state by changing the position of one piece till the evaluation is at a minimum.
• Evaluate the possibilities from a state and then jump to the best one.
Example
• Local search looks at a state
and its local neighborhood.
• Not constructing the entire
search tree.
• Consider local modifications
to the state. Immediately
jump to the next promising
neighbor state. Then start
again.
• Highly scalable.

Example: Idea of local improvements
Locally improving a solution for a Travelling Salesperson Problem.

The idea of making local improvements to a candidate solution is a general and widely applicable technique.
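One concrete instance of such a local improvement is the classic 2-opt move for the TSP, which reverses a segment of the tour whenever that shortens it. A sketch (2-opt is a standard choice for illustration, not necessarily the exact move shown in the figure):

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over the points pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a segment of the tour if doing so shortens it."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse segment [i, j)
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-9:
                    tour, improved = cand, True
    return tour
```

On four corners of a unit square, a self-crossing tour is locally repaired to the perimeter of length 4.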
8-Queens Problem
Is this an optimal state?

Issue: the search reaches a solution that cannot be improved: a local minimum (here h = 1; every successor has a higher cost).
Core Problem in Local Search
[Figure: three cost-surface situations]
• Local maximum: all values in the neighborhood decrease.
• Shoulder: neighborhood values are equivalent; the search must travel a longer distance to find a path to a higher value.
• Flat local maximum: neighborhood values are equivalent, beyond which values only decrease.

• Hill climbing is prone to local maxima: neighbors may not be of higher value, so the search will stop at a sub-optimal solution.
• Locally optimal actions may not lead to the globally optimal solution.
Escaping local minima: Adding randomness
• Random Re-starts
• A series of searches from randomly generated initial states.
• Random Walk
• Pick ”any” candidate move (whether it improves the solution or not).

Q: Which method to use for the following cost surfaces: random re-starts or random walk?
Escaping local minima: Adding randomness

• Escaping flat local minima (shoulders)
• When local search reaches a flat area, that is, when all the neighbours have the same cost as the current state, it terminates right away.
• Keep-moving strategy: make sideways moves for a few steps.

• Stochastic Hill Climbing
• Instead of picking the best move, pick any move that produces an improvement.
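Both ideas can be sketched in one routine: prefer a random improving move, and fall back to a bounded number of sideways moves on a plateau. A minimal sketch; `neighbors`, `eval_fn`, and `max_sideways` are illustrative stand-ins:

```python
import random

def stochastic_hill_climb(start, neighbors, eval_fn, max_sideways=20):
    """Pick any improving move at random; allow limited sideways moves on plateaus."""
    current, sideways_left = start, max_sideways
    while True:
        value = eval_fn(current)
        better = [n for n in neighbors(current) if eval_fn(n) > value]
        if better:
            current, sideways_left = random.choice(better), max_sideways
            continue
        equal = [n for n in neighbors(current) if eval_fn(n) == value]
        if equal and sideways_left > 0:  # sideways move across a shoulder
            current, sideways_left = random.choice(equal), sideways_left - 1
            continue
        return current  # local optimum, or sideways budget exhausted
```

The sideways budget prevents the search from wandering forever on a flat region.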
Looking for Solution from Multiple Points
• Local Beam Search
• Algorithm
• Track k states (rather than 1).
• Begin with k randomly sampled states.
• Loop:
• Generate the successors of each of the k states.
• If any one is the goal, the algorithm halts.
• Otherwise, select only the k best successors from the combined list and repeat.
• Note:
• Each run is not independent; information is passed between the parallel search threads.
• Promising states are propagated; less promising states are not.
• Problem: states become concentrated in a small region of the space.
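The algorithm above can be sketched as follows (a minimal version assuming hashable states; `sample_state`, `neighbors`, and `eval_fn` are problem-specific stand-ins):

```python
def local_beam_search(sample_state, neighbors, eval_fn, k=5, steps=100):
    """Track k states; each step pool all successors and keep only the top k."""
    beam = [sample_state() for _ in range(k)]
    for _ in range(steps):
        # Successors of all beam members compete in one shared pool.
        pool = set(beam) | {s for state in beam for s in neighbors(state)}
        beam = sorted(pool, key=eval_fn, reverse=True)[:k]
    return max(beam, key=eval_fn)
```

Because all successors compete in a single pool, good states quickly crowd out the rest, which is exactly the concentration problem noted above.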
Beam Search is a General Search Technique
• Beam search is a general idea (see figure).
• Instead of considering all solutions at a level, consider only the top k.
• Note: memory is usually finite in size, so there is an upper bound on the number of states that can be kept.
• In general, it is an approximate search method.

Beam search is a general idea, shown here in the context of a tree search with beam size 3. For local search we don't construct the full tree.

Source: wikipedia
“Stochastic” Beam Search
• Local beam search
• Problem: states become concentrated
in a small region of space
• Search degenerates to hill climbing
• Stochastic beam search
• Instead of taking the best k states, sample k states from a distribution.
• The probability of selecting a state increases with the value of the state.

Instead of taking the top k, sample k according to a probability.
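The sampling step might look like this in Python, using softmax-style weights so that higher-valued states are more likely to be chosen (the exact distribution is a design choice, not specified on the slide):

```python
import math
import random

def sample_beam(successors, eval_fn, k, temp=1.0):
    """Sample k successors with probability increasing in their value (softmax weights)."""
    vals = [eval_fn(s) for s in successors]
    m = max(vals)  # subtract the max for numerical stability
    weights = [math.exp((v - m) / temp) for v in vals]
    return random.choices(successors, weights=weights, k=k)  # samples with replacement
```

Note that `random.choices` samples with replacement, a simplification; lower `temp` sharpens the preference for high-valued states.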

Source: wikipedia
Simulated Annealing
• In case of an improving move – move there.
• But allow some apparently bad moves - to escape local maxima.
• Decrease the size and the frequency of bad moves over time.

A form of Monte-Carlo search. Move around the environment to explore it instead of systematically sweeping. A powerful technique for large domains.
Simulated Annealing: How to decide p?
• Consider a move from a state of value E to a lower-valued state of value E', i.e., a sub-optimal move (E is higher than E').
• If (E − E') is large:
• Likely to be close to a promising maximum.
• Less inclined to go downhill.
• If (E − E') is small:
• The closest maximum may be shallow.
• More inclined to go downhill, since the move is not as bad.
Simulated Annealing: Selecting Moves
• If the new value Ei is better than the old value E, move to Xi.

• If the new value is worse (Ei < E), move to the neighboring solution with probability given by the Boltzmann distribution, p = exp(−(E − Ei)/T).

• Temperature (T > 0)
• When T is high, the exponent −(E − Ei)/T is close to 0, so the acceptance probability is close to 1: high probability of accepting a worse solution.
• When T is low, the acceptance probability is close to 0: low probability of accepting a worse solution.
• Schedule T to decrease over time.
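Putting the pieces together, a minimal simulated-annealing sketch with a geometric cooling schedule (the schedule and the parameter values are illustrative choices, not from the slide):

```python
import math
import random

def simulated_annealing(start, neighbor, eval_fn, t0=10.0, cooling=0.99, steps=5000):
    """Always accept improving moves; accept worse ones with prob exp((E' - E)/T)."""
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = eval_fn(nxt) - eval_fn(current)  # E' - E; negative for a worse move
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling  # geometric cooling: bad moves get rarer over time
    return current
```

Early on (high T) the walk is nearly random; as T shrinks, the loop degenerates into greedy hill climbing near the optimum.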
Simulated Annealing
• T is high
• The algorithm is in an exploratory phase
• Even bad moves have a high chance of
being picked
• T is low
• The algorithm is in an exploitation phase
• The “bad” moves have very low
probability
• If T is decreased slowly enough
• Simulated annealing is guaranteed to
reach the best solution in the limit.
Able to escape local maxima.
Adding (some) memory: Tabu Search
Motivating example: PCB layout with fewer wire overlaps.
• Local search loses track of the global cost landscape and may frequently come back to the same state.
• Introduce “memory” to prevent re-visits:
• Maintain a finite-sized “tabu” list which remembers recently visited states so that the search does not go back to them.
• If a state proposed in the neighbourhood is in the tabu list, do not go there.
Search with Memory
• Tabu Search
• Maintain a tabu list of the k last assignments.
• Don’t allow an assignment that is already on the tabu list.
• If k = 1, we don’t allow re-assigning the chosen variable to its previous value.
• Maintain a finite-sized tabu list (a form of local memory) which remembers recently visited states so that the search does not return to them.
• Note: Tabu search allows sub-optimal moves.
• Types of memory rules
• Short-term: the states visited in the immediate past.
• Longer-term: guide the search towards certain regions of the space, based on where we have explored in the past.
• Generalisation: search locally by growing a tree for a short horizon and then pick a move (combining local and tree search).
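A minimal sketch of tabu search over states (the state-level tabu list and the parameters are illustrative; real implementations often keep moves, not states, on the list):

```python
from collections import deque

def tabu_search(start, neighbors, eval_fn, tabu_size=10, steps=200):
    """Move to the best non-tabu neighbor (even if worse); remember recent states."""
    current = best = start
    tabu = deque([start], maxlen=tabu_size)  # finite short-term memory
    for _ in range(steps):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=eval_fn)  # may be a sub-optimal move
        tabu.append(current)
        if eval_fn(current) > eval_fn(best):
            best = current
    return best
```

Because recently visited states are forbidden, the search is forced past the local optimum instead of oscillating around it; the best state seen is tracked separately.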
Genetic Algorithms
• Idea
• Variant of stochastic beam search: progression is by modifying a state.
• Combine two states to generate the successor.
• A mechanism to propose next moves in a different way.

• Ingredients
• A coding of a solution into a string of symbols or a bit-string.
• A fitness function to judge the worth of a state (or configuration).
• A population of states (or configurations).
Genetic Algorithms https://rednuht.org/genetic_cars_2/

View genetic algorithms as an evolutionary way to propose moves.

Advantage: the ability to combine large blocks that evolved independently, affecting the granularity of the search.
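The ingredients listed above combine into a minimal genetic-algorithm sketch; fitness-proportional selection, one-point crossover, and bit-flip mutation are common textbook choices, and the parameter values are illustrative:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, p_mut=0.02):
    """Evolve bit-strings: fitness-proportional selection, crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) for ind in pop]  # fitness-proportional selection
        new_pop = []
        for _ in range(pop_size):
            mom, dad = random.choices(pop, weights=weights, k=2)  # pick two parents
            cut = random.randrange(1, length)                     # one-point crossover
            child = mom[:cut] + dad[cut:]
            child = [1 - b if random.random() < p_mut else b for b in child]  # mutate
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```

On the OneMax problem (fitness = number of 1-bits), the population quickly concentrates near the all-ones string; crossover is what lets independently evolved blocks of good bits combine.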
