
LECTURE 06

GREEDY ALGORITHMS (PART 1)


Lecture outcomes

 The Greedy Method Technique
 Fractional Knapsack Problem
 Dijkstra’s Algorithm
Optimization Problems

 Optimization problem: to find not just a solution, but the best solution
 “Greedy algorithm”: sometimes works well for optimization problems
 A greedy algorithm works in phases. At each phase:
   Choose the best immediate result, without regard for future consequences
   You hope that by choosing a local optimum at each step, you will end up at a global optimum
Greedy Method

 The greedy method is a general algorithm design paradigm, built on the following elements:
   configurations: different choices, collections, or values to find
   objective function: a score assigned to configurations, which we want to either maximize or minimize
 It works best when applied to problems with the greedy-choice property:
   a globally optimal solution can always be found by a series of local improvements from a starting configuration
Properties of Greedy Algorithms

 Extremely efficient in solving a problem: no backtracking is involved, which makes them easy to program
 Often guarantee a solution to a problem
 At each step, a greedy algorithm takes the largest step towards its goal, but in doing so it may not obtain the optimal solution
Example: Making Change

 Problem:
  A dollar amount to reach and a collection of coin amounts to use to get there
 Configuration:
  A dollar amount yet to return to a customer plus the coins already returned
 Objective function:
  Minimize the number of coins returned
Example: Making Change

 Greedy solution: Always return the largest coin you can
 Example 1: Coins are valued $.32, $.08, $.01
   Has the greedy-choice property:
   $.32 + $.08 = $.40
 Example 2: Coins are valued $.30, $.20, $.05, $.01
   Does not have the greedy-choice property:
   $.30 + $.05 + $.05 = $.40 (greedy, not the optimal solution)
   $.20 + $.20 = $.40 (optimal solution)
Problems with Greedy-Choice Property

 Suppose you want to count out a certain amount of money, using the fewest possible bills and coins
 A greedy algorithm would do this:
  At each step, take the largest possible bill or coin that does not overshoot
 Example: To make RM6.65, you can choose:
   a RM5 bill
   a RM1 bill, to make RM6
   a 50sen coin, to make RM6.50
   a 10sen coin, to make RM6.60
   a 5sen coin, to make RM6.65
   A total of 5 bills/coins
 For Malaysian money, the greedy algorithm always gives the optimum solution
Problems without Greedy-Choice Property

 Suppose you want to count out a certain amount of money, using the fewest possible bills and coins, where the available denominations are $5, $1, 25¢, 10¢ and 1¢ (no 5¢ coin)
 A greedy algorithm would do this:
  At each step, take the largest possible bill or coin that does not overshoot
 Example: To make $6.30, you can choose:
   a $5 bill
   a $1 bill, to make $6
   a 25¢ coin, to make $6.25
   no 10¢ coin (it would overshoot the remaining 5¢)
   five 1¢ coins, to make $6.30
   A total of 8 bills or coins
Problems without Greedy-Choice Property

 An optimum solution to make $6.30 is:
   a $5 bill
   a $1 bill, to make $6
   no 25¢ coin
   three 10¢ coins, to make $6.30
   no 1¢ coin
 A total of 5 bills or coins
 For this monetary system, the greedy algorithm does not always give the optimum solution, as the sketch below illustrates
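To make the contrast concrete, here is a minimal sketch of the greedy change-making strategy in Python. The denomination lists are assumptions based on the two examples above (Malaysian bills and coins in sen, and the hypothetical dollar system without a 5¢ coin); amounts are kept in the smallest unit to avoid floating-point error.

def greedy_change(amount, denominations):
    """Greedy change-making: repeatedly take the largest
    denomination that does not overshoot the remaining amount."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# Malaysian denominations (in sen): greedy is optimal here.
# RM6.65 = 665 sen -> [500, 100, 50, 10, 5], 5 bills/coins.
print(greedy_change(665, [500, 100, 50, 10, 5]))

# System without the greedy-choice property (no 5-cent coin):
# $6.30 = 630 cents -> [500, 100, 25, 1, 1, 1, 1, 1], 8 pieces,
# while [500, 100, 10, 10, 10] would use only 5.
print(greedy_change(630, [500, 100, 25, 10, 1]))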
Exercise

 You want to get a total of 15 cents in coins using the greedy method. What would be the combination(s) possible if:
   The coins available are in denominations of 25, 10, 5 and 1 cents?
   The coins available are in denominations of 9, 7 and 1 cents?
A Scheduling Problem

 You have to run nine jobs, with running times of 3, 5, 6, 10, 11, 14, 15, 18, and 20 minutes
 You have three processors on which you can run these jobs
 You decide to do the longest-running jobs first, on whatever processor is available

 P1: 20, 10, 3
 P2: 18, 11, 6
 P3: 15, 14, 5

 Time to completion: 18 + 11 + 6 = 35 minutes
 This solution isn’t bad, but we might be able to do better
Another Approach

 What would be the result if you ran the shortest job first?
 Again, the running times are 3, 5, 6, 10, 11, 14, 15, 18, and 20 minutes

 P1: 3, 10, 15
 P2: 5, 11, 18
 P3: 6, 14, 20

 That wasn’t such a good idea; time to completion is now 6 + 14 + 20 = 40 minutes
 Note, however, that the greedy algorithm itself is fast
   All we had to do at each stage was pick the minimum or maximum
An Optimum Solution

 Better solutions do exist:

 P1: 20, 14
 P2: 18, 11, 5
 P3: 15, 10, 6, 3

 This solution is clearly optimal: the total work is 102 minutes, so no schedule on three processors can finish before 102 / 3 = 34 minutes, and every processor here finishes at exactly 34
 Clearly, there are other optimal solutions
 How do we find such a solution?
   One way: Try all possible assignments of jobs to processors
   Unfortunately, this approach can take exponential time
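The longest-job-first heuristic is easy to implement greedily. The sketch below (illustrative, not from the slides) keeps processor loads in a min-heap and assigns each job, longest first, to the least-loaded processor; on the nine jobs above it reproduces the 35-minute schedule, not the optimal 34.

import heapq

def lpt_schedule(jobs, num_processors):
    """Longest-job-first greedy: assign each job, longest first,
    to the currently least-loaded processor."""
    loads = [(0, p) for p in range(num_processors)]  # (load, processor id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(num_processors)]
    for job in sorted(jobs, reverse=True):
        load, p = heapq.heappop(loads)      # least-loaded processor
        assignment[p].append(job)
        heapq.heappush(loads, (load + job, p))
    return assignment, max(load for load, _ in loads)

jobs = [3, 5, 6, 10, 11, 14, 15, 18, 20]
assignment, makespan = lpt_schedule(jobs, 3)
print(assignment)   # [[20, 10, 3], [18, 11, 6], [15, 14, 5]]
print(makespan)     # 35 minutes, as in the slides (optimal is 34)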
The Knapsack Problem (KP)

 KP: an optimization problem used to illustrate both problem and solution:
   Given items of different values and volumes, find the most valuable set of items that will fit in a knapsack of fixed volume.
 Formal Definition:
   There is a knapsack of capacity c > 0 and N items.
   Each item has value vᵢ > 0 and weight wᵢ > 0.
   The goal is to find the selection of items (δᵢ = 1 if item i is selected, 0 if not) that fits, ∑ᵢ₌₁ᴺ δᵢwᵢ ≤ c, such that the total value, ∑ᵢ₌₁ᴺ δᵢvᵢ, is maximized.
 Variations of the problem exist and affect the solution; e.g. 0-1, fractional, unbounded, etc.
Knapsack Problem Implementation

 Recursive tree method
   Each item is either picked or left (a binary choice)
   We pick in a recursive manner and return the total sum of values
   We hit the base case when no capacity remains or no item is available (0 items)
   The picked items’ accumulated weight must be at most the capacity W
   If an item’s weight is more than the remaining capacity W, just leave it
   Else, always maximise the picked value
Implementation

Knapsack(W, weight, values, n)

 If no more items remain or the capacity W is 0, just return 0
 If the weight of item n-1 > W:
  Return Knapsack(W, weight, values, n-1)
  /* this return skips the item, since its weight is > W */
 Else (the weight of item n-1 ≤ W):
  Return the maximum value between two choices:
  (values[n-1] + Knapsack(W - weight[n-1], weight, values, n-1),
   Knapsack(W, weight, values, n-1))
A Simple Recursive Knapsack Algorithm

 Knapsack(W, weight[], values[], n)
   If (n = 0 or W = 0)
    Return 0
   If (weight[n-1] > W)
    Return Knapsack(W, weight[], values[], n-1)
   Else
    Return max(values[n-1] + Knapsack(W - weight[n-1], weight[], values[], n-1),
      Knapsack(W, weight[], values[], n-1))
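As a sanity check, here is the pseudocode above turned into runnable Python; the sample weights, values and capacity are illustrative, not from the slides.

def knapsack(W, weight, values, n):
    """0-1 knapsack by recursion: each item is either picked or left.
    Returns the maximum total value achievable with capacity W
    using the first n items."""
    if n == 0 or W == 0:           # base case: no items or no capacity
        return 0
    if weight[n - 1] > W:          # item n-1 cannot fit: skip it
        return knapsack(W, weight, values, n - 1)
    # Otherwise take the better of picking or leaving item n-1.
    return max(values[n - 1] + knapsack(W - weight[n - 1], weight, values, n - 1),
               knapsack(W, weight, values, n - 1))

weights = [10, 20, 30]
values = [60, 100, 120]
print(knapsack(50, weights, values, len(weights)))  # 220: take weights 20 and 30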
The Fractional Knapsack Problem

 Given: A set S of n items, with each item i having
   bᵢ - a positive benefit
   wᵢ - a positive weight
 Goal: Choose items with maximum total benefit but with weight at most W.
 If we are allowed to take fractional amounts, then this is the fractional knapsack problem.
   In this case, we let xᵢ denote the amount we take of item i
 Objective: maximize ∑ᵢ∈S bᵢ(xᵢ/wᵢ)
 Constraint: ∑ᵢ∈S xᵢ ≤ W
Example

 Given: A set S of n items, with each item i having
   bᵢ - a positive benefit
   wᵢ - a positive weight
 Goal: Choose items with maximum total benefit but with weight at most W; here the “knapsack” holds 10 ml. What item(s) to choose?

 Item:           1      2      3      4      5
 Weight:         4 ml   8 ml   2 ml   6 ml   1 ml
 Benefit:        $12    $32    $40    $30    $50
 Value per ml:   3      4      20     5      50
Fractional Knapsack Algorithm

 Greedy choice: Keep taking the item with the highest value (benefit-to-weight ratio)
   This works since ∑ᵢ∈S bᵢ(xᵢ/wᵢ) = ∑ᵢ∈S (bᵢ/wᵢ)xᵢ
 Run time: O(n log n). Why?
 Correctness: Suppose there is a better solution
   Then there is an item i with higher value than a chosen item j (vᵢ > vⱼ), but xᵢ < wᵢ and xⱼ > 0
   If we substitute some of j with i, we get a better solution
   How much of i: min{wᵢ - xᵢ, xⱼ}
   Thus, there is no better solution than the greedy one
Algorithm fractionalKnapsack(S, W)
 Input: set S of n items with benefit bᵢ and weight wᵢ; max. weight W
 Output: amount xᵢ of each item i to maximize benefit with weight at most W

 for each item i in S
  xᵢ ← 0
  vᵢ ← bᵢ / wᵢ  {value}
 w ← 0  {total weight}
 while w < W
  remove item i with highest vᵢ
  xᵢ ← min{wᵢ, W - w}
  w ← w + xᵢ
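A runnable Python sketch of this pseudocode follows. Instead of a removable priority queue it sorts the items by value density once, which gives the same O(n log n) bound. The test data is the 5-item, 10 ml example from the earlier slide; the greedy answer there is $124.

def fractional_knapsack(items, W):
    """Greedy fractional knapsack.
    items: list of (benefit, weight) pairs; W: capacity.
    Returns (amounts taken per item, total benefit)."""
    # Take items in decreasing benefit-to-weight ratio.
    by_value = sorted(items, key=lambda bw: bw[0] / bw[1], reverse=True)
    amounts, total_benefit, w = [], 0.0, 0.0
    for b, wt in by_value:
        x = min(wt, W - w)            # x_i <- min{w_i, W - w}
        amounts.append((b, wt, x))
        total_benefit += b * (x / wt)
        w += x
        if w >= W:                    # knapsack is full
            break
    return amounts, total_benefit

# Items 1-5 from the example slide: (benefit, weight in ml), W = 10 ml.
items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
print(fractional_knapsack(items, 10)[1])  # 124.0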
Exercise

 The table shows six products, P1 - P6, with their respective weights and benefits. Use the fractional knapsack algorithm to choose products with maximum total benefit but with the total weight at most 24 kg. List the selected items and state the maximum total benefit.

 Product:        P1     P2     P3     P4     P5     P6
 Weight:         8 kg   6 kg   11 kg  7 kg   9 kg   12 kg
 Benefit:        $60    $42    $44    $63    $45    $42
 Value per kg:   7.5    7      4      9      5      3.5
SHORTEST PATHS

Outline

 Shortest path problem
 Shortest path properties
 Dijkstra’s algorithm
   Algorithm
   Edge relaxation
Shortest Path Problem

 The Knapsack problem can be reduced to the problem of finding single-source shortest paths in graphs
 Given a weighted graph and two vertices u and v, we want to find a path of minimum total weight between u and v.
   The length of a path is the sum of the weights of its edges.
 Applications
   Internet packet routing
   Flight reservations
   Driving directions
Shortest Path Problem

 Example: Shortest path between Providence and Honolulu?

 [Figure: weighted graph of flight routes among the airports SFO, ORD, PVD (Providence), LGA, HNL (Honolulu), LAX, DFW and MIA, with mileages as edge weights]
Shortest Path Properties

 Property 1:
   A subpath of a shortest path is itself a shortest path
 Property 2:
   There is a tree of shortest paths from a start vertex to all the other vertices
 Example:
   Tree of shortest paths from Providence
Dijkstra’s Algorithm

 The distance from vertex s to vertex v is the length of a shortest path between s and v
 Dijkstra’s algorithm computes the distances of all the vertices from a given start vertex s
 Assumptions:
   the graph is connected
   the edges are undirected
   the edge weights are nonnegative
 Grow a “cloud” of vertices, beginning with s and eventually covering all the vertices
 Store with each vertex v a label d(v) representing the distance of v from s in the subgraph consisting of the cloud and its adjacent vertices
 At each step
   Add to the cloud the vertex u outside the cloud with the smallest distance label, d(u)
   Update the labels of the vertices adjacent to u
Edge Relaxation

 Consider an edge e = (u,z) such that
   u is the vertex most recently added to the cloud
   z is not in the cloud
 The relaxation of edge e updates the tentative distance d(z) as follows:
   d(z) ← min{d(z), d(u) + weight(e)}
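In code, relaxation is a two-line test. In the sketch below (an assumed dictionary representation, not the slides’ ADT), d maps each vertex to its tentative distance.

def relax(d, u, z, weight_uz):
    """Relax edge (u, z): d(z) <- min{d(z), d(u) + weight(u, z)}."""
    if d[u] + weight_uz < d[z]:
        d[z] = d[u] + weight_uz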
Dijkstra’s Algorithm

 A priority queue stores the vertices outside the cloud
   Key: distance
   Element: vertex
 Locator-based methods
   insert(k,e) returns a locator
   replaceKey(l,k) changes the key of an item
 Store two labels with each vertex:
   Distance (d(v) label)
   Locator in priority queue
Algorithm DijkstraDistances(G, s)
 Q ← new heap-based priority queue
 for all v ∈ G.vertices()
  if v = s
   setDistance(v, 0)
  else
   setDistance(v, ∞)
  l ← Q.insert(getDistance(v), v)
  setLocator(v, l)
 while ¬Q.isEmpty()
  u ← Q.removeMin()
  for all e ∈ G.incidentEdges(u)
   { relax edge e }
   z ← G.opposite(u, e)
   r ← getDistance(u) + weight(e)
   if r < getDistance(z)
    setDistance(z, r)
    Q.replaceKey(getLocator(z), r)
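A runnable Python sketch of this algorithm follows. Python’s heapq has no replaceKey, so this common variant pushes duplicate entries and skips any vertex already in the cloud (“lazy deletion”); the O(m log n) bound is unchanged. The sample graph’s edge weights are reconstructed from the worked example later in the lecture, so treat them as an approximation of that figure.

import heapq

def dijkstra(graph, s):
    """Single-source shortest paths for nonnegative edge weights.
    graph: dict mapping each vertex to a list of (neighbour, weight)."""
    dist = {v: float('inf') for v in graph}
    dist[s] = 0
    cloud = set()
    pq = [(0, s)]                      # (distance label, vertex)
    while pq:
        d_u, u = heapq.heappop(pq)
        if u in cloud:                 # stale duplicate entry: skip
            continue
        cloud.add(u)                   # d(u) is now final
        for z, w in graph[u]:          # relax each edge (u, z)
            r = d_u + w
            if z not in cloud and r < dist[z]:
                dist[z] = r
                heapq.heappush(pq, (r, z))
    return dist

# Undirected graph assumed from the example figure below.
edges = [('A','B',8), ('A','C',2), ('A','D',4), ('B','C',7), ('B','E',2),
         ('C','D',1), ('C','E',3), ('C','F',9), ('D','F',5), ('E','F',5)]
graph = {v: [] for v in 'ABCDEF'}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 7, 'C': 2, 'D': 3, 'E': 5, 'F': 8}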
Analysis

 Graph operations
   Method incidentEdges is called once for each vertex
 Label operations
   We set/get the distance and locator labels of vertex z O(deg(z)) times
   Setting/getting a label takes O(1) time
 Priority queue operations
   Each vertex is inserted once into and removed once from the priority queue, where each insertion or removal takes O(log n) time
   The key of a vertex w in the priority queue is modified at most deg(w) times, where each key change takes O(log n) time
 Dijkstra’s algorithm runs in O((n + m) log n) time provided the graph is represented by the adjacency list structure
   Recall that ∑ᵥ deg(v) = 2m
   The running time can also be expressed as O(m log n) since the graph is connected
 [Figure legend: previously discovered path and weight; new shorter path and weight; relaxed edge]
Dijkstra’s Algorithm Example

 Find the shortest paths to all vertices from source vertex A

 [Figure: example graph on vertices A-F with edge weights A-B 8, A-C 2, A-D 4, B-C 7, B-E 2, C-D 1, C-E 3, C-F 9, D-F 5, E-F 5; a sequence of snapshots shows the cloud growing from A]

 From the source vertex, visit all neighbours and update their distance labels d: d(B) = 8, d(C) = 2, d(D) = 4
 Choose the vertex with the smallest distance label to proceed: here C, with d(C) = 2
 Repeat the process and update the distance labels of its neighbours: d(D) improves to 3, d(E) = 5, d(F) = 11
 If a new path has a smaller total weight, update the label and set the old path as a relaxed edge
 Continuing in this way (adding D, then E, then B, then F), the final distances from A are d(A) = 0, d(B) = 7, d(C) = 2, d(D) = 3, d(E) = 5, d(F) = 8
Why Dijkstra’s Algorithm Works

 Dijkstra’s algorithm is based on the greedy method. It adds vertices by increasing distance.
 Suppose it didn’t find all shortest distances. Let F be the first wrong vertex the algorithm processed.
 When the previous node, D, on the true shortest path was considered, its distance was correct.
 But the edge (D,F) was relaxed at that time!
 Thus, so long as d(F) ≥ d(D), F’s distance cannot be wrong. That is, there is no wrong vertex.

 [Figure: the example graph with its final distance labels, highlighting vertices D and F]
Why It Doesn’t Work for Negative-Weight Edges

 If a node with a negative incident edge were to be added late to the cloud, it could mess up distances for vertices already in the cloud.

 [Figure: a variant of the example graph containing an edge of weight -8; C’s true distance is 1, but it is already in the cloud with d(C) = 5!]
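This failure mode is easy to reproduce with the dijkstra sketch given earlier. The tiny directed graph below is an illustration of my own, not the slide’s graph: C is finalized with label 5 before the negative edge (B, C) is ever examined, so the shorter path through B is never found.

# Hypothetical directed graph with one negative edge. The true
# distance to C is 0 via A -> B -> C, but C enters the cloud with
# label 5 before the negative edge (B, C) is examined.
neg_graph = {'A': [('B', 8), ('C', 5)], 'B': [('C', -8)], 'C': []}
print(dijkstra(neg_graph, 'A'))  # {'A': 0, 'B': 8, 'C': 5} -- wrong for C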
Exercise

 Find the shortest paths to all vertices starting from vertex 3 in the following acyclic graph

 [Figure: the acyclic graph for this exercise]