Coin Changing 2x2

This document discusses dynamic programming and how it can be used to solve problems like making change with coins in the optimal way. It presents an iterative dynamic programming solution that uses a table to store the optimal solutions to subproblems. This avoids recomputing subproblems and has a runtime of O(nt) where n is the amount of change and t is the number of coin types. The document also discusses designing dynamic programming algorithms by expressing problems in terms of overlapping subproblems and filling tables in an order that allows computing each entry based on previously computed entries.


Dynamic Programming

Jonathan Backer
[email protected]
Department of Computer Science
University of British Columbia

July 5, 2007

Introduction

Reading:
- "Dynamic Programming", Chapter 15 of CLRS
- "Dynamic Programming", Section 5.3 of GT

Greedy works if some optimal solution contains the greedy choice.
- Dijkstra's algorithm always adds the cheapest vertex to the shortest-path tree (greedy).
- Dijkstra's algorithm may not work with negative edge weights.

Dynamic programming tries all possible choices.
- Bellman-Ford's algorithm attempts every one-edge extension of the shortest paths found so far (exhaustive).
- Bellman-Ford's algorithm works with negative edge weights.
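To make the contrast concrete, here is a minimal Bellman-Ford sketch in Python (my illustration, not part of the slides; the graph and names are made up). Each pass exhaustively tries to extend every known shortest path by one edge, which is why negative edge weights are handled correctly.

    def bellman_ford(vertices, edges, source):
        """edges: list of (u, v, weight). Returns shortest-path distances from source.
        Note: this sketch omits negative-cycle detection."""
        INF = float("inf")
        dist = {v: INF for v in vertices}
        dist[source] = 0
        for _ in range(len(vertices) - 1):      # |V| - 1 passes
            for u, v, w in edges:               # try every one-edge extension (exhaustive)
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        return dist

    print(bellman_ford(["s", "a", "b"],
                       [("s", "a", 4), ("s", "b", 5), ("a", "b", -3)],
                       "s"))                    # {'s': 0, 'a': 4, 'b': 1}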

Optimal substructure

Both Dijkstra's and Bellman-Ford's algorithms work because you can extend the optimal solution of a subproblem.

Definition
A problem has optimal substructure if some optimal solution is
- an optimal solution to a subproblem combined with
- an optimal choice.

We often don't know which choice to make, so we try them all. This can still be efficient if the subproblems overlap:
- |V| paths to extend in Bellman-Ford (subproblems)
- |E| edges to extend with (choices)

Making change with coins

Problem
Given: coin values c1, c2, ..., ct with which to make change, and the amount of change to be made, n.
Wanted: the number of each coin to use, n1, n2, ..., nt, such that the coins sum to n and as few coins as possible are used.

In practice, denominations are chosen so that the greedy algorithm works, but this is not true in general.

Example
Coins: 1¢, 3¢, and 4¢. Change to make: 6¢.
Greedy → 4¢, 1¢, and 1¢. Optimal → 3¢ and 3¢.
Exhaustive coin changing

Algorithm TryEmAll(C, n)
    int N[C.length]
    for i ← 0 to N.length−1 do
        N[i] ← 0
    if n = 0 then
        return N
    N[1] ← ∞
    for i ← 0 to C.length−1 do
        if n ≥ C[i] then
            subprob ← TryEmAll(C, n − C[i])
            if subprob.sum()+1 < N.sum() then
                N ← subprob
                N[i] ← N[i] + 1
    return N

Recursion tree for TryEmAll

This is inefficient because it recomputes the same subproblems over and over again. For example, TryEmAll([1,3,4], 90) recurses on the amounts 89, 87, and 86; the call on 89 then recurses on 88, 86, and 85; the call on 87 on 86, 84, and 83; and the call on 86 on 85, 83, and 82. Amounts such as 86, 85, and 83 are already solved more than once at depth two, and the duplication keeps growing further down the tree.

A better idea: replace each recursive call with a table look-up.
- Construct a table to store the optimal solution for each n.
- Iteratively increase n and compute its entry.
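A rough Python transcription of TryEmAll (an illustrative sketch, returning the multiset of coins rather than per-denomination counts) makes the recomputation easy to observe: a call counter records how often each subproblem is solved.

    from collections import Counter

    calls = Counter()   # how many times each amount is solved

    def try_em_all(C, n):
        """Exhaustive recursion: a fewest-coin list summing to n (None if impossible)."""
        calls[n] += 1
        if n == 0:
            return []
        best = None
        for c in C:
            if n >= c:
                sub = try_em_all(C, n - c)
                if sub is not None and (best is None or len(sub) + 1 < len(best)):
                    best = sub + [c]
        return best

    print(try_em_all([1, 3, 4], 20))   # e.g. [4, 4, 4, 4, 4]
    print(calls.most_common(3))        # small amounts are solved thousands of times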

Dynamic programming solution

Algorithm DPCoinChange(C, n)
    int N[n + 1][C.length]
    for i ← 0 to C.length−1 do
        N[0][i] ← 0
    for m ← 1 to n do
        N[m][0] ← ∞
        for i ← 0 to C.length−1 do
            if m ≥ C[i] then
                if N[m − C[i]].sum()+1 < N[m].sum() then
                    N[m] ← N[m − C[i]]
                    N[m][i] ← N[m][i] + 1
    return N[n]

Runtime complexity

What is the runtime complexity of this algorithm?
- Even if it updates N[m] on every iteration, there are at most n × t updates.
- Each update copies t integers.
- So the total is O(nt²).

Advantages of eliminating the recursion:
- A simple counting argument gives the runtime complexity.
- No call stack overhead!

Why copy a whole solution during an update when we only chose one coin?
- It is faster to remember the optimal choice and
- backtrack to recover the solution.
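The table-filling step translates directly into Python; this is my own transcription (assuming row m of N holds one optimal coin-count vector for amount m, and using math.inf as the "no solution yet" sentinel):

    import math

    def dp_coin_change(C, n):
        """Bottom-up: N[m][i] = number of coins of value C[i] used for amount m."""
        t = len(C)
        N = [[0] * t for _ in range(n + 1)]
        for m in range(1, n + 1):
            N[m][0] = math.inf              # sentinel: sum(N[m]) is infinite until a solution is found
            for i, c in enumerate(C):
                if m >= c and sum(N[m - c]) + 1 < sum(N[m]):
                    N[m] = N[m - c][:]      # copy a whole row: t integers per update
                    N[m][i] += 1
        return N[n]

    print(dp_coin_change([1, 3, 4], 6))     # [0, 2, 0]: two 3-cent coins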
Faster solution

Algorithm FastDPCoinChange(C, n)
    int minCoins[n + 1], bestChoice[n + 1]
    minCoins[0] ← 0
    bestChoice[0] ← −1
    // main loop
    for m ← 1 to n do
        minCoins[m] ← ∞
        for i ← 0 to C.length−1 do
            if m ≥ C[i] then
                if minCoins[m − C[i]] + 1 < minCoins[m] then
                    minCoins[m] ← minCoins[m − C[i]] + 1
                    bestChoice[m] ← i
    // backtracking
    int N[C.length]
    for i ← 0 to N.length−1 do
        N[i] ← 0
    while n > 0 do
        N[bestChoice[n]] ← N[bestChoice[n]] + 1
        n ← n − C[bestChoice[n]]
    return N

Backtracking

Backtracking eliminates copying and adding t integers in the innermost loop.
- The total runtime complexity is O(nt).

Example: 1¢, 3¢, and 4¢ coins

    n                        0   1   2   3   4   5   6   7   8   9   10
    minCoins                 0   1   2   1   1   2   2   2   2   ?   ?
    best choice (coin used)  ∅   1¢  1¢  3¢  4¢  1¢  3¢  3¢  4¢  ?   ?

How do you make 9¢ and 10¢ change? (The last two columns are left for you to fill in.)
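Here is an illustrative Python version of the faster algorithm (my transcription; it assumes, as the examples do, that every amount from 1 to n can be made, e.g. because a 1-cent coin exists):

    def fast_dp_coin_change(C, n):
        """O(n*t): record fewest coins and best last choice, then backtrack for the counts."""
        INF = float("inf")
        min_coins = [0] + [INF] * n
        best_choice = [-1] * (n + 1)
        for m in range(1, n + 1):                            # main loop
            for i, c in enumerate(C):
                if m >= c and min_coins[m - c] + 1 < min_coins[m]:
                    min_coins[m] = min_coins[m - c] + 1
                    best_choice[m] = i
        counts = [0] * len(C)                                # backtracking
        while n > 0:
            counts[best_choice[n]] += 1
            n -= C[best_choice[n]]
        return counts

    print(fast_dp_coin_change([1, 3, 4], 6))    # [0, 2, 0]
    print(fast_dp_coin_change([1, 3, 4], 8))    # [0, 0, 2]

The two calls match the 6¢ and 8¢ columns of the table above (two 3¢ coins, and two 4¢ coins, respectively).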

Designing a dynamic programming algorithm

Decide on the parameters the problem will have.
- This gives the "shape" of the table and determines the runtime complexity.
- FastDPCoinChange only used n to determine bestChoice, so its table is a one-dimensional array.
- With a limited supply of coins, the solution depends on n and on the number of each type of coin available (a1, a2, ..., at), so the table has one dimension for n, another for a1, another for a2, and so on (a sketch of this variant appears after this design discussion).

What do we need to store in the table?
- DPCoinChange stored all of the best choices made so far.
- FastDPCoinChange stored just the last best choice.

Design (cont'd)

Express the problem in terms of smaller problems.
- For FastDPCoinChange:

      minCoins[n] = min{ 1 + minCoins[n − C[i]] : C[i] ≤ n }

Determine how to fill in the table.
- A subproblem solution must be computed before the subproblems that rely on it.
- This is trickier for multi-dimensional tables; typically row-by-row, column-by-column, or diagonal-by-diagonal.
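As a sketch of how extra parameters change the table (my own illustration, not from the slides), the limited-supply variant below uses a dictionary keyed by (amount, coins still available) in place of a (t+1)-dimensional array; this is the top-down style discussed under "Memoization" below.

    def limited_coin_change(C, supply, n):
        """Fewest coins summing to n using at most supply[i] coins of value C[i]."""
        table = {}   # key: (amount, remaining supply) -- one "dimension" per parameter

        def solve(m, avail):
            if m == 0:
                return 0
            if (m, avail) not in table:
                best = float("inf")
                for i, c in enumerate(C):
                    if c <= m and avail[i] > 0:
                        left = avail[:i] + (avail[i] - 1,) + avail[i + 1:]
                        best = min(best, solve(m - c, left) + 1)
                table[(m, avail)] = best
            return table[(m, avail)]

        return solve(n, tuple(supply))

    print(limited_coin_change([1, 3, 4], [2, 2, 2], 11))   # 3 (one 3-cent and two 4-cent coins)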
Memoization (top-down)

Use divide-and-conquer to fill in the table:
- Return the stored value if it has already been computed.
- Recurse otherwise.
- Save the solution in the table before returning.

Pros:
- If some subproblem is irrelevant, memoization won't solve it.
- If you cannot figure out how to fill in the table, divide-and-conquer will do it for you.

Cons:
- Recursive function calling overhead (stack frames).
- Sometimes misses tricks like the one used in FastDPCoinChange.
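A minimal top-down version of the minCoins recurrence (my sketch, using functools.lru_cache as the table) follows. Note that with a 1-cent coin every amount from 0 to n is a relevant subproblem, so the cache ends up about as large as the bottom-up table; with sparser denominations it could stay much smaller.

    from functools import lru_cache

    def memo_coin_change(C, n):
        """Top-down: the cache is filled only for subproblems that are actually reached."""
        C = tuple(C)

        @lru_cache(maxsize=None)
        def min_coins(m):
            if m == 0:
                return 0
            options = [min_coins(m - c) + 1 for c in C if c <= m]
            return min(options) if options else float("inf")

        return min_coins(n)

    print(memo_coin_change([1, 3, 4], 90))   # 23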
