Algorithms and Data Structures:
Dynamic Programming; Matrix-chain multiplication
ADS: lects 12 and 13 – slide 1 –
Algorithmic Paradigms
Divide and Conquer
Idea: Divide a problem instance into smaller sub-instances of the
same problem, solve these recursively, and then combine the
solutions into a solution of the given instance.
Examples: Mergesort, Quicksort, Strassen’s algorithm, FFT.
Greedy Algorithms
Idea: Find solution by always making the choice that looks
optimal at the moment — don’t look ahead, never go back.
Examples: Prim’s algorithm, Kruskal’s algorithm.
Dynamic Programming
Idea: Turn recursion upside down.
Example: Floyd-Warshall algorithm for the all-pairs shortest-paths
problem.
Dynamic Programming - A Toy Example
Fibonacci Numbers
F0 = 0,
F1 = 1,
Fn = Fn−1 + Fn−2 (for n ≥ 2).
A recursive algorithm
Algorithm Rec-Fib(n)
1. if n = 0 then
2. return 0
3. else if n = 1 then
4. return 1
5. else
6. return Rec-Fib(n − 1)+Rec-Fib(n − 2)
Ridiculously slow: exponentially many repeated computations of Rec-Fib(j)
for small values of j.
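Rec-Fib translates directly into Python; a minimal sketch (the call counter is not part of the pseudocode above, it is added here only to make the repeated work visible):

```python
def rec_fib(n, calls=None):
    """Naive recursive Fibonacci, exactly as in Rec-Fib."""
    if calls is not None:
        calls[0] += 1          # count every invocation
    if n == 0:
        return 0
    if n == 1:
        return 1
    return rec_fib(n - 1, calls) + rec_fib(n - 2, calls)

calls = [0]
print(rec_fib(20, calls))  # 6765
print(calls[0])            # 21891 invocations just for n = 20
```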
Fibonacci Example (cont’d)
Why is the recursive solution so slow?
Running time T (n) satisfies
T(n) = T(n − 1) + T(n − 2) + Θ(1) ≥ Fn ∼ (1.618)^n.
[Recursion tree: Fn calls Fn−1 and Fn−2; Fn−1 calls Fn−2 and Fn−3;
Fn−2 calls Fn−3 and Fn−4; and so on — the subtree for Fn−2 (and every
smaller subtree) is recomputed many times over.]
Lower bounds (in order of increasing quality and effort to prove).
1. Let T'(n) = 2 · T'(n − 2) + Θ(1). Show by induction on n that
   T(n) ≥ T'(n). The recursion reaches zero after n/2 steps. Thus
   T'(n) ≥ 2^(n/2) = (√2)^n ∼ (1.41)^n.
2. We show Fn ≥ (1/2)(3/2)^n for n ≥ 8 by induction on n. Induction
   step: Fn = Fn−1 + Fn−2 ≥ (1/2)((3/2)^(n−1) + (3/2)^(n−2)) =
   (1/2)(3/2)^(n−2)((3/2) + 1) > (1/2)(3/2)^(n−2)(3/2)^2 = (1/2)(3/2)^n.
3. Let T'(n) = T'(n − 1) + T'(n − 2) for n ≥ 2, with T'(0) = 0 and
   T'(1) = 1. Then T(n) ≥ T'(n). We have

   [ T'(n)     ]   [ 1  1 ] [ T'(n − 1) ]   [ 1  1 ]^(n−1) [ T'(1) ]
   [ T'(n − 1) ] = [ 1  0 ] [ T'(n − 2) ] = [ 1  0 ]       [ T'(0) ]

   Basic linear algebra: compute eigenvectors and a base transform to
   diagonalize the matrix. This yields T'(n) = Ω(((1 + √5)/2)^n).
Fibonacci Example (cont’d)
Dynamic Programming Approach
Algorithm Dyn-Fib(n)
1. F [0] = 0
2. F [1] = 1
3. for i ← 2 to n do
4. F [i] ← F [i − 1] + F [i − 2]
5. return F [n]
Build “from the bottom up”
Running Time
Θ(n)
Very fast in practice — just need an array (of linear size) to store the
F[i] values.
Further improvement to Θ(1) space (but still Θ(n) time): just keep the
two most recently computed Fibonacci numbers in variables.
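The Θ(1)-space variant can be sketched in Python as:

```python
def dyn_fib(n):
    """Bottom-up Fibonacci: Θ(n) time, Θ(1) space."""
    if n == 0:
        return 0
    prev, cur = 0, 1               # F[i-2] and F[i-1], rolled forward
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur

print(dyn_fib(10))  # 55
```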
Multiplying Sequences of Matrices
Recall
Multiplying a (p × q) matrix with a (q × r ) matrix (in the
standard way) requires
pqr
multiplications.
We want to compute products of the form
A1 · A2 · · · An .
How do we set the parentheses?
Example
Compute
A (30 × 1) · B (1 × 40) · C (40 × 10) · D (10 × 25)
Multiplication order (A · B) · (C · D) requires
30 · 1 · 40 + 40 · 10 · 25 + 30 · 40 · 25 = 41,200
multiplications.
Multiplication order A · ((B · C) · D) requires
1 · 40 · 10 + 1 · 10 · 25 + 30 · 1 · 25 = 1,400
multiplications.
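Both costs follow from the pqr rule; a quick check in Python (the helper name mult_cost is my own, not from the slides):

```python
def mult_cost(p, q, r):
    """Multiplications needed for a (p x q) times (q x r) product."""
    return p * q * r

# (A · B) · (C · D): A·B is 30x40, C·D is 40x25
cost1 = mult_cost(30, 1, 40) + mult_cost(40, 10, 25) + mult_cost(30, 40, 25)
# A · ((B · C) · D): B·C is 1x10, (B·C)·D is 1x25
cost2 = mult_cost(1, 40, 10) + mult_cost(1, 10, 25) + mult_cost(30, 1, 25)
print(cost1, cost2)  # 41200 1400
```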
The Matrix Chain Multiplication Problem
Input:
Sequence of matrices A1 , . . . , An , where Ai is a
pi−1 × pi -matrix
Output:
Optimal number of multiplications needed to compute
A1 · A2 · · · An , and an optimal parenthesisation to realise
this
Running time of algorithms will be measured in terms of n.
Solution “Attempts”
Approach 1: Exhaustive search (CORRECT but SLOW).
Try all possible parenthesisations and compare them. Correct,
but extremely slow. Similar recurrence as Divide and Conquer
(see below), thus exponential. See also Textbook.
Approach 2: Greedy algorithm (INCORRECT).
Always do the cheapest multiplication first. Does not work
correctly — sometimes, it returns a parenthesisation that is not
optimal:
Example: Consider
A1 (3 × 100) · A2 (100 × 2) · A3 (2 × 2)
Solution proposed by greedy algorithm: A1 · (A2 · A3 ) with
100 · 2 · 2 + 3 · 100 · 2 = 1000 multiplications.
Optimal solution: (A1 · A2 ) · A3 with 3 · 100 · 2 + 3 · 2 · 2 = 612
multiplications.
Solution “Attempts” (cont’d)
Approach 3: Alternative greedy algorithm (INCORRECT).
Set outermost parentheses such that cheapest multiplication is
done last.
Doesn’t work correctly either (Exercise!).
Approach 4: Recursive (Divide and Conquer) - (SLOW - see over).
Divide:
(A1 · · · Ak ) · (Ak+1 · · · An )
For all k, recursively solve the two sub-problems and then take
best overall solution.
For 1 ≤ i ≤ j ≤ n, let
m[i, j] = least number of multiplications needed to com-
pute Ai · · · Aj
Then
m[i, j] = 0                                                     if i = j,
m[i, j] = min over i ≤ k < j of m[i, k] + m[k + 1, j] + pi−1 pk pj   if i < j.
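The recurrence translates directly into a recursive function (exponential-time without memoisation); a sketch with the dimensions passed as the vector p, so that Ai is a p[i−1] × p[i] matrix:

```python
def rec_matrix_chain(p, i, j):
    """m[i, j] computed straight from the recurrence: try every split k."""
    if i == j:
        return 0
    return min(rec_matrix_chain(p, i, k) + rec_matrix_chain(p, k + 1, j)
               + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

p = [30, 1, 40, 10, 25]            # the A · B · C · D example
print(rec_matrix_chain(p, 1, 4))   # 1400
```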
The Recursive Algorithm (SLOW)
Running time T(n) satisfies the recurrence

    T(n) = Σ_{k=1}^{n−1} (T(k) + T(n − k)) + Θ(n).

This implies
    T(n) = Ω(2^n).
We show T(n) ≥ c · 2^n for some constant c by induction on n. Base case
easy (choose the constant suitably).
Induction hypothesis: T(k) ≥ c · 2^k for all k < n.
Induction step:
    T(n) ≥ Σ_{k=1}^{n−1} (T(k) + T(n − k)) = Σ_{k=1}^{n−1} 2T(k)
         ≥ Σ_{k=1}^{n−1} 2c · 2^k = c · Σ_{k=1}^{n−1} 2^(k+1) ≥ c · 2^n.
Dynamic Programming Solution
As before:
m[i, j] = least number of multiplications needed to
compute Ai · · · Aj
Moreover,
s[i, j] = (the smallest) k such that i ≤ k < j and
m[i, j] = m[i, k] + m[k + 1, j] + pi−1 pk pj .
s[i, j] can be used to reconstruct the optimal parenthesisation.
Idea
Compute the m[i, j] and s[i, j] in a bottom-up fashion.
TURN RECURSION UPSIDE DOWN :-)
Implementation
Algorithm Matrix-Chain-Order(p)
1. n ← p.length − 1
2. for i ← 1 to n do
3. m[i, i] ← 0
4. for ℓ ← 2 to n do
5. for i ← 1 to n − ` + 1 do
6. j ←i +`−1
7. m[i, j] ← ∞
8. for k ← i to j − 1 do
9. q ← m[i, k] + m[k + 1, j] + pi−1 pk pj
10. if q < m[i, j] then
11. m[i, j] ← q
12. s[i, j] ← k
13. return s
Running Time: Θ(n^3)
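Matrix-Chain-Order carries over to Python almost line by line; a sketch with the tables kept as dictionaries keyed by (i, j), indices 1-based as on the slide:

```python
def matrix_chain_order(p):
    """Bottom-up DP over increasing chain lengths; returns tables m and s."""
    n = len(p) - 1
    m = {(i, i): 0 for i in range(1, n + 1)}
    s = {}
    for length in range(2, n + 1):              # chain length ℓ
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i, j] = float("inf")
            for k in range(i, j):
                q = m[i, k] + m[k + 1, j] + p[i - 1] * p[k] * p[j]
                if q < m[i, j]:
                    m[i, j] = q
                    s[i, j] = k                 # remember the best split
    return m, s

m, s = matrix_chain_order([30, 1, 40, 10, 25])
print(m[1, 4], s[1, 4])  # 1400 1
```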
Example
A1 (30 × 1) · A2 (1 × 40) · A3 (40 × 10) · A4 (10 × 25)
Solution for m and s
m     1      2      3       4
1     0   1200    700    1400
2            0    400     650
3                   0   10 000
4                            0

s     2     3     4
1     1     1     1
2           2     3
3                 3
Optimal Parenthesisation
A1 · ((A2 · A3) · A4)
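From s the parenthesisation can be rebuilt recursively; a sketch (the s entries are hard-wired here from the table on this slide):

```python
def parens(s, i, j):
    """Rebuild the optimal parenthesisation string from the split table s."""
    if i == j:
        return f"A{i}"
    k = s[i, j]
    return f"({parens(s, i, k)} · {parens(s, k + 1, j)})"

# split table for the A1..A4 example above
s = {(1, 2): 1, (1, 3): 1, (1, 4): 1, (2, 3): 2, (2, 4): 3, (3, 4): 3}
print(parens(s, 1, 4))  # (A1 · ((A2 · A3) · A4))
```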
Multiplying the Matrices
Algorithm Matrix-Chain-Multiply(A, p)
1. n ← A.length
2. s ←Matrix-Chain-Order(p)
3. return Rec-Mult(A, s, 1, n)
Algorithm Rec-Mult(A, s, i, j)
1. if i < j then
2. C ←Rec-Mult(A, s, i, s[i, j])
3. D ←Rec-Mult(A, s, s[i, j] + 1, j)
4. return (C ) · (D)
5. else
6. return Ai
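Rec-Mult can be sketched with plain nested-list matrices (matmul is a naive triple loop, not part of the slides; s is the split table as returned by Matrix-Chain-Order):

```python
def matmul(X, Y):
    """Naive (p x q) * (q x r) product on nested lists."""
    q, r = len(Y), len(Y[0])
    return [[sum(row[b] * Y[b][c] for b in range(q)) for c in range(r)]
            for row in X]

def rec_mult(A, s, i, j):
    """Multiply A[i..j] (1-based) along the splits recorded in s."""
    if i == j:
        return A[i - 1]
    k = s[i, j]
    return matmul(rec_mult(A, s, i, k), rec_mult(A, s, k + 1, j))

# tiny usage example: any split order gives the same product
A = [[[1, 2]], [[1, 0], [0, 1]], [[3], [4]]]      # 1x2, 2x2, 2x1
print(rec_mult(A, {(1, 3): 2, (1, 2): 1}, 1, 3))  # [[11]]
```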
Problems
See Wikipedia:
http://en.wikipedia.org/wiki/Dynamic_programming
[CLRS] Sections 15.2-15.3
1. Review the Edit-Distance Algorithm and try to understand why it is
a dynamic programming algorithm.
2. Exercise 15.2-1 of [CLRS].