Introduction to Algorithms, Cormen et al., Chapter 15 Solutions


Selected Solutions for Chapter 15: Dynamic Programming

Solution to Exercise 15.2-5


Each time the l-loop executes, the i-loop executes n − l + 1 times. Each time the i-loop executes, the k-loop executes j − i = l − 1 times, each time referencing m twice. Thus the total number of times that an entry of m is referenced while computing other entries is ∑_{l=2}^{n} (n − l + 1)(l − 1) · 2. Thus,

$$
\begin{aligned}
\sum_{i=1}^{n}\sum_{j=i}^{n} R(i,j)
  &= \sum_{l=2}^{n} (n-l+1)(l-1)\cdot 2 \\
  &= 2\sum_{l=1}^{n-1} (n-l)\,l \\
  &= 2\sum_{l=1}^{n-1} nl - 2\sum_{l=1}^{n-1} l^2 \\
  &= 2\cdot\frac{(n-1)n\cdot n}{2} - 2\cdot\frac{(n-1)n(2n-1)}{6} \\
  &= n^3 - n^2 - \frac{2n^3 - 3n^2 + n}{3} \\
  &= \frac{n^3 - n}{3}\,.
\end{aligned}
$$
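As a quick numerical sanity check of the closed form (not part of the original solution; a few lines of plain Python with the sum written exactly as above):

# Verify that sum_{l=2}^{n} (n - l + 1)(l - 1) * 2 equals (n^3 - n) / 3.
for n in range(1, 20):
    total = sum((n - l + 1) * (l - 1) * 2 for l in range(2, n + 1))
    assert 3 * total == n**3 - n, (n, total)
print("closed form holds for n = 1, ..., 19")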

Solution to Exercise 15.3-1


Running RECURSIVE-MATRIX-CHAIN is asymptotically more efficient than enumerating all the ways of parenthesizing the product and computing the number of multiplications for each.
Consider the treatment of subproblems by the two approaches.


•	For each possible place to split the matrix chain, the enumeration approach finds all ways to parenthesize the left half, finds all ways to parenthesize the right half, and looks at all possible combinations of the left half with the right half. The amount of work to look at each combination of left- and right-half subproblem results is thus the product of the number of ways to do the left half and the number of ways to do the right half.

•	For each possible place to split the matrix chain, RECURSIVE-MATRIX-CHAIN finds the best way to parenthesize the left half, finds the best way to parenthesize the right half, and combines just those two results. Thus the amount of work to combine the left- and right-half subproblem results is O(1). (A sketch of the procedure appears below.)
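For concreteness, here is a Python sketch of RECURSIVE-MATRIX-CHAIN following the pseudocode of Section 15.2 (the dimension sequence p and the 1-based indices i, j are the book's; the rest of the code is an illustrative rendering, not the book's own):

def recursive_matrix_chain(p, i, j):
    # Minimum number of scalar multiplications needed to compute A_i ... A_j,
    # where A_k has dimensions p[k-1] x p[k]. No memoization, hence the
    # exponential number of recursive calls analyzed below.
    if i == j:
        return 0
    best = float('inf')
    for k in range(i, j):               # try every place to split the chain
        q = (recursive_matrix_chain(p, i, k)
             + recursive_matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        if q < best:
            best = q
    return best

For example, on the book's chain p = [30, 35, 15, 5, 10, 20, 25], the call recursive_matrix_chain(p, 1, 6) returns 15125.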

Section 15.2 argued that the running time for enumeration is Ω(4^n / n^{3/2}). We will show that the running time for RECURSIVE-MATRIX-CHAIN is O(n 3^{n−1}).

To get an upper bound on the running time of RECURSIVE-MATRIX-CHAIN, we'll use the same approach used in Section 15.2 to get a lower bound: derive a recurrence of the form T(n) ≤ ... and solve it by substitution. For the lower-bound recurrence, the book assumed that the execution of lines 1–2 and 6–7 each take at least unit time. For the upper-bound recurrence, we'll assume those pairs of lines each take at most constant time c. Thus, we have the recurrence

$$
T(n) \le
\begin{cases}
c & \text{if } n = 1\,, \\
\displaystyle c + \sum_{k=1}^{n-1} \bigl(T(k) + T(n-k) + c\bigr) & \text{if } n \ge 2\,.
\end{cases}
$$

This is just like the book's recurrence except that it has c instead of 1, and so we can rewrite it as

$$
T(n) \le 2\sum_{i=1}^{n-1} T(i) + cn\,.
$$

We shall prove that T(n) = O(n 3^{n−1}) using the substitution method. (Note: Any upper bound on T(n) that is o(4^n / n^{3/2}) will suffice. You might prefer to prove one that is easier to think up, such as T(n) = O(3.5^n).) Specifically, we shall show that T(n) ≤ c n 3^{n−1} for all n ≥ 1. The basis is easy, since T(1) ≤ c = c · 1 · 3^{1−1}. Inductively, for n ≥ 2 we have
$$
\begin{aligned}
T(n) &\le 2\sum_{i=1}^{n-1} T(i) + cn \\
     &\le 2\sum_{i=1}^{n-1} c\,i\,3^{i-1} + cn \\
     &= c\left(2\sum_{i=1}^{n-1} i\,3^{i-1} + n\right) \\
     &= c\left(2\left(\frac{n\,3^{n-1}}{3-1} + \frac{1-3^n}{(3-1)^2}\right) + n\right) \qquad \text{(see below)} \\
     &= c\,n\,3^{n-1} + c\left(\frac{1-3^n}{2} + n\right) \\
     &= c\,n\,3^{n-1} + \frac{c}{2}\,(2n + 1 - 3^n) \\
     &\le c\,n\,3^{n-1} \qquad \text{for all } c > 0,\ n \ge 1 \text{ (since } 3^n \ge 2n+1 \text{ for } n \ge 1\text{)}\,.
\end{aligned}
$$


Running RECURSIVE-MATRIX-CHAIN takes O(n 3^{n−1}) time, and enumerating all parenthesizations takes Ω(4^n / n^{3/2}) time, and so RECURSIVE-MATRIX-CHAIN is more efficient than enumeration.
Note: The above substitution uses the following fact:

$$
\sum_{i=1}^{n-1} i x^{i-1} = \frac{n x^{n-1}}{x-1} + \frac{1-x^n}{(x-1)^2}\,.
$$

This equation can be derived from equation (A.5) by taking the derivative. Let

$$
f(x) = \sum_{i=1}^{n-1} x^i = \frac{x^n - 1}{x-1} - 1\,.
$$

Then

$$
\sum_{i=1}^{n-1} i x^{i-1} = f'(x) = \frac{n x^{n-1}}{x-1} + \frac{1-x^n}{(x-1)^2}\,.
$$
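As a small added illustration of the gap (not part of the original solution), the number of procedure calls C(n) made by RECURSIVE-MATRIX-CHAIN on a chain of n matrices satisfies C(1) = 1 and C(n) = 1 + ∑_{k=1}^{n−1} (C(k) + C(n−k)), a recurrence of the same shape as the one above. A few lines of Python compare it with n 3^{n−1} and 4^n / n^{3/2}:

from functools import lru_cache

@lru_cache(maxsize=None)
def calls(n):
    # Number of calls made by RECURSIVE-MATRIX-CHAIN on a chain of n matrices.
    if n == 1:
        return 1
    return 1 + sum(calls(k) + calls(n - k) for k in range(1, n))

for n in range(1, 11):
    print(n, calls(n), n * 3 ** (n - 1), round(4 ** n / n ** 1.5))

The call count works out to exactly 3^{n−1}, which stays below n 3^{n−1} and, for these values of n, below 4^n / n^{3/2} as well.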

Solution to Exercise 15.4-4


When computing a particular row of the c table, no rows before the previous row are needed. Thus only two rows (2 · Y.length entries) need to be kept in memory at a time. (Note: Each row of c actually has Y.length + 1 entries, but we don't need to store the column of 0s; instead we can make the program "know" that those entries are 0.) With this idea, we need only 2 · min(m, n) entries if we always call LCS-LENGTH with the shorter sequence as the Y argument.
We can thus do away with the c table as follows:

•	Use two arrays of length min(m, n), previous-row and current-row, to hold the appropriate rows of c.
•	Initialize previous-row to all 0 and compute current-row from left to right.
•	When current-row is filled, if there are still more rows to compute, copy current-row into previous-row and compute the new current-row.

Actually, only a little more than one row's worth of c entries (min(m, n) + 1 entries) are needed during the computation. The only entries needed in the table when it is time to compute c[i, j] are c[i, k] for k ≤ j − 1 (i.e., earlier entries in the current row, which will be needed to compute the next row); and c[i − 1, k] for k ≥ j − 1 (i.e., entries in the previous row that are still needed to compute the rest of the current row). This is one entry for each k from 1 to min(m, n), except that there are two entries with k = j − 1, hence the additional entry needed besides the one row's worth of entries.
We can thus do away with the c table as follows:

•	Use an array a of length min(m, n) + 1 to hold the appropriate entries of c. At the time c[i, j] is to be computed, a will hold the following entries:
	- a[k] = c[i, k] for 1 ≤ k < j − 1 (i.e., earlier entries in the current row),
	- a[k] = c[i − 1, k] for k ≥ j − 1 (i.e., entries in the previous row),
	- a[0] = c[i, j − 1] (i.e., the previous entry computed, which couldn't be put into the right place in a without erasing the still-needed c[i − 1, j − 1]).
•	Initialize a to all 0 and compute the entries from left to right.
	- Note that the 3 values needed to compute c[i, j] for j > 1 are in a[0] = c[i, j − 1], a[j − 1] = c[i − 1, j − 1], and a[j] = c[i − 1, j].
	- When c[i, j] has been computed, move a[0] (which holds c[i, j − 1]) to its "correct" place, a[j − 1], and put c[i, j] in a[0].
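As a hedged Python sketch of the two-row scheme described first above (the function name, and the choice to keep the column of 0s so that each row has min(m, n) + 1 entries rather than min(m, n), are illustrative choices, not the book's):

def lcs_length(X, Y):
    # Keep Y as the shorter sequence so each row has only min(m, n) + 1 entries.
    if len(Y) > len(X):
        X, Y = Y, X
    n = len(Y)
    previous_row = [0] * (n + 1)   # row i - 1 of the c table
    current_row = [0] * (n + 1)    # row i of the c table (entry 0 stays 0)
    for i in range(1, len(X) + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                current_row[j] = previous_row[j - 1] + 1
            else:
                current_row[j] = max(previous_row[j], current_row[j - 1])
        # Reuse the storage: the old previous_row becomes the next current_row.
        previous_row, current_row = current_row, previous_row
    return previous_row[n]

For instance, lcs_length("ABCBDAB", "BDCABA") returns 4, the LCS length for the example sequences of Section 15.4. The single-array refinement described above saves roughly another row's worth of space but requires the careful a[0] bookkeeping spelled out in the bullets.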

Solution to Problem 15-4


Note: We assume that no word is longer than will fit into a line, i.e., l_i ≤ M for all i.
First, we'll make some definitions so that we can state the problem more uniformly. Special cases about the last line and worries about whether a sequence of words fits in a line will be handled in these definitions, so that we can forget about them when framing our overall strategy.
Define

$$
extras[i, j] = M - j + i - \sum_{k=i}^{j} l_k
$$

to be the number of extra spaces at the end of a line containing words i through j. Note that extras may be negative.

Now define the cost of including a line containing words i through j in the sum we want to minimize:

$$
lc[i, j] =
\begin{cases}
\infty & \text{if } extras[i, j] < 0 \text{ (i.e., words } i, \ldots, j \text{ don't fit)}\,, \\
0 & \text{if } j = n \text{ and } extras[i, j] \ge 0 \text{ (last line costs 0)}\,, \\
(extras[i, j])^3 & \text{otherwise}\,.
\end{cases}
$$

By making the line cost infinite when the words don't fit on it, we prevent such an arrangement from being part of a minimal sum, and by making the cost 0 for the last line (if the words fit), we prevent the arrangement of the last line from influencing the sum being minimized.
We want to minimize the sum of lc over all lines of the paragraph.
Our subproblems are how to optimally arrange words 1, ..., j, where j = 1, ..., n.
Consider an optimal arrangement of words 1, ..., j. Suppose we know that the last line, which ends in word j, begins with word i. The preceding lines, therefore, contain words 1, ..., i − 1. In fact, they must contain an optimal arrangement of words 1, ..., i − 1. (The usual type of cut-and-paste argument applies.)
Let c[j] be the cost of an optimal arrangement of words 1, ..., j. If we know that the last line contains words i, ..., j, then c[j] = c[i − 1] + lc[i, j]. As a base case, when we're computing c[1], we need c[0]. If we set c[0] = 0, then c[1] = lc[1, 1], which is what we want.

But of course we have to figure out which word begins the last line for the subproblem of words 1, ..., j. So we try all possibilities for word i, and we pick the one that gives the lowest cost. Here, i ranges from 1 to j. Thus, we can define c[j] recursively by

$$
c[j] =
\begin{cases}
0 & \text{if } j = 0\,, \\
\displaystyle\min_{1 \le i \le j}\,\bigl(c[i-1] + lc[i, j]\bigr) & \text{if } j > 0\,.
\end{cases}
$$
Note that the way we defined lc ensures that

•	all choices made will fit on the line (since an arrangement with lc = ∞ cannot be chosen as the minimum), and
•	the cost of putting words i, ..., j on the last line will not be 0 unless this really is the last line of the paragraph (j = n) or words i, ..., j fill the entire line.

We can compute a table of c values from left to right, since each value depends
only on earlier values.
To keep track of what words go on what lines, we can keep a parallel p table that points to where each c value came from. When c[j] is computed, if c[j] is based on the value of c[k − 1], set p[j] = k. Then after c[n] is computed, we can trace the pointers to see where to break the lines. The last line starts at word p[n] and goes through word n. The previous line starts at word p[p[n]] and goes through word p[n] − 1, etc.
In pseudocode, here's how we construct the tables:

PRINT-NEATLY(l, n, M)
    let extras[1..n, 1..n], lc[1..n, 1..n], and c[0..n] be new arrays
    // Compute extras[i, j] for 1 ≤ i ≤ j ≤ n.
    for i = 1 to n
        extras[i, i] = M − l_i
        for j = i + 1 to n
            extras[i, j] = extras[i, j − 1] − l_j − 1
    // Compute lc[i, j] for 1 ≤ i ≤ j ≤ n.
    for i = 1 to n
        for j = i to n
            if extras[i, j] < 0
                lc[i, j] = ∞
            elseif j == n and extras[i, j] ≥ 0
                lc[i, j] = 0
            else lc[i, j] = (extras[i, j])³
    // Compute c[j] and p[j] for 1 ≤ j ≤ n.
    c[0] = 0
    for j = 1 to n
        c[j] = ∞
        for i = 1 to j
            if c[i − 1] + lc[i, j] < c[j]
                c[j] = c[i − 1] + lc[i, j]
                p[j] = i
    return c and p
Quite clearly, both the time and space are Θ(n²).

In fact, we can do a bit better: we can get both the time and space down to Θ(nM). The key observation is that at most ⌈M/2⌉ words can fit on a line. (Each word is at least one character long, and there's a space between words.) Since a line with words i, ..., j contains j − i + 1 words, if j − i + 1 > ⌈M/2⌉ then we know that lc[i, j] = ∞. We need only compute and store extras[i, j] and lc[i, j] for j − i + 1 ≤ ⌈M/2⌉. And the inner for loop header in the computation of c[j] and p[j] can run from max(1, j − ⌈M/2⌉ + 1) to j.
We can reduce the space even further, to Θ(n). We do so by not storing the lc and extras tables, and instead computing the value of lc[i, j] as needed in the last loop. The idea is that we could compute lc[i, j] in O(1) time if we knew the value of extras[i, j]. And if we scan for the minimum value in descending order of i, we can compute that as extras[i, j] = extras[i + 1, j] − l_i − 1. (Initially, extras[j, j] = M − l_j.) This improvement reduces the space to Θ(n), since now the only tables we store are c and p.
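To make the Θ(n)-space idea concrete, here is a Python sketch (the 0-based word-length list l, the use of float infinity for ∞, and the function name are my own assumptions; the recurrence and the on-the-fly extras update are the ones above):

import math

def print_neatly(l, M):
    # l[0..n-1] are the word lengths; returns the 1-indexed c and p tables.
    n = len(l)
    c = [0] * (n + 1)      # c[j] = minimum cost of arranging words 1..j
    p = [0] * (n + 1)      # p[j] = first word on the line that ends with word j
    for j in range(1, n + 1):
        c[j] = math.inf
        extras = M - l[j - 1]          # extras[j, j] = M - l_j
        i = j
        while i >= 1 and extras >= 0:  # once extras < 0, smaller i cannot fit either
            lc = 0 if j == n else extras ** 3   # last line costs 0
            if c[i - 1] + lc < c[j]:
                c[j] = c[i - 1] + lc
                p[j] = i
            i -= 1
            if i >= 1:
                extras -= l[i - 1] + 1  # extras[i, j] = extras[i+1, j] - l_i - 1
    return c, p

Because the inner loop stops as soon as the words no longer fit on a line, this sketch also achieves the Θ(nM) time bound discussed above.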
Here's how we print which words are on which line. The printed output of GIVE-LINES(p, j) is a sequence of triples (k, i, j), indicating that words i, ..., j are printed on line k. The return value is the line number k.
GIVE-LINES(p, j)
    i = p[j]
    if i == 1
        k = 1
    else k = GIVE-LINES(p, i − 1) + 1
    print (k, i, j)
    return k

The initial call is GIVE-LINES(p, n). Since the value of j decreases in each recursive call, GIVE-LINES takes a total of O(n) time.
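Putting the pieces together (a sketch under the same assumptions as the print_neatly code above; give_lines mirrors GIVE-LINES but returns the (k, i, j) triples instead of printing them, and the example words and width M = 14 are made up for illustration):

def give_lines(p, j):
    # Recover the line breaks from the p table: words i..j go on line k.
    i = p[j]
    if i == 1:
        return [(1, i, j)]
    lines = give_lines(p, i - 1)
    return lines + [(lines[-1][0] + 1, i, j)]

words = ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
c, p = print_neatly([len(w) for w in words], M=14)
for k, i, j in give_lines(p, len(words)):
    print(" ".join(words[i - 1:j]))

This prints the paragraph with each line at most M characters wide, at total cost c[len(words)] (the last line costing nothing, as defined above).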
