CS / MCS 401 Homework 2 Grader Solutions
Questions from CLRS. Questions marked with an asterisk ∗ were not graded.
3.1-2 (p.52) Show that for any real constants a and b, where b > 0, (n + a)^b = Θ(n^b).

3.1-4 Is 2^{n+1} = O(2^n)? Is 2^{2n} = O(2^n)?
The first statement is true. Note that 2^{n+1} = 2 · 2^n, so using a constant of c = 2 we see that 2^{n+1} ≤ 2 · 2^n for all n.
The second statement is false. Suppose that 2^{2n} = O(2^n) with constant c. Then for all n ≥ n0, for some fixed n0, we have that

    2^{2n} ≤ c · 2^n = 2^{log2(c)} · 2^n = 2^{log2(c)+n}.

However, since log2(c) is finite, there is some N ≥ n0 with N > log2(c), for which it is immediate that 2^{2N} > 2^{log2(c)+N}, contradicting the above. Therefore the statement is false.
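As an informal sanity check (a small Python sketch, not part of the CLRS exercise), the ratio 2^{n+1}/2^n stays at 2 while 2^{2n}/2^n = 2^n grows without bound, so no single constant c can work for the second statement:

    # Ratios of the two candidates to 2^n: the first stays at 2, the second blows up.
    for n in [1, 5, 10, 20, 40]:
        print(n, 2**(n + 1) / 2**n, 2**(2 * n) / 2**n)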
3.2-2 (p.60) Prove equation (3.16), which states that a^{log_b(c)} = c^{log_b(a)}.
Use equation (3.15), the change-of-base formula, which gives log_x(y) = ln(y)/ln(x). Applying reversible operations to the claimed identity, we reach a true statement, and so the original statement is true. Indeed,

    a^{log_b(c)} = c^{log_b(a)}  ⟺  log_b(c) · ln(a) = log_b(a) · ln(c)  ⟺  (ln(c)/ln(b)) · ln(a) = (ln(a)/ln(b)) · ln(c),

which is true.
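As an informal numerical check of identity (3.16), the following Python sketch tests it on a few arbitrarily chosen values of a, b, and c:

    import math

    # Verify a^(log_b c) == c^(log_b a) on a few arbitrary positive inputs.
    for a, b, c in [(2.0, 3.0, 5.0), (7.5, 2.0, 1.3), (10.0, 4.0, 9.0)]:
        assert math.isclose(a ** math.log(c, b), c ** math.log(a, b)), (a, b, c)
    print("identity (3.16) holds on the sampled values")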
3.2-8 (p.60) Show that k ln(k) = Θ(n) implies k = Θ(n/ln(n)).
We are given that k ln(k) = Θ(n); that is, there exist positive constants c1, c2 such that

    0 ≤ c1 n ≤ k ln(k) ≤ c2 n.  (3)
Dividing through by ln(n) (which is positive for n ≥ 2) gives

    0 ≤ c1 · n/ln(n) ≤ k · ln(k)/ln(n) ≤ c2 · n/ln(n).  (4)

For the lower bound, note that k ln(k) ≤ c2 n forces k ≤ n for all sufficiently large n (otherwise k ln(k) > n ln(n) > c2 n), so ln(k)/ln(n) ≤ 1 eventually. Replacing ln(k)/ln(n) by 1 in the middle term of (4) can only increase it, so k ≥ k · ln(k)/ln(n) ≥ c1 · n/ln(n); that is, c1 itself serves as the lower-bound constant for k = Θ(n/ln(n)).
For the upper bound, use the left side of (3) to note that

    c1 n ≤ k ln(k) < k^2  ⟹  ln(c1) + ln(n) < 2 ln(k)  ⟹  ln(n)/ln(k) < 2 − ln(c1)/ln(k) ≤ 3

for k large enough (the last step holds because ln(c1)/ln(k) → 0 as k → ∞). Finally, rewrite k and use the right side of (4) to get

    k = k · (ln(k)/ln(n)) · (ln(n)/ln(k)) ≤ c2 · (n/ln(n)) · 3 = 3c2 · n/ln(n).

Hence c1 · n/ln(n) ≤ k ≤ 3c2 · n/ln(n) for all sufficiently large n, that is, k = Θ(n/ln(n)).
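As an illustration (a Python sketch, not part of the graded solution), one can solve k ln(k) = n numerically and watch k · ln(n)/n stay bounded, consistent with k = Θ(n/ln(n)); the helper solve_k below is made up for this sketch:

    import math

    def solve_k(n):
        """Return k with k*ln(k) ~= n, found by bisection (hypothetical helper)."""
        lo, hi = 2.0, float(n)              # k*ln(k) is increasing for k >= 2
        for _ in range(200):
            mid = (lo + hi) / 2
            if mid * math.log(mid) < n:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    for n in [10**3, 10**6, 10**9, 10**12]:
        k = solve_k(n)
        print(n, k * math.log(n) / n)       # ratio stays bounded, close to 1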
3-2 (p.61) Indicate, for each pair of expressions (A, B) in the table below, whether A is O, o, Ω, ω, or Θ of B.
Assume that k ≥ 1, ε > 0, and c > 1 are constants. Your answer should be in the form of the table with “yes” or
“no” written in each box.
A                B                O    o    Ω    ω    Θ
(log2(n))^k      n^ε              yes  yes  no   no   no
n^k              c^n              yes  yes  no   no   no
√n               n^{sin(n)}       no   no   no   no   no
2^n              2^{n/2}          no   no   yes  yes  no
n^{log2(c)}      c^{log2(n)}      yes  no   yes  no   yes
log2(n!)         log2(n^n)        yes  no   yes  no   yes
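A couple of the rows can be spot-checked numerically (a Python sketch with arbitrarily chosen k = 3, ε = 0.5, and c = 2.5):

    import math

    k, eps, c = 3, 0.5, 2.5   # arbitrary constants with k >= 1, eps > 0, c > 1
    for n in [2**10, 2**20, 2**40, 2**80]:
        lg = math.log2(n)
        row1 = lg**k / n**eps              # tends to 0: (log2 n)^k = o(n^eps)
        row5 = n**math.log2(c) / c**lg     # stays ~1: n^(log2 c) = Theta(c^(log2 n))
        print(f"n = 2^{int(lg)}: {row1:.3g} {row5:.3g}")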
4.3-1 (p.87) Show that the solution of T(n) = T(n − 1) + n is O(n^2).
We find the answer by expressing T(n) in terms of elements of the sequence that come before it. We find that

    T(n) = T(n − 1) + n
         = T(n − 2) + (n − 1) + n
         = T(n − 3) + (n − 2) + (n − 1) + n
         ⋮
         = T(0) + Σ_{k=1}^{n} k
         = T(0) + n(n + 1)/2
         = O(n^2),
as desired.
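A quick Python sketch (assuming the base case T(0) = 0, which the closed form above also uses) that unrolls the recurrence and checks the closed form:

    def T(n):
        """T(n) = T(n-1) + n, unrolled iteratively with assumed base case T(0) = 0."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    for n in [1, 10, 100, 1000]:
        assert T(n) == n * (n + 1) // 2    # matches T(0) + n(n+1)/2, hence O(n^2)
    print("closed form verified")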
4.3-2 (p.87) Show that the solution of T(n) = T(⌈n/2⌉) + 1 is O(log2(n)).
The argument is (roughly) halved at each step, and the recursion bottoms out at 1 after a certain number of steps, say k. This happens once n/2^k ≤ 1 (because then 1 = ⌈n/2^k⌉ = ⌈n/2^{k+1}⌉ = ···), that is, once n ≤ 2^k, or k ≥ log2(n). Since k counts the steps taken and each step does a constant amount of work, the recursion stops after k = ⌈log2(n)⌉ steps. Hence the solution of T(n) is O(log2(n)).
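A short Python sketch (assuming the base case T(1) = 1, which only shifts the count by a constant) confirming that the number of halving steps tracks log2(n):

    import math

    def T(n):
        """T(n) = T(ceil(n/2)) + 1 with assumed base case T(1) = 1."""
        return 1 if n == 1 else T(math.ceil(n / 2)) + 1

    for n in [2, 3, 17, 1024, 10**6]:
        print(n, T(n), math.ceil(math.log2(n)) + 1)   # T(n) grows like log2(n)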
∗ 4.3-6 (p.87) Show that the solution to T(n) = 2T(⌊n/2⌋ + 17) + n is O(n log2(n)).
4.4-1 (p.92) Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 3T(⌊n/2⌋) + n. Use the substitution method to verify your answer.
We construct the tree as described in the book, summing all the elements of each row. We make the assumption
that n is a power of 2 and we ignore the floor function.
The root costs n; level 1 has three nodes of cost n/2, summing to (3/2)n; level 2 has nine nodes of cost n/4, summing to (3/2)^2 n; and so on, the tree having depth log2(n). The leaf level consists of 3^{log2(n)} = n^{log2(3)} nodes of constant cost, for a row sum of Θ(n^{log2(3)}); equivalently, continuing the row-sum pattern to the last level gives (3/2)^{log2(n)} · n = n^{log2(3/2)} · n = n^{log2(3)−log2(2)+1} = n^{log2(3)}.
Taking the sum of the elements of each row, we get
    T(n) = n + (3/2)n + (3/2)^2 n + ··· + (3/2)^{log2(n)−1} n + Θ(n^{log2(3)})
         = n · Σ_{i=0}^{log2(n)−1} (3/2)^i + Θ(n^{log2(3)})
         = n · ((3/2)^{log2(n)} − 1)/((3/2) − 1) + Θ(n^{log2(3)})
         = 2n · (3^{log2(n)}/2^{log2(n)} − 1) + Θ(n^{log2(3)})
         = 2n · n^{log2(3)}/n − 2n + Θ(n^{log2(3)})
         = 2n^{log2(3)} − 2n + Θ(n^{log2(3)})
         = O(n^{log2(3)}),
where in the third line we used the formula for a finite geometric series (equation (A.5) on page 1147). This is our
asymptotic upper bound on the recurrence T (n).
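A numerical sketch in Python (assuming the base case T(1) = 1) showing that T(n)/n^{log2(3)} stays bounded, consistent with the O(n^{log2(3)}) bound:

    import math

    def T(n):
        """T(n) = 3*T(floor(n/2)) + n with assumed base case T(1) = 1."""
        return 1 if n <= 1 else 3 * T(n // 2) + n

    for n in [2**5, 2**10, 2**15, 2**20]:
        print(n, T(n) / n**math.log2(3))   # ratio stays bounded (it approaches 3)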
4.4-2 (p.92) Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n/2) + n^2. Use the substitution method to verify your answer.
Here each node has only one child, so every row of the recursion tree contains a single node: level 0 costs n^2, level 1 costs (n/2)^2 = n^2/4, level 2 costs (n/4)^2 = n^2/16, and so on, with depth log2(n) and a last level of cost Θ(1). Indeed, following the pattern, the last row should cost n^2/2^{2 log2(n)} = n^2/n^2 = 1. Taking the sum of the elements of each row, we get
    T(n) = n^2 + (n/2)^2 + (n/4)^2 + ··· + (n/2^{log2(n)−1})^2 + Θ(1)
         = Σ_{i=0}^{log2(n)−1} n^2/4^i + Θ(1)
         = n^2 · ((1/4)^{log2(n)} − 1)/((1/4) − 1) + Θ(1)
         = (4/3) · n^2 · (1 − 1/n^2) + Θ(1)
         = (4/3) · n^2 − 4/3 + Θ(1)
         = O(n^2).
Note that the constant −4/3 is absorbed into the Θ(1) term, giving us an asymptotic upper bound of O(n^2). Note that this is actually an asymptotically tight bound, so we could have written Θ(n^2) as well.
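A numerical sketch in Python (assuming T(1) = 1 and n a power of two) showing T(n)/n^2 approaching the constant 4/3 found above:

    def T(n):
        """T(n) = T(n/2) + n^2 with assumed base case T(1) = 1, n a power of two."""
        return 1 if n == 1 else T(n // 2) + n * n

    for n in [2**4, 2**8, 2**12, 2**16]:
        print(n, T(n) / n**2)   # approaches 4/3, consistent with Theta(n^2)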
∗ 4.4-4 (p.93) Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 2T(n − 1) + 1.
∗ 4.4-5 (p.93) Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n − 1) + T(n/2) + n.
4.4-8 (p.93) Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(n − a) + T(a) + cn, where a ≥ 1 and c > 0 are constants.
The root of the recursion tree costs cn and has two children, of sizes n − a and a; the size-a child is a leaf, while the size-(n − a) child recurses in the same way. Writing the leaf cost as ca, each subsequent level i costs c(n − ia) + ca ≤ cn, and there are roughly n/a levels before the argument drops to a. Summing over the levels,

    T(n) ≈ Σ_{i=0}^{n/a} c(n − ia) + Θ(n) = Θ(c n^2/a) = Θ(n^2),

which is asymptotically tight because we did not introduce any sloppiness into the calculations.
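A numerical sketch in Python (with arbitrarily chosen a = 5, c = 2, and the assumed base case T(n) = 1 for n ≤ a) showing T(n)/n^2 settling near the constant c/(2a):

    A, C = 5, 2.0   # arbitrary constants with a >= 1 and c > 0

    def T(n):
        """T(n) = T(n - a) + T(a) + c*n for n > a, unrolled; T(n) = 1 for n <= a."""
        total = 1.0                 # cost of the final base case
        while n > A:
            total += 1.0 + C * n    # T(a) contributes 1, plus the c*n term
            n -= A
        return total

    for n in [10**3, 10**4, 10**5]:
        print(n, T(n) / n**2)       # approaches c/(2a) = 0.2, hence Theta(n^2)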
∗ 4.4-9 (p.93) Use a recursion tree to give an asymptotically tight solution to the recurrence T (n) = T (αn) +
T ((1 − α)n) + cn, where α is a constant in the range 0 < α < 1 and c > 0 is also a constant.
4.5-1 (p.96) Use the master method to give tight asymptotic bounds for the following recurrences.
c. T(n) = 2T(n/4) + n.
Here a = 2 and b = 4, so n^{log4(2)} = n^{1/2}. Since n = Ω(n^{1/2+ε}) (take ε = 1/2), and 2 · (n/4) = n/2 ≤ (1/2)n where 1/2 < 1 is certainly a constant, the regularity condition holds, case 3 applies, and T(n) = Θ(n).
d. T(n) = 2T(n/4) + n^2.
Since n^2 = Ω(n^{1/2+ε}), and 2 · (n/4)^2 = n^2/8 ≤ (1/8)n^2 where 1/8 < 1 is certainly a constant, case 3 applies, and T(n) = Θ(n^2).
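Both answers can be spot-checked numerically (a Python sketch assuming T(1) = 1 and n a power of 4):

    def Tc(n):
        """Part (c): T(n) = 2*T(n/4) + n, with assumed base case T(1) = 1."""
        return 1 if n == 1 else 2 * Tc(n // 4) + n

    def Td(n):
        """Part (d): T(n) = 2*T(n/4) + n^2, with assumed base case T(1) = 1."""
        return 1 if n == 1 else 2 * Td(n // 4) + n * n

    for n in [4**3, 4**6, 4**9]:
        print(n, Tc(n) / n, Td(n) / n**2)   # both ratios stay bounded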
4.5-3 (p.97) Use the master method to show that the solution to the binary-search recurrence T(n) = T(n/2) + Θ(1) is T(n) = Θ(log2(n)).
Here a = 1 and b = 2, so log2(1) = 0. Case 2 applies, because Θ(n^{log2(1)}) = Θ(n^0) = Θ(1), which is exactly the f(n) term. Hence by the master method, T(n) = Θ(n^{log2(1)} log2(n)) = Θ(log2(n)), as desired.
4.5-4 (p.97) Can the master method be applied to the recurrence T(n) = 4T(n/2) + n^2 log2(n)? Why or why not? Give an asymptotic upper bound for this recurrence.
Here a = 4 and b = 2, so n^{log2(4)} = n^2. Case 1 does not apply, because n^2 log2(n) is not bounded above by n^{2−ε} for any ε > 0 (that is, it is not O(n^{2−ε})). Case 2 does not apply, because n^2 log2(n) is not bounded above and below by constant multiples of n^2 (that is, it is not Θ(n^2)). Finally, case 3 also does not apply, because although n^2 log2(n) = Ω(n^2), it is not Ω(n^{2+ε}) for any ε > 0, since any positive power of n eventually grows faster than log2(n). Hence the master method cannot be applied.
An asymptotic upper bound of O(n^2 (log2(n))^2) may be found via the substitution or recursion-tree method: the tree has log2(n) levels, each costing at most n^2 log2(n).
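A numerical sketch in Python (assuming T(1) = 1 and n a power of two) illustrating the gap: T(n)/(n^2 log2(n)) keeps growing, while T(n)/(n^2 (log2(n))^2) levels off:

    import math

    def T(n):
        """T(n) = 4*T(n/2) + n^2 * log2(n) with assumed base case T(1) = 1."""
        return 1.0 if n == 1 else 4 * T(n // 2) + n * n * math.log2(n)

    for n in [2**4, 2**8, 2**12, 2**16]:
        lg = math.log2(n)
        print(n, T(n) / (n**2 * lg), T(n) / (n**2 * lg**2))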