
CSE103 Introduction to Algorithms


Midterm Exam
May 6th, 2021

You have 1 hour and 30 minutes.


No computer, laptop, phone, or other Internet-connected device is authorized.
You may use the supplementary material on recurrences.
Each question/subquestion is independent of the others and may be answered in any order.
The maximum score for the exam is 50 points. The number of points awarded for fully answering
each question is written next to the question/subquestion itself.

Exercise 1: asymptotic notation (12 points)


True or false? Please give a brief justification of your answer.
1. (2 points) $n^2 + 10n + 6 = O(n^3)$
2. (2 points) $2^n + 42n^{1000} = \Theta(2^n)$
3. (2 points) $\sqrt[4]{n} + \log n = O(\log n)$
4. (2 points) $37n^4 + n + 15 = \Omega(n^2)$
5. (2 points) if $f(n) = n2^{O(n)}$, then $\log f(n) = O(n)$
6. (2 points) $2^{\frac{n \log n}{2}} = O(2^n)$

Solution.

1. True: $n^2$ grows less quickly than $n^3$.

2. True: $2^n + 42n^{1000}$ is obviously bounded below by $2^n$, and $2^n$ eventually bounds above any
polynomial, so eventually $2^n + 42n^{1000} \le 2 \cdot 2^n$, giving both bounds.

3. False: $\sqrt[4]{n}$ grows more quickly than $\log n$.

4. True: $n^2$ grows less quickly than $n^4$.

5. True: $f(n) = n2^{O(n)}$ means that there exist $m \ge 0$ and $c > 0$ such that, for all $n \ge m$, $f(n) \le n2^{cn}$.
Since $\log$ is monotonic, this implies that, for all $n \ge m$, $\log f(n) \le \log(n2^{cn}) = \log n + \log 2^{cn} =
\log n + cn \le (1 + c)n$, which proves that $\log f(n) = O(n)$.

6. False: see Exercise 1.8 of TD3.
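
Though not part of the expected answers, a quick numerical look at items 3 and 6 confirms the intuition; below is a minimal Python sketch (the sample values of n are ours, chosen for illustration only):

import math

# Item 3: n^(1/4) / log n grows without bound, so n^(1/4) + log n is not O(log n).
for n in [10**3, 10**6, 10**9, 10**12]:
    print(n, n**0.25 / math.log(n))       # the ratio keeps increasing

# Item 6: the exponent (n log n)/2 eventually exceeds n, so 2^((n log n)/2) is not O(2^n).
for n in [2, 8, 64, 1024]:
    print(n, n * math.log2(n) / 2 - n)    # the difference becomes (very) positive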

Exercise 2: recurrences (12 points)


Give the best asymptotic upper bound you can for the following recurrences (where, in all cases,
$T(0) = 1$). Please justify your answer.
1. (3 points) $T(n) = 2T\left(\frac{n}{3}\right) + 5n + \log n$
2. (3 points) $T(n) = 2T\left(\frac{n}{2}\right) + 2^n$
3. (3 points) $T(n) = 4T\left(\frac{n}{2}\right) + \frac{n^2}{\sin n + 2}$
4. (3 points) $T(n) = T\left(\frac{2n}{3}\right) + 1$

Solution.
1. Since $5n + \log n = \Theta(n)$, we apply the Master Theorem with $a = 2$, $b = 3$ and $c = 1$, and since
$a < b^c$, we get $T(n) = \Theta(n)$.
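
For reference, the form of the Master Theorem used throughout these solutions (presumably the one from the supplementary material on recurrences) is: for $T(n) = aT\left(\frac{n}{b}\right) + \Theta(n^c)$,
$$T(n) = \begin{cases} \Theta(n^c) & \text{if } a < b^c, \\ \Theta(n^c \log n) & \text{if } a = b^c, \\ \Theta(n^{\log_b a}) & \text{if } a > b^c. \end{cases}$$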

2. The Master Theorem does not apply, so we try the recursion tree method. The tree is a full binary
tree, and very regular:
• level 0 (the root) costs $2^n$;
• level 1 has 2 nodes, each costing $2^{n/2}$;
• level 2 has 4 nodes, each costing $2^{n/4}$, and so on.
So level $k$ has $2^k$ nodes costing $2^{n/2^k}$ each. The cost of level $k$ is therefore $2^k \cdot 2^{n/2^k} = 2^{k + n/2^k}$. The series
for $k \to \infty$ does not converge, so we need to estimate the number of levels. This is very easy: the
depth of the tree is $\log n$ because it is a full binary tree. So we have
$$T(n) \le \sum_{k=0}^{\log n} 2^{k + \frac{n}{2^k}}.$$

From this, we may infer several upper bounds on $T(n)$, depending on how tightly we approximate
the above sum. A first, very rough bound may be found as follows (knowing that $\frac{n}{2^k} \le n$
for all $k$):
$$\sum_{k=0}^{\log n} 2^{k + \frac{n}{2^k}} \le \sum_{k=0}^{\log n} 2^{k + n} \le \sum_{k=0}^{\log n} 2^{\log n + n} = (\log n + 1)\,n2^n \le 4^n,$$
for $n$ large enough. So we may try to prove that $T(n) = O(g(n))$ where $g(n) := 4^n$. The
induction step works easily:
$$T(n) = 2T\left(\frac{n}{2}\right) + 2^n \le 2 \cdot 4^{\frac{n}{2}} + 2^n = 3 \cdot 2^n \le 4^n,$$
which holds as long as $n \ge 3$. For the base case, let us find the value of $T(3)$:

$T(0) = 1$
$T(1) = 2T(0) + 2^1 = 4$
$T(2) = 2T(1) + 2^2 = 12$
$T(3) = 2T(1) + 2^3 = 16$,

so we do have $T(3) \le 4^3$, which proves that $T(n) = O(4^n)$ with constants $m = 3$ and $c = 1$.
Another possibility is to take the better estimate $g(n) = n\log(n)\,2^n$, which we obtained above
before approximating everything to $4^n$. This works too.
Another, more refined bound is obtained by observing that
$$\sum_{k=0}^{\log n} 2^{k + \frac{n}{2^k}} \le \sum_{k=0}^{\log n} 2^{k + n} = 2^n \sum_{k=0}^{\log n} 2^k = 2^n\,\frac{2^{\log n + 1} - 1}{2 - 1} = 2^n (2n - 1) = O(n2^n).$$
Let $g(n) := n2^n$ and let us try to prove that, for all $n$ large enough, $T(n) \le g(n)$ (that is, we are
trying to use $c = 1$ as the big $O$ constant). Let us start with the induction step:
$$T(n) = 2T\left(\frac{n}{2}\right) + 2^n \le 2\,\frac{n}{2}\,2^{\frac{n}{2}} + 2^n = n2^{\frac{n}{2}} + 2^n = n2^{-\frac{n}{2}}2^n + 2^n.$$
Since $2^{-n/2}$ goes to zero very quickly, we certainly have $n2^{-n/2} \le \frac{n}{2}$ for all $n$ large enough: in fact,
it is easy to see that this holds for all $n \ge 2$. If $n \ge 2$, we also have $\frac{n}{2} \ge 1$, hence $2^n \le \frac{n}{2}2^n$.
Therefore, for all $n \ge 2$, we have
$$T(n) \le n2^{-\frac{n}{2}}2^n + 2^n \le \frac{n}{2}2^n + \frac{n}{2}2^n = n2^n = g(n),$$


as desired. The first few values of $T(n)$ and $g(n)$ are:

$T(0) = 1$                  $g(0) = 0$
$T(1) = 2T(0) + 2^1 = 4$    $g(1) = 2$
$T(2) = 2T(1) + 2^2 = 12$   $g(2) = 8$
$T(3) = 2T(1) + 2^3 = 16$   $g(3) = 24$,

so we see that taking $m = 3$ works for proving $T(n) = O(n2^n)$.


None of the above upper bounds is tight: in fact, it is possible to prove that $T(n) = \Theta(2^n)$. It is
obvious that $T(n) = \Omega(2^n)$ (it is the cost of the root of the tree). The intuition for the upper bound is that,
unlike most cases we've seen, in which the costs of the nodes of the recursion tree are polynomial
in $n$, the weights here are exponential, and $2^{n/2^k}$ grows strictly more slowly than $2^n$, for all $k > 0$
(technically, $2^{n/2^k} = o(2^n)$ for all $k > 0$). Therefore, all the costs in the tree are dominated by the
root! This is not true when the costs are polynomial, because they are all of the same degree.
To confirm this intuition, we may show in the usual way that, for $n$ large enough and for some
$c > 0$ to be determined, $T(n) \le c2^n$. Let us try the induction step, to see if we can determine $c$:
$$T(n) = 2T\left(\frac{n}{2}\right) + 2^n \le 2c2^{\frac{n}{2}} + 2^n = \left(c2^{1 - \frac{n}{2}} + 1\right)2^n.$$
If we manage to find $c$ such that $c2^{1 - \frac{n}{2}} + 1 \le c$, we're happy. Solving the inequality, we get
$$c \ge \frac{1}{1 - 2^{1 - \frac{n}{2}}}.$$
The quantity on the right of the inequality tends to 1 from above and, in fact, for all $n \ge 4$, we
have
$$1 < \frac{1}{1 - 2^{1 - \frac{n}{2}}} \le 2.$$
So, if we take $c := 2$ and $n$ big enough (at least $n \ge 4$), it will work! For the base case, we check a
few further values of $T(n)$, and see that $T(4) = 40$ and $T(5) = 56$, whereas $2 \cdot 2^5 = 64$, so we may
take $m := 5$ (this is fine because we said above that any $m \ge 4$ would be fine for the induction)
and we have shown that $T(n) = O(2^n)$. This, combined with the obvious lower bound pointed
out above, gives $T(n) = \Theta(2^n)$. If you managed to prove this, congratulations, you will get extra
points!
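
As a sanity check (not required for the exam), we can compute $T(n)$ exactly from the recurrence and compare it with the bound $2 \cdot 2^n$; here is a minimal Python sketch, assuming the floored recurrence $T(n) = 2T(\lfloor n/2 \rfloor) + 2^n$ with $T(0) = 1$:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Exact values of T(n) = 2*T(n//2) + 2^n, with T(0) = 1.
    return 1 if n == 0 else 2 * T(n // 2) + 2**n

for n in range(5, 31, 5):
    # T(n) / 2^n should stay bounded by c = 2 from n = 5 onwards,
    # consistent with T(n) = Theta(2^n).
    print(n, T(n) / 2**n)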
3. We have that, for all $n \ge 0$, $1 \le \sin n + 2 \le 3$, so for all $n \ge 0$, $\frac{1}{3}n^2 \le \frac{n^2}{\sin n + 2} \le n^2$, which shows
that $\frac{n^2}{\sin n + 2} = \Theta(n^2)$. We may therefore apply the Master Theorem with $a = 4$, $b = 2$ and $c = 2$,
and since $a = b^c$, we get $T(n) = \Theta(n^2 \log n)$.

4. Obviously, for all $n > 0$, $1 = n^0$, so we may apply the Master Theorem with $a = 1$, $b = \frac{3}{2}$ and
$c = 0$, and since $a = b^c$, we get $T(n) = \Theta(\log n)$.
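
Here too a numerical check is easy (again, not required); a sketch assuming the floored recurrence $T(n) = T(\lfloor 2n/3 \rfloor) + 1$ with $T(0) = 1$, and with T4 as our own helper name:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T4(n):
    # Exact values of recurrence 4: T(n) = T(2n//3) + 1, with T(0) = 1.
    return 1 if n == 0 else T4(2 * n // 3) + 1

for n in [10, 10**3, 10**6, 10**9]:
    # T4(n) / log(n) should level off around 1 / ln(3/2), roughly 2.47,
    # consistent with T(n) = Theta(log n).
    print(n, T4(n) / math.log(n))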

Exercise 3: a weird sorting algorithm (13 points)


Believe it or not, the following function sorts a list l:
def weirdSort(l):
    n = len(l)
    r = l
    if n == 2 and l[0] >= l[1]:
        r = [l[1], l[0]]
    elif n > 2:
        a = n // 3
        b = 2*a + (n % 3)
        r1 = weirdSort(l[0:b]) + l[b:n]
        r2 = r1[0:a] + weirdSort(r1[a:n])
        r = weirdSort(r2[0:b]) + r2[b:n]
    return r
Please answer the following questions, justifying your answer in each case:
1. (8 points) Is this algorithm more or less efficient than insertion sort? Justify your answer by a
complexity analysis using one of the techniques you have learned.
2. (5 points) Suppose that the above function is modified by replacing the if condition at the third
line with
n == 2 and l[0][0] >= l[1][0]
In this way, the function operates on lists of lists, and sorts them according to the value of
the first element of the inner lists. For example, weirdSort([[1,4],[2,3],[0,2]]) will return
[[0,2],[1,4],[2,3]].
Recall that a sorting algorithm is stable if it preserves the relative position of elements with
identical sorting key (in this case, the sorting key is the value of the first element of the inner
lists). For example, when sorting [[0,4],[2,3],[0,2]], a stable sorting algorithm must return
[[0,4],[0,2],[2,3]], because [0,4] comes before [0,2] in the original list, whereas an unstable
sorting algorithm may return [[0,2],[0,4],[2,3]], which is equally well sorted, but the
position of the elements [0,4] and [0,2] has been swapped.
Show, by means of an example, that weirdSort as modified above is not stable. Is it possible to
modify it so that it becomes stable? How?

Solution.

1. By inspecting the code, we see that weirdSort(l) works as follows (we denote the length of l
by n):
• if n ≤ 1, it returns l itself;
• if n = 2, it swaps the two elements of l (if necessary) and returns the result;
• if n ≥ 3, the function calls itself three times:
– the first time on a list of length b;
– the second time on a list of length n − a;
– the third time on a list of length b.
From the code, we see that $a = \frac{n}{3}$ (up to rounding), so $n - a = \frac{2}{3}n$, whereas $b$ is equal to $2a$ plus a constant (at
most equal to 2), so we may say that $b = \frac{2}{3}n$ as well. For the rest, the algorithm does only basic
arithmetic (costing Θ(1)) and slicing and concatenating lists of length n (costing Θ(n)), so the
total cost outside of the recursive calls is Θ(n).
Therefore, the running time $T(n)$ of weirdSort obeys the recurrence
$$T(n) = 3T\left(\frac{2}{3}n\right) + \Theta(n),$$
which may be solved using the Master Theorem: $a = 3$, $b = \frac{3}{2}$ and $c = 1$, and since $a > b^c$, we
have $T(n) = \Theta\left(n^{\log_{3/2} 3}\right)$.
Now, since $\left(\frac{3}{2}\right)^2 = \frac{9}{4} = 2 + \frac{1}{4} < 3$, we have that $\log_{3/2} 3 > 2$. Therefore, weirdSort is asymptotically
worse than insertion sort (which is $\Theta(n^2)$).
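
As a quick numerical confirmation (not required), the exponent can be computed directly:

import math

# The Master Theorem exponent for T(n) = 3T(2n/3) + Theta(n):
print(math.log(3, 3/2))   # ~2.7095, strictly greater than 2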

2. An example showing that weirdSort is not stable is [[0,1],[0,2]]: when we execute the modified
version of weirdSort with this input, the if condition is true (the length is 2 and the sorting
keys are equal), so the function swaps the two elements of the list and returns [[0,2],[0,1]],
whereas a stable sorting algorithm would have returned the list unchanged.
In order to make weirdSort stable, it is enough to replace the conditional at the third line with
if n == 2 and l[0][0] > l[1][0]:

that is, we test for strict inequality. In this way, elements with the same sorting key are never
swapped, and they retain their original relative position.
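
To illustrate both behaviors side by side, here is a small sketch; the parameterized variant weirdSortByKey and its strict flag are ours, not part of the exam:

def weirdSortByKey(l, strict=False):
    # Modified weirdSort on lists of lists, comparing first elements.
    # strict=True uses > (stable); strict=False uses >= (unstable), as discussed.
    n = len(l)
    r = l
    if n == 2 and (l[0][0] > l[1][0] if strict else l[0][0] >= l[1][0]):
        r = [l[1], l[0]]
    elif n > 2:
        a = n // 3
        b = 2*a + (n % 3)
        r1 = weirdSortByKey(l[0:b], strict) + l[b:n]
        r2 = r1[0:a] + weirdSortByKey(r1[a:n], strict)
        r = weirdSortByKey(r2[0:b], strict) + r2[b:n]
    return r

print(weirdSortByKey([[0,1],[0,2]]))               # [[0, 2], [0, 1]]: not stable
print(weirdSortByKey([[0,1],[0,2]], strict=True))  # [[0, 1], [0, 2]]: stable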

Exercise 4: Russian multiplication (13 points)


The following is an algorithm for multiplying two strictly positive integer numbers x and y:
def mul(x, y, s=0):
    if x == 1:
        return s + y
    if x % 2 != 0:
        s += y
    return mul(x//2, y*2, s)
Known by some as “Russian multiplication”, it is in fact a variant of a very old multiplication algorithm
used by the ancient Egyptians. Notice that it only requires knowing how to add, multiply by 2, divide
by 2, and recognizing whether a number is even or odd.
We will consider the following cost model, which is realistic when representing arbitrarily big
numbers in base 2:
• addition of two n-digit numbers costs Θ(n);
• multiplying or dividing an arbitrary number by 2 costs Θ(1);
• verifying whether an arbitrary number is even or odd costs Θ(1).
Please answer the following questions, justifying your answer in each case:
1. (5 points) What is the worst-case asymptotic (big O) complexity of multiplying two n-digit
numbers using Russian multiplication?
2. (3 points) What is the worst-case asymptotic (big O) complexity of executing mul(1,y) with
respect to the number n of digits of y?
3. (3 points) What is the worst-case asymptotic (big O) complexity of executing mul(x,1) with
respect to the number n of digits of x?
4. (2 points) The ancient Egyptians did not consider zero to be a number. What happens if the
above algorithm is applied to x = 0? What about y = 0?

Solution.
1. The algorithm works as follows: it keeps track of an incremental sum s and, if the first input x is
1, it stops immediately, returning s + y; otherwise, it updates the sum s by adding y to it only
in case x is odd, and calls itself on inputs x//2 and y*2. Notice that, albeit recursive, this is not
a divide and conquer algorithm: if the number of digits of x is n, the number of digits of x//2 is
n − 1, so we are decreasing the input size but not dividing it by anything.
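
For concreteness, here is how the recursion unfolds on a small example (assuming mul as defined in the statement):

assert mul(13, 7) == 91   # the call sequence is:
# mul(13, 7, s=0)   13 odd  -> s = 7;   recurse on mul(6, 14, 7)
# mul(6, 14, s=7)   6 even  ->          recurse on mul(3, 28, 7)
# mul(3, 28, s=7)   3 odd   -> s = 35;  recurse on mul(1, 56, 35)
# mul(1, 56, s=35)  x == 1  -> return 35 + 56 = 91 = 13 * 7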
So, on input two positive integers x and y, both of n digits, the algorithm will call itself n times
before returning, and each time it will perform some operations of cost O(1) and a sum s + y,
the cost of which is O(m), where m is the maximum of the numbers of digits of s and y. As for
y, it has n digits at the beginning and, at each iteration, its number of digits increases
by 1 (because the recursive call is on y*2), so at any time the number of digits of y is bounded
by 2n. The number of digits of s increases progressively but, since s is always smaller than the
result x * y, it is also at most 2n, so m ≤ 2n = O(n) and we have that the total cost of each
iteration is O(n). Since, as pointed out above, we are doing n iterations, the total cost of Russian
multiplication is $O(n^2)$.
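
To illustrate the quadratic behavior (not part of the expected answer), one can instrument mul so that it charges costs according to the model above; a sketch, where cost_mul is our own helper name:

def cost_mul(x, y, s=0, cost=0):
    # Instrumented Russian multiplication: besides computing the result, it
    # charges 1 per call for the Theta(1) operations (parity test, halving,
    # doubling) and the number of digits for each addition, per the cost model.
    cost += 1
    if x == 1:
        return s + y, cost + max(s.bit_length(), y.bit_length())
    if x % 2 != 0:
        cost += max(s.bit_length(), y.bit_length())  # addition s + y
        s += y
    return cost_mul(x // 2, y * 2, s, cost)

for n in [8, 16, 32, 64]:
    x = y = 2**n - 1                 # worst case: all n binary digits equal to 1
    result, cost = cost_mul(x, y)
    assert result == x * y
    print(n, cost, cost / n**2)      # cost / n^2 stays roughly constant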


2. When we execute Russian multiplication on input x = 1 and an n-digit number y, we immediately
return 0 + y. We may imagine that addition is implemented so that adding zero terminates
immediately (elementary school addition has this behavior, for example), so the cost is O(1), i.e.,
it is independent of y. Otherwise, if for some reason addition is “silly”, then we would get a cost of
O(n). Since this was not specified in the statement of the problem, both answers are fine.

3. When we execute Russian multiplication on input an n-digit number x and y = 1, we do n
iterations, during which we progressively “rebuild” x in s by means of successive additions. The
worst case is when $x = 2^n - 1$ (i.e., its binary representation consists of n digits 1). In that case,
we perform an addition at every iteration, on numbers with more and more digits (1 digit the
first time, 2 digits the second time, and so on), so the complexity is
$$O\left(\sum_{i=1}^{n} i\right) = O(n^2).$$

4. When we execute mul(0,y) (which is the same as mul(0,y,0)), we do not enter either of the two
if statements, and we call mul(0,y,0) again, so we enter an infinite loop and never terminate.
When we execute mul(x,0) (which is the same as mul(x,0,0)), with x having n digits, we either
do not change s, or we add 0 to it (so s stays 0 in both cases), and then we make the recursive call
mul(x//2,0,0), in which the length of the first input has decreased by 1. This means that we will do n
recursive calls, during which s will always be 0, and in the end, after O(n) time, we will return
0, which is the expected result.
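
Both behaviors can be observed directly (a sketch, assuming mul as defined in the statement; in Python, the non-termination on x = 0 surfaces as a RecursionError once the interpreter's stack limit is hit):

assert mul(12345, 0) == 0   # y = 0: terminates after n calls and returns 0

try:
    mul(0, 7)               # x = 0: the recursion never reaches the base case
except RecursionError:
    print("mul(0, y) loops forever (here: a stack overflow)")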
