Midterm
Solution.
2. True: $2^n + 42n^{1000}$ is obviously bounded below by $2^n$, and $2^n$ eventually bounds above any polynomial.
3. False: $\sqrt[4]{n}$ grows more quickly than $\log n$.
5. True: $f(n) = n2^{O(n)}$ means that there exist $m \ge 0$ and $c > 0$ such that, for all $n \ge m$, $f(n) \le n2^{cn}$. Since $\log$ is monotonic, this implies that, for all $n \ge m$, $\log f(n) \le \log(n2^{cn}) = \log n + \log 2^{cn} = \log n + cn \le (1 + c)n$, which proves that $\log f(n) = O(n)$.
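As an illustration, taking $f(n) = n\,2^n$ (so $c = 1$; this choice of $f$ is ours, not from the statement), the bound $\log f(n) \le (1+c)n = 2n$ can be checked numerically:

```python
import math

# For f(n) = n * 2^n we have c = 1, so the bound above predicts
# log2 f(n) = log2 n + n <= (1 + 1) n = 2n for all n >= 1.
for n in range(1, 200):
    assert math.log2(n * 2 ** n) <= 2 * n
```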
4. (3 points) $T(n) = T\left(\frac{2n}{3}\right) + 1$
CSE103 Introduction to Algorithms (2020/2021) Midterm
Solution.
1. Since $5n + \log n = \Theta(n)$, we apply the Master Theorem with $a = 2$, $b = 3$ and $c = 1$, and since $a < b^c$, we get $T(n) = \Theta(n)$.
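The recurrence itself is not fully reproduced in this excerpt; from the parameters it must have the shape $T(n) = 2T(n/3) + 5n + \log n$. A quick numerical check of the $\Theta(n)$ bound, assuming that shape with floor division and $T(0) = 1$ (our choice of base case):

```python
import functools

# Hypothetical reconstruction: T(n) = 2 T(n // 3) + 5n + floor(log2 n), T(0) = 1.
@functools.lru_cache(maxsize=None)
def T(n):
    if n == 0:
        return 1
    return 2 * T(n // 3) + 5 * n + (n.bit_length() - 1)

# T(n)/n stays within constant bounds, consistent with T(n) = Θ(n).
n = 3 ** 12
assert 5 * n <= T(n) <= 16 * n
```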
2. The Master Theorem does not apply, so we try with the recursion tree method. It is a full binary tree, and very regular:
• level 0 (the root) costs $2^n$;
• level 1 has 2 nodes costing $2^{n/2}$ each;
• level 2 has 4 nodes costing $2^{n/4}$ each, and so on.
So level $k$ has $2^k$ nodes costing $2^{n/2^k}$ each. The cost of level $k$ is therefore $2^k \cdot 2^{n/2^k} = 2^{k + n/2^k}$. The series for $n \to \infty$ does not converge, so we need to estimate the number of levels. This is very easy: the depth of the tree is $\log n$ because it is a full binary tree. So we have
$$T(n) \le \sum_{k=0}^{\log n} 2^{k + n/2^k}.$$
From this, we may infer several upper bounds on $T(n)$, depending on how tightly we approximate the above sum. A first, very rough bound may be found as follows (knowing that $\frac{n}{2^k} \le n$ for all $k$):
$$\sum_{k=0}^{\log n} 2^{k + n/2^k} \le \sum_{k=0}^{\log n} 2^{k + n} \le \sum_{k=0}^{\log n} 2^{\log n + n} = (\log n + 1)\,n\,2^n \le 4^n,$$
for $n$ large enough. So we may try proving that $T(n) = O(g(n))$ where $g(n) := 4^n$. The induction step works easily:
$$T(n) = 2\,T\!\left(\frac{n}{2}\right) + 2^n \le 2 \cdot 4^{n/2} + 2^n = 3 \cdot 2^n \le 4^n,$$
which holds as long as $n \ge 3$. For the base case, let us find the value of $T(3)$:
$$T(0) = 1$$
$$T(1) = 2T(0) + 2^1 = 4$$
$$T(2) = 2T(1) + 2^2 = 12$$
$$T(3) = 2T(1) + 2^3 = 16,$$
so we do have $T(3) \le 4^3$, which proves that $T(n) = O(4^n)$ with constants $m = 3$ and $c = 1$.
Another possibility is to take the better estimate $g(n) = n \log(n)\,2^n$, which we obtained above before approximating everything to $4^n$. This works too.
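The bound can be sanity-checked numerically; the sketch below uses the recurrence $T(n) = 2T(\lfloor n/2 \rfloor) + 2^n$ with $T(0) = 1$, as reconstructed from the values computed above:

```python
import functools

# T(n) = 2 T(n // 2) + 2^n, T(0) = 1, matching the values listed above.
@functools.lru_cache(maxsize=None)
def T(n):
    return 1 if n == 0 else 2 * T(n // 2) + 2 ** n

assert [T(0), T(1), T(2), T(3)] == [1, 4, 12, 16]
# T(n) <= 4^n with m = 3, c = 1 (here it actually holds from n = 0 on).
for n in range(0, 30):
    assert T(n) <= 4 ** n
```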
Another, more refined bound is obtained by observing that
$$\sum_{k=0}^{\log n} 2^{k + n/2^k} \le \sum_{k=0}^{\log n} 2^k\,2^n = 2^n \sum_{k=0}^{\log n} 2^k = 2^n\,\frac{1 - 2^{\log n + 1}}{1 - 2} = 2^n(2^{\log n + 1} - 1) = 2^n(2n - 1).$$
This suggests trying $g(n) := n\,2^n$. Comparing the first values of $T$ and $g$:
$$T(0) = 1 \qquad\qquad g(0) = 0$$
$$T(1) = 2T(0) + 2^1 = 4 \qquad g(1) = 2$$
$$T(2) = 2T(1) + 2^2 = 12 \qquad g(2) = 8$$
$$T(3) = 2T(1) + 2^3 = 16 \qquad g(3) = 24,$$
so $T(3) \le g(3)$, and the base case holds here too.
An even better bound is $T(n) = O(2^n)$: let us try to prove by induction that $T(n) \le c\,2^n$ for some constant $c > 0$. The induction step requires $2c\,2^{n/2} + 2^n \le c\,2^n$, i.e.,
$$c \ge \frac{1}{1 - 2^{1 - n/2}}.$$
The quantity to the right of the inequality tends to 1 from above, and in fact, for all $n \ge 4$, we have
$$1 < \frac{1}{1 - 2^{1 - n/2}} \le 2.$$
So, if we take $c := 2$ and $n$ big enough (at least $n \ge 4$), it will work! For the base case, we check a few further values of $T(n)$, and see that $T(4) = 40$ and $T(5) = 56$, whereas $2 \cdot 2^5 = 64$, so we may take $m := 5$ (this is fine because we said above that any $m \ge 4$ would be fine for the induction) and we have shown that $T(n) = O(2^n)$. This, combined with the obvious lower bound pointed out above, gives $T(n) = \Theta(2^n)$. If you managed to prove this, congratulations, you will get extra points!
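A numerical check of the tight bound (same reconstructed recurrence, with floor division assumed):

```python
import functools

# T(n) = 2 T(n // 2) + 2^n, T(0) = 1.
@functools.lru_cache(maxsize=None)
def T(n):
    return 1 if n == 0 else 2 * T(n // 2) + 2 ** n

assert T(4) == 40 and T(5) == 56
# Θ(2^n): the level-0 cost gives 2^n <= T(n), and the induction above gives
# T(n) <= 2 * 2^n for all n >= 5 (m = 5, c = 2).
for n in range(5, 60):
    assert 2 ** n <= T(n) <= 2 * 2 ** n
```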
3. We have that, for all $n \ge 0$, $1 \le \sin n + 2 \le 3$, so for all $n \ge 0$, $\frac{1}{3} n^2 \le \frac{n^2}{\sin n + 2} \le n^2$, which shows that $\frac{n^2}{\sin n + 2} = \Theta(n^2)$. We may therefore apply the Master Theorem with $a = 4$, $b = 2$ and $c = 2$, and since $a = b^c$, we get $T(n) = \Theta(n^2 \log n)$.
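The sandwich bound used above is easy to verify numerically:

```python
import math

# Since -1 <= sin n <= 1, we get 1 <= sin n + 2 <= 3, and therefore
# n^2 / 3 <= n^2 / (sin n + 2) <= n^2.
for n in range(1, 1000):
    f = n * n / (math.sin(n) + 2)
    assert n * n / 3 <= f <= n * n
```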
4. Obviously, for all $n > 0$, $1 = n^0$, so we may apply the Master Theorem with $a = 1$, $b = 2$ and $c = 0$, and since $a = b^c$, we get $T(n) = \Theta(\log n)$.
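The recurrence itself is not reproduced in this excerpt; the parameters $a = 1$, $b = 2$, $c = 0$ correspond to the shape $T(n) = T(n/2) + 1$. A quick check of the $\Theta(\log n)$ behavior, assuming floor division and $T(1) = 1$ (our choice of base case):

```python
# Hypothetical reconstruction: T(n) = T(n // 2) + 1, T(1) = 1.
def T(n):
    return 1 if n <= 1 else T(n // 2) + 1

# Then T(n) = floor(log2 n) + 1, i.e. Θ(log n).
assert T(1024) == 11
assert all(T(2 ** k) == k + 1 for k in range(20))
```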
def weirdSort(l):
    n = len(l)
    if n == 2 and l[0] >= l[1]:
        return [l[1], l[0]]
    if n <= 2:
        return l
    a = n // 3
    b = 2*a + (n % 3)
    r1 = weirdSort(l[0:b]) + l[b:n]
    r2 = r1[0:a] + weirdSort(r1[a:n])
    r = weirdSort(r2[0:b]) + r2[b:n]
    return r
Please answer the following questions, justifying your answer in each case:
1. (8 points) Is this algorithm more or less efficient than insertion sort? Justify your answer by a
complexity analysis using one of the techniques you have learned.
2. (5 points) Suppose that the above function is modified by replacing the if condition at the third
line with
n == 2 and l[0][0] >= l[1][0]
In this way, the function operates on lists of lists, and sorts them according to the value of
the first element of the inner lists. For example, weirdSort([[1,4],[2,3],[0,2]]) will return
[[0,2],[1,4],[2,3]].
Recall that a sorting algorithm is stable if it preserves the relative position of elements with
identical sorting key (in this case, the sorting key is the value of the first element of the inner
lists). For example, when sorting [[0,4],[2,3],[0,2]], a stable sorting algorithm must return
[[0,4],[0,2],[2,3]], because [0,4] comes before [0,2] in the original list, whereas an unstable sorting algorithm may return [[0,2],[0,4],[2,3]], which is equally well sorted, but the position of the elements [0,4] and [0,2] has been swapped.
Show, by means of an example, that weirdSort as modified above is not stable. Is it possible to
modify it so that it becomes stable? How?
Solution.
1. By inspecting the code, we see that weirdSort(l) works as follows (we denote the length of l
by n):
• if n ≤ 1, it returns l itself;
• if n = 2, it swaps the two elements of l (if necessary) and returns the result;
• if n ≥ 3, the function calls itself three times:
– the first time on a list of length b;
– the second time on a list of length n − a;
– the third time on a list of length b.
From the code, we see that $a = \lfloor n/3 \rfloor$, so $n - a = \frac{2}{3}n$, whereas $b$ is equal to $2a$ plus a constant (at most equal to 2), so we may say that $b = \frac{2}{3}n$ as well. For the rest, the algorithm does only basic arithmetic (costing $\Theta(1)$) and slicing and concatenating lists of length $n$ (costing $\Theta(n)$), so the total cost outside of the recursive calls is $\Theta(n)$.
Therefore, the running time $T(n)$ of weirdSort obeys the recurrence
$$T(n) = 3\,T\!\left(\frac{2n}{3}\right) + \Theta(n),$$
which may be solved using the Master Theorem: $a = 3$, $b = \frac{3}{2}$ and $c = 1$, and since $a > b^c$, we have $T(n) = \Theta(n^{\log_{3/2} 3})$.
2
Now, since 32 = 9
4 = 2+ 1
4 < 3, we have that log 3 3 > 2. Therefore, weirdSort is asymptoti-
2
cally worse than insertion sort (which is Θ(n2 )).
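This growth can be observed empirically. The sketch below re-creates weirdSort (the base cases are reconstructed from the behavior described above, since the statement's code is only partially reproduced) and counts the recursive calls: doubling the input size multiplies the count by far more than the factor 4 that a quadratic algorithm would exhibit.

```python
calls = 0

def weirdSort(l):
    global calls
    calls += 1
    n = len(l)
    if n == 2 and l[0] >= l[1]:      # reconstructed base case: swap if needed
        return [l[1], l[0]]
    if n <= 2:                       # reconstructed base case: already sorted
        return l
    a = n // 3
    b = 2 * a + (n % 3)
    r1 = weirdSort(l[0:b]) + l[b:n]
    r2 = r1[0:a] + weirdSort(r1[a:n])
    return weirdSort(r2[0:b]) + r2[b:n]

def count_calls(n):
    global calls
    calls = 0
    assert weirdSort(list(range(n, 0, -1))) == list(range(1, n + 1))
    return calls

# Super-quadratic growth: the call count grows like n^(log_{3/2} 3) ≈ n^2.71.
assert count_calls(64) / count_calls(32) > 4
```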
2. An example showing that weirdSort is not stable is [[0,1],[0,2]]: when we execute the modi-
fied version of weirdSort with this input, the if condition is true (the length is 2 and the sorting
keys are equal), so the function swaps the two elements of the list and returns [[0,2],[0,1]],
whereas a stable sorting algorithm would have returned the list unchanged.
In order to make weirdSort stable, it is enough to replace the conditional at the third line with
if n == 2 and l[0][0] > l[1][0]:
that is, we test for strict inequality. In this way, elements with the same sorting key are never
swapped, and they retain their original relative position.
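Both behaviors can be checked concretely. The sketch below is a hypothetical reconstruction of the modified weirdSort operating on lists of lists (base cases filled in as described in part 1), with a `strict` flag selecting the stable variant:

```python
def weirdSort2(l, strict=False):
    # strict=False: swap on >= (the modified, unstable version);
    # strict=True: swap only on strict > (the stable fix).
    n = len(l)
    if n == 2:
        if (l[0][0] > l[1][0]) if strict else (l[0][0] >= l[1][0]):
            return [l[1], l[0]]
        return l
    if n <= 1:
        return l
    a = n // 3
    b = 2 * a + (n % 3)
    r1 = weirdSort2(l[0:b], strict) + l[b:n]
    r2 = r1[0:a] + weirdSort2(r1[a:n], strict)
    return weirdSort2(r2[0:b], strict) + r2[b:n]

# With >=, two elements with equal keys get swapped: not stable.
assert weirdSort2([[0, 1], [0, 2]]) == [[0, 2], [0, 1]]
# With strict >, equal keys keep their original relative order.
assert weirdSort2([[0, 1], [0, 2]], strict=True) == [[0, 1], [0, 2]]
```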
Solution.
1. The algorithm works as follows: it keeps track of an incremental sum s, and if the first input x is 1, it stops immediately, returning s + y; otherwise, it updates the sum s by adding y to it only if x is odd, and calls itself on inputs x//2 and y*2. Notice that, albeit recursive, this is not a divide-and-conquer algorithm: if the number of digits of x is n, the number of digits of x//2 is n − 1, so we are decreasing the input size but not dividing it by something.
So, on input two positive integers $x$ and $y$, both of length $n$, the algorithm will call itself $n$ times before returning, and each time it will perform some operations of cost $O(1)$ and a sum $s + y$, the cost of which is $O(m)$, where $m$ is the maximum of the numbers of digits of $s$ and $y$. For the latter, it has $n$ digits at the beginning and, at each iteration, the number of digits increases by 1 (because the recursive call is on 2*y), so at any time the number of digits of $y$ is bounded by $2n$. The number of digits of $s$ increases progressively but, since $s$ is always smaller than the result x * y, it is also at most $2n$, so $m \le 2n = O(n)$ and we have that the total cost of each iteration is $O(n)$. Since, as pointed out above, we are doing $n$ iterations, the total cost of Russian multiplication is $O(n^2)$.
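Since the statement's code is not reproduced in this excerpt, here is a hypothetical reconstruction of Russian multiplication matching the description above (note that, consistently with question 4 below, it loops forever on x = 0):

```python
def mul(x, y, s=0):
    # Reconstructed from the description: stop on x == 1, add y to the
    # running sum s when x is odd, then recurse on (x // 2, 2 * y).
    if x == 1:
        return s + y
    if x % 2 == 1:
        s = s + y
    return mul(x // 2, 2 * y, s)

assert mul(6, 7) == 42
assert mul(13, 11) == 143
```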
2. When we execute Russian multiplication on input x = 1 and an n-digit number y, we return immediately 0 + y. We may imagine that addition is implemented so that adding zero terminates
immediately (elementary school addition has this behavior, for example), so the cost is O(1), i.e.,
it is independent of y. Otherwise, if for some reason addition is “silly”, then we would get a cost
O(n). Since this was not specified in the statement of the problem, both answers are fine.
4. When we execute mul(0,y) (which is the same as mul(0,y,0)), we do not enter either of the two if statements, and we call again mul(0,y,0), so we enter an infinite loop and never terminate.
When we execute mul(x,0) (which is the same as mul(x,0,0)), with x having n digits, we either
do not change s, or we add 0 to it (so s stays 0 in both cases), and then we do a recursive call on
mul(x//2,0,0), so the length of the first input has decreased by 1. This means that we will do n
recursive calls, during which s will always be 0, and in the end, after O(n) time, we will return
0, which is the expected result.