
Cairo University

Faculty of Computers & Information


Department of Computer Science

Course: Algorithms
Lab #1
Algorithm Analysis

This lab focuses on the analysis of algorithms with different methods, and pays special attention to
writing and solving recurrence relations. More techniques for solving recurrences will be elaborated
with examples in the next lab (Divide and Conquer). To illustrate the analysis methods, we will consider three
types of problems here:

- Problem #1: Power.
- Problem #2: Sort (using insertion sort and different variants of merge sort).
- Problem #3: Order Statistics (max, min, and the ith element).

We will focus our efforts in this lab on:

- Recurrence relation extraction.
- Solving recurrences using careful inspection.
- Solving recurrences using recurrence trees.
- Introducing the master method.

Finally, students will solve a problem as lab work using Java or C++.

Part #1: Problems of Interest.

We briefly introduce three problems along with solution pseudo-codes to be used throughout the lab.

Problem 1: Power(a, b):

The objective of this problem is to calculate a^b (where a and b are positive integers). The direct approach is
to multiply a by itself b times using a loop. However, consider the following better solution:

FastPower(a, b):
    if b = 1
        return a
    otherwise
        c := a*a
        ans := FastPower(c, [b/2])
        if b is odd
            return a*ans
        otherwise return ans
end

Here [x] denotes the floor function, that is, the largest integer less than or equal to x.
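For concreteness, here is a minimal C++ sketch of the same algorithm (assuming a and b are positive integers and that the result fits in a 64-bit integer; the function name fastPower is only illustrative):

#include <iostream>

// Minimal sketch of FastPower; assumes positive integers and no overflow.
long long fastPower(long long a, long long b) {
    if (b == 1)
        return a;                          // base case: a^1 = a
    long long c = a * a;                   // square the base
    long long ans = fastPower(c, b / 2);   // recurse on floor(b/2)
    if (b % 2 == 1)
        return a * ans;                    // odd b: one extra multiplication
    return ans;
}

int main() {
    std::cout << fastPower(3, 5) << '\n';  // prints 243
}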

Problem 2: Sort(array):

We will consider here only two famous sorting algorithms, namely insertion sort and merge sort. The two
sorts are described by the following figures and pseudo-codes.

- Insertion Sort:

- Merge Sort:
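As a concrete reference for the analysis below, a minimal C++ sketch of insertion sort (ascending order; the function name is illustrative) could look like this:

#include <vector>

// Insertion sort sketch: grow a sorted prefix one element at a time.
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int x = a[i];                   // element to insert into the sorted prefix a[0..i-1]
        std::size_t j = i;
        while (j > 0 && a[j - 1] > x) { // shift larger elements one slot to the right
            a[j] = a[j - 1];
            --j;
        }
        a[j] = x;                       // place x in its rightful position
    }
}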

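Similarly, a minimal C++ sketch of top-down merge sort, called as mergeSort(a, 0, a.size()), could be:

#include <algorithm>
#include <vector>

// Merge sort sketch on the half-open range a[lo..hi).
void mergeSort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (hi - lo <= 1)
        return;                             // 0 or 1 elements: already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);                  // sort left half
    mergeSort(a, mid, hi);                  // sort right half

    std::vector<int> tmp;                   // merge the two sorted halves
    tmp.reserve(hi - lo);
    std::size_t i = lo, j = mid;
    while (i < mid && j < hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) tmp.push_back(a[i++]);
    while (j < hi)  tmp.push_back(a[j++]);
    std::copy(tmp.begin(), tmp.end(), a.begin() + lo);
}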
Problem 3: Order Statistics (array):

An order statistics problem means finding the ith smallest/largest element in an array, for instance finding
the minimum, maximum, median, or second largest number.

The code for this type of problem depends on what is required. For example, finding the maximum element in an
array can be performed like this:

SET Max to array[0]
FOR i = 1 to array length - 1
    IF array[i] > Max THEN
        SET Max to array[i]
    ENDIF
ENDFOR
PRINT Max
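A direct C++ translation of this pseudo-code (assuming a non-empty array) would be:

#include <vector>

// Scan once, keeping the largest element seen so far (n-1 comparisons).
int findMax(const std::vector<int>& array) {
    int max = array[0];
    for (std::size_t i = 1; i < array.size(); ++i) {
        if (array[i] > max)
            max = array[i];
    }
    return max;
}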

Part #2: Analysis & Recurrence Relations:

Let's analyze each of the previous algorithms separately. Where possible, we will construct a
recurrence relation and solve it. We will restrict ourselves to worst-case performance analysis.

Problem 1: Power

FastPower(a, b):
    if b = 1
        return a
    otherwise
        c := a*a
        ans := FastPower(c, [b/2])
        if b is odd
            return a*ans
        otherwise return ans
end

Worst case analysis:

Now assuming that you use a calculator that supports multiplication and division (i.e., you can do
multiplications and divisions in constant time), what would be the overall asymptotic running time of the
above algorithm (as a function of b)?

Note that each function call makes one recursive call with b replaced by [b/2]. In the worst case
b is always odd, and hence you have to do one extra operation of multiplying a by the returned answer.
By careful inspection: each call cuts b in half, and hence you reach the base case after about log b calls.
Each recursive call performs constant-time work aside from the recursion, which means the correct answer is:

Θ(log(b)): Constant work per digit in the binary expansion of b.

Now try to make the same analysis by writing a recurrence relation:


You have only one recursive call for each b, and zero (if b is even) or one (if b is odd) extra operation. In the
worst case we assume that you always have one extra operation for every b. Therefore, you can write the
recurrence as:

T(b) = T([b/2]) + 1, which is in fact no worse than T(b) = T(b/2) + 1. We drop the floor function to ease the process.

Now we can solve the recurrence in a number of ways. The most direct one here is the master method:

Applying the master method here, we are in the first case with d = 0, which yields O(log(b)).
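For reference, one common formulation of the master method, consistent with the case numbering used in this lab, covers recurrences of the form T(n) = a T(n/b) + O(n^d) with a >= 1, b > 1 and d >= 0:

Case 1: if a = b^d, then T(n) = O(n^d log n).
Case 2: if a < b^d, then T(n) = O(n^d).
Case 3: if a > b^d, then T(n) = O(n^(log_b(a))).

Here a = 1, b = 2 and d = 0, so a = b^d and the first case applies.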

Problem 2: Sort (Insertion Sort)

As shown above, insertion sort iterates over every element x in the array. At each
element x, the sub-array to the left of x is already sorted from previous iterations, while the
one on the right is still unvisited. The first thing to conclude from this is that we have an outer loop
covering all n elements. Inside this loop, we move x to its rightful position inside the left (sorted) sub-array. The worst
case occurs when x is smaller than all elements on its left, in which case all of them have to be shifted right,
which costs k operations, where k is the current index of x.

From this analysis we can infer that in the kth iteration of the outer loop we perform at most k operations.
That is, we have at most 1 operation in the first outer iteration, at most 2 in the second, 3 in the third, and so on. In other
words, we have 1 + 2 + 3 + … + k + … + n operations = n(n+1)/2 = O(n^2).

Now we need to reach the same conclusion using a recurrence relation. At any point in time, the cost of
sorting an array of size n equals the cost of sorting an array of size n-1 plus the cost of inserting the nth element
into its proper place, which costs n operations in the worst case. We can express this relation using the following
recurrence:

T(n) = T(n-1) + O(n), or simply T(n) = T(n-1) + n.

Note that this form cannot be solved using the master method above (just compare it to the master method's
general form). But we can still solve it easily using simple substitution of recursive terms into the recursive
formula, as in the listing below.
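Expanding the recursive term repeatedly (taking T(1) = 1 for the single-element base case):

T(n) = T(n-1) + n
T(n-1) = T(n-2) + (n-1)
…
T(2) = T(1) + 2
T(1) = 1

T(n) = T(n-1) + n = T(n-2) + (n-1) + n = … = 1 + 2 + … + (n-1) + n = n(n+1)/2 = O(n^2)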

Hint: As you may have guessed, this method is much easier when the recursive term is expressed in terms
of the (n-k)th term, while the master method is more direct when the recursive term is expressed in terms of
the (n/k)th term.

Hint: Not all recurrences are as easy to solve with this method as this example. Many recurrences of this
form may require advanced recurrence-solving techniques you've studied in math courses. (Take a breath;
we will not encounter such examples in this undergraduate course.)

Hint: The master method can't solve every recurrence in which the nth term is expressed in terms of (n/k)th
terms; it solves only those conforming to its general form. For example, you can't solve this recurrence using
the master method:
T(n) = 2T(n/4) + T(n/2) + O(n).
Usually, such a recurrence is solved using the recurrence tree method discussed later in this section.

Problem 2: Sort (Merge Sort)

The merge sort algorithm can be analyzed easily by both the master method and the recurrence tree. For
the master method, it is easy to see that sorting an array of size n can be carried out by sorting
2 arrays of size n/2, followed by n operations to combine both half arrays. Therefore, we can write the
recurrence relation as:

T(n) = 2T(n/2) + O(n), which can be solved using the first case of the master method, yielding O(n log n)
performance.

However, we can even get along without the master method here, just by drawing the recurrence tree.
We can readily figure out that we have a tree of depth log n, with each level doing at most
n operations, which results in a total of n log n operations.
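In a bit more detail: at level i of the tree there are 2^i sub-problems, each of size n/2^i, and merging at that level costs about n/2^i operations per sub-problem, so each level does about 2^i * (n/2^i) = n operations in total. With log n levels, the total is about n log n operations.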

Note that this method is more general and powerful than the master method, as it can derive many
solutions that the master method fails on. Nevertheless, this method is tricky and needs very careful
processing, especially when the tree is not balanced.

Problem 3: Order Statistics (Max)

The method is simple to analyze. We need exactly n-1 comparisons to find the maximum of n
unordered elements sequentially. If we reformulate the algorithm as a recurrence relation, we can define
the max of n elements as the max of the first n-1 elements compared with the last element. That is:
T(n) = T(n-1) + 1.
Using the substitution method (as in insertion sort):

T(n) = T(n-1) + 1
T(n-1) = T(n-2) + 1
…..
T(2) = T(1) + 1
T(1) = 0

T(n) = T(n-1) + 1 = T(n-2) + 1 + 1 = T(n-3) + 1 + 1 +1 … = n-1

We will develop this example further in Part #4.

Part #3: Extra Problems:

Problem 1:

Arrange the following functions in increasing order of growth rate (with g(n) following f(n) in your list if
and only if f(n)=O(g(n))).

a) n^2 log(n)
b) 2^n
c) 2^(2^n)
d) n^(log n)
e) n^2

Answer: e, a, d, b, c

One approach is to graph these functions for large values of n. Once in a while this can be misleading,
however. Another useful trick is to take logarithms and see what happens.
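For example, taking logarithms gives log(n^2) = 2 log(n), log(n^2 log(n)) = 2 log(n) + log(log(n)), log(n^(log n)) = (log(n))^2, log(2^n) = n, and log(2^(2^n)) = 2^n, which makes the ordering e, a, d, b, c apparent.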

Problem 2:

3-way-Merge Sort : Suppose that instead of dividing in half at each step of Merge Sort, you divide into
thirds, sort each third, and finally combine all of them using a three-way merge subroutine. What is the
overall asymptotic running time of this algorithm? (Hint: Note that the merge step can still be implemented
in O(n) time.)

Answer:
n log(n): There is still a logarithmic number of levels, and the overall amount of work at each level is still
linear. Use either the recurrence tree method or the master method.
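For instance, the recurrence here becomes T(n) = 3T(n/3) + O(n); with a = 3, b = 3 and d = 1 we have a = b^d, so the first case of the master method again gives O(n log n) (changing the base of the logarithm only changes a constant factor).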

Problem 3:

k-way Merge Sort: Suppose you are given k sorted arrays, each with n elements, and you want to combine
them into a single array of kn elements. Consider the following approach. Using the merge subroutine
taught in lecture, you merge the first 2 arrays, then merge the 3rd given array with this merged version of the
first two arrays, then merge the 4th given array with the merged version of the first three arrays, and so on
until you merge in the final (kth) input array. What is the running time taken by this successive merging
algorithm, as a function of k and n? (Optional: can you think of a faster way to do the k-way merge
procedure ?)

Answer:
Θ(nk^2):

For the upper bound, the merged list size is always O(kn), merging is linear in the size of the larger array,
and there are k iterations. For the lower bound, each of the last k/2 merges takes Ω(kn) time. Alternatively,
think of the process as successive merges with 2n operations in the first merge, 3n in the second, 4n in the
third, and so on until the kth merge: 2n + 3n + … + kn = n(2 + 3 + … + k) = n((1 + 2 + 3 + … + k) - 1) = n(k(k+1)/2 - 1) = Θ(nk^2).
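For illustration, a C++ sketch of this successive merging procedure (using the standard library's linear-time std::merge; the function name is illustrative) might look like:

#include <algorithm>
#include <iterator>
#include <vector>

// Merge k sorted arrays one at a time into a single sorted result.
// Total work is Theta(n k^2) as analyzed above.
std::vector<int> successiveMerge(const std::vector<std::vector<int>>& arrays) {
    std::vector<int> result;
    for (const auto& next : arrays) {
        std::vector<int> merged;
        merged.reserve(result.size() + next.size());
        // linear-time merge of the accumulated result with the next array
        std::merge(result.begin(), result.end(),
                   next.begin(), next.end(),
                   std::back_inserter(merged));
        result = std::move(merged);
    }
    return result;
}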

Part #4: Lab work

Write an algorithm that finds both the smallest and largest numbers in a list of n numbers. Try to find a
method that does at most 1.5n comparisons of array items. Show the full analysis.

Note: Analysis is discussed on the board; students shall implement Algorithm 2 in the lab time.

Algorithm 1:

Although Algorithm 1 does the job correctly, we will spice up the analysis a little by approaching the
problem recursively.

Algorithm 2:

1. The first step is the same as in the previous algorithm: compare each two adjacent elements
together to obtain an array of ordered pairs [(n/2) comparisons].
2. The second step is to merge adjacent pairs together (as in merge sort). The merging method
compares only the first elements of the pairs together and the last elements of the pairs together, and makes
the necessary swaps to keep the minimum at the front of the merged list and the maximum at its end.
3. Continue merging until the whole array is merged. At the end, the minimum is located in the first
element and the maximum in the last element.

Analysis:

The analysis here is a bit similar to the merge sort problem. We have a tree with log n levels. What is different
here is the amount of work at each level. Whereas we have O(n) work at each level in merge sort, here we
have a different amount of work for each level. At the very top level (the last step), we have only two
comparisons. At the next level we have 4 comparisons, then 8, and so on until you reach the final level,
where you have n/2 comparisons.
To sum up, the number of comparisons = 2 + 4 + 8 + … + n/2 = (1 + 2 + 4 + 8 + … + n/2) – 1.

Useful formula: A full binary tree with k leaves contains exactly 2k - 1 nodes, with 1 node at the root, 2 at the next
level, 4 at the next, … and k at the last level. In other words, 1 + 2 + 4 + … + k = 2k - 1.
Now set k = n/2: (1 + 2 + 4 + 8 + … + n/2) = n - 1.

Thus the number of comparisons above = (1 + 2 + 4 + 8 + … + n/2) – 1 = n - 2 comparisons (in the second and
third steps).

Adding this to the first step's n/2 comparisons, we have exactly 1.5n – 2 comparisons in total.

Note that the master method cannot help here, as it computes the performance in big-O notation
while we need the exact number of comparisons.

We can infer the recurrence relation just for practice here: for an array of size n, divide it into
two arrays of size n/2, and once we obtain the max and min of each of them, we need two comparisons to obtain
the overall max and min respectively. That is, the recurrence relation is:

T(n) = 2T(n/2) + 2.

This can be solved directly with the master method (third case) to give O(n), but it doesn't give information
about the exact constants.
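For illustration, a recursive C++ sketch that follows this recurrence (a divide-and-conquer variant; not necessarily identical to Algorithm 2 above, whose iterative pairwise version is what students implement in the lab) could be:

#include <algorithm>
#include <utility>
#include <vector>

// Returns {min, max} of a[lo..hi] (inclusive); assumes lo <= hi.
// Satisfies T(n) = 2T(n/2) + 2, i.e. about 1.5n - 2 comparisons.
std::pair<int, int> minMax(const std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (lo == hi)
        return {a[lo], a[lo]};                       // one element: both min and max
    if (hi - lo == 1)                                // two elements: a single comparison
        return a[lo] < a[hi] ? std::make_pair(a[lo], a[hi])
                             : std::make_pair(a[hi], a[lo]);
    std::size_t mid = lo + (hi - lo) / 2;
    auto left  = minMax(a, lo, mid);                 // min/max of left half
    auto right = minMax(a, mid + 1, hi);             // min/max of right half
    // two extra comparisons to combine the halves
    return {std::min(left.first, right.first), std::max(left.second, right.second)};
}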

Part #5: Exercises

These practice problems are intended for student practice at home and are not to be submitted as
homework nor discussed in labs.

1) You are a given a unimodal array of n distinct elements, meaning that its entries are in increasing order
up until its maximum element, after which its elements are in decreasing order. Give an algorithm to
compute the maximum element that runs in O(log n) time.

2) You are given a sorted (from smallest to largest) array A of n distinct integers which can be positive,
negative, or zero. You want to decide whether or not there is an index i such that A[i] = i. Design the
fastest algorithm that you can for solving this problem.

3) You are given as input an unsorted array of n distinct numbers, where n is a power of 2. Give an
algorithm that identifies the second-largest number in the array, and that uses at
most n + log2(n) − 2 comparisons.

Good Luck

