ADA Notes Module 1
In computer science and mathematics, an algorithm is a finite sequence of unambiguous
instructions for solving a problem in a step-by-step manner. This step-by-step procedure for
accomplishing a task is known as an algorithm.
Analyzing an Algorithm:
1. Efficiency.
Time efficiency, indicating how fast the algorithm runs,
Space efficiency, indicating how much extra memory it uses.
2. Simplicity.
An algorithm should be precisely defined and investigated with
mathematical expressions.
Simpler algorithms are easier to understand and easier to program.
Simple algorithms usually contain fewer bugs.
Coding an Algorithm
Most algorithms are destined to be ultimately implemented as computer
programs. Programming an algorithm presents both a peril and an
opportunity.
A working program provides an additional opportunity in allowing an
empirical analysis of the underlying algorithm. Such an analysis is based on
timing the program on several inputs and then analyzing the results
obtained.
CS6404 __ Design and Analysis of Algorithms _ Unit I ______1.9
TABLE 1.1 Values (approximate) of several functions important for analysis of algorithms
n       √n        log2 n   n       n log2 n   n^2       n^3       2^n        n!
1       1         0        1       0          1         1         2          1
2       1.4       1        2       2          4         8         4          2
4       2         2        4       8          16        64        16         24
8       2.8       3        8       2.4•10^1   64        5.1•10^2  2.6•10^2   4.0•10^4
10      3.2       3.3      10      3.3•10^1   10^2      10^3      10^3       3.6•10^6
16      4         4        16      6.4•10^1   2.6•10^2  4.1•10^3  6.5•10^4   2.1•10^13
10^2    10        6.6      10^2    6.6•10^2   10^4      10^6      1.3•10^30  9.3•10^157
10^3    31        10       10^3    1.0•10^4   10^6      10^9      —          —
10^4    10^2      13       10^4    1.3•10^5   10^8      10^12     —          —
10^5    3.2•10^2  17       10^5    1.7•10^6   10^10     10^15     —          —
10^6    10^3      20       10^6    2.0•10^7   10^12     10^18     —          —
(For n ≥ 10^3, the values of 2^n and n! are too large to list: a very big computation.)
In the worst case, either no element matches or the first matching element is the last one on
the list. In the best case, the first element on the list matches.
Worst-case efficiency
The worst-case efficiency of an algorithm is its efficiency for the worst case input of size n.
The algorithm runs the longest among all possible inputs of that size.
For the input of size n, the running time is Cworst(n) = n.
Yet another type of efficiency is called amortized efficiency. It applies not to a single run of
an algorithm but rather to a sequence of operations performed on the same data structure.
Asymptotic notation is a notation used to make meaningful statements about the
efficiency of a program.
The efficiency analysis framework concentrates on the order of growth of an algorithm’s
basic operation count as the principal indicator of the algorithm’s efficiency.
To compare and rank such orders of growth, computer scientists use three notations:
O - Big oh notation
Ω - Big omega notation
Θ - Big theta notation
Let t(n) and g(n) be any nonnegative functions defined on the set of natural numbers.
Here t(n) is the algorithm’s running time, usually indicated by its basic operation count C(n),
and g(n) is some simple function to compare the count with.
Example 1: Prove that (1/2)n(n − 1) ∈ Θ(n^2).
First, prove the right inequality (the upper bound):
(1/2)n(n − 1) = (1/2)n^2 − (1/2)n ≤ (1/2)n^2 for all n ≥ 0.
Second, prove the left inequality (the lower bound):
(1/2)n(n − 1) = (1/2)n^2 − (1/2)n ≥ (1/2)n^2 − (1/2)n • (1/2)n = (1/4)n^2 for all n ≥ 2.
Hence, (1/2)n(n − 1) ∈ Θ(n^2), where c2 = 1/2, c1 = 1/4, and n0 = 2.
Note: asymptotic notation can be thought of as "relational operators" for functions similar to the
corresponding relational operators for values.
= ⇒ Θ(), ≤ ⇒ O(), ≥ ⇒ Ω(), < ⇒ o(), > ⇒ ω()
THEOREM: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
(The analogous assertions are true for the Ω and Θ notations as well.)
PROOF: The proof extends to orders of growth the following simple fact about four arbitrary real
numbers a1, b1, a2, b2: if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative integer
n1 such that
t1(n) ≤ c1g1(n) for all n ≥ n1.
Similarly, since t2(n) ∈ O(g2(n)),
t2(n) ≤ c2g2(n) for all n ≥ n2.
Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that we can use
both inequalities. Adding them yields the following:
t1(n) + t2(n) ≤ c1g1(n) + c2g2(n)
≤ c3g1(n) + c3g2(n)
= c3[g1(n) + g2(n)]
≤ 2c3 max{g1(n), g2(n)}.
Hence, t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants c and n0 required by the
definition O being 2c3 = 2 max{c1, c2} and max{n1, n2}, respectively.
The property implies that the algorithm’s overall efficiency will be determined by the part
with a higher order of growth, i.e., its least efficient part.
Summation formulas
Two summation rules and two formulas used frequently in analysis:
(R1) Σ (i = l to u) c•ai = c Σ (i = l to u) ai
(R2) Σ (i = l to u) (ai ± bi) = Σ (i = l to u) ai ± Σ (i = l to u) bi
(S1) Σ (i = l to u) 1 = u − l + 1 (in particular, Σ (i = 1 to n) 1 = n)
(S2) Σ (i = 1 to n) i = n(n + 1)/2 ≈ (1/2)n^2 ∈ Θ(n^2)
EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n.
Since n! = 1 • 2 • . . . • (n − 1) • n = (n − 1)! • n for n ≥ 1, and 0! = 1 by definition, we can
compute F(n) = F(n − 1) • n with the following recursive algorithm. (ND 2015)
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n
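A minimal Python rendering of this pseudocode (the function name is my own, not part of the original notes):

```python
def factorial(n):
    """Compute n! recursively, mirroring ALGORITHM F(n)."""
    if n == 0:
        return 1                  # base case: 0! = 1 by definition
    return factorial(n - 1) * n   # F(n) = F(n - 1) * n

print(factorial(5))  # → 120
```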
Algorithm analysis
For simplicity, we consider n itself as an indicator of this algorithm’s input size (rather
than the number of bits in its binary representation).
The basic operation of the algorithm is multiplication, whose number of executions we
denote M(n). Since the function F(n) is computed according to the formula F(n) = F(n −1)•n
for n > 0.
The number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0,
where M(n − 1) multiplications are spent to compute F(n − 1), and one more multiplication is
needed to multiply the result by n.
Recurrence relations
The last equation defines the sequence M(n) that we need to find. This equation defines
M(n) not explicitly, i.e., as a function of n, but implicitly as a function of its value at another point,
namely n − 1. Such equations are called recurrence relations or recurrences.
Solve the recurrence relation M(n) = M(n − 1) + 1, i.e., find an explicit formula for
M(n) in terms of n only.
To determine a solution uniquely, we need an initial condition that tells us the value with
which the sequence starts. We can obtain this value by inspecting the condition that makes the
algorithm stop its recursive calls:
if n = 0 return 1.
This tells us two things. First, since the calls stop when n = 0, the smallest value of n for
which this algorithm is executed and hence M(n) defined is 0. Second, by inspecting the
pseudocode’s exiting line, we can see that when n = 0, the algorithm performs no multiplications.
Thus, the recurrence relation and initial condition for the algorithm’s number of multiplications
M(n):
M(n) = M(n − 1) + 1 for n > 0,
M(0) = 0 for n = 0.
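Solving this recurrence by backward substitutions gives M(n) = n. A small Python sketch (an instrumented variant written for illustration) confirms this by counting the multiplications directly:

```python
def factorial_count(n):
    """Return (n!, number of multiplications); the count realizes M(n)."""
    if n == 0:
        return 1, 0                      # M(0) = 0: no multiplications
    value, mults = factorial_count(n - 1)
    return value * n, mults + 1          # M(n) = M(n - 1) + 1

print(factorial_count(5))  # → (120, 5), i.e., M(5) = 5
```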
ALGORITHM TOH(n, A, C, B)
//Moves n disks from source peg A to destination peg C, using auxiliary peg B
//Input: n disks and 3 pegs A, B, and C
//Output: Disks moved to destination as in the source order
if n = 1
    Move disk from A to C
else
    TOH(n − 1, A, B, C)   //move top n − 1 disks from A to B using C
    Move disk from A to C
    TOH(n − 1, B, C, A)   //move n − 1 disks from B to C using A
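The recursive scheme can be sketched in Python (names are mine, for illustration). After the top n − 1 disks reach the auxiliary peg, the largest disk moves from source to destination, which contributes the extra single move in the count:

```python
def toh(n, source, target, auxiliary, moves):
    """Append the moves that transfer n disks from source to target."""
    if n == 1:
        moves.append((source, target))
    else:
        toh(n - 1, source, auxiliary, target, moves)  # top n-1 disks aside
        moves.append((source, target))                # largest disk across
        toh(n - 1, auxiliary, target, source, moves)  # n-1 disks on top of it

moves = []
toh(3, 'A', 'C', 'B', moves)
print(len(moves))  # → 7, matching M(3) = 2^3 - 1
```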
Algorithm analysis
The number of moves M(n) depends on n only, and we get the following recurrence
equation for it: M(n) = M(n − 1) + 1+ M(n − 1) for n > 1.
With the obvious initial condition M(1) = 1, we have the following recurrence relation for the
number of moves M(n):
M(n) = 2M(n − 1) + 1 for n > 1,
M(1) = 1.
We solve this recurrence by the same method of backward substitutions:
M(n) = 2M(n − 1) + 1                            sub. M(n − 1) = 2M(n − 2) + 1
     = 2[2M(n − 2) + 1] + 1
     = 2^2 M(n − 2) + 2 + 1                     sub. M(n − 2) = 2M(n − 3) + 1
     = 2^2 [2M(n − 3) + 1] + 2 + 1
     = 2^3 M(n − 3) + 2^2 + 2 + 1
     …
     = 2^i M(n − i) + 2^(i−1) + 2^(i−2) + . . . + 2 + 1 = 2^i M(n − i) + 2^i − 1.
     …
Since the initial condition is specified for n = 1, which is achieved for i = n − 1,
M(n) = 2^(n−1) M(n − (n − 1)) + 2^(n−1) − 1 = 2^(n−1) M(1) + 2^(n−1) − 1 = 2^(n−1) + 2^(n−1) − 1 = 2^n − 1.
Thus, we have an exponential-time algorithm.
EXAMPLE 3: An investigation of a recursive version of the algorithm which finds the number of
binary digits in the binary representation of a positive decimal integer.
ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1
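In Python, floor division (//) implements ⌊n/2⌋, so the algorithm can be sketched as (function name is mine):

```python
def bin_rec(n):
    """Number of binary digits in a positive integer n, as in BinRec."""
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1   # // gives the floor of n/2

print(bin_rec(16))  # → 5 (16 is 10000 in binary)
```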
Algorithm analysis
The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more
addition is made by the algorithm to increase the returned value by 1. This leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Since the recursive calls end when n is equal to 1 and no additions are made then, the initial
condition is A(1) = 0. To solve the recurrence, assume n = 2^k, so that
A(2^k) = A(2^(k−1)) + 1 for k > 0, A(2^0) = 0,
and apply backward substitutions:
A(2^k) = A(2^(k−1)) + 1                              substitute A(2^(k−1)) = A(2^(k−2)) + 1
       = [A(2^(k−2)) + 1] + 1 = A(2^(k−2)) + 2       substitute A(2^(k−2)) = A(2^(k−3)) + 1
       = [A(2^(k−3)) + 1] + 2 = A(2^(k−3)) + 3
       ...
       = A(2^(k−i)) + i
       ...
       = A(2^(k−k)) + k.
Thus, we end up with A(2^k) = A(1) + k = k, or, after returning to the original variable n = 2^k and
hence k = log2 n,
A(n) = log2 n ∈ Θ(log n).
EXAMPLE 1: Consider the problem of finding the value of the largest element in a list of n
numbers. Assume that the list is implemented as an array for simplicity.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ←A[0]
for i ←1 to n − 1 do
if A[i]>maxval
maxval←A[i]
return maxval
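A direct Python transcription of MaxElement (for illustration only):

```python
def max_element(a):
    """Return the largest value in a non-empty list, as in MaxElement."""
    maxval = a[0]
    for i in range(1, len(a)):   # i from 1 to n-1
        if a[i] > maxval:        # basic operation: one comparison per pass
            maxval = a[i]
    return maxval

print(max_element([3, 1, 9, 2]))  # → 9
```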
Algorithm analysis
The measure of an input’s size here is the number of elements in the array, i.e., n.
There are two operations in the for loop’s body:
o The comparison A[i]> maxval and
o The assignment maxval←A[i].
The comparison operation is considered as the algorithm’s basic operation, because the
comparison is executed on each repetition of the loop and not the assignment.
The number of comparisons will be the same for all arrays of size n; therefore, there is no
need to distinguish among the worst, average, and best cases here.
Let C(n) denote the number of times this comparison is executed. The algorithm makes
one comparison on each execution of the loop, which is repeated for each value of the
loop’s variable i within the bounds 1 and n − 1, inclusive. Therefore, the sum for C(n) is
calculated as follows:
C(n) = Σ (i = 1 to n − 1) 1,
i.e., 1 is summed n − 1 times, so
C(n) = Σ (i = 1 to n − 1) 1 = n − 1 ∈ Θ(n).
EXAMPLE 2: Consider the element uniqueness problem: check whether all the elements in a
given array of n elements are distinct.
ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct and “false” otherwise
for i ←0 to n − 2 do
for j ←i + 1 to n − 1 do
if A[i] = A[j] return false
return true
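A Python sketch of UniqueElements (names are mine):

```python
def unique_elements(a):
    """Return True iff all elements of a are distinct, as in UniqueElements."""
    n = len(a)
    for i in range(n - 1):           # i from 0 to n-2
        for j in range(i + 1, n):    # j from i+1 to n-1
            if a[i] == a[j]:
                return False
    return True

print(unique_elements([1, 2, 3]), unique_elements([1, 2, 1]))  # → True False
```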
Algorithm analysis
The natural measure of the input’s size here is again n (the number of elements in the array).
Since the innermost loop contains a single operation (the comparison of two elements), we
should consider it as the algorithm’s basic operation.
The number of element comparisons depends not only on n but also on whether there are
equal elements in the array and, if there are, which array positions they occupy. We will
limit our investigation to the worst case only.
One comparison is made for each repetition of the innermost loop, i.e., for each value of the
loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer
loop, i.e., for each value of the loop variable i between its limits 0 and n − 2. Accordingly,
Cworst(n) = Σ (i = 0 to n − 2) Σ (j = i + 1 to n − 1) 1 = Σ (i = 0 to n − 2) (n − 1 − i)
          = n(n − 1)/2 ∈ Θ(n^2).
EXAMPLE 3: Consider matrix multiplication. Given two n × n matrices A and B, find the time
efficiency of the definition-based algorithm for computing their product C = AB. By definition, C
is an n × n matrix whose elements are computed as the scalar (dot) products of the rows of matrix A
and the columns of matrix B:
where C[i, j ]= A[i, 0]B[0, j]+ . . . + A[i, k]B[k, j]+ . . . + A[i, n − 1]B[n − 1, j] for every pair of
indices 0 ≤ i, j ≤ n − 1.
The total number of multiplications M(n) is expressed by the following triple sum:
M(n) = Σ (i = 0 to n − 1) Σ (j = 0 to n − 1) Σ (k = 0 to n − 1) 1.
Now, we can compute this sum by using formula (S1) and rule (R1):
M(n) = Σ (i = 0 to n − 1) Σ (j = 0 to n − 1) n = Σ (i = 0 to n − 1) n^2 = n^3.
To estimate the running time of the algorithm on a particular machine, we can multiply the count
by cm, the time of one multiplication on that machine:
T(n) ≈ cm M(n) = cm n^3.
If we consider the time spent on the additions too, then the total time on the machine is
T(n) ≈ cm M(n) + ca A(n) = cm n^3 + ca n^3 = (cm + ca) n^3,
where ca is the time of one addition.
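The definition-based algorithm can be sketched in Python; the triple loop makes the n^3 multiplications explicit (a minimal illustration, not an efficient implementation):

```python
def mat_mul(A, B):
    """Definition-based product of two n-by-n matrices:
    C[i][j] = sum over k of A[i][k] * B[k][j]."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):   # n multiplications per entry -> n^3 total
                C[i][j] += A[i][k] * B[k][j]
    return C

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```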
EXAMPLE 4 The following algorithm finds the number of binary digits in the binary
representation of a positive decimal integer.
ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ←1
while n > 1 do
count ←count + 1
n ← ⌊n/2⌋
return count
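The iterative count can be written in Python as (a sketch; the name is mine):

```python
def binary_digits(n):
    """Count the binary digits of a positive integer n by repeated halving."""
    count = 1
    while n > 1:
        count += 1
        n //= 2          # n ← ⌊n/2⌋
    return count

print(binary_digits(16))  # → 5
```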
Algorithm analysis
An input’s size is n.
The loop variable takes on only a few values between its lower and upper limits; since the
value of n is about halved on each repetition of the loop, the answer should be about log2 n.
The exact number of times the comparison n > 1 is executed is ⌊log2 n⌋ + 1, which is the
number of bits in the binary representation of n.
Selection sort
Selection sort is a simple comparison-based sorting algorithm. It divides the input list into
two parts: the sublist of items already sorted, which is built up from left to right at the front
(left) of the list, and the sublist of items remaining to be sorted that occupy the rest of the list.
Here's a step-by-step breakdown of how selection sort works:
1. Start with the first element: Consider the first element of the list as the minimum.
2. Find the minimum element in the unsorted part: Scan the entire list to find the
smallest element.
3. Swap the minimum element with the first unsorted element: Swap the found
minimum element with the first element of the unsorted part.
4. Move the boundary of the sorted part: Increase the boundary of the sorted part by
one, so the sorted part now includes the minimum element found in step 2.
5. Repeat until the list is sorted: Repeat the process for the next unsorted element,
treating the boundary of the sorted and unsorted parts as shifting to the right.
Algorithm
selectionSort(arr, n)
    for i from 0 to n-2
        min_index = i
        for j from i+1 to n-1
            if arr[j] < arr[min_index]
                min_index = j
        if min_index != i
            swap arr[i] and arr[min_index]
Pseudo code (bubble sort, shown for comparison)
bubbleSort(arr, n)
for i from 0 to n-1
for j from 0 to n-i-2
if arr[j] > arr[j+1]
swap arr[j] and arr[j+1]
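The five steps of selection sort above can be sketched in Python (an illustrative in-place version; names are mine):

```python
def selection_sort(arr):
    """Sort arr in place by repeatedly selecting the minimum of the unsorted part."""
    n = len(arr)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):            # scan the unsorted part
            if arr[j] < arr[min_index]:
                min_index = j
        if min_index != i:                   # move the minimum to the boundary
            arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # → [11, 12, 22, 25, 64]
```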
Sequential search, also known as linear search, is a simple search algorithm used to find a
particular element in a list. It checks each element of the list sequentially until a match is
found or the end of the list is reached.
Pseudocode
sequentialSearch(arr, target)
for i from 0 to length(arr) - 1
if arr[i] == target
return i
return -1
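A Python transcription of the pseudocode (for illustration):

```python
def sequential_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1

print(sequential_search([4, 2, 7], 7))  # → 2
```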
The time complexity of sequential search depends on the position of the target element in the
list, or on whether the element is present at all: in the best case the target is the first element
and one comparison suffices (O(1)); in the worst case the target is last or absent and all n
elements are checked (O(n)); on average about n/2 comparisons are made, still O(n).
Brute force search is a straightforward and simple search algorithm that involves checking
each possible solution until the correct one is found. It's often used when the problem size is
small or when other search algorithms are impractical. The brute force approach is typically
applied to a variety of search problems, including string matching, optimization problems,
and combinatorial problems.
1. Generate all possible candidates: Enumerate all possible solutions to the problem.
2. Check each candidate: For each candidate, check if it satisfies the given criteria or matches
the target.
3. Return the solution: If a candidate matches the criteria, return it as the solution.
4. Continue until a solution is found: If no candidates match, continue until all possibilities are
exhausted.
Let's take an example of brute force search applied to string matching, where we want to find
a substring within a string.
Pseudo code
bruteForceSearch(text, pattern)
n = length of text
m = length of pattern
for i from 0 to n - m
j = 0
while j < m and text[i + j] == pattern[j]
j = j + 1
if j == m
return i
return -1
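A Python transcription of the pseudocode (illustrative; names are mine):

```python
def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):               # every possible starting position
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                           # the whole pattern matched
            return i
    return -1

print(brute_force_match("abracadabra", "cad"))  # → 4
```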
The time complexity of brute force search depends on the specific problem being solved. For
the string matching example, the worst case requires up to m comparisons at each of the
n − m + 1 starting positions, giving O(n • m), where n is the length of the text and m is the
length of the pattern.