6 Complexity of Algorithm
The term algorithm complexity measures how many steps an algorithm requires to solve the given problem. It evaluates the order of the count of operations executed by an algorithm as a function of the input data size.
To assess the complexity, the order (approximation) of the count of operations is considered rather than the exact number of steps.
The complexity can take any form, such as constant, logarithmic, linear, n*log(n), quadratic, cubic, exponential, etc. It is nothing but the order (constant, logarithmic, linear, and so on) of the number of steps encountered during the completion of a particular algorithm. To make it even more precise, we often call the complexity of an algorithm its "running time".
Since constants do not have a significant effect on the order of the count of operations, it is better to ignore them. Thus, algorithms that perform N, N/2 or 3*N operations on the same number of elements to solve a particular problem are all considered linear and roughly equally efficient.
Algorithms are generally written in one of two forms:
1. Iterative Algorithm: In the iterative approach, the function runs repeatedly until the condition is met or fails. It involves a looping construct.
2. Recursive Algorithm: In the recursive approach, the function calls itself until a terminating condition is met. It involves a branching, self-calling structure.
However, it is worth noting that any program written using iteration can be rewritten using recursion; likewise, a recursive program can be converted to iteration, which makes the two forms equivalent to each other.
To analyze an iterative program, we count the number of times the loop is going to execute, whereas for a recursive program we set up and solve recurrence equations, i.e., we write a function F(n) in terms of F(n/2).
Suppose the program is neither iterative nor recursive. In that case, it can be concluded that the running time does not depend on the input data size: whatever the input size, the running time is a constant value. Thus, for such programs, the complexity will be O(1).
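For instance, the following sketch (the function name and data are our own illustration, not from the original) performs a fixed amount of work regardless of the input size, so it runs in O(1) time.

#include <stdio.h>

/* Performs the same few operations no matter how large n is: O(1). */
int middle_element(int arr[], int n)
{
    return arr[n / 2];    /* one index computation, one array access */
}

int main(void)
{
    int data[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", middle_element(data, 6));   /* prints 16 */
    return 0;
}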
Example1:
In the first example, we have an integer i and a for loop running from i equals 1 to n. Now
the question arises, how many times does the name get printed?
A(int n)
{
    int i;
    for (i = 1; i <= n; i++)    /* loop body executes n times */
        printf("Edward");
}
Since i runs from 1 to n, the above program prints "Edward" n times. Thus, the complexity will be O(n).
Example2:
A(int n)
{
    int i, j;
    for (i = 1; i <= n; i++)        /* outer loop: n passes */
        for (j = 1; j <= n; j++)    /* inner loop: n passes per outer pass */
            printf("Edward");
}
In this case, the outer loop runs n times, and for each of those passes the inner loop also runs n times. Thus, the time complexity will be O(n²).
Example3:
A(int n)
{
    int i = 1, S = 1;
    while (S <= n)
    {
        i++;
        S = S + i;          /* S grows by a larger step each time */
        printf("Edward");
    }
}
As we can see from the above example, we have two variables, i and S, and the loop condition S <= n: S starts at 1, and the loop stops as soon as the value of S becomes greater than n.
Here i increments in steps of one, while S increments by the current value of i; the growth of i is linear, but the growth of S depends on i.
Initially:
i = 1, S = 1
i = 2, S = 3
i = 3, S = 6
Since we don't know the value of n, let us suppose that the loop stops when i reaches some value k. Notice how the value of S increases: for i = 1, S = 1; i = 2, S = 3; i = 3, S = 6; i = 4, S = 10; ...
Thus, S is nothing but the sum of the first i natural numbers, i.e., by the time i reaches k, the value of S will be k(k + 1)/2.
For the loop to stop, k(k + 1)/2 has to be greater than n, and when we solve this inequality for k, we get k ≈ √(2n). Hence, it can be concluded that we get a complexity of O(√n) in this case.
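This estimate can be checked empirically by counting the loop's iterations and comparing the count with √(2n); the harness below is a minimal sketch (the function and variable names are our own, chosen for illustration).

#include <stdio.h>
#include <math.h>

/* Counts how many times the while-loop body executes for a given n. */
long count_iterations(long n)
{
    long i = 1, S = 1, steps = 0;
    while (S <= n)
    {
        i++;
        S = S + i;
        steps++;
    }
    return steps;
}

int main(void)
{
    long n;
    for (n = 100; n <= 1000000; n *= 10)
        printf("n = %7ld  steps = %5ld  sqrt(2n) = %8.1f\n",
               n, count_iterations(n), sqrt(2.0 * n));
    return 0;
}

The step count stays within a couple of units of √(2n), consistent with the O(√n) bound.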
Example4:

A(int n)
{
    if (n > 1)
        return A(n - 1);    /* constant work, then a call on n - 1 */
    return 1;               /* recursion stops here */
}
Solution:
Here we will use the simple back-substitution method to solve the above problem. Each call performs a constant amount of work and then recurses on n - 1, which gives the recurrence
T(n) = 1 + T(n - 1), with T(1) = 1 ... (Eqn. 1)
According to Eqn. (1), the algorithm will run as long as n > 1. Basically, n starts from a very large number and decreases gradually, one unit per call; expanding the recurrence gives T(n) = 1 + 1 + T(n - 2) = ... = (n - 1) + T(1) = n, so the complexity is O(n). When the case n = 1 is reached, the algorithm eventually stops, and such a terminating condition is called the anchor condition, base condition or stopping condition.
1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the divide & conquer technique involve three steps:
o Divide the original problem into a set of smaller subproblems.
o Conquer: solve every subproblem individually, generally recursively.
o Combine the solutions of the subproblems to get the solution of the original problem.
2. Greedy Algorithm: The greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize a given objective.
o The greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal one.
3. Dynamic Programming: Dynamic programming solves each subproblem only once and stores its answer for reuse. This is particularly helpful when the number of repeating subproblems is exponentially large. Dynamic programming is frequently related to optimization problems.
4. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the algorithm backtracks to the choice point, the place which presented different alternatives, and tries the next alternative, as shown in the sketch below.
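The sketch below illustrates the idea on a small, self-chosen problem (printing all permutations of 1..3; the problem and all names are our own, picked only to show the choose/recurse/undo pattern).

#include <stdio.h>

#define N 3

int perm[N];
int used[N + 1];                        /* used[v] = 1 if value v is already placed */

/* Fills perm[pos..N-1] with unused values; backtracks on every dead end. */
void permute(int pos)
{
    if (pos == N)                       /* all positions filled: one solution */
    {
        for (int i = 0; i < N; i++)
            printf("%d ", perm[i]);
        printf("\n");
        return;
    }
    for (int v = 1; v <= N; v++)        /* try each alternative at this choice point */
    {
        if (used[v])
            continue;
        used[v] = 1;                    /* choose */
        perm[pos] = v;
        permute(pos + 1);               /* search deeper */
        used[v] = 0;                    /* backtrack: undo the choice */
    }
}

int main(void)
{
    permute(0);
    return 0;
}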
The running time of an algorithm is typically analyzed in terms of four basic control structures:
1. Sequencing
2. If-then-else
3. For loop
4. While loop
1. Sequencing:
Suppose our algorithm consists of two parts, A and B, where A takes time tA and B takes time tB for computation. According to the sequence rule, the total computation time is tA + tB; according to the maximum rule, this computation time is θ(max(tA, tB)).
Example:
Computation time = tA + tB
= θ(max(tA, tB))
= θ(max(O(n), θ(n²))) = θ(n²)
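As a concrete sketch (the two phases below are our own illustration), an O(n) pass followed by an O(n²) pass runs in θ(n²) overall, because the quadratic part dominates the sum.

#include <stdio.h>

void sequenced(int n)
{
    long count = 0;

    /* Part A: a single pass, O(n). */
    for (int i = 0; i < n; i++)
        count++;

    /* Part B: a nested pass, O(n^2) -- this term dominates the total. */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;

    printf("n = %d, operations = %ld\n", n, count);   /* n + n*n */
}

int main(void)
{
    sequenced(1000);
    return 0;
}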
2. If-then-else:
The total computation time follows the condition rule for "if-then-else": the condition is evaluated, and then either the then-part (time tA) or the else-part (time tB) runs. According to the maximum rule, the worst-case computation time is max(tA, tB).
Example:
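A minimal sketch (the function and parameter names are assumptions made for illustration): the then-branch scans the whole array while the else-branch does constant work, so the worst case is max(O(n), O(1)) = O(n).

#include <stdio.h>

/* then-branch: O(n) scan; else-branch: O(1) lookup. Worst case O(n). */
long branch_cost(const int arr[], int n, int find_sum)
{
    if (find_sum)                       /* then-part: O(n) */
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += arr[i];
        return sum;
    }
    else                                /* else-part: O(1) */
        return arr[0];
}

int main(void)
{
    int data[] = {1, 2, 3, 4, 5};
    printf("%ld\n", branch_cost(data, 5, 1));   /* prints 15 */
    printf("%ld\n", branch_cost(data, 5, 0));   /* prints 1 */
    return 0;
}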
3. For loop:
The general format of a for loop is:

For (initialization; condition; update)
    Statement(s);

If the body P(i) takes the same constant time t on every iteration, then the loop

for i ← 1 to n
{
    P(i)
}

takes total time n · t.
If instead the computation time ti of P(i) varies as a function of i, then the total computation time for the loop is given not by a multiplication but by a sum, i.e., the loop

for i ← 1 to n
{
    P(i)
}

takes time t1 + t2 + ... + tn (the sum of the per-iteration times ti).
If the algorithm consists of nested for loops, then the total computation time is the corresponding double sum:

For i ← 1 to n
{
    For j ← 1 to n
    {
        P(i, j)
    }
}

takes time equal to the sum of tij over all pairs (i, j) with 1 ≤ i, j ≤ n.
Example:
Consider the following for loop and calculate its total computation time:

For i ← 2 to n-1
{
    For j ← 3 to i
    {
        Sum ← Sum + A[i][j]
    }
}
Solution:
For a given i, the inner loop executes max(0, i - 2) times; in particular, it does not execute at all for i = 2. The total number of executions of the assignment is therefore
1 + 2 + ... + (n - 3) = (n - 3)(n - 2)/2,
which is θ(n²).
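The closed form can be sanity-checked by direct counting; the harness below is a minimal sketch (all names are ours).

#include <stdio.h>

/* Counts how often the innermost statement runs and compares the
   count with the closed form (n - 3)(n - 2)/2 derived above. */
int main(void)
{
    long n;
    for (n = 10; n <= 10000; n *= 10)
    {
        long count = 0;
        for (long i = 2; i <= n - 1; i++)
            for (long j = 3; j <= i; j++)
                count++;
        printf("n = %6ld  count = %10ld  formula = %10ld\n",
               n, count, (n - 3) * (n - 2) / 2);
    }
    return 0;
}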
4. While loop:
The simple technique for analyzing a while loop is to find a function of the variables involved whose value decreases each time around the loop; for the loop to terminate, that value must remain a positive integer. By keeping track of how many times the value of this function decreases, one can obtain the number of repetitions of the loop. The other approach for analyzing a while loop is to treat it as a recursive algorithm.
Algorithm (finds the location LOC and the value MAX of the largest element of DATA[1..N]):
1. [Initialize] Set K := 1, LOC := 1 and MAX := DATA[1]
2. Repeat steps 3 and 4 while K ≤ N
3. If MAX < DATA[K], then: Set LOC := K and MAX := DATA[K]
4. Set K := K + 1
[End of step 2 loop]
5. Write: LOC, MAX
6. EXIT
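A direct C rendering of these steps (the only adaptation is C's 0-based indexing; the pseudocode above is 1-based):

#include <stdio.h>

/* Returns the largest element of DATA[0..n-1] and stores its
   index in *loc, mirroring steps 1-6 of the algorithm above. */
int array_max(const int DATA[], int n, int *loc)
{
    int MAX = DATA[0];                  /* step 1: initialize */
    *loc = 0;
    for (int k = 1; k < n; k++)         /* steps 2 and 4: advance K while K <= N */
    {
        if (MAX < DATA[k])              /* step 3: new maximum found */
        {
            MAX = DATA[k];
            *loc = k;
        }
    }
    return MAX;                         /* step 5: report LOC and MAX */
}

int main(void)
{
    int DATA[] = {3, 9, 2, 7, 9, 1};
    int loc;
    int max = array_max(DATA, 6, &loc);
    printf("LOC = %d, MAX = %d\n", loc + 1, max);   /* prints LOC = 2, MAX = 9 */
    return 0;
}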
Example:
The running time of the algorithm arrayMax, which computes the maximum element in an array of n integers, is O(n).
Solution:
Counting the primitive operations in each case gives:
1. Best case: 2 + 1 + n + 4(n - 1) + 1 = 5n
2. Worst case: 2 + 1 + n + 6(n - 1) + 1 = 7n - 2
The best case T(n) = 5n occurs when A[0] is the maximum element, so the body of the conditional never executes. The worst case T(n) = 7n - 2 occurs when the elements are sorted in increasing order, so the body executes on every iteration.
We may, therefore, apply the big-Oh definition with c = 7 and n0 = 1 and conclude that the running time of this algorithm is O(n).
Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.
For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence
T(n) = θ(1) if n = 1
T(n) = 2T(n/2) + θ(n) if n > 1
There are four main methods for solving recurrence relations:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method
1. Substitution Method:
The substitution method consists of two main steps:
1. Guess the form of the solution.
2. Use mathematical induction to find the constants and show that the guess works.
Example: Solve the recurrence T(n) = T(n/2) + 1 by the substitution method.
Solution:
We guess that the solution is T(n) ≤ c log n. Substituting this bound into the recurrence:
T(n) ≤ c log(n/2) + 1 = c log n - c + 1 ≤ c log n for every c ≥ 1.
Hence T(n) = O(log n).
Example: Solve the recurrence T(n) = 2T(n/2) + n, n > 1, by the substitution method.
Solution:
We guess that the solution is T(n) ≤ c n log n. Substituting this bound into the recurrence:
T(n) ≤ 2c(n/2) log(n/2) + n = c n log n - c n + n ≤ c n log n for every c ≥ 1.
Hence T(n) = O(n log n).
2. Iteration Method:
In the iteration method, we expand the recurrence repeatedly until it can be expressed in terms of the initial condition.
Example: Solve the recurrence
T(n) = 1 if n = 1
T(n) = 2T(n - 1) if n > 1
Solution:
T(n) = 2T(n - 1)
= 2[2T(n - 2)] = 2^2 T(n - 2)
= 4[2T(n - 3)] = 2^3 T(n - 3)
= 8[2T(n - 4)] = 2^4 T(n - 4)
...
After i expansions: T(n) = 2^i T(n - i) (Eq. 1)
Put n - i = 1, i.e., i = n - 1, in (Eq. 1):
T(n) = 2^(n-1) T(1)
= 2^(n-1) · 1 {T(1) = 1, given}
= 2^(n-1)
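The closed form is easy to verify by evaluating the recurrence directly; a minimal sketch (the function name is ours):

#include <stdio.h>

/* Evaluates T(n) = 2 * T(n - 1), T(1) = 1, by direct recursion. */
unsigned long long T(int n)
{
    if (n == 1)
        return 1;
    return 2 * T(n - 1);
}

int main(void)
{
    for (int n = 1; n <= 10; n++)
        printf("T(%2d) = %4llu   2^(n-1) = %4llu\n",
               n, T(n), 1ULL << (n - 1));
    return 0;
}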
Example: Solve the recurrence T(n) = T(n - 1) + 1 with T(1) = θ(1).
Solution:
T(n) = T(n - 1) + 1
= (T(n - 2) + 1) + 1 = T(n - 2) + 2
= (T(n - 3) + 1) + 2 = T(n - 3) + 3
= T(n - 4) + 4 = T(n - 5) + 5
...
= T(n - k) + k
Where k = n - 1:
T(n - k) = T(1) = θ(1)
T(n) = θ(1) + (n - 1) = 1 + n - 1 = n = θ(n).
3. Recursion Tree Method:
In a recursion tree, each node represents the cost of a single subproblem somewhere in the set of recursive invocations. The method proceeds in two stages:
1. We sum the costs within each level of the tree to obtain a set of per-level costs.
2. We then sum all per-level costs to determine the total cost of all levels of the recursion.
A recursion tree is best used to generate a good guess, which can then be verified by the substitution method.
Example 1:
Consider T(n) = 2T(n/2) + n². The root of the recursion tree costs n², its two children cost (n/2)² each, and in general the nodes at level i together contribute 2^i · (n/2^i)² = n²/2^i. The per-level costs form a decreasing geometric series, so the total cost is θ(n²).
Example 2:
Consider T(n) = T(n/4) + T(3n/4) + n. When we add the values across the levels of this recursion tree, we get a value of n for every level. The longest path from the root to a leaf follows the subproblems of size n, (3/4)n, (3/4)²n, ..., 1, so the tree has about log_(4/3) n levels, and the total cost is O(n log n).