Ads Unit 1
What is an algorithm?
An algorithm is a set of steps to complete a task.
For example, the task might be to make a cup of tea, and the algorithm is the sequence of steps for making it. More formally, an algorithm is "a set of steps to accomplish or complete a task that is described precisely enough that a computer can run it".
"Described precisely" is the hard part: it is very difficult for a machine to know how much water or milk to add, and so on, in the tea-making algorithm above.
These algorithms run on computers or other computational devices, for example the GPS in our smartphones or Google Hangouts.
GPS uses a shortest-path algorithm; online shopping uses cryptography, which relies on the RSA algorithm.
Characteristics of an algorithm:-
Expectations from an algorithm
Correctness:-
Correct: an algorithm should produce the correct result.
Producing an incorrect answer: even if an algorithm fails to give the correct result every time, there can still be control over how often it gives a wrong result. E.g. the Rabin-Miller primality test (used in the RSA algorithm) does not give the correct answer every time; roughly 1 out of 2^50 runs it gives an incorrect result. (A Python sketch of this test is given below.)
Approximation algorithms: the exact solution is not found, but a near-optimal solution can be. (Applied to optimization problems.)
Less resource usage:
Algorithms should use as few resources (time and space) as possible.
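As an illustration of an algorithm that is correct only with (very) high probability, here is a minimal Python sketch of the Rabin-Miller test. The function name, the small trial divisions, and the choice of 40 rounds are illustrative assumptions, not taken from these notes.

import random

def is_probably_prime(n, rounds=40):
    # Rabin-Miller: a randomized test; a composite n may (rarely) be reported prime.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)          # repeated squaring
            if x == n - 1:
                break
        else:
            return False              # a is a witness that n is composite
    return True                       # probably prime

print(is_probably_prime(561))      # 561 is composite (a Carmichael number): False
print(is_probably_prime(104729))   # 104729 is prime: True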
Resource usage:
Here, time is considered the primary measure of efficiency. We are also concerned with how much computer memory the algorithm uses, but mostly time is the resource that is dealt with. The actual running time depends on a variety of factors: the speed of the computer, the language in which the algorithm is implemented, the compiler/interpreter, the skill of the programmer, etc.
So, mainly, resource usage can be divided into: 1. Memory (space) 2. Time
1. How long the algorithm takes: this will be represented as a function of the size of the input.
2. How fast the function that characterizes the running time grows with the input size: the "rate of growth of the running time".
The algorithm with the lower rate of growth of running time is considered better.
Algorithms are a technology, just like hardware. We may buy the latest and greatest processor, but we still need to run implementations of good algorithms on that computer in order to get proper value for the money spent on it. Let's make this concrete by pitting a faster computer (computer A) running a sorting algorithm whose running time on n values grows like n^2 against a slower computer (computer B) running a sorting algorithm whose running time grows like n lg n. They each must sort an array of 10 million numbers. Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1,000 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers, while just an average programmer writes for computer B, using a high-level language with an inefficient compiler, with the resulting code taking 50 n lg n instructions.
Time taken:
Computer A: (2 × (10^7)^2 instructions) / (10^10 instructions per second) = 20,000 seconds (more than 5.5 hours).
Computer B: (50 × 10^7 × lg 10^7 instructions) / (10^7 instructions per second) ≈ 1,163 seconds (less than 20 minutes).
So the slower computer running the better algorithm finishes far sooner.
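A quick way to reproduce these numbers (a small sketch; the formulas are exactly the ones stated above):

import math

n = 10_000_000                           # 10 million numbers to sort

# Computer A: 10^10 instructions/second, code needs 2*n^2 instructions.
time_a = 2 * n**2 / 1e10

# Computer B: 10^7 instructions/second, code needs 50*n*lg(n) instructions.
time_b = 50 * n * math.log2(n) / 1e7

print(f"Computer A: {time_a:.0f} seconds (~{time_a/3600:.1f} hours)")
print(f"Computer B: {time_b:.0f} seconds (~{time_b/60:.1f} minutes)")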
Let us form an algorithm for insertion sort (which sorts a sequence of numbers). The pseudocode for the algorithm is given below.
Pseudo code:
INSERTION-SORT(A)
1. for j = 2 to length[A]                                    C1
2.     key = A[j]                                            C2
3.     // Insert A[j] into the sorted sequence A[1..j-1]     C3
4.     i = j - 1                                             C4
5.     while i > 0 and A[i] > key                            C5
6.         A[i+1] = A[i]                                     C6
7.         i = i - 1                                         C7
8.     A[i+1] = key                                          C8
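For reference, the same procedure as runnable Python (a sketch using 0-based indexing, so j starts from the second element; not taken from these notes):

def insertion_sort(a):
    # Sorts the list a in place, mirroring the pseudocode above.
    for j in range(1, len(a)):         # pseudocode line 1
        key = a[j]                     # line 2
        i = j - 1                      # line 4
        while i >= 0 and a[i] > key:   # line 5
            a[i + 1] = a[i]            # line 6: shift the larger element right
            i -= 1                     # line 7
        a[i + 1] = key                 # line 8: drop key into its place
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]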
Let Ci be the cost of the ith line. Since comment lines do not incur any cost, C3 = 0.
Line   Cost     Times
1      C1       n
2      C2       n - 1
3      C3 = 0   n - 1
4      C4       n - 1
5      C5       Σ (j=2..n) t_j
6      C6       Σ (j=2..n) (t_j - 1)
7      C7       Σ (j=2..n) (t_j - 1)
8      C8       n - 1
Here t_j is the number of times the while-loop test in line 5 is executed for that value of j. Summing cost × times over all lines gives the total running time:
T(n) = C1·n + C2·(n-1) + C4·(n-1) + C5·Σ(j=2..n) t_j + C6·Σ(j=2..n)(t_j - 1) + C7·Σ(j=2..n)(t_j - 1) + C8·(n-1)
Best case:
All t_j values are 1 (the array is already sorted), so Σ t_j = n - 1, the (t_j - 1) sums vanish, and T(n) reduces to a linear function of n, i.e. Θ(n).
Worst case:
The array is in reverse sorted order, so each t_j = j; the sums then contain a term proportional to n^2 and the running time is a quadratic function of n.
The worst-case running time gives a guaranteed upper bound on the running time for
any input.
For some algorithms, the worst case occurs often. For example, when searching, the
worst case often occurs when the item being searched for is not present, and searches
for absent items may be frequent.
Why not analyze the average case? Because it’s often about as bad as the worst case.
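To see the best- and worst-case behaviour concretely, here is a small sketch that counts how many element shifts (line 6 of the pseudocode) insertion sort performs on a sorted input versus a reverse-sorted one (the helper name and the test inputs are illustrative):

def count_shifts(a):
    # Insertion sort instrumented to count executions of pseudocode line 6.
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            shifts += 1
            i -= 1
        a[i + 1] = key
    return shifts

n = 100
print(count_shifts(range(n)))           # best case (already sorted): 0 shifts
print(count_shifts(range(n, 0, -1)))    # worst case (reverse sorted): n*(n-1)/2 = 4950 shifts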
Order of growth:
It is described by the highest degree term of the formula for running time. (Drop lower-order
terms. Ignore the constant coefficient in the leading term.)
Example: we found that for insertion sort the worst-case running time is of the form an^2 + bn + c.
Drop the lower-order terms: what remains is an^2. Ignore the constant coefficient: this leaves n^2. But we cannot say that the worst-case running time T(n) equals n^2; rather, it grows like n^2, although it does not equal n^2. We say that the running time is Θ(n^2) to capture the notion that the order of growth is n^2.
We usually consider one algorithm to be more efficient than another if its worst-case
running time has a smaller order of growth.
Asymptotic notation
Θ-notation: f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 (an asymptotically tight bound). Similarly, O-notation gives an asymptotic upper bound, Ω-notation an asymptotic lower bound, and o and ω are their strict (non-tight) versions.
Example: n^2/2 − 2n = Θ(n^2), with c1 = 1/4, c2 = 1/2, and n0 = 8.
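To check these constants: n^2/2 − 2n ≤ (1/2)n^2 holds for all n ≥ 0, and (1/4)n^2 ≤ n^2/2 − 2n is equivalent to 2n ≤ n^2/4, i.e. n ≥ 8. A tiny numeric sanity check (a sketch, not a proof):

for n in [8, 16, 100, 10**6]:
    f = n * n / 2 - 2 * n
    # The Θ(n^2) claim requires c1*n^2 <= f(n) <= c2*n^2 for all n >= n0 = 8.
    assert n * n / 4 <= f <= n * n / 2, n
print("bounds hold for the sampled values of n")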
Recurrences, Solution of Recurrences by substitution, Recursion Tree
and Master Method
• If the given instance of the problem is small or simple enough, just solve it.
• Otherwise, reduce the problem to one or more simpler instances of the same problem.
E.g. the worst-case running time T(n) of the merge sort procedure can be expressed by the recurrence
T(n) = Θ(1)               if n = 1
T(n) = 2T(n/2) + Θ(n)     if n > 1
whose solution is T(n) = Θ(n log n).
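The recurrence mirrors the structure of merge sort itself: two recursive calls on halves (the 2T(n/2) term) plus a linear-time merge (the Θ(n) term). A minimal Python sketch of that structure (an illustrative implementation, not taken from these notes):

def merge_sort(a):
    # Base case: a list of length <= 1 is already sorted (the Θ(1) case).
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # T(n/2)
    right = merge_sort(a[mid:])    # T(n/2)
    # Merge the two sorted halves in Θ(n) time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]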
1. SUBSTITUTION METHOD:
We substitute the guessed solution for the function when applying the inductive
hypothesis to smaller values. Hence the name “substitution method”. This method is powerful,
but we must be able to guess the form of the answer in order to apply it.
Step 1: guess the form of the solution.
For the recurrence T(n) = 4T(n/2) + n, drop the additive term and look for a function f with f(n) = 4f(n/2), i.e. f(2n) = 4f(n). The function f(n) = n^2 satisfies this, so we expect T(n) to be of order n^2.
Step 2: verify the guess by induction. First try the looser guess T(n) = O(n^3).
Assume T(k) <= c·k^3 for all k < n. Then
T(n) = 4T(n/2) + n
     <= 4c(n/2)^3 + n
     = (c/2)n^3 + n
     = cn^3 - ((c/2)n^3 - n)
     <= cn^3, since (c/2)n^3 - n >= 0 whenever n >= 1 and c >= 2.
So the assumption holds, and T(n) = O(n^3).
Now try the tighter guess T(n) = O(n^2). Assume T(k) <= c·k^2 for all k < n. Then
T(n) = 4T(n/2) + n
     <= 4c(n/2)^2 + n
     = cn^2 + n.
This is not <= cn^2 for any choice of c, so T(n) can never be shown to be less than cn^2 this way. But if we instead take the stronger inductive hypothesis T(k) <= c1·k^2 - c2·k, the induction goes through and we can find that T(n) = O(n^2).
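A quick numerical sanity check of this conclusion (a sketch; it assumes the base case T(1) = 1, which the notes do not state): evaluate T(n) = 4T(n/2) + n directly for powers of 2 and watch T(n)/n^2 settle toward a constant.

def T(n):
    # Directly evaluates the recurrence T(n) = 4*T(n/2) + n with assumed base case T(1) = 1.
    if n == 1:
        return 1
    return 4 * T(n // 2) + n

for k in (1, 6, 11, 16):
    n = 2 ** k
    print(n, T(n) / n**2)   # ratio approaches 2, consistent with T(n) = Θ(n^2)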
2. BY ITERATIVE METHOD:
e.g. T(n) = 2T(n/2) + n
          = 2^2·T(n/2^2) + 2n
          = 2^3·T(n/2^3) + 3n
          ...
          = 2^k·T(n/2^k) + kn.
When n/2^k = 1, i.e. k = log n, this gives T(n) = n·T(1) + n log n = Θ(n log n).
3. BY RECURSION TREE METHOD:
In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of recursive invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion.
Constructing a recursion tree for the recurrence T(n) = 3T(n/4) + cn^2: the figure (omitted here) starts from T(n) in part (a) and progressively expands it in parts (b)–(d) to form the full recursion tree, which has height log_4 n (it has log_4 n + 1 levels).
The number of nodes at depth i is 3^i, and each contributes a cost of c(n/4^i)^2, so the total cost at depth i (for i = 0, 1, ..., log_4 n - 1) is (3/16)^i · cn^2. The bottom level, at depth log_4 n, has 3^(log_4 n) = n^(log_4 3) nodes of constant cost, contributing Θ(n^(log_4 3)). Adding the levels:
T(n) = Σ (i=0..log_4 n - 1) (3/16)^i · cn^2 + Θ(n^(log_4 3))
     <= Σ (i=0..∞) (3/16)^i · cn^2 + Θ(n^(log_4 3))
     = (16/13)·cn^2 + Θ(n^(log_4 3))
     = O(n^2).
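A small numerical check of this bound (a sketch; taking c = 1 and n a power of 4 are illustrative choices):

def recursion_tree_total(k, c=1.0):
    # Sums the per-level costs of the recursion tree for T(n) = 3T(n/4) + c*n^2, with n = 4^k.
    n = 4 ** k
    internal = sum((3 / 16) ** i * c * n * n for i in range(k))   # depths 0 .. k-1
    leaves = 3 ** k                                               # n^(log_4 3) constant-cost leaves
    return n, internal + leaves

for k in (4, 6, 8):
    n, total = recursion_tree_total(k)
    print(n, total / n**2)   # ratio stays below 16/13 ≈ 1.23, consistent with T(n) = O(n^2)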
4. BY MASTER METHOD:
T(n) = a·T(n/b) + f(n),
where a >= 1 and b > 1 are constants and f(n) is an asymptotically positive function. Compare f(n) with n^(log_b a):
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
e.g. T(n) = 2T(n/2) + n log n. Here n^(log_b a) = n and f(n) = n log n, which exceeds n only by a log factor, so none of the three basic cases applies directly; by the extended form of case 2,
T(n) = Θ(n log^2 n).
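A numerical sanity check of this answer (a sketch; the base case T(1) = 1 is an assumption): T(n)/(n·log^2 n) should approach a constant.

import math

def T(n):
    # Evaluates T(n) = 2*T(n/2) + n*log2(n) with assumed base case T(1) = 1.
    if n == 1:
        return 1
    return 2 * T(n // 2) + n * math.log2(n)

for k in (5, 10, 15, 20):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n) ** 2))   # ratio approaches 1/2, consistent with Θ(n log^2 n)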