Ada Mod1 PPT Notes
1. Introduction to Algorithms
2. Fundamentals of Algorithmic Problem Solving
3. Fundamentals of the Analysis of Algorithm Efficiency
   1. Analysis Framework
   2. Asymptotic Notations and Basic Efficiency Classes
   3. Mathematical analysis for Non-recursive algorithms
   4. Mathematical analysis for Recursive algorithms
1.INTRODUCTION TO ALGORITHMS
1. An algorithm is an effective method for solving a particular given problem
(a step-by-step procedure for solving a computational problem); or:
An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.
PROPERTIES OF ALGORITHM
An algorithm MUST satisfy the following criteria:
1. INPUT: The Algorithm should be given zero or more inputs.
2. OUTPUT: At least one quantity must be produced. For each input, the
algorithm should produce the correct output value for the specified task.
3. DEFINITENESS: Each instruction must be clear and unambiguous.
4. FINITENESS: If we trace out the instructions of an algorithm, then
for all cases, the algorithm must terminate after a finite number of
steps.
5. EFFECTIVENESS: Every instruction must be very basic and must run in a
short time, so that it could be carried out, in principle, by a person using
only pencil & paper.
Need of Algorithm
1. To understand the basic idea of the problem.
2. To find an approach to solve the problem.
3. To improve the efficiency of existing techniques.
4. To understand the basic principles of designing the algorithms.
5. To compare the performance of the algorithm with respect to other
techniques.
6. It is the best method of describing a solution without including the
implementation details.
7. The algorithm gives the designer a clear description of the requirements
and the goal of the problem.
8. A good design can produce a good solution.
9. To understand the flow of the problem.
Differences

Algorithm                                  Program
1. At design phase                         1. At implementation phase
2. Written in natural language             2. Written in any programming language
3. Person should have domain knowledge     3. Written by a programmer
4. Analyzed (proving correctness,          4. Tested
   optimality)
Basic Issues Related to Algorithms
Algorithm design strategies
3. How to validate an algorithm (proving correctness):
• Once an algorithm is created, it is necessary to show that it computes the
correct output for all possible legal inputs; this process is called
algorithm validation.
2.Fundamentals of Algorithmic Problem
Solving
4) Data structures
3. Choosing between Exact and Approximate Problem Solving:
The next principal decision is to choose between solving the problem
exactly and solving it approximately.
5. Algorithm Design Techniques:
1. Natural language:
• It is very simple and easy to specify an algorithm using natural language.
• Disadvantages:
• It may not always be clear, and it must be checked for ambiguity.
• It makes implementation difficult; hence pseudocode is preferred.
• Example: An algorithm to perform addition of two numbers.
Step 1: Read the first number, say a.
Step 2: Read the second number, say b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display the result from c.
2. Pseudocode
Pseudocode is a mixture of a natural language and programming
language constructs.
Pseudocode is usually more precise than natural language.
• For assignment, use the left arrow “←”.
• For comments, use two slashes “//”.
• Programming-language constructs such as if conditions and for/while
loops are used.
This specification is more useful as a basis for implementing the algorithm
in any language.
Example: Pseudocode to perform addition of two numbers
ALGORITHM Sum(a,b)
//Problem : This algorithm performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c←a+b
return c
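For comparison, here is a direct Python translation of the pseudocode above
(an illustration added to these notes; the assignment arrow becomes "="):

def Sum(a, b):
    # Problem: this function performs addition of two numbers
    # Input: two integers a and b
    # Output: the addition of the two integers
    c = a + b
    return c

print(Sum(2, 3))  # 5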
3. Flowchart
• Flowchart is a graphical representation of an algorithm.
• It is a method of expressing an algorithm by a collection of
connected geometric shapes containing descriptions of the
algorithm’s steps.
Example: Flowchart to perform addition of two numbers
8. Analyzing an Algorithm:
For an algorithm, the most important thing is efficiency.
There are two kinds of algorithm efficiency. They are:
1. Time efficiency, indicates how fast the algorithm runs, and
2. Space efficiency, indicates how much extra memory it needs.
The efficiency of an algorithm is determined by measuring both
time efficiency and space efficiency.
So there are four factors to analyze an algorithm:
1. Time efficiency of an algorithm
2. Space efficiency of an algorithm
3. Simplicity of an algorithm
4. Generality of an algorithm
9. Coding an Algorithm:
Algorithms are destined to be ultimately implemented as computer programs.
The coding / implementation of an algorithm is done in a suitable
programming language such as C, C++, Java, or Python.
Implementing an algorithm correctly is necessary.
Test and debug the program thoroughly after implementing the algorithm as
a computer program.
It is essential to write optimized code to reduce the burden on the compiler.
Algorithms to find the GCD of two numbers
1. Euclid’s Algorithm
3. Middle-school procedure
1.Euclid’s Algorithm
Problem: Find gcd(m,n), the greatest common divisor of two
non-negative, not-both-zero integers m and n.
Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = ?
Euclid’s algorithm is based on repeated application of the equality
gcd(m, n) = gcd(n, m mod n)
until m mod n is equal to zero (i.e., the second number becomes 0).
Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
Two descriptions of Euclid’s algorithm:
Euclid's algorithm for computing gcd(m, n) in Natural Language
Step 1: If n = 0, return m and stop; otherwise go to Step 2
Step 2: Divide m by n and assign the value of the remainder to r
Step 3: Assign the value of n to m and the value of r to n. Go to Step1.
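A minimal runnable Python sketch of these three steps (an illustration added
to the notes, not part of the original slides):

def gcd(m, n):
    # Euclid's algorithm: repeat gcd(m, n) = gcd(n, m mod n) until n == 0
    while n != 0:            # Step 1: if n = 0, m is the answer
        m, n = n, m % n      # Steps 2 and 3: r = m mod n; m <- n; n <- r
    return m

print(gcd(60, 24))  # 12
print(gcd(60, 0))   # 60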
Other methods for gcd(m, n) [cont.]
3. Middle-school procedure
Step 1: Find the prime factors of m
Step 2: Find the prime factors of n
Step 3: Find all the common prime factors
Step 4: Compute the product of all the common prime factors and
return it as gcd(m,n)
Note:
1. It is not clearly specified how to find prime factors, i.e., there is an
ambiguity. (the prime factorization steps are not defined unambiguously)
2. If m or n is 1, this method does not work.
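A possible Python sketch of this procedure, using trial division as one
unambiguous way to find prime factors (the function names are illustrative,
not from the original notes):

from collections import Counter

def prime_factors(k):
    # trial division: one way to resolve the ambiguity noted above
    factors = Counter()
    d = 2
    while d * d <= k:
        while k % d == 0:
            factors[d] += 1
            k //= d
        d += 1
    if k > 1:
        factors[k] += 1
    return factors

def gcd_middle_school(m, n):
    fm, fn = prime_factors(m), prime_factors(n)
    product = 1
    for p in fm:                       # common prime factors, smaller power
        product *= p ** min(fm[p], fn[p])
    # note: 1 has no prime factors, which is why the paper procedure is
    # undefined for m or n = 1; here the empty product happens to give 1
    return product

print(gcd_middle_school(60, 24))  # 12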
3. Fundamentals of the Analysis
of Algorithm Efficiency
1. Analysis Framework.
2. Asymptotic Notations and their properties.
3. Mathematical analysis for Non-recursive algorithms.
4. Mathematical analysis for Recursive algorithms.
Analysis Framework
This is a framework for analyzing the efficiency of an algorithm.
There are two kinds of efficiencies.
1. Time efficiency, indicating how fast the algorithm runs,
2. Space efficiency, indicating how much extra memory it uses.
1. Measuring an Input’s Size
Almost all algorithms run longer on larger inputs.
Ex: multiplying two n × n matrices, searching, sorting — compare n = 10
with n = 100.
An algorithm’s efficiency is therefore investigated as a function of some
parameter n indicating the algorithm’s input size, i.e., as f(n).
For the problem of evaluating a polynomial
p(x) = a_n x^n + . . . + a_0 of degree n,
the size of the input can be the polynomial’s degree n or the number of its
coefficients, n + 1, which is larger by 1 than its degree.
1. Measuring an Input’s Size (cont’d)
In computing the product of two n × n matrices, the choice of a parameter
indicating the input size does matter.
Consider a spell-checking algorithm: if the algorithm examines individual
characters of its input, then the size is measured by the number of characters.
For algorithms such as checking whether a given integer n is prime, the input
size is the number of bits required to represent n in binary:
b = ⌊log2 n⌋ + 1.
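A quick Python check of this formula (an illustration added to the notes):

import math

n = 1000
b = math.floor(math.log2(n)) + 1   # b = ⌊log2 n⌋ + 1
print(b, n.bit_length())           # 10 10 — 1000 needs 10 bits (1111101000)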
2. Units for Measuring Running Time
A standard unit of time measurement, such as a second or a millisecond,
can be used to measure the running time of a program after implementing
the algorithm.
Drawbacks of this approach are:
1. Computer Speed: Depends on the speed of the specific computer.
2. Program Quality: Affected by how well the program is written.
3. Compiler Variations: Different compilers can change the running time.
4. Timing Challenges: Difficult to measure the exact running time
accurately.
Need for a Better Metric:
So, We need a better way to measure an algorithm’s efficiency that
doesn’t depend on these external factors.
• Arithmetic operations such as addition and multiplication are basic
operations in many algorithms.
2. Units for Measuring Running Time (cont’d)
Time efficiency is analyzed by determining the number of repetitions of the
basic operation as a function of input size.
Basic operation: the operation that contributes the most towards the running
time of the algorithm.
Estimating running time:
T(n) ≈ c_op · C(n)
where T(n) is the running time, c_op is the execution time (cost) of the basic
operation, C(n) is the number of times the basic operation is executed, and n
is the input size.
2. Units for Measuring Running Time (cont’d)
Example: sorting an array of n elements using the bubble sort algorithm.
In bubble sort, the basic operation is comparing two elements and swapping
them if they are in the wrong order.
Assume c_op = 1 unit of time (the time it takes to perform this basic
operation — comparing and possibly swapping two elements — on a particular
computer), and assume the count C(n) = n² for bubble sort.
Now, suppose we sort an array of size n = 5.
Using the formula T(n) = c_op · C(n), the estimated running time is
T(5) = 1 · 5² = 25.
So the estimated running time for sorting an array of size 5 using bubble
sort on that computer would be 25 units of time.
If we double the input size to n = 10, the same formula gives
T(10) = 1 · 10² = 100, i.e., 100 units of time.
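A small counting sketch in Python (assuming plain bubble sort with no
early-exit optimization; note the exact comparison count is n(n − 1)/2, a bit
below the n² figure assumed above):

def bubble_sort_count(a):
    # sorts a in place and returns the number of basic operations (comparisons)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

print(bubble_sort_count([5, 1, 4, 2, 8]))  # 10 = 5·4/2 comparisons for n = 5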
Input size and basic operation examples

Problem                  Input size measure        Basic operation
Typical graph problem    #vertices and/or edges    Visiting a vertex or
                                                   traversing an edge
3. Orders of Growth
• Orders of growth describe how the running time or space
requirements of an algorithm scale with the size of the input.
• It provides a way to choose the best algorithm by comparing orders of
growth, especially for large inputs.
• Example: When multiple algorithms can solve the same
problem, comparing their orders of growth helps select the most
efficient one.
• When finding the GCD of two small numbers, it is not immediately clear
how much more efficient Euclid’s algorithm is compared to the other
algorithms; the difference in efficiency becomes apparent only for larger
inputs.
• Orders of growth are typically expressed using Big O notation.
3. Orders of Growth (cont’d)
Here are some common orders of growth and what they mean:
1. Constant Time, O(1): the running time is the same regardless of input size.
Ex: arithmetic operations.
2. Logarithmic Time, O(log n): running time grows slowly as input size grows.
Example: binary search in a sorted array.
3. Linear Time, O(n): running time grows in direct proportion to input size.
Ex: linear search.
4. Linearithmic Time, O(n log n): running time grows faster than linear but
slower than quadratic. Example: merge sort.
5. Quadratic Time, O(n²): running time grows with the square of the input
size. Example: bubble sort, where each pair of elements is compared.
6. Cubic Time, O(n³): running time grows with the cube of the input size.
Example: multiplying two matrices.
7. Exponential Time, O(2ⁿ): running time grows very quickly — it doubles
with each additional element in the input — and is impractical for large
inputs. Example: solving the traveling salesman problem by brute force.
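A short Python sketch (added for illustration) that tabulates these growth
functions for a couple of input sizes, showing how quickly the higher
classes blow up:

import math

def growth_values(n):
    # values of the common growth functions at input size n
    return {
        "log n": round(math.log2(n), 1),
        "n": n,
        "n log n": round(n * math.log2(n), 1),
        "n^2": n ** 2,
        "n^3": n ** 3,
        "2^n": 2 ** n,
    }

for n in (10, 20):
    print(n, growth_values(n))
# at n = 10, 2^n is already 1024; at n = 20 it is 1,048,576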
3. Orders of Growth (cont’d)
• Order of growth rates, from smallest to largest:
1 < log n < n < n log n < n² < n³ < 2ⁿ < n!
Values of some important functions as n → ∞ (see the table of basic
efficiency classes below).
4. Worst-Case, Best-Case, and Average-Case Efficiencies
1. Worst case, Cworst(n) (usually used):
the maximum time an algorithm takes on any input of size n.
It provides an upper bound on the algorithm’s performance.
2. Average case, Cavg(n) (sometimes used):
the expected time an algorithm takes, averaged over all inputs of size n.
This needs an assumption about the probability distribution of all possible
inputs.
The average-case efficiency lies between the best case and the worst case.
It provides an average bound on the algorithm’s performance.
3. Best case, Cbest(n):
the minimum time an algorithm takes on any input of size n.
It provides a lower bound on the algorithm’s performance.
4. Amortized:
applies not to a single run of an algorithm but rather to a sequence of
operations performed on the same data structure.
For sequential search:
Best-case efficiency:
The best case occurs when the key value is found at the first position; only
one comparison is made, so Cbest(n) = 1.
Worst-case efficiency:
The worst case occurs when the key value is not present in the array or is
located at the last position.
In this case, the algorithm performs n comparisons, where n is the number
of elements in the array.
For an input of size n, the running time is Cworst(n) = n, i.e.,
Tworst(n) = O(n).
Average-case efficiency:
Let p be the probability of a successful search (0 ≤ p ≤ 1).
• In the case of a successful search, the number of comparisons made by the
algorithm when the first match occurs in the i-th position is i, and the
probability of the first match occurring in the i-th position of the list is
p/n for every i.
• In the case of an unsuccessful search, the number of comparisons is n and
the probability of such a search is (1 − p).
Cavg(n) = [1 · p/n + 2 · p/n + . . . + n · p/n] + n · (1 − p)
        = p(n + 1)/2 + n(1 − p)
(the first term covers a successful search, the second an unsuccessful one).
For the sequential search (linear search) algorithm with search key K:
Worst case:   n key comparisons.
Best case:    1 key comparison.
Average case: (n + 1)/2 comparisons for a successful search,
              n comparisons for an unsuccessful search.
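A minimal Python version of sequential search, annotated with the cases
above (an illustration added to the notes):

def sequential_search(a, key):
    # basic operation: the key comparison a[i] == key
    for i in range(len(a)):
        if a[i] == key:
            return i          # best case: key at index 0 -> 1 comparison
    return -1                 # worst case: key absent -> n comparisons

print(sequential_search([4, 7, 1, 9], 1))   # 2 (found after 3 comparisons)
print(sequential_search([4, 7, 1, 9], 5))   # -1 (4 comparisons, worst case)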
PERFORMANCE ANALYSIS
1. Space complexity
• The space complexity of an algorithm is the amount of memory space
required by the algorithm during its course of execution.
• There are three types of space:
1. Instruction space: space for the executable program.
2. Data space: space required to store all constant and variable data.
3. Environment stack space: space required to store the information needed
to resume suspended (partially completed) functions.
1.Space complexity
The space needed by the algorithm P is calculated by,
S(P) = c + SP(instance characteristics)
= c + SP(n)
Where,
c is a constant (the fixed part),
Sp is the variable part.
Fixed part:
It is independent of characteristics such as size of inputs and outputs.
It is instruction space (i.e., space for code), space for simple variables, constants,
etc.
Variable part:
It depends on instance characteristics such as size of inputs and outputs.
It is space for component variables, whose size depends on the input or instance
of problem being solved.
Example 1:
Algorithm sum(a, b, c)
{
    a = 10;       // a -> 1
    b = 20;       // b -> 1
    c = a + b;    // c -> 1
}
S(P) = c + Sp = 3 + 0 = 3
Example 2:
S(P) = c + Sp = 3 + 0 = 3, i.e., S(P) ≥ 3.
Example 3:
Algorithm sum(a[], n)
{
    int s = 0;
    for i = 1 to n do
        s = s + a[i];
    return s;
}
Space: a -> n, n -> 1, s -> 1, i -> 1, so n + 1 + 1 + 1 = n + 3 words.
S(P) = c + Sp(n) = 3 + n, i.e., S(P) ≥ n + 3.
Examples 4 and 5 (addition of two m × n matrices):
a -> mn, b -> mn, c -> mn, and i, j, m, n -> 1 word each,
so S(P) = 3mn + 4.
2. TIME COMPLEXITY:
The time complexity of an algorithm is the total amount of time
required by an algorithm to complete its execution.
The time T(P) taken by a program P is the sum of the compile time and the
run time (execution time). Since the compile time does not depend on the
instance characteristics, it is usually ignored, leaving:
T(P) = compile time + execution time
     ≈ execution time
     = tp(instance characteristics)
     = tp(n)
2. TIME COMPLEXITY:
Procedure to find time complexity — identify the blocks and assign a step
count to each statement:
• Comments, declarations = 0
• Initialization, assignment, return = 1
• Condition statement = 1 (take the max of the if and else branches)
• Loop header iterating n times = n + 1 (multiply the counts for nested loops)
• Body of the loop = n
2. TIME COMPLEXITY:
Example:
for (i = 0; i < n; i++)    // header executes n + 1 times
{
    ......                 // body executes n times
}
Removing the constant, the time complexity = O(n).
2. TIME COMPLEXITY:
Example 1:
Algorithm sum(a, b)
{
    sum = a + b;    // 1
    return sum;     // 1
}
Total = 2 steps, i.e., the time complexity is constant, O(1).
Example 3:
Algorithm sum(int a[],int n)
{
int s =0;
for i = 1 to n do
s = s + a[i];
return s;
}
TIME COMPLEXITY:
Example 2:
Algorithm matrixAddition(a, b, c, m, n)
{
    for i = 1 to m do                       // m + 1
        for j = 1 to n do                   // m(n + 1)
            c[i, j] = a[i, j] + b[i, j]     // mn
}
Total = (m + 1) + m(n + 1) + mn = m + 1 + mn + m + mn = 2mn + 2m + 1
Example 5:
Algorithm matrixAddition(a, b, c, n)
{
    for i = 1 to n do                       // n + 1
        for j = 1 to n do                   // n(n + 1)
            c[i, j] = a[i, j] + b[i, j]     // n²
}
Total = (n + 1) + n(n + 1) + n² = 2n² + 2n + 1, i.e., O(n²).
Example 3:
Statement                     s/e   Frequency   Total
1. Algorithm Sum(a, n)         0        -          0
2. {                           0        -          0
3.   s = 0.0;                  1        1          1
4.   for i = 1 to n do         1      n + 1      n + 1
5.     s = s + a[i];           1        n          n
6.   return s;                 1        1          1
7. }                           0        -          0
                                       Total     2n + 3
ASYMPTOTIC NOTATION
ASYMPTOTIC NOTATION: the mathematical way of representing time
complexity.
The notations we use to describe the asymptotic running time of an
algorithm are defined in terms of functions whose domains are the set of
natural numbers.
Classification of growth
1. Growing with the same rate.
2. Growing with a slower rate.
3. Growing with a faster rate.
ASYMPTOTIC NOTATION
The following asymptotic notations are used to represent the time
complexity of algorithms (the first three are the most commonly used):
1. Big oh (O) notation
2. Big omega (Ω) notation
3. Theta (Θ) notation
4. Little oh (o) notation
5. Little omega (ω) notation
1. Big oh (O) notation
Asymptotic “less than or equal” (same or slower rate).
This notation mainly represents the upper bound of an algorithm’s run time.
Big oh (O) notation is useful for calculating the maximum amount of time
needed for execution.
Using Big-oh notation, the worst-case time complexity is calculated.
Formula: t(n) ≤ c · g(n) for all n ≥ n0, with c > 0, n0 ≥ 1.
Definition: A function t(n) is said to be in O(g(n)), denoted
t(n) ∈ O(g(n)), if there exist some positive constant c and some
nonnegative integer n0 such that t(n) ≤ c · g(n) for all n ≥ n0.
(Figure: Big-oh — c · g(n) is an upper bound on t(n) for all n ≥ n0.)
Examples
Example: f(n) = 3n + 2 and g(n) = n.
Formula: f(n) ≤ c · g(n) for all n ≥ n0, with c > 0, n0 ≥ 1.
Is 3n + 2 ≤ c · n? Try c = 4 and put in values of n:
n = 1: 5 ≤ 4, false.
n = 2: 8 ≤ 8, true, and it remains true for all n ≥ 2 with c = 4.
So f(n) ≤ c · g(n), i.e., 3n + 2 ≤ 4n, for all n ≥ 2, and f(n) ∈ O(n).
The condition is satisfied; since this notation bounds the maximum amount of
time to execute, it characterizes the worst-case complexity.
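A quick numeric sanity check of the constants chosen above (c = 4, n0 = 2)
over a sample range — an illustration, not a proof:

# f(n) = 3n + 2 is bounded above by 4n for every sampled n >= 2
assert all(3 * n + 2 <= 4 * n for n in range(2, 10000))
print("3n + 2 <= 4n holds for all n in [2, 10000)")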
2. Big omega (Ω) notation
Asymptotic “greater than or equal” (same or faster rate).
This notation represents the lower bound of an algorithm’s run time.
43
(Figure: Big-omega — c · g(n) is a lower bound on t(n) for all n ≥ n0.)
Examples
Example: f(n) = 3n + 2.
Formula: f(n) ≥ c · g(n) for all n ≥ n0, with c > 0, n0 ≥ 1.
With g(n) = n and c = 1: 3n + 2 ≥ 1 · n. Put in the value n = 1:
n = 1: 5 ≥ 1, true, and it holds for all n ≥ 1 (n0 = 1).
It means that f(n) ∈ Ω(g(n)) = Ω(n).
Exercises: prove the following using the above definition
1. 10n² ∈ Ω(n²)
2. 0.3n² − 2n ∈ Ω(n²)
3. 0.1n³ ∈ Ω(n²)
3. Theta (Θ) notation
Asymptotic “equality” (same rate).
(Figure: Big-theta — t(n) is bounded between c1 · g(n) and c2 · g(n) for
all n ≥ n0.)
Examples
Example: f(n) = 3n + 2, g(n) = n.
Formula: c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0.
With c1 = 1 and c2 = 4: 1 · n ≤ 3n + 2 ≤ 4 · n. Now put in values of n:
n = 1: 1 ≤ 5 ≤ 4, false.
n = 2: 2 ≤ 8 ≤ 8, true.
n = 3: 3 ≤ 11 ≤ 12, true.
For all n ≥ 2 the condition is satisfied, so f(n) ∈ Θ(n).
Exercises: prove the following using the above definition
1. 10n² ∈ Θ(n²)
2. 0.3n² − 2n ∈ Θ(n²)
3. (1/2)n(n + 1) ∈ Θ(n²)
Theorem: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
Proof:
We use the following property of four arbitrary real numbers a1, b1, a2, b2:
if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some
nonnegative integer n1 such that t1(n) ≤ c1 · g1(n) for all n ≥ n1.
Similarly, since t2(n) ∈ O(g2(n)), t2(n) ≤ c2 · g2(n) for all n ≥ n2.
Let c3 = max{c1, c2} and n0 = max{n1, n2}. Then, for all n ≥ n0,
t1(n) + t2(n) ≤ c1 · g1(n) + c2 · g2(n)
             ≤ c3 · g1(n) + c3 · g2(n) = c3 (g1(n) + g2(n))
             ≤ 2 c3 max{g1(n), g2(n)}.
Hence t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with constant 2c3 and
threshold n0 = max{n1, n2}.
Basic asymptotic efficiency classes

Class      Name
1          constant
log n      logarithmic
n          linear
n log n    n-log-n
n²         quadratic
n³         cubic
2ⁿ         exponential
n!         factorial
Mathematical Analysis of Nonrecursive Algorithms
General plan for analyzing the time efficiency of nonrecursive algorithms:
1. Decide on a parameter n indicating the input size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed depends
only on n. If it also depends on some additional property of the input,
investigate the worst-case, average-case, and best-case efficiencies
separately.
4. Set up a sum expressing the number of times the basic operation is
executed.
5. Using standard formulas and rules of sum manipulation, either find a
closed-form formula for the count or, at the very least, establish its
order of growth.
Example 1: Maximum element
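The slide body is not included in these notes; the following is a Python
sketch of the standard maximum-element scan (the usual example at this
point), with the comparison as the basic operation:

def max_element(a):
    # basic operation: the comparison a[i] > maxval
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:   # executed exactly n - 1 times => C(n) = n - 1 ∈ Θ(n)
            maxval = a[i]
    return maxval

print(max_element([3, 9, 2, 7]))  # 9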
Example 2: Element uniqueness problem
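Again the slide body is missing; here is a Python sketch of the standard
brute-force pairwise check, whose worst case performs n(n − 1)/2
comparisons, i.e., Θ(n²):

def unique_elements(a):
    # basic operation: the comparison a[i] == a[j]
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False   # a repeated element was found
    return True                # worst case: all pairs checked

print(unique_elements([1, 2, 3]))   # True
print(unique_elements([1, 2, 1]))   # False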
Example 3: Matrix multiplication
Example 3: Matrix multiplication - Time efficiency
• There is just one multiplication executed on each repetition of the
innermost loop, so the total number of multiplications is
M(n) = n · n · n = n³, i.e., the time efficiency class is Θ(n³).
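A minimal Python sketch of the triple loop being analyzed (an illustration
added to the notes):

def matrix_multiply(A, B):
    # C = A x B for two n x n matrices
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # one multiplication, n^3 times
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]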
• It cannot be investigated in the same way as the previous examples.
• The exact formula for the number of times the comparison n > 1 will be
executed is ⌊log2 n⌋ + 1.
• Given that each iteration of the loop effectively halves n, the number of
iterations (and thus the number of divisions) is about log2 n; therefore,
the number of comparisons is ⌊log2 n⌋ + 1.
• Thus, the worst-case time complexity is O(log2 n).
• There is another solution to this problem, using recurrence relations.
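The loop itself is not shown in these notes; based on the ⌊log2 n⌋ + 1
formula above, it is presumably the standard binary-digit-count loop,
sketched here in Python (an assumption, for illustration):

def binary_digit_count(n):
    # counts the digits in n's binary representation, for n >= 1
    count = 1
    while n > 1:       # this comparison executes ⌊log2 n⌋ + 1 times in total
        count += 1
        n //= 2        # each iteration effectively halves n
    return count

print(binary_digit_count(8))    # 4 (binary 1000)
print((8).bit_length())         # 4, agrees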