Ada Mod1 PPT Notes

Module-1

1. Introduction to Algorithms
2. Fundamentals of Algorithmic Problem Solving
3. Fundamentals of the Analysis of Algorithm
Efficiency
1. Analysis Framework
2. Asymptotic Notations and Basic Efficiency
Classes
3. Mathematical analysis for Non-recursive
algorithms
4. Mathematical analysis for Recursive algorithms

1. INTRODUCTION TO ALGORITHMS

1. An algorithm is an effective method for solving a particular given
problem (a step-by-step procedure for solving a computational problem).

2. An algorithm is a sequence of computational steps that transforms
input into an output. (by CORMEN)

3. An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time. (by LEVITIN)

4. An algorithm is a finite set of instructions that, if followed,
accomplishes a particular task. (by SAHNI)

 An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.

 Notion of an algorithm and problem:

    problem  →  algorithm + data structure  →  program

    input  →  algorithm (running on a "computer")  →  output

PROPERTIES OF ALGORITHM
An algorithm MUST satisfy the following criteria:
1. INPUT: The algorithm is given zero or more inputs.
2. OUTPUT: At least one quantity must be produced. For each input,
the algorithm should produce at least one output value for the specified task.
3. DEFINITENESS: Each instruction must be clear and unambiguous.
4. FINITENESS: If we trace out the instructions of an algorithm, then
for all cases the algorithm must terminate after a finite number of
steps.
5. EFFECTIVENESS: Every instruction must be very basic and executable
in a short time, so that it could be carried out, in principle, by a person
using only pencil & paper.

Need of Algorithm
1. To understand the basic idea of the problem.
2. To find an approach to solve the problem.
3. To improve the efficiency of existing techniques.
4. To understand the basic principles of designing algorithms.
5. To compare the performance of the algorithm with respect to other
techniques.
6. It is the best way to describe a solution without giving the
implementation details.
7. An algorithm gives the designer a clear description of the requirements
and the goal of the problem.
8. A good design can produce a good solution.
9. To understand the flow of the problem.

How To Write an Algorithm

Example: find the largest of three numbers a, b, c. Two equivalent ways of
writing the algorithm:

Version 1 (nested conditions):
Step 1: Start
Step 2: Read a, b, c
Step 3: if a > b
            if a > c
                print a is largest
            else
                print c is largest
        else
            if b > c
                print b is largest
            else
                print c is largest
Step 4: Stop

Version 2 (numbered steps):
Step 1: Start
Step 2: Read a, b, c
Step 3: if a > b then go to Step 4, otherwise go to Step 5
Step 4: if a > c then print a is largest, otherwise print c is largest; go to Step 6
Step 5: if b > c then print b is largest, otherwise print c is largest
Step 6: Stop

Differences between an Algorithm and a Program

Algorithm                                    Program
1. Written at the design phase               1. Written at the implementation phase
2. Written in a natural language             2. Written in a programming language
3. Written by a person with                  3. Written by a programmer
   domain knowledge
4. Analyzed                                  4. Tested

Basic Issues Related to Algorithms


 How to design algorithms

 How to express algorithms

 Proving correctness

 Efficiency (or complexity) analysis

 Optimality

Basic Issues Related to Algorithms

1. How to design algorithms.


2. How to express algorithms
3. How to validate an algorithm.(Proving correctness)
4. How to analyse an algorithm.(Efficiency or complexity
analysis)
5. How to test algorithm’s optimality.(coding and testing)

1. How to design an algorithm:

 To design an algorithm we have various design
techniques such as
a) Divide & Conquer
b) Greedy method
c) Dynamic Programming
d) Branch & Bound
e) Backtracking, etc.

Algorithm design strategies

1. Brute force
2. Divide and conquer
3. Decrease and conquer
4. Transform and conquer
5. Greedy approach
6. Dynamic programming
7. Backtracking
8. Branch-and-bound

2. How to express an algorithm: ALGORITHM SPECIFICATION

An algorithm can be described (represented) in four ways:
1. Natural language like English: When this way is chosen, we should ensure
that each and every statement is definite (no ambiguity).
2. Graphic representation called flowchart: This method works well
when the algorithm is small and simple.
3. Pseudo-code method: This is a mixture of natural language and
programming-language constructs.
In this method, we typically describe the algorithm as a program that
resembles a language like Pascal or Algol (Algorithmic Language).
Advantages: Calculating running time and proving correctness are easier.
4. Programming language: We can also use a programming language like C,
C++, JAVA, etc. to write algorithms.
3. How to validate an algorithm: (Proving correctness)
• Once an algorithm is created, it is necessary to
show that it computes the correct output for all
possible legal inputs; this process is called
algorithm validation.

4. How to analyse an algorithm:

• Analysis of an algorithm, or performance analysis, refers to the task of
determining how much computing time and storage an algorithm requires.

a) Computing time – Time complexity: determined by the frequency or
step-count method.

b) Storage space – Space complexity: determined from the amount of memory
the algorithm needs, as a function of the number and size of its inputs.
2. Fundamentals of Algorithmic Problem Solving

Fundamentals of Algorithmic Problem Solving


1. Understanding the Problem: Before designing an algorithm,
understand the problem given.
2. Ascertaining the Capabilities of the Computational Device:
Once you completely understand a problem, ascertain the
capabilities of the computational device the algorithm is intended
for. Select appropriate machine - sequential or parallel machine.
 Algorithms designed to be executed on sequential machine are
called sequential algorithms.
 Algorithms designed to be executed on parallel machine are
called parallel algorithms.

3. Choosing between Exact and Approximate Problem Solving:
The next principal decision is to choose between solving the problem
exactly and solving it approximately.

 Exact algorithm – An algorithm that solves the problem exactly
and produces a correct result is called an exact algorithm.

 Approximation algorithm – If the problem is so complex that an
exact solution cannot be obtained, then choose an algorithm that
produces an approximate answer, called an approximation algorithm.

 E.g., extracting square roots, solving nonlinear equations, and
evaluating definite integrals.

4. Deciding on Appropriate Data Structures:

Algorithms + Data Structures = Programs

 Hence the choice of proper data structures is required before
designing the algorithm.

 An algorithm can be implemented as a program only together
with suitable data structures.
5. Algorithm Design Techniques:

 An algorithm design technique (or "strategy" or "paradigm")
is a general approach to solving problems algorithmically that is
applicable to a variety of problems from different areas of computing.

 Some algorithm design strategies / techniques / paradigms
are Brute Force, Divide and Conquer, Dynamic
Programming, Greedy Method, and so on.

6. Design The Algorithm & Methods of Specifying an


Algorithm: (Algorithm Specification)
 There are three ways to specify an algorithm.
 They are:
1. Natural language
2. Pseudocode
3. Flowchart

1. Natural language:
• It is very simple and easy to specify an algorithm using natural language.
• Disadvantages:
• It may not be clear all the time, and we should check for
ambiguity.
• It creates difficulty while implementing. Hence prefer pseudocode.
• Example: An algorithm to perform addition of two numbers.
Step 1: Read the first number, say a.
Step 2: Read the second number, say b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display the result from c.

2. Pseudocode
 Pseudocode is a mixture of a natural language and programming-
language constructs.
 Pseudocode is usually more precise than natural language.
• For the assignment operation, use the left arrow "←".
• For comments, use two slashes "//".
• Programming-language constructs such as if conditions and for and
while loops are used.
 This specification can then be implemented easily in any programming
language.
Example: Pseudocode to perform addition of two numbers

ALGORITHM Sum(a,b)
//Problem : This algorithm performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c←a+b
return c


3. Flowchart
• Flowchart is a graphical representation of an algorithm.
• It is a method of expressing an algorithm by a collection of
connected geometric shapes containing descriptions of the
algorithm’s steps.

Example: Flowchart to perform addition of two numbers
(The slide shows the corresponding flowchart figure.)

7. Proving an Algorithm’s Correctness:


 Once an algorithm has been specified then its correctness must be
proved.
 An algorithm must yield a required result for every legitimate
input in a finite amount of time.
 A common technique for proving correctness is to use
mathematical induction because an algorithm’s iterations provide
a natural sequence of steps needed for such proofs.

8. Analyzing an Algorithm:
 For an algorithm the most important property is efficiency.
 There are two kinds of algorithm efficiency. They are:
1. Time efficiency, indicating how fast the algorithm runs, and
2. Space efficiency, indicating how much extra memory it needs.
 The efficiency of an algorithm is determined by measuring both
time efficiency and space efficiency.
 So there are 4 factors to analyze an algorithm. They are:
1. Time efficiency of an algorithm
2. Space efficiency of an algorithm   (1 and 2 together: efficiency)
3. Simplicity of an algorithm
4. Generality of an algorithm

9. Coding an Algorithm:
 Algorithms are destined to be ultimately implemented as computer
programs.
 The coding / implementation of an algorithm is done in a suitable
programming language like C, C++, JAVA, PYTHON, etc.
 Implementing an algorithm correctly is necessary.
 Test and debug the program thoroughly after implementing the
algorithm as a computer program.
 It is essential to write optimized code to reduce the program's
running time.
Algorithm’s to find GCD of 2 numbers

1. Euclid’s Algorithm

2. Consecutive integer checking algorithm

3. Middle-school procedure


1. Euclid's Algorithm
Problem: Find gcd(m, n), the greatest common divisor of two
non-negative, not-both-zero integers m and n.
Examples: gcd(60, 24) = 12, gcd(60, 0) = 60, gcd(0, 0) = ?
Euclid's algorithm is based on repeated application of the equality
gcd(m, n) = gcd(n, m mod n)
until m mod n is equal to zero (i.e., the second number
becomes 0).
Example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12

Two descriptions of Euclid's algorithm:
Euclid's algorithm for computing gcd(m, n) in natural language
Step 1: If n = 0, return m and stop; otherwise go to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

ALGORITHM Euclid(m, n) in pseudocode

// Computes gcd(m, n) by Euclid's algorithm
// Input: Two nonnegative, not-both-zero integers m and n
// Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
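A quick trace of the pseudocode on the earlier example, gcd(60, 24):

    m = 60, n = 24 → r = 60 mod 24 = 12, m ← 24, n ← 12
    m = 24, n = 12 → r = 24 mod 12 = 0,  m ← 12, n ← 0
    n = 0, so the loop stops and the algorithm returns m = 12.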

Other methods for gcd(m, n) [cont.]

2. Consecutive integer checking algorithm
Step 1: Assign the value of min{m, n} to t.
Step 2: Divide m by t. If the remainder is 0, go to Step 3;
otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder is 0, return t and stop;
otherwise, go to Step 4.
Step 4: Decrease t by 1 and go to Step 2.
This works because gcd(m, n) <= min{m, n}.
Note: unlike Euclid's algorithm, this algorithm does not work
correctly when one of its input numbers is zero (if m or n = 0).
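For example, for gcd(60, 24) the algorithm starts at t = min{60, 24} = 24
and decreases t until both divisions leave remainder 0:

    t = 24: 60 mod 24 = 12 ≠ 0 → decrease t
    t = 23, 22, ..., 13: each value fails one of the two divisibility tests
    t = 12: 60 mod 12 = 0 and 24 mod 12 = 0 → return 12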

Other methods for gcd(m, n) [cont.]
3. Middle-school procedure
Step 1: Find the prime factors of m.
Step 2: Find the prime factors of n.
Step 3: Find all the common prime factors.
Step 4: Compute the product of all the common prime factors and
return it as gcd(m, n).
Note:
1. It is not clearly specified how to find prime factors, i.e., there is an
ambiguity (the prime factorization steps are not defined unambiguously).
2. If m or n is 1, this method does not work.
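A worked instance of the procedure for gcd(60, 24):

    60 = 2 · 2 · 3 · 5
    24 = 2 · 2 · 2 · 3
    common prime factors: 2, 2, 3
    gcd(60, 24) = 2 · 2 · 3 = 12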

IMPORTANT PROBLEM TYPES


The most important problem types are:
1. Sorting
2. Searching
3. String processing - Searching for a given word in a text
4. Graph problems - graph traversal, shortest path algorithm, traveling
salesman problem, graph-coloring problem
5. Combinatorial problems - permutation, combination, or a subset.
Ex: Traveling salesman problem, graph-coloring problem
6. Geometric problems - geometric objects such as points, lines, and
polygons.
Ex: closest-pair problem, convex-hull problem
7. Numerical problems - mathematical equations, computing definite
integrals, evaluating functions etc. (can be solved only approximately)

3. Fundamentals of the Analysis
of Algorithm Efficiency
1. Analysis Framework.
2. Asymptotic Notations and its properties.
3. Mathematical analysis for Recursive algorithms.
4. Mathematical analysis for Non-recursive algorithms


Analysis Framework
 This is a framework for analyzing the efficiency of an algorithm.
 There are two kinds of efficiencies.
1. Time efficiency, indicating how fast the algorithm runs,
2. Space efficiency, indicating how much extra memory it uses.

 This analysis framework consists of the following 4 factors :

1. Measuring an Input’s Size


2. Units for Measuring Running Time
3. Orders of Growth
4. Worst-Case, Best-Case, and Average-Case Efficiencies

1. Measuring an Input's Size
 All algorithms run longer on larger inputs.
 Ex: multiplying two n × n matrices, searching, and sorting all take
longer for n = 100 than for n = 10.
 An algorithm's efficiency can therefore be calculated based on input size.
 An algorithm's efficiency is defined as a function of some
parameter n, where n indicates the algorithm's input size,
i.e., f(n).
 For the problem of evaluating a polynomial
p(x) = aₙxⁿ + . . . + a₀ of degree n,
the size of the input will be the polynomial's degree n,
or the number of its coefficients, n + 1, which is larger by 1 than its
degree.

1. Measuring an Input's Size (cont'd)

• A polynomial is a mathematical expression involving a sum of powers of
one or more variables multiplied by coefficients.
• p(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + . . . + a₁x¹ + a₀
• Here:
• x is the variable.
• The highest power of x (here n) is the degree of the polynomial.
• aₙ, aₙ₋₁, aₙ₋₂, . . . , a₁, a₀ are the coefficients of xⁿ, xⁿ⁻¹, xⁿ⁻², . . . , x⁰.
• In the problem of evaluating the polynomial p(x), the size of the input
is typically taken to be the degree n of the polynomial.
• However, since there are n + 1 coefficients (aₙ, aₙ₋₁, . . . , a₁, a₀),
the input size can also be described by the number of coefficients,
which is n + 1.

1. Measuring an Input's Size (cont'd)
 In computing the product of two n × n matrices, the choice of a
parameter indicating the input size does matter (the order n vs. the
total number of elements).
 Consider a spell-checking algorithm.
 If the algorithm examines individual characters of its input,
then the size is measured by the number of characters.
 When measuring input size for algorithms such as checking
whether a given integer n is prime, the input size is the number of bits
required to represent n in binary form.
 The input size is measured by the number b of bits in n's
binary representation: b = ⌊log₂ n⌋ + 1.

1. Measuring an Input's Size (cont'd)

Example:
 Consider the number n = 1000.
 Binary representation: the binary representation of
1000 is 1111101000, which consists of 10
bits.
 Bit-length calculation: b = ⌊log₂ n⌋ + 1
 log₂ 1000 ≈ 9.97
 ⌊9.97⌋ = 9
 Adding 1 gives b = 9 + 1 = 10.
 Therefore, n = 1000 can be represented with 10 bits.

1. Measuring an Input’s Size(cont’d)


When merging two arrays, combine all elements from both arrays into a single sorted array.
Example: Consider two sorted arrays: Array A: [1, 3, 5] with 𝑚=3 elements
Array B: [2, 4, 6, 8] with 𝑛=4 elements
Merging Process: To merge these arrays,
 Compare the first elements of both arrays.
 Place the smaller element into the merged array.
 Move the pointer to the next element in the array from which the smaller element was
taken.
 Repeat the process until all elements from both arrays are merged.
Steps: Compare 1 (from A) and 2 (from B). 1 is smaller, so add 1 to the merged array.
Compare 3 (next in A) and 2 (from B). 2 is smaller, so add 2 to the merged array.
Compare 3 (from A) and 4 (next in B). 3 is smaller, so add 3 to the merged array.
Compare 5 (next in A) and 4 (from B). 4 is smaller, so add 4 to the merged array.
Compare 5 (from A) and 6 (next in B). 5 is smaller, so add 5 to the merged array.
Compare nothing (A is exhausted) and 6 (from B). Add 6 to the merged array.
Compare nothing (A is exhausted) and 8 (next in B). Add 8 to the merged array.
The merged array is: [1, 2, 3, 4, 5, 6, 8].
Input Size :The input size for this merging operation is the total number of elements in both
arrays = 𝑚+𝑛.
In above example: 𝑚=3, 𝑛=4 So, the total input size is 𝑚+𝑛=3+4=7.

2. Units for Measuring Running Time
 Some standard unit of time measurement such as a second, or
millisecond, and so on can be used to measure the running time of
a program after implementing the algorithm.
 Drawbacks of this approach are:
1. Computer Speed: Depends on the speed of the specific computer.
2. Program Quality: Affected by how well the program is written.
3. Compiler Variations: Different compilers can change the running time.
4. Timing Challenges: Difficult to measure the exact running time
accurately.
 Need for a Better Metric:
 So, We need a better way to measure an algorithm’s efficiency that
doesn’t depend on these external factors.

2. Units for Measuring Running Time (cont'd)

 Alternative approach: counting the number of operations
• Efficiency can be measured by counting how many times each operation
in the algorithm is executed.
• Counting every operation is excessively difficult.
• So, focus on basic operations: identify the key operations of the
algorithm.
• Key comparison is the basic operation for searching and sorting
algorithms.
• Arithmetic operations such as addition and multiplication are the
basic operations for polynomial evaluation and matrix multiplication
algorithms.
• Counting these basic operations is easy and provides a good estimate of
the algorithm's efficiency.
• The total running time is determined by the basic-operation count.
2. Units for Measuring Running Time (cont'd)
 Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size.
 Basic operation: the operation that contributes the most
towards the running time of the algorithm.
 Estimating running time:

    T(n) ≈ c_op · C(n)

where T(n) is the running time, c_op is the execution time (cost) of the
basic operation, n is the input size, and C(n) is the number of times the
basic operation is executed.

Note: different basic operations may cost differently!

2. Units for Measuring Running Time (cont’d)


Estimating Running Time (T(n)):
 𝑇(𝑛) is the estimated running time of the algorithm for a given input
size 𝑛.
 It's estimated using the formula:
𝑇(𝑛) ≈ 𝑐op⋅ 𝐶(𝑛)
where:
 𝑐op is the execution time of a basic operation on a specific
computer.
 𝐶(𝑛) is the count of how many times the basic operation is executed
for the algorithm with input size 𝑛.
• This formula should be used carefully because:
• 𝐶(𝑛) only considers basic operations and may be approximated.
• 𝑐op is also an approximation and its reliability varies.

2. Units for Measuring Running Time (cont'd)
Example:
 Sorting an array of n elements using the bubble sort algorithm.
 In bubble sort, the basic operation is comparing two elements and swapping
them if they are in the wrong order.
 Let's assume c_op = 1 unit of time (i.e., the time it takes to compare and
possibly swap two elements on a particular computer).
 Now, let's say we sort an array of size n = 5.
 Let's assume the count C(n) = n² for bubble sort.
 Using the formula T(n) = c_op · C(n), we can calculate the estimated running time:
T(5) = 1 · 5² = 25
 So, the estimated running time for sorting an array of size 5 using bubble sort on
a particular computer would be 25 units of time.
 Now, let's consider how the running time would change if we double the input
size to n = 10.
 Using the same formula: T(10) = 1 · 10² = 100.
 So, for n = 10, the estimated running time would be 100 units of time.
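The same formula shows why the constant c_op usually does not matter when we
ask how the running time scales. For a quadratic count C(n) = n²,

    T(2n) / T(n) ≈ (c_op · (2n)²) / (c_op · n²) = 4,

so doubling the input size quadruples the estimated running time regardless of
the value of c_op (here, 100 / 25 = 4).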


Example: Algorithm to find the sum of n elements.

Algorithm sum(int a[], int n)
{
    int s = 0;
    for i = 1 to n do
        s = s + a[i];
    return s;
}

T(n) ≈ c_op · C(n); here the basic operation is the addition, so
T(n) ≈ c_add · C(n).
The total step count of the algorithm is 2n + 3 (see the frequency-count
table later in this module); removing the constants 2 and 3 leaves the
linear term, so T(n) ∈ O(n).
Input size and basic operation examples

Problem                        Input size measure                  Basic operation
Searching for a key in a       Number of the list's items,         Key comparison
list of n items                i.e. n
Multiplication of two          Matrix dimensions or total          Multiplication of two
matrices                       number of elements                  numbers
Checking primality of a        n's size = number of digits         Division
given integer n                (in binary representation)
Typical graph problem          Number of vertices and/or edges     Visiting a vertex or
                                                                   traversing an edge

3. Orders of Growth
• Orders of growth describe how the running time or space
requirements of an algorithm scale with the size of the input.
• They provide a way to choose the best algorithm based on efficiency,
by comparing orders of growth, especially for large inputs.
• Example: when multiple algorithms can solve the same
problem, comparing their orders of growth helps select the most
efficient one.
• When finding the GCD of two small numbers, it is not immediately
clear how much more efficient Euclid's algorithm is compared to
the other algorithms; the difference in efficiency becomes clear
only for larger inputs.
• Orders of growth are typically expressed using Big O notation.
3. Orders of Growth (cont'd)
 Here are some common orders of growth and what they mean:
1. Constant time O(1): the running time is the same regardless of input size.
It always takes the same amount of time. Ex: arithmetic operations.
2. Logarithmic time O(log n): running time grows slowly as input size grows.
Example: binary search in a sorted array.
3. Linear time O(n): running time grows in direct proportion to input size.
Ex: linear search.
4. Linearithmic time O(n log n): running time grows faster than linear but
slower than quadratic. Example: merge sort.
5. Quadratic time O(n²): running time grows with the square of the input size.
Example: bubble sort, where each pair of elements is compared.
6. Cubic time O(n³): running time grows with the cube of the input size.
Example: multiplying two matrices.
7. Exponential time O(2ⁿ): running time grows very quickly, i.e., the time
doubles with each additional element in the input; impractical for large
inputs. Example: solving the traveling salesman problem by brute force.

3. Orders of Growth (cont'd)
 The same classes in summary form:

1. Constant time O(1): arithmetic operations
2. Logarithmic time O(log n): binary search in a sorted array
3. Linear time O(n): linear search
4. Linearithmic time O(n log n): merge sort
5. Quadratic time O(n²): bubble sort
6. Cubic time O(n³): multiplying two matrices
7. Exponential time O(2ⁿ): solving the traveling salesman problem by brute force

 log n is the best (slowest-growing) time.
 2ⁿ and n! are the worst times.
 n, n log n, n², and n³ lie in between.
3. Orders of Growth (cont'd)
• Order of growth rates from smallest to largest:

    1 < log n < n < n log n < n² < n³ < 2ⁿ < n!

• The slide also tabulates values of some important functions as n → ∞.

4. Worst-Case, Best-Case, and Average-Case Efficiencies

 Analysis of algorithms consists of different types of analysis
to evaluate the performance of an algorithm:
 Best-case analysis
 Worst-case analysis
 Average-case analysis
 Amortized analysis
4. Worst-Case, Best-Case, and Average-Case Efficiencies
1. Worst case, Cworst(n) (usually):
the maximum time an algorithm takes on any input of size n.
 It provides an upper bound on the algorithm's performance.
2. Average case, Cavg(n) (sometimes):
the expected time an algorithm takes, averaged over all inputs of size n.
 It needs an assumption about the probability distribution of all possible inputs.
 The average-case efficiency lies between the best case and the worst case.
 It provides an average bound on the algorithm's performance.
3. Best case, Cbest(n):
the minimum time an algorithm takes on any input of size n.
 It provides a lower bound on the algorithm's performance.
4. Amortized:
 It applies not to a single run of an algorithm but rather to a sequence of
operations performed on the same data structure.

4.Worst-Case, Best-Case, and Average-Case Efficiencies


Example: Consider Sequential Search (Linear Search) algorithm
for search key K.
ALGORITHM SequentialSearch(A[0..n - 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n - 1] and a search key K
//Output: The index of the first element in A that matches K or -1 if
//there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1

Best-case efficiency:
 The best case occurs when the key value is found at the first
position in the array.
 In this case, the algorithm performs only one comparison.
 If the first element of a list of size n equals the search key,
then the running time is Cbest(n) = 1.

    Tbest(n) = Ω(1)

Worst-case efficiency:
 The worst case occurs when the key value is not present in the
array or is located at the last position.
 In this case, the algorithm performs n comparisons, where n is
the number of elements in the array.
 For an input of size n, the running time is Cworst(n) = n.

    Tworst(n) = O(n)

Average-case efficiency:

• The average-case efficiency lies between the best case and the worst case.
• To analyze the algorithm's average-case efficiency, we must make
some assumptions about possible inputs of size n.
• The standard assumptions are that
• the probability of a successful search is equal to p (0 ≤ p ≤ 1),
and
• the probability of the first match occurring in the i-th position of
the list is the same for every i.
• Under these assumptions, the average number of key comparisons
Cavg(n) is the expected number of comparisons for a successful search
plus the expected number of comparisons for an unsuccessful search,
each weighted by its probability.

Average-case efficiency:
• In the case of a successful search,
• the number of comparisons made by the algorithm when the first
match is in the i-th position is i, and
• the probability of the first match occurring in the i-th position of
the list is p/n for every i.
• In the case of an unsuccessful search, the number of comparisons is
n, and the probability of such a search is (1 − p). Hence

    Cavg(n) = [1 · p/n + 2 · p/n + . . . + i · p/n + . . . + n · p/n] + n · (1 − p)
            = (p/n)(1 + 2 + . . . + n) + n(1 − p)
            = (p/n) · n(n + 1)/2 + n(1 − p)
            = p(n + 1)/2 + n(1 − p)

Average-case efficiency:

Cavg(n) = p(n + 1)/2 + n(1 − p)
• When p = 1: Cavg(n) = 1 · (n + 1)/2 + n(1 − 1) = (n + 1)/2
• When p = 0: Cavg(n) = 0 · (n + 1)/2 + n(1 − 0) = 0 + n = n
• If p = 1 (i.e., the search is always successful), the average number of key
comparisons made by sequential search is (n + 1)/2; i.e., the
algorithm will inspect, on average, about half of the list's elements.
• If p = 0 (i.e., the search is always unsuccessful), the average number of
key comparisons will be n, because the algorithm will inspect all n
elements on all such inputs.
Therefore, Tavg(n) = Θ(n).
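For an intermediate value of p the same formula applies directly. For example,
with p = 0.5 (the key has a 50% chance of being present):

    Cavg(n) = 0.5(n + 1)/2 + n(1 − 0.5) = 0.25(n + 1) + 0.5n = 0.75n + 0.25,

i.e., the algorithm inspects about three quarters of the list on average.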

For the Sequential Search (Linear Search) algorithm with search key K:
 Worst case: n key comparisons
 Best case: 1 key comparison
 Average case: (n + 1)/2 comparisons for a successful search,
               n comparisons for an unsuccessful search

PERFORMANCE ANALYSIS

• Performance Analysis: An algorithm is said to be efficient and


fast if it take less time to execute and consumes less memory
space at run time.

• Performance Analysis can be done by computing time and storage


requirements. i.e.,
1. Time complexity
2. Space complexity

1. Space complexity
• The space complexity of an algorithm is the amount of memory
space required by the algorithm during the course of its execution.
• There are three types of space:
1. Instruction space: space for the executable program.
2. Data space: space required to store all constant and variable data.
3. Environment stack space: space required to store the environment
information needed to resume suspended functions.

1.Space complexity
 The space needed by the algorithm P is calculated by,
S(P) = c + SP(instance characteristics)
= c + SP(n)
Where,
 c is constant and is fixed part
 Sp is variable part
 Fixed part:
 It is independent of characteristics such as size of inputs and outputs.
 It is instruction space (i.e., space for code), space for simple variables, constants,
etc.
 Variable part:
 It depends on instance characteristics such as size of inputs and outputs.
 It is space for component variables, whose size depends on the input or instance
of problem being solved.

Example 1:
Algorithm sum(a, b, c)
{
    a = 10;      // a → 1 word
    b = 20;      // b → 1 word
    c = a + b;   // c → 1 word
}

S(P) = c + SP = 3 + 0 = 3 words: a constant, independent of the input.

Example 2:
(The slide shows another algorithm that uses only a fixed number of simple
variables.)

S(P) = c + SP = 3 + 0 = 3, so the space complexity is constant: Θ(1).
Example 3:
Algorithm sum(int a[], int n)
{
    int s = 0;
    for i = 1 to n do
        s = s + a[i];
    return s;
}

The space needed by this algorithm depends on the input size:
 array a → n words
 variable n → 1 word
 variable s → 1 word
 loop variable i → 1 word

Total: n + 1 + 1 + 1 = n + 3 words, so S(P) = c + SP(n) = 3 + n, i.e., Θ(n).

Example 4:
Algorithm matrixAddition(a, b, c, m, n)
{
    for i = 1 to m do
        for j = 1 to n do
            c[i, j] = a[i, j] + b[i, j]
}

Space: a → mn words; b → mn words; c → mn words; i, j, m, n → 1 word each.
S(P) = 3mn + 4, i.e., Θ(mn).

Example 5:
Algorithm matrixAddition(a, b, c, n)
{
    for i = 1 to n do
        for j = 1 to n do
            c[i, j] = a[i, j] + b[i, j]
}

Space: a, b, c → n² words each; i, j, n → 1 word each.
S(P) = 3n² + 3, i.e., Θ(n²).
2. TIME COMPLEXITY:
 The time complexity of an algorithm is the total amount of time
required by the algorithm to complete its execution.
 The time T(P) taken by a program P is the sum of its compile
time and its run time (execution time). Since the compile time does not
depend on the instance characteristics, we consider only the run time:
    T(P) = compile time + execution time
         ≈ execution time
         = tp(instance characteristics)
         = tp(n)

2. TIME COMPLEXITY:
Procedure to find the time complexity:
 Identify the blocks and assign a value to each step:
 comments and declarations = 0
 initialization, assignment, return = 1
 condition statement = 1 (max of the if and else branches)
 loop header executed n times = n + 1 (multiply counts for nested
loops)
 body of the loop = n
 Add all values and remove constants (ex: 2n + 3 → remove 2 and 3).
 Ignore lower-order terms when higher-order terms are
present (ex: n⁴ + n³ + n → remove n³ and n).
 Represent the result with asymptotic notation (T(n) = O(n)).

2. TIME COMPLEXITY:
Example:
for (i = 0; i < n; i++)    // executes n + 1 times
{
    ------                 // loop body: n times
}
Removing the constant 1 leaves n; therefore, the time complexity = O(n).

2. TIME COMPLEXITY:
Example 1:
Algorithm sum(a, b)
{
    sum = a + b;    // 1
    return sum      // 1
}
Total step count = 2, a constant, so the time complexity is O(1).

Example 2:
Algorithm sum(int a[], int n)
{
    int s = 0;
    for i = 1 to n do
        s = s + a[i];
    return s;
}
(The step-by-step frequency count for this algorithm is worked out in the
table of Example 5 below.)
TIME COMPLEXITY:
Example 3:
Algorithm matrixAddition(a, b, c, m, n)
{
    for i = 1 to m do                    // m + 1
        for j = 1 to n do                // m(n + 1)
            c[i, j] = a[i, j] + b[i, j]  // mn
}
Total = (m + 1) + m(n + 1) + mn
      = m + 1 + mn + m + mn
      = 2mn + 2m + 1, so the time complexity is O(mn).

Example 4:
Algorithm matrixAddition(a, b, c, n)
{
    for i = 1 to n do                    // n + 1
        for j = 1 to n do                // n(n + 1)
            c[i, j] = a[i, j] + b[i, j]  // n²
}
Total = (n + 1) + n(n + 1) + n² = 2n² + 2n + 1, so the time complexity is O(n²).
Example 5: frequency-count table for the array-sum algorithm

Statement                  s/e    Frequency    Total
1. Algorithm Sum(a, n)      0         -          0
2. {                        0         -          0
3.   s = 0.0;               1         1          1
4.   for i = 1 to n do      1       n + 1      n + 1
5.     s = s + a[i];        1         n          n
6.   return s;              1         1          1
7. }                        0         -          0

                                     Total:   2n + 3

ASYMPTOTIC NOTATION
ASYMPTOTIC NOTATION: the mathematical way of
representing the time complexity.
The notations we use to describe the asymptotic running time of
an algorithm are defined in terms of functions whose domains
are the set of natural numbers.

Definition: It is the way to describe the behavior of functions in
the limit, i.e., for sufficiently large inputs.

Asymptotic growth: the rate at which the function grows.
The "growth rate" is the complexity of the function, i.e., the amount of
resources (time and memory) it takes up as the input grows.
Classification of growth
1. Growing at the same rate.
2. Growing at a slower rate.
3. Growing at a faster rate.

ASYMPTOTIC NOTATION
Three asymptotic notations are mostly used to represent the time
complexity of algorithms; two more are also defined:

1. Big oh (O) notation
2. Big omega (Ω) notation
3. Theta (Θ) notation
4. Little oh (o) notation
5. Little omega (ω) notation
1. Big oh (O) notation
 Asymptotic "less than or equal" (grows no faster than g(n)).
 This notation represents an upper bound on the algorithm's running time.
 Big oh (O) notation is used to calculate the maximum amount of time
needed for execution.
 Using Big-oh notation, the worst-case time complexity is calculated.
Formula: t(n) ≤ c·g(n) for all n ≥ n0, where c > 0 and n0 ≥ 1.
Definition: A function t(n) is said to be in O(g(n)), denoted
t(n) ∈ O(g(n)), if there exist some positive constant c and some
nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

Big-oh
(Figure: t(n) lies below c·g(n) for all n ≥ n0.)
Examples
Example: f(n) = 3n + 2 and g(n) = n.
Formula: f(n) ≤ c·g(n) for all n ≥ n0, c > 0, n0 ≥ 1.
We need 3n + 2 ≤ c·n. Try c = 4:
    3n + 2 ≤ 4n
Put n = 1: 5 ≤ 4, false.
Put n = 2: 8 ≤ 8, true.
For all n ≥ 2 with c = 4 we have f(n) ≤ c·g(n), i.e.,
    3n + 2 ≤ 4n for all n ≥ 2,
so f(n) ∈ O(n).
The condition is satisfied: this notation bounds the maximum amount of time
the algorithm takes, which is why it is used for worst-case complexity.

2. Ω (Big omega) notation
 Asymptotic "greater than" (grows at the same or a faster rate).
 It represents a lower bound on the algorithm's running time.
 Using Big omega notation, we can calculate the minimum amount
of time needed for execution.
 We can say that it gives the best-case time complexity.

Formula: t(n) ≥ c·g(n) for all n ≥ n0, where c > 0 and n0 ≥ 1.

Definition: A function t(n) is said to be in Ω(g(n)), denoted
t(n) ∈ Ω(g(n)), if there exist some positive constant c and some
nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
Big-omega
(Figure: t(n) lies above c·g(n) for all n ≥ n0.)

Examples
Example: f(n) = 3n + 2.
Formula: f(n) ≥ c·g(n) for all n ≥ n0, c > 0, n0 ≥ 1.
Take g(n) = n and c = 1: 3n + 2 ≥ 1·n.
Put n = 1: 5 ≥ 1, true; and it remains true for all n ≥ 1 (n0 = 1).
It means that f(n) ∈ Ω(g(n)) = Ω(n).
Exercises: prove the following using the above definition
1. 10n² ∈ Ω(n²)
2. 0.3n² − 2n ∈ Ω(n²)
3. 0.1n³ ∈ Ω(n²)

3. Θ (Theta) notation
 Asymptotic "equality" (grows at the same rate).
 It is used to represent the same rate / same time (a tight bound).

Formula: c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.

Definition: A function t(n) is said to be in Θ(g(n)), denoted
t(n) ∈ Θ(g(n)), if there exist some positive constants c1 and c2 and some
nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all
n ≥ n0.
Big-theta
(Figure: t(n) lies between c2·g(n) and c1·g(n) for all n ≥ n0.)

Examples

Example: f(n) = 3n + 2.
Formula: c2·g(n) ≤ f(n) ≤ c1·g(n).
Take g(n) = n, c2 = 1, c1 = 4: 1·n ≤ 3n + 2 ≤ 4·n. Now put values of n:
n = 1: 1 ≤ 5 ≤ 4, false.
n = 2: 2 ≤ 8 ≤ 8, true.
n = 3: 3 ≤ 11 ≤ 12, true.
For all n ≥ 2 the condition is satisfied, so f(n) ∈ Θ(n).
Exercises: prove the following using the above definition
1. 10n² ∈ Θ(n²)
2. 0.3n² − 2n ∈ Θ(n²)
3. (1/2)n(n + 1) ∈ Θ(n²)

 Theorem: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then
t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
 Proof:
 We use the following property of four arbitrary real numbers a1, b1, a2, b2:
if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
 Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some
nonnegative integer n1 such that t1(n) ≤ c1·g1(n) for all n ≥ n1.
 Similarly, since t2(n) ∈ O(g2(n)), t2(n) ≤ c2·g2(n) for all n ≥ n2.

 Now denote c3 = max{c1, c2} and n0 = max{n1, n2}.

 Adding the two inequalities, for all n ≥ n0 we get

    t1(n) + t2(n) ≤ c1·g1(n) + c2·g2(n)
                 ≤ c3·g1(n) + c3·g2(n)
                 = c3 [g1(n) + g2(n)]
                 ≤ 2c3 max{g1(n), g2(n)}    (by the property above)

Hence t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants required by the
definition being 2c3 = 2 max{c1, c2} and n0 = max{n1, n2}.
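This theorem is what lets us analyze an algorithm piece by piece. A small
illustration (not from the slides): if an algorithm first sorts its input in
O(n log n) time and then makes a single O(n) scan over it, the total running
time is

    O(max{n log n, n}) = O(n log n),

so the asymptotically slower part determines the overall order of growth.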

Basic asymptotic efficiency classes

1          constant
log n      logarithmic
n          linear
n log n    n-log-n
n²         quadratic
n³         cubic
2ⁿ         exponential
n!         factorial
Mathematical Analysis of Nonrecursive Algorithms
General plan for analyzing the time efficiency of nonrecursive
algorithms:
1. Decide on a parameter n indicating the input size.

2. Identify the algorithm's basic operation.

3. Check whether the number of times the basic operation is executed
depends only on the input size. If it also depends on some
additional property, determine the worst, average, and best cases for
inputs of size n.

4. Set up a sum for the number of times the basic operation is
executed.

5. Simplify the sum using standard formulas and rules.

Useful summation formulas and rules

Two basic rules of sum manipulation and two summation formulas are
used especially frequently; they are listed below.
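The formula slides themselves are not reproduced here; these are the standard
rules and formulas from Levitin's text (rule R1 and formula S1 are the ones
cited in Example 3 below):

(R1)  Σ_{i=l}^{u} c·aᵢ = c · Σ_{i=l}^{u} aᵢ
(R2)  Σ_{i=l}^{u} (aᵢ ± bᵢ) = Σ_{i=l}^{u} aᵢ ± Σ_{i=l}^{u} bᵢ
(S1)  Σ_{i=l}^{u} 1 = u − l + 1   (in particular, Σ_{i=1}^{n} 1 = n)
(S2)  Σ_{i=1}^{n} i = 1 + 2 + . . . + n = n(n + 1)/2 ≈ n²/2 ∈ Θ(n²)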

Example 1: Maximum element
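The algorithm itself appears on the slide as an image; a sketch of the
standard pseudocode for this problem, as given in Levitin's text:

ALGORITHM MaxElement(A[0..n − 1])
// Determines the value of the largest element in a given array
// Input: An array A[0..n − 1] of real numbers
// Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval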


Example 1: Maximum element - Time efficiency

 Let us denote by C(n) the number of times the comparison
A[i] > maxval is executed, and try to find a formula
expressing it as a function of the size n.
 The algorithm makes one comparison on each
execution of the loop, which is repeated for each value
of the loop's variable i within the bounds 1 and n − 1
(inclusive).
 Therefore, we get the following sum for C(n):
    C(n) = Σ_{i=1}^{n−1} 1 = (n − 1) − 1 + 1 = n − 1 ∈ Θ(n)   (by formula S1)
Example 2: Element uniqueness problem
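Again the algorithm is shown on the slide as an image; a sketch of the
standard pseudocode from Levitin's text:

ALGORITHM UniqueElements(A[0..n − 1])
// Determines whether all the elements in a given array are distinct
// Input: An array A[0..n − 1]
// Output: Returns "true" if all the elements in A are distinct
//         and "false" otherwise
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true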


Example 2: Element uniqueness problem - Time efficiency

The basic operation is the comparison A[i] = A[j]. Its count depends not only
on n but also on the input itself, so we analyze the worst case: arrays with no
equal elements (or whose only pair of equal elements is the last pair compared).
Then the innermost comparison runs for every pair (i, j):

    Cworst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1
              = Σ_{i=0}^{n−2} (n − 1 − i)
              = (n − 1) + (n − 2) + . . . + 1
              = n(n − 1)/2 ∈ Θ(n²)
Example 3: Matrix multiplication
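The slide shows the definition-based matrix multiplication pseudocode from
Levitin's text as an image; a sketch of it:

ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
// Multiplies two square matrices of order n by the definition-based algorithm
// Input: Two n × n matrices A and B
// Output: Matrix C = AB
for i ← 0 to n − 1 do
    for j ← 0 to n − 1 do
        C[i, j] ← 0.0
        for k ← 0 to n − 1 do
            C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C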


Example 3: Matrix multiplication - Time efficiency

 We consider multiplication as the algorithm's basic
operation.
 Note that for this algorithm we do not have to choose
between the two operations in the innermost loop
(multiplication and addition), because on each
repetition of the innermost loop each of the two is
executed exactly once. So by counting one we
automatically count the other.
 Let us set up a sum for the total number of
multiplications M(n) executed by the algorithm.
(Since this count depends only on the size of the
input matrices, we do not have to investigate the worst-case,
average-case, and best-case efficiencies separately.)
Example 3: Matrix multiplication - Time efficiency
• There is just one multiplication executed on each repetition of the
innermost loop, so:

    M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1
         = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n        (by formula S1)
         = Σ_{i=0}^{n−1} n²                     (by formula S1 and rule R1)
         = n³ ∈ Θ(n³)

Example 4: Counting binary digits
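The algorithm on this slide is Levitin's binary-digit counter, shown as an
image; a sketch of it:

ALGORITHM Binary(n)
// Counts the number of digits in n's binary representation
// Input: A positive decimal integer n
// Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count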

• It cannot be investigated the same way as the previous examples,
because the loop variable is halved on each iteration rather than stepped
through a fixed range, so the count is not a simple sum over loop bounds.

• The basic operation in this algorithm is the comparison n > 1
within the while loop.

• The exact formula for the number of times the comparison n > 1
is executed is ⌊log₂ n⌋ + 1.
• Given that each iteration of the loop effectively halves n, the
number of iterations (and thus the number of divisions) is
⌊log₂ n⌋. Therefore, the number of comparisons is ⌊log₂ n⌋ + 1.
• Thus, the worst-case time complexity is O(log₂ n).
• There is another way to solve this problem: using recurrence
relations (covered under the analysis of recursive algorithms).
