01 - Fundamentals of The Analysis of Algorithm Efficiency

This document discusses the fundamentals of analyzing algorithm efficiency. It defines an algorithm as a sequence of unambiguous instructions to solve a problem in a finite amount of time. The key aspects of analyzing algorithms are correctness, time efficiency, space efficiency, and optimality. Time efficiency is analyzed by determining how the running time grows with input size, where the basic operation that dominates running time is identified. Historical algorithms like Euclid's algorithm are presented. Asymptotic analysis methods like Big-O, Big-Omega, and Big-Theta notation are introduced to classify algorithms by order of growth.


Topic 1

Fundamentals of the Analysis of Algorithm Efficiency

A. Levitin, "Introduction to the Design & Analysis of Algorithms," 3rd ed., Ch. 2
What is an algorithm?
An algorithm is a sequence of unambiguous instructions
for solving a problem, i.e., for obtaining a required
output for any legitimate input in a finite amount of
time.
problem → algorithm
input → "computer" → output

A. Levitin “Introduction to the Design & Analysis of Algorithms,” 3rd ed., Ch. 2 2-2
1-2
Algorithm

 An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

 Can be represented in various forms

 Required properties:
   Unambiguity/clearness
   Effectiveness
   Finiteness/termination
   Correctness
Historical Perspective
 Euclid’s algorithm for finding the greatest common divisor

 Muhammad ibn Musa al-Khwarizmi – 9th-century mathematician
   www.lib.virginia.edu/science/parshall/khwariz.html

Euclid’s Algorithm
Problem: Find gcd(m, n), the greatest common divisor of two nonnegative integers m and n, not both zero

Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = ?

Euclid's algorithm is based on repeated application of the equality
    gcd(m, n) = gcd(n, m mod n)
until the second number becomes 0, which makes the problem trivial.

Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12

Two descriptions of Euclid’s algorithm

Step 1 If n = 0, return m and stop; otherwise go to Step 2
Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.

while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m

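The pseudocode above translates almost line for line into Python; a minimal sketch (illustrative, not part of the slides):

```python
def gcd(m, n):
    # Repeatedly apply gcd(m, n) = gcd(n, m mod n) until n becomes 0.
    while n != 0:
        m, n = n, m % n  # r <- m mod n; m <- n; n <- r
    return m
```

For gcd(60, 24) the state (m, n) evolves as (60, 24) → (24, 12) → (12, 0), matching the worked example.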
Analysis of algorithms
 Issues:
• correctness
• time efficiency
• space efficiency
• optimality

 Approaches:
• theoretical analysis
• empirical analysis

Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size

 Basic operation: the operation that contributes the most towards the running time of the algorithm

    T(n) ≈ c_op · C(n)

where T(n) is the running time, c_op is the execution time (cost) of the basic operation, and C(n) is the number of times the basic operation is executed for an input of size n.

Note: Different basic operations may cost differently!


Input size and basic operation examples

Problem: searching for a key in a list of n items
  Input size measure: number of the list's items, i.e., n
  Basic operation: key comparison

Problem: multiplication of two matrices
  Input size measure: matrix dimensions or total number of elements
  Basic operation: multiplication of two numbers

Problem: checking primality of a given integer n
  Input size measure: n's size = number of digits (in binary representation)
  Basic operation: division

Problem: typical graph problem
  Input size measure: #vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge

Empirical analysis of time efficiency
 Select a specific (typical) sample of inputs

 Use a physical unit of time (e.g., milliseconds), or count the actual number of the basic operation's executions

 Analyze the empirical data

Best-case, average-case, worst-case

For some algorithms, efficiency depends on form of input:

 Worst case: Cworst(n) – maximum over inputs of size n

 Best case: Cbest(n) – minimum over inputs of size n

 Average case: Cavg(n) – “average” over inputs of size n


• Number of times the basic operation will be executed on typical input
• NOT the average of the worst and best cases
• Expected number of basic operations, with the input treated as a random variable under some assumption about the probability distribution of all possible inputs (e.g., average = expected value under the uniform distribution)

Example: Sequential search

 Worst case: n key comparisons

 Best case: 1 comparison

 Average case: (n+1)/2 comparisons, assuming the search key K is in A


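A sketch of sequential search instrumented to count key comparisons (the counter and return convention are my own, for illustration):

```python
def sequential_search(A, K):
    # Return (index of K, or -1 if absent; number of key comparisons made).
    comparisons = 0
    for i, item in enumerate(A):
        comparisons += 1
        if item == K:
            return i, comparisons
    return -1, comparisons
```

An unsuccessful search over n items makes n comparisons (worst case); finding K in the first position takes 1 (best case).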
Types of formulas for basic operation’s count

 Exact formula
   e.g., C(n) = n(n−1)/2

 Formula indicating order of growth with a specific multiplicative constant
   e.g., C(n) ≈ 0.5n²

 Formula indicating order of growth with an unknown multiplicative constant
   e.g., C(n) ≈ cn²

Order of growth
 Most important: Order of growth within a constant multiple
as n→∞

 Examples:
  • How much faster will the algorithm run on a computer that is twice as fast?
  • How much longer does it take to solve a problem of double the input size?

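The two questions above have clean answers in terms of order of growth. A quick numeric illustration (assuming running time is proportional to the basic-operation count C(n)): a machine twice as fast halves the time for any C(n), while doubling the input size multiplies C(n) by a class-dependent factor.

```python
import math

def doubling_ratio(C, n):
    # How many times more basic operations does input size 2n cost than n?
    return C(2 * n) / C(n)

n = 10**6
assert doubling_ratio(lambda m: m, n) == 2.0          # linear: twice the work
assert doubling_ratio(lambda m: m * m, n) == 4.0      # quadratic: four times
assert abs(doubling_ratio(math.log2, n) - 1.0) < 0.06 # logarithmic: barely grows
```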
Values of some important functions as n → ∞

Asymptotic order of growth
A way of comparing functions that ignores constant factors and small input sizes (why? because only large-n behavior determines how an algorithm scales)

 O(g(n)): class of functions f(n) that grow no faster than g(n)

 Θ(g(n)): class of functions f(n) that grow at same rate as g(n)

 Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)

Big-oh

Big-omega

Big-theta

Establishing order of growth using the definition

Definition: f(n) is in O(g(n)), denoted f(n) ∈ O(g(n)), if the order of growth of f(n) ≤ the order of growth of g(n) (within a constant multiple), i.e., there exist a positive constant c and a non-negative integer n₀ such that
    f(n) ≤ c·g(n) for every n ≥ n₀

Examples:
 10n is in O(n²): take c = 10 and n₀ = 1, since 10n ≤ 10n² for n ≥ 1

 5n+20 is in O(n): take c = 25 and n₀ = 1, since 5n+20 ≤ 25n for n ≥ 1

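The definition can be spot-checked mechanically over a finite range (a sanity check, not a proof; the witnesses c and n₀ below are ones I chose):

```python
def bounded(f, g, c, n0, upto=10_000):
    # Check f(n) <= c*g(n) for n0 <= n < upto (finite spot check of the Big-O definition).
    return all(f(n) <= c * g(n) for n in range(n0, upto))

assert bounded(lambda n: 10 * n, lambda n: n * n, c=10, n0=1)  # 10n in O(n^2)
assert bounded(lambda n: 5 * n + 20, lambda n: n, c=25, n0=1)  # 5n+20 in O(n)
```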
Ω-notation

 Formal definition
  • A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that
        t(n) ≥ c·g(n) for all n ≥ n₀

 Exercises: prove the following using the above definition
  • 10n² ∈ Ω(n²)
  • 0.3n² − 2n ∈ Ω(n²)
  • 0.1n³ ∈ Ω(n²)
Θ-notation

 Formal definition
  • A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁ and c₂ and some nonnegative integer n₀ such that
        c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀

 Exercises: prove the following using the above definition
  • 10n² ∈ Θ(n²)
  • 0.3n² − 2n ∈ Θ(n²)
  • (1/2)n(n+1) ∈ Θ(n²)
≥: Ω(g(n)), functions that grow at least as fast as g(n)

=: Θ(g(n)), functions that grow at the same rate as g(n)

≤: O(g(n)), functions that grow no faster than g(n)

Theorem
 If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
   t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
  • The analogous assertions are true for the Ω-notation and Θ-notation.

 Implication: The algorithm's overall efficiency will be determined by the part with the larger order of growth, i.e., its least efficient part.
  • For example, 5n² + 3n·log n ∈ O(n²)

Proof. There exist constants c₁, c₂, n₁, n₂ such that
    t₁(n) ≤ c₁·g₁(n) for all n ≥ n₁
    t₂(n) ≤ c₂·g₂(n) for all n ≥ n₂
Define c₃ = c₁ + c₂ and n₃ = max{n₁, n₂}. Then
    t₁(n) + t₂(n) ≤ c₃·max{g₁(n), g₂(n)} for all n ≥ n₃.
Some properties of asymptotic order of growth

 f(n) ∈ O(f(n))

 f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))

 If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))

   Note the similarity with a ≤ b

 If f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)), then
   f₁(n) + f₂(n) ∈ O(max{g₁(n), g₂(n)})

 Also, Σ_{1≤i≤n} Θ(f(i)) = Θ( Σ_{1≤i≤n} f(i) )

Exercise: Can you prove these properties?

Establishing order of growth using limits

lim_{n→∞} T(n)/g(n) =
  • 0:     order of growth of T(n) < order of growth of g(n)
  • c > 0: order of growth of T(n) = order of growth of g(n)
  • ∞:     order of growth of T(n) > order of growth of g(n)

Examples:
 • 10n vs. n²: lim 10n/n² = lim 10/n = 0, so 10n grows more slowly than n²

 • n(n+1)/2 vs. n²: lim (n(n+1)/2)/n² = 1/2 > 0, so both have the same order of growth

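The two example limits can be approximated numerically by evaluating the ratio at a large n (a crude stand-in for the limit, for intuition only):

```python
def limit_ratio(T, g, n=10**8):
    # Approximate lim_{n -> inf} T(n)/g(n) by evaluating at one large n.
    return T(n) / g(n)

# 10n vs n^2: ratio tends to 0, so 10n has a smaller order of growth.
assert limit_ratio(lambda n: 10 * n, lambda n: n * n) < 1e-6
# n(n+1)/2 vs n^2: ratio tends to 1/2 > 0, so the orders of growth are equal.
assert abs(limit_ratio(lambda n: n * (n + 1) / 2, lambda n: n * n) - 0.5) < 1e-6
```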
L’Hôpital’s rule and Stirling’s formula

L'Hôpital's rule: If lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞ and the derivatives f′, g′ exist, then

    lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)

Example: log n vs. n

Stirling's formula: n! ≈ √(2πn) (n/e)ⁿ

Example: 2ⁿ vs. n!

Orders of growth of some important functions
 All logarithmic functions logₐ n belong to the same class Θ(log n) no matter what the logarithm's base a > 1 is, because logₐ n = log_b n / log_b a

 All polynomials of the same degree k belong to the same class:
   aₖnᵏ + aₖ₋₁nᵏ⁻¹ + … + a₀ ∈ Θ(nᵏ)

 Exponential functions aⁿ have different orders of growth for different a's

 order log n < order nᵅ (α > 0) < order aⁿ < order n! < order nⁿ

Basic asymptotic efficiency classes
1         constant
log n     logarithmic
n         linear
n log n   n-log-n
n²        quadratic
n³        cubic
2ⁿ        exponential
n!        factorial

Time efficiency of nonrecursive algorithms
General Plan for Analysis
 Decide on parameter n indicating input size

 Identify algorithm’s basic operation

 Determine worst, average, and best cases for input of size n

 Set up a sum for the number of times the basic operation is executed

 Simplify the sum using standard formulas and rules (see Appendix A)

Useful summation formulas and rules
lin1 = 1+1+…+1 = n - l + 1
In particular, lin1 = n - 1 + 1 = n  (n)

1in i = 1+2+…+n = n(n+1)/2  n2/2  (n2)

1in i2 = 12+22+…+n2 = n(n+1)(2n+1)/6  n3/3  (n3)


1k  2k  3k    n k  n k  n k  n k    n k  n k 1 ( n k 1 )

0in ai = 1 + a +…+ an = (an+1 - 1)/(a - 1) for any a  1


In particular, 0in 2i = 20 + 21 +…+ 2n = 2n+1 - 1  (2n )

(ai ± bi ) = ai ± bi cai = cai liuai = limai + m+1iuai

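The closed forms above are easy to spot-check against brute-force sums for a few values of n (illustrative only; the helper name is mine):

```python
def check_sums(n, a=3):
    # Compare each closed-form summation formula with the literal sum.
    assert sum(1 for _ in range(1, n + 1)) == n
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(a**i for i in range(n + 1)) == (a**(n + 1) - 1) // (a - 1)
    assert sum(2**i for i in range(n + 1)) == 2**(n + 1) - 1

for n in (1, 5, 100):
    check_sums(n)
```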
Example 1: Maximum element

C(n) = 1in-1 1 = n-1 = (n) comparisons

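The algorithm itself (a scan keeping the running maximum) did not survive the slide export; a sketch instrumented to count comparisons, matching the n − 1 count above:

```python
def max_element(A):
    # Compare each of A[1..n-1] with the current max: exactly n - 1
    # comparisons for a list of n items. Returns (max value, comparisons).
    maxval = A[0]
    comparisons = 0
    for x in A[1:]:
        comparisons += 1
        if x > maxval:
            maxval = x
    return maxval, comparisons
```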
Example 2: Element uniqueness problem

C(n) = 0in-2 (i+1jn-1 1)


= 0in-2 n-i-1 = (n-1)+(n-2)+…+1 = (n-1+1)(n-1)/2
= ( n 2 ) comparisons
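A sketch of the underlying pairwise-comparison algorithm, with the worst case (all elements distinct) making exactly n(n−1)/2 comparisons:

```python
def unique_elements(A):
    # Compare every pair A[i], A[j] with i < j. Returns
    # (all-distinct?, number of comparisons made).
    n = len(A)
    comparisons = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1
            if A[i] == A[j]:
                return False, comparisons
    return True, comparisons
```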
Example 3: Matrix multiplication

M(n) = 0in-1 0jn-1 n


= 0in-1 n 2
= n 3 multiplications
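The definition-based n×n matrix product behind this count, instrumented to verify the n³ multiplications (a sketch; the counter is mine):

```python
def matmul(A, B):
    # C[i][j] = sum_k A[i][k] * B[k][j]; the innermost statement performs
    # one multiplication per iteration, n^3 in total.
    n = len(A)
    mults = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults
```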
Example 4: Counting binary digits

It cannot be investigated the way the previous examples were.

The halving game: find the smallest integer i such that n/2ⁱ ≤ 1.
Answer: i ≥ log₂ n. So T(n) = Θ(log n) divisions.
Another solution: using recurrence relations.
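The halving loop itself (the slide's code image did not survive); each pass makes one division, so the count is ⌊log₂ n⌋ + 1 binary digits:

```python
def bit_count(n):
    # Number of binary digits of n >= 1, found by repeated halving;
    # the loop body runs floor(log2 n) times -- Theta(log n) divisions.
    count = 1
    while n > 1:
        n //= 2
        count += 1
    return count
```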
Plan for Analysis of Recursive Algorithms
 Decide on a parameter indicating an input’s size.

 Identify the algorithm’s basic operation.


 Check whether the number of times the basic op. is executed
may vary on different inputs of the same size. (If it may, the
worst, average, and best cases must be investigated
separately.)
 Set up a recurrence relation with an appropriate initial
condition expressing the number of times the basic op. is
executed.
 Solve the recurrence (or, at the very least, establish its
solution’s order of growth) by backward substitutions or
another method.
Example 1: Recursive evaluation of n!
Definition: n! = 1 · 2 · … · (n−1) · n for n ≥ 1, and 0! = 1

Recursive definition of n!: F(n) = F(n−1) · n for n ≥ 1, and F(0) = 1

Size: n
Basic operation: multiplication
Recurrence relation: M(n) = M(n−1) + 1, M(0) = 0
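A sketch of the recursive definition instrumented to count multiplications, directly mirroring M(n) = M(n−1) + 1 with M(0) = 0:

```python
def fact(n):
    # Returns (n!, number of multiplications performed).
    if n == 0:
        return 1, 0          # F(0) = 1, M(0) = 0
    f, m = fact(n - 1)
    return f * n, m + 1      # F(n) = F(n-1) * n, M(n) = M(n-1) + 1
```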
Solving the recurrence for M(n)

M(n) = M(n-1) + 1, M(0) = 0

M(n) = M(n−1) + 1
     = (M(n−2) + 1) + 1 = M(n−2) + 2
     = (M(n−3) + 1) + 2 = M(n−3) + 3
     …
     = M(n−i) + i
     …
     = M(0) + n
     = n
The method is called backward substitution.

Example 2: The Tower of Hanoi Puzzle

Recurrence for the number of moves:
    M(n) = M(n−1) + 1 + M(n−1) = 2M(n−1) + 1, with M(1) = 1
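A sketch of the move-counting recursion (peg arguments are my own naming); it reproduces M(n) = M(n−1) + 1 + M(n−1), whose solution is 2ⁿ − 1:

```python
def hanoi_moves(n, src=1, aux=2, dst=3):
    # Move n disks from src to dst via aux; return the number of
    # single-disk moves: M(n) = M(n-1) + 1 + M(n-1).
    if n == 0:
        return 0
    return hanoi_moves(n - 1, src, dst, aux) + 1 + hanoi_moves(n - 1, aux, src, dst)
```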
Solving recurrence for number of moves

M(n) = 2M(n−1) + 1
     = 2(2M(n−2) + 1) + 1 = 2²M(n−2) + 2 + 1
     = 2²(2M(n−3) + 1) + 2 + 1 = 2³M(n−3) + 2² + 2 + 1
     …
     = 2ⁱM(n−i) + 2ⁱ⁻¹ + … + 2 + 1 = 2ⁱM(n−i) + 2ⁱ − 1
     …
     = 2ⁿ⁻¹M(1) + 2ⁿ⁻¹ − 1 = 2ⁿ⁻¹ + 2ⁿ⁻¹ − 1 = 2ⁿ − 1
Example 3: Counting #bits

A(n) = A( n / 2 ) + 1, A(1) = 0
A(2 k ) = A( 2 k 1) + 1, A( 2 0) = 1 (using the Smoothness Rule)
= (A( 2 k 2) + 1) + 1 = A( 2 k 2) + 2
= A(2 k i ) + i
= A( 2 k k) + k = k + 0
= log 2 n
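The recurrence translates directly; a minimal sketch confirming that A(n) = ⌊log₂ n⌋:

```python
def A(n):
    # A(n) = A(n // 2) + 1 for n > 1, A(1) = 0:
    # the number of additions equals floor(log2 n).
    if n == 1:
        return 0
    return A(n // 2) + 1
```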
Smoothness Rule

 Let f(n) be a nonnegative function defined on the set of natural numbers. f(n) is called smooth if it is eventually nondecreasing and
       f(2n) ∈ Θ(f(n))
  • Functions that do not grow too fast, including log n, n, n log n, and nᵅ where α ≥ 0, are smooth.

 Smoothness rule
   Let T(n) be an eventually nondecreasing function and f(n) be a smooth function. If
       T(n) ∈ Θ(f(n)) for values of n that are powers of b, where b ≥ 2,
   then
       T(n) ∈ Θ(f(n)) for any n.

