
Chapter 08
Fundamentals of the Analysis of Algorithm Efficiency

Introduction
 Analysis of algorithms is usually used in a
narrower, technical sense to mean an
investigation of an algorithm’s efficiency with
respect to two resources: running time and
memory space.

1. The Analysis Framework
 Time efficiency, also called time complexity,
indicates how fast an algorithm in question
runs.

 Space efficiency, also called space complexity, refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output.

Measuring an input size
 Almost all algorithms run longer on larger inputs. Therefore,
it is logical to investigate an algorithm’s efficiency as a
function of some parameter n indicating the algorithm’s input
size.
 It will be the size of the list for problems of sorting, searching,
finding the list’s smallest element, and most other problems
dealing with lists.
 For the problem of evaluating a polynomial p(x) = aₙxⁿ + . . . + a₀ of degree n, it will be the polynomial's degree or the number of its coefficients, which is larger by 1 than its degree.
 In computing the product of two n × n matrices, there
are two natural measures of size for this problem. The
first and more frequently used is the matrix order n.
But the other natural contender is the total number
of elements N in the matrices being multiplied. (The
latter is also more general since it is applicable to
matrices that are not necessarily square.)

 The choice of an appropriate size metric can be influenced by operations of the algorithm in question.

Units for Measuring Running Time
 We can simply use some standard unit of time measurement—a
second, or millisecond, and so on—to measure the running time of a
program implementing the algorithm. There are obvious drawbacks
to such an approach:

• dependence on the quality of a program implementing the algorithm and of the compiler used in generating the machine code,

• the difficulty of clocking the actual running time of the program,

• dependence on the speed of a particular computer.

 Since we are after a measure of an algorithm’s efficiency, we would
like to have a metric that does not depend on these extraneous
factors.
 One possible approach is to count the number of times each of the
algorithm’s operations is executed.
 The basic operation is the operation contributing the most to the total running time. It is usually the most time-consuming operation in the algorithm's innermost loop.
 For example, most sorting algorithms work by comparing elements
(keys) of a list being sorted with each other; for such algorithms,
the basic operation is a key comparison.

 The most time-consuming operation among arithmetical
operations is division, followed by multiplication and then
addition and subtraction, with the last two usually considered
together.

 Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula

T(n) ≈ c_op · C(n)
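
As a quick illustration of this formula (using a hypothetical operation count, not one taken from the slides), suppose an algorithm performs C(n) = ½ n(n − 1) basic operations. Even without knowing c_op, we can estimate how much longer the program runs when the input size doubles:

```latex
% Assumed count C(n) = n(n-1)/2, chosen only for illustration.
\frac{T(2n)}{T(n)} \approx \frac{c_{op}\,C(2n)}{c_{op}\,C(n)}
  = \frac{\tfrac{1}{2}\,(2n)(2n-1)}{\tfrac{1}{2}\,n(n-1)}
  \approx 4 \quad \text{for large } n.
```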

Orders of Growth
 A difference in running times on small inputs is not what really
distinguishes efficient algorithms from inefficient ones.
 For large values of n, it is the function’s order of growth that
counts.
 Algorithms that require an exponential number of operations
are practical for solving only problems of very small sizes.
 The efficiency analysis framework concentrates on the order of
growth of an algorithm’s basic operation count as the principal
indicator of the algorithm’s efficiency.
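
To make the point concrete, the short script below (an added illustration, not part of the original slides) tabulates a few common growth functions; even for moderate n, the exponential term dominates everything else:

```python
import math

# Tabulate common growth functions to show how quickly they diverge.
for n in (10, 100, 1000):
    print(f"n={n:<5} log2(n)={math.log2(n):<6.1f} "
          f"n*log2(n)={n * math.log2(n):<10.0f} "
          f"n^2={n ** 2:<10} digits(2^n)={len(str(2 ** n))}")
```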

Worst-Case, Best-Case, and Average-Case
Efficiencies
 There are many algorithms for which running time depends
not only on an input size but also on the specifics of a
particular input.
 The worst-case efficiency of an algorithm is its efficiency
for the worst-case input of size n, which is an input (or
inputs) of size n for which the algorithm runs the longest
among all possible inputs of that size.
 The worst-case analysis provides very important
information about an algorithm’s efficiency by bounding its
running time from above.
 The best-case efficiency of an algorithm is its efficiency for
the best-case input of size n, which is an input (or inputs) of
size n for which the algorithm runs the fastest among all
possible inputs of that size.

 Neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behavior on a "typical" or "random" input. This is the information that the average-case efficiency seeks to provide.
 To analyze the algorithm’s average case efficiency, we must
make some assumptions about possible inputs of size n.

Example
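
The algorithm analyzed here is presumably sequential search, judging from the solution below (Cworst(n) = n, Cbest(n) = 1, with key comparisons counted). A minimal Python sketch under that assumption:

```python
def sequential_search(a, key):
    """Return the index of key in list a, or -1 if key is not present.

    Basic operation: the key comparison a[i] == key.
    Worst case: n comparisons (key is last or absent); best case: 1.
    """
    for i in range(len(a)):
        if a[i] == key:   # basic operation
            return i
    return -1
```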

Solution
 Cworst(n) = n , Cbest(n) = 1.

 For the calculation of the average-case efficiency, the standard assumptions are:
• the probability of a successful search is equal to p (0 ≤ p ≤ 1);
• the probability of the first match occurring in the ith position of the list is the same for every i.

 In the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in such a situation is obviously i.
 In the case of an unsuccessful search, the
number of comparisons will be n with the
probability of such a search being (1− p).
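
Putting the successful and unsuccessful cases together yields the standard average-case count (a derivation consistent with the assumptions above, written out here for completeness):

```latex
C_{avg}(n) = \underbrace{\sum_{i=1}^{n} i \cdot \frac{p}{n}}_{\text{successful}}
           + \underbrace{n \cdot (1 - p)}_{\text{unsuccessful}}
           = \frac{p}{n} \cdot \frac{n(n+1)}{2} + n(1-p)
           = \frac{p\,(n+1)}{2} + n(1-p).
```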

 If p = 1 (the search must be successful)…?


 If p = 0 (the search must be unsuccessful)…?
2. Asymptotic Notations and Basic
Efficiency Classes
 To compare and rank such orders of growth, computer scientists use three notations: O (big oh), Ω (big omega), and Θ (big theta).

 DEFINITION A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.

Big-oh notation:



Example
 Show that 100n + 5 ∈ O(n²).

Solution:

100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n², so c = 101 and n0 = 5 work.
 DEFINITION A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c·g(n) for all n ≥ n0.
Big-Omega notation:



Example
 Show that

Solution:

for all , i.e., .

 DEFINITION A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.

Big-theta notation:



Example
1. Show that ½ n(n − 1) ∈ Θ(n²).

Solution:

• for the upper bound:
½ n(n − 1) = ½ n² − ½ n ≤ ½ n² for all n ≥ 0, so c1 = ½ works.

• for the lower bound:
½ n(n − 1) = ½ n² − ½ n ≥ ½ n² − ½ n · ½ n (for all n ≥ 2) = ¼ n², so c2 = ¼ and n0 = 2 work.

 THEOREM If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).

Using limits to calculate orders of growth:

lim (n→∞) t(n)/g(n) = 0      if t(n) has a smaller order of growth than g(n)
lim (n→∞) t(n)/g(n) = c > 0  if t(n) has the same order of growth as g(n)
lim (n→∞) t(n)/g(n) = ∞      if t(n) has a larger order of growth than g(n)

The first two cases mean that t(n) ∈ O(g(n)), the last two mean that t(n) ∈ Ω(g(n)), and the second case means that t(n) ∈ Θ(g(n)).

 L'Hôpital's rule: lim (n→∞) t(n)/g(n) = lim (n→∞) t′(n)/g′(n).
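
A worked instance of the limit technique (added here for illustration): comparing ½ n(n − 1) with n².

```latex
\lim_{n \to \infty} \frac{\tfrac{1}{2}\,n(n-1)}{n^{2}}
  = \frac{1}{2}\lim_{n \to \infty} \frac{n^{2}-n}{n^{2}}
  = \frac{1}{2}\lim_{n \to \infty} \left(1 - \frac{1}{n}\right)
  = \frac{1}{2}.
```

Since the limit is a positive constant, ½ n(n − 1) and n² have the same order of growth, i.e., ½ n(n − 1) ∈ Θ(n²), agreeing with the big-theta example above.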

Exercise
1. Using the limit approach, compare the orders of growth of:
a) and
b) and
c) and

2. Use the informal definitions of O, Ω, and Θ to determine whether the following assertions are true or false.

3. Mathematical Analysis of
Non-recursive Algorithms

General Plan for Analyzing the Time Efficiency of Non-
recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.

2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost
loop.)

3. Check whether the number of times the basic operation is executed depends only on
the size of an input. If it also depends on some additional property, the worst-case,
average-case, and, if necessary, best-case efficiencies have to be investigated
separately.

4. Set up a sum expressing the number of times the algorithm’s basic operation is
executed.

5. Using standard formulas and rules of sum manipulation, either find a closed form
formula for the count or, at the very least, establish its order of growth.

Prepared By: Eyob S. 25


Important Formulas
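
The summation rules used most often in the following analyses are the standard identities below (listed here for reference):

```latex
\sum_{i=l}^{u} 1 = u - l + 1, \qquad
\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \in \Theta(n^{2}), \qquad
\sum_{i=1}^{n} i^{2} = \frac{n(n+1)(2n+1)}{6} \in \Theta(n^{3}),

\sum_{i=0}^{n} a^{i} = \frac{a^{n+1} - 1}{a - 1}\ (a \neq 1), \qquad
\sum_{i} c\,a_{i} = c \sum_{i} a_{i}, \qquad
\sum_{i} (a_{i} + b_{i}) = \sum_{i} a_{i} + \sum_{i} b_{i}.
```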



Example 1



Solution:
 Input size: n

 Basic operation: comparison
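
Since the input size is n and the basic operation is a comparison, the algorithm analyzed here is presumably a single scan over a list, such as finding the largest element; a Python sketch under that assumption, with the resulting operation count:

```python
def max_element(a):
    """Return the largest element of a non-empty list a.

    Basic operation: the comparison a[i] > max_val.
    It runs once per iteration of the single loop, so
    C(n) = sum over i = 1..n-1 of 1 = n - 1, i.e. C(n) ∈ Θ(n).
    """
    max_val = a[0]
    for i in range(1, len(a)):
        if a[i] > max_val:   # basic operation
            max_val = a[i]
    return max_val
```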

Example 2



Solution:
 Input size: n

 Basic Operation: comparison
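
Here, too, the input size is n and the basic operation is a comparison; a typical algorithm for this analysis step is the element-uniqueness check with two nested loops, sketched below under that assumption:

```python
def unique_elements(a):
    """Return True if all elements of list a are distinct, False otherwise.

    Basic operation: the comparison a[i] == a[j].
    Worst case (no duplicates): Cworst(n) = sum over i of (n - 1 - i)
                                          = n(n - 1) / 2 ∈ Θ(n^2).
    """
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:   # basic operation
                return False
    return True
```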

Example 3



Solution:
 Input size: matrix order n

 Basic operation: multiplication
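
With the matrix order n as the size metric and multiplication as the basic operation, the algorithm analyzed is presumably definition-based multiplication of two n × n matrices; a sketch with its count:

```python
def matrix_multiply(A, B):
    """Multiply n x n matrices A and B by the definition C[i][j] = sum_k A[i][k] * B[k][j].

    Basic operation: the multiplication A[i][k] * B[k][j].
    It runs once per innermost iteration, so M(n) = n * n * n = n^3 ∈ Θ(n^3).
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # basic operation
    return C
```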

Example 4



Solution:

 Input size: a single number n

 Basic operation: comparison
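
Given a single number n as the input-size measure and comparison as the basic operation, a likely candidate for this example is counting the binary digits of n; a sketch under that assumption:

```python
def binary_digit_count(n):
    """Return the number of digits in the binary representation of a positive integer n.

    Basic operation: the comparison n > 1 controlling the loop.
    n is halved on every pass, so the comparison executes about
    floor(log2 n) + 1 times, i.e. C(n) ∈ Θ(log n).
    """
    count = 1
    while n > 1:      # basic operation
        count += 1
        n //= 2
    return count
```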

4. Mathematical Analysis of
Recursive Algorithms



General Plan for Analyzing the Time Efficiency
of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.

2. Identify the algorithm’s basic operation.

3. Check whether the number of times the basic operation is executed can
vary on different inputs of the same size; if it can, the worst-case, average-
case, and best-case efficiencies must be investigated separately.

4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.

5. Solve the recurrence or, at least, ascertain the order of growth of its
solution.

Example 1



Solution:
 Input size: a single number n
 Basic operation: multiplication
 The number of multiplications M(n) needed to compute F(n) = n! must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0.

 M(0) = 0: no multiplications are made when n = 0.


 By the method of backward substitutions:
M(n) = M(n − 1) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2
     = . . .
     = M(n − i) + i
When i = n: M(n) = M(0) + n = 0 + n, so M(n) = n.
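
A sketch of the recursive algorithm behind this analysis (the factorial function F(n) = n!, inferred from the recurrence M(n) = M(n − 1) + 1 with M(0) = 0):

```python
def factorial(n):
    """Compute F(n) = n! recursively.

    Basic operation: the multiplication factorial(n - 1) * n.
    Its count satisfies M(n) = M(n - 1) + 1 for n > 0 and M(0) = 0,
    which solves to M(n) = n.
    """
    if n == 0:
        return 1                      # F(0) = 1: no multiplication
    return factorial(n - 1) * n       # basic operation
```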

Example 2



Solution:
 The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more addition is made by the algorithm to increase the returned value by 1. This leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1.
 Since the recursive calls end when n is equal to 1
and there are no additions made then, the initial
condition is
A(1) = 0.
 Let n = 2^k. Backward substitution gives A(2^k) = A(2^(k−1)) + 1 = . . . = A(2^0) + k = k, i.e., A(n) = log2 n for n = 2^k; by the smoothness rule, A(n) = ⌊log2 n⌋ ∈ Θ(log n) for all n.
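
A sketch of BinRec as described above (recursively counting the binary digits of n); the recursive call on ⌊n/2⌋ and the single addition match the recurrence A(n) = A(⌊n/2⌋) + 1:

```python
def bin_rec(n):
    """Return the number of binary digits in the representation of a positive integer n.

    Basic operation: the addition of 1 to the result of the recursive call.
    Its count satisfies A(n) = A(n // 2) + 1 for n > 1 and A(1) = 0,
    so A(n) = floor(log2 n) ∈ Θ(log n).
    """
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1   # basic operation: the addition
```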

Thank You!!!
