
Fundamentals of the Analysis of Algorithm Efficiency

Types of formulas for the basic operation's count

Exact formula, e.g., C(n) = n(n−1)/2

Formula indicating order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5n²

Formula indicating order of growth with an unknown multiplicative constant, e.g., C(n) ≈ cn²
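As a concrete illustration of an exact formula, the count C(n) = n(n−1)/2 is, for example, the number of comparisons made by selection sort. The sketch below is a minimal check of that fact; selection sort is chosen here only for concreteness, since the notes do not name a specific algorithm at this point.

```python
# Minimal sketch: count the basic operation (element comparisons) in
# selection sort and compare against the exact formula C(n) = n(n-1)/2.
def selection_sort_comparisons(a):
    a = list(a)
    count = 0
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            count += 1                  # basic operation: a[j] < a[min_idx]
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return count

for n in (5, 10, 100):
    assert selection_sort_comparisons(range(n)) == n * (n - 1) // 2
```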

Order of growth

Most important: order of growth within a constant multiple as n → ∞.

Examples:

How much faster will the algorithm run on a computer that is twice as fast?

How much longer does it take to solve a problem of double the input size?

For a quadratic-time algorithm with T(n) ≈ cn², for instance, a twice-as-fast computer only halves the constant c (so the answer to the first question is "twice as fast" regardless of n), while doubling the input size quadruples the running time, since T(2n)/T(n) ≈ c(2n)²/(cn²) = 4.

Values of some important functions as n → ∞
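A short sketch along these lines can tabulate representative values of the common growth functions; the sample sizes 10, 100, and 1000 are an arbitrary choice, not taken from the notes.

```python
import math

# Illustrative table of values of common growth functions.
print(f"{'n':>6} {'log2 n':>8} {'n log2 n':>10} {'n^2':>10} {'n^3':>12} {'2^n':>12}")
for n in (10, 100, 1000):
    print(f"{n:>6} {math.log2(n):>8.1f} {n * math.log2(n):>10.0f} "
          f"{n**2:>10} {n**3:>12} {2.0**n:>12.2e}")
```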

Asymptotic order of growth

A way of comparing functions that ignores constant factors and small input sizes

O(g(n)): class of functions f(n) that grow no faster than g(n)

Θ(g(n)): class of functions f(n) that grow at same rate as g(n)

Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)


Big-oh

Big-omega
Big-theta

Establishing order of growth using the definition

Definition: f(n) is in O(g(n)) if the order of growth of f(n) ≤ the order of growth of g(n) (within a constant multiple), i.e., there exist a positive constant c and a non-negative integer n0 such that

f(n) ≤ c·g(n) for every n ≥ n0

Examples:

n is O(n²)

0.5n(n−1) is O(n²)

100n + 5 is O(n²)
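A quick numerical sanity check of the definition can be written as below; it is not a proof (only finitely many n are sampled), and the witness constants c and n0 used here are just one possible choice, not values given in the notes.

```python
# Numerical sanity check (not a proof): f(n) <= c*g(n) for sampled n >= n0.
def bounded_above(f, g, c, n0, samples=range(1, 10_001)):
    return all(f(n) <= c * g(n) for n in samples if n >= n0)

# 100n + 5 is O(n^2): the witnesses c = 101, n0 = 5 work.
assert bounded_above(lambda n: 100 * n + 5, lambda n: n * n, c=101, n0=5)
# 0.5n(n-1) is O(n^2): c = 0.5, n0 = 0 works.
assert bounded_above(lambda n: 0.5 * n * (n - 1), lambda n: n * n, c=0.5, n0=0)
```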

Establishing order of growth using the definition

Definition: f(n) is in Ω(g(n)) if the order of growth of f(n) ≥ the order of growth of g(n) (within a constant multiple), i.e., there exist a positive constant c and a non-negative integer n0 such that

f(n) ≥ c·g(n) for every n ≥ n0

Examples:

n³ is Ω(n²)

100n + 5 is NOT Ω(n²)


Establishing order of growth using the definition

Definition: a function f(n) is in Θ(g(n)) if the order of growth of f(n) is bounded both above and below by positive constant multiples of g(n) for all large n, i.e., there exist positive constants c1 and c2 and a non-negative integer n0 such that

c1·g(n) ≤ f(n) ≤ c2·g(n) for every n ≥ n0

Examples:

0.5n(n−1) is Θ(n²)
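For 0.5n(n−1) ∈ Θ(n²), one possible choice of witnesses (an assumption of this sketch, not stated in the notes) is c1 = 1/4, c2 = 1/2 and n0 = 2; the snippet checks these numerically over a finite range, which again is a sanity check rather than a proof.

```python
# Numerical check of the Theta witnesses c1 = 1/4, c2 = 1/2, n0 = 2.
f = lambda n: 0.5 * n * (n - 1)
g = lambda n: n * n
assert all(0.25 * g(n) <= f(n) <= 0.5 * g(n) for n in range(2, 10_001))
```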

Some properties of asymptotic order of growth

f(n) ∈ O(f(n))

f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))

If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))


Note similarity with a ≤ b

If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), then

f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})

Establishing order of growth using limits

A convenient alternative to the definitions is to compute lim(n→∞) f(n)/g(n): a limit of 0 means f(n) has a smaller order of growth than g(n), a finite positive constant means the same order of growth, and ∞ means a larger order of growth.

Examples:

10n vs. n²

n(n+1)/2 vs. n²

10n vs. n²:

lim(n→∞) 10n/n² = lim(n→∞) 10/n = 0

Therefore the first function, 10n, has a smaller order of growth than n².

n(n+1)/2 vs. n²:

lim(n→∞) [n(n+1)/2]/n² = lim(n→∞) n(n+1)/(2n²) = lim(n→∞) (1/2 + 1/(2n)) = 1/2

Therefore the first function, n(n+1)/2, has the same order of growth as n².
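If SymPy is available, the same two limits can be checked symbolically; this is only a convenience for verifying the computations above, not part of the notes' method.

```python
# Symbolic check of the two limit computations above (requires sympy).
from sympy import symbols, limit, oo

n = symbols('n', positive=True)
print(limit(10 * n / n**2, n, oo))            # 0   -> 10n grows slower than n^2
print(limit(n * (n + 1) / 2 / n**2, n, oo))   # 1/2 -> same order of growth as n^2
```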

L’Hôpital’s rule and Stirling’s formula

L’Hôpital’s rule: if lim(n→∞) f(n) = lim(n→∞) g(n) = ∞ and the derivatives f′, g′ exist, then

lim(n→∞) f(n)/g(n) = lim(n→∞) f′(n)/g′(n)

Example: log_b n vs. n (for any base b > 1). Since lim(n→∞) log_b n = lim(n→∞) n = ∞, the rule applies:

lim(n→∞) (log_b n)/n = lim(n→∞) [d(log_b n)/dn] / [dn/dn] = lim(n→∞) (1/(n ln b)) / 1 = 0,

so log n has a smaller order of growth than n.

Stirling’s formula: n! ≈ √(2πn) (n/e)ⁿ
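As a small numerical illustration (a sketch added here, not part of the original slides), the ratio of Stirling's approximation to n! approaches 1 as n grows:

```python
import math

# Stirling's formula: n! ~ sqrt(2*pi*n) * (n/e)^n; the ratio tends to 1.
for n in (5, 10, 20):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(n, math.factorial(n), stirling / math.factorial(n))
```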
Orders of growth of some important functions

All logarithmic functions log_a n belong to the same class Θ(log n), no matter what the logarithm's base a > 1 is.

All polynomials of the same degree k belong to the same class: aₖnᵏ + aₖ₋₁nᵏ⁻¹ + … + a₀ ∈ Θ(nᵏ)

Exponential functions aⁿ have different orders of growth for different values of a > 1.

order log n < order n^ε (for any ε > 0) < order aⁿ (a > 1) < order n! < order nⁿ

Basic asymptotic efficiency classes

1 constant

log n logarithmic

n linear

n log n n-log-n

n² quadratic

n³ cubic

2ⁿ exponential

n! factorial

Time efficiency of nonrecursive algorithms

General Plan for Analysis

Decide on parameter n indicating input size


Identify algorithm’s basic operation

Determine worst, average, and best cases for input of size n

Set up a sum for the number of times the basic operation is executed

Simplify the sum using standard formulas and rules

Useful summation formulas and rules

liu1 = 1+1+…+1 = u - l + 1

In particular, liu1 = n - 1 + 1 = n  (n)

1in i = 1+2+…+n = n(n+1)/2  n2/2  (n2)

1in i2 = 12+22+…+n2 = n(n+1)(2n+1)/6  n3/3  (n3)

0in ai = 1 + a +…+ an = (an+1 - 1)/(a - 1) for any a  1

In particular, 0in 2i = 20 + 21 +…+ 2n = 2n+1 - 1  (2n )

(ai ± bi ) = ai ± bi cai = cai liuai = limai + m+1iuai
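The closed forms above can be spot-checked numerically for a small n; the values n = 20 and a = 3 below are arbitrary, illustrative choices.

```python
# Quick numeric verification (not a proof) of the summation formulas above.
n, a = 20, 3
assert sum(1 for i in range(1, n + 1)) == n
assert sum(i for i in range(1, n + 1)) == n * (n + 1) // 2
assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(a**i for i in range(0, n + 1)) == (a**(n + 1) - 1) // (a - 1)
assert sum(2**i for i in range(0, n + 1)) == 2**(n + 1) - 1
```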

Example 1: Maximum element

The obvious measure of an input’s size here is the number of elements in the array, i.e. n

The operations that are going to be executed most often are in the algorithm’s for loop. The
basic operation is the comparison:

A[i] > maxval

Since the number of comparisons will be the same for all arrays of size n, there is no need to
distinguish among the worst, average and best cases here.

Let C(n) denote the number of times this comparison is executed and try to find a formula
expressing it as a function of size n.
The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n−1 (inclusive).

Therefore, we get the following sum for C(n):

C(n) = Σ(i=1 to n−1) 1 = (n−1) − 1 + 1 = n − 1 ∈ Θ(n)
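The pseudocode for the maximum-element scan is not reproduced in this copy; the following is a minimal Python sketch of the standard left-to-right scan, instrumented to count the basic operation, so the count can be checked against C(n) = n − 1.

```python
# Minimal sketch of the MaxElement scan, instrumented to count the
# basic operation A[i] > maxval.  The count equals n - 1.
def max_element(A):
    maxval = A[0]
    comparisons = 0
    for i in range(1, len(A)):       # i runs from 1 to n-1
        comparisons += 1             # basic operation: A[i] > maxval
        if A[i] > maxval:
            maxval = A[i]
    return maxval, comparisons

data = [3, 7, 2, 9, 4]
val, c = max_element(data)
assert val == 9 and c == len(data) - 1
```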

Example 2: Element uniqueness problem

Element Uniqueness : Worst-case

The algorithm’s input size is n

The algorithm's basic operation is the comparison between A[i] and A[j], which appears in the innermost of the two nested loops.
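Since the pseudocode itself is not reproduced here, the following is a minimal Python sketch of the brute-force uniqueness check, instrumented to count the basic operation; in the worst case (all elements distinct) every pair is compared.

```python
# Minimal sketch of the element-uniqueness check, counting the basic
# operation A[i] == A[j]; worst case (all distinct) gives n(n-1)/2.
def unique_elements(A):
    n = len(A)
    comparisons = 0
    for i in range(n - 1):           # i from 0 to n-2
        for j in range(i + 1, n):    # j from i+1 to n-1
            comparisons += 1         # basic operation: A[i] == A[j]
            if A[i] == A[j]:
                return False, comparisons
    return True, comparisons

ok, c = unique_elements(list(range(8)))   # worst case: all distinct
assert ok and c == 8 * 7 // 2
```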

The worst-case efficiency of the algorithm is given by:

Cworst(n) = Σ(i=0 to n−2) Σ(j=i+1 to n−1) 1

= Σ(i=0 to n−2) [(n−1) − (i+1) + 1]

= Σ(i=0 to n−2) (n−1−i)

= Σ(i=0 to n−2) (n−1) − Σ(i=0 to n−2) i

= (n−2+1)(n−1) − [0 + 1 + 2 + … + (n−3) + (n−2)]

= (n−1)(n−1) − (n−2)(n−1)/2

= n(n−1)/2 ∈ Θ(n²)

Element Uniqueness : Best-case

The algorithm’s basic operation is comparison between A[i] and A[j]

The best case occurs when the first two elements of the array are equal: the algorithm stops after the very first comparison.

The number of comparisons is then 1.

Example 3: Matrix multiplication

The algorithm’s worst-case

Cworst(n) = 0i n-1 0jn-1 0kn-1 1

Cworst(n) = 0i n-1 0jn-1 [ (n-1) - 0 + 1 ]

Cworst(n) = 0i n-1 0jn-1 n

Cworst(n) = n 0i n-1 0jn-1 1

Cworst(n) = n 0i n-1 [ (n-1) - 0 + 1 ]

Cworst(n) = n 0i n-1 n

Cworst(n) = n2 0i n-1 1

Cworst(n) = n2 [ (n-1) - 0 + 1 ]

Cworst(n) = (n2)n  (n3)
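The algorithm itself is not reproduced in this copy; the following is a minimal Python sketch of the straightforward triple-loop multiplication of two n × n matrices, counting multiplications, so the count can be checked against n³.

```python
# Minimal sketch of the straightforward n x n matrix multiplication,
# counting the basic operation (a multiplication); the count is n^3.
def matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    multiplications = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                multiplications += 1        # basic operation
    return C, multiplications

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
C, m = matrix_multiply(I, I)
assert C == I and m == 3 ** 3
```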
