Big O, Big Omega and Big Theta Notation: Prepared By: Engr. Wendell Perez

The document discusses Big O, Big Omega, and Big Theta notation used to analyze the time complexity of algorithms. It provides examples of common time complexity classes like constant, logarithmic, linear, quadratic, and exponential. It explains that Big O notation describes the upper bound of an algorithm's growth rate, Big Omega describes the lower bound, and Big Theta describes the tight or exact bound. The document also includes exercises asking the reader to identify the dominant terms and time complexities of example algorithms.


Big O, Big Omega and Big Theta Notation
Prepared by: Engr. Wendell Perez
Complexity of Algorithm
Analysis of an Algorithm – It is a process of deriving
estimates for the time and space needed to execute the
algorithm.
Complexity of an Algorithm – It is the amount of time
and space required to execute the algorithm.

Various Types of Time Complexities:

1. Best-Case Time – It is the minimum time needed to
execute the algorithm among all inputs of size n.
Continuation
2. Worst-Case Time – It is the maximum time needed to
execute the algorithm among all inputs of size n.

3. Average-Case Time – It is the average time needed to
execute the algorithm over some finite set of inputs of
size n.
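To make the three cases concrete, here is a small sketch (ours, not from the slides) using linear search, where the cases genuinely differ; the function name and comparison counter are our own illustration:

```python
def linear_search(items, target):
    """Return (index, comparisons) for target in items, or (-1, comparisons)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1          # one comparison per element inspected
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))   # best case: target first, 1 comparison
print(linear_search(data, 5))   # worst case (found last): n comparisons
print(linear_search(data, 4))   # worst case (absent): n comparisons
```

The best case touches one element, the worst case touches all n, and the average falls in between depending on the input distribution.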
Properties of Algorithms
Input: An algorithm has input values from a specified set.

Output: From the input values, the algorithm produces the output values from a
specified set. The output values are the solution.

Correctness: An algorithm should produce the correct output values for each set of
input values.

Finiteness: An algorithm should produce the output after a finite number of steps for
any input.

Effectiveness: It must be possible to perform each step of the algorithm correctly and in
a finite amount of time.

Generality: The algorithm should work for all problems of the desired form, not only for a particular set of input values.
Continuation
A reasonable definition of the size of input for the
algorithm that finds the largest value of a finite
sequence is the number of elements in the input
sequence.

A reasonable definition of the execution time is the
number of iterations of the loop.

Since the loop always runs exactly n − 1 times regardless of the input values,

worst-case = best-case = average-case = n − 1
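The counting argument above can be sketched as follows (our illustration; the function name is ours). The loop body runs once per element after the first, so the comparison count is always n − 1:

```python
def find_largest(seq):
    """Find the largest value of a non-empty sequence, counting comparisons."""
    largest = seq[0]
    comparisons = 0
    for value in seq[1:]:         # exactly len(seq) - 1 iterations
        comparisons += 1
        if value > largest:
            largest = value
    return largest, comparisons

print(find_largest([4, 8, 2, 6]))  # (8, 3): n - 1 = 3 comparisons for n = 4
```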


Continuation
Various Approaches to Algorithm Design
1. Top-Down Approach – It starts by identifying the
major components of the system or program,
decomposing them into their lower-level components,
and iterating until the desired level of modular
complexity is achieved.
In this method the design takes the form of stepwise working
and refinement of instructions. It starts from an abstract
design, and each step is refined into a more concrete
level until the final refined stage is reached.
Continuation
2. Bottom-Up Approach – The design starts with the
most basic or primitive components and
proceeds to the higher-level components. It works with
layers of abstraction. Starting from below, the
operations that provide a layer of abstraction are
implemented. These operations are then used to
implement more powerful operations and still higher
layers of abstraction until the final stage is reached.
Big O Notation
Big O – This notation, also called Landau’s symbol, is a
symbolism used in complexity theory, computer science,
and mathematics to describe the asymptotic behavior of
functions. It tells you how fast a function grows or
declines.

Landau’s symbol comes from the name of the German
number theoretician Edmund Landau, who invented the
notation. The letter O is used because the rate of growth
of a function is also called its order (which begins with the
letter O).
Continuation
List of classes of functions that are commonly
encountered when analyzing algorithms using Big O
notation:

Notation        Name
O(1)            Constant
O(log n)        Logarithmic
O((log n)^c)    Polylogarithmic
O(n)            Linear
O(n^2)          Quadratic
O(n^c)          Polynomial
O(c^n)          Exponential
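To see why the table is ordered from slowest- to fastest-growing, a rough sketch (ours, not part of the slides) can evaluate one representative of each class at increasing n; here c = 2 is assumed for the parameterized classes:

```python
import math

n_values = [10, 100, 1000]
classes = [
    ("O(1)",          lambda n: 1),
    ("O(log n)",      lambda n: math.log2(n)),
    ("O((log n)^2)",  lambda n: math.log2(n) ** 2),
    ("O(n)",          lambda n: n),
    ("O(n^2)",        lambda n: n ** 2),
    ("O(2^n)",        lambda n: 2 ** n),
]
# Each row grows faster than the one before it as n increases
for name, f in classes:
    print(name, [round(f(n), 1) for n in n_values[:2]])
```

At n = 1000 the values span from 1 to 2^1000, which is why the exponential class dominates everything above it in the table.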
Continuation
Let f and g be functions with domain {1, 2, 3, ...}.

ƒ(n) = O(g(n))  (Big O Notation)

We say that ƒ(n) is of order at most g(n) if there exists a
positive constant C1 such that

|ƒ(n)| ≤ C1 |g(n)|

for all sufficiently large n.

We write
ƒ(n) = Ω(g(n))  (Big Omega Notation)
Continuation
and say that ƒ(n) is of order at least g(n) if there exists a
positive constant C2 such that

|ƒ(n)| ≥ C2 |g(n)|

for all sufficiently large n.

We write
ƒ(n) = θ(g(n))  (Big Theta Notation)

and say that ƒ(n) is of order g(n) if both ƒ(n) = O(g(n)) and ƒ(n) = Ω(g(n)).
Example
Suppose that the worst-case time of an algorithm is
t(n) = 60n^2 + 5n + 1
for an input of size n. In this case, t(n) grows like 60n^2.
Note: Why 60n^2?
As n grows, the quadratic term outpaces the other two.
For example, substitute n = 100 into the equation
60(100)^2 + 5(100) + 1:
First term: 60(100)^2 = 600,000
Second term: 5(100) = 500
Third term: 1
Among the three terms, the first is the fastest-growing
and dominates the sum for large n.
Continuation
To compute C1, bound each lower-order term by a multiple of n^2.
For n ≥ 1 we have 5n ≤ 5n^2 and 1 ≤ n^2, so

60n^2 + 5n + 1 ≤ 60n^2 + 5n^2 + n^2 = 66n^2 for n ≥ 1

Taking C1 = 66 satisfies the condition we need to prove:

|ƒ(n)| ≤ C1 |g(n)|

60n^2 + 5n + 1 = O(n^2)

Therefore, our Big O notation is

O(n^2)  Quadratic Function
Continuation
For Big Omega Notation
Since the lower-order terms 5n + 1 are positive for n ≥ 1,
dropping them only makes the value smaller:

60n^2 + 5n + 1 ≥ 60n^2 for n ≥ 1

We can take C2 = 60 based on the inequality above:

60n^2 + 5n + 1 = Ω(n^2)

Therefore: Ω(n^2)  Quadratic Function
Continuation
Since 60n^2 + 5n + 1 = O(n^2) and 60n^2 + 5n + 1 = Ω(n^2), it is also of tight order n^2:
θ(g(n)) = θ(n^2)  Quadratic Function
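The sandwich 60n^2 ≤ t(n) ≤ 66n^2 derived above can be spot-checked numerically. This quick sketch (ours) is a sanity check over a range of n, not a proof:

```python
def t(n):
    """Worst-case time from the example: t(n) = 60n^2 + 5n + 1."""
    return 60 * n**2 + 5 * n + 1

# C2 = 60 (lower bound) and C1 = 66 (upper bound), as derived above
assert all(60 * n**2 <= t(n) <= 66 * n**2 for n in range(1, 10_001))
print("60n^2 <= t(n) <= 66n^2 holds for n = 1..10000")
```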

Example 2) Find the Big O, Big Ω and Big θ of the
function ƒ(n) = 2n + 3 log2 n.
Solution:
Since log2 n < n for n ≥ 1, then
2n + 3 log2 n < 2n + 3n = 5n for n ≥ 1
Thus 2n + 3 log2 n = O(n)  Linear Function
Continuation
For the lower bound, note that 3 log2 n ≥ 0 for n ≥ 1, so
2n + 3 log2 n ≥ 2n for n ≥ 1, and we can take C2 = 2:
2n + 3 log2 n = Ω(n)  Linear Function
And since ƒ(n) = O(n) and ƒ(n) = Ω(n),
2n + 3 log2 n = θ(n)  Linear Function
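As with the first example, the squeeze 2n ≤ ƒ(n) ≤ 5n can be spot-checked numerically; this is our own sanity-check sketch, not part of the original solution:

```python
import math

def f(n):
    """f(n) = 2n + 3 log2(n), the function from Example 2."""
    return 2 * n + 3 * math.log2(n)

# For n >= 1: 3 log2(n) >= 0 gives the lower bound, log2(n) <= n the upper
assert all(2 * n <= f(n) <= 5 * n for n in range(1, 10_001))
print("2n <= f(n) <= 5n holds for n = 1..10000")
```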
Exercises
Direction: Write your answer in MS Office Word and
send it to the email address [email protected]
. Use AlgorithmAndComplexity<fullname>Act as the
filename of your file. Also use this in the subject line of
your email so I will know who has already
submitted the activity. The deadline will be on Wednesday,
April 8, 2020. FAILURE TO FOLLOW THE DIRECTIONS
CORRECTLY WILL BE MARKED DOWN WITH A
CORRESPONDING DEDUCTION. PLS. FOLLOW THE
INSTRUCTIONS CORRECTLY.
Activity
1) Assume that each of the expressions below gives the
processing time T(n) spent by an algorithm for solving
a problem of size n. Select the dominant term(s)
having the steepest increase in n and specify the
lowest Big-Oh complexity of each algorithm. Use the
table below
Continuation
Expression                          Dominant term(s)    Big O, Big Ω, Big θ
5 + 0.001n^3 + 0.025n
500n + 100n^1.5 + 50n log10 n
0.3n + 5n^1.5 + 2.5n^1.75
n^2 log2 n + n(log2 n)^2
n log3 n + n log2 n
3 log8 n + log2 log2 log2 n
100n + 0.01n^2
0.01n + 100n^2
2n + n^0.5 + 0.5n^1.25
0.01n log2 n + n(log2 n)^2
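One way to build intuition before answering (our sketch, not part of the required submission) is to evaluate each term of an expression at a large n and see which one dwarfs the rest. The term labels below are our own, using the first expression from the table as a demo:

```python
# Terms of the first expression, 5 + 0.001n^3 + 0.025n (labels are ours)
terms = {
    "5":        lambda n: 5,
    "0.001n^3": lambda n: 0.001 * n**3,
    "0.025n":   lambda n: 0.025 * n,
}
n = 10**6
sizes = {name: f(n) for name, f in terms.items()}
dominant = max(sizes, key=sizes.get)   # term with the largest value at big n
print(dominant)  # the cubic term dominates for large n
```

Numeric evaluation only suggests the dominant term; the formal answer still rests on comparing growth rates as in the examples above.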
