
Big O notation: definition and examples

yourbasic.org

Big O notation is a convenient way to describe how fast a function is growing.

» Definition » Constant time » Linear time » Quadratic time
» Sloppy notation » Ω and Θ notation » Key takeaways
Definition
When we compute the time complexity T(n) of an algorithm, we rarely get an exact result, only an estimate. That's fine: in computer science we are typically only interested in how fast T(n) grows as a function of the input size n.

For example, if an algorithm increments each number in a list of length n, we might say: “This algorithm runs in O(n) time and performs O(1) work for each element”.
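
To make this concrete, here is a minimal Go sketch of such an algorithm (the function name is illustrative, not from the article):

    // incrementAll adds 1 to every element of the slice.
    // Each iteration does O(1) work and the loop runs n times,
    // so the function as a whole runs in O(n) time.
    func incrementAll(a []int) {
        for i := range a {
            a[i]++
        }
    }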

Here is the formal mathematical definition of Big O.

Let T(n) and f(n) be two positive functions. We write T(n) ∊ O(f(n)), and say that
T(n) has order of f(n), if there are positive constants M and n₀ such that
T(n) ≤ M·f(n) for all n ≥ n₀.

This graph shows a situation where all of the conditions in the definition are met.

In essence:

T(n) ∊ O(f(n)) means that T(n) doesn't grow faster than f(n).
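
For a quick worked instance of the definition: T(n) = 2n + 3 ∊ O(n), since choosing M = 5 and n₀ = 1 gives 2n + 3 ≤ 2n + 3n = 5n for all n ≥ 1.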

Constant time
Let’s start with the simplest possible example: T(n) ∊ O(1).

According to the definition this means that there are constants M and n₀ such that T(n) ≤ M when n ≥ n₀. In other words, T(n) ∊ O(1) means that T(n) is smaller than some fixed constant, whose value isn't stated, for all large enough values of n.

An algorithm with T(n) ∊ O(1) is said to have constant time complexity.
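
For instance, reading an element of a slice by index takes the same amount of time no matter how long the slice is. A minimal Go sketch (the function is an illustrative stand-in):

    // first returns the first element of a non-empty slice.
    // The running time doesn't depend on len(a), so T(n) ∊ O(1).
    func first(a []int) int {
        return a[0]
    }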

Linear time
In the Time complexity article, we looked at an algorithm with complexity T(n) = n − 1. Using Big O notation this can be written as T(n) ∊ O(n). (If we choose M = 1 and n₀ = 1, then T(n) = n − 1 ≤ 1·n when n ≥ 1.)

An algorithm with T(n) ∊ O(n) is said to have linear time complexity.
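
The algorithm in question scans the list once; presumably the familiar search for a maximum, which uses exactly n − 1 comparisons. A Go sketch under that assumption (maxOf is an illustrative name):

    // maxOf returns the largest element of a non-empty slice.
    // The loop performs exactly n - 1 comparisons,
    // so T(n) = n - 1 ∊ O(n).
    func maxOf(a []int) int {
        m := a[0]
        for _, x := range a[1:] {
            if x > m {
                m = x
            }
        }
        return m
    }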

Quadratic time
The second algorithm in the Time complexity article had time complexity T(n) = n²/2 − n/2. With Big O notation, this becomes T(n) ∊ O(n²), and we say that the algorithm has quadratic time complexity.
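
The count n²/2 − n/2 is exactly the number of pairs that can be formed from n elements, so any algorithm that compares every pair has this complexity. A hedged Go sketch (hasDuplicate is an illustrative example, not necessarily the algorithm from that article):

    // hasDuplicate reports whether the slice contains two equal elements.
    // The nested loops compare every pair (i, j) with i < j:
    // that's n(n-1)/2 = n²/2 - n/2 comparisons, so T(n) ∊ O(n²).
    func hasDuplicate(a []int) bool {
        for i := 0; i < len(a); i++ {
            for j := i + 1; j < len(a); j++ {
                if a[i] == a[j] {
                    return true
                }
            }
        }
        return false
    }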

Sloppy notation
The notation T(n) ∊ O(f(n)) can be used even when f(n) grows much faster than T(n). For example, we may write T(n) = n − 1 ∊ O(n²). This is indeed true, but not very useful.

Ω and Θ notation
Big Omega is used to give a lower bound for the growth of a function. It's defined in the same way as Big O, but with the inequality sign turned around:

Let T(n) and f(n) be two positive functions. We write T(n) ∊ Ω(f(n)), and say that
T(n) is big omega of f(n), if there are positive constants m and n₀ such that
T(n) ≥ m·f(n) for all n ≥ n₀.

Big Theta is used to indicate that a function is bounded both from above and below.

T(n) ∊ Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)).

Example

T(n) = 3n³ + 2n + 7 ∊ Θ(n³)

If n ≥ 1, then T(n) = 3n³ + 2n + 7 ≤ 3n³ + 2n³ + 7n³ = 12n³. Hence T(n) ∊ O(n³).

On the other hand, T(n) = 3n³ + 2n + 7 > n³ for all positive n. Therefore T(n) ∊ Ω(n³).

And consequently T(n) ∊ Θ(n³).

Key takeaways
When analyzing algorithms you often come across the following time complexities.

Complexity

Θ(1)                  Good news
Θ(log n)
Θ(n)
Θ(n log n)

Θ(nᵏ), where k ≥ 2    Bad news
Θ(kⁿ), where k ≥ 2
Θ(n!)

O(n log n) is really good

The first four complexities indicate an excellent algorithm. An algorithm with worst-case time complexity W(n) ∊ O(n log n) scales very well, since logarithms grow very slowly.

log₂ 1,000 ≈ 10
log₂ 1,000,000 ≈ 20
log₂ 1,000,000,000 ≈ 30

In fact, Θ(n log n) time complexity is very close to linear – it takes roughly twice the
time to solve a problem twice as big.
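
To see why, double the input: 2n·log₂(2n) = 2n·log₂ n + 2n, which for large n is only slightly more than twice n·log₂ n.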

[Graph: n log n growth rate is close to linear]

Ω(n²) is pretty bad

The last three complexities typically spell trouble. Algorithms with time complexity Ω(n²) are useful only for small input: n shouldn't be more than a few thousand.

10,000² = 100,000,000

An algorithm with quadratic time complexity scales poorly: if you increase the input size by a factor of 10, the time increases by a factor of 100.
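
The factor of 100 follows directly from the formula: replacing n by 10n gives (10n)² = 100·n².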

Next

Time complexity of recursive functions

Previous

Time complexity: Count your steps


This work is licensed under a CC BY 3.0 license.
