
Big O Notation:

1) Also referred to as time complexity.
2) Big O Notation gives us a precise, numeric, and objective way of judging the performance of our code.

Why Does Big O Notation / Time Complexity Matter?


1) It helps you write better code, and it becomes increasingly important as your input gets bigger.

What does a "better" implementation mean?


1) "Better" means faster to finish and using less memory (RAM), although there is a much stronger emphasis on the former.
2) Readability of the code doesn't factor in at all. Performance is king.

Why not hard time measurements?


1) What if there were a way to quickly measure the performance of your code just by looking at it, rather than manually timing it?
2) Computers differ widely in their processors, so you will often get different times on each machine, which makes timings hard to standardize.

Example Problem
Write a function that calculates the sum of all numbers from 1 up to (and including)
some number n.

Most common solution:

// naive solution
function addUpToNaive(n) {
  var total = 0; // accumulator
  for (var i = 1; i <= n; i++) {
    total += i; // add each number from 1 to n
  }
  return total;
}

// more optimized solution


function addUpToOptimized(n) {
  // uses the closed-form formula for the sum 1 + 2 + ... + n
  return n * (n + 1) / 2;
}
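
One way to see why hard time measurements are unreliable is to actually time both versions. A minimal sketch, assuming an environment where performance.now() is available (modern browsers or recent Node.js); the exact numbers will differ from machine to machine and from run to run:

// time the naive solution
var t1 = performance.now();
addUpToNaive(100000000);
var t2 = performance.now();
console.log("Naive: " + (t2 - t1) + " ms");

// time the optimized solution
var t3 = performance.now();
addUpToOptimized(100000000);
var t4 = performance.now();
console.log("Optimized: " + (t4 - t3) + " ms");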

Counting Operations

A smaller number of operations generally means a faster algorithm.

Counting operations for Big O Notation


1) The number of operations in your code is easy to standardize and remains consistent regardless of the computer, unlike manually timing our code.

Big O Notation is about counting the number of operations your code has to perform as N grows large.
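
As a sketch, we can annotate the two solutions above with rough operation counts (the exact totals don't matter, only how they grow with n):

// constant number of operations regardless of n -> O(1)
function addUpToOptimized(n) {
  return n * (n + 1) / 2; // 1 multiplication, 1 addition, 1 division
}

// the counts below all grow with n -> O(n)
function addUpToNaive(n) {
  var total = 0;                 // 1 assignment
  for (var i = 1; i <= n; i++) { // n comparisons, n increments
    total += i;                  // n additions and n assignments
  }
  return total;
}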

Using a loop that runs n times makes the number of operations grow in proportion to n.

Why do we simplify Big O?

Big O only cares about the worst-case scenario and the general trend as N approaches infinity.

Different ways functions scale


Constant: f(n) = 1. As the input grows, the runtime stays about the same. Probably the best one.

Linear: f(n) = n. As the input grows, the runtime grows in proportion to n.

Quadratic: f(n) = n^2. As the input grows, the runtime grows with the square of n, so it gets really big, really fast. Not the best solution.
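A minimal sketch of the three growth rates above (the function names here are illustrative, not from the original notes):

// Constant, O(1): one operation no matter how large arr is
function getFirst(arr) {
  return arr[0];
}

// Linear, O(n): the loop body runs once per element
function printAll(arr) {
  for (var i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
}

// Quadratic, O(n^2): a loop inside a loop runs n * n times
function printAllPairs(arr) {
  for (var i = 0; i < arr.length; i++) {
    for (var j = 0; j < arr.length; j++) {
      console.log(arr[i], arr[j]);
    }
  }
}
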
Rules of thumb to simplify Big O Expressions
As inputs scale to infinity, the constants and smaller terms don't matter.

1. Constants don't matter. O(500) -> O(1), O(2n) -> O(n), O(13n^2) -> O(n^2)

2. Smaller terms don't matter. O(n + 10) -> O(n), O(1000n + 50) -> O(n), O(n^2 + 5n + 8) -> O(n^2)

Big O Notation of other things:


1. Arithmetic operations are constant. O(1)

2. Variable assignment is constant. O(1)
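
For example, a function built only from variable assignments and arithmetic does a fixed amount of work, so it is O(1) regardless of the input (an illustrative sketch, not from the original notes):

function areaOfCircle(radius) {
  var pi = 3.14159;                // assignment: O(1)
  var area = pi * radius * radius; // arithmetic: O(1)
  return area;                     // constant work overall -> O(1)
}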

A Couple More Examples


Logs at least 5 numbers
// prints numbers from 1 up to at least 5;
// loops up to either 5 or n, whichever is larger
function logAtLeast5(n) {
  for (var i = 1; i <= Math.max(5, n); i++) {
    console.log(i);
  }
}
Big O is O(n)

Logs at most 5 numbers

// prints numbers from 1 up to at most 5;
// loops up to either 5 or n, whichever is smaller
function logAtMost5(n) {
  for (var i = 1; i <= Math.min(5, n); i++) {
    console.log(i);
  }
}
Big O is O(1)

Time complexity: how we analyze the runtime of an algorithm as the size of the input increases.

Space complexity
We can also use Big O notation to analyze space complexity.

1) Space complexity: how much additional memory (RAM) do we need as the inputs provided to the code get larger?
2) Storing values in variables always takes up memory.

Examples of Space Complexity

function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]; // add each element into the running total
  }
  return total;
}
O(1) space. We only allocate two variables: i and total. total is counted once, even though it is updated repeatedly.

The input size doesn't matter, since we are only looking at the space taken up by the algorithm itself. We aren't creating new variables based on the input's length.


Another example: here the new array grows in direct proportion to the input.

function double(arr) {
  let newArr = []; // the new array grows with the input
  for (let i = 0; i < arr.length; i++) {
    newArr.push(2 * arr[i]);
  }
  return newArr;
}
O(n) space

Rules of Thumb

1. Most primitives (booleans, numbers, undefined, null) are constant space.

2. Strings require O(n) space (where n is string length).

3. Reference types (arrays or objects) are generally O(n), where n is the length (for
arrays) or number of keys (for objects).
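
A small sketch of rule 2: building a string whose length depends on n takes O(n) space (repeatChar is an illustrative name, not from the original notes):

// builds a string of length n -> O(n) space
function repeatChar(n) {
  var result = "";
  for (var i = 0; i < n; i++) {
    result += "x"; // the stored string grows with n
  }
  return result;
}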

Why is Time Complexity Prioritized Over Space Complexity?

1) Costs to produce and run processors are much higher compared to RAM.

2) Consumers/users in general care more about speed than about RAM usage.


Logarithms

O(log N): Logarithmic Time Complexity

If the input N is doubled, then we only have to do one more operation.

log2(8) = 3 can be read as: two to what power equals 8? That is, 2^x = 8 -> 2^3 = 8

Log2(16) = 4

logBase(value) = exponent -> base^exponent = value
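
The notes don't include a code example for O(log N), but binary search is a common illustration: each pass halves the remaining range, so doubling the input adds only one more pass (a sketch, assuming the input array is sorted):

// binary search on a sorted array -> O(log n) time
function binarySearch(sortedArr, target) {
  var left = 0;
  var right = sortedArr.length - 1;
  while (left <= right) {
    var mid = Math.floor((left + right) / 2); // check the middle element
    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) left = mid + 1; // discard the lower half
    else right = mid - 1;                        // discard the upper half
  }
  return -1; // not found
}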

RECAP
1. We use Big O Notation to analyze the performance of an algorithm.

2. Big O Notation gives a high-level understanding of the time or space complexity of an algorithm.

3. Big O notation doesn't care about precision, only about general trends: linear,
quadratic, constant

4. Time or space complexity (measured by Big O) depends only on the algorithm, not the hardware used to run the algorithm.
