Computational Complexity: ECE 3340 - David Mayerich

This document discusses computational complexity and big-O notation. It provides examples of how runtime scales with input size for common operations such as vector addition, matrix-vector multiplication, and sorting. Vector addition has linear O(n) complexity, while matrix-vector multiplication has quadratic O(n^2) complexity. The analysis shows that an algorithm of higher complexity will always be slower for sufficiently large inputs, regardless of hardware speed. The document also discusses how dimensionality affects volume in metric spaces.
Computational Complexity

ECE 3340 – David Mayerich


Computational Complexity
• Remember the use of Big-O notation?

e^x ≈ 1 + x + x^2/2! + x^3/3! + x^4/4! + O(x^5)

• Big-O notation in the final term specifies:


– the final term behaves like x^5 when limits are taken (e.g. x → 0 or x → ∞)
– the final term is bounded: O(x^5) ≤ C·x^5 for some constant C
– O(x^n) < O(x^(n+1)) means C1·x^n < C2·x^(n+1) for large x
– even if C1 ≫ C2, there is still an x0 for which C1·x^n < C2·x^(n+1) for all x > x0
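The crossover claim can be checked numerically. A minimal sketch, with illustrative constants C1 = 1000, C2 = 1, n = 2 (not from the slides), that searches for the first x past the crossover point x0 = C1/C2:

```python
# Sketch: even with a much larger constant C1, the lower-order term
# C1*x**n eventually falls below C2*x**(n+1).

def crossover(C1, C2, n, x_max=10**7):
    """Return the first integer x where C1*x**n < C2*x**(n+1)."""
    for x in range(1, x_max):
        if C1 * x**n < C2 * x**(n + 1):
            return x
    return None

C1, C2, n = 1000.0, 1.0, 2
x0 = crossover(C1, C2, n)
print(x0)  # C1*x**n < C2*x**(n+1) reduces to x > C1/C2 = 1000, so x0 = 1001
```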
Computational Complexity
• What does this tell us?
• In the case of the error term in
e^x ≈ 1 + x + x^2/2! + x^3/3! + x^4/4! + O(x^5)
we know that the error behaves like x^5
– reducing x by a factor of 1/2 reduces the error by a factor of 1/2^5:
C|x/2|^5 = (C/2^5)|x|^5
– doubling x increases the error by a factor of 2^5:
C|2x|^5 = (C·2^5)|x|^5
• If we know how the function behaves, we can approximate and bound our
truncation error
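The x^5 scaling of the truncation error can be verified directly. A minimal sketch, comparing the four-term expansion against math.exp at an arbitrarily chosen point x = 0.2: halving x should shrink the error by roughly 2^5 = 32.

```python
import math

# Sketch: the truncated series 1 + x + x^2/2! + x^3/3! + x^4/4! has
# error ~ C*x^5, so halving x shrinks the error by about 2^5 = 32.

def taylor4(x):
    return 1 + x + x**2/2 + x**3/6 + x**4/24

x = 0.2
err_full = abs(math.exp(x) - taylor4(x))
err_half = abs(math.exp(x/2) - taylor4(x/2))
print(err_full / err_half)  # close to 32 for small x
```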
Computational Complexity
• The same concepts apply to the time required to solve a numerical
problem
• How many operations are required for c = a + b where a, b ∈ ℝ?
1 addition (1 FLOP)
• What if a, b ∈ ℝ^3:
a = (a1, a2, a3), b = (b1, b2, b3)
3 FLOPs

• How about if a, b ∈ ℝ^n?
n FLOPs
– Vector addition is an example of an O(n) operation
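The FLOP counts above can be made concrete by performing the addition while counting operations (the counter is illustrative bookkeeping, not part of a real implementation):

```python
# Sketch: vector addition with a FLOP counter. Each output element costs
# one addition, so n-element vectors cost n FLOPs: O(n).

def vector_add(a, b):
    assert len(a) == len(b)
    flops = 0
    c = []
    for ai, bi in zip(a, b):
        c.append(ai + bi)  # one floating-point addition per element
        flops += 1
    return c, flops

c, flops = vector_add([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(c, flops)  # [5.0, 7.0, 9.0] 3
```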
Computational Complexity
• How many operations are required for vector addition?
– How long will the calculation take at 1 ms/operation? At 2 ms/operation?

c = a + b
(c1, c2, …, cn) = (a1, a2, …, an) + (b1, b2, …, bn), i.e. ci = ai + bi

[Figure: time required (ms) vs. input size (n); straight lines for a slow system (2 ms/op), a baseline system (1 ms/op), and a fast system (0.5 ms/op)]

No matter how fast/slow the system is, the runtime is linear: O(n)
Computational Complexity
• How much time is required for matrix-vector multiplication? O(n^2)

c = Ma

2×2 case: (c1, c2) = [M11 M12; M21 M22](a1, a2)
n×n case: (c1, …, cn) = [M11 … M1n; ⋮ ⋱ ⋮; Mn1 … Mnn](a1, …, an)

[Figure: time required (ms) vs. input size (n); curves t = C·n^2 for a slow system (C > 1), a baseline system (C = 1), and a fast system (C < 1)]

No matter how fast/slow the system is, the runtime is quadratic: O(n^2)


Computational Complexity
• No matter how fast or slow the processor, an 𝑂𝑂(𝑛𝑛)
algorithm will always beat an 𝑂𝑂 𝑛𝑛2 algorithm for
large 𝑛𝑛

[Figure: time required (ms) vs. input size (n); past some input size, the O(n) line falls below every O(n^2) curve]


Computational Complexity – Loops
Algorithm for adding two vectors
01 for i = 1 to n
02 c[ i ] = a[ i ] + b[ i ]
03 end
• How many times is line 2 executed? n times
– This is what an O(n) algorithm looks like.
– It doesn't matter how many commands are in the loop, as long as they aren't dependent on n

Algorithm for multiplying a matrix and a vector


01 for i = 1 to n
02 for j = 1 to n
03 c[ i ] = c[ i ] + M[ i ][ j ] * a[ j ]
04 end
05 end
• How many times is line 3 executed? n^2 times (assuming c is initialized to zero beforehand)
– This is what an O(n^2) algorithm looks like.
Time Complexity – Behavior
• If we know that vector addition has a 𝑂𝑂(𝑛𝑛) complexity, we also know a few
specifics about the algorithm
– doubling the size of the vectors requires twice as much time:
O(2x): C|2x| = 2C|x|
– halving the size of the vectors requires half as much time:
O(x/2): C|x/2| = (1/2)C|x|
• If we know how fast a system can perform a floating-point operation, we can estimate how long the vector addition will take
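A sketch of such an estimate (the 1 ms/operation rate is the slides' illustrative figure, far slower than real hardware):

```python
# Sketch: estimating vector-addition time from an assumed cost per FLOP.
# Because the algorithm is O(n), the estimate is just n times the per-op cost.

def estimated_time_ms(n, ms_per_flop=1.0):
    return n * ms_per_flop  # n additions at a fixed cost each

print(estimated_time_ms(1000))       # 1000.0 ms at 1 ms/op
print(estimated_time_ms(1000, 0.5))  # 500.0 ms on a system twice as fast
```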

• What is the time complexity of:


– a vector (dot) product y = a · b: O(n)
– a matrix-vector multiplication y = Ma: O(n^2)
– a matrix-matrix multiplication C = AB: O(n^3)
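The O(n^2) count for matrix-vector multiplication can be verified by instrumenting the nested loops (a sketch with an illustrative operation counter):

```python
# Sketch: matrix-vector multiplication counting multiply-add operations.
# Each of the n output entries is a length-n dot product, so n*n ops total.

def matvec(M, a):
    """Multiply an n-by-n matrix by an n-vector, counting multiply-adds."""
    n = len(a)
    ops = 0
    c = [0.0] * n
    for i in range(n):
        for j in range(n):
            c[i] += M[i][j] * a[j]  # one multiply-add
            ops += 1
    return c, ops

n = 4
M = [[1.0] * n for _ in range(n)]
a = [1.0] * n
c, ops = matvec(M, a)
print(c, ops)  # each entry sums n ones -> 4.0; ops == n*n == 16
```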
Complexity of common problems
• 𝑂𝑂(1) – constant time
̶ single operation on two values of constant bit size
̶ accessing a value in memory (RAM), on an SSD, or in a register
̶ use of a lookup table
• 𝑂𝑂(log 𝑛𝑛) – logarithmic time
̶ binary search
̶ traversing a tree (octree, quadtree, binary tree)
• 𝑂𝑂(𝑛𝑛) – linear time
̶ finding an item in an unsorted list
̶ multiplying two 𝑛𝑛-dimensional vectors
• 𝑂𝑂(𝑛𝑛2 ) – quadratic time
̶ bubble sort algorithm
̶ matrix-vector multiplication
• O(n!) – factorial time
– brute-force solutions to NP-hard problems
– traveling salesman problem (checking all possible tours)
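As a sketch of the O(log n) entry above, binary search over 2^10 sorted elements finds its target in about log2(1024) = 10 comparisons (the comparison counter is illustrative):

```python
# Sketch: binary search halves the candidate range each step, so an
# n-element sorted list takes about log2(n) comparisons.

def binary_search(sorted_list, target):
    """Return (index, comparison count), or (-1, count) if absent."""
    lo, hi, comparisons = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_list[mid] == target:
            return mid, comparisons
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1024))  # 2^10 elements
idx, comps = binary_search(data, 777)
print(idx, comps)  # comps stays near log2(1024) = 10
```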
Curse of Dimensionality
• The number of points required to evenly sample a space increases
exponentially with dimension

– sampling n points per axis gives n samples in 1D, n^2 in 2D, and n^d in d dimensions
High Dimensionality – Distance
• Volume of a hypersphere and hypercube of radius r = 1
• 1-dimensional (both are line segments):
V_s = 2r = 2, V_c = 2r = 2, V_s/V_c = 1.000
• 2-dimensional (circle and square):
V_s = πr^2 = 3.14, V_c = (2r)^2 = 4, V_s/V_c = 0.785
• 3-dimensional (sphere and cube):
V_s = (4/3)πr^3 = 4.19, V_c = (2r)^3 = 8, V_s/V_c = 0.524
• As dimensions increase,
̶ less of the volume is in the center
̶ more of the volume is in the corners
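The ratios above extend to any dimension via the standard hypersphere-volume formula V_s(d) = π^(d/2) r^d / Γ(d/2 + 1); a sketch:

```python
import math

# Sketch: ratio of a unit-radius hypersphere's volume to that of its
# bounding hypercube (side 2r), for any dimension d. The ratio shrinks
# rapidly: ever more of the cube's volume sits in the corners.

def sphere_cube_ratio(d, r=1.0):
    v_sphere = math.pi**(d / 2) / math.gamma(d / 2 + 1) * r**d
    v_cube = (2 * r)**d
    return v_sphere / v_cube

for d in (1, 2, 3, 10):
    print(d, round(sphere_cube_ratio(d), 3))  # 1.0, 0.785, 0.524, then tiny
```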
