Complexity

The document discusses the complexity of algorithms and how to analyze algorithmic complexity. It defines complexity as a measure of how difficult it is to perform a computation. The most common complexity measures are time and space complexity. Complexity is usually expressed as a function of input size and analyzed for worst-case inputs. Examples analyze the complexity of the max, linear search, binary search, bubble sort, and insertion sort algorithms.

Complexity of Algorithms
• An algorithm must always produce the correct answer, and it should be efficient.
• How can the efficiency of an algorithm be analyzed?
• The algorithmic complexity of a computation is, most generally, a measure of how difficult it is to perform the computation.
  • That is, it measures some aspect of the cost of computation (in a general sense of "cost").
  • I.e., the amount of resources required to do the computation.
• Some of the most common complexity measures:
  • "Time" complexity: # of operations or steps required
  • "Space" complexity: # of memory bits required


Complexity Depends on Input
• Most algorithms have different complexities for inputs of different sizes.
  • E.g., searching a long list typically takes more time than searching a short one.
• Therefore, complexity is usually expressed as a function of the input size.
  • This function usually gives the complexity for the worst-case input of any given length.
Worst-, Average- and Best-case Complexity

• A worst-case complexity measure estimates the time required for the most time-consuming input of each size.
• An average-case complexity measure estimates the average time required for inputs of each size.
• A best-case complexity measure estimates the time required for the least time-consuming input of each size.

Example: Max algorithm
• Problem: find the simplest form of the exact order of growth (Θ) of the worst-case time complexity of the max algorithm, assuming that each line of code takes some constant time every time it is executed (with possibly different times for different lines of code).
Complexity Analysis of max
procedure max(a1, a2, …, an: integers)
  v := a1                                  t1
  for i := 2 to n                          t2
    if ai > v then v := ai                 t3
  return v                                 t4
• First, what is an expression for the exact total worst-case time? (Not its order of growth.)
  • t1: once
  • t2: (n - 1) + 1 times
  • t3 (comparison): n - 1 times
  • t4: once
Complexity Analysis
• Worst-case time complexity:
  t(n) = t1 + t2 + t3 + t4
       = 1 + ((n - 1) + 1) + (n - 1) + 1
       = 2n + 1
• In terms of the number of comparisons made:
  t(n) = t2 + t3
       = ((n - 1) + 1) + (n - 1)
       = 2n - 1
• Now, what is the simplest form of the exact (Θ) order of growth of t(n)?
  t(n) = Θ(n)
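
As a concrete check on this analysis, here is a minimal Python sketch of the max algorithm with the loop written out explicitly so that each of t1-t4 can be seen; the comparison counter is an illustrative addition, not part of the algorithm.

def find_max(a):
    """Return (maximum of non-empty list a, number of comparisons made)."""
    comparisons = 0
    v = a[0]                 # t1: executed once
    i = 1
    while True:
        comparisons += 1     # t2: loop test, executed (n - 1) + 1 = n times
        if i >= len(a):
            break
        comparisons += 1     # t3: element comparison, executed n - 1 times
        if a[i] > v:
            v = a[i]
        i += 1
    return v, comparisons    # t4: executed once

print(find_max([3, 1, 4, 1, 5]))   # -> (5, 9), matching 2n - 1 = 9 for n = 5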
Example: Linear Search
• In terms of the number of comparisons:

procedure linear_search(x: integer, a1, a2, …, an: distinct integers)
  i := 1
  while (i ≤ n ∧ x ≠ ai)                              t11 & t12
    i := i + 1
  if i ≤ n then location := i else location := 0      t2
  return location
Linear Search Analysis
• Worst-case time complexity (in terms of the number of comparisons):
  t(n) = t11 + t12 + t2
       = (n + 1) + n + 1 = 2n + 2 = Θ(n)
• Best case:
  t(n) = t11 + t12 + t2 = 1 + 1 + 1 = 3 = Θ(1)
• Average case, if the item is present:
  t(n) = (3 + 5 + 7 + … + (2n + 1)) / n
       = (2(1 + 2 + … + n) + n) / n
       = n + 2 = Θ(n)
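
The following minimal Python sketch of the same procedure reproduces these counts with an illustrative counter: 2n + 2 comparisons when x is absent, 3 when x is the first element.

def linear_search(x, a):
    """Return (1-based location of x in a, or 0 if absent; comparison count)."""
    n = len(a)
    comparisons = 0
    i = 1
    while True:
        comparisons += 1             # t11: the test i <= n
        if not i <= n:
            break
        comparisons += 1             # t12: the test x != a_i
        if not x != a[i - 1]:
            break
        i += 1
    comparisons += 1                 # t2: the test "if i <= n" after the loop
    location = i if i <= n else 0
    return location, comparisons

print(linear_search(7, [1, 2, 3]))   # absent: -> (0, 8) = 2n + 2 for n = 3
print(linear_search(1, [1, 2, 3]))   # first element: -> (1, 3)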
Example: Binary Search
procedure binary_search(x: integer, a1, a2, …, an: distinct integers, sorted smallest to largest)
  i := 1
  j := n
  {Key question: how many loop iterations?}
  while i < j begin                                   t1
    m := ⌊(i + j)/2⌋
    if x > am then i := m + 1 else j := m             t2
  end
  if x = ai then location := i else location := 0     t3
  return location
Binary Search Analysis
• Suppose that n is a power of 2, i.e., ∃k: n = 2^k.
• The original range from i = 1 to j = n contains n items.
• Each iteration: the size j - i + 1 of the range is cut in half.
  • The size decreases as 2^k, 2^(k-1), 2^(k-2), …
  • The loop terminates when the size of the range is 1 = 2^0 (i = j).
• Therefore, the number of iterations is k = log2 n.

  t(n) = t1 + t2 + t3
       = (k + 1) + k + 1 = 2k + 2 = 2 log2 n + 2 = Θ(log2 n)

• Even for n ≠ 2^k (not an integral power of 2), the time complexity is still Θ(log2 n).
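
A minimal Python sketch of the binary search above, with an illustrative iteration counter; for n = 16 = 2^4 the loop runs exactly log2 16 = 4 times.

def binary_search(x, a):
    """Return (1-based location of x in sorted list a, or 0; loop iterations)."""
    i, j = 1, len(a)
    iterations = 0
    while i < j:                                 # t1
        iterations += 1
        m = (i + j) // 2                         # floor of (i + j)/2
        if x > a[m - 1]:                         # t2
            i = m + 1
        else:
            j = m
    location = i if a and x == a[i - 1] else 0   # t3
    return location, iterations

print(binary_search(11, list(range(1, 17))))     # -> (11, 4)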
Analysis of Sorting Algorithms

What is the worst-case complexity of the bubble sort and the insertion sort in terms of the number of comparisons made?

The total number of comparisons used by the bubble sort to order a list of n elements is:
  (n - 1) + (n - 2) + … + 2 + 1 = n(n - 1)/2

The total number of comparisons used by the insertion sort to sort a list of n elements is:
  2 + 3 + … + n = n(n + 1)/2 - 1

Both the insertion sort and the bubble sort have worst-case complexity Θ(n²).
Bubble Sort Analysis
procedure bubble_sort(a1, a2, …, an: real numbers with n ≥ 2)
  for i := 1 to n - 1
    for j := 1 to n - i
      if aj > aj+1 then interchange aj and aj+1
{a1, a2, …, an is in increasing order}

• Worst-case complexity in terms of the number of comparisons: Θ(n²)
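
A minimal Python sketch of the bubble sort above; the illustrative counter confirms that exactly n(n - 1)/2 comparisons are made regardless of the input order.

def bubble_sort(a):
    """Sort list a in place in increasing order; return (a, comparison count)."""
    n = len(a)
    comparisons = 0
    for i in range(1, n):            # i = 1 .. n - 1
        for j in range(n - i):       # j = 1 .. n - i (0-based here)
            comparisons += 1
            if a[j] > a[j + 1]:      # out of order: interchange
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

print(bubble_sort([5, 4, 3, 2, 1]))  # -> ([1, 2, 3, 4, 5], 10), and 5*4/2 = 10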
Insertion Sort Analysis
procedure insertion_sort(a1, a2, …, an: real numbers; n ≥ 2)
  for j := 2 to n
  begin
    i := 1
    while aj > ai
      i := i + 1
    m := aj
    for k := 0 to j - i - 1
      aj-k := aj-k-1
    ai := m
  end
{a1, a2, …, an are sorted in increasing order}

• Worst-case complexity in terms of the number of comparisons: Θ(n²)
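
A minimal Python sketch mirroring the insertion sort above (a linear search for the insertion point, then a shift), again with an illustrative counter. Note that for this left-to-right variant an already-sorted input is a worst case, giving 2 + 3 + … + n = n(n + 1)/2 - 1 comparisons.

def insertion_sort(a):
    """Sort list a in place in increasing order; return (a, comparison count)."""
    n = len(a)
    comparisons = 0
    for j in range(1, n):            # j = 2 .. n in the pseudocode
        i = 0
        while a[j] > a[i]:           # linear search for the insertion point
            comparisons += 1
            i += 1
        comparisons += 1             # the failed test that ends the while loop
        m = a[j]
        for k in range(j - i):       # shift a_i .. a_{j-1} right by one place
            a[j - k] = a[j - k - 1]
        a[i] = m
    return a, comparisons

print(insertion_sort([1, 2, 3, 4, 5]))   # -> ([1, 2, 3, 4, 5], 14) = 5*6/2 - 1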
Common Terminology for the Complexity of Algorithms

Complexity            Terminology
Θ(1)                  Constant complexity
Θ(log n)              Logarithmic complexity
Θ(n)                  Linear complexity
Θ(n log n)            Linearithmic complexity
Θ(n^b)                Polynomial complexity
Θ(b^n), where b > 1   Exponential complexity
Θ(n!)                 Factorial complexity
Computer Time Examples
• Assume that time = 1 ns (10^-9 second) per operation, problem size = n bits, and #ops is a function of n.

#ops(n)      n = 10 (1.25 bytes)    n = 10^6 (125 kB)
log2 n       3.3 ns                 19.9 ns
n            10 ns                  1 ms
n log2 n     33 ns                  19.9 ms
n²           100 ns                 16 m 40 s
2^n          1.024 μs               ~10^301,004.5 Gyr
n!           3.63 ms                Ouch!
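
A short Python sketch reproduces the polynomial rows of the table; 2^n and n! at n = 10^6 are astronomically large and would have to be handled in log space, so they are omitted here.

import math

# Assume 1 ns = 1e-9 s per operation, as in the table above.
growth_functions = [
    ("log2 n",   lambda n: math.log2(n)),
    ("n",        lambda n: float(n)),
    ("n log2 n", lambda n: n * math.log2(n)),
    ("n^2",      lambda n: float(n) ** 2),
]
for name, f in growth_functions:
    for n in (10, 10**6):
        print(f"{name:9s} n = {n:>9,}: {f(n) * 1e-9:.3g} s")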
Review: Complexity
• Algorithmic complexity = cost of computation.
• We focus on time complexity in this course.
  • Although space & energy are also important.
• Characterize complexity as a function of input size:
  • Worst-case, best-case, or average-case.
• Use orders-of-growth notation to concisely summarize the growth properties of complexity functions.
• Need to know:
  • Names of specific orders of growth of complexity.
  • How to analyze the order of growth of time complexity for simple algorithms.
Tractable vs. Intractable
• A problem that is solvable using an algorithm with at most polynomial time complexity is called tractable (or feasible). P is the set of all tractable problems.
• A problem that cannot be solved using an algorithm with worst-case polynomial time complexity is called intractable (or infeasible).
• Note that n^1,000,000 is technically tractable, but really very hard, while n^(log log log n) is technically intractable, but easy. Such cases are rare, though.
P vs. NP
• NP is the set of problems for which there exists a tractable algorithm for checking a proposed solution to tell if it is correct.
• We know that P ⊆ NP, but the most famous unproven conjecture in computer science is that this inclusion is proper.
  • I.e., that P ≠ NP rather than P = NP.
• Whoever first proves this (or disproves it!) will be famous!
