Title: Data Structure Complexity.: Wednesday, December 8, 2021

The document summarizes a presentation about data structure complexity. It discusses two main complexity measures - time complexity and space complexity. It provides examples comparing the time complexity of different sorting algorithms like bubble sort, selection sort, and merge sort. It also discusses complexity classes like Big O notation, which provides an upper bound on an algorithm's running time, and Big Omega notation, which provides a lower bound.

DATA STRUCTURE AND

ALGORITHMS
CS 245
PRESENTATION
TITLE:
DATA STRUCTURE COMPLEXITY.

PRESENTERS:-

SULEIMAN MKANGA SULEIMAN.

HAMAD KHAMIS ALI.

MWINYI HASSAN HAKIM.

Wednesday, December 8, 2021

1


PRESENTATION OUTLINE:-

 Introduction.

 Two main complexity measures.

 Comparison between algorithms.

 Example.

 Complexity classes.

 Algorithm analysis.

2
INTRODUCTION.

An essential aspect of data structures is algorithms: data
structures are implemented using algorithms. An
algorithm is a procedure that you can write as a C++
function or program, or in any other language. An
algorithm states explicitly how the data will be
manipulated.

3
CONT…

The term complexity refers to the condition of something being
difficult to analyze, understand, or solve. Data structure
complexity studies the limiting behavior of algorithms as the
input size n becomes large (approaches infinity).
It is generally acknowledged that although you can buy more
memory or a faster CPU chip, these things will not save you if
you are running an inefficient algorithm.

4
CONT…

This allows algorithm designers to predict the behavior of their
algorithms and to determine which of multiple algorithms to
use, in a way that is independent of computer architecture
or clock rate.

5
TWO MAIN COMPLEXITY MEASURES
These are:-

i) Time complexity.

ii) Space complexity.

6
TIME COMPLEXITY.

Time complexity is a function describing the amount of
time an algorithm takes in terms of the amount of input
to the algorithm. "Time" can mean the number of
memory accesses performed, the number of comparisons
between integers, the number of times some inner loop is
executed, or some other natural unit related to the
amount of real time the algorithm will take.

7
SPACE COMPLEXITY.

Space complexity is a function describing the amount of memory
(space) an algorithm takes in terms of the amount of input to
the algorithm. We often speak of "extra" memory needed, not
counting the memory needed to store the input itself.

Space complexity is sometimes ignored because the space used
is minimal and/or obvious, but sometimes it becomes as
important an issue as time.

8
COMPARISON BETWEEN ALGORITHMS.

Let us look at four sorting algorithms: bubble sort,
selection sort, insertion sort, and merge sort. The four
sorting algorithms are analyzed and compared.
This comparison is mainly meant to show the user which
sorting algorithm is helpful in various situations.
Time taken (in seconds):

9
CONT…
Input size (n)   bubble sort   selection sort   merge sort   insertion sort
     0            0.000000       0.000000       0.000000       0.000000
 10000            0.680000       0.220000       0.150000       0.270000
 20000            2.740000       0.870000       0.590000       1.090000
 30000            6.160000       1.950000       1.330000       2.450000
 40000           10.880000       3.490000       2.360000       4.340000
 50000           17.070000       5.520000       3.690000       6.760000
 60000           24.639999       7.870000       5.320000       9.790000

10
CONT...
Now let us try to analyze in a bit more detail, comparing the
number of array accesses made by the selection sort and
merge sort algorithms.

T(n) is for selection sort and Tm(n) is for merge sort.
Time is in nanoseconds.

For small n, T(n) seems to outperform Tm(n), so at first glance one
might think selection sort is better than merge sort. But if
we extend the table:

11
n    T(n)   Tm(n)
---  ----   -----
20    456    479
21    500    511
22    546    544
23    594    576
24    644    610
25    696    643
26    750    677
27    806    711
28    864    746
29    924    781
30    986    816

12
CONT…

We see that merge sort starts to take a little less time than
selection sort for larger values of n, and the gap grows
if we extend the table to large values.

To put this in perspective, recall that a typical memory
access is done on the order of nanoseconds, or billionths
of a second.

13
Selection sort on ten million items takes roughly 100
trillion accesses; if each one takes ten nanoseconds it
will take 1,000,000 seconds, or about 11 and a half days,
to complete. Merge sort, with a "mere" 1.2 billion
accesses, will be done in 12 seconds. For a billion
elements, selection sort takes almost 32,000 years, while
merge sort takes about 37 minutes, assuming a RAM
large enough to hold the data.

14
AN EXAMPLE: SELECTION SORT

Suppose we want to put an array of n floating point numbers
into ascending numerical order. This task is called sorting and
should be somewhat familiar. One simple algorithm for
sorting is selection sort. You let an index i go from 0 to n-1,
exchanging the ith element of the array with the minimum
element from i up to n. Here are the iterations of selection sort
carried out on the sequence {4 3 9 6 1 7 0}:
15
index      0 1 2 3 4 5 6   comments
-----------------------------------
           4 3 9 6 1 7 0   initial
i=0 |      0 3 9 6 1 7 4   swap 0, 4
i=1 |      0 1 9 6 3 7 4   swap 1, 3
i=2 |      0 1 3 6 9 7 4   swap 3, 9
i=3 |      0 1 3 4 9 7 6   swap 6, 4
i=4 |      0 1 3 4 6 7 9   swap 9, 6
i=5 |      0 1 3 4 6 7 9   (done)

This kind of analysis gives you a good idea of the amount of time
you'll spend waiting, and allows you to compare this algorithm to
other algorithms that have been analyzed in a similar way.

16
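The procedure traced above can be sketched as a C++ function (a minimal sketch; the slides show no code, so the function name and the use of std::vector are our own choices):

```cpp
#include <cstddef>
#include <utility>  // std::swap
#include <vector>

// Selection sort: for each position i, swap in the minimum
// element of the unsorted suffix [i, n).
void selectionSort(std::vector<double>& a) {
    const std::size_t n = a.size();
    for (std::size_t i = 0; i + 1 < n; ++i) {
        std::size_t minIdx = i;
        for (std::size_t j = i + 1; j < n; ++j) {
            if (a[j] < a[minIdx]) minIdx = j;
        }
        std::swap(a[i], a[minIdx]);
    }
}
```

Running it on the slide's sequence {4, 3, 9, 6, 1, 7, 0} produces {0, 1, 3, 4, 6, 7, 9}, matching the trace.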
COMPLEXITY CLASSES.
 There are three main complexity classes in which an
algorithm can be placed.

 Big-O (Oh).

 Big-Omega.

 Big-Theta.
 
17
BIG O NOTATION.
 Big-O notation is also known as big Oh notation, big
Omicron notation, Landau notation, Bachmann–Landau
notation, and asymptotic notation.

 Big-O is the formal method of expressing the upper
bound of an algorithm's running time. It's a measure of
the longest amount of time it could possibly take for the
algorithm to complete.

18
CONT..
 The performance of a program is measured in
milliseconds (i.e. its speed of execution), but complexity
is measured in the form of "Big-O" notation.

 The worst case or average case running time or memory
usage of an algorithm is often expressed as a function of
the length of its input using big O notation.

19
CONT..
 Definition.
 Let f(n) and g(n) be functions defined on the set of
natural numbers. A function f(n) is said to be
f(n) = O(g(n)) if there exist positive constants C and n0
such that

 f(n) ≤ C · g(n) for all n ≥ n0.

 This means that g(n) is an upper bound of f(n).


 
20
CONT..
Graphically:

21
EXAMPLE.

 f(n) = 3n² + 4n + 1. Show f(n) is O(n²).

 4n ≤ 4n² for all n ≥ 1
 and
 1 ≤ n² for all n ≥ 1

 so
 3n² + 4n + 1 ≤ 3n² + 4n² + n² for all n ≥ 1
             ≤ 8n² for all n ≥ 1

 So we have shown f(n) ≤ 8n² for all n ≥ 1,
so f(n) is O(n²). (C = 8, n0 = 1)
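The bound can be spot-checked numerically (a small illustrative sketch; the function names are ours, not from the slides):

```cpp
#include <cassert>

// f(n) = 3n^2 + 4n + 1 and the claimed Big-O witness 8n^2 (C = 8, n0 = 1).
long long f(long long n)     { return 3 * n * n + 4 * n + 1; }
long long bound(long long n) { return 8 * n * n; }

// Check f(n) <= 8n^2 for every n from 1 up to some limit.
bool bigOHolds(long long upTo) {
    for (long long n = 1; n <= upTo; ++n)
        if (f(n) > bound(n)) return false;
    return true;
}
```

Note that n0 = 1 matters: at n = 0 we have f(0) = 1 but 8·0² = 0, so the inequality only holds from n0 onward.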


  22
BIG OMEGA NOTATION. (Ω)

 The big-Omega is the opposite of big-O.

 Definition.

 The function f(n) is Ω(g(n)) if and only if there exist a
positive real constant c and a positive integer n0 such
that
 f(n) ≥ c · g(n) for all n ≥ n0.

23
CONT…
 This means that g(n) is a lower bound of f(n).

 Graphically:

24
CONT…
 Therefore, when we say the running time of an algorithm
is Ω(g(n)), we mean that no matter what particular input
of size n is chosen, the running time on that input is at
least a constant times g(n), for sufficiently large n.

25
BIG THETA NOTATION. (Ѳ)

 Big-Theta notation is a type of order notation, typically
used for comparing run-times or growth rates between
two growth functions.

 Big-Theta is a stronger statement than Big-O and Big-
Omega.

26
CONT…
 Definition.

 Let f(n) and g(n) be functions defined on the set of
natural numbers. A function f(n) is said to be
f(n) = Ѳ(g(n)) if there exist positive constants C1, C2
and n0 such that

 0 ≤ C1·g(n) ≤ f(n) ≤ C2·g(n) for all n ≥ n0.

 That means g(n) gives a tight bound of f(n).

27
CONT…

28
ORDER OF MAGNITUDES.

 There are four commonly used orders.

 Linear order: O(n).

 Logarithmic order: O(log n).

 n log n order: O(n log n).

 Quadratic order: O(n²).


 
29
O(1) - CONSTANT TIME

 This means that the algorithm requires the same fixed
number of steps regardless of the size of the task.

 Examples.

 Push and Pop operations for a stack (containing n
elements);

 Insert and Remove operations for a queue.

30
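As a sketch of why stack Push and Pop are constant time: both touch only the top of an array-backed stack (illustrative code; std::vector's push_back runs in amortized constant time):

```cpp
#include <vector>

// Array-backed stack: push and pop act only on the top element,
// so each operation's cost is independent of how many of the
// n elements are already stored (push is amortized O(1)).
struct Stack {
    std::vector<int> data;
    void push(int x) { data.push_back(x); }
    int pop() { int top = data.back(); data.pop_back(); return top; }
    bool empty() const { return data.empty(); }
};
```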
O(N) - LINEAR TIME

 This means that the algorithm requires a number of steps
proportional to the size of the task.

 Examples.

 Traversal of a list (a linked list or an array) with n
elements;

 Finding the maximum or minimum element in a list, or
sequential search in an unsorted list of n elements;

 Traversal of a tree with n nodes;

 Calculating iteratively n-factorial; finding iteratively the
nth Fibonacci number.
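The last two examples each run a single loop of about n steps (a minimal sketch; note that 64-bit integers overflow for large n, so this is for modest inputs):

```cpp
#include <cstdint>

// Iterative n-factorial: one pass over 2..n, so O(n) multiplications.
std::uint64_t factorial(unsigned n) {
    std::uint64_t result = 1;
    for (unsigned i = 2; i <= n; ++i) result *= i;
    return result;
}

// Iterative nth Fibonacci number: one loop of n steps, O(n) additions.
std::uint64_t fib(unsigned n) {
    std::uint64_t a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i) {
        std::uint64_t t = a + b;
        a = b;
        b = t;
    }
    return a;
}
```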
  31
O(N²) - QUADRATIC TIME.

 The number of operations is proportional to the size of
the task squared.

 Examples:

 Some more simplistic sorting algorithms, for instance a
selection sort of n elements;

 Comparing two two-dimensional arrays of size n by n;

 Finding duplicates in an unsorted list of n elements
(implemented with two nested loops).
32
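The duplicate-finding example can be sketched with the two nested loops the slide mentions (illustrative code; the function name is ours):

```cpp
#include <cstddef>
#include <vector>

// Two nested loops compare every pair of elements, which is
// roughly n*(n-1)/2 comparisons: O(n^2) time.
bool hasDuplicates(const std::vector<int>& a) {
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[i] == a[j]) return true;
    return false;
}
```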
O(LOG N) - LOGARITHMIC TIME

 Examples:

 Binary search in a sorted list of n elements;

 Insert and Find operations for a binary search tree with n
nodes;

 Insert and Remove operations for a heap with n nodes.

33
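Binary search is logarithmic because each step halves the remaining search range, so at most about log₂(n) comparisons are needed (a minimal sketch; returns the index of the key, or -1 if absent):

```cpp
#include <vector>

// Binary search in a sorted list: the range [lo, hi] is halved on
// every iteration, giving O(log n) comparisons.
int binarySearch(const std::vector<int>& a, int key) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // avoids overflow of lo + hi
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;  // key not present
}
```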
CONT…
 O(n log n) - "n log n" time
 Examples:

 More advanced sorting algorithms: quick sort, merge
sort.

 O(aⁿ) (a > 1) - exponential time

 Examples:

 Recursive Fibonacci implementation.

 Towers of Hanoi.

 Generating all permutations of n symbols.


34
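The naive recursive Fibonacci implementation is the classic exponential-time example: each call spawns two further calls, so the call tree grows roughly like 2ⁿ (a minimal sketch):

```cpp
#include <cstdint>

// Naive recursive Fibonacci: every call makes two more calls,
// so the number of calls grows exponentially in n.
// (Contrast with the iterative O(n) version shown earlier.)
std::uint64_t fibRecursive(unsigned n) {
    if (n < 2) return n;
    return fibRecursive(n - 1) + fibRecursive(n - 2);
}
```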
COMPARISON
 The best time in the above list is obviously constant
time, and the worst is exponential time. Below are
the orders of asymptotic behavior of the functions from
the above list:

 O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) <
O(aⁿ)

35
CONT…

36
ALGORITHM ANALYSIS.

 Algorithm analysis involves measuring how long an
algorithm will take to reach a result and terminate.

 This may be measured by comparison with other
algorithms, or by determining what order the
algorithm is.

 Algorithms have a best case, average case, and worst
case running time.

37
CONT…
 The worst case running time is the longest possible time
it could take for the algorithm to terminate.

 The best case is the opposite of this.

 The average case attempts to estimate the average time
that the algorithm will take given a certain size of input,
but this can be quite difficult to calculate.

38
EXAMPLE

Efficiency is proportional to the number of iterations.

Efficiency time function is f(n) = 1 + (n-1) + c*(n-1) + (n-1)
                                 = (c+2)*(n-1) + 1
                                 = (c+2)n - (c+2) + 1

Efficiency is: O(n).

That means the algorithm has linear running time complexity.

39
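The slide does not show the loop being counted; one loop whose cost breaks down this way might look like the following (an illustrative sketch using the slide's own bookkeeping: 1 initialization, n-1 loop tests, c units of body work per iteration, and n-1 increments):

```cpp
// Illustrative loop for the count f(n) = 1 + (n-1) + c*(n-1) + (n-1):
// sums the integers 1..n-1.
int sumBody(int n) {
    int p = 0;                     // 1 initialization
    for (int i = 1; i < n; ++i) {  // n-1 tests, n-1 increments
        p = p + i;                 // body runs n-1 times (cost c each)
    }
    return p;
}
```

Whatever the constant c is, the total is a linear function of n, which is why the slide concludes O(n).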
MORE EXAMPLE

The total number of iterations is the product of the number of
inner loop iterations and the number of outer loop iterations.

Saying the running time is O(n²) means that there is a
function f(n) that is O(n²) such that, for any value of n,
no matter what particular input of size n is chosen, the
running time on that input is bounded from above by the
value f(n).

40
ANOTHER EXAMPLE
Code                              Operation counts
for (int i=0; i< n ; i++)         n+1
  for (int j=0; j < n; j++)       n(n+1) = n²+n
  { cout << i;                    n*n    = n²
    p = p + i;                    n*n    = n²
  }                               ________
                                  3n²+2n+1

O(n²)

41
MORE EXAMPLE.

for (int i=0; i< n ; i++)
{ cout << i;
  p = p + i;
}

F(n) = n+1+n+n
     = 3n+1
42
THE END.

THANKS

Question…!

Contribution..

Comment…..!

43
