Introduction to Algorithm
and Asymptotic Notations
Unit: 1
• First, consider the internet, which is central to our daily life; we can hardly imagine life without it, and it is the outcome of clever and creative algorithms. The numerous sites on the internet can operate on and process this huge amount of data only with the help of these algorithms.
• Everyday electronic commerce activities depend massively on our data, for example credit or debit card numbers, passwords, OTPs, and many more. The core technologies used include public-key cryptography and digital signatures, which depend on mathematical algorithms.
• Even an application that does not need algorithmic content at the application level depends heavily on algorithms, since the application relies upon hardware, GUI, networking, or object orientation, and all of these make substantial use of algorithms.
• There are other important use cases where algorithms are used; for example, if we watch a video on YouTube, the next time we get related videos recommended to us.
CO.K        PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
ACSE0401.1   3   3   3   3   2   2   1   -   1   -    2    2
ACSE0401.2   3   3   3   3   2   2   1   -   1   1    2    2
ACSE0401.3   3   3   3   3   3   2   2   -   2   1    2    3
ACSE0401.4   3   3   3   3   3   2   2   1   2   1    2    3
ACSE0401.5   3   3   3   3   3   2   2   1   2   1    2    2
B TECH
(SEM-V) THEORY EXAMINATION 20__-20__
DESIGN AND ANALYSIS OF ALGORITHMS
Time: 3 Hours                                   Total Marks: 100
Note: 1. Attempt all Sections. If any data is missing, choose it suitably.

SECTION A
1. Attempt all questions in brief.              2 x 10 = 20
Q.No.   Question        Marks   CO
1                       2
2                       2
...                     ...
10                      2

SECTION B
2. Attempt any three of the following:          3 x 10 = 30
Q.No.   Question        Marks   CO
1                       10
2                       10
End Semester Question Paper Templates
4. Attempt any one part of the following:       1 x 10 = 10
Q.No.   Question        Marks   CO
1                       10
2                       10

5. Attempt any one part of the following:       1 x 10 = 10
Q.No.   Question        Marks   CO
1                       10
2                       10

6. Attempt any one part of the following:       1 x 10 = 10
Q.No.   Question        Marks   CO
1                       10
2                       10
Prerequisite and Recap
Prerequisite
• Basic concepts of the C programming language.
• Concepts of stacks, queues and linked lists.
Recap
• Flow Chart
• Algorithm
• https://youtu.be/6hfOvs8pY1k
• Introduction to Algorithm
• Characteristics of Algorithm
• Analysis of Algorithm
• Asymptotic Notations
• Recurrence Relation
• Sorting and Order Statistics
• Shell sort
• Heap sort
• Sorting in linear time
• This is an introductory chapter of Design & Analysis of Algorithms covering the concept, importance and characteristics of algorithms. Complexity and its calculation are explained. Further, recurrences and different methods to solve them are also covered.
INPUT → Algorithm → OUTPUT
An algorithm is a set of rules to obtain the expected output from the given input.
• Correctness
• Efficiency
• Simplicity
• Generality
• Non Ambiguity
– Input
– Output
– Definiteness
– Finiteness
– Effectiveness
Pseudocode –
Function LSearch(list, X)
    for index ← 0 to length(list) − 1
        if list[index] == X then
            return index
        END IF
    END LOOP
    return −1
END Function
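As a rough illustration, the same linear search written in C (function and variable names chosen for this sketch):

#include <stdio.h>

/* Linear search: returns the index of x in list[0..n-1], or -1 if absent. */
int lsearch(const int list[], int n, int x)
{
    for (int index = 0; index < n; index++) {
        if (list[index] == x)
            return index;
    }
    return -1;
}

int main(void)
{
    int a[] = {30, 50, 10, 60, 20, 40};
    printf("%d\n", lsearch(a, 6, 60));   /* prints 3 */
    return 0;
}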
• Space Complexity
• Time Complexity
• Best Case
• Worst Case
• Average Case
• Best Case
The minimum number of steps taken on any instance of size n.
• Average Case
The average number of steps taken over all instances of size n.
• Worst Case
The maximum number of steps taken on any instance of size n.
Running time for Algorithm
Algorithm                   Cost    No. of times
Sum(A, n)
{
    s = 0                   C1      1
    for i = 1 to n do       C2      n + 1
        s = s + A[i]        C3      n
    return s                C4      1
}
• The time taken by the algorithm is not constant, since the code contains a for loop:
  T(n) = C1·1 + C2·(n + 1) + C3·n + C4·1 = (C2 + C3)·n + (C1 + C2 + C4), i.e. T(n) = O(n).
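A small C version of the same routine, with the per-statement costs from the table noted as comments:

#include <stdio.h>

/* Sums A[0..n-1]; the comments mirror the cost table above. */
int sum(const int A[], int n)
{
    int s = 0;                      /* C1, executed once          */
    for (int i = 0; i < n; i++) {   /* C2, test runs n + 1 times  */
        s = s + A[i];               /* C3, executed n times       */
    }
    return s;                       /* C4, executed once          */
}

int main(void)
{
    int A[] = {1, 2, 3, 4, 5};
    printf("%d\n", sum(A, 5));      /* prints 15 */
    return 0;
}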
6. While loop
while (n > 0)
{
    i ← i + 1;
    n ← n / 2;
}
Total Time Complexity = O(log n)
Recursion: fact(n)
    if (n <= 1)
        return 1
    else
        return n * fact(n − 1)
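A runnable C sketch of the same recursion; its running time follows the recurrence T(n) = T(n − 1) + c, which solves to O(n):

#include <stdio.h>

/* Recursive factorial: T(n) = T(n-1) + c  =>  O(n) time, O(n) stack space. */
long fact(int n)
{
    if (n <= 1)
        return 1;
    return (long)n * fact(n - 1);
}

int main(void)
{
    printf("%ld\n", fact(5));   /* prints 120 */
    return 0;
}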
§ Theta Notation
§ Big Oh Notation
§ Big Omega Notation
§ Small Oh Notation
§ Small Omega Notation
Example (Big-Oh)
f(n) = 2n² + n
Taking g(n) = n², we have f(n) ≤ 3·g(n), since n ≤ n² for all values of n ≥ 1.
(You can verify that for every n ≥ n₀ = 1 the inequality 2n² + n ≤ 3n² holds.)
Hence f(n) = O(n²) with c = 3 and n₀ = 1.
Example
f(n) = 2n² + n
Considering g(n) = n², we also have f(n) ≥ 1·g(n) for all n ≥ 1,
so f(n) = Ω(n²), and together with the bound above, f(n) = Θ(n²).
Example
f(n) = 4·n³ + 10·n² + 5·n + 1
The highest-order term dominates, so f(n) = Θ(n³).
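A short worked bound for this example, choosing n₀ = 1 as the threshold:

\[
4n^3 + 10n^2 + 5n + 1 \;\le\; 4n^3 + 10n^3 + 5n^3 + n^3 \;=\; 20n^3 \quad \text{for all } n \ge 1,
\]
\[
\text{and } 4n^3 + 10n^2 + 5n + 1 \;\ge\; 4n^3 \quad \text{for all } n \ge 1,
\]
\[
\text{so } f(n) = O(n^3),\; f(n) = \Omega(n^3), \text{ and hence } f(n) = \Theta(n^3)
\text{ with } c_1 = 4,\; c_2 = 20,\; n_0 = 1.
\]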
Time Complexity Comparison
Amortized analysis
T(n) = c                for n = 1
T(n) = 2T(n/2) + c·n    for n > 1
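A short iteration-method sketch for solving this recurrence, assuming n is a power of 2:

\[
T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = \cdots = 2^k T\!\left(n/2^k\right) + kcn .
\]
\[
\text{With } n/2^k = 1 \text{ (i.e. } k = \log_2 n\text{): } T(n) = cn + cn\log_2 n = \Theta(n \log n).
\]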
Reference video :
https://www.youtube.com/watch?v=HhT1VueqQTo
Master method
Example
T(n) = 4T(n/2) + n
Here a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n.
CASE 1: f(n) = O(n^(2 − ε)) for ε = 1.
∴ T(n) = Θ(n²).
Example
T(n) = 4T(n/2) + n²
Here a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n².
CASE 2: f(n) = Θ(n² lg⁰ n), that is, k = 0.
∴ T(n) = Θ(n² lg n).
Example
T(n) = 4T(n/2) + n³
a = 4, b = 2 ⇒ n^(log_b a) = n²; f(n) = n³.
CASE 3: f(n) = Ω(n^(2 + ε)) for ε = 1, and 4·c·(n/2)³ ≤ c·n³ holds for c = 1/2.
∴ T(n) = Θ(n³).
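For reference, a standard statement of the master theorem used in the three examples above, for T(n) = aT(n/b) + f(n) with a ≥ 1, b > 1:

\[
T(n) =
\begin{cases}
\Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0,\\[4pt]
\Theta\!\left(n^{\log_b a} \lg^{k+1} n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a} \lg^{k} n\right) \text{ for some } k \ge 0,\\[4pt]
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ for some } \varepsilon > 0 \text{ and } a\, f(n/b) \le c\, f(n) \text{ for some } c < 1.
\end{cases}
\]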
Solution:
Given T(N) = T(N − 1) + 1 with T(1) = 1:
T(N) = (T(N − 2) + 1) + 1
T(N) = T(N − 2) + 2 ---------------- eq. 2
Now expand T(N − 2) and place it in eq. 2:
T(N) = T(N − 3) + 3
……………………..
Continuing in this way, T(N) = T(1) + (N − 1), and since T(1) = 1,
we get T(N) = N, i.e. T(N) = O(N).
2. Using the recurrence for the binary search given below, find the time
complexity of binary search:
• T(n) = T(n/2) + 1 , T(1) = 1
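A brief solution sketch using the substitution method, assuming n is a power of 2:

\[
T(n) = T(n/2) + 1 = T(n/4) + 2 = \cdots = T\!\left(n/2^k\right) + k .
\]
\[
\text{With } n/2^k = 1 \text{ (i.e. } k = \log_2 n\text{): } T(n) = T(1) + \log_2 n = 1 + \log_2 n = O(\log n).
\]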
Insertion sort is a simple sorting algorithm that works similarly to the way
you sort playing cards in your hands.
The array is virtually split into a sorted and an unsorted part. Values from the
unsorted part are picked and placed in the correct position in the sorted part.
INSERTION_SORT (A)
1. FOR j ← 2 TO length[A]
2. DO key ← A[j]
3. //{Put A[j] into the sorted sequence A[1 . . j − 1]}
4. i ← j − 1
5. WHILE i > 0 and A[i] > key
6. DO A[i +1] ← A[i]
7. i ← i − 1
8. A[i + 1] ← key
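A compact C version of the same procedure, using 0-based indexing (the pseudocode above is 1-based):

#include <stdio.h>

/* Insertion sort on A[0..n-1], mirroring INSERTION_SORT above. */
void insertion_sort(int A[], int n)
{
    for (int j = 1; j < n; j++) {
        int key = A[j];
        int i = j - 1;
        /* Shift elements of the sorted prefix that are greater than key. */
        while (i >= 0 && A[i] > key) {
            A[i + 1] = A[i];
            i--;
        }
        A[i + 1] = key;
    }
}

int main(void)
{
    int A[] = {30, 50, 10, 60, 20, 40};
    insertion_sort(A, 6);
    for (int k = 0; k < 6; k++)
        printf("%d ", A[k]);        /* prints 10 20 30 40 50 60 */
    printf("\n");
    return 0;
}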
Sorting(CO1)
Best Case: The best case occurs if the array is already sorted; the while loop in line 5 is executed only once for each j. Then T(n) = an + b = O(n), a linear function of n.
Worst Case: The worst case occurs if the array is sorted in reverse (decreasing) order; line 5 is executed j times for each j. Then T(n) = an² + bn + c = O(n²).
Average Case: On average, the while loop in line 5 runs about j/2 times for each j, so the average case is also O(n²).
Example array (indices 1 to 6): 30 50 10 60 20 40

Insertion_Sort(A, n)
{
    for j ← 2 to n
    {
        key = A[j]
        i = j − 1
        while (i > 0 and A[i] > key)
        {
            A[i + 1] = A[i]
            i = i − 1
        }
        A[i + 1] = key
    }
}
• According to Poonen's theorem, the worst-case complexity of shell sort is N(log N)²/(log log N)², or N(log N)²/(log log N), or N(log N)², or something in between.
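Since the slides quote shell sort's worst case without showing the code, here is a minimal C sketch using Shell's original gap sequence n/2, n/4, ..., 1 (the gap sequence is an assumption, not taken from the slides):

#include <stdio.h>

/* Shell sort with Shell's original gaps: n/2, n/4, ..., 1. */
void shell_sort(int A[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2) {
        /* Gapped insertion sort for this gap size. */
        for (int j = gap; j < n; j++) {
            int key = A[j];
            int i = j - gap;
            while (i >= 0 && A[i] > key) {
                A[i + gap] = A[i];
                i -= gap;
            }
            A[i + gap] = key;
        }
    }
}

int main(void)
{
    int A[] = {23, 29, 15, 19, 31, 7, 9, 5, 2};
    shell_sort(A, 9);
    for (int k = 0; k < 9; k++)
        printf("%d ", A[k]);   /* prints 2 5 7 9 15 19 23 29 31 */
    printf("\n");
    return 0;
}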
Heap Sort
• For example, we represent heaps in level order, going from left to right.
• The array corresponding to the heap above is [25, 13, 17, 5, 8, 3].
1. Transform the array into a binary tree by inserting each element as a node
in a breadth-first manner.
2. Convert the binary tree into a max heap, ensuring that all parent nodes are
greater than or equal to their child nodes.
3. Swap the root node — the largest element — with the last element in the
heap.
4. Call the heapify() function to restore the max heap
5. Repeat step 3 and 4 until the heap is sorted, and exclude the last element
from heap on each iteration
6. After each swap and heapify () call, ensure the max heap property is
satisfied
• In a max heap, all parent nodes must have values that are greater than or equal to
the values of their children.
• This means swapping the positions of nodes 12 and 31 in the tree to satisfy the max-heap requirement.
• Next, swap the element in the first position of the max-heap array with the element in the last position.
• We then omit the last value because it is in its sorted position, and move forward with the following array into the next step: [9, 11, 12, 3, 4, 7]
Now, transform the array into a tree, then the tree into a max heap.
• The process of converting the tree or array into a max heap is called heapify.
• The tree structure may no longer satisfy the requirements of a max heap after the root node has been swapped.
• The heapify() function should therefore be called again to restore the max-heap property.
• Rearranged heap: [12, 11, 9, 3, 5, 7]
• And again, we swap the values in the first and last position of the max-heap array
representation.
HEAPSORT (A)
1. BUILD_HEAP (A)
2. for i ← length (A) down to 2 do //reducing heap size
exchange A[1] ↔ A[i] // last & first swap
heap-size [A] ← heap-size [A] – 1 // as element del heap size reduces
Heapify (A, 1) // starts from root
The HEAPSORT procedure takes time O(n lg n), since the call to
BUILD_HEAP takes time O(n) and each of the n -1 calls to Heapify takes
time O(lg n).
Heapify (A, i)
1. l ← left [i]
2. r ← right [i]
3. if l ≤ heap-size [A] and A[l] > A[i]
4. then largest ← l
5. else largest ← i
6. if r ≤ heap-size [A] and A[r] > A[largest]
7. then largest ← r
8. if largest ≠ i
9. then exchange A[i] ↔ A[largest]
10. Heapify (A, largest)
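A self-contained C sketch combining BUILD_HEAP, Heapify and the swap loop above; it uses 0-based indexing, so the children of node i are 2i + 1 and 2i + 2 rather than 2i and 2i + 1:

#include <stdio.h>

/* Restore the max-heap property for the subtree rooted at i (0-based). */
static void heapify(int A[], int heap_size, int i)
{
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heap_size && A[l] > A[largest]) largest = l;
    if (r < heap_size && A[r] > A[largest]) largest = r;
    if (largest != i) {
        int tmp = A[i]; A[i] = A[largest]; A[largest] = tmp;
        heapify(A, heap_size, largest);
    }
}

/* Build a max heap, then repeatedly move the root to the end: O(n lg n). */
void heap_sort(int A[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)         /* BUILD_HEAP: O(n)         */
        heapify(A, n, i);
    for (int i = n - 1; i >= 1; i--) {
        int tmp = A[0]; A[0] = A[i]; A[i] = tmp; /* swap first and last      */
        heapify(A, i, 0);                        /* heap size shrinks by one */
    }
}

int main(void)
{
    int A[] = {25, 13, 17, 5, 8, 3};
    heap_sort(A, 6);
    for (int k = 0; k < 6; k++)
        printf("%d ", A[k]);   /* prints 3 5 8 13 17 25 */
    printf("\n");
    return 0;
}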
• The heap sort combines the best of both merge sort and insertion
sort. Like merge sort, the worst case time of heap sort is O(n log n)
and like insertion sort, heap sort sorts in-place.
• Since the maximum element of the array stored at the root A[1], it can
be put into its correct final position by exchanging it with A[n] (the last
element in A).
• If we now discard node n from the heap, then the remaining elements can be made into a heap. Note that the new element at the root may violate the heap property.
Sorting(CO1)
Example: A=[7, 4, 3, 1, 2]
Heap_Maximum(A)
    return A[1]

Heap_Extract_Maximum(A, heap_size)
    if heap_size < 1
        error "heap underflow"
    max = A[1]
    A[1] = A[heap_size]
    heap_size = heap_size − 1
    Max_Heapify(A, 1)
    return max
Running Time – O(log₂ n)
Heap-Increase-Key(A, i, key)
// Input: A: an array representing a heap, i: an array index, key: a new key greater than A[i]
// Output: A still representing a heap where the key of A[i] was increased to key
// Running Time: O(log n) where n = heap-size[A]
1  if key < A[i]
2      error("New key must be larger than current key")
3  A[i] ← key
4  while i > 1 and A[Parent(i)] < A[i]
5      exchange A[i] and A[Parent(i)]
6      i ← Parent(i)
Running Time – O(log₂ n)
Max_Heap_Insert(A, key)
    heap_size = heap_size + 1
    A[heap_size] = −∞        // treat the new slot as the smallest possible key
    Heap-Increase-Key(A, heap_size, key)
Running Time – O(log₂ n)
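A rough C sketch of these two priority-queue operations using 0-based indexing (so Parent(i) becomes (i − 1) / 2); the names and the fixed-size array are assumptions for this illustration:

#include <stdio.h>
#include <limits.h>

/* 0-based max-heap helper: float the key at index i up toward the root. */
static void heap_increase_key(int A[], int i, int key)
{
    if (key < A[i]) {
        printf("New key must be larger than current key\n");
        return;
    }
    A[i] = key;
    while (i > 0 && A[(i - 1) / 2] < A[i]) {
        int p = (i - 1) / 2;
        int tmp = A[i]; A[i] = A[p]; A[p] = tmp;
        i = p;
    }
}

/* Insert key into a heap currently holding *heap_size elements. */
static void max_heap_insert(int A[], int *heap_size, int key)
{
    A[*heap_size] = INT_MIN;                    /* treat as -infinity */
    heap_increase_key(A, *heap_size, key);
    (*heap_size)++;
}

int main(void)
{
    int A[16] = {17, 13, 8, 5, 2, 3};           /* a valid max heap */
    int n = 6;
    max_heap_insert(A, &n, 20);
    printf("%d\n", A[0]);                       /* prints 20 */
    return 0;
}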
• To sort numbers of up to d digits (least-significant digit first):
function radixSort(arr)
    maxNum = maximum element in arr
    exp = 1
    while maxNum / exp > 0
        countingSort(arr, exp)   // stable sort by the digit (arr[i] / exp) % 10
        exp *= 10
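A runnable C sketch of the same LSD radix sort for non-negative integers in base 10; the stable counting-sort pass is written out explicitly, since the slides only reference countingSort by name:

#include <stdio.h>
#include <string.h>

/* Stable counting sort of arr[0..n-1] on the decimal digit selected by exp. */
static void counting_sort_by_digit(int arr[], int n, int exp)
{
    int output[64];                     /* scratch buffer; assumes n <= 64 here */
    int count[10] = {0};

    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)        /* prefix sums give end positions       */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--) {  /* backwards pass keeps the sort stable */
        int d = (arr[i] / exp) % 10;
        output[--count[d]] = arr[i];
    }
    memcpy(arr, output, n * sizeof(int));
}

/* LSD radix sort for non-negative ints: O(d * (n + b)) with b = 10. */
void radix_sort(int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_sort_by_digit(arr, n, exp);
}

int main(void)
{
    int a[] = {329, 457, 657, 839, 436, 720, 355};
    radix_sort(a, 7);
    for (int i = 0; i < 7; i++)
        printf("%d ", a[i]);            /* prints 329 355 436 457 657 720 839 */
    printf("\n");
    return 0;
}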
Sorting(CO1)
Radix Sort
Complexity Analysis of Radix Sort:
Time Complexity:
•Radix sort is a non-comparative integer sorting algorithm that sorts data with integer
keys by grouping the keys by the individual digits which share the same significant
position and value. It has a time complexity of O(d * (n + b)), where d is the number of
digits, n is the number of elements, and b is the base of the number system being used.
•In practical implementations, radix sort is often faster than other comparison-based
sorting algorithms, such as quicksort or merge sort, for large datasets, especially when
the keys have many digits. However, its time complexity grows linearly with the
number of digits, and so it is not as efficient for small datasets.
Auxiliary Space:
•Radix sort also has a space complexity of O(n + b), where n is the number of elements
and b is the base of the number system. This space complexity comes from the need to
create buckets for each digit value and to copy the elements back to the original array
after each digit has been sorted.
Sorting(CO1)
Radix Sort:
The following example shows how radix sort operates on four 3-digit numbers.
Radix sort complexity is O(kn) for n keys which are integers of word size k. For all three cases, i.e. best, worst and average, the time complexity is O(kn).
BUCKET_SORT (A)
1. n ← length [A]
2. for i = 1 to n do
3.     insert A[i] into list B[⌊n·A[i]⌋]
4. for i = 0 to n − 1 do
5.     sort list B[i] with insertion sort
6. concatenate the lists B[0], B[1], . . ., B[n − 1] together in order
• Time Complexity: O(n + k) for best case and average case and
O(n^2) for the worst case.
• Space Complexity: O(nk) for worst case
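A C sketch of bucket sort for values in [0, 1), following the pseudocode above; the fixed-size buckets and the helper names are assumptions for this illustration:

#include <stdio.h>

#define MAX_PER_BUCKET 16   /* assumed cap per bucket for this small sketch */

/* Bucket sort for n values in [0, 1): distribute into n buckets,
   insertion-sort each bucket, then concatenate them in order. */
void bucket_sort(double A[], int n)
{
    double bucket[16][MAX_PER_BUCKET];          /* assumes n <= 16 here */
    int count[16] = {0};

    for (int i = 0; i < n; i++) {
        int b = (int)(n * A[i]);                /* bucket index floor(n * A[i]) */
        bucket[b][count[b]++] = A[i];
    }
    for (int b = 0, k = 0; b < n; b++) {
        for (int j = 1; j < count[b]; j++) {    /* insertion sort per bucket */
            double key = bucket[b][j];
            int i = j - 1;
            while (i >= 0 && bucket[b][i] > key) {
                bucket[b][i + 1] = bucket[b][i];
                i--;
            }
            bucket[b][i + 1] = key;
        }
        for (int j = 0; j < count[b]; j++)      /* concatenate back into A */
            A[k++] = bucket[b][j];
    }
}

int main(void)
{
    double A[] = {0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68};
    bucket_sort(A, 10);
    for (int i = 0; i < 10; i++)
        printf("%.2f ", A[i]);
    printf("\n");
    return 0;
}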
Q5) In recursion, the condition for which the function will stop
calling itself is ____________
Q7) What is the running time of an insertion sort algorithm if the input
is pre-sorted?
Q10) What is recurrence for worst case of QuickSort and what is the
time complexity in Worst case?
[CO1]
Q3 Solve the recurrence: T (n) = 50 T (n/49) + log n!
[CO1]
Q4 Solve the following recurrence: T (n) = √n T (√n) + n
[CO1]
Q5 Use the master method to give tight asymptotic bounds for the
following recurrence. T (n) = 4T (n/2) + n.
[CO1]
Q6 Solve the recurrence T (n) = 2 T (√n) + 1 by making a change of
variables. Your solution should be asymptotically tight. Do not worry
about whether values are integral.
[CO1]
Weekly Assignment
Q8) How will you sort following array of 5 elements using heap sort
5, 9, 1, 17 and 6. [CO1]
Q9) Illustrate the operation of INSERTION-SORT on the array
A = 31, 41, 59, 26, 41, 58 [CO1]
Q10) What do you mean by 'stable sorting algorithms'? Quick sort is
unstable whereas merge sort is a stable sorting algorithm. Do you
agree with the above statement? Justify your answer. [CO1]
Q11) Analyze the running time of quick sort in the average case.
Q12) What is time complexity of counting sort? Sort 1, 9, 3, 3, 4, 5, 6,
7, 7, 8 by counting sort. [CO1]
Q13) Find out the worst case running time of merge sort. [CO1]
• https://www.youtube.com/watch?v=BO145HIUHRg
• https://www.youtube.com/watch?v=mB5HXBb_HY8
• https://www.youtube.com/watch?v=6pV2IF0fgKY
• https://www.youtube.com/watch?v=7h1s2SojIRw
• https://www.youtube.com/watch?v=HqPJF2L5h9U
• https://www.youtube.com/watch?v=JMlYkE8hGJM
• https://www.youtube.com/watch?v=pEJiGC-ObQE
Q.1 The complexity of searching an element from a set of n elements using Binary
search algorithm is_______
a. O(n log n)
b. O(log n)
c. O(n2)
d. O(n)
Q.2______ is a condition that is always true at a particular point in an algorithm.
a. assertion
b. constant
c. exception
d. invariant
Q.3 The running time of quick sort depends on the _________ .
a. Selection of pivot elements
b. Number of input
c. Number of passes
d. Arrangements of the elements
Q.7 __________ is the worst case running time of shell sort, using Shell’s increments
a) O(N)
b) O(N log N)
c) O(log N)
d) O(N2)
Q.8 Heap sort is an implementation of ____________ using a descending priority
queue.
a) insertion sort
b) selection sort
c) bubble sort
d) merge sort
Q.9 The descending heap property is ___________
a) A[Parent(i)] = A[i]
b) A[Parent(i)] <= A[i]
c) A[Parent(i)] >= A[i]
d) A[Parent(i)] > 2 * A[i]
Q.11 In insertion sort, the average number of comparisons required to place the
7th element into its correct position is ______.
a) 9
b) 4
c) 7
d) 14
Q.12 _______ is the average case complexity of selection sort?
a) O(nlogn)
b) O(logn)
c) O(n)
d) O(n2)
Q9. Write an algorithm to sort the given array of elements using Quick
sort. Illustrate the operation of the PARTITION procedure on the
array = < 2, 8, 7, 1, 3, 5, 6, 4 >. [CO1]
Q10. Apply the BUCKET SORT algorithm on the following array: 0.78,
0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.21, 0.12, 0.23, 0.68
[CO1]
Q11. Why is counting sort called a stable sort? [CO1]
Q12. Distinguish between Quick sort and Merge sort, and arrange the
following numbers in increasing order using merge sort
18, 29, 68, 32, 43, 37, 87, 24, 47, 50. [CO1]
Q13. What is divide and conquer strategy and explain the binary
search with suitable example. [CO1]