Unit-I DAA
What is an Algorithm?

An algorithm is a finite set of instructions that specifies a sequence of operations to be carried out in order to solve a specific problem or class of problems.

Analysis of algorithm
The analysis of an algorithm is the process of estimating its efficiency. There are two fundamental parameters on which we can analyze an algorithm:

o Space Complexity: The amount of memory space required by an algorithm to run to completion.
o Time Complexity: A function of the input size n that gives the amount of time needed by an algorithm to run to completion.

Complexity of Algorithm
The term algorithm complexity measures how many steps are required by the algorithm to solve the given problem. It evaluates the order of the count of operations executed by an algorithm as a function of the input data size.

To assess the complexity, the order (approximation) of the count of operations is considered instead of counting the exact steps.

O(f) notation, also termed asymptotic notation or "Big O" notation, represents the complexity of an algorithm. Here f corresponds to a function of the size of the input data. The asymptotic complexity O(f) determines the order in which resources such as CPU time and memory are consumed by the algorithm, expressed as a function of the size of the input data.

The complexity can take any form, such as constant, logarithmic, linear, n*log(n), quadratic, cubic, or exponential. It is nothing but the order (constant, logarithmic, linear, and so on) of the number of steps taken to complete a particular algorithm. The complexity of an algorithm is often informally called its "running time".

Why is Asymptotic Notation Important?


1. They give simple characteristics of an algorithm's efficiency.

2. They allow comparison of the performance of various algorithms.

Asymptotic Notations:
Asymptotic Notation is used to describe the running time of an algorithm -
how much time an algorithm takes with a given input, n. There are three
different notations: big O, big Theta (Θ), and big Omega (Ω).

There are mainly three asymptotic notations:


1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)

1. Theta Notation (Θ-Notation) :


Theta notation bounds the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0
≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
The above expression can be read as: if f(n) is Θ(g(n)), then f(n) always lies between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition also requires that f(n) be non-negative for n greater than n0.

 The execution time serves as both a lower and an upper bound on the algorithm's time complexity.
 It gives both the greatest and the least boundary for a given input value.
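As a small worked check (the example values here are illustrative, not from the text): take f(n) = 3n + 2 and g(n) = n. With c1 = 3, c2 = 4, and n0 = 2,

3n ≤ 3n + 2 ≤ 4n for all n ≥ 2,

so f(n) = Θ(n).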
2. Big-O Notation (O-notation):

Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive constant c and an n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
 It returns the highest possible output value (big-O) for a given input.
 The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤
f(n) ≤ cg(n) for all n ≥ n0 }
For example, consider Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²).
Note: O(n²) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n²).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.
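A quick worked check of the definition (example values are illustrative, not from the text): take f(n) = 2n + 3 and g(n) = n. With c = 5 and n0 = 1,

0 ≤ 2n + 3 ≤ 5n for all n ≥ 1,

so f(n) = O(n).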
3. Omega Notation (Ω- Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.
 The execution time serves as a lower bound on the algorithm's time complexity.
 It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0 }
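As an illustration (example values are mine, not from the text): f(n) = 2n² + n is Ω(n²), since with c = 2 and n0 = 1 we have

0 ≤ 2n² ≤ 2n² + n for all n ≥ 1.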

Recursion is a process in which a function calls itself, directly or indirectly, in order to solve a problem. A function that performs recursion is called a recursive function. Certain problems can be solved quite easily with the help of a recursive algorithm.

Properties of Recursion:

 Performing the same operations multiple times with different inputs.
 In every step, we try smaller inputs to make the problem smaller.
 A base condition is needed to stop the recursion; otherwise an infinite loop will occur.

int fact(int n)
{
    if (n <= 1) // base case
        return 1;
    else
        return n * fact(n - 1);
}
Searching Algorithms:- are designed to check for an element or retrieve an
element from any data structure where it is stored.

1. Sequential Search: The list or array is traversed sequentially and every element is checked. For example: Linear Search.
2. Interval Search: These algorithms are specifically designed for searching in sorted data structures. They are much more efficient than Linear Search, as they repeatedly target the center of the search structure and divide the search space in half. For example: Binary Search.

Binary Search Approach:


Binary Search is a searching algorithm used on a sorted array that works by repeatedly dividing the search interval in half. The idea of binary search is to use the information that the array is sorted to reduce the time complexity to O(log n).

Binary Search Algorithm: The basic steps to perform Binary Search


are:
 Sort the array in ascending order.
 Set the low index to the first element of the array and the high index
to the last element.
 Set the middle index to the average of the low and high indices.
 If the element at the middle index is the target element, return the middle index.
 If the target element is less than the element at the middle index, set the high index to the middle index - 1.
 If the target element is greater than the element at the middle index, set the low index to the middle index + 1.
 Repeat steps 3-6 until the element is found or it is clear that the element is not present in the array.

Bubble Sort
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing two items at a time and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which means the list is sorted.

How Bubble Sort Works


1. The bubble sort starts with the very first index and makes the element there the bubble element. It then compares the bubble element, which is currently our first element, with the next element. If the bubble element is greater than the next element, the two are swapped.
After swapping, the second element becomes the bubble element. We then compare the second element with the third, as in the earlier step, and swap them if required. The same process is followed until the last element.
2. We follow the same process for the rest of the iterations. After each iteration, the largest element present in the unsorted part of the array has reached the last index.

For each iteration, the bubble sort will compare up to the last
unsorted element.

Once all the elements get sorted in the ascending order, the
algorithm will get terminated.

Consider the following example of an unsorted array that we will
sort with the help of the Bubble Sort algorithm.

Initially, the array is unsorted.

Pass 1:

o Compare a0 and a1

As a0 < a1, the array remains as it is.

o Compare a1 and a2

Now a1 > a2, so we swap them.

o Compare a2 and a3

As a2 < a3, the array remains as it is.

o Compare a3 and a4

Here a3 > a4, so we again swap them.

Pass 2:

o Compare a0 and a1

As a0 < a1, the array remains as it is.

o Compare a1 and a2

Here a1 < a2, so the array remains as it is.

o Compare a2 and a3

In this case, a2 > a3, so they are swapped.

Pass 3:

o Compare a0 and a1

As a0 < a1, the array remains as it is.

o Compare a1 and a2

Now a1 > a2, so they are swapped.

Pass 4:

o Compare a0 and a1

Here a0 > a1, so we swap them.

Hence the array is sorted, as no more swapping is required.

Complexity Analysis of Bubble Sort


Input: Given n input elements.

Output: Number of steps incurred to sort a list.

Logic: If we are given n elements, then in the first pass the algorithm performs n-1 comparisons; in the second pass, n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is:

(n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2

Therefore, the bubble sort algorithm has a time complexity of O(n²) and a space complexity of O(1), because it needs only a constant amount of extra memory (a temp variable for swapping).

Time Complexities:
o Best Case Complexity: The bubble sort algorithm has a best-case time complexity of O(n) for an already sorted array.
o Average Case Complexity: The average-case time complexity of bubble sort is O(n²), which happens when the elements are in jumbled order, i.e., neither ascending nor descending.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs when we sort an array in descending order into ascending order.

Selection Sort
The selection sort improves on the bubble sort by making only a single swap for each pass through the list. To do this, a selection sort searches for the largest value as it makes a pass and, after finishing the pass, places it in its proper location. As with a bubble sort, after the first pass the largest item is in the right place. After the second pass, the next largest is in place. This procedure continues and requires n-1 passes to sort n items, since the final item must be in place after the (n-1)th pass.

ALGORITHM: SELECTION SORT (A)


1. k ← length [A]
2. for j ← 1 to k-1
3.     smallest ← j
4.     for i ← j + 1 to k
5.         if A[i] < A[smallest]
6.             then smallest ← i
7.     exchange (A[j], A[smallest])
How Selection Sort works
1. In the selection sort, first of all, we set the initial element as the minimum.
2. Now we compare the minimum with the second element. If the second element turns out to be smaller than the minimum, we record it as the new minimum and move on to the third element.
3. Otherwise, if the second element is greater than the minimum, which is our first element, we do nothing and move on to the third element, comparing it with the minimum.
We repeat this process until we reach the last element.
4. After the completion of each iteration, the minimum is swapped to the front of the unsorted list.
5. For each iteration, we start the indexing from the first element of the unsorted list. We repeat Steps 1 to 4 until the list gets sorted and all the elements are correctly positioned.
Consider the following example of an unsorted array that we
will sort with the help of the Selection Sort algorithm.

A [] = (7, 4, 3, 6, 5)

1st Iteration:

Set minimum = 7

o Compare a0 and a1

As a0 > a1, set minimum = 4.

Binary Search
1. In the Binary Search technique, we search for an element in a sorted array by recursively dividing the interval in half.

2. Firstly, we take the whole array as the interval.

3. If the pivot element (the item to be searched) is less than the item in the middle of the interval, we discard the second half of the list and recursively repeat the process for the first half of the list by calculating the new middle and last elements.

4. If the pivot element (the item to be searched) is greater than the item in the middle of the interval, we discard the first half of the list and work recursively on the second half by calculating the new beginning and middle elements.

5. Repeat until the value is found or the interval is empty.

Analysis:
1. Input: an array A of size n, already sorted in the ascending or
descending order.

2. Output: analysis of searching for an element item in the sorted array of size n.
3. Logic: Let T(n) = number of comparisons of an item with n elements in a sorted array.

o Set BEG = 1 and END = n
o Find mid = ⌊(BEG + END)/2⌋
o Compare the search item with the mid item.

Linear Search Algorithm


In this article, we will discuss the Linear Search algorithm. Searching is the process of finding a particular element in a list. If the element is present in the list, the search is called successful and returns the location of that element; otherwise, the search is called unsuccessful.

Two popular search methods are Linear Search and Binary Search. Here we will discuss the popular searching technique Linear Search.

Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we simply traverse the list completely and match each element of the list with the item whose location is to be found.

#include <stdio.h>

int search(int arr[], int N, int x)
{
    int i;
    for (i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

// Driver's code
int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int N = sizeof(arr) / sizeof(arr[0]);

    // Function call
    int result = search(arr, N, x);
    (result == -1)
        ? printf("Element is not present in array")
        : printf("Element is present at index %d", result);
    return 0;
}

Output
Element is present at index 3

Time complexity: O(N)


Auxiliary Space: O(1)

Algorithm of Matrix Chain Multiplication


MATRIX-CHAIN-ORDER (p)

1. n ← length[p] - 1
2. for i ← 1 to n
3.     do m[i, i] ← 0
4. for l ← 2 to n            // l is the chain length
5.     do for i ← 1 to n - l + 1
6.         do j ← i + l - 1
7.             m[i, j] ← ∞
8.             for k ← i to j - 1
9.                 do q ← m[i, k] + m[k + 1, j] + p[i-1] * p[k] * p[j]
10.                if q < m[i, j]
11.                    then m[i, j] ← q
12.                         s[i, j] ← k
13. return m and s
Step 1: Constructing an Optimal Solution:

PRINT-OPTIMAL-PARENS (s, i, j)
1. if i = j
2.     then print "A"i
3. else print "("
4.     PRINT-OPTIMAL-PARENS (s, i, s[i, j])
5.     PRINT-OPTIMAL-PARENS (s, s[i, j] + 1, j)
6.     print ")"
18
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't depend on machine-specific constants and doesn't require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.


Insertion sort
Insertion sort is a very simple method to sort numbers in ascending or descending order. It follows the incremental approach. It can be compared with the way cards are sorted while playing a card game.

This is an in-place, comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted. For example, the lower part of the array is maintained as sorted. An element which is to be 'inserted' into this sorted sub-list has to find its appropriate place and then be inserted there. Hence the name insertion sort.
The array is searched sequentially and unsorted items are moved and inserted into the sorted sub-list (in the same array). This algorithm is not suitable for large data sets, as its average and worst case complexity are O(n²), where n is the number of items.

Insertion Sort Algorithm


Now we have a bigger picture of how this sorting technique works,
so we can derive simple steps by which we can achieve insertion
sort.

Step 1 − If it is the first element, it is already sorted.
Step 2 − Pick the next element.
Step 3 − Compare it with all elements in the sorted sub-list.
Step 4 − Shift all the elements in the sorted sub-list that are greater than the value to be sorted.
Step 5 − Insert the value.
Step 6 − Repeat until the list is sorted.

Pseudocode
Algorithm: Insertion-Sort(A)
for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
        A[i + 1] = A[i]
        i = i - 1
    A[i + 1] = key

Analysis

Run time of this algorithm is very much dependent on the given


input.

If the given numbers are sorted, this algorithm runs in O(n) time. If the given numbers are in reverse order, the algorithm runs in O(n²) time.

Example

We take an unsorted array for our example.

Insertion sort compares the first two elements.

It finds that both 14 and 33 are already in ascending order. For now,
14 is in sorted sub-list.

Insertion sort moves ahead and compares 33 with 27.

And finds that 33 is not in the correct position. It swaps 33 with 27.
It also checks with all the elements of sorted sub-list. Here we see
that the sorted sub-list has only one element 14, and 27 is greater
than 14. Hence, the sorted sub-list remains sorted after swapping.

By now we have 14 and 27 in the sorted sub-list. Next, it compares


33 with 10. These values are not in a sorted order.

So they are swapped.

However, swapping makes 27 and 10 unsorted.

Hence, we swap them too.

Again we find 14 and 10 in an unsorted order.

We swap them again.

By the end of the third iteration, we have a sorted sub-list of 4 items.

This process goes on until all the unsorted values are covered in a
sorted sub-list. Now we shall see some programming aspects of
insertion sort.

Implementation
Since insertion sort is an in-place sorting algorithm, it is implemented in a way where the key element – which is iteratively chosen as every element in the array – is compared with the elements before it to find its position. Elements of the sorted sub-list that are greater than the key are shifted one place to the right, and the key is inserted into the gap; then the next element is chosen as the key element.

An implementation of insertion sort in C is given below −

#include <stdio.h>

void insertionSort(int array[], int size)
{
    int key, j;
    for (int i = 1; i < size; i++) {
        key = array[i]; // take value
        j = i;
        while (j > 0 && array[j - 1] > key) {
            array[j] = array[j - 1];
            j--;
        }
        array[j] = key; // insert in right place
    }
}

int main()
{
    int n = 5;
    int arr[5] = {67, 44, 82, 17, 20}; // initialize the array
    printf("Array before Sorting: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    insertionSort(arr, n);
    printf("Array after Sorting: ");
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
Output
Array before Sorting: 67 44 82 17 20
Array after Sorting: 17 20 44 67 82
