DAA Lab File
FOR WOMEN
Branch: B.Tech(IT)-2021
Batch: IT-1
Semester: 4 (Group G)
Submitted To:
LAB:1
To use RAPTOR, a flowchart-based programming environment (a multiplatform version is
available for Windows, Mac and Linux), to design and run flowcharts for:
a) Factorial of a number.
b) Convert Fahrenheit temperature to Celsius.
c) Guess the number and print the number of attempts.
LAB:2
To understand the Brute force approach and divide and conquer strategy as
designing approach of algorithms:
a) To implement linear search based on brute force approach.
b) To implement Binary Search based on Divide and conquer strategy.
A brute force approach is an approach that examines all the possible solutions to find a
satisfactory solution to a given problem. The brute force algorithm tries out all the
possibilities until a satisfactory solution is found.
Suppose we have converted the problem into the form of the tree shown
below:
Brute force search considers each and every state of the tree, where each state is
represented as a node. From the starting position we have two choices, state A and
state B, and we can generate either. From state B, we again have two states, E and F.
In brute force search, each state is considered one by one. As we can observe in the
above tree, brute force search takes 12 steps to find the solution. Backtracking, on
the other hand, uses depth-first search and considers the states below a node only
when that node can still lead to a feasible solution. In the above tree, we start from
the root node, then move to node A and then node C. If node C does not provide a
feasible solution, there is no point in considering states G and H, so we backtrack
from node C to node A. We then move from node A to node D; since node D does not
provide a feasible solution, we discard this state and backtrack from node D to node A.
We move to node B, then from node B to node E, and from node E to node K. Since K is a
solution, it takes only 10 steps to find it. In this way, backtracking eliminates a
greater number of states in a single iteration, so it is faster and more efficient
than the brute-force approach.
The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its
basic idea is to decompose a given problem into two or more similar, but simpler,
subproblems, to solve them in turn, and to compose their solutions to solve the given
problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list
of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in
turn, and interleave both results appropriately to obtain the sorted version of the given list
(see the picture). This approach is known as the merge sort algorithm.
The name "divide and conquer" is sometimes applied to algorithms that reduce each
problem to only one sub-problem, such as the binary search algorithm for finding a record in
a sorted list (or its analog in numerical computing, the bisection algorithm for root
finding).[2] These algorithms can be implemented more efficiently than general divide-and-
conquer algorithms; in particular, if they use tail recursion, they can be converted into
simple loops. Under this broad definition, however, every algorithm that uses recursion or
loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors
consider that the name "divide and conquer" should be used only when each problem may
generate two or more subproblems.[3] The name decrease and conquer has been proposed
instead for the single-subproblem class.[4]
An important application of divide and conquer is in optimization, where if the search
space is reduced ("pruned") by a constant factor at each step, the overall algorithm
has the same asymptotic complexity as the pruning step, with the constant depending on
the pruning factor (by summing the geometric series); this is known as prune and
search.
Linear Search is defined as a sequential search algorithm that starts at one end and goes
through each element of a list until the desired element is found; otherwise the search
continues till the end of the data set. It is the simplest search algorithm.
Binary Search Algorithm: The basic steps to perform Binary Search are:
1. Sort the array in ascending order.
2. Set the low index to the first element of the array and the high index to the last
element.
3. Set the middle index to the average of the low and high indices.
4. If the element at the middle index is the target element, return the middle index.
5. If the target element is less than the element at the middle index, set the high
index to the middle index – 1.
6. If the target element is greater than the element at the middle index, set the low
index to the middle index + 1.
7. Repeat steps 3-6 until the element is found or it is clear that the element is not
present in the array.
#include <iostream>
using namespace std;

// Iterative binary search on a sorted array; returns the index of x, or -1.
int binarySearch(int arr[], int low, int high, int x) {
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == x) return mid;
        (arr[mid] < x) ? low = mid + 1 : high = mid - 1;
    }
    return -1;
}

int main(void) {
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}
LAB:3
MERGE SORT
About Merge Sort
Merge sort is a sorting algorithm that works by dividing an array into smaller subarrays,
sorting each subarray, and then merging the sorted subarrays back together to form the final
sorted array. In simple terms, we can say that the process of merge sort is to divide the array
into two halves, sort each half, and then merge the sorted halves back together. This process
is repeated until the entire array is sorted. One thing you might wonder is what the
specialty of this algorithm is: we already have a number of sorting algorithms, so why
do we need this one? One of the main advantages of merge sort is that it has a time
complexity of O(n log n), which means it can sort large arrays relatively quickly. It is also a
stable sort, which means that the order of elements with equal values is preserved during the
sort. Merge sort is a popular choice for sorting large datasets because it is relatively efficient
and easy to implement. It is often used in conjunction with other algorithms, such as
quicksort, to improve the overall performance of a sorting routine.
Merge Sort Time Complexity: O(N log N).
Merge Sort is a recursive algorithm and time complexity can be expressed as following
recurrence relation.
T(n) = 2T(n/2) + θ(n)
The above recurrence can be solved either using the Recurrence Tree method or the Master
method. It falls in case II of the Master Method, and the solution of the recurrence is
θ(N log N). The time complexity of Merge Sort is θ(N log N) in all 3 cases (worst, average,
and best), as merge sort always divides the array into two halves and takes linear time to
merge the two halves.
Auxiliary Space: O(n). In merge sort all elements are copied into an auxiliary array, so
O(n) auxiliary space is required for merge sort.
LAB:4
QUICK SORT
About Quick Sort: Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks
an element as a pivot and partitions the given array around the picked pivot. There are many
different versions of quickSort that pick pivot in different ways.
• Always pick the first element as a pivot.
• Always pick the last element as a pivot (implemented below)
• Pick a random element as a pivot.
• Pick median as the pivot.
The key process in quickSort is partition(). Given an array and an element x of the array
as the pivot, partition() puts x at its correct position in the sorted array, with all
smaller elements (smaller than x) before x and all greater elements (greater than x) after
x. All this should be done in linear time.
LAB-5
HEAP SORT
About Heap Sort
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure.
It is similar to selection sort: we repeatedly find the maximum element and place it at
the end of the unsorted part, then repeat the same process for the remaining elements.
LAB-6
SHELL SORT
About Shell Sort: Shell sort is mainly a variation of Insertion Sort. In insertion sort, we
move elements only one position ahead; when an element has to be moved far ahead, many
movements are involved. The idea of Shell sort is to allow the exchange of far-apart items.
In Shell sort, we make the array h-sorted for a large value of h, and keep reducing h until
it becomes 1. An array is said to be h-sorted if all sublists of every h-th element are sorted.
Time Complexity: the time complexity of the above implementation of Shell sort is O(n²).
In this implementation, the gap is reduced by half in every iteration; there are many
other gap sequences that lead to better time complexity.
Worst Case Complexity
The worst-case complexity for Shell sort (with this gap sequence) is O(n²).
Best Case Complexity
When the given array is already sorted, the total count of comparisons for each interval is
equal to the size of the given array, so the best-case complexity is Ω(n log n).
Average Case Complexity
The average-case complexity of Shell sort depends on the interval sequence selected by the
programmer; for common sequences it is around Θ(n (log n)²), often quoted as
O(n log n) ~ O(n^1.25).
Space Complexity
The space complexity of Shell sort is O(1).
LAB-7
PROBLEM: To perform the greedy knapsack
APPROACH
The basic idea of the greedy approach is to calculate the ratio value/weight for each item
and sort the items by this ratio in decreasing order. Then take the items with the highest
ratio one by one until the next item can no longer be added as a whole, and finally add as
much of that next item as fits. This always yields the optimal solution for the fractional
knapsack problem.
ALGORITHM
Input: Knapsack capacity = M; n is the number of available items with associated profits
P[1:n] and weights W[1:n]. The items are ordered such that P[i]/W[i] >= P[i+1]/W[i+1] for
1 ≤ i < n.
Output: S[1:n] is the fixed-size solution vector. S[i] gives the fractional part xi of
item i placed into the knapsack, 0 ≤ xi ≤ 1 and 1 ≤ i ≤ n.
CODE:
Output
LAB-8
PROBLEM: To find the length of the Longest Common Subsequence of two strings
Given two strings, S1 and S2, the task is to find the length of the longest subsequence
present in both of the strings.
Note: A subsequence of a string is a sequence that is generated by deleting some characters
(possibly none) from the string without altering the order of the remaining characters. For
example, “abc”, “abg”, “bdf”, “aeg”, “acefg”, etc. are subsequences of the string “abcdefg”.
Examples:
Input: S1 = “AGGTAB”, S2 = “GXTXAYB”
Output: 4
Explanation: The longest subsequence which is present in both strings is “GTAB”.
• Create a 2D array dp[][] with rows and columns equal to the length of each input
string plus 1 [the number of rows indicates the indices of S1 and the columns
indicate the indices of S2].
• Initialize the first row and column of the dp array to 0.
• Iterate through the rows of the dp array, starting from 1 (say using iterator i).
• For each i, iterate all the columns from j = 1 to n:
• If S1[i-1] is equal to S2[j-1], set the current element of the dp array to
dp[i-1][j-1] + 1.
• Else, set the current element of the dp array to the maximum value of dp[i-
1][j] and dp[i][j-1].
• After the nested loops, the last element of the dp array will contain the length of the
LCS.
CODE:
OUTPUT
Time Complexity: O(m * n) where m and n are the string lengths.
Auxiliary Space: O(m * n) here the recursive stack space is ignored.
CODE:
#include <bits/stdc++.h>
using namespace std;
int lcs(string X, string Y, int m, int n)
{
    // L[i][j] holds the LCS length of the prefixes X[0..i-1] and Y[0..j-1]
    int L[m + 1][n + 1];
    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;
            else if (X[i - 1] == Y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;
            else
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);
        }
    }
    return L[m][n];
}

int main()
{
    string S1 = "AGGTAB";
    string S2 = "GXTXAYB";
    int m = S1.size();
    int n = S2.size();
    cout << "Length of LCS is " << lcs(S1, S2, m, n);
    return 0;
}
OUTPUT
Time Complexity: O(m * n), which is much better than the worst-case time
complexity of the naive recursive implementation.
Auxiliary Space: O(m * n), because the algorithm uses a table of size
(m+1)*(n+1) to store the lengths of common subsequences of the prefixes.
LAB-9
PROBLEM: To perform 0-1 knapsack using Branch and Bound
APPROACH
Branch and Bound: the backtracking-based solution works better than brute
force by ignoring infeasible solutions. We can do better than backtracking
if we know a bound on the best possible solution in the subtree rooted at
every node. If the best in a subtree is worse than the current best, we can
simply ignore that node and its subtrees. So we compute a bound (best
possible solution) for every node and compare the bound with the current
best solution before exploring the node. Example bounds used in the below
diagram are: going down from A can give $315, from B $275, from C $225,
from D $125 and from E $30.
Branch and bound is a very useful technique for searching for a solution, but
in the worst case we still need to fully calculate the entire tree. At best,
we only need to fully calculate one path through the tree and prune the rest
of it.
ALGORITHM
CODE:
#include <iostream>
#include <algorithm>
#include <vector>
#include <queue>
using namespace std;

class Item {
public:
    int value;
    int weight;
    double ratio;
    Item(int value, int weight) {
        this->value = value;
        this->weight = weight;
        this->ratio = (double)value / weight;
    }
};

class KnapsackNode {
public:
    vector<int> items; // indices decided so far; its size is the depth in the tree
    int value;
    int weight;
    KnapsackNode(vector<int> items, int value, int weight) {
        this->items = items;
        this->value = value;
        this->weight = weight;
    }
};

class Knapsack {
public:
    int maxWeight;
    vector<Item> items;
    Knapsack(int maxWeight, vector<Item> items) {
        this->maxWeight = maxWeight;
        this->items = items;
    }
    int solve() {
        // Consider items in decreasing value/weight ratio so the bound is tight.
        sort(this->items.begin(), this->items.end(),
             [](const Item& a, const Item& b) { return a.ratio > b.ratio; });
        int bestValue = 0;
        queue<KnapsackNode> q;
        q.push(KnapsackNode({}, 0, 0));
        while (!q.empty()) {
            KnapsackNode node = q.front();
            q.pop();
            int i = node.items.size();
            if (i == (int)this->items.size()) {
                bestValue = max(bestValue, node.value);
            } else {
                Item item = this->items[i];
                vector<int> path = node.items;
                path.push_back(i); // record the decision so the depth advances
                KnapsackNode withItem(path, node.value + item.value,
                                      node.weight + item.weight);
                if (isPromising(withItem, this->maxWeight, bestValue)) {
                    q.push(withItem);
                }
                KnapsackNode withoutItem(path, node.value, node.weight);
                if (isPromising(withoutItem, this->maxWeight, bestValue)) {
                    q.push(withoutItem);
                }
            }
        }
        return bestValue;
    }
    bool isPromising(KnapsackNode node, int maxWeight, int bestValue) {
        // getBound() already includes node.value, so compare it directly.
        return node.weight <= maxWeight && getBound(node) > bestValue;
    }
    int getBound(KnapsackNode node) {
        // Optimistic bound: fill the remaining capacity greedily, allowing a
        // fraction of the first item that no longer fits.
        int remainingWeight = this->maxWeight - node.weight;
        int bound = node.value;
        for (int i = node.items.size(); i < (int)this->items.size(); i++) {
            Item item = this->items[i];
            if (remainingWeight >= item.weight) {
                bound += item.value;
                remainingWeight -= item.weight;
            } else {
                bound += remainingWeight * item.ratio;
                break;
            }
        }
        return bound;
    }
};

int main() {
    vector<Item> items = {
        Item(60, 10),
        Item(100, 20),
        Item(120, 30)
    };
    Knapsack knapsack(50, items);
    int result = knapsack.solve();
    cout << "Best value: " << result << endl;
    return 0;
}
OUTPUT
Time Complexity: O(N) in the best case, as only one path through the tree
has to be traversed; the worst-case time complexity is still O(2^N).
LAB-10
PROBLEM: To perform N-Queen problem using Backtracking
APPROACH
The N Queen problem is the problem of placing N chess queens on an N×N chessboard so that
no two queens attack each other. For example, the following is a solution for the 4 Queen
problem. The expected output is in the form of a matrix that has ‘Q’s for the blocks where
queens are placed and ‘.’s for the empty spaces. For example, the following is the output
matrix for the above 4 Queen solution.
ALGORITHM
1. Initialize an empty chessboard of size NxN.
2. Start with the leftmost column and place a queen in the first row of that column.
3. Move to the next column and place a queen in the first row of that column.
4. Repeat step 3 until either all N queens have been placed or it is impossible to
place a queen in the current column without violating the rules of the problem.
5. If all N queens have been placed, print the solution.
6. If it is not possible to place a queen in the current column without violating the
rules of the problem, backtrack to the previous column.
7. Remove the queen from the previous column and move it down one row.
8. Repeat steps 4-7 until all possible configurations have been tried.
Naive Algorithm
Generate all possible configurations of queens on board and print a
configuration that satisfies the given constraints.
while there are untried configurations
{
generate the next configuration
if queens don't attack in this configuration then
{
print this configuration;
}
}
CODE
#include <bits/stdc++.h>
#define N 4
using namespace std;
OUTPUT