
INDIRA GANDHI DELHI TECHNICAL UNIVERSITY

FOR WOMEN

DESIGN AND ANALYSIS OF ALGORITHM PRACTICAL FILE


BCS-204

Submitted By: Shriyaa Gupta

Enrollment number: 02301032021

Branch: B.Tech(IT)-2021

Batch: IT-1

Semester: 4 (Group G)

Submitted To:

Dr. Nonita Sharma


INDEX
S. No. | Topic | Date | Teacher's Signature
1. To understand the Raptor Tool for flowchart creation and implementation.
   To create flowcharts for the following problems:
   a) Print the sum of 10 inputs.
   b) Factorial of a number.
   c) Convert Fahrenheit temperature to Celsius.
   d) Guess the number and print the number of attempts.
2. To understand the brute force approach and the divide and conquer strategy as algorithm design approaches:
   a) To implement linear search based on the brute force approach.
   b) To implement binary search based on the divide and conquer strategy.
3. To learn a sorting algorithm based on divide and conquer: Merge Sort
4. To learn a sorting algorithm based on divide and conquer: Quick Sort
5. To learn a sorting algorithm based on divide and conquer: Heap Sort
6. To learn a sorting algorithm based on divide and conquer: Shell Sort
7. To perform the greedy (Fractional) knapsack
8. To perform the Longest Common Subsequence
9. To perform 0-1 knapsack using Branch and Bound
10. To perform the N-Queen problem using Backtracking
LAB:1
About Raptor

RAPTOR is a flowchart-based programming environment, designed specifically to help


students visualize their algorithms and avoid syntactic baggage. RAPTOR programs are
created visually and executed visually by tracing the execution through the flowchart.
The required syntax is kept to a minimum. Students prefer using flowcharts to express
their algorithms and are more successful in creating algorithms using RAPTOR than
using a traditional language or writing flowcharts without RAPTOR.

A Multiplatform version of RAPTOR is now available for Windows, Mac and Linux. Key
differences:

• Only Intermediate mode (sub-charts and procedures, no OO)


• Will be able to load some files from Windows-only RAPTOR, but Windows-only RAPTOR
will not load files from RAPTOR Multiplatform.
• Documentation will be online instead of distributed with the app.

Papers on the RAPTOR application:

• The use of RAPTOR in a general education course


• Global Chinese Conference on Computers in Education (GCCCE)
• Midwest Instruction and Computing Symposium
• American Society for Engineering Education

a) Print the sum of 10 inputs.
b) Factorial of a number.
c) Convert Fahrenheit temperature to Celsius.
d) Guess the number and print the number of attempts.
LAB:2
To understand the Brute force approach and divide and conquer strategy as
designing approach of algorithms:
a) To implement linear search based on brute force approach.
b) To implement Binary Search based on Divide and conquer strategy.

About Brute force

A brute force approach examines all the possible solutions to a given problem. The brute
force algorithm tries out all the possibilities until a satisfactory solution is found.

Such an algorithm can be of two types:


o Optimizing: In this case, the best solution is found. The algorithm may either enumerate
all possible solutions to pick the best one or, if the value of the best solution is known,
stop as soon as that value is found. For example: finding the best path for the travelling
salesman problem, where the best path visits all the cities at minimum total cost.
o Satisficing: It stops as soon as a satisfactory solution is found. For example, finding a
travelling salesman path that is within 10% of optimal.
o Brute force algorithms often require exponential time. Various heuristics and
optimizations can be used:
o Heuristic: a rule of thumb that helps decide which possibilities to look at first.
o Optimization: certain possibilities are eliminated without exploring all of them.

Let's understand the brute force search through an example.

Suppose we have converted the problem into the form of the tree shown
below:
Brute force search considers each and every state of the tree, where each state is represented
as a node. From the starting position we have two choices: state A and state B. In the case of
state B, we again have two states, E and F. Brute force search considers each state one by
one; in the example tree it takes 12 steps to find the solution. Backtracking, on the other
hand, which uses depth-first search, considers the states below a node only when that node
can lead to a feasible solution. Starting from the root node, we move to node A and then to
node C. If node C does not provide a feasible solution, there is no point in considering states
G and H, so we backtrack from node C to node A. We then move from node A to node D;
since node D does not provide a feasible solution either, we discard this state and backtrack
from node D to node A. We move to node B, then from node B to node E, and from node E
to node K. Since K is a solution, backtracking takes only 10 steps. In this way it eliminates a
greater number of states in a single iteration, so backtracking is faster and more efficient
than the brute-force approach.

About Divide and Conquer

The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its
basic idea is to decompose a given problem into two or more similar, but simpler,
subproblems, to solve them in turn, and to compose their solutions to solve the given
problem. Problems of sufficient simplicity are solved directly. For example, to sort a given list
of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in
turn, and interleave both results appropriately to obtain the sorted version of the given list
(see the picture). This approach is known as the merge sort algorithm.

The name "divide and conquer" is sometimes applied to algorithms that reduce each
problem to only one sub-problem, such as the binary search algorithm for finding a record in
a sorted list (or its analog in numerical computing, the bisection algorithm for root
finding).[2] These algorithms can be implemented more efficiently than general divide-and-
conquer algorithms; in particular, if they use tail recursion, they can be converted into
simple loops. Under this broad definition, however, every algorithm that uses recursion or
loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors
consider that the name "divide and conquer" should be used only when each problem may
generate two or more subproblems.[3] The name decrease and conquer has been proposed
instead for the single-subproblem class.[4]
An important application of divide and conquer is in optimization, where if
the search space is reduced ("pruned") by a constant factor at each step, the overall
algorithm has the same asymptotic complexity as the pruning step, with the constant
depending on the pruning factor (by summing the geometric series); this is known as prune
and search.

About Linear Search

Linear search is the simplest method for searching.


• In the linear search technique, the element to be found is searched for sequentially in
the list.
• This method can be performed on a sorted or an unsorted list (usually arrays).
• In a sorted list, searching starts from the 0th element and continues until the element is
found, or until an element whose value is greater than the value being searched is
reached (assuming the list is sorted in ascending order).
• In an unsorted list, searching also begins from the 0th element and continues until the
element, or the end of the list, is reached.
• The linear search algorithm examines all elements in the array sequentially.
• Its best-case execution time is 1 comparison, whereas the worst case is n comparisons,
where n is the total number of items in the search array.
• It is the simplest search algorithm in data structures: it checks each item in the
collection until the search element is matched or the end of the collection is reached.
• When data is unsorted, a linear search algorithm is preferred.

Linear search is defined as a sequential search algorithm that starts at one end and goes
through each element of a list until the desired element is found; otherwise, the search
continues till the end of the data set. It is the simplest search algorithm.

Linear Search Algorithm


Step 1: Read the search (target) element and the array.
Step 2: Compare the search element with the first element in the array.
Step 3: If both match, display "Target element is found" and terminate the linear search
function.
Step 4: If they do not match, compare the search element with the next element in the
array.
Step 5: Repeat steps 3 and 4 until the search (target) element has been compared with the
last element of the array.
Step 6: If the last element also does not match, terminate the function and display
"Element is not found".

Linear Search Code with output


#include <iostream>
using namespace std;

// Linear search: scan each element in turn; return its index, or -1 if absent.
int search(int arr[], int N, int x) {
    for (int i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

int main() {
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int N = sizeof(arr) / sizeof(arr[0]);
    int result = search(arr, N, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}

Output: Element is present at index 3

Linear Search Time Complexity


The time complexity of linear search is O(n), as each element in the array is compared at
most once. The best-case complexity is O(1), when the element is found at the first index.
The worst-case complexity is O(n), when the element is found at the last index or is not
present in the array.

About Binary Search


Binary search is the search technique that works efficiently on sorted lists. Hence, to search
an element into some list using the binary search technique, we must ensure that the list is
sorted.
Binary search follows the divide and conquer approach in which the list is divided into two
halves, and the item is compared with the middle element of the list. If the match is found
then, the location of the middle element is returned. Otherwise, we search into either of the
halves depending upon the result produced through the match.

Binary Search Algorithm: The basic steps to perform Binary Search are:
1. Sort the array in ascending order.
2. Set the low index to the first element of the array and the high index to the last
element.
3. Set the middle index to the average of the low and high indices.
4. If the element at the middle index is the target element, return the middle index.
5. If the target element is less than the element at the middle index, set the high index
to the middle index - 1.
6. If the target element is greater than the element at the middle index, set the low
index to the middle index + 1.
7. Repeat steps 3-6 until the element is found or it is clear that the element is not
present in the array.

Binary Search Code with output


#include <iostream>
using namespace std;

// Recursive binary search on the sorted range arr[l..r]; returns the index of x or -1.
int binarySearch(int arr[], int l, int r, int x) {
    if (r >= l) {
        int mid = l + (r - l) / 2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);
        return binarySearch(arr, mid + 1, r, x);
    }
    return -1;
}

int main() {
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}

Output: Element is present at index 3

Binary Search Time Complexity


In binary search, the best-case complexity is O(1), when the element is found at the middle
index. The worst-case complexity is O(log₂ n).
LAB:3
To learn a sorting algorithm based on divide and conquer: Merge sort

MERGE SORT
About Merge Sort
Merge sort is a sorting algorithm that works by dividing an array into smaller subarrays,
sorting each subarray, and then merging the sorted subarrays back together to form the final
sorted array. In simple terms, merge sort divides the array into two halves, sorts each half,
and then merges the sorted halves back together; this process is repeated until the entire
array is sorted. Why do we need yet another sorting algorithm? One of the main advantages
of merge sort is its O(n log n) time complexity, which means it can sort large arrays relatively
quickly. It is also a stable sort: the order of elements with equal values is preserved during
the sort. Merge sort is a popular choice for sorting large datasets because it is relatively
efficient and easy to implement, and it is often used in conjunction with other algorithms,
such as quicksort, to improve the overall performance of a sorting routine.

Merge Sort Algorithm:


step 1: start
step 2: declare the array and the left, right and mid variables
step 3: perform the merge sort function:
    if left >= right
        return
    mid = (left + right) / 2
    mergesort(array, left, mid)
    mergesort(array, mid + 1, right)
    merge(array, left, mid, right)
step 4: stop

Merge Sort Code with output


// C++ program for Merge Sort
#include <iostream>
using namespace std;

// Merge the two sorted subarrays arr[left..mid] and arr[mid+1..right].
void merge(int array[], int const left, int const mid, int const right) {
    int const subArrayOne = mid - left + 1;
    int const subArrayTwo = right - mid;
    int *leftArray = new int[subArrayOne],
        *rightArray = new int[subArrayTwo];
    for (int i = 0; i < subArrayOne; i++)
        leftArray[i] = array[left + i];
    for (int j = 0; j < subArrayTwo; j++)
        rightArray[j] = array[mid + 1 + j];
    int indexOfSubArrayOne = 0, indexOfSubArrayTwo = 0;
    int indexOfMergedArray = left;
    while (indexOfSubArrayOne < subArrayOne && indexOfSubArrayTwo < subArrayTwo) {
        if (leftArray[indexOfSubArrayOne] <= rightArray[indexOfSubArrayTwo]) {
            array[indexOfMergedArray] = leftArray[indexOfSubArrayOne];
            indexOfSubArrayOne++;
        } else {
            array[indexOfMergedArray] = rightArray[indexOfSubArrayTwo];
            indexOfSubArrayTwo++;
        }
        indexOfMergedArray++;
    }
    while (indexOfSubArrayOne < subArrayOne) {
        array[indexOfMergedArray] = leftArray[indexOfSubArrayOne];
        indexOfSubArrayOne++;
        indexOfMergedArray++;
    }
    while (indexOfSubArrayTwo < subArrayTwo) {
        array[indexOfMergedArray] = rightArray[indexOfSubArrayTwo];
        indexOfSubArrayTwo++;
        indexOfMergedArray++;
    }
    delete[] leftArray;
    delete[] rightArray;
}

void mergeSort(int array[], int const begin, int const end) {
    if (begin >= end)
        return;
    int mid = begin + (end - begin) / 2;
    mergeSort(array, begin, mid);
    mergeSort(array, mid + 1, end);
    merge(array, begin, mid, end);
}

void printArray(int A[], int size) {
    for (int i = 0; i < size; i++)
        cout << A[i] << " ";
}

int main() {
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int arr_size = sizeof(arr) / sizeof(arr[0]);
    cout << "Given array is \n";
    printArray(arr, arr_size);
    mergeSort(arr, 0, arr_size - 1);
    cout << "\nSorted array is \n";
    printArray(arr, arr_size);
    return 0;
}

Output:
Given array is
12 11 13 5 6 7
Sorted array is
5 6 7 11 12 13

Merge Sort Time Complexity: O(N log N).
Merge sort is a recursive algorithm, and its time complexity can be expressed by the
following recurrence relation:
T(n) = 2T(n/2) + θ(n)
The above recurrence can be solved using either the recursion tree method or the Master
method. It falls into case II of the Master method, and the solution of the recurrence is
θ(N log N). The time complexity of merge sort is θ(N log N) in all 3 cases (worst, average,
and best), as merge sort always divides the array into two halves and takes linear time to
merge the two halves.
Auxiliary Space: O(N). In merge sort, all elements are copied into auxiliary arrays, so O(N)
extra space is required.
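The recurrence can also be unrolled directly (a sketch assuming n is a power of 2):

```latex
T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = \dots
     = 2^{k} T(n/2^{k}) + k\,cn
\quad\text{with } k = \log_2 n:\qquad
T(n) = n\,T(1) + cn\log_2 n \in \Theta(n \log n)
```

Each of the log₂ n levels of the recursion contributes a total of cn merging work, which is where the n log n bound comes from.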
LAB:4

To learn a sorting algorithm based on divide and conquer: Quick sort

QUICK SORT
About Quick Sort: Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks
an element as a pivot and partitions the given array around the picked pivot. There are many
different versions of quickSort that pick pivot in different ways.
• Always pick the first element as a pivot.
• Always pick the last element as a pivot (implemented below)
• Pick a random element as a pivot.
• Pick median as the pivot.
The key process in quickSort is partition(). Given an array and an element x of the array as
the pivot, partition() puts x at its correct position in the sorted array, with all smaller
elements (smaller than x) before x and all greater elements (greater than x) after x. All this
is done in linear time.

Quick Sort Algorithm:


/* low --> starting index, high --> ending index */
quickSort(arr[], low, high) {
    if (low < high) {
        /* pi is the partitioning index; arr[pi] is now at the right place */
        pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);  // before pi
        quickSort(arr, pi + 1, high); // after pi
    }
}

partition(arr[], low, high)
{
    // pivot (element to be placed at its right position)
    pivot = arr[high];
    i = (low - 1)  // index of smaller element; indicates the
                   // right position of the pivot found so far
    for (j = low; j <= high - 1; j++) {
        // if the current element is smaller than the pivot
        if (arr[j] < pivot) {
            i++;  // increment index of smaller element
            swap arr[i] and arr[j]
        }
    }
    swap arr[i + 1] and arr[high]
    return (i + 1)
}

Quick Sort Code with output


#include <bits/stdc++.h>
using namespace std;

// Partition around the last element; return the pivot's final index.
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j <= high - 1; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return (i + 1);
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++)
        cout << arr[i] << " ";
    cout << endl;
}

int main() {
    int arr[] = { 10, 7, 8, 9, 1, 5 };
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    cout << "Sorted array: \n";
    printArray(arr, n);
    return 0;
}

Output:
Sorted array:
1 5 7 8 9 10
Quick Sort Time Complexity:

Time taken by QuickSort, in general, can be written as follows.


T(n) = T(k) + T(n-k-1) + θ(n)
The first two terms are for the two recursive calls, and the last term is for the partition
process; k is the number of elements smaller than the pivot.
The time taken by quicksort depends upon the input array and the partition strategy. The
following are three cases.
Worst Case:
The worst case occurs when the partition process always picks the greatest or smallest
element as the pivot. If we consider the above partition strategy, where the last element is
always picked as the pivot, the worst case occurs when the array is already sorted in
increasing or decreasing order. The recurrence for the worst case is
T(n) = T(0) + T(n-1) + θ(n), which is equivalent to T(n) = T(n-1) + θ(n)
The solution to the above recurrence is θ(n²).
Best Case:
The best case occurs when the partition process always picks the middle element as the
pivot. The recurrence for the best case is
T(n) = 2T(n/2) + θ(n)
The solution to the above recurrence is θ(n log n). It can be obtained using case 2 of the
Master theorem.
Average Case:
A full average-case analysis would need to consider every possible permutation of the array
and the time taken for each, which is not easy. We can get an idea of the average case by
considering the case when partition puts n/10 elements in one set and 9n/10 elements in
the other. The recurrence for this case is

T(n) = T(n/10) + T(9n/10) + θ(n)

The solution of this recurrence is also O(n log n).
Although the worst-case time complexity of quicksort is O(n²), which is worse than many
other sorting algorithms such as merge sort and heap sort, quicksort is faster in practice
because its inner loop can be implemented efficiently on most architectures and for most
real-world data. Quicksort can be implemented with different choices of pivot so that the
worst case rarely occurs for a given type of data. However, merge sort is generally
considered better when the data is huge and stored in external storage.
LAB-5

To learn a sorting algorithm based on divide and conquer: Heap sort

HEAP SORT
About Heap Sort
Heap sort is a comparison-based sorting technique based on the binary heap data structure.
It is similar to selection sort: we repeatedly find the maximum element and place it at the
end of the unsorted region, then repeat the same process for the remaining elements.

Heap Sort Algorithm:


First convert the array into a max-heap using heapify. Then repeatedly delete the root node
of the max-heap, replace it with the last node in the heap, and heapify the root again.
Repeat this process while the size of the heap is greater than 1.
Build a heap from the given input array.
Repeat the following steps until the heap contains only one element:
a. Swap the root element of the heap (which is the largest element) with the last element of
the heap.
b. Remove the last element from the heap (it is now in its correct position).
c. Heapify the remaining elements of the heap.
When the heap is exhausted, the array is sorted in ascending order.

Heap Sort Code with output


// C++ program for implementation of Heap Sort
#include <iostream>
using namespace std;

// Sift the element at index i down so the subtree rooted at i is a max-heap.
void heapify(int arr[], int N, int i) {
    int largest = i;
    int l = 2 * i + 1;
    int r = 2 * i + 2;
    if (l < N && arr[l] > arr[largest])
        largest = l;
    if (r < N && arr[r] > arr[largest])
        largest = r;
    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, N, largest);
    }
}

void heapSort(int arr[], int N) {
    for (int i = N / 2 - 1; i >= 0; i--)  // build the max-heap
        heapify(arr, N, i);
    for (int i = N - 1; i > 0; i--) {     // extract the max one by one
        swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

void printArray(int arr[], int N) {
    for (int i = 0; i < N; ++i)
        cout << arr[i] << " ";
    cout << "\n";
}

int main() {
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int N = sizeof(arr) / sizeof(arr[0]);
    heapSort(arr, N);
    cout << "Sorted array is \n";
    printArray(arr, N);
    return 0;
}

Output:
Sorted array is
5 6 7 11 12 13

Heap Sort Time Complexity:


Time Complexity: O(N log N)
Auxiliary Space: O(1).
LAB-6

To learn sorting algorithm based on divide and conquer: Shell sort

SHELL SORT
About Shell Sort: Shell sort is mainly a variation of insertion sort. In insertion sort, we
move elements only one position ahead, so when an element has to be moved far ahead,
many movements are involved. The idea of shell sort is to allow the exchange of far-apart
items. In shell sort, we make the array h-sorted for a large value of h, and we keep reducing
the value of h until it becomes 1. An array is said to be h-sorted if all sublists of every h'th
element are sorted.

Shell Sort Algorithm:


Step 1 − Start
Step 2 − Initialize the gap size h.
Step 3 − Divide the list into sublists of elements spaced h apart.
Step 4 − Sort these sublists using insertion sort.
Step 5 − Reduce h and repeat steps 3-4 until the list is sorted.
Step 6 − Print the sorted list.
Step 7 − Stop.

Shell Sort Code with output

// C++ implementation of Shell Sort


#include <iostream>
using namespace std;

int shellSort(int arr[], int n) {
    // Start with a big gap, then reduce the gap by half each pass.
    for (int gap = n / 2; gap > 0; gap /= 2) {
        // Do a gapped insertion sort for this gap size.
        for (int i = gap; i < n; i += 1) {
            int temp = arr[i];
            int j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];
            arr[j] = temp;
        }
    }
    return 0;
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

int main() {
    int arr[] = {12, 34, 54, 2, 3};
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Array before sorting: \n";
    printArray(arr, n);
    shellSort(arr, n);
    cout << "\nArray after sorting: \n";
    printArray(arr, n);
    return 0;
}

Output:
Array before sorting:
12 34 54 2 3
Array after sorting:
2 3 12 34 54

Shell Sort Time Complexity:

Time Complexity: The time complexity of the above implementation of shell sort is O(n²),
since the gap is reduced by half in every iteration. There are many other ways to choose the
gap sequence that lead to better time complexity.
Worst Case Complexity:
The worst-case complexity of shell sort with this gap sequence is O(n²).
Best Case Complexity:
When the given list is already sorted, the total number of comparisons for each interval is
roughly equal to the size of the list, so the best-case complexity is Ω(n log n).
Average Case Complexity:
The average-case complexity of shell sort depends on the interval (gap) sequence selected
by the programmer; for commonly used sequences, estimates range from θ(n (log n)²) to
about O(n^1.25).
Space Complexity:
The space complexity of shell sort is O(1), as it sorts in place.
LAB-7
PROBLEM: To perform the greedy (Fractional) knapsack

APPROACH

The basic idea of the greedy approach is to calculate the ratio value/weight for each item
and sort the items by this ratio in decreasing order. Then take items in that order, adding
each whole item while it fits, and finally add a fraction of the next item to fill the remaining
capacity. This greedy strategy always yields an optimal solution for the fractional knapsack
problem.

ALGORITHM

Input: Knapsack capacity M. n is the number of available items, with associated profits
P[1:n] and weights W[1:n]. The items are ordered so that P[i]/W[i] >= P[i+1]/W[i+1] for
1 ≤ i < n.

Output: S[1:n] is the fixed-size solution vector. S[i] gives the fraction xi of item i placed
into the knapsack, with 0 ≤ xi ≤ 1 and 1 ≤ i ≤ n.

for (i = 1; i <= n; i++) {
    S[i] = 0.0;
}
balance = M;
for (i = 1; i <= n; i++) {
    if (W[i] > balance)
        break;
    S[i] = 1.0;
    balance = balance - W[i];
}
if (i <= n) {
    S[i] = balance / W[i];
}
return S;

CODE:

// C++ program to solve fractional Knapsack Problem


#include <bits/stdc++.h>
using namespace std;

struct Item {
    int value, weight;
    Item(int value, int weight) {
        this->value = value;
        this->weight = weight;
    }
};

// Order items by value/weight ratio, highest first.
static bool cmp(struct Item a, struct Item b) {
    double r1 = (double)a.value / (double)a.weight;
    double r2 = (double)b.value / (double)b.weight;
    return r1 > r2;
}

double fractionalKnapsack(int W, struct Item arr[], int N) {
    sort(arr, arr + N, cmp);
    double finalvalue = 0.0;
    for (int i = 0; i < N; i++) {
        if (arr[i].weight <= W) {  // take the whole item
            W -= arr[i].weight;
            finalvalue += arr[i].value;
        } else {                   // take a fraction of the item and stop
            finalvalue += arr[i].value * ((double)W / (double)arr[i].weight);
            break;
        }
    }
    return finalvalue;
}

int main() {
    int W = 50;
    Item arr[] = { { 60, 10 }, { 100, 20 }, { 120, 30 } };
    int N = sizeof(arr) / sizeof(arr[0]);
    cout << fractionalKnapsack(W, arr, N);
    return 0;
}

Output
240

Time Complexity: O(N * log N)

Auxiliary Space: O(N)


LAB-8
PROBLEM: To perform Longest Common Subsequence

APPROACH & ALGORITHM

Given two strings, S1 and S2, the task is to find the length of the longest subsequence
present in both of the strings.
Note: A subsequence of a string is a sequence that is generated by deleting some characters
(possibly 0) from the string without altering the order of the remaining characters. For
example, "abc", "abg", "bdf", "aeg", "acefg", etc. are subsequences of the string "abcdefg".
Examples:
Input: S1 = “AGGTAB”, S2 = “GXTXAYB”
Output: 4
Explanation: The longest subsequence which is present in both strings is “GTAB”.

Naive Approach for LCS:


The problem can be solved using recursion based on the following idea:

• Create a recursive function [say lcs()].


• Check the relation between the last characters of the strings that are not yet
processed.
• Depending on the relation call the next recursive function as mentioned above.
• Return the length of the LCS received as the answer.

Memoization Approach for LCS:


If we notice carefully, we can observe that the above recursive solution holds the following
two properties:
Optimal Substructure:
To solve the structure L(X[0, 1, ..., m-1], Y[0, 1, ..., n-1]), we take the help of the
substructures of X[0, 1, ..., m-2] and Y[0, 1, ..., n-2], depending on the situation (i.e.,
using them optimally), to find the solution of the whole.
Overlapping Subproblems:
If we use the above recursive approach for strings “AXYT” and “AYZX“, we will get a partial
recursion tree as shown below. Here we can see that the subproblems L(“AXY”, “AYZ”) is
being calculated more than once. If the total tree is considered there will be several such
overlapping subproblems.
L(“AXYT”, “AYZX”)
/ \
L(“AXY”, “AYZX”) L(“AXYT”, “AYZ”)
/ \ / \
L(“AX”, “AYZX”) L(“AXY”, “AYZ”) L(“AXY”, “AYZ”) L(“AXYT”, “AY”)
Approach: Because of the presence of these two properties we can use Dynamic
programming or Memoization to solve the problem.
Create a recursive function. Also create a 2D array to store the result of a unique state.
During the recursion call, if the same state is called more than once, then we can directly
return the answer stored for that state instead of calculating again.
Dynamic Programming for LCS:
We can use the following steps to implement the dynamic programming approach for LCS.

• Create a 2D array dp[][] with rows and columns equal to the length of each input
string plus 1 [the number of rows indicates the indices of S1 and the columns
indicate the indices of S2].
• Initialize the first row and column of the dp array to 0.
• Iterate through the rows of the dp array, starting from 1 (say using iterator i).
• For each i, iterate all the columns from j = 1 to n:
• If S1[i-1] is equal to S2[j-1], set the current element of the dp array to the value of
the element to (dp[i-1][j-1] + 1).
• Else, set the current element of the dp array to the maximum value of dp[i-
1][j] and dp[i][j-1].
• After the nested loops, the last element of the dp array will contain the length of the
LCS.

CODE:

Below is the implementation of the recursive approach:


#include <bits/stdc++.h>
using namespace std;

// Returns the length of the LCS of X[0..m-1] and Y[0..n-1] (plain recursion).
int lcs(string X, string Y, int m, int n) {
    if (m == 0 || n == 0)
        return 0;
    if (X[m - 1] == Y[n - 1])
        return 1 + lcs(X, Y, m - 1, n - 1);
    else
        return max(lcs(X, Y, m, n - 1), lcs(X, Y, m - 1, n));
}

int main() {
    string S1 = "AGGTAB";
    string S2 = "GXTXAYB";
    int m = S1.size();
    int n = S2.size();
    cout << "Length of LCS is " << lcs(S1, S2, m, n);
    return 0;
}
OUTPUT
Length of LCS is 4

Time Complexity: O(2^n)
Auxiliary Space: O(1) (ignoring the recursion stack)
CODE:

Following is the memoization implementation for the LCS problem.


#include <bits/stdc++.h>
using namespace std;

int lcs(char* X, char* Y, int m, int n, vector<vector<int> >& dp) {
    if (m == 0 || n == 0)
        return 0;
    if (dp[m][n] != -1)  // reuse a memoized result if one exists
        return dp[m][n];
    if (X[m - 1] == Y[n - 1])
        return dp[m][n] = 1 + lcs(X, Y, m - 1, n - 1, dp);
    return dp[m][n] = max(lcs(X, Y, m, n - 1, dp), lcs(X, Y, m - 1, n, dp));
}

int main() {
    char X[] = "AGGTAB";
    char Y[] = "GXTXAYB";
    int m = strlen(X);
    int n = strlen(Y);
    vector<vector<int> > dp(m + 1, vector<int>(n + 1, -1));
    cout << "Length of LCS is " << lcs(X, Y, m, n, dp);
    return 0;
}
OUTPUT
Length of LCS is 4

Time Complexity: O(m * n) where m and n are the string lengths.
Auxiliary Space: O(m * n); the recursion stack space is ignored here.
CODE:

Following is the tabulation (bottom-up) implementation for the LCS problem.

#include <bits/stdc++.h>
using namespace std;

int lcs(string X, string Y, int m, int n)
{
    // dp table; a vector avoids the non-standard variable-length array
    vector<vector<int> > L(m + 1, vector<int>(n + 1));
    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;
            else if (X[i - 1] == Y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;
            else
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);
        }
    }
    return L[m][n];
}

int main()
{
    string S1 = "AGGTAB";
    string S2 = "GXTXAYB";
    int m = S1.size();
    int n = S2.size();
    cout << "Length of LCS is " << lcs(S1, S2, m, n);
    return 0;
}

OUTPUT
Length of LCS is 4

Time Complexity: O(m * n), which is much better than the worst-case time
complexity of the naive recursive implementation.
Auxiliary Space: O(m * n), because the algorithm uses a table of size
(m+1)*(n+1) to store the lengths of common subsequences of prefixes.
LAB-9
PROBLEM: To perform 0-1 knapsack using Branch and Bound

APPROACH

Branch and bound is an algorithm design paradigm which is generally used
for solving combinatorial optimization problems. These problems are typically
exponential in terms of time complexity and may require exploring all
possible permutations in the worst case. Branch and bound solves these
problems relatively quickly.
Let us consider the 0/1 Knapsack problem below to understand Branch and
Bound. Given two integer arrays val[0..n-1] and wt[0..n-1] that represent the
values and weights associated with n items respectively,
find the maximum-value subset of val[] such that the sum of the weights of
this subset is smaller than or equal to the knapsack capacity W. Let us explore
all approaches for this problem.
1. A Greedy approach is to pick the items in decreasing order of value
per unit weight. The Greedy approach works only for the fractional
knapsack problem and may not produce the correct result for 0/1
knapsack.
2. We can use Dynamic Programming (DP) for the 0/1 Knapsack
problem. In DP, we use a 2D table of size n x W. The DP solution
doesn't work if item weights are not integers.
3. Since the DP solution doesn't always work, another option is Brute
Force. With n items, there are 2^n candidate subsets; we generate
each one, check whether it satisfies the weight constraint, and keep
the maximum-value subset that does. This solution space can be
represented as a tree.

4. We can use Backtracking to optimize the Brute Force solution. In
the tree representation, we do a DFS of the tree. If we reach a point
where a partial solution is no longer feasible, there is no need to continue
exploring that branch. In the given example, backtracking would be much more
effective if we had even more items or a smaller knapsack capacity.

Branch and Bound: The backtracking-based solution works better than brute
force by ignoring infeasible solutions. We can do better than backtracking
if we know a bound on the best possible solution in the subtree rooted at every
node. If the best in a subtree is worse than the current best, we can simply ignore
that node and its subtrees. So we compute a bound (best possible solution) for every
node and compare the bound with the current best solution before exploring
the node. Example bounds used in the original diagram (not reproduced here) are:
going down from A can give $315, from B $275, from C $225, from D $125 and
from E $30.

Branch and bound is a very useful technique for searching for a solution, but in the
worst case we still need to evaluate the entire tree. In the best case, we only need
to fully evaluate one path through the tree and can prune the rest of it.
ALGORITHM

function knapsack(items, max_weight):


best_value = 0
queue = [{items: [], value: 0, weight: 0}]
while queue is not empty:
node = queue.pop()
if node is a leaf node:
update best_value if necessary
else:
for each remaining item:
child = create child node for item
if child is promising:
queue.append(child)
return best_value
function is_promising(node, max_weight, best_value):
if node.weight > max_weight:
return False
if node.value + bound(node.items) < best_value:
return False
return True
function bound(items):
# Calculate an upper bound on the value of the remaining items
# using some heuristic (e.g., the fractional knapsack algorithm)

CODE:

#include <iostream>
#include <algorithm>
#include <vector>
#include <queue>
using namespace std;

class Item {
public:
    int value;
    int weight;
    double ratio;
    Item(int value, int weight) {
        this->value = value;
        this->weight = weight;
        this->ratio = (double)value / weight;
    }
};

class KnapsackNode {
public:
    vector<int> items; // 0/1 decision for each item considered so far
    int value;
    int weight;
    KnapsackNode(vector<int> items, int value, int weight) {
        this->items = items;
        this->value = value;
        this->weight = weight;
    }
};

class Knapsack {
public:
    int maxWeight;
    vector<Item> items;
    Knapsack(int maxWeight, vector<Item> items) {
        this->maxWeight = maxWeight;
        this->items = items;
    }
    int solve() {
        // Sort by value/weight ratio so the greedy bound is a valid upper bound
        sort(this->items.begin(), this->items.end(),
             [](const Item& a, const Item& b) { return a.ratio > b.ratio; });
        int bestValue = 0;
        queue<KnapsackNode> q;
        q.push(KnapsackNode({}, 0, 0));
        while (!q.empty()) {
            KnapsackNode node = q.front();
            q.pop();
            int i = node.items.size(); // index of the next item to decide
            if (i == (int)this->items.size()) {
                bestValue = max(bestValue, node.value);
            } else {
                Item item = this->items[i];
                // Branch 1: take item i (record the decision so the
                // child node moves on to item i+1)
                vector<int> withItems = node.items;
                withItems.push_back(1);
                KnapsackNode withItem(withItems, node.value + item.value,
                                      node.weight + item.weight);
                if (isPromising(withItem, this->maxWeight, bestValue)) {
                    q.push(withItem);
                }
                // Branch 2: skip item i
                vector<int> withoutItems = node.items;
                withoutItems.push_back(0);
                KnapsackNode withoutItem(withoutItems, node.value, node.weight);
                if (isPromising(withoutItem, this->maxWeight, bestValue)) {
                    q.push(withoutItem);
                }
            }
        }
        return bestValue;
    }
    bool isPromising(KnapsackNode node, int maxWeight, int bestValue) {
        return node.weight <= maxWeight
               && node.value + getBound(node) > bestValue;
    }
    // Fractional-knapsack upper bound on the value obtainable from the
    // items not yet decided (node.value is added by the caller)
    int getBound(KnapsackNode node) {
        int remainingWeight = this->maxWeight - node.weight;
        int bound = 0;
        for (int i = node.items.size(); i < (int)this->items.size(); i++) {
            Item item = this->items[i];
            if (remainingWeight >= item.weight) {
                bound += item.value;
                remainingWeight -= item.weight;
            } else {
                bound += remainingWeight * item.ratio;
                break;
            }
        }
        return bound;
    }
};

int main() {
    vector<Item> items = {
        Item(60, 10),
        Item(100, 20),
        Item(120, 30)
    };
    Knapsack knapsack(50, items);
    int result = knapsack.solve();
    cout << "Best value: " << result << endl;
    return 0;
}

OUTPUT
Best value: 220

Time Complexity: O(N) in the best case, when only one path through the
tree has to be traversed; the worst-case time complexity is still O(2^N).
LAB-10
PROBLEM: To perform N-Queen problem using Backtracking
APPROACH

The N Queen problem is the problem of placing N chess queens on an N×N chessboard so that no
two queens attack each other. The expected output is a matrix that has 'Q' in the blocks where
queens are placed and '.' in the empty spaces, as in the 4-queens solution printed by the
program below.

ALGORITHM
• Initialize an empty chessboard of size NxN.
• Start with the leftmost column and place a queen in the first row of that column.
• Move to the next column and place a queen in the first row of that column.
• Repeat step 3 until either all N queens have been placed or it is impossible to
place a queen in the current column without violating the rules of the problem.
• If all N queens have been placed, print the solution.
• If it is not possible to place a queen in the current column without violating the
rules of the problem, backtrack to the previous column.
• Remove the queen from the previous column and move it down one row.
• Repeat steps 4-7 until all possible configurations have been tried.

function solveNQueens(board, col, n):


if col >= n:
print board
return true
for row from 0 to n-1:
if isSafe(board, row, col, n):
board[row][col] = 1
if solveNQueens(board, col+1, n):
return true
board[row][col] = 0
return false

function isSafe(board, row, col, n):


for i from 0 to col-1:
if board[row][i] == 1:
return false
for i,j from row-1, col-1 to 0, 0 by -1:
if board[i][j] == 1:
return false
for i,j from row+1, col-1 to n-1, 0 by 1, -1:
if board[i][j] == 1:
return false
return true

board = empty NxN chessboard


solveNQueens(board, 0, N)

Naive Algorithm
Generate all possible configurations of queens on board and print a
configuration that satisfies the given constraints.
while there are untried configurations
{
generate the next configuration
if queens don't attack in this configuration then
{
print this configuration;
}
}

Backtracking Algorithm Method 1:


The idea is to place queens one by one in different columns, starting from
the leftmost column. When we place a queen in a column, we check for
clashes with already placed queens. In the current column, if we find a row
for which there is no clash, we mark this row and column as part of the
solution. If we do not find such a row due to clashes, then we backtrack and
return false.
Method 1:
1) Start in the leftmost column
2) If all queens are placed
return true
3) Try all rows in the current column.
Do following for every tried row.
a) If the queen can be placed safely in this row
then mark this [row, column] as part of the
solution and recursively check if placing
queen here leads to a solution.
b) If placing the queen in [row, column] leads to
a solution then return true.
c) If placing queen doesn't lead to a solution then
unmark this [row, column] (Backtrack) and go to
step (a) to try other rows.
4) If all rows have been tried and nothing worked,
return false to trigger backtracking.

CODE
#include <bits/stdc++.h>
#define N 4
using namespace std;

void printSolution(int board[N][N])
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            if (board[i][j])
                cout << "Q ";
            else
                cout << ". ";
        cout << "\n";
    }
}

/* Check whether a queen can be placed at board[row][col].
   Only the left side needs checking, since queens are placed
   column by column and later columns are still empty. */
bool isSafe(int board[N][N], int row, int col)
{
    int i, j;

    // Same row on the left side
    for (i = 0; i < col; i++)
        if (board[row][i])
            return false;

    // Upper-left diagonal
    for (i = row, j = col; i >= 0 && j >= 0; i--, j--)
        if (board[i][j])
            return false;

    // Lower-left diagonal
    for (i = row, j = col; j >= 0 && i < N; i++, j--)
        if (board[i][j])
            return false;

    return true;
}

bool solveNQUtil(int board[N][N], int col)
{
    if (col >= N)
        return true;
    for (int i = 0; i < N; i++) {
        if (isSafe(board, i, col)) {
            board[i][col] = 1;
            if (solveNQUtil(board, col + 1))
                return true;
            board[i][col] = 0; // backtrack
        }
    }
    return false;
}

bool solveNQ()
{
    int board[N][N] = { { 0, 0, 0, 0 },
                        { 0, 0, 0, 0 },
                        { 0, 0, 0, 0 },
                        { 0, 0, 0, 0 } };
    if (solveNQUtil(board, 0) == false) {
        cout << "Solution does not exist";
        return false;
    }
    printSolution(board);
    return true;
}

int main()
{
    solveNQ();
    return 0;
}

OUTPUT
. . Q .
Q . . .
. . . Q
. Q . .
