
Arya Institute Of Engineering and Technology,

Jaipur

Submitted by: Pratibha Sharma


Subject: Analysis of Algorithms
Dept.- Computer Science & Engineering
AIET, Jaipur

Greedy Method
 Def: A greedy algorithm is an algorithmic
paradigm that follows the problem solving
heuristic of making the locally optimal choice at
each stage with the hope of finding a global
optimum.
 "Greedy Method finds out of many options,
but you have to choose the best option."
 In this method, we have to find out the best
method/option out of many present ways.
 In this approach/method we focus on the
first stage and decide the output, don't think
about the future.
 This method may or may not give the best output.
 A Greedy Algorithm solves problems by making the choice that seems best at the particular moment.
 Many optimization problems can be solved using a greedy algorithm.
 A greedy algorithm may provide a solution that is
close to optimal.
 A greedy algorithm works if a problem exhibits the
following two properties:
 Greedy Choice Property: A globally optimal solution can be arrived at by making a locally optimal choice at each step. In other words, an optimal solution can be obtained by making "greedy" choices.
 Optimal substructure: Optimal solutions contain optimal sub solutions.
In other words, answers to sub problems of an optimal solution are
optimal.
 Example:
 machine scheduling
 Fractional Knapsack Problem
 Minimum Spanning Tree
 Huffman Code
 Job Sequencing
 Activity Selection Problem
 Steps for achieving a Greedy Algorithm
are:
 Feasible: Here we check whether a choice satisfies all the constraints, so that we obtain at least one solution to our problem.
 Local Optimal Choice: Here the choice made should be the optimal one among the currently available options.
 Unalterable: Once the decision is made, the option is not altered at any subsequent step.
Components of Greedy Algorithm

 Greedy algorithms have the following five components



 A candidate set − A solution is created from this set.
 A selection function − Used to choose the best
candidate to be added to the solution.
 A feasibility function − Used to determine whether a
candidate can be used to contribute to the solution.
 An objective function − Used to assign a value to a
solution or a partial solution.
 A solution function − Used to indicate whether a
complete solution has been reached.

Fractional Knapsack Problem

 A list of items is given; each item has its own value and weight. Items can be placed in a knapsack whose maximum weight limit is W. The problem is to select items whose total weight is less than or equal to W while the total value is maximized.

 There are two types of Knapsack problem.
 0 – 1 Knapsack
 Fractional Knapsack

 For the 0 – 1 Knapsack, items cannot be divided into smaller pieces, and for
 Fractional Knapsack, items can be broken into smaller pieces.
 Algorithm
 fractionalKnapsack(W, itemList, n)
 Input − maximum weight W of the knapsack, the list of items, and the number of items
 Output − the maximum value obtained.

Begin
   sort the item list in decreasing order of the ratio of value to weight
   currentWeight := 0
   knapsackVal := 0
   for all items i in the list do
      if currentWeight + weight of item[i] ≤ W then
         currentWeight := currentWeight + weight of item[i]
         knapsackVal := knapsackVal + value of item[i]
      else
         remaining := W − currentWeight
         knapsackVal := knapsackVal + value of item[i] * (remaining / weight of item[i])
         break the loop
End
 Example: Consider 5 items along with their respective weights and values:
 I = (I1, I2, I3, I4, I5)
 w = (5, 10, 20, 30, 40)
 v = (30, 20, 100, 90, 160)
 The capacity of the knapsack is W = 60.
 Now fill the knapsack in decreasing order of the ratio vi / wi.
 First, we choose item I1, whose weight is 5.
 Then choose item I3, whose weight is 20.
 Now the total weight in the knapsack is 5 + 20 = 25.
 The next item is I5, and its weight is 40, but only 60 − 25 = 35 units of capacity remain, so we choose the fraction 35/40 of it.
 Maximum value = 30 + 100 + (35/40) × 160 = 270.
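 The following is a minimal C sketch of the greedy procedure above, assuming items are passed as parallel weight/value arrays (the function name fractionalKnapsack mirrors the pseudocode, and a repeated linear scan stands in for sorting by ratio). On the example data it prints 270.00.

#include <stdio.h>

/* Greedy fractional knapsack over parallel weight/value arrays.
   The best remaining value/weight ratio is found by a linear scan
   each round (O(n^2) overall; sorting first would give O(n log n)).
   Assumes n <= 20 because of the used[] marker array. */
double fractionalKnapsack(double W, double wt[], double val[], int n)
{
    int used[20] = {0};
    double totalValue = 0.0;

    while (W > 0) {
        int best = -1;
        for (int i = 0; i < n; ++i)      /* highest ratio not yet taken */
            if (!used[i] && (best < 0 || val[i] / wt[i] > val[best] / wt[best]))
                best = i;
        if (best < 0) break;             /* no items left */
        used[best] = 1;
        if (wt[best] <= W) {             /* the whole item fits */
            totalValue += val[best];
            W -= wt[best];
        } else {                         /* take only the fitting fraction */
            totalValue += val[best] * (W / wt[best]);
            W = 0;
        }
    }
    return totalValue;
}

int main(void)
{
    double w[] = {5, 10, 20, 30, 40};    /* the example items */
    double v[] = {30, 20, 100, 90, 160};
    printf("%.2f\n", fractionalKnapsack(60, w, v, 5));  /* prints 270.00 */
    return 0;
}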
Job Sequencing Problem

 The sequencing of jobs on a single processor with deadline constraints is called Job Sequencing with Deadlines.

 Here-
 You are given a set of jobs.
 Each job has a defined deadline and some profit associated with it.
 The profit of a job is given only when that job is completed within
its deadline.
 Only one processor is available for processing all the jobs.
 The processor takes one unit of time to complete a job.

 The problem states-


 “How can the total profit be maximized if only one job can be
completed at a time?”
Greedy Algorithm-

 Greedy Algorithm is adopted to determine how the next job is selected for an optimal solution.
 The greedy algorithm described below always gives an optimal solution to the job sequencing
problem-

 Step-01:

 Sort all the given jobs in decreasing order of their profit.

 Step-02:

 Check the value of the maximum deadline.
 Draw a Gantt chart whose maximum time is the value of the maximum deadline.

 Step-03:

 Pick up the jobs one by one.
 Put each job on the Gantt chart as far from time 0 as possible while ensuring that it gets completed before its deadline.

 Step-01:
 Sort all the given jobs in decreasing order
of their profit-
 Part-01:

 The optimal schedule is-
 J2 , J4 , J3 , J5 , J1
 This is the required order in which the jobs must be completed in
order to obtain the maximum profit.

 Part-02:

 Not all the jobs are completed in the optimal schedule.
 This is because job J6 could not be completed within its deadline.

 Part-03:

 Maximum earned profit
 = Sum of profit of all the jobs in optimal schedule
 = Profit of job J2 + Profit of job J4 + Profit of job J3 + Profit of job
J5 + Profit of job J1
 = 180 + 300 + 190 + 120 + 200

Optimal Merge Pattern
 Merge a set of sorted files of different length into
a single sorted file. We need to find an optimal
solution, where the resultant file will be generated
in minimum time.

 When two or more sorted files are to be merged all together to form a single file, the minimum computations done to reach this file are known as the Optimal Merge Pattern.

 Given n sorted files, the task is to find the minimum computations done to reach the Optimal Merge Pattern.

 Given a number of sorted files, there are many ways to merge them into a single sorted file. This merge can be performed pair-wise. Hence, this type of merging is called the 2-way merge pattern.

 As different pairings require different amounts of time, in this strategy we want to determine an optimal way of merging many files together. At each step, the two shortest sequences are merged.

 Merging a p-record file and a q-record file requires p + q record moves, the obvious choice being to merge the two smallest files together at each step.

 Two-way merge patterns can be represented by binary merge trees. Let us consider a set of n sorted files {f1, f2, f3, …, fn}. Initially, each element of this set is considered as a single-node binary tree. To find the optimal solution, the following algorithm is used.
 Algorithm:
TREE (n)
for i := 1 to n − 1 do
   declare new node
   node.leftchild := least (list)
   node.rightchild := least (list)
   node.weight := (node.leftchild).weight + (node.rightchild).weight
   insert (list, node)
return least (list)
 At the end of this algorithm, the weight of
the root node represents the optimal cost.
 Example
 Let us consider the given files, f1, f2, f3, f4 and
f5 with 20, 30, 10, 5 and 30 number of
elements respectively.
 If merge operations are performed
according to the provided sequence, then
 M1 = merge f1 and f2 => 20 + 30 = 50
 M2 = merge M1 and f3 => 50 + 10 = 60
 M3 = merge M2 and f4 => 60 + 5 = 65
 M4 = merge M3 and f5 => 65 + 30 = 95
 Hence, the total number of operations is
 50 + 60 + 65 + 95 = 270
 Now, the question arises: is there any better solution?
 Yes. Following the algorithm, at each step we merge the two smallest files currently in the list, re-inserting the merged result:
 M1 = merge f4 and f3 => 5 + 10 = 15
 M2 = merge M1 and f1 => 15 + 20 = 35
 M3 = merge f2 and f5 => 30 + 30 = 60
 M4 = merge M2 and M3 => 35 + 60 = 95
 Therefore, the total number of operations is
 15 + 35 + 60 + 95 = 205
Spanning Tree
 A spanning tree is a subgraph of Graph G that covers all the vertices with the minimum possible number of edges.

 Hence, a spanning tree does not have cycles and it cannot be disconnected.

 By this definition, we can draw a conclusion that every connected and undirected Graph G has at least one spanning tree.

 A disconnected graph does not have any spanning tree, as it cannot be spanned to all its vertices.
We found three spanning trees of one complete graph. A complete undirected graph can have a maximum of n^(n−2) spanning trees, where n is the number of nodes. In the above example, n is 3, hence 3^(3−2) = 3 spanning trees are possible.
General Properties of Spanning
Tree
 We now understand that one graph can have more than one spanning tree. Following are a few properties of a spanning tree of the connected graph G −
 A connected graph G can have more than one spanning tree.

 All possible spanning trees of graph G have the same number of edges and vertices.

 The spanning tree does not have any cycle (loops).

 Removing one edge from the spanning tree will make the
graph disconnected, i.e. the spanning tree is minimally
connected.

 Adding one edge to the spanning tree will create a circuit or loop, i.e. the spanning tree is maximally acyclic.
 Mathematical Properties of Spanning Tree:

 A spanning tree has n − 1 edges, where n is the number of nodes (vertices).

 From a complete graph, by removing a maximum of e − n + 1 edges (where e is the number of edges), we can construct a spanning tree.

 A complete graph can have a maximum of n^(n−2) spanning trees.

 Thus, we can conclude that every spanning tree is a subgraph of a connected Graph G, and that disconnected graphs do not have spanning trees.
Application of Spanning Tree

 A spanning tree is basically used to find a minimum set of connections linking all nodes in a graph. Common applications of spanning trees are −

 Civil Network Planning


 Computer Network Routing Protocol
 Cluster Analysis

 Let us understand this through a small example. Consider a city network as a huge graph, and suppose we plan to deploy telephone lines in such a way that with the minimum number of lines we can connect all the city nodes. This is where the spanning tree comes into the picture.
Minimum Spanning Tree (MST)

 In a weighted graph, a minimum spanning tree is a spanning tree whose weight is less than or equal to the weight of every other spanning tree of the same graph.

 In real-world situations, this weight can be measured as distance, congestion, traffic load, or any arbitrary value assigned to the edges.

 Minimum Spanning-Tree Algorithms

 We shall learn about the two most important spanning tree algorithms here −
 Kruskal's Algorithm
 Prim's Algorithm
Prim’s Algorithm Implementation-
 The implementation of Prim’s Algorithm is explained in the following steps-

 Step-01:

 Randomly choose any vertex as the starting vertex.
 The vertex connected to the edge having the least weight is usually selected.

 Step-02:

 Find all the edges that connect the tree to new vertices.
 Find the least weight edge among those edges and include it in the existing
tree.
 If including that edge creates a cycle, then reject that edge and look for the
next least weight edge.

 Step-03:

 Keep repeating step-02 until all the vertices are included and Minimum
Spanning Tree (MST) is obtained.
 Prim’s Algorithm Time Complexity-

 Worst case time complexity of Prim's Algorithm is-
 O(E log V) using a binary heap
 O(E + V log V) using a Fibonacci heap

 If an adjacency list is used to represent the graph, then using breadth first search, all the vertices can be traversed in O(V + E) time.
 We traverse all the vertices of the graph using breadth first search and use a min heap for storing the vertices not yet included in the MST.
 To get the minimum weight edge, we use the min heap as a priority queue.
 Min heap operations like extracting the minimum element and decreasing a key value take O(log V) time.

 So, overall time complexity
 = O(E + V) × O(log V)
 = O((E + V) log V)
 = O(E log V)

 This time complexity can be improved and reduced to O(E + V log V) using a Fibonacci heap.
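 The following is a simple C sketch of Prim's Algorithm, assuming an adjacency-matrix graph. For brevity it uses an O(V²) array scan rather than the binary heap discussed above; the sample graph in main() is illustrative, not taken from the slides.

#include <stdio.h>
#include <limits.h>

#define V 5  /* number of vertices in the sample graph */

/* O(V^2) Prim: key[i] holds the cheapest edge weight connecting
   vertex i to the growing tree. Returns the total MST weight. */
int primMST(int g[V][V])
{
    int key[V], inMST[V], total = 0;
    for (int i = 0; i < V; ++i) { key[i] = INT_MAX; inMST[i] = 0; }
    key[0] = 0;                        /* start from an arbitrary vertex */

    for (int count = 0; count < V; ++count) {
        int u = -1;
        for (int i = 0; i < V; ++i)    /* cheapest vertex not yet in the tree */
            if (!inMST[i] && (u == -1 || key[i] < key[u]))
                u = i;
        inMST[u] = 1;
        total += key[u];
        for (int w = 0; w < V; ++w)    /* relax edges leaving u (0 = no edge) */
            if (g[u][w] && !inMST[w] && g[u][w] < key[w])
                key[w] = g[u][w];
    }
    return total;
}

int main(void)
{
    /* illustrative weighted undirected graph */
    int g[V][V] = {{0, 2, 0, 6, 0},
                   {2, 0, 3, 8, 5},
                   {0, 3, 0, 0, 7},
                   {6, 8, 0, 0, 9},
                   {0, 5, 7, 9, 0}};
    printf("MST weight = %d\n", primMST(g));  /* prints 16 */
    return 0;
}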
Kruskal’s Algorithm Implementation-

 Step-01:
 Sort all the edges from low weight to high weight.

 Step-02:
 Take the edge with the lowest weight and use it to connect the vertices of the graph.
 If adding an edge creates a cycle, then reject that edge and go for the next least weight edge.

 Step-03:
 Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.
Kruskal’s Algorithm Time Complexity-

Worst case time complexity of Kruskal's Algorithm
= O(E log V) or O(E log E)

Analysis-

• The edges are maintained as a min heap.
• The next edge can be obtained in O(log E) time if the graph has E edges.
• Reconstruction of the heap takes O(E) time.
• So, Kruskal's Algorithm takes O(E log E) time.
• The value of E can be at most O(V²).
• So, O(log V) and O(log E) are the same.

Special Case-

• If the edges are already sorted, then there is no need to construct a min heap.
• So, the time for deletions from the min heap is saved.
• In this case, the time complexity of Kruskal's Algorithm = O(E + V)
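 The following is a minimal C sketch of Kruskal's Algorithm, using qsort for Step-01 and a union-find structure (with path compression) for the cycle check in Step-02; the edge list in main() is the same illustrative graph as in the Prim sketch.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

static int parent[16];                 /* union-find forest; capacity assumed */

static int find(int x)                 /* root lookup with path compression */
{
    return parent[x] == x ? x : (parent[x] = find(parent[x]));
}

static int cmpEdge(const void *a, const void *b)   /* ascending by weight */
{
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

/* Kruskal: sort edges by weight, accept each edge whose endpoints lie in
   different components (otherwise it would create a cycle). */
int kruskalMST(Edge e[], int nEdges, int nVerts)
{
    for (int i = 0; i < nVerts; ++i) parent[i] = i;
    qsort(e, nEdges, sizeof(Edge), cmpEdge);       /* Step-01 */

    int total = 0, taken = 0;
    for (int i = 0; i < nEdges && taken < nVerts - 1; ++i) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                            /* no cycle: accept edge */
            parent[ru] = rv;
            total += e[i].w;
            ++taken;
        }
    }
    return total;
}

int main(void)
{
    /* illustrative graph (same as the Prim sketch) */
    Edge e[] = {{0,1,2},{1,2,3},{0,3,6},{1,3,8},{1,4,5},{2,4,7},{3,4,9}};
    printf("MST weight = %d\n", kruskalMST(e, 7, 5));  /* prints 16 */
    return 0;
}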
Dynamic Programming
 Dynamic Programming is also used in
optimization problems.
 Like divide-and-conquer method, Dynamic
Programming solves problems by combining the
solutions of subproblems.
 Moreover, Dynamic Programming algorithm
solves each sub-problem just once and then saves
its answer in a table, thereby avoiding the work of
re-computing the answer every time.
 Two main properties of a problem suggest that
the given problem can be solved using Dynamic
Programming. These properties are overlapping
sub-problems and optimal substructure.
 Overlapping Sub-Problems
 Similar to Divide-and-Conquer approach,
Dynamic Programming also combines solutions
to sub-problems.
 It is mainly used where the solution of one sub-
problem is needed repeatedly.
 The computed solutions are stored in a table,
so that these don’t have to be re-computed.
Hence, this technique is needed where
overlapping sub-problem exists.
 For example, Binary Search does not have overlapping sub-problems, whereas a recursive program for Fibonacci numbers has many overlapping sub-problems, as the sketch below shows.
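 A memoized Fibonacci function in C shows the idea in miniature; the memo[] array (assuming n ≤ 63 here) stores each answer so that every sub-problem is computed only once.

#include <stdio.h>

/* Without memoization, fib(n) recomputes fib(n-2), fib(n-3), ...
   exponentially many times. memo[] caches each value so every
   sub-problem is solved exactly once; 0 marks "not computed yet". */
long long memo[64];

long long fib(int n)
{
    if (n <= 1) return n;
    if (memo[n] == 0)                  /* compute only on first request */
        memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("%lld\n", fib(50));  /* prints 12586269025 almost instantly */
    return 0;
}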
 Optimal Sub-Structure
 A given problem has Optimal Substructure
Property, if the optimal solution of the given
problem can be obtained using optimal solutions
of its sub-problems.
 For example, the Shortest Path problem has the
following optimal substructure property −
 If a node x lies in the shortest path from a source
node u to destination node v, then the shortest
path from u to v is the combination of the
shortest path from u to x, and the shortest path
from x to v.
 The standard shortest path algorithms, like Floyd-Warshall (all-pairs) and Bellman-Ford (single-source), are typical examples of Dynamic Programming.
 Steps of Dynamic Programming Approach
 A Dynamic Programming algorithm is designed using the following four steps −

 Characterize the structure of an optimal solution.
 Recursively define the value of an optimal solution.
 Compute the value of an optimal solution, typically in a bottom-up fashion.
 Construct an optimal solution from the computed information.
Applications of Dynamic
Programming Approach

 Matrix Chain Multiplication


 Longest Common Subsequence
 Travelling Salesman Problem
 0-1 Knapsack Problem
Matrix Chain Multiplication Problem

 Given a sequence of matrices, find the most efficient way to multiply these matrices together.
 The problem is not actually to perform the multiplications, but merely to decide in which order to perform the multiplications.
 We have many options to multiply a chain of
matrices because matrix multiplication is
associative.
 In other words, no matter how we parenthesize
the product, the result will be the same.
 For example, if we had four matrices A, B, C, and
D, we would have:
 (ABC)D = (AB)(CD) = A(BCD) = ....
 However, the order in which we parenthesize the product affects the
number of simple arithmetic operations needed to compute the
product, or the efficiency.
 For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then,
 (AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
 A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.
 Clearly the first parenthesization requires far fewer operations.

 Given an array p[] which represents the chain of matrices such that the i-th matrix Ai is of dimension p[i−1] × p[i], we need to write a function MatrixChainOrder() that returns the minimum number of multiplications needed to multiply the chain.
 Example: We are given the sequence {4, 10, 3, 12, 20, 7}.
 The matrices have sizes 4 × 10, 10 × 3, 3 × 12, 12 × 20, 20 × 7.
 We need to compute M[i, j] for 1 ≤ i ≤ j ≤ 5. We know M[i, i] = 0 for all i.

 Let us proceed with working away from the diagonal. We compute the optimal solution for the product of 2 matrices.
Here p0 to p5 are positions (dimensions) and M1 to M5 are matrices, where matrix Mi has size p(i−1) × p(i).

On the basis of the sequence, we make a formula:
M[i, j] = min over i ≤ k < j of { M[i, k] + M[k + 1, j] + p[i − 1] · p[k] · p[j] }

In Dynamic Programming, the table is initialized with '0' on the diagonal (M[i, i] = 0), and it is then filled out diagonal by diagonal.
We have to work out all the combinations, but only the combination with the minimum output is taken into consideration.
 Calculation of the product of 2 matrices:

 1. m(1, 2) = M1 × M2
    = 4 × 10 × 3 = 120

 2. m(2, 3) = M2 × M3
    = 10 × 3 × 12 = 360

 3. m(3, 4) = M3 × M4
    = 3 × 12 × 20 = 720

 4. m(4, 5) = M4 × M5
    = 12 × 20 × 7 = 1680
 We initialize the diagonal elements, where i = j, with '0'.
 After that, the second diagonal is solved and we get all the values corresponding to it.
 Then the third diagonal will be solved in the same way.
 Now the product of 3 matrices:
 M[1, 3] = M1 M2 M3
 There are two cases by which we can solve this multiplication:
 (M1 × M2) × M3
 M1 × (M2 × M3)
 After solving both cases, we choose the case giving the minimum output.

 M[1, 3] = 264
 Comparing both outputs, 264 is the minimum, so we insert 264 in the table, and the combination (M1 × M2) × M3 is chosen for building the output.
 M[2, 4] = M2 M3 M4
 There are two cases by which we can solve this multiplication: (M2 × M3) × M4 and M2 × (M3 × M4).
 After solving both cases, we choose the case giving the minimum output.

 M[2, 4] = 1320
 Comparing both outputs, 1320 is the minimum, so we insert 1320 in the table, and the combination M2 × (M3 × M4) is chosen for building the output.
 M[3, 5] = M3 M4 M5
 There are two cases by which we can solve this multiplication:
 (M3 × M4) × M5
 M3 × (M4 × M5)
 After solving both cases, we choose the case giving the minimum output.

 M[3, 5] = 1140
 Comparing both outputs, 1140 is the minimum, so we insert 1140 in the table, and the combination (M3 × M4) × M5 is chosen for building the output.
 Now the product of 4 matrices:
 M[1, 4] = M1 M2 M3 M4
 There are three cases by which we can solve this multiplication:
 (M1 × M2 × M3) × M4
 M1 × (M2 × M3 × M4)
 (M1 × M2) × (M3 × M4)

 After solving these cases, we choose the case giving the minimum output.
 M[1, 4] = 1080
 Comparing the outputs of the different cases, 1080 is the minimum, so we insert 1080 in the table, and the combination (M1 × M2) × (M3 × M4) is chosen for building the output.
 M[2, 5] = M2 M3 M4 M5
 There are three cases by which we can solve this multiplication:
 (M2 × M3 × M4) × M5
 M2 × (M3 × M4 × M5)
 (M2 × M3) × (M4 × M5)

 After solving these cases, we choose the case giving the minimum output.
 M[2, 5] = 1350
 Comparing the outputs of the different cases, 1350 is the minimum, so we insert 1350 in the table, and the combination M2 × (M3 × M4 × M5) is chosen for building the output.
 Now the product of 5 matrices:
 M[1, 5] = M1 M2 M3 M4 M5
 There are four cases by which we can solve this multiplication:
 (M1 × M2 × M3 × M4) × M5
 M1 × (M2 × M3 × M4 × M5)
 (M1 × M2 × M3) × (M4 × M5)
 (M1 × M2) × (M3 × M4 × M5)

 After solving these cases, we choose the case giving the minimum output.
 M[1, 5] = 1344
 Comparing the outputs of the different cases, 1344 is the minimum, so we insert 1344 in the table, and the combination (M1 × M2) × (M3 × M4 × M5) is chosen for building the output.
 Final output: M[1, 5] = 1344 is the minimum number of scalar multiplications, achieved by the parenthesization (M1 × M2) × ((M3 × M4) × M5).
 Step 3: Computing Optimal Costs:

 Let us assume that matrix Ai has dimension p(i−1) × p(i) for i = 1, 2, 3, ..., n.
 The input is a sequence (p0, p1, ..., pn) where length[p] = n + 1.
 The procedure uses an auxiliary table m[1...n, 1...n] for storing the m[i, j] costs and an auxiliary table s[1...n, 1...n] that records which index k achieved the optimal cost in computing m[i, j].
 The algorithm first computes m[i, i] ← 0 for i = 1, 2, 3, ..., n, the minimum costs for chains of length 1.
MATRIX-CHAIN-ORDER (p)
1. n ← length[p] − 1
2. for i ← 1 to n
3.    do m[i, i] ← 0
4. for l ← 2 to n                  // l is the chain length
5.    do for i ← 1 to n − l + 1
6.       do j ← i + l − 1
7.          m[i, j] ← ∞
8.          for k ← i to j − 1
9.             do q ← m[i, k] + m[k + 1, j] + p[i−1] · p[k] · p[j]
10.            if q < m[i, j]
11.               then m[i, j] ← q
12.                    s[i, j] ← k
13. return m and s
 We will use table s to construct an optimal solution.
 Step 4: Constructing an Optimal Solution:
PRINT-OPTIMAL-PARENS (s, i, j)
1. if i = j
2.    then print "Ai"
3.    else print "("
4.         PRINT-OPTIMAL-PARENS (s, i, s[i, j])
5.         PRINT-OPTIMAL-PARENS (s, s[i, j] + 1, j)
6.         print ")"

 Analysis: There are three nested loops. Each loop executes a maximum of n times.
 l, the length: O(n) iterations.
 i, the start: O(n) iterations.
 k, the split point: O(n) iterations.
 The body of the loop has constant complexity.
 Total complexity is: O(n³)
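 The following is a direct C transcription of MATRIX-CHAIN-ORDER together with PRINT-OPTIMAL-PARENS; run on the example sequence {4, 10, 3, 12, 20, 7} it prints 1344 and the parenthesization ((A1A2)((A3A4)A5)). The bound N = 8 is an assumption of this sketch.

#include <stdio.h>
#include <limits.h>

#define N 8   /* maximum number of matrices supported by this sketch */

/* m[i][j] = minimum scalar-multiplication cost of Ai..Aj,
   s[i][j] = split point k that achieved it. Returns m[1][n]. */
int matrixChainOrder(int p[], int n, int s[N + 1][N + 1])
{
    int m[N + 1][N + 1];
    for (int i = 1; i <= n; ++i) m[i][i] = 0;

    for (int l = 2; l <= n; ++l)            /* l is the chain length */
        for (int i = 1; i <= n - l + 1; ++i) {
            int j = i + l - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; ++k) {   /* try every split point */
                int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) { m[i][j] = q; s[i][j] = k; }
            }
        }
    return m[1][n];
}

void printOptimalParens(int s[N + 1][N + 1], int i, int j)
{
    if (i == j) { printf("A%d", i); return; }
    printf("(");
    printOptimalParens(s, i, s[i][j]);
    printOptimalParens(s, s[i][j] + 1, j);
    printf(")");
}

int main(void)
{
    int p[] = {4, 10, 3, 12, 20, 7};           /* the slides' example */
    int s[N + 1][N + 1];
    printf("%d\n", matrixChainOrder(p, 5, s)); /* prints 1344 */
    printOptimalParens(s, 1, 5);               /* prints ((A1A2)((A3A4)A5)) */
    printf("\n");
    return 0;
}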
Algorithm with Explained Example

 Question: P = {7, 1, 5, 4, 2}
 Solution: Here, P is the array of dimensions of the matrices.
 So here we will have 4 matrices:
 A1: 7 × 1, A2: 1 × 5, A3: 5 × 4, A4: 4 × 2
 i.e. the first matrix A1 has dimension 7 × 1,
 the second matrix A2 has dimension 1 × 5,
 the third matrix A3 has dimension 5 × 4,
 the fourth matrix A4 has dimension 4 × 2.
 From P = {7, 1, 5, 4, 2} (given), the positions are p0 = 7, p1 = 1, p2 = 5, p3 = 4, p4 = 2. The length of array P = number of elements in P, ∴ length(P) = 5.
 From Step 3, follow the steps of algorithm MATRIX-CHAIN-ORDER in sequence.
 Step 1:
 n ← length[p] − 1, where n is the total number of matrices. length[p] = 5, ∴ n = 5 − 1 = 4.
 Now we construct two tables m and s.
 Table m has dimension [1...n, 1...n].
 Table s has dimension [1...n−1, 2...n].
 Now we compare the values for k = 1 and k = 2. The minimum of the two will be placed in m[i, j], and the corresponding k in s[i, j].
 Here, we also find the minimum value of m[i, j] for the two values k = 2 and k = 3; the smaller result, q = 28, comes from k = 3.

 And 28 < ∞,
 so m[i, j] ← q:
 m[2, 4] ← 28
 and s[2, 4] ← 3
 i.e. in table s at s[2, 4] we insert 3, and at m[2, 4] we insert 28.
Similarly, when computing m[1, 4], no change occurs: the value of m[1, 4] remains 42, and the value of s[1, 4] = 1.
Longest Common Subsequence
 The longest common subsequence problem is finding the longest sequence
which exists in both the given strings.
 Subsequence
 Let us consider a sequence S = <s1, s2, s3, s4, …,sn>.
 A sequence Z = <z1, z2, z3, z4, …,zm> over S is called a subsequence of S if and only if it can be derived from S by deleting some elements.
 Common Subsequence
 Suppose, X and Y are two sequences over a finite set of elements. We can
say that Z is a common subsequence of X and Y, if Z is a subsequence of
both X and Y.
 Longest Common Subsequence
 If a set of sequences are given, the longest common subsequence problem
is to find a common subsequence of all the sequences that is of maximal
length.
 The longest common subsequence problem is a classic computer science
problem, the basis of data comparison programs such as the diff-utility, and
has applications in bioinformatics. It is also widely used by revision control
systems, such as SVN and Git, for reconciling multiple changes made to a
revision-controlled collection of files.
 Naïve Method
 Let X be a sequence of length m and Y a sequence of length n.
Check for every subsequence of X whether it is a subsequence
of Y, and return the longest common subsequence found.
 There are 2^m subsequences of X. Testing whether a sequence is a subsequence of Y takes O(n) time. Thus, the naïve algorithm would take O(n · 2^m) time.

 Dynamic Programming
 Let X = < x1, x2, x3,…, xm > and Y = < y1, y2, y3,…, yn > be the
sequences. To compute the length of an element the following
algorithm is used.
 In this procedure, table C[m, n] is computed in row major order
and another table B[m,n] is computed to construct optimal
solution.
 Algorithm: LCS-Length-Table-Formulation (X, Y)
m := length(X)
n := length(Y)
for i = 1 to m do
   C[i, 0] := 0
for j = 1 to n do
   C[0, j] := 0
for i = 1 to m do
   for j = 1 to n do
      if xi = yj
         C[i, j] := C[i - 1, j - 1] + 1
         B[i, j] := ‘D’
      else if C[i - 1, j] ≥ C[i, j - 1]
         C[i, j] := C[i - 1, j]
         B[i, j] := ‘U’
      else
         C[i, j] := C[i, j - 1]
         B[i, j] := ‘L’
return C and B
 Algorithm: Print-LCS (B, X, i, j)
if i = 0 or j = 0
   return
if B[i, j] = ‘D’
   Print-LCS(B, X, i-1, j-1)
   Print(xi)
else if B[i, j] = ‘U’
   Print-LCS(B, X, i-1, j)
else
   Print-LCS(B, X, i, j-1)
 This algorithm will print the longest
common subsequence of X and Y.
 Analysis
 To populate the table, the outer for loop iterates m times and the inner for loop iterates n times. Hence, the complexity of the algorithm is O(mn), where m and n are the lengths of the two strings.

 Example
 In this example, we have two strings, X = BACDB and Y = BDCB, and we find the longest common subsequence.
 Following the algorithm LCS-Length-Table-Formulation (as stated above), we calculate table C and table B.
 In table B, instead of ‘D’, ‘L’ and ‘U’, the diagonal arrow, left arrow and up arrow may be used, respectively. After generating table B, the LCS is determined by the function Print-LCS. The result is BCB.
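 The following is a C version of LCS-Length-Table-Formulation together with the printing step; on X = BACDB and Y = BDCB it prints length 3 and the subsequence BCB. The MAXLEN bound is an assumption of this sketch.

#include <stdio.h>
#include <string.h>

#define MAXLEN 32   /* maximum string length handled by this sketch */

int  C[MAXLEN + 1][MAXLEN + 1];   /* C[i][j] = LCS length of X[1..i], Y[1..j] */
char B[MAXLEN + 1][MAXLEN + 1];   /* 'D', 'U', 'L' arrows for reconstruction */

void lcsLength(const char *X, const char *Y, int m, int n)
{
    for (int i = 0; i <= m; ++i) C[i][0] = 0;
    for (int j = 0; j <= n; ++j) C[0][j] = 0;
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            if (X[i - 1] == Y[j - 1]) {             /* characters match */
                C[i][j] = C[i - 1][j - 1] + 1;  B[i][j] = 'D';
            } else if (C[i - 1][j] >= C[i][j - 1]) {
                C[i][j] = C[i - 1][j];          B[i][j] = 'U';
            } else {
                C[i][j] = C[i][j - 1];          B[i][j] = 'L';
            }
}

void printLCS(const char *X, int i, int j)   /* follow the arrows back */
{
    if (i == 0 || j == 0) return;
    if (B[i][j] == 'D')      { printLCS(X, i - 1, j - 1); putchar(X[i - 1]); }
    else if (B[i][j] == 'U') printLCS(X, i - 1, j);
    else                     printLCS(X, i, j - 1);
}

int main(void)
{
    const char *X = "BACDB", *Y = "BDCB";
    int m = strlen(X), n = strlen(Y);
    lcsLength(X, Y, m, n);
    printf("LCS length = %d\n", C[m][n]);  /* prints 3 */
    printLCS(X, m, n);                     /* prints BCB */
    printf("\n");
    return 0;
}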
0/1 Knapsack Problem

 Knapsack Problem:
 A knapsack basically means a bag, a bag of given capacity.
 We want to pack n items in our luggage.
 The i-th item is worth vi dollars and weighs wi pounds.
 Take as valuable a load as possible without exceeding W pounds.
 vi, wi, W are integers.
 Total weight ≤ capacity W
 Value → Max
 Input:
 Knapsack of capacity W.
 List (array) of weights and their corresponding values.
 Output: The maximum profit obtainable within the weight capacity.
 The knapsack problem is to pack the knapsack with maximum value in such a manner that the total weight of the items is not greater than the capacity of the knapsack.
 The knapsack problem can be further divided into two parts:

 1. Fractional Knapsack: The fractional knapsack problem can be solved by the Greedy strategy, whereas the 0/1 problem cannot.
 The fractional problem does not require the Dynamic Programming approach.
 2. 0/1 Knapsack Problem:
 In this, an item cannot be broken, which means the thief should take the item as a whole or should leave it. That is why it is called the 0/1 knapsack problem.
 Each item is taken or not taken.
 We cannot take a fractional amount of an item, nor take an item more than once.
 It cannot be solved by the Greedy approach, because greedy choices are unable to fill the knapsack to capacity optimally.
 Example of 0/1 Knapsack Problem:
 Example: The maximum weight the knapsack can hold is W = 11. There are five items to choose from. Their weights and values are presented in the following table:
The [i, j] entry here will be V[i, j], the best value obtainable using the first i rows of items if the maximum capacity were j. We begin by initialization and the first row.

V[i, j] = max {V[i − 1, j], vi + V[i − 1, j − wi]}

 The value of V[3, 7] was computed as follows:
 V[3, 7] = max {V[3 − 1, 7], v3 + V[3 − 1, 7 − w3]}
         = max {V[2, 7], 18 + V[2, 7 − 5]}
         = max {7, 18 + 6} = 24


 Finally, the output is:

 The maximum value of items in the knapsack is 40 (the bottom-right entry).
Algorithm of Knapsack Problem

 The dynamic programming approach can now be coded as the following algorithm:
 KNAPSACK (n, W)
1. for w = 0 to W
2.    do V[0, w] ← 0
3. for i = 1 to n
4.    do V[i, 0] ← 0
5.    for w = 1 to W
6.       do if (wi ≤ w and vi + V[i − 1, w − wi] > V[i − 1, w])
7.          then V[i, w] ← vi + V[i − 1, w − wi]
8.          else V[i, w] ← V[i − 1, w]
 A complete C program for the above algorithm (the knapSack function is a standard bottom-up implementation of the table V):

#include <stdio.h>

/* Bottom-up 0/1 knapsack: V[i][w] = best value using the first i items
   with capacity w, following the recurrence above. */
int knapSack(int W, int wt[], int val[], int n)
{
    int V[n + 1][W + 1];
    for (int w = 0; w <= W; ++w) V[0][w] = 0;
    for (int i = 1; i <= n; ++i) {
        V[i][0] = 0;
        for (int w = 1; w <= W; ++w)
            if (wt[i-1] <= w && val[i-1] + V[i-1][w - wt[i-1]] > V[i-1][w])
                V[i][w] = val[i-1] + V[i-1][w - wt[i-1]];
            else
                V[i][w] = V[i-1][w];
    }
    return V[n][W];
}

int main()
{
    int i, n, val[20], wt[20], W;

    printf("Enter number of items:");
    scanf("%d", &n);

    printf("Enter value and weight of items:\n");
    for (i = 0; i < n; ++i)
        scanf("%d%d", &val[i], &wt[i]);

    printf("Enter size of knapsack:");
    scanf("%d", &W);

    printf("%d", knapSack(W, wt, val, n));
    return 0;
}
