Cs 6402 Design and Analysis of Algorithms
Algorithm analysis - Time-space trade-off - Asymptotic notations - Conditional asymptotic notation - Removing the condition from the conditional asymptotic notation - Properties of Big-oh notation - Recurrence equations - Solving recurrence equations - Analysis of linear search.
1. Introduction to algorithms
Define algorithm
An algorithm is a finite set of unambiguous instructions that, if followed, accomplishes a
particular task. All algorithms must satisfy the following criteria:
Input: zero or more quantities are externally supplied.
Output: at least one quantity is produced.
Definiteness: each instruction is clear and unambiguous.
Finiteness: the algorithm terminates after a finite number of steps for all cases.
Effectiveness: every instruction must be basic enough that it can, in principle, be carried out by a person using only pencil and paper.
What is an algorithm?
Notation of algorithm
2. Fundamentals of algorithmic problem solving
3. Fundamentals of analysis of algorithm efficiency
It is the process of investigating an algorithm's efficiency with respect to two resources:
running time and memory space.
Define Analysis
Analysis is the separation of a complicated whole into its parts for individual study.
Analysis framework
There are 2 kinds of efficiency.
Time efficiency: indicates how fast an algorithm runs. The running time of an algorithm
on a particular input is the number of basic operations or steps executed.
Space efficiency: deals with the extra space the algorithm requires.
In general, the time taken by an algorithm grows with the size of the input.
So, the running time of the program depends on the size of its input.
The input size is best measured as the number of items in the input.
The parameter n denotes the algorithm's input size.
For example, it will be the array size n for the problems like sorting, searching and
finding the smallest or largest elements etc.
For evaluating a polynomial p(x) = an xn + ... + a0 of degree n, the input size will be
either the polynomial's degree or the number of its coefficients, which is larger by one
than its degree.
For computing the product of two n-by-n matrices, the input size is the matrix
order n.
If the input to an algorithm is a graph, the input size can be described by the number
of the vertices and edges of the graph represented by adjacency matrix.
The input size metric is influenced by the operations of the algorithm. For example,
in a spell checking algorithm, the algorithm examines individual characters in the
input.
Measuring the size by the number b of bits in n's binary representation: b = ⌊log2 n⌋ + 1.
Drawbacks of measuring running time directly: dependence on the speed of a particular
computer, dependence on the quality of the program implementing the algorithm and of the
compiler used in generating the machine code, and the difficulty of clocking the actual
running time of the program.
The alternative approach is to count the number of times each of the algorithm's operations is executed:
identify the most important operation of the algorithm, called the basic operation (the
operation contributing the most to the total running time), and compute the number of times
the basic operation is executed.
Example 1: sorting algorithms work by comparing elements of the list being sorted with each
other; for such algorithms the basic operation is a key comparison.
Example 2: matrix multiplication algorithms use 2 arithmetic operations, multiplication and
addition; multiplication is taken as the basic operation.
The framework for the analysis of an algorithms time efficiency suggests a measuring it by
counting the number of times the algorithms basic operation is executed on inputs of size n.
Let C(n) = the number of times the basic operation is executed for an input of size n,
cop = the time of execution of the algorithm's basic operation on a particular computer, and
T(n) = the estimated running time of a program implementing the algorithm. Then
T(n) ≈ cop · C(n)
Example
C(n) = 1/2 n(n-1) = 1/2 n² - 1/2 n ≈ 1/2 n²
T(2n)/T(n) ≈ C(2n)/C(n) ≈ (1/2 (2n)²) / (1/2 n²) = 4, so doubling the input size roughly quadruples the running time.
iii). Orders of growth
The difference in algorithmic efficiencies becomes clear and important only for large values
of n, because of the functions' orders of growth.
n    | log2 n | n  | n·log2 n | n²  | n³  | 2ⁿ  | n!
     | (logarithmic) | (linear) | (linearithmic) | (quadratic) | (cubic) | (exponential) | (factorial)
10   | 3.3    | 10 | 3.3·10   | 10² | 10³ | 10³ | 3.6·10⁶
An algorithm with a time complexity of n is more efficient than one with a time complexity of
n² for sufficiently large values of n.
Suppose we have 2 algorithms for the same problem, and their every-case time complexities
are 100n for the first algorithm and 0.01n² for the second algorithm.
For larger values of n, the first algorithm is more efficient: 0.01n² > 100n when, dividing both
sides by 0.01n, n > 10,000. So the first algorithm is more efficient than the second whenever
n is larger than 10,000.
Algorithms with time complexities such as n and 100n are called linear-time algorithms,
since their time complexities are linear in the input size n, whereas algorithms with time
complexities such as n² and 0.01n² are called quadratic-time algorithms.
4. Worst case, best case and average case efficiency or analysis of linear search or
sequential search
Worst case efficiency
It considers the maximum number of times the basic operation is executed, denoted
W(n) or Cworst(n) in worst-case time complexity analysis. If T(n), the every-case time
complexity, exists, then W(n) = T(n).
Best case efficiency
Best-case time complexity analysis is the determination of the smallest number of times the
basic operation is done, denoted B(n) or Cbest(n). If T(n), the every-case time complexity,
exists, then clearly B(n) = T(n); if T(n) does not exist, we have to find B(n).
Average case efficiency
A(n) or Cavg(n) is defined as the average or expected value of the number of times the
algorithm does the basic operation for an input size of n; Cavg(n) is the average-case time
complexity. If T(n), the every-case time complexity, exists, then clearly A(n) = T(n); if T(n)
does not exist, we have to find A(n).
These cases are analysed with sequential search algorithm or linear search algorithm.
Algorithm
LinearSearch (A[0..n-1], K)
// Searches for a given value in a given array by linear search
// Input: an array A[0..n-1] and a search key K
// Output: the index of the first element of A that matches K, or -1 if there are no matching
elements
i <- 0
while i < n and A[i] ≠ K do
i <- i + 1
if i < n return i
else return -1
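The pseudocode above can be sketched in Python (a minimal illustrative version; 0-based indexing as in the pseudocode, and the function name is chosen here, not fixed by the notes):

```python
def linear_search(a, k):
    """Return the index of the first element of a equal to k, or -1."""
    i = 0
    # keep scanning while we are in bounds and have not found k
    while i < len(a) and a[i] != k:
        i += 1
    return i if i < len(a) else -1
```

The search stops at the first match, which is exactly why the best case (key in the first slot) costs one comparison and the worst case (key last or absent) costs n comparisons.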
Algorithm analysis
Basic operation - comparison of an item in the array with k.
Input size n, number of items in the array.
Worst case: the basic operation is done at most n times, which is the case
when K is the last item in the array or when K is not in the array at all.
Cworst(n) = n = maximum over inputs of size n.
Best case: since n >= 1, there must be at least one pass through the loop. If
K = A[0], there will be exactly one comparison regardless of the size n.
B(n) = Cbest(n) = 1 = minimum over inputs of size n.
Average case: we first analyse the case where it is known that K is in the array
and the items are all distinct. For 1 <= i <= n, the probability that K is
in the i-th array slot is 1/n, which gives Cavg(n) = (n+1)/2 comparisons.
5. Analysis of binary search
Problem: determine whether x is in the sorted array S of n keys.
Input: a positive integer n, a sorted array of keys S indexed from 1 to n, a key x.
Output: location, the index of x in S if x is in S, or 0 otherwise.
Algorithm
Binsearch (int n, const keytype S[], keytype x, index& location)
{
index low, mid, high;
low = 1;
high = n;
location = 0;
while (low <= high && location == 0)
{
mid = (low + high) / 2;
if (x == S[mid])
location = mid;
else if (x < S[mid])
high = mid - 1;
else
low = mid + 1;
}}
6. Analysis of matrix multiplication
Problem: determine the product of 2 n * n matrices.
Input: int n, two-dimensional arrays of numbers A and B, each of which has both its rows and
columns indexed from 1 to n.
Output: two dimensional array of numbers C, which has both its rows and columns indexed
from 1 to n, containing the product of A* B.
Algorithm
Matrixmult (int n, const number A[][], const number B[][], number C[][])
{
index i, j, k;
for (i = 1; i <= n; i++)
for (j = 1; j <= n; j++)
{
C[i][j] = 0;
for (k = 1; k <= n; k++)
C[i][j] = C[i][j] + A[i][k] * B[k][j];
}}
Therefore the number of times the basic operation is executed is T(n) = n·n·n = n³, so
T(2n) = (2n)³ and T(2n)/T(n) = (2n)³/n³ = 8. This algorithm takes 8 times longer if the
input size is doubled.
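A minimal Python sketch of the triple loop (0-based indexing for convenience; the function name is illustrative):

```python
def matrixmult(a, b):
    """Multiply two n-by-n matrices given as lists of lists."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):       # n multiplications per entry of C
                c[i][j] += a[i][k] * b[k][j]
    return c
```

The innermost statement runs n·n·n = n³ times, matching the analysis above.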
Mathematical analysis of recursive algorithms Fibonacci numbers
Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, ...
Simple recurrence relation is F (n) = F (n- 1) + F (n-2), for n>1 ----------1
Initial conditions is F (0) =0, F (1) =1 -----------2
Equation 1 can be rewritten as
F (n) - F (n-1) - F (n-2) = 0 ----------3
The characteristic equation is r² - r - 1 = 0
Algorithm
Fib (n)
// computes the nth Fibonacci number
// Input: a non-negative integer n
// Output: the nth Fibonacci number
F [0] <- 0; F [1] <- 1
for i <- 2 to n do
F [i] <- F [i-1] + F [i-2]
return F [n]
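The iterative pseudocode above can be sketched in Python (a direct translation; the table f plays the role of the array F):

```python
def fib(n):
    """nth Fibonacci number, computed bottom-up in a table."""
    f = [0, 1] + [0] * max(0, n - 1)   # f[0]=0, f[1]=1, rest filled below
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]
```

Because each F[i] is computed once from two stored values, this takes n - 1 additions, in contrast to the exponential cost of the naive recursive version of the recurrence.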
Analysis
Finding the value of the largest element in a list of n numbers
Algorithm
Maxelement (A[0..n-1])
// Determines the value of the largest element in a given array
// Input: an array A[0..n-1] of real numbers.
// Output: the value of the largest element in A
maxval <- A [0]
for i<- 1 to n-1 do
if A [i] >maxval
maxval<- A[i]
return maxval.
Analysis
Input size = number of elements in the array = n.
Basic operation: comparison (and assignment).
The number of comparisons is the same for every input of size n, so there is no need for
separate best-case, worst-case and average-case efficiencies.
C(n) = Σi=1..n-1 1 = (n-1) - 1 + 1 = n - 1 ∈ Θ(n).
Computing matrix multiplication
Algorithm
MatrixMultiplication (A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
// Multiplies two n-by-n matrices
// Input: two n-by-n matrices A and B
// Output: matrix C = A*B
for i <- 0 to n-1 do
for j <- 0 to n-1 do
C [i,j] <- 0
for k <- 0 to n-1 do
C [i,j] <- C [i,j] + A [i,k] * B [k,j]
return C
Analysis
Input size is equivalent to the order of the matrix
Basic operation multiplication
Time complexity: M(n) = Σi=0..n-1 Σj=0..n-1 Σk=0..n-1 1 = n³
Find the number of binary digits in the binary representation of a positive
decimal integer
Algorithm
Binary (n)
// Input: a positive decimal integer n
// Output: the number of binary digits in n's binary representation
count <- 1
while n > 1 do
count <- count + 1
n <- ⌊n/2⌋
return count
Analysis
The input size here is the magnitude of the integer n itself.
Basic operation: the comparison n > 1.
Efficiency: ⌊log2 n⌋ + 1.
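A direct Python sketch of the loop (integer halving via floor division plays the role of ⌊n/2⌋):

```python
def binary_digits(n):
    """Number of bits in the binary representation of a positive integer n."""
    count = 1
    while n > 1:        # each halving strips one binary digit
        count += 1
        n //= 2
    return count
```

Since n is halved on every iteration, the loop runs ⌊log2 n⌋ times, giving the ⌊log2 n⌋ + 1 efficiency stated above.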
9. Analysis of space and time complexity
Space complexity: The space complexity of an algorithm is the amount of memory it needs
to run to completion. The space needed by each algorithm is the sum of following
components.
Instruction space The space needed to store the compiled version of the program
instructions.
Data space The space needed to store all constant and variable values.
Environment stack space The space needed to store information to resume execution of
partially completed functions. Example: if function 1 invokes function 2, then we must at
least save a pointer to the instruction of function 1 to be executed when function 2 terminates.
The total space needed by an algorithm can be simply divided into 2 parts from the 3
components of space complexity.
Fixed part a fixed part space is independent of the characteristics (e.g., number, size of the
inputs and outputs). This part includes the instruction space (ie., space for the code) , space
for simple variables and fixed size component variables (also called aggregate ) , space for
constants and so on.
Variable part - the space needed by component variables whose size is
dependent on the particular problem instance being solved, and the space needed by
reference variables (which depends on the instance characteristics). The space requirement of any
algorithm P may be written as: S (P) = C + SP (instance characteristics), where
C is a constant, the fixed part of the space requirement, and
SP is the variable component, which depends on the magnitude (size) of the inputs to and outputs from the
algorithm.
Example for space complexity: for an algorithm computing a + b + c, the problem instance is characterised by the specific values
of a, b and c. Assuming one word is needed to store each of a, b, c and the result, the
space needed is independent of the instance characteristics (fixed part), so SP (instance
characteristics) = 0.
Time complexity The time complexity of an algorithm is the amount of time (or the
number of steps) it needs to run to completion. T (p) = time taken by a program / algorithm is
T (p) = compile time + run time or execution time.
Compile time - it does not depend on the instance characteristics. Also, we
may assume a compiled program will be run several times without recompilation.
Run time - it depends on the instance characteristics (including the space needed by reference variables);
denoted by tp.
tp (n) = Ca·ADD(n) + Cs·SUB(n) + Cm·MUL(n) + Cd·DIV(n) + ...
Ca - time needed for an addition
Cs - time needed for a subtraction
Cm - time needed for a multiplication
Cd - time needed for a division
Program step: a semantically meaningful segment of a program that has an
execution time independent of the instance characteristics. Program statements are
classified into 3 types depending on the task performed, with the number of steps required
by each type given below:
1. Comments - zero steps
2. Assignment statement - one step
3. Iterative statement - a finite number of steps (for, while, repeat-until)
Methods
Method 1: introduce a global variable count with initial value 0. A statement
incrementing count is inserted into the program so that each time a statement in the
original program is executed, count is incremented by the step count of that statement.
Method 2: to determine the step count of an algorithm, build a table in which we list the
total number of steps contributed by each statement. The final step count is obtained
in 3 consecutive steps:
The number of steps per execution (s/e) of each statement is calculated.
The total number of times each statement is executed (i.e., its frequency) is determined.
Multiply s/e by frequency to find the total steps of each statement, and add the totals
of all statements to obtain the final step count (i.e., the total).
Example: introduce the variable count in the algorithm Sum, which computes Σ a[i]
iteratively, where the a[i] are real numbers.
S = 0.0;
for i = 1 to n do
S = S + a [i];
return S;
Step count: the change in the value of count by the time this program terminates is the
number of steps executed by the algorithm Sum.
UNIT II
General method it is easier to solve several small instances of a problem than one larger one. They
divide the problem into smaller instances of the same problem then solve the smaller instances
recursively, and finally combine the solutions to obtain the solution for the original input.
Example: Detecting a counterfeit coin
Problem: you are given a bag with 16 coins and told that one of these coins may be counterfeit (not a
real coin). You are told that counterfeit coins are lighter than genuine ones.
Solution: Your task is to determine whether the bag contains a counterfeit coin. You have a machine
that compares the weight of 2 sets of coins and tells you which set is lighter or whether both sets have
the same weight.
Steps: 1. We can compare the weights of coins 1 and 2.
2. If one of them is lighter, it is the counterfeit coin: if coin 1 < coin 2, coin 1 is counterfeit;
otherwise coin 2 is counterfeit.
3. If coins 1 and 2 have the same weight, we compare coins 3 and 4.
4. Again, if one coin is lighter, a counterfeit coin has been detected and we are done.
5. If not, we compare coins 5 and 6.
6. Proceeding in this way, we can determine whether the bag contains a counterfeit coin by making at
most 8 weight comparisons. This process identifies the counterfeit coin.
Figure divide and conquer technique.
Algorithm
DAC (P)
{
if Small (P) then return S (P);
else
{
divide P into smaller instances P1, P2, ..., Pk, k >= 1;
apply DAC to each of these subproblems;
return Combine (DAC (P1), DAC (P2), ..., DAC (Pk)); }}
Analysis
P - the problem to be solved.
Small (P) - a Boolean-valued function that determines whether the input size is small enough
that the answer can be computed without splitting.
Problem P is divided into smaller subproblems P1, P2, ..., Pk, k >= 1.
Combine - a function that determines the solution to P using the solutions to the k subproblems.
Size of P = n.
Sizes of the k subproblems = n1, n2, ..., nk.
The computing time of DAC is described by the recurrence relation
T(n) = g(n), for n small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise
where
T(n) - time for DAC on any input of size n
g(n) - time to compute the answer directly for small inputs
f(n) - time for dividing P and combining the solutions to the subproblems.
When the subproblems have equal size, the complexity of a DAC algorithm is given by the recurrence
T(n) = T(1), for n = 1
T(n) = a·T(n/b) + f(n), for n > 1, where a and b are constants.
Merge sort
Merge sort is based on the divide-and-conquer technique. Since we are dealing with subproblems,
we state each subproblem as sorting a subarray A[p .. r]. Initially, p = 1 and r = n, but these values
change as we recurse through subproblems.
1. Divide Step
If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two subarrays A[p .. q] and A[q + 1 .. r], each containing about half of
the elements of A[p .. r]. That is, q is the halfway point of A[p .. r].
2. Conquer Step
Conquer by recursively sorting the two subarrays A[p .. q] and A[q + 1 .. r].
3. Combine Step
Combine the elements back in A[p .. r] by merging the two sorted subarrays A[p .. q]
and A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a
procedure MERGE (A, p, q, r).
Example 1:
Example 2:
Algorithm
Merge (A[0..p-1], B[0..q-1], C[0..p+q-1])
i = 0; j = 0; k = 0
while i < p and j < q do
if A [i] <= B [j]
C [k] = A [i]; i = i + 1;
else
C [k] = B [j]; j = j + 1;
k = k + 1;
if i = p
copy B [j..q-1] to C [k..p+q-1]
else
copy A [i..p-1] to C [k..p+q-1]
Analysis
Total:
T(n) = Θ(1), if n = 1
T(n) = 2T(n/2) + Θ(n), if n > 1
whose solution is T(n) = Θ(n lg n)
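The divide, conquer and combine steps can be sketched together in Python (an illustrative version that returns a new sorted list rather than sorting A[p..r] in place):

```python
def merge_sort(a):
    """Sort a list by merge sort: split, sort halves recursively, merge."""
    if len(a) <= 1:                       # divide step base case: already sorted
        return a
    mid = len(a) // 2                     # q, the halfway point
    left = merge_sort(a[:mid])            # conquer step: sort each half
    right = merge_sort(a[mid:])
    # combine step: merge the two sorted halves into one sorted list
    c, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            c.append(left[i]); i += 1
        else:
            c.append(right[j]); j += 1
    return c + left[i:] + right[j:]       # append whichever half remains
```

Each level of recursion does Θ(n) merging work over lg n levels, which is where the Θ(n lg n) total above comes from.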
Recurrence relations
Equation or an inequality that characterizes a function by its values on smaller inputs.
Solution Methods
Substitution Method.
Recursion-tree Method.
Master Method.
Recurrence relations arise when we analyze the running time of iterative or recursive
algorithms.
T(n) = Θ(1), if n <= c
Example:
Recurrence: T(n) = 1 if n = 1; T(n) = 2T(n/2) + n if n > 1.
Guess: T(n) = n lg n + n.
Induction:
T(n) = 2T(n/2) + n
= 2 ((n/2) lg(n/2) + (n/2)) + n
= n (lg(n/2)) + 2n
= n lg n - n + 2n
= n lg n + n
T(n) = Θ(1) if n = 1; equivalently T(n) = c if n = 1, where the constant
c > 0 covers the running time for the base case and the time per array element for the divide and combine
steps.
-------------------------------------------------------------------------------------------------------------------------
Binary search
Algorithm
// If x = a[j], for some j, then return j, else return -1
else return -1
----------------------------------------------------------------------------------------------------------------------
Different approaches
Example:
Large instance.
min(A) = 2, min(B) = 1.
max(A) = 6, max(B) = 9.
min{min(A),min(B)} = 1.
max{max(A), max(B)} = 9.
Algorithm
Analysis: Complexity: if T(n) represents the number of comparisons, then the resulting recurrence relation is
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2, for n > 2
T(n) = 1, for n = 2
T(n) = 0, for n = 1
When n is a power of 2, n = 2^k for some positive integer k, then
T(n) = 2T(n/2) + 2
= 2 (2T(n/4) + 2) + 2
= 4T(n/4) + 4 + 2
= ...
= 2^(k-1) T(2) + Σi=1..k-1 2^i
= 2^(k-1) + 2^k - 2
T(n) = (3n/2) - 2
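The recurrence can be checked concretely with a Python sketch that returns the min, the max, and the number of comparisons it made (only the two comparisons charged by the recurrence per combine and one per pair are counted, matching the analysis):

```python
def min_max(a, lo, hi):
    """Return (min, max, comparisons) of a[lo..hi] by divide and conquer."""
    if lo == hi:                                   # one element: no comparison
        return a[lo], a[lo], 0
    if hi == lo + 1:                               # two elements: one comparison
        return (a[lo], a[hi], 1) if a[lo] < a[hi] else (a[hi], a[lo], 1)
    mid = (lo + hi) // 2
    min1, max1, c1 = min_max(a, lo, mid)           # left half
    min2, max2, c2 = min_max(a, mid + 1, hi)       # right half
    # combining costs 2 comparisons: min-vs-min and max-vs-max
    return min(min1, min2), max(max1, max2), c1 + c2 + 2
```

For n = 8 (a power of 2) the count is 3·8/2 - 2 = 10, as the closed form predicts.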
Quick sort
Definition
Quicksort is a divide-and-conquer sorting algorithm in which division is dynamically carried out (as
opposed to static division in Merge sort).
Divide: Rearrange the elements and split the array into two subarrays with an element (the pivot) in between,
so that each element in the left subarray is less than or equal to the pivot and each
element in the right subarray is greater than the pivot.
Conquer: Sort the two subarrays recursively.
Combine: Since sorting is done in place, no extra work is needed; the pivot divides a into the 2 sorted sublists x and y.
Example:
Algorithm
// sorts the elements a[low..high], which reside in the global array a[1:n], into ascending order
Analysis
o Actually: how do we know that this is the worst case?
o It could be proved, but we won't do it here.
Average case (intuition for a 9-to-1 split of the recursion tree):
o Each of these levels has weight cn.
o The deep side of the tree terminates in log_{10/9} n levels.
o The entire tree therefore has weight O(cn log_{10/9} n) = O(n lg n).
Space Performance:
o No extra array is needed (sorting is done in place).
o Total space: Θ(n).
o But, ... what about the space for ...?
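As a concrete sketch, here is an in-place quicksort in Python. The partition rule (last element as pivot) is an assumption for illustration, since the text does not fix a pivot choice:

```python
def quicksort(a, low, high):
    """In-place quicksort of a[low..high]; pivot = last element (assumed rule)."""
    if low < high:
        pivot = a[high]
        i = low - 1
        for j in range(low, high):        # partition: small elements to the left
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[high] = a[high], a[i + 1]   # place pivot between sublists
        quicksort(a, low, i)              # left sublist (<= pivot)
        quicksort(a, i + 2, high)         # right sublist (> pivot)
```

Note that the recursion stack is the hidden space cost alluded to above: Θ(lg n) deep on average, but Θ(n) in the worst case.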
---------------------------------------------------------------------------------------------------------------------------
We are given an array of n points in the plane, and the problem is to find out the closest pair of points
in the array. This problem arises in a number of applications. For example, in air-traffic control, you
may want to monitor planes that come too close together, since this may indicate a possible collision.
Recall the following formula for distance between two points p and q.
The Brute force solution is O(n^2), compute the distance between each pair and return the smallest.
We can calculate the smallest distance in O(nLogn) time using Divide and Conquer strategy. In this
post, a O(n x (Logn)^2) approach is discussed. We will be discussing a O(nLogn) approach in a
separate post.
Algorithm
Following are the detailed steps of the O(n (log n)^2) algorithm.
Input: An array of n points P[]
Output: The smallest distance between two points in the given array.
As a pre-processing step, input array is sorted according to x coordinates.
1) Find the middle point in the sorted array, we can take P[n/2] as middle point.
2) Divide the given array in two halves. The first subarray contains points from P[0] to P[n/2]. The
second subarray contains points from P[n/2+1] to P[n-1].
3) Recursively find the smallest distances in both subarrays. Let the distances be dl and dr. Find the
minimum of dl and dr. Let the minimum be d.
4) From the above 3 steps, we have an upper bound d on the minimum distance. Now we need to consider the
pairs such that one point in the pair is from the left half and the other is from the right half. Consider the vertical line
passing through P[n/2] and find all points whose x coordinate is closer than d to the
middle vertical line. Build an array strip[] of all such points.
5) Sort the array strip[] according to y coordinates. This step is O(nLogn). It can be optimized to O(n)
by recursively sorting and merging.
6) Find the smallest distance in strip[]. This is tricky. At first glance it seems to be an O(n^2) step, but
it is actually O(n): it can be proved geometrically that for every point in strip, we only need to check
at most 7 points after it (note that strip is sorted according to y coordinate).
7) Finally return the minimum of d and distance calculated in above step (step 6)
Algorithm
Analysis
Definition: a set of points in the plane is said to be convex if for any 2 points A, B in the set,
the entire line segment with endpoints at A and B belongs to the set.
Example :
The convex hull problem, that of finding the smallest convex polygon containing n given points in a plane,
can be solved using the DAC method. This version of solving the convex hull problem is called quickhull
because the method is based on the quicksort technique.
Input : A set S of planar points
Step 1: If S contains no more than five points, use exhaustive searching to find the convex hull and
return.
Step 2: Find a median line perpendicular to the X-axis which divides S into SL and SR; SL lies to the
left of SR.
Step 3: Recursively construct convex hulls for SL and SR. Denote these convex hulls by Hull(SL) and
Hull(SR) respectively.
Example
1) The set of n points is divided into two subsets, L containing the leftmost n/2 points
and R containing the rightmost n/2 points.
2) The convex hulls of the subsets L and R are computed recursively.
The remaining part of the algorithm is a solution for the base case (i.e., the leaves of your
recursion). In the example shown above, the final hull appears as follows:
Algorithm
// Problem description: This algorithm finds the convex hull from the set of points.
Convex hull = { }
//find left and right most points say A & B, convex hull ={ A,B}.
// P2 = { points on the left side of the line AB }
Return
// from the given set of points in T , find point say z ,from segment XY . Add point Z to convex hull at
the location between X and Y.
// recursively call
Time complexity:
= O(n log n)
----------------------------------------------------------------------------------------------------------------
Brute force method
Introduction
The first algorithm design technique we shall explore is a straightforward approach to solving a problem,
usually based directly on the problem statement and definitions of the concepts involved. The "force" comes from
using computer power, not intellectual power. In short, brute force means "just do it".
Example: computing a^n: for a given number a and non-negative integer n, compute the power by repeated
multiplication: a^n = a*a*...*a (n times). Similarly, n! can be computed as 1*2*3*...*n. Other examples: performing
multiplication of 2 matrices; searching for a key value in a given list of elements.
It is a general approach to problem solving. Many elementary algorithmic tasks, such as summing all
the elements of a list or finding the largest element in it, can be done using the BF method.
For solving some problems, such as sorting or matrix multiplication with no restriction on input
size, the BF method is applied.
Problem: to find the 2 closest points in a set of n points. We consider the 2-dimensional
planar case. Points can be airplanes, database records, etc. In the plane a point is
specified by a pair (x, y), hence P = (x, y). For simplicity we use the Euclidean distance
d(Pi, Pj) = sqrt((xi-xj)^2 + (yi-yj)^2), where Pi and Pj are 2 points with i < j; note that
d(Pi, Pj) = d(Pj, Pi).
Brute force: compute the distance between each pair of distinct points and find a pair with the
smallest distance.
Algorithm
dmin <- ∞
for i <- 1 to n-1 do
for j <- i+1 to n do
d <- sqrt((xi-xj)^2 + (yi-yj)^2)
if d < dmin then
dmin <- d; index1 <- i; index2 <- j
return index1, index2
Analysis
Input size = n. Basic operation: computing the distance. Efficiency: Θ(n²).
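The brute-force pseudocode above translates directly to Python (0-based indices here; the function returns the index pair together with the distance found):

```python
from math import sqrt, inf

def closest_pair_bf(points):
    """Indices of the closest pair, checking all n(n-1)/2 pairs."""
    dmin, best = inf, (None, None)
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
            if d < dmin:
                dmin, best = d, (i, j)
    return best, dmin
```

The double loop executes the distance computation n(n-1)/2 times, which is the Θ(n²) efficiency noted above.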
-------------------------------------------------------------------------------------------------------------------------
The convex hull of a set S of points is the smallest convex set containing S. If S is a set of 2
points, its convex hull is the line segment connecting these points. If S is a set of 3 points, then
its convex hull is the triangle with vertices at those points.
Example
Definition of extreme points: An extreme point of a convex set is a point which is not a
middle point of any line segment with end points in the set.
Algorithm or procedure
In order to find the convex hull using the BF approach, the following simple rule is applied:
Find a line segment connecting 2 points Pi and Pj such that all the other
points lie on the same side of the straight line through them.
Repeat this test for every pair of points, so that the boundary of the convex hull is
formed.
Analysis
The straight line through two points (x1, y1), (x2, y2) in the coordinate plane can be defined by the
equation ax + by = c, where a = y2 - y1, b = x1 - x2, c = x1·y2 - y1·x2. Such a line divides
the plane into two half-planes: for all the points in one of them ax + by > c, while for all the points in
the other, ax + by < c. Efficiency: Θ(n³)
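The rule can be sketched in Python: for each ordered pair of points we form the line coefficients a, b, c exactly as above and keep the pair's endpoints when all other points fall on one side (a minimal version; it returns the set of hull vertices and assumes no three points are collinear):

```python
def convex_hull_bf(points):
    """Hull vertices by brute force: an edge (Pi, Pj) is on the hull when
    every other point satisfies a*x + b*y <= c for the line through Pi, Pj."""
    hull = set()
    n = len(points)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            (x1, y1), (x2, y2) = points[i], points[j]
            a, b, c = y2 - y1, x1 - x2, x1 * y2 - y1 * x2
            # ordered pairs are tried both ways, so testing one side suffices
            if all(a * x + b * y <= c for k, (x, y) in enumerate(points)
                   if k not in (i, j)):
                hull.add(points[i])
                hull.add(points[j])
    return hull
```

For each of the O(n²) pairs the side test inspects the other n - 2 points, giving the Θ(n³) efficiency stated above.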
--------------------------------------------------------------------------------------------------------------
Exhaustive search
It is a method in which solution is obtained by searching each element of given problem. It makes use
of straight forward approach.
Tour          Cost
a->b->c->d->a 2+3+7+5 = 17
a->b->d->c->a 2+4+7+8 = 21
a->c->b->d->a 8+3+4+5 = 20
a->c->d->b->a 8+7+4+2 = 21
a->d->b->c->a 5+4+3+8 = 20
a->d->c->b->a 5+7+3+2 = 17
Efficiency: (n-1)!/2
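Exhaustively trying all tours can be sketched in Python with `itertools.permutations` (the distance matrix below encodes the edge weights implied by the tour costs above, with a, b, c, d numbered 0..3):

```python
from itertools import permutations

def tsp_exhaustive(dist):
    """Best tour starting and ending at city 0; dist is an n-by-n matrix."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):   # (n-1)! orderings of other cities
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_tour, best_cost
```

Each tour and its reverse have the same cost, which is why only (n-1)!/2 tours are truly distinct.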
2. Knapsack problem
Definition: Suppose that there are n objects, i = 1, 2, ..., n. Each object i has a weight
wi, the value associated with each object is vi, and the capacity of the knapsack is W. A thief has to
pick up the most valuable objects that fill the knapsack to at most its capacity. Given n items: weights
w1, w2, ..., wn; values v1, v2, ..., vn.
Example:
Item  Weight  Value
1       2     $20
2       5     $30
3      10     $50
4       5     $10
Solution
Subset     Total weight  Total value
Ø               0          $0
{1}             2          $20
{2}             5          $30
{3}            10          $50
{4}             5          $10
{1,2}           7          $50
{1,3}          12          $70
{1,4}           7          $30
{2,3}          15          $80
{2,4}          10          $40
{3,4}          15          $60
{1,2,4}        12          $60
The best feasible subset is {2, 3}, with total value $80.
Algorithm:
We go through all 2^n subsets and find the one with the maximum value among those with total weight less than or
equal to W.
Efficiency: Ω(2^n)
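The subset enumeration can be sketched in Python with `itertools.combinations`; the capacity of 16 in the test is an assumption chosen to be consistent with the feasible subsets listed in the table above:

```python
from itertools import combinations

def knapsack_exhaustive(weights, values, W):
    """Check every subset; return (best value, best subset of item indices)."""
    n = len(weights)
    best_val, best_set = 0, ()
    for r in range(n + 1):                         # subset sizes 0..n
        for subset in combinations(range(n), r):   # all 2^n subsets in total
            w = sum(weights[i] for i in subset)
            v = sum(values[i] for i in subset)
            if w <= W and v > best_val:            # keep best feasible subset
                best_val, best_set = v, subset
    return best_val, best_set
```

The loop visits all 2^n subsets, which is the Ω(2^n) cost noted above.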
Definition: There are n people who need to be assigned to n jobs, one person per job. The
cost of assigning person i to job j is C[i, j]. Find an assignment that minimizes the total cost.
Example
UNIT III
Computing a Binomial Coefficient - Warshall's and Floyd's algorithms - Optimal Binary Search
Trees - Knapsack Problem and Memory functions - Greedy Technique - Prim's algorithm - Kruskal's
Algorithm - Dijkstra's Algorithm - Huffman Trees.
DYNAMIC PROGRAMMING
General method
It is an algorithm design method that can be used when the solution to a problem can be viewed as the result
of a sequence of decisions. It is a general algorithm design technique for solving problems defined by
recurrences with overlapping subproblems.
Main idea: set up a recurrence relating a solution to a larger instance to solutions of some smaller
instances; solve each smaller instance once; record the solutions in a table; extract the solution to the initial
instance from that table.
Knapsack problem
Shortest path
Multistage graph
Fibonacci series
Binomial coefficient
Floyds algorithm
Optimal Binary search
1. Computing binomial coefficient
Principle of optimality: It states that in an optimal sequence of choices or decisions, each
sub sequence must be optimal.
It is an example of applying dynamic programming. In mathematics, a binomial coefficient is the coefficient
of any of the terms in the expansion of (a + b)^n. It is denoted by C (n, k), where 0 <= k <= n.
i.e. n=4, k=3:
For i = 1:
For i = 2:
For i=3:
For i=4:
Algorithm
else C[i, j] <- C[i-1, j-1] + C[i-1, j] // recurrence relation
return C[n, k]
Analysis
Addition is the basic operation. Because k <= n, the sum needs to be split into two parts: only
half the table (a triangle) is filled for i <= k, and the remaining rows of the table are filled across the
first k+1 columns.
A(n, k) = sum for the upper triangle + sum for the lower rectangle
= Σi=1..k Σj=1..i-1 1 + Σi=k+1..n Σj=1..k 1
= Σi=1..k (i-1) + Σi=k+1..n k
= (k-1)k/2 + k(n-k) ∈ Θ(nk)
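The table-filling scheme can be sketched in Python (each entry uses the recurrence C[i, j] = C[i-1, j-1] + C[i-1, j] with 1s on the boundary):

```python
def binomial(n, k):
    """C(n, k) computed bottom-up via Pascal's relation."""
    c = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):   # only the needed half of each row
            if j == 0 or j == i:
                c[i][j] = 1              # boundary values
            else:
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j]
    return c[n][k]
```

Each table entry costs one addition, so the total number of additions matches the A(n, k) ∈ Θ(nk) count above.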
2. Floyd s algorithm
It is an algorithm for finding the shortest paths between every pair of vertices of a graph. It works for both
directed and undirected graphs.
Weighted graph: a graph in which weights or distances are given along the edges.
Example
Algorithm
D <- W // copy the weight matrix into D; this initialization gives D(0)
for k <- 0 to n-1
for i <- 0 to n-1
for j <- 0 to n-1
D[i, j] <- min(D[i, j], D[i, k] + D[k, j])
return D // the computed D(n) is returned
Analysis
In the given algorithm the basic operation is D[i, j] <- min(D[i, j], D[i, k] + D[k, j]), which sits
inside 3 nested loops and is executed n³ times, so the time complexity is Θ(n³).
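The triple loop translates directly to Python (`inf` encodes a missing edge; the input matrix is copied so D(0) = W is not modified in place):

```python
from math import inf

def floyd(w):
    """All-pairs shortest path distances; w is an n-by-n weight matrix."""
    n = len(w)
    d = [row[:] for row in w]          # D(0) = W
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

After iteration k, d[i][j] holds the shortest i-to-j distance using only intermediate vertices from {0, ..., k}.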
3. Warshalls algorithm
Basic concepts
Digraph: The graph in which all the edges are directed then it is called digraph.
Algorithm
Analysis
Time complexity: Θ(n³)
The basic operation is the computation of R(k)[i, j]. This operation is located within 3 nested for loops.
Memory functions
While solving a recurrence relation using the plain dynamic programming approach, common subproblems may
be solved more than once, which makes the solution inefficient. We should instead solve only the
necessary subproblems. Hence we can define the goal of a memory function as: solve only the
subproblems that are necessary, and solve each of them only once. Memoization is a way to deal
with overlapping subproblems in dynamic programming.
Definition:
Given a set of n items of known weights w1, ..., wn and values v1, ..., vn and a knapsack of capacity
W, the problem is to find the most valuable subset of the items that fits into the knapsack.
Step 1:
Identify the smaller subproblems. If the items are labeled 1..n, then a subproblem would be to find an
optimal solution for Sk = {items labeled 1, 2, ..., k} with a given remaining capacity.
Step 2:
Recursively define the value of an optimal solution in terms of solutions to smaller problems:
V[i, j] = max(V[i-1, j], vi + V[i-1, j-wi]) if j >= wi, and V[i, j] = V[i-1, j] otherwise.
Initial conditions: V[0, j] = 0 for j >= 0 and V[i, 0] = 0 for i >= 0.
Step 3:
Bottom up computation using iteration
Question:
Apply bottom-up dynamic programming algorithm to the following instance of the knapsack problem
Capacity W= 5
Solution:
What is the composition of the optimal subset?
The composition of the optimal subset is found by tracing back the computations for the entries in the
table.
Efficiency:
O( n * W )
Algorithm
For i = 1 to n do
For j = 0 to W do
compute V[i, j] using the recurrence above
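The bottom-up table filling can be sketched in Python. The item data below (weights and values) is an assumed illustration, since the original instance table did not survive; only the capacity W = 5 comes from the notes:

```python
def knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack; V[i][j] = best value using the first i items at capacity j."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]                      # item i not taken
            if weights[i - 1] <= j:                    # item i fits: try taking it
                V[i][j] = max(V[i][j], values[i - 1] + V[i - 1][j - weights[i - 1]])
    return V

weights, values = [2, 1, 3, 2], [12, 10, 20, 15]       # assumed example items
V = knapsack(weights, values, 5)
best = V[len(weights)][5]                              # optimal value for W = 5
```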
Algorithm MFKnapsack(i, j)
// Problem description: implementation of the memory function method for the knapsack problem.
if F[i, j] < 0 then
if j < Weights[i] then F[i, j] ← MFKnapsack(i-1, j)
else F[i, j] ← max(MFKnapsack(i-1, j), Values[i] + MFKnapsack(i-1, j - Weights[i]))
return F[i, j]
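The same recurrence solved top-down with a memory function, in Python (the item data is again an assumed illustration):

```python
def mf_knapsack(weights, values, W):
    """Memory-function (memoized) 0/1 knapsack: solve only the needed subproblems, once."""
    n = len(weights)
    # F[i][j] = -1 marks "not yet computed"; row 0 is the base case (no items -> value 0)
    F = [[0] * (W + 1)] + [[-1] * (W + 1) for _ in range(n)]

    def solve(i, j):
        if F[i][j] < 0:                       # compute only when not already stored
            if j < weights[i - 1]:
                F[i][j] = solve(i - 1, j)
            else:
                F[i][j] = max(solve(i - 1, j),
                              values[i - 1] + solve(i - 1, j - weights[i - 1]))
        return F[i][j]

    return solve(n, W)

best = mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)   # assumed example items
```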
---------------------------------------------------------------------------------------------------------------------------
Problem
---------------------------------------------------------------------------------------------------------------------------
Greedy Algorithm
A variety of problems have n inputs and require us to obtain a subset that satisfies some constraints. Any
subset that satisfies the constraints is called a feasible solution. We need to find a feasible solution
that either maximizes or minimizes a given objective function. A feasible solution that does this is
called an optimal solution.
General method
Algorithm Greedy(a, n)
{ solution ← Ø
for i = 1 to n do
{ x ← Select(a)
if Feasible(solution, x) then
solution ← Union(solution, x) }
return solution }
Explanation
The Select function selects an input from a[ ] and removes it; the selected input's value is
assigned to x.
Feasible is a Boolean-valued function that determines whether x can be included in
the solution vector.
The function union combines x with the solution and updates the objective function.
The function greedy describes the essential way that a greedy algorithm will look, once a
particular problem is chosen and the functions select , feasible and union are properly
implemented.
Optimal solutions
Approximate solutions
Spanning tree: a spanning tree of a graph G is a subgraph which is a tree and contains
all the vertices of G without any circuit.
MST: an MST of a weighted connected graph G is a spanning tree with minimum (smallest) weight.
Weight of the tree: It is defined as the sum of weights of all its edges.
Applications
1. Cluster Analysis
2. Handwriting recognition
3. Image segmentation
4. Designing efficient routing algorithms
Prim's algorithm
Steps:
Start with a tree T1 consisting of one (any) vertex and grow the tree one vertex at a time to
produce the MST through a series of expanding subtrees T1, T2, …, Tn.
On each iteration, construct Ti+1 from Ti by adding the vertex not in Ti that is closest to those
already in Ti (this is a greedy step!).
Stop when all vertices are included.
Example:
Algorithm
tree[0] ← 1 // take the initial vertex
for k = 1 to nodes-1 do
{ mindist ← ∞
for i = 0 to nodes-1 do
{ for j = 0 to nodes-1 do
{ if G[i,j] ≠ 0 and ((tree[i] and not tree[j]) or (not tree[i] and tree[j])) then
{ if (G[i,j] < mindist) then
{ mindist ← G[i,j]
v1 ← i
v2 ← j }}}}}
Analysis
Efficiency = Θ(|V|²) for the adjacency-matrix representation.
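A runnable sketch of the adjacency-matrix version in Python (the example graph is invented for illustration):

```python
def prim(G):
    """Prim's MST on an adjacency matrix G (0 means no edge). Returns (weight, edges)."""
    n = len(G)
    in_tree = [False] * n
    in_tree[0] = True                 # start from an arbitrary vertex
    total, edges = 0, []
    for _ in range(n - 1):
        mindist, v1, v2 = float('inf'), -1, -1
        # scan all edges crossing the cut between tree and non-tree vertices
        for i in range(n):
            for j in range(n):
                if G[i][j] and in_tree[i] != in_tree[j] and G[i][j] < mindist:
                    mindist, v1, v2 = G[i][j], i, j
        in_tree[v1] = in_tree[v2] = True
        total += mindist
        edges.append((v1, v2))
    return total, edges

G = [[0, 1, 3, 4],
     [1, 0, 2, 0],
     [3, 2, 0, 1],
     [4, 0, 1, 0]]
weight, mst = prim(G)   # MST uses edges of weight 1, 2, 1, so the total is 4
```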
----------------------------------------------------------------------------------------------------------------------
Kruskal's algorithm
Step 1: Sort the edges in nondecreasing order of their weights.
Step 2: Add the next smallest-weight edge to the forest if it will not cause a cycle.
Example
Algorithm
Algorithm Kruskals()
// Output: prints the spanning tree with the total cost of the SPT
count ← 0, k ← 0, sum ← 0
for i = 0 to totalnodes do
parent[i] ← i
while (count < totalnodes - 1) do
{ pos ← min(totaledges) // position of the minimum-cost remaining edge
if (pos = -1) then
break
v1 ← G[pos].v1
v2 ← G[pos].v2
i ← find(v1, parent)
j ← find(v2, parent)
if (i ≠ j) then
{ tree[k][0] ← v1; tree[k][1] ← v2; k ← k + 1
sum ← sum + G[pos].cost; count ← count + 1
Union(i, j, parent) }
G[pos].cost ← ∞ }
if (count = totalnodes - 1) then print the tree and sum
Analysis
Efficiency = Θ(|E| log |E|), where E is the set of edges in the graph.
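A compact Python sketch with a simple union-find structure (the edge list is invented for illustration):

```python
def find(parent, i):
    """Find the root of i's component."""
    while parent[i] != i:
        i = parent[i]
    return i

def kruskal(n, edges):
    """Kruskal's MST: edges is a list of (weight, u, v). Returns (weight, tree edges)."""
    parent = list(range(n))
    total, tree = 0, []
    for w, u, v in sorted(edges):          # examine edges in nondecreasing weight order
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                       # accept the edge only if it creates no cycle
            parent[ru] = rv                # union the two components
            total += w
            tree.append((u, v))
    return total, tree

edges = [(1, 0, 1), (2, 1, 2), (1, 2, 3), (3, 0, 2), (4, 0, 3)]
weight, mst = kruskal(4, edges)            # total MST weight is 4
```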
---------------------------------------------------------------------------------------------------------------------------
Dijkstra's algorithm
It is a popular algorithm for finding shortest paths, known as the single-source shortest path algorithm.
For a given vertex called the source, the shortest path to all other vertices is obtained.
In this process, the algorithm first finds the shortest path from the source to the vertex
nearest to it, then to the second nearest, and so on.
Example:
Algorithm
for i = 0 to totalnodes-1 do
{ dist[i] ← cost[source, i]; S[i] ← 0; path[i] ← source }
S[source] ← 1
for i = 1 to totalnodes-1 do
{ mindist ← ∞
for j = 0 to totalnodes-1 do
{ if (S[j] = 0 and dist[j] < mindist) then
{ mindist ← dist[j]
v1 ← j }}
S[v1] ← 1
for v2 = 0 to totalnodes-1 do
{ if (S[v2] = 0) then
if (dist[v1] + cost[v1, v2] < dist[v2]) then
{ dist[v2] ← dist[v1] + cost[v1, v2]
path[v2] ← v1 // next selected destination vertex with shortest distance; all such vertices are
// accumulated in array path[] }}}
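The same idea in Python using a priority queue (the adjacency list below is an invented example):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps vertex -> list of (neighbour, weight)."""
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    path = {source: source}            # predecessor of each vertex on its shortest path
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                # stale queue entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                path[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, path

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
dist, path = dijkstra(adj, 0)          # shortest 0 -> 3 goes through 2 and 1, cost 4
```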
----------------------------------------------------------------------------------------------------------------------
Huffman trees
Huffman trees are constructed for encoding a given text of n characters. While encoding a given text, each
character is associated with some bit sequence; such a bit sequence is called a code word.
Encoding types
Fixed length encoding: It is a kind of encoding in which each character is associated with a bit
string of some fixed length.
Variable length encoding: It is a kind of encoding in which each character is associated with a
code word of different length.
Example:
Suppose, for example, that we have six events with names and probabilities given
in the table below.
Our first step is to order these from highest (on the left) to lowest (on the right)
probability as shown in the following figure, writing out next to each event its
probability, since this value will drive the process of constructing the code.
Figure 7.6: Preparing for Huffman code construction. List all events in descending
order of probability.
Now we perform a construction process in which we will pair events to form a new
single combined event which will replace the pair members. This step will be
repeated many times, until there are no more pairs left.
First we find the two events with least combined probability. The first time we do
this, the answer will always be the two right hand events. We connect them
together, as shown in Figure 7.2 calling this a combined event (EF in this case)
and noting its probability (which is the sum of those of E and F in this case.) We
also place a 0 next to the left hand branch and a 1 next to the right hand branch of
the connection of the pair as shown in the figure. The 0 and 1 make up part of the
code we are constructing for these elements.
Figure: Preparing for Huffman code construction. List all events in descending
order of probability.
Now we repeat the last step, dealing only with the remaining events and the
combined event. This time combining C and D creates a combined event with less
probability than combining any others.
Again we repeat the process. This time, combining the combined events CD and
EF creates the new combined event with the least probability.
Having finished our connection tree, we are ready to read off the diagram the
codes that we will associate with each of the original events. To obtain the code,
we start at the top level of the tree and make our way to the event we wish to code.
The series of 0's and 1's we encounter along the way on the branches of the tree
comprises our code. Doing so for each event in this case yields the following result.
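The construction can be sketched with a heap in Python. The probabilities below are an assumed illustration, since the original table of six events did not survive:

```python
import heapq

def huffman_codes(probs):
    """Build a Huffman tree and return the code word for each symbol."""
    # heap entries: (probability, tiebreak, tree); a tree is a symbol or a (left, right) pair
    heap = [(p, i, sym) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)      # the two least-probable events
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, count, (t1, t2)))  # combine them
        count += 1
    codes = {}
    def walk(tree, prefix):                  # read codes off the tree: 0 left, 1 right
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            codes[tree] = prefix
    walk(heap[0][2], '')
    return codes

probs = {'A': 0.3, 'B': 0.25, 'C': 0.2, 'D': 0.1, 'E': 0.1, 'F': 0.05}  # assumed
codes = huffman_codes(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)   # expected code length: 2.4 bits
```

More probable events always receive code words at least as short as less probable ones.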
Applications
UNIT IV ITERATIVE IMPROVEMENT 9
The Simplex Method-The Maximum-Flow Problem Maximum Matching in Bipartite Graphs- The
Stable marriage Problem
Simplex method
Before understanding simplex method, we will understand the linear programming concept. The
standard form of linear programming is p = ax+by+cz.
Linear programming:
It is a problem in which we have to find the maximum value of a linear objective function. The
desired largest value of the objective function is called the optimal value, and a collection of
values of x, y, z that produces it is an optimal solution. The variables x, y, z are called decision variables.
A feasible solution is a set of values for the decision variables that satisfies all of the
constraints in an optimization problem.
A basic solution for which all variables are non-negative is called a basic feasible solution.
To construct the initial simplex table.
We write the equation in the form:
-c1x1 - c2x2 - … - cnxn + 0·s1 + 0·s2 + … + Z = 0
The basic variables are s1, s2 and s3. Non basic variables are x, y, z.
To construct basic feasible solution.
After setting up the initial simplex table the next task is to check the optimality then if current
solution is not optimal, improve the current solution.
Entering variable: the most negative entry in the bottom row of the table.
Departing variable: the row with the smallest nonnegative ratio RHS/aij in the column determined by
the entering variable.
The intersection of the entering variable's column and the departing variable's row is called the pivot.
Applying the Gauss-Jordan elimination method.
Gauss-Jordan elimination method: in linear algebra, Gaussian elimination (also known as
row reduction) is an algorithm for solving systems of linear equations. It is usually understood
as a sequence of operations performed on the corresponding matrix of coefficients.
Next, we improve the current solution.
Example1
Solution
Steps
x1 + x2 + s1 = …
5x1 + 2x2 + s2 = …
3x1 + 8x2 + s3 = …
2. To build initial basic feasible solution.
----------------------------------------------------------------------------------------------------------------------
Example 2
Maximize Z = 6x1 + 8x2 subject to the constraints x1 + x2 + x3 = 10, 2x1 + 3x2 + x4 = 25,
x1 + 5x2 + x5 = 35, x1, x2, x3, x4, x5 >= 0.
Solution
Steps
x1 + x2 + x3 = 10
2x1 + 3x2 + x4 = 25
x1 + 5x2 + x5 = 35
2. To build initial basic feasible solution.
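For a two-variable LP like Example 2, the optimum can be cross-checked by enumerating the corner points of the feasible region. A small Python sketch (this is a brute-force check of the answer, not the simplex method itself):

```python
from itertools import combinations

# Constraints of Example 2 in the form a*x1 + b*x2 <= r
# (the last two encode x1 >= 0 and x2 >= 0)
constraints = [(1, 1, 10), (2, 3, 25), (1, 5, 35), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

best = None
for c1, c2 in combinations(constraints, 2):
    p = intersect(c1, c2)
    if p is None:
        continue
    x1, x2 = p
    # keep only feasible corner points
    if all(a * x1 + b * x2 <= r + 1e-9 for a, b, r in constraints):
        z = 6 * x1 + 8 * x2
        if best is None or z > best[0]:
            best = (z, x1, x2)
# the maximum is Z = 70 at x1 = 5, x2 = 5
```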
88
Maximum-flow problem
It can be defined by a digraph G = (V, E) with source s ∈ V and sink t ∈ V, and a nonnegative integer
capacity c(e) for each e ∈ E, i.e. c(u, v) >= 0. If (u, v) ∉ E then c(u, v) = 0.
Properties
Capacity constraint: for all u, v ∈ V, the flow on edge (u, v) is always less than or
equal to the capacity of that edge, i.e. f(u, v) <= c(u, v).
Skew symmetry: for all u, v ∈ V, f(u, v) = -f(v, u), which accounts for reverse edges.
Flow conservation: for any vertex v except s and t, Σu∈V f(u, v) = 0. The value of
the flow can be defined as |f| = Σv∈V f(s, v).
Ford Fulkerson method it works for solving max flow problem. It is an iterative method.
Residual network: suppose we have a flow network G = (V, E) with source s and sink
t, and every edge (u, v) has a flow/capacity pair. The graph with residual capacities
cf(u, v) = c(u, v) - f(u, v) is called the residual network Gf.
Augmenting path: a simple path from source s to t in the residual network Gf.
Cuts in a flow network: a cut is a collection of arcs such that if they are removed there is
no path from s to t.
Algorithm
Example
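A minimal Edmonds-Karp sketch (Ford-Fulkerson with BFS-chosen augmenting paths) in Python; the capacity matrix is an invented example:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest paths in the residual network."""
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:   # residual capacity left
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                         # no augmenting path: flow is maximum
        bottleneck, v = float('inf'), t
        while v != s:                           # find the bottleneck on the path
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                           # update residual capacities
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# nodes: 0 = source s, 3 = sink t
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
f = max_flow(cap, 0, 3)   # the maximum flow value is 5
```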
---------------------------------------------------------------------------------------------------
Maximum matching in bipartite graphs
The goal is to pair the vertices from two sets, e.g., if there are m applicants for jobs and there are n jobs, then
each applicant must be paired with some job. It is easy to represent the elements of such problems
using a graph with edges between the vertices of two distinct sets.
Bipartite graph
The graph G = (V, E) in which the vertex set V is divided into two disjoint sets X and Y in such a way
that every edge e ∈ E has one end point in X and the other end point in Y.
Example
Matching: a matching M is a subset of edges such that each node in V appears in at most one edge
in M. In other words, a matching in a graph is a subset of edges such that no two edges share a vertex.
Two-colourable graph: a graph that can be coloured with only two colours such that no edge connects
two vertices of the same colour.
Alternating path: It is a path in G, such that for every pair of subsequent edges one of them is
matching pair M and other is not.
Augmenting path: It is a path in graph G, such that it is an alternating path with special property that
its start and end vertices are free or unmatched.
Maximum matching problem: finding a maximum matching in a graph. Let us apply the iterative
improvement technique to maximum matching in bipartite graphs.
Example:
Algorithm
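An augmenting-path sketch in Python for finding a maximum matching in a bipartite graph (the small graph is invented for illustration):

```python
def try_augment(u, adj, match_v, seen):
    """Try to find an augmenting path starting from free vertex u on the left side."""
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_v.get(v) is None or try_augment(match_v[v], adj, match_v, seen):
                match_v[v] = u
                return True
    return False

def max_matching(adj):
    """adj maps each left vertex to its list of right neighbours."""
    match_v = {}
    size = 0
    for u in adj:                       # one augmenting-path search per left vertex
        if try_augment(u, adj, match_v, set()):
            size += 1
    return size, match_v

adj = {0: [0, 1], 1: [0], 2: [1, 2]}
size, match_v = max_matching(adj)       # a perfect matching of size 3 exists here
```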
-----------------------------------------------------------------------------------------------------------------
Stable marriage problem
Goal: to find a stable matching between two sets (men and women) with various preferences for
each other.
Example: consider that there are 4 men and 4 women. The men are named A, B, C, D and
the women are named X, Y, Z, and W.
Men's preference lists:
A: Y, W, X, Z
B: W, X, Y, Z
C: Y, Z, W, X
D: Y, Z, W, X
Women's preference lists:
X: B, C, D, A
Y: C, D, B, A
Z: C, B, A, D
W: C, B, D, A
STEPS:
1. Initially each man proposes to the woman of his first choice: A proposes to Y, B proposes to W.
Mens preferences X Y Z W
A ?
B ?
Mens preferences X Y Z W
A X
B ?
3. Now woman W has to decide between B and A. As she prefers B to A, they (i.e. B
and W) get engaged.
Mens preferences X Y Z W
A Tick
B Tick
C Tick
D Tick
Procedure:
Initially all the men and all the women are free, but having their own preference list
with them.
Propose one of the freeman m proposes the woman w. This woman is normally the
highest preference one from his preference list.
Response if the woman w is free then and she accepts the proposal of matched m. If
she is not free, she compares the m with the current mate. If she prefers m than the
current mate then she accepts his proposal making former mate free otherwise simply
rejects ms proposal. Finally, returns the matching pairs of (m, w).
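The procedure above is the Gale-Shapley algorithm; a Python sketch run on the example preference lists from these notes:

```python
men = {'A': ['Y', 'W', 'X', 'Z'], 'B': ['W', 'X', 'Y', 'Z'],
       'C': ['Y', 'Z', 'W', 'X'], 'D': ['Y', 'Z', 'W', 'X']}
women = {'X': ['B', 'C', 'D', 'A'], 'Y': ['C', 'D', 'B', 'A'],
         'Z': ['C', 'B', 'A', 'D'], 'W': ['C', 'B', 'D', 'A']}

# rank[w][m]: position of man m in woman w's preference list (lower is better)
rank = {w: {m: i for i, m in enumerate(prefs)} for w, prefs in women.items()}
next_choice = {m: 0 for m in men}   # index of the next woman each man will propose to
engaged = {}                        # woman -> her current fiance
free_men = list(men)

while free_men:
    m = free_men.pop()
    w = men[m][next_choice[m]]      # highest-preference woman not yet proposed to
    next_choice[m] += 1
    if w not in engaged:
        engaged[w] = m              # w is free: she accepts
    elif rank[w][m] < rank[w][engaged[w]]:
        free_men.append(engaged[w]) # w prefers m: her former mate becomes free
        engaged[w] = m
    else:
        free_men.append(m)          # w rejects m; he proposes to his next choice
```

The man-proposing version always produces the same (man-optimal) stable matching regardless of proposal order: here A-X, B-W, C-Y, D-Z.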
Lower-bound arguments
Examples in which lower bounds of the corresponding algorithms can be obtained:
Number of comparisons needed to sort an array of size n.
Number of comparisons that are required to search a desired element from a sorted
array.
Number of additions needed to add two n * n matrices.
To obtain the efficiency of an algorithm there are two steps:
First, establish the asymptotic efficiency class for the given problem.
Then check where the class of the given problem fits in the hierarchy of efficiency
classes.
In other words, check whether the problem lies in the linear, quadratic, logarithmic or
exponential category of efficiency classes.
Example: for binary search the best lower bound is Ω(log n) and the fastest searching
algorithm has an efficiency of O(log n). That means the bound is tight.
Various methods
Trivial lower bound: a simple method for obtaining lower bounds. Just count
how much input data the algorithm reads and how much output it produces; such a
count gives a trivial lower bound. Example: generating all permutations of n
numbers is Ω(n!), since the size of the output is n!. For the product of two n × n
matrices, the trivial lower bound is Ω(n²), since the algorithm must read 2n² inputs.
Information-theoretic arguments: based on the amount of information
produced by the algorithm. Example: the game of guessing a number. First of all, think
of some number from 1 to n. The answer is found by asking questions whose answers can be
either yes or no. Let us encode the number by log2 n bits; each answer gives
information about one bit. Instance:
Is the first bit zero? no -> 1
Is the second bit zero? yes -> 0
Is the third bit zero? yes -> 0
Is the fourth bit zero? yes -> 0; the number is 8, encoded in binary form as 1000. To obtain a lower
bound by this method, a mechanism called decision trees is used, e.g., for sorting
and searching.
---------------------------------------------------------------------------------------------------------------------------
3. Decision tree
All sorting and searching algorithms are based on comparisons, usually made on the
input items. A model prepared to study the performance of such algorithms
is called a decision tree.
In decision trees,
Internal nodes represent comparisons made between input items.
External nodes or leaf nodes represent the results of the algorithm.
There may be a case in which we get more leaf nodes than actual outputs.
Example: consider a decision tree drawn for finding largest number among the 3
variables x, y, z.
Simple graphical notation is used: nodes for conditions, arrows for transitions, and leaves for outcomes/results.
Draw a decision tree
From a decision tree the following observation can be made: if h is the height of the tree and n
the total number of leaf nodes, then h >= log2 n. This inequality gives a lower bound on the
height of a binary decision tree, obtained using the information about the output of the
algorithm.
Decision trees for sorting algorithms
We can derive the LB from DT for sorting. E.g., binary insertion sort. We will compare 3
elements and based on their values those elements can be arranged in ascending order.
Decision tree for sorting three elements a, b, c:
First iteration: compare a with b.
If a <= b, compare b with c (second iteration): if b <= c, the result is a, b, c; otherwise
compare a with c, giving a, c, b (when a <= c) or c, a, b (when a > c).
If a > b, compare a with c: if a <= c, the result is b, a, c; otherwise compare b with c,
giving b, c, a (when b <= c) or c, b, a (when b > c).
Time complexity
Theorem: any decision tree sorting n elements has height Ω(n log n).
Proof:
Decision trees for searching
When a particular element has to be searched for in a sorted list of elements, a decision tree can be
drawn. We consider the example of binary search and obtain the lower bound using a decision tree;
the worst case for binary search is C(n) = ⌊log2 n⌋ + 1 = ⌈log2(n + 1)⌉.
Time complexity: the number of leaves (ranges where the element is not found) is N + 1. The height of a
binary tree with N + 1 leaves is at least ⌈log2(N + 1)⌉. Therefore, the minimum number of comparisons
required by any comparison-based searching algorithm is ⌈log2(N + 1)⌉, and the lower bound Ω(log N) is tight.
----------------------------------------------------------------------------------------------------------
Backtracking:
What is backtracking?
A dead node is a generated node which is not to be expanded further or all of whose
children have been generated.
Methods of generating state space tree
Backtracking - in this method, as soon as a new child c of the current E-node N is
generated, this child becomes the new E-node.
Branch and bound - E-node will remain as E-node until it is dead.
3.1 N-queens problem
Stated as follows: consider an n × n chessboard on which we have to place n queens so
that no two queens attack each other by being in the same row, the same column, or on the
same diagonal.
Example
Now, we start with empty chessboard.
Place Q1 in the first possible position of its row .ie on first row and first column.
Q
This is a dead end because the third queen cannot be placed in the next column, as there is
no acceptable position for Q3. Use backtracking and place queen 2 at position (2, 4)
instead.
Q
Q
8 queens solution
Algorithm
Algorithm Queen(row, n)
// Problem description: to implement the n-queens problem.
// Input: number of queens n
for column ← 1 to n do
if place(row, column) then
{ board[row] ← column
if (row = n) then // a full solution has been found
PrintBoard(n)
// printing the board configuration
else // try the next queen in the next row
Queen(row + 1, n) }
Algorithm
Algorithm Place(row, column)
// Problem description: for placing a queen at an appropriate position
// Input: row and column of the chessboard
// Returns 0 for a conflicting row and column, 1 for no conflict.
for i ← 1 to row - 1 do
{ // checking for column and diagonal conflicts
if (board[i] = column) then return 0
else if (abs(board[i] - column) = abs(i - row)) then
return 0 }
// no conflicts, hence the queen can be placed
return 1
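The two procedures above combine into a short recursive backtracking search; a Python sketch:

```python
def solve_n_queens(n):
    """Return all solutions; each solution lists the queen's column for rows 0..n-1."""
    solutions, board = [], []

    def place_ok(row, col):
        # no queen already placed shares this column or a diagonal
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(board))

    def backtrack():
        row = len(board)
        if row == n:                      # all n queens placed: record the solution
            solutions.append(board[:])
            return
        for col in range(n):
            if place_ok(row, col):
                board.append(col)         # place the queen and go one row deeper
                backtrack()
                board.pop()               # backtrack: remove the queen

    backtrack()
    return solutions

count = len(solve_n_queens(6))   # the 6-queens problem has 4 solutions
```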
Hamiltonian circuit problem
Problem description
Given an undirected graph, find a Hamiltonian circuit: a cycle that starts at some vertex, visits each
vertex of the graph exactly once, and returns to the starting vertex.
Example
1 2 3 4
8 7 6 5
->1,3,4,5,6,7,8,2,1 and
->1,2,8,7,6,5,4,3,1.
Algorithm
Algorithm Hamiltonian (k)
// Problem description: Recursive backtracking algorithm for finding Hamiltonian circuit.
// G [1 :n ,1:n]adjacency matrix used for G
// The cycle begins from the vertex 1
{
repeat
{ NextValue(k) // assign to x[k] the next vertex
if (x[k] = 0) then return;
if (k = n) then
print (x)
else
Hamiltonian(k + 1);
} until (false);
}
Algorithm Nextvalue (k)
{
Repeat
{
X [k]=(X [k]+1) mod (n+1); //next vertex
If (X [k]=0) then return;
If (G [X [k-1], X [k]] ≠ 0) then
{
For j=1 to k-1 do if (X [j]=X [k]) then break;
// Check for distinction.
If (j=k) then //if true then the vertex is distinct.
If ((k<n) or ((k=n) and G [X [n], X [1]] ≠ 0)) then return;
}
} Until (false);
}
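A Python sketch of the same backtracking idea (the small graph is invented for illustration):

```python
def hamiltonian_cycles(adj):
    """Find all Hamiltonian circuits starting and ending at vertex 0.
    adj is a symmetric 0/1 adjacency matrix."""
    n = len(adj)
    cycles, path = [], [0]
    used = [False] * n
    used[0] = True

    def extend():
        if len(path) == n:
            if adj[path[-1]][0]:           # an edge back to the start closes the circuit
                cycles.append(path + [0])
            return
        for v in range(n):
            if not used[v] and adj[path[-1]][v]:
                used[v] = True
                path.append(v)
                extend()
                path.pop()                 # backtrack
                used[v] = False

    extend()
    return cycles

# a 5-cycle 0-1-2-3-4-0 with an extra chord 0-2
adj = [[0, 1, 1, 0, 1],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [1, 0, 0, 1, 0]]
cycles = hamiltonian_cycles(adj)   # the 5-cycle and its reversal: 2 circuits
```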
Subset-sum problem
Problem statement: let S = {s1, s2, …, sn} be a set of positive integers; we have to find a subset whose
sum is equal to a given positive integer d. Sort the set's elements in ascending order: s1 <= s2 <= … <= sn.
Example
Algorithm
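A backtracking sketch in Python; the instance S = {3, 5, 6, 7} with d = 15 is an assumed illustration, not one from the notes:

```python
def subset_sums(s, d):
    """Find all subsets of s whose elements sum to d, by backtracking."""
    s = sorted(s)
    results = []

    def backtrack(i, chosen, total):
        if total == d:                     # a solution subset has been found
            results.append(chosen[:])
            return
        # prune: elements are sorted, so if even the smallest remaining
        # element overshoots d, no extension of this node can succeed
        if i == len(s) or total + s[i] > d:
            return
        chosen.append(s[i])
        backtrack(i + 1, chosen, total + s[i])   # include s[i]
        chosen.pop()
        backtrack(i + 1, chosen, total)          # exclude s[i]

    backtrack(0, [], 0)
    return results

solutions = subset_sums({3, 5, 6, 7}, 15)   # only {3, 5, 7} sums to 15
```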
Branch and bound problem
Introduction
It is a general algorithmic method for finding optimal solutions of various optimization problems. It is
a general optimization technique that applies where the greedy method and dynamic programming
fail. We will discuss how bonding functions help in determining the expansion of a node in a state
space tree.
General method
A state space tree is built, and all the children of E-nodes (a live node whose children are
currently being generated) are generated before any other node can become a live node.
For exploring new nodes either a BFS or DFS can be used.
In branch bound technique, BFS like state space tree will be called FIFO search.
This is because the list of live node is FIFO list (queue).
DFS like state space tree search will be called LIFO search, because the list of live node is
LIFO list (stack).
In this method a space tree of possible solutions is generated.
Then partitioning is done at each node of the tree.
We compute lower bound and upper bound at each node.
This computation leads to selection of answer node
Bounding functions are used to avoid the generation of subtrees that do not contain an answer
node.
General algorithm
Procedure B&B()
begin
E: nodepointer;
E := new(node); -- this is the root node which
-- is the dummy start node
H: heap; -- A heap for all the live nodes
-- H is a min-heap for minimization problems,
-- and a max-heap for maximization problems.
while (true) do
if (E is a final leaf) then
-- E is an optimal solution
print out the path from E to the root;
return;
endif
Expand(E);
if (H is empty) then
report that there is no solution;
return;
endif
E := delete-top(H);
endwhile
end
Procedure Expand(E)
begin
- Generate all the children of E;
- Compute the approximate cost value CC of each child;
- Insert each child into the heap H;
end
The selection rule for the next E-node in FIFO or LIFO branch-and-bound is sometimes
blind, i.e. the selection rule does not give any preference to a node that has a very good
chance of leading the search to an answer node quickly.
The search for an answer node can often be sped up by using an intelligent ranking
function, also called an approximate cost function Ĉ.
Branching: A set of solutions, which is represented by a node, can be partitioned into mutually
exclusive sets. Each subset in the partition is represented by a child of the original node.
Lower bounding: An algorithm is available for calculating a lower bound on the cost of any solution
in a given subset.
Searching: Least-cost search (LC)
Cost and approximation
Each node X in the search tree is associated with a cost C(X):
C(X) = cost of reaching the current node X (the E-node) from the root + the cost of reaching an
answer node from X, i.e.
C(X) = g(X) + h(X)
Initial State        Final State
1 2 3                1 2 3
5 _ 6                5 8 6
7 8 4                7 _ 4
Algorithm
typedef struct list_node_t {
    struct list_node_t *next;
    struct list_node_t *parent; // helps in tracing the path when an answer is found
    float cost;
} list_node_t;

algorithm LCSearch(t)
{ // Search t for an answer node
  // Input: root node of tree t
  // Output: path from an answer node to the root
  if (*t is an answer node) { print(*t); return; }
  E = t; // E-node
  initialize the list of live nodes to be empty;
  while (true)
  { for each child x of E
    { if (x is an answer node) { print the path from x to t; return; }
      Add(x);        // add x to the list of live nodes
      x->parent = E; // pointer for the path to the root
    }
    if (there are no more live nodes) { print("No answer node"); return; }
    E = Least(); // find a live node with the least estimated cost;
                 // the found node is deleted from the list of live nodes
  }
}
1.1 Bounding
Bounding functions are used to avoid the generation of subtrees that do not contain
the answer nodes. In bounding lower and upper bounds are generated at each node. A
cost function c(x) is such that c(x) <= C (x) is used to provide the lower bound on
solution obtained from any node x.
Let upper be an upper bound on the cost of a minimum-cost solution. Then all the live nodes with
c(x) > upper can be killed. At the start, upper is set to infinity. After generating the
children of the current E-node, upper can be updated by the minimum-cost answer node each
time a new answer node is obtained.
Job sequencing with deadlines problem
Problem statement
Let there be n jobs with different processing times. With only one processor, the jobs
are to be executed on or before their given deadlines. Each job i is given by a tuple (pi, di, ti),
where pi is the penalty, di the deadline, and ti the processing time required by the job.
Example: n=4,
Job pi di ti
1 5 1 1
2 10 3 2
3 6 2 1
4 3 1 1
Solution
Steps
A state space tree can be drawn for the proper selection of jobs to create J. There are two
ways to formulate the state space tree: fixed tuple size and variable tuple size.
The cost function is c(x) = Σ pi over i < m with i ∉ Sx, where m = max{ i | i ∈ Sx };
the upper bound is u(x) = Σ pi over i ∉ Sx.
At node 1: c(x) = 0, u = Σ(i=1 to n) pi = 5 + 10 + 6 + 3 = 24.
At node 2 (x1 = 1): c(x) = 0; u = sum of pi excluding p1 = 5, i.e.
Σ(i=2 to n) pi = 10 + 6 + 3 = 19, so u = 19.
At node 3 (x1 = 2): c(x) = p1 = 5; u = sum of pi excluding p2 = 10,
i.e. 5 + 6 + 3 = 14.
1.2 Assignment problem
Problem statement: there are n people to whom n jobs are to be assigned so that the total
cost of the assignment is as small as possible. The instance is specified by an n × n matrix, and one
element has to be selected from each row in such a way that no two selected elements
are in the same column and their sum is as small as possible.
Example:
J1 J2 J3 J4 PERSON
10 2 7 8 A
6 4 3 7 B
5 8 1 8 C
7 6 10 4 D
Solution
Steps: if we select the minimum value from each row and add these row minima, we get
2 + 3 + 1 + 4 = 10, so set the lower bound LB = 10. Now the state space tree can be drawn
step by step:
Start: LB = 10
A→1: LB = 10 + 3 + 1 + 4 = 18
A→2: LB = 2 + 3 + 1 + 4 = 10
A→3: LB = 7 + 3 + 1 + 4 = 15
A→4: LB = 8 + 3 + 1 + 4 = 16
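A best-first branch-and-bound sketch in Python using the row-minima lower bound described above, run on the cost matrix from the notes:

```python
import heapq

cost = [[10, 2, 7, 8],   # person A
        [6, 4, 3, 7],    # person B
        [5, 8, 1, 8],    # person C
        [7, 6, 10, 4]]   # person D
n = len(cost)
row_min = [min(r) for r in cost]

# node: (lower bound, row to assign next, cost so far, columns used so far)
heap = [(sum(row_min), 0, 0, ())]
best = None
while heap:
    lb, row, c, used = heapq.heappop(heap)
    if row == n:                    # leaf with the smallest bound: optimal assignment
        best = (c, used)
        break
    for col in range(n):
        if col not in used:
            nc = c + cost[row][col]
            # bound: cost so far + sum of row minima of the unassigned rows
            nlb = nc + sum(row_min[r] for r in range(row + 1, n))
            heapq.heappush(heap, (nlb, row + 1, nc, used + (col,)))

# best assignment: A->J2, B->J1, C->J3, D->J4 with total cost 13
```

Because the row-minima bound never overestimates, the first complete leaf popped from the heap is guaranteed optimal.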
2. Introduction to NP hard and NP completeness problem
Classification of problems
Decision algorithm: Any problem for which answer is either yes or no is called decision problem.
The algorithm for decision problem is called decision algorithm.
Optimization algorithm: Any problem that involves the identification of optimal cost (max or min)
is called optimization problem. The algorithm for optimization problem is called optimization
algorithm.
Definitions of P: Problems that can be solved in polynomial time. (P stands for polynomial). E.g.,
searching, sorting, all pair shortest path.
Definitions of NP: It stands for non-deterministic polynomial time. e.g., TSP, graph colouring,
knapsack, Hamiltonian circuit problem.
NP complete
NP -hard
NP-complete: a problem D is called NP-complete if D belongs to the class NP and every problem in
NP can be reduced to D in polynomial time.
Non-deterministic algorithm: an algorithm in which every operation is uniquely defined is called a
deterministic algorithm. An algorithm in which every operation may not have a unique result, but
rather a specified set of possibilities for every operation, is called a non-deterministic
algorithm.
Algorithm
{ for i = 1 to n do
{ write(i);
Success(); }
write(0);
Fail(); }
3. Approximation Algorithms
3.1 Approximation for NP Hard Problems
3.2 Traveling Salesman problem
3.3 Knapsack problem.