Divide & Conquer Unit 1

The document discusses the divide and conquer algorithmic approach. It can be summarized in 3 points: 1) Divide and conquer works by dividing a problem into smaller subproblems, solving the subproblems independently, and then combining the solutions to solve the original problem. 2) It has applications in problems like finding maximum/minimum values, matrix multiplication, merge sort, and binary search. 3) The time complexity of divide and conquer algorithms is often O(n log n) or O(n^2) depending on how the problem is divided and combined.

Divide & Conquer

• In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and each sub-problem is solved independently.
• When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible.
• Those "atomic" smallest possible sub-problems are solved. The solutions of all sub-problems are finally merged to obtain the solution of the original problem.
Divide/Break
• This step involves breaking the problem into smaller sub-problems. Sub-problems should
represent a part of the original problem.
• This step generally takes a recursive approach to divide the problem until no sub-problem is
further divisible.
• At this stage, sub-problems become atomic in nature but still represent some part of the actual
problem.

Conquer/Solve
• This step receives many smaller sub-problems to be solved. Generally, at this level, the problems are considered 'solved' on their own.

Merge/Combine
• When the smaller sub-problems are solved, this stage recursively combines them until they formulate a solution of the original problem.
• This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.

Application of the Divide and Conquer Approach

The following are some problems which are solved using the divide and conquer approach:
• Finding the maximum and minimum of a sequence of numbers
• Strassen's matrix multiplication
• Merge sort
• Binary search
ADVANTAGES:
• Solving difficult problems
• Algorithm efficiency
• Parallelism
• Memory access
• Roundoff control

Example of Divide & Conquer (merge sort)
• Let an unsorted array be given.
• Divide the array into two halves.
• Again, divide each subpart recursively into two halves until you get individual elements.
• Now, combine the individual elements in a sorted manner. Here, the conquer and combine steps go side by side.
Control Abstraction of Divide and Conquer

• A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified by other procedures whose precise meanings are left undefined.
• The control abstraction for the divide and conquer technique is DAndC(P), where P is the problem to be solved.
Algorithm DAndC(P)
{
  if SMALL(P) then return S(P);
  else
  {
    divide P into smaller instances P1, P2, ..., Pk, k >= 1;
    apply DAndC to each of these sub-problems;
    return COMBINE(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
  }
}
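The control abstraction above can be made runnable. Below is a minimal Python sketch; the helper names `small`, `solve_small`, `divide`, and `combine` are placeholders introduced here (they are not in the notes), and each concrete algorithm supplies its own versions.

```python
def d_and_c(p, small, solve_small, divide, combine):
    """Generic divide-and-conquer driver for a problem instance p,
    following the DAndC(P) control abstraction."""
    if small(p):                      # Small(P): answer computable without splitting
        return solve_small(p)         # S(P)
    subproblems = divide(p)           # p1, p2, ..., pk with k >= 1
    solutions = [d_and_c(q, small, solve_small, divide, combine)
                 for q in subproblems]
    return combine(solutions)         # Combine(...)

# Illustrative instantiation: summing a list by splitting it in half.
total = d_and_c(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve_small=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
print(total)   # 23
```

Merge sort, binary search, and Strassen's multiplication later in these notes all follow this same skeleton with their own divide and combine steps.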
• DAndC(P): "P" is the problem to be solved.
• Small(P) is a Boolean-valued function which determines whether the input size is small enough that the answer can be computed without splitting. If so, the function S is invoked; otherwise, the problem P is divided into smaller sub-problems.
• These sub-problems P1, P2, ..., Pk are solved by recursive application of DAndC.
• Combine is a function that determines the solution to P using the solutions to the k sub-problems.
Time complexity
• T(n) = g(n) for small n
• T(n) = T(n1) + T(n2) + ... + T(nk) + f(n) otherwise
where
g(n) is the time for small inputs,
T(ni) is the time for each recursive call, and
f(n) is the time for dividing P and combining the solutions.
• If we have a problem of size n, suppose the recursive algorithm divides the problem into a sub-problems, each of size n/b,
where
n = size of input
a = number of subproblems in the recursion
n/b = size of each subproblem (all subproblems are assumed to have the same size)
f(n) = cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions.
Example for merge sort
T(n) = aT(n/b) + f(n)
     = 2T(n/2) + O(n)
where
a = 2 (each time, a problem is divided into 2 subproblems)
n/b = n/2 (the size of each subproblem is half of the input)
f(n) = O(n), the time taken to divide the problem and merge the subproblems.

Solving T(n) = 2T(n/2) + O(n) gives
T(n) ≈ O(n log n)
(To understand this, please refer to the master theorem.)
Solving Recurrence Relations
The recurrence relations can be solved by
1. Substitution Method
2. Masters Method
Substitution Method
• Guess the form of the solution.
• Use mathematical induction to find the constants and show that the solution works.
• We assume the problem size n is fairly large, and we then substitute the general form of the recurrence for each occurrence of the function T on the right-hand side.
EXAMPLE 1
T(n) = T(n-1) + 1 ............(1)
T(0) = 0
Sol: Find T(n-1) by substituting n-1 in place of n in (1):
T(n-1) = T(n-2) + 1
Substitute T(n-1) into (1):
T(n) = (T(n-2) + 1) + 1 = T(n-2) + 2 ............(2)
Find T(n-2): T(n-2) = T(n-3) + 1, so
T(n) = T(n-3) + 3 = T(n-4) + 4 = T(n-5) + 5 = ... = T(n-k) + k
Let k = n:
T(n) = T(n-n) + n = T(0) + n = 0 + n = n = O(n)
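The unrolled result T(n) = n can be checked empirically. This quick sketch (not from the notes) evaluates the recurrence directly and compares it with the closed form:

```python
# Recurrence from Example 1: T(n) = T(n-1) + 1, T(0) = 0.
def T(n):
    return 0 if n == 0 else T(n - 1) + 1

# The substitution method predicts T(n) = n exactly, i.e. O(n).
for n in range(50):
    assert T(n) == n
print(T(49))   # 49
```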
Example 2
T(n) = 1          if n = 1
     = 2T(n-1)    if n > 1
Sol: T(n) = 2T(n-1) ............(1)
Find T(n-1) by substituting n-1 in (1):
T(n-1) = 2T(n-2)
T(n) = 4T(n-2)
T(n-2) = 2T(n-3), so
T(n) = 8T(n-3) = 16T(n-4) = 32T(n-5) = 64T(n-6) = ... = 2^k T(n-k)
Let n-k = 1, i.e. k = n-1:
T(n) = 2^(n-1) T(1) = 2^(n-1) = 2^n / 2 = O(2^n)
Example 3
T(n) = T(n-1) + n
T(0) = 0 (initial condition)
Sol: T(n-1) = T(n-2) + n - 1, so
T(n) = T(n-2) + 2n - 1
T(n-2) = T(n-3) + n - 2, so
T(n) = T(n-3) + 3n - (1 + 2)
T(n-3) = T(n-4) + n - 3, so
T(n) = T(n-4) + 4n - (1 + 2 + 3)
     = T(n-5) + 5n - (1 + 2 + 3 + 4)
     = T(n-6) + 6n - (1 + 2 + 3 + 4 + 5)
     = T(n-k) + kn - (1 + 2 + 3 + ... + (k-1))
     = T(n-k) + kn - k(k-1)/2
Let k = n:
T(n) = T(0) + n·n - n(n-1)/2 = 0 + n^2 - (n^2 - n)/2 = O(n^2)

Example 4
T(n) = 2T(n/2) + n ............(1)
T(1) = 1
Sol: Find T(n/2):
T(n/2) = 2T(n/4) + n/2
T(n) = 2(2T(n/4) + n/2) + n = 4T(n/4) + 2n
T(n/4) = 2T(n/8) + n/4
T(n) = 4(2T(n/8) + n/4) + 2n = 8T(n/8) + 3n = 16T(n/16) + 4n = 32T(n/32) + 5n
     = 2^5 T(n/2^5) + 5n = ... = 2^k T(n/2^k) + kn
Let n/2^k = 1, i.e. n = 2^k and k = log2 n:
T(n) = 2^(log2 n) T(1) + n log2 n = n·1 + n log2 n = O(n log n)
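The closed form n + n log2 n derived above can also be verified numerically for powers of two. A small check (not from the notes, indices illustrative):

```python
# Recurrence from Example 4: T(n) = 2T(n/2) + n, T(1) = 1.
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

# For n = 2^k the substitution method gives T(n) = n*log2(n) + n,
# which is O(n log n).
for k in range(11):
    n = 2 ** k
    assert T(n) == n * k + n
print(T(8))   # 32  (= 8*3 + 8)
```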
Example 5
T(n) = 2T(n/2) + 2, n > 1
T(1) = 0
Sol: T(n/2) = 2T(n/4) + 2
T(n) = 2(2T(n/4) + 2) + 2 = 4T(n/4) + 4 + 2
     = 8T(n/8) + 8 + 4 + 2
     = 16T(n/16) + 16 + 8 + 4 + 2
     = 2^k T(n/2^k) + (2^k + 2^(k-1) + ... + 2)
     = 2^k T(n/2^k) + (2^(k+1) - 2)
Let n/2^k = 1, i.e. k = log2 n:
T(n) = 2^(log2 n)·0 + 2^(log2 n + 1) - 2 = 2n - 2 = O(n)
Example 6
T(n) = 2T(n/2) + n^2, n > 1
T(1) = 0
Master Method
• The Master Method is a direct way to get the solution of a recurrence relation, provided that it is of the following type:

T(n) = aT(n/b) + f(n), where a >= 1 and b > 1

Examples of this form:
• T(n) = 2T(n/2) + 1
• T(n) = 7T(n/2)
Solving using the Master Method
1. Compare the given equation with
   T(n) = aT(n/b) + f(n), where a >= 1 and b > 1.
2. Take h(n) = f(n) / n^(log_b a).
3. Calculate u(n) from h(n) (using the standard u(n) table for this method).
4. Solve T(n) = n^(log_b a) [T(1) + u(n)].

Example: T(n) = 2T(n/2) + cn
1. a = 2, b = 2, f(n) = cn
2. h(n) = f(n) / n^(log_b a) = cn / n^(log_2 2) = cn / n = c = c(log n)^0
3. u(n) = Θ(log n)
4. T(n) = n^(log_b a) [T(1) + u(n)] = n[1 + Θ(log n)] = Θ(n log n)
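A compact way to check such answers is the standard three-case Master Theorem for f(n) = Θ(n^d) — a different but equivalent formulation of the h(n)/u(n) procedure above. The helper name `master` and its output strings are illustrative, not from the notes:

```python
import math

def master(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the three standard
    Master Theorem cases, using c = log_b(a)."""
    c = math.log2(a) / math.log2(b)   # log_b(a)
    if d < c:
        return f"Theta(n^{c:g})"          # recursion-dominated
    if d == c:
        return f"Theta(n^{d:g} log n)"    # balanced case
    return f"Theta(n^{d:g})"              # f(n)-dominated

print(master(2, 2, 1))   # Theta(n^1 log n)  -- merge sort
print(master(1, 2, 0))   # Theta(n^0 log n)  -- i.e. Theta(log n), binary search
print(master(8, 2, 2))   # Theta(n^3)
```

Note this simple polynomial form does not cover recurrences whose f(n) contains log factors; those need the regularity conditions of the full theorem.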

Solve T(n) = 8T(n/2) + n^2 for n > 0, with T(n) = 1 for n = 1.

Solve by using the substitution method and the Master Method:
a. T(n) = 9T(n/3) + n
b. T(n) = 7T(n/2) + 18n^2
Applications
1.Binary search,
2.Quick sort,
3.Merge sort,
4. Strassen’s matrix multiplication.
Binary Search
• Binary Search is process of searching an element in a sorted list.
• The element which is to be searched from the list of elements are
called “key” element.
• The searching is done by dividing the array into two subarrays, using
mid value.
• The “mid” is called Middle Element in the list, which is given as
Mid=(low+high)/2
where low is the lowest position or first element position in the array
list and high is the highest position or last element position in the list.
• Let a[mid] be the mid element of array.

Then there are three conditions that need to be tested while searching the array:
1. If key < a[mid], then search the left subarray.
2. If key > a[mid], then search in the right subarray.
3. If key = a[mid], then the desired element is present in the list.
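The three-way test above can be written directly as a loop. A runnable Python sketch, using 1-based positions to match the worked examples in these notes (the function name is illustrative):

```python
def binary_search(a, key):
    """Return the 1-based position of key in the sorted list a, or 0 if absent."""
    low, high = 1, len(a)
    while low <= high:
        mid = (low + high) // 2
        if key == a[mid - 1]:        # a[mid] in 1-based terms
            return mid
        elif key < a[mid - 1]:
            high = mid - 1           # search the left subarray
        else:
            low = mid + 1            # search the right subarray
    return 0                         # unsuccessful search

print(binary_search([10, 20, 30, 40, 50, 60, 70], 30))   # 3
```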
Example
• Array: 10, 20, 30, 40, 50, 60, 70 (positions 1 to 7), key = 30.
• Step 1: low = 1, high = 7, mid = (1+7)/2 = 4, a[4] = 40.
Compare key 30 with 40: 30 < 40, so search the left subarray (high = 3).
• Step 2: low = 1, high = 3, mid = (1+3)/2 = 2, a[2] = 20.
30 > 20, so search the right subarray (low = 3).
• Step 3: low = 3, high = 3, mid = 3, a[3] = 30. 30 = 30, found at a[3].

Low High Mid A[mid] Compare
1    7    4   40     30 < 40
1    3    2   20     30 > 20
3    3    3   30     30 = 30

Array: 10 20 30 40 50 60 70 80 (a[1] to a[8])
• Key is 70.
• Step 1: mid = (1+8)/2 = 4, a[4] = 40; 70 > 40, so search 50 60 70 80.
• Step 2: mid = (5+8)/2 = 6, a[6] = 60; 70 > 60, so search 70 80.
• Step 3: mid = (7+8)/2 = 7, a[7] = 70; found.

Low High Mid        A[mid] Compare
1    8    4          40     70 > 40
5    8    13/2 = 6   60     70 > 60
7    8    15/2 = 7   70     70 = 70
Array: 2, 4, 5, 6, 8, 9 (a[0] to a[5], zero-based), key = 8.
• Step 1: mid = (low + high)/2 = (0+5)/2 = 2, a[2] = 5; 8 > 5, so search the right subarray 6, 8, 9.
• Step 2: low = 3, high = 5, mid = (3+5)/2 = 4, a[4] = 8; key = a[mid], found.

Example
• arr = [2, 3, 5, 7, 8, 10, 12, 15, 18, 20] (a[0] to a[9]), target = 7.
1. low = 0, high = 9, mid = 4, a[4] = 8; 7 < 8, so high = 3.
2. low = 0, high = 3, mid = 1, a[1] = 3; 7 > 3, so low = 2.
3. low = 2, high = 3, mid = 2, a[2] = 5; 7 > 5, so low = 3.
4. low = 3, high = 3, mid = 3, a[3] = 7; found.
A[] = 1 2 5 6 7 10 24 56 84 100 115 120 131 150 (a[1] to a[14]), key = 150.

Low High Mid              A[mid] Compare
1    14   (1+14)/2 = 7    24     150 > 24
8    14   11              115    150 > 115
12   14   13              131    150 > 131
14   14   14              150    found

Iterative Binary Search

Algorithm Ibinsearch(A, n, key)
{
  low := 1; high := n;
  while (low <= high)
  {
    mid := (low + high)/2;
    if (key == a[mid]) then return mid;
    else if (key < a[mid]) then high := mid - 1;
    else low := mid + 1;
  }
  return 0;   // unsuccessful search
}
Trace: A = 2, 4, 6, 8, 9, 10 (a[1] to a[6]), key = 8. Call Alg(A, 6, 8):
• low = 1, high = 6.
• while (1 <= 6): mid = (1+6)/2 = 3; 8 != a[3] (= 6); 8 > 6, so low = 4.
• while (4 <= 6): mid = (4+6)/2 = 5; 8 != a[5] (= 9); 8 < 9, so high = 4.
• while (4 <= 4): mid = (4+4)/2 = 4; key == a[mid] (8 == 8), return mid = 4.
Recursive Binary Search

Algorithm Rbinsearch(A, key, low, high)
{
  if (low == high) then
  {
    if (key == a[low]) then return low;
    else return 0;
  }
  else
  {
    mid := (low + high)/2;
    if (key == a[mid]) then return mid;
    else if (key < a[mid]) then return Rbinsearch(A, key, low, mid - 1);
    else return Rbinsearch(A, key, mid + 1, high);
  }
}

Trace for key = 150 in the 14-element array above:
Rbinsearch(A, 150, 1, 14) → Rbinsearch(A, 150, 8, 14) → Rbinsearch(A, 150, 12, 14) → Rbinsearch(A, 150, 14, 14)
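The recursive pseudocode above translates almost line for line into Python. A sketch using 1-based positions as in the notes; the empty-half guard is an addition of this sketch, not part of the original pseudocode:

```python
def rbinsearch(a, key, low, high):
    """1-based recursive binary search; returns the position of key, or 0."""
    if low == high:
        return low if a[low - 1] == key else 0
    mid = (low + high) // 2
    if key == a[mid - 1]:
        return mid
    elif key < a[mid - 1]:
        # guard: if mid == low the left half is empty, so key is absent
        return rbinsearch(a, key, low, mid - 1) if mid > low else 0
    else:
        return rbinsearch(a, key, mid + 1, high)

print(rbinsearch([3, 6, 9, 12, 13], 6, 1, 5))   # 2
```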
Trace: A = 3, 6, 9, 12, 13 (a[1] to a[5]), key = 6.
• Rbinsearch(A, 6, 1, 5): mid = 3, a[3] = 9; 6 < 9.
• Rbinsearch(A, 6, 1, 2): mid = 3/2 = 1, a[1] = 3; 6 > 3.
• Rbinsearch(A, 6, 2, 2): a[2] = 6; 6 = 6, return 2.
A[] = 1 2 5 6 7 10 24 56 84 100 115 120 131 150 (a[1] to a[14]), key = 9.

Low High Mid A[mid] Compare
1    14   7   24     9 < 24
1    6    3   5      9 > 5
4    6    5   7      9 > 7
6    6    6   10     unsuccessful search

Analysis of Binary Search (Time Complexity)

• Best Case: when the key is the middle element. The total number of comparisons is 1. Time complexity is O(1).

• Average Case: when the key is found between recursive calls, but not at the very end.
Say the iteration in Binary Search terminates after k iterations. In the above example, it terminates after 3 iterations, so here k = 3.
At each iteration, the array is divided in half. Let the length of the array be n.
At Iteration 1, length of array = n
At Iteration 2, length of array = n/2
At Iteration 3, length of array = (n/2)/2 = n/2^2 = n/4
Therefore, after Iteration k, length of array = n/2^k.
Also, we know that after k divisions, the length of the array becomes 1. Therefore:
Length of array = n/2^k = 1
=> n = 2^k
Applying the log function on both sides:
=> log2(n) = log2(2^k)
=> log2(n) = k log2(2)
As log_a(a) = 1, therefore:
=> k = log2(n)
Hence, the time complexity of Binary Search is O(log2 n).

The average case is O(log2 n).
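The k = log2(n) bound derived above can be checked by counting loop iterations. A small empirical sketch (0-based indexing here; the helper name is illustrative):

```python
import math

def search_iterations(a, key):
    """Binary-search a sorted list and return the number of loop iterations."""
    low, high, count = 0, len(a) - 1, 0
    while low <= high:
        count += 1
        mid = (low + high) // 2
        if a[mid] == key:
            return count
        elif key < a[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return count    # unsuccessful search: full halving sequence

# The iteration count never exceeds floor(log2(n)) + 1.
for n in (10, 100, 1000):
    a = list(range(n))
    worst = max(search_iterations(a, k) for k in range(n))
    assert worst <= math.floor(math.log2(n)) + 1
print("bound holds")
```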


• N = 10
1st iteration: 10/2 = 5      (N/2)
2nd iteration: 5/2 = 2       (N/2^2)
3rd iteration: 2/2 = 1       (N/2^3), so N/2^3 = 1
k-th iteration: N/2^k = 1
N = 2^k => k = log2 N => O(log N)

• N = 30
1st iteration: N/2 = 15
2nd iteration: N/2^2 = 7
3rd iteration: N/2^3 = 3
4th iteration: N/2^4 = 1
k-th iteration: N/2^k = 1
Worst case:
Alg binsrch(a, l, h, x)
// a[l:h] is the array
// x is the search key
{
  if (l == h)                                 // if n is small, time is 1
  {
    if (x == a[l]) then return l;
    else return 0;
  }
  else
  {
    mid := (l + h)/2;                         // 1
    if (x == a[mid]) then return mid;
    else if (x < a[mid])
      then return binsrch(a, l, mid - 1, x);  // T(n/2)
    else
      return binsrch(a, mid + 1, h, x);       // T(n/2)
  }
}
• Time complexity:
T(n) = 1            if n <= 1
T(n) = T(n/2) + 1   if n > 1
Apply the Master theorem or the substitution method:
a = 1, b = 2, f(n) = 1
h(n) = f(n) / n^(log_b a) = 1 / n^(log_2 1) = 1 / n^0 = 1 = (log n)^0
u(n) = Θ((log n)^(0+1) / (0+1)) = Θ(log n)
T(n) = n^0 [1 + Θ(log n)] = Θ(log n)
Merge Sort
• Merge sort works by dividing the given array in two sub arrays of
equal size.
• The sub arrays are sorted independently using recursion
• The sorted subarrays are merged to get the solution.
Merging of two sorted sublists. Process:

First list contains elements 1, 5, 6, 8, 10, 11: a[low:mid], i.e. a[1:6].
Second list contains elements 3, 7, 11, 12, 13: a[mid+1:high], i.e. a[7:11].
The first list runs from low to mid; the second list runs from mid+1 to high.
a[i] and a[j] are compared, and whichever is smaller is copied into the temporary array B[low:high]. Repeating this until one list is exhausted, and then copying over the remaining elements, gives
B = 1, 3, 5, 6, 7, 8, 10, 11, 11, 12, 13.

A second example: merging a[1:4] = 2, 4, 5, 7 with a[5:11] = 0, 1, 3, 6, 7, 9, 10 gives
B = 0, 1, 2, 3, 4, 5, 6, 7, 7, 9, 10.
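The merge process traced above can be sketched directly in Python; the function name is illustrative, and Python lists stand in for a[low:mid], a[mid+1:high], and the temporary array B:

```python
def merge_lists(left, right):
    """Merge two sorted lists into one sorted list, as in the trace above."""
    b, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # copy the smaller front element into B
            b.append(left[i]); i += 1
        else:
            b.append(right[j]); j += 1
    b.extend(left[i:])               # one of these two slices is already empty
    b.extend(right[j:])
    return b

print(merge_lists([1, 5, 6, 8, 10, 11], [3, 7, 11, 12, 13]))
# [1, 3, 5, 6, 7, 8, 10, 11, 11, 12, 13]
```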

Algorithm

Algorithm mergesort(l, h)
{
  if (l < h)
  {
    mid := (l + h)/2;
    mergesort(l, mid);
    mergesort(mid + 1, h);
    merge(l, mid, h);
  }
}
Algorithm merge(l, mid, h)
{
  i := l; j := mid + 1; p := l;
  while ((i <= mid) && (j <= h))
  {
    if (a[i] <= a[j]) then
    {
      b[p] := a[i]; i := i + 1;
    }
    else
    {
      b[p] := a[j]; j := j + 1;
    }
    p := p + 1;
  }
  if (i > mid) then
  {
    for k := j to h do
    {
      b[p] := a[k]; p := p + 1;
    }
  }
  else
  {
    for k := i to mid do
    {
      b[p] := a[k]; p := p + 1;
    }
  }
  for k := l to h do
    a[k] := b[k];
}
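The mergesort/merge pair above can be run as the following Python sketch (0-based indices here for brevity; a temporary list plays the role of b[l:h]):

```python
def merge(a, l, mid, h):
    """Merge the sorted runs a[l..mid] and a[mid+1..h] in place."""
    b, i, j = [], l, mid + 1
    while i <= mid and j <= h:
        if a[i] <= a[j]:
            b.append(a[i]); i += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[i:mid + 1])      # leftover of the first run (may be empty)
    b.extend(a[j:h + 1])        # leftover of the second run (may be empty)
    a[l:h + 1] = b              # copy b back into a[l..h]

def merge_sort(a, l, h):
    """Sort a[l..h] by splitting in half, sorting each half, then merging."""
    if l < h:
        mid = (l + h) // 2
        merge_sort(a, l, mid)
        merge_sort(a, mid + 1, h)
        merge(a, l, mid, h)

arr = [38, 27, 43, 3, 9, 82, 10]
merge_sort(arr, 0, len(arr) - 1)
print(arr)   # [3, 9, 10, 27, 38, 43, 82]
```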

Analysis

Algorithm mergesort(l, h)
{
  if (l < h) then
  {
    mid := (l + h)/2;
    mergesort(l, mid);       // T(n/2)
    mergesort(mid + 1, h);   // T(n/2)
    merge(l, mid, h);        // n
  }
}
T(n) = 1 if n is small
T(n) = T(n/2) + T(n/2) + n = 2T(n/2) + n
• T(n) = 2T(n/2) + n
Substitution method:
T(n/2) = 2T(n/4) + n/2
T(n) = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
T(n/4) = 2T(n/8) + n/4
T(n) = 8T(n/8) + 3n = 16T(n/16) + 4n = 2^4 T(n/2^4) + 4n = ... = 2^k T(n/2^k) + kn
Let n/2^k = 1, i.e. n = 2^k and k = log2 n:
T(n) = 2^(log2 n) T(1) + n log2 n = n + n log2 n = O(n log n)

• Master theorem:
a = 2, b = 2, f(n) = n
h(n) = f(n) / n^(log_b a) = n / n^(log_2 2) = n/n = 1 = (log n)^0
u(n) = Θ(log n)
T(n) = n^(log_2 2) [1 + Θ(log n)] = n[Θ(log n)] = Θ(n log n)

• Worst Case Time Complexity: O(n log n)
• Best Case Time Complexity: O(n log n)
• Average Time Complexity: O(n log n)
• Space Complexity: O(n)

• The time complexity of Merge Sort is O(n log n) in all 3 cases (worst, average, and best), as merge sort always divides the array into two halves and takes linear time to merge the two halves.
Quick Sort
• Quick Sort follows a recursive algorithm.
• It divides the given array into two sections using a partitioning element called the pivot.

The division performed is such that:
• All the elements to the left side of the pivot are smaller than the pivot.
• All the elements to the right side of the pivot are greater than the pivot.
• After dividing the array into two sections, the pivot is set at its correct position.
• Then, the subarrays are sorted separately by applying the quick sort algorithm recursively.
Example: 3 6 9 1 5 2 (a[1] to a[6])
Rules:
• 1. Pivot = 3 (the first element).
• 2. Starting from i = 1, increment i while a[i] <= pivot.
• 3. Starting from j = high, decrement j while a[j] >= pivot.
• 4. If i < j, swap a[i] and a[j], and repeat from step 2.
• 5. If i > j, swap a[j] and the pivot.

Trace:
3 6 9 1 5 2
• i = 1: 3 <= 3, i++; i = 2: 6 <= 3 false, stop. j = 6: 2 >= 3 false, stop.
• i < j (2 < 6): swap a[2] and a[6] → 3 2 9 1 5 6.
• i = 2: 2 <= 3, i++; i = 3: 9 <= 3 false, stop. j = 6: 6 >= 3, j--; j = 5: 5 >= 3, j--; j = 4: 1 >= 3 false, stop.
• i < j (3 < 4): swap a[3] and a[4] → 3 2 1 9 5 6.
• i = 3: 1 <= 3, i++; i = 4: 9 <= 3 false, stop. j = 4: 9 >= 3, j--; j = 3: 1 >= 3 false, stop.
• Now i > j (4 > 3): swap a[j] = a[3] and the pivot → 1 2 3 9 5 6. The pivot 3 is now at its correct position.
• 1 2 is the 1st sublist; 9 5 6 is the 2nd sublist.
• Step 1 - Consider the first element of the list as the pivot (i.e., the element at the first position in the list).
• Step 2 - Define two variables i and j. Set i and j to the first and last elements of the list, respectively.
• Step 3 - Increment i until list[i] > pivot, then stop.
• Step 4 - Decrement j until list[j] < pivot, then stop.
• Step 5 - If i < j, then exchange list[i] and list[j].
• Step 6 - Repeat steps 3, 4 & 5 until i > j.
• Step 7 - Exchange the pivot element with the list[j] element.
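The steps above can be sketched as a runnable Python quick sort with the first element as the pivot (0-based indices here; the function names are illustrative):

```python
def partition(a, low, high):
    """Partition a[low..high] around the pivot a[low]; return the pivot's
    final position (Steps 1-7 above)."""
    pivot, i, j = a[low], low, high
    while i < j:
        while i < high and a[i] <= pivot:   # Step 3: scan i to the right
            i += 1
        while a[j] > pivot:                 # Step 4: scan j left (a[low] stops it)
            j -= 1
        if i < j:                           # Step 5: out-of-place pair
            a[i], a[j] = a[j], a[i]
    a[low], a[j] = a[j], a[low]             # Step 7: drop the pivot at position j
    return j

def quick_sort(a, low, high):
    if low < high:
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)           # sort the 1st sublist
        quick_sort(a, p + 1, high)          # sort the 2nd sublist

arr = [3, 6, 9, 1, 5, 2]
quick_sort(arr, 0, len(arr) - 1)
print(arr)   # [1, 2, 3, 5, 6, 9]
```

Running `partition(arr, 0, 5)` on [3, 6, 9, 1, 5, 2] reproduces the trace from the worked example: the pivot 3 ends at position 2 with [1, 2] to its left and [9, 5, 6] to its right.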
