UNIT2_final
Here, we will sort an array using the divide and conquer approach (i.e., merge sort).
Divide the array into two halves, and keep dividing each half recursively until you reach
individual elements; then merge the sorted halves back together.
The master theorem applies to recurrences of the form T(n) = aT(n/b) + f(n),
where,
n = size of the input
a = number of subproblems in the recursion
n/b = size of each subproblem (all subproblems are assumed to have the same size)
f(n) = cost of the work done outside the recursive calls, which includes
the cost of dividing the problem and the cost of merging the solutions
Let us take an example to find the time complexity of a recursive problem: merge sort.
Its running time satisfies
T(n) = 2T(n/2) + O(n)
where f(n) = O(n) is the time taken to divide the problem and merge the subproblems.
By the master theorem (a = 2, b = 2, f(n) = Θ(n) = Θ(n^(log_2 2))), the solution is
T(n) ≈ O(n log n)
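To make the recurrence concrete, here is a minimal merge sort sketch in Python (the function name merge_sort is ours, just for illustration): the two recursive calls correspond to the 2T(n/2) term and the linear-time merge to the O(n) term.

def merge_sort(a):
    # Base case: a single element is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # T(n/2)
    right = merge_sort(a[mid:])   # T(n/2)
    # Merge the two sorted halves in O(n).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]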
The Hiring Problem
We interview candidates on a rolling basis, and at any given point we want to employ the best
candidate we have seen so far. Whenever a better candidate comes along, we immediately fire the
current assistant and hire the new one.
Hire-Assistant(n)
  best = 0    // candidate 0 is a least-qualified dummy candidate
  for i = 1 to n
      interview candidate i
      if candidate i is better than candidate best
          best = i
          hire candidate i
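A direct Python rendering of this pseudocode (a sketch; modeling candidates as a list of numeric quality scores in interview order is our own illustrative assumption). It returns how many times an assistant is hired, which is the quantity the cost analysis below cares about.

def hire_assistant(candidates):
    # candidates: quality scores in interview order (illustrative assumption)
    best = None                 # stands in for the least-qualified candidate 0
    hires = 0
    for score in candidates:    # interview candidate i
        if best is None or score > best:
            best = score        # candidate i is better than the current best
            hires += 1          # hire candidate i
    return hires

print(hire_assistant([4, 3, 2, 1]))   # best case: only the first candidate is hired -> 1
print(hire_assistant([1, 2, 3, 4]))   # worst case: every candidate is hired -> 4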
Best Case
If each candidate is worse than all who came before, we hire only one candidate. With an
interview cost ci per candidate and a hiring cost ch per hire, the total cost is
Θ(ci·n + ch) = Θ(ci·n)
Worst Case
If each candidate is better than all who came before, we hire all n candidates (m = n):
Θ(ci·n + ch·n) = Θ(ch·n), since ch > ci
But this is pessimistic. What happens in the average case? To figure this out, we
need some tools ...
Probabilistic Analysis
Randomized Algorithms
We might not know the distribution of inputs or be able to model it.
However, uniform random distributions are easier to analyze than unknown
distributions.
To obtain such a distribution regardless of the input distribution, we can randomize within
the algorithm to impose a uniform distribution on the inputs. It is then easier to analyze
the expected behavior.
An algorithm is randomized if its behavior is determined in part by values
provided by a random-number generator.
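A minimal sketch of this idea for the hiring problem (it reuses the hire_assistant sketch above; random.shuffle is the randomization step). Shuffling the candidates imposes a uniformly random interview order regardless of how the input was originally arranged. For a uniformly random order, the expected number of hires is the harmonic number H_n = 1 + 1/2 + ... + 1/n ≈ ln n, so the expected cost is O(ch·ln n + ci·n) rather than the pessimistic O(ch·n).

import random

def randomized_hire_assistant(candidates):
    candidates = list(candidates)
    random.shuffle(candidates)      # impose a uniform random order on the input
    return hire_assistant(candidates)

# Empirically, the average number of hires for n = 100 is close to
# H_100 ≈ 5.19, far fewer than the worst-case 100 hires.
trials = [randomized_hire_assistant(range(1, 101)) for _ in range(1000)]
print(sum(trials) / len(trials))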
Maximum Sum
When all the elements of the array are positive, for example [3, 5, 1, 7, 9], the
maximum-sum subarray is simply the whole array:
3 + 5 + 1 + 7 + 9 = 25
The problem gets more interesting when some of the elements are negative: the subarray
whose sum is largest can then lie anywhere in the array.
Maximum Sum
For the array [3, -1, -1, 10, -3, -2, 4], the maximum-sum subarray is [3, -1, -1, 10]:
3 - 1 - 1 + 10 = 11
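To make the definition concrete, here is a brute-force check (our own illustrative helper, not part of the notes' algorithm): it tries every contiguous subarray and keeps the largest sum, in O(n^2) time, and reproduces both answers above. The divide and conquer method below does better.

def max_subarray_brute(a):
    # Try every contiguous subarray a[i..j] and keep the best sum.
    best = a[0]
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]
            best = max(best, s)
    return best

print(max_subarray_brute([3, 5, 1, 7, 9]))              # 25
print(max_subarray_brute([3, -1, -1, 10, -3, -2, 4]))   # 11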
Maximum Subarray Sum Using Divide and Conquer
As we know, divide and conquer solves a problem by breaking it into subproblems, so let us
first break the array into two halves. The subarray with the maximum sum can then lie
entirely in the left half, entirely in the right half, or span both halves, i.e., cross the
middle element.
The first two cases, where the subarray lies entirely in the left or entirely in the right
half, are smaller instances of the original problem, so we can solve them recursively by
calling the function that computes the maximum-sum subarray on both halves; the crossing
case is handled by a separate linear scan outward from the middle, as in the sketch below.
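A sketch of the divide and conquer maximum subarray sum in Python (the function names are ours): max_crossing_sum scans left and right from the middle to find the best subarray that crosses it, and max_subarray_sum takes the best of the three cases. The recurrence is T(n) = 2T(n/2) + O(n), giving O(n log n) overall.

def max_crossing_sum(a, lo, mid, hi):
    # Best sum of a subarray that ends at mid, scanning leftwards.
    left_best, s = float('-inf'), 0
    for i in range(mid, lo - 1, -1):
        s += a[i]
        left_best = max(left_best, s)
    # Best sum of a subarray that starts at mid + 1, scanning rightwards.
    right_best, s = float('-inf'), 0
    for i in range(mid + 1, hi + 1):
        s += a[i]
        right_best = max(right_best, s)
    return left_best + right_best

def max_subarray_sum(a, lo, hi):
    if lo == hi:                 # single element
        return a[lo]
    mid = (lo + hi) // 2
    return max(max_subarray_sum(a, lo, mid),       # entirely in the left half
               max_subarray_sum(a, mid + 1, hi),   # entirely in the right half
               max_crossing_sum(a, lo, mid, hi))   # crossing the middle element

arr = [3, -1, -1, 10, -3, -2, 4]
print(max_subarray_sum(arr, 0, len(arr) - 1))      # 11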
Matrix Multiplication
Multiplying a matrix of size M × N by a matrix of size P × Q is possible only when N == P,
i.e., the number of columns of the first matrix equals the number of rows of the second.
Here the two given matrices are square matrices of size 3 × 3 each, so the product is defined.
Complexity Analysis
Time Complexity: O(N^3), where the given matrices are square matrices of size N × N each.
Two nested loops pick every (row, column) cell of the result, and a third loop adds up the
products of the corresponding row of A and column of B. Therefore the overall time
complexity turns out to be O(N^3).
Space Complexity: O(N^2), where the given matrices are square matrices of size N × N each.
A matrix of size N × N is used to store the result when the two matrices are multiplied.
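A minimal sketch of the naive multiplication described above (pure Python, with a list-of-lists representation chosen only for illustration):

def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]          # N x N result: O(N^2) extra space
    for i in range(n):                        # pick the row of the result
        for j in range(n):                    # pick the column of the result
            for k in range(n):                # accumulate the dot product: O(N)
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(A, B))   # multiplying by the identity gives A back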
Strassen’s Matrix Multiplication Algorithm
In this context, using Strassen's matrix multiplication algorithm, the running time can be
improved somewhat: it reduces the time complexity of matrix multiplication.
Strassen's matrix multiplication can be performed only on square matrices where N is a
power of 2, and the order of both matrices is N × N.
The main idea is to use the divide and conquer technique. We first divide matrix A and
matrix B into four N/2 × N/2 submatrices each (8 submatrices in total) and then recursively
compute the submatrices of the result.
The resultant matrix C can be obtained from the four blocks of A and B in the following way:
C11 = A11 × B11 + A12 × B21
C12 = A11 × B12 + A12 × B22
C21 = A21 × B11 + A22 × B21
C22 = A21 × B12 + A22 × B22
This uses 8 recursive multiplications of N/2 × N/2 matrices and 4 additions of N/2 × N/2 matrices.
The recurrence relation obtained is: T(N) = 8T(N/2) + 4·O(N^2).
Here the O(N^2) term is for matrix addition. Since addition is performed 4 times during
the recursion, it contributes 4 times O(N^2).
The recurrence can be solved using the Master Theorem: with a = 8, b = 2 and f(N) = O(N^2),
we have N^(log_2 8) = N^3, which dominates f(N). The overall time complexity therefore turns
out to be O(N^3), which is no better than the naive method discussed earlier.
To optimize it further, we use Strassen's matrix multiplication, where we do not need 8
recursive calls: the same result can be computed with only 7 recursive calls plus some extra
manipulation, which is achieved using matrix additions and subtractions. The recurrence
becomes T(N) = 7T(N/2) + O(N^2), which solves to O(N^(log_2 7)) ≈ O(N^2.81), as in the
sketch below.
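A compact sketch of Strassen's algorithm in Python (NumPy is used here only for convenient block slicing and addition; that choice, and the function name strassen, are ours). It assumes both inputs are N × N with N a power of 2, as required above, and recurses down to 1 × 1 blocks.

import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:                      # 1 x 1 base case
        return A * B
    k = n // 2
    # Split A and B into four N/2 x N/2 blocks each.
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # 7 recursive multiplications instead of 8.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Combine using additions and subtractions only.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

A = np.arange(16).reshape(4, 4)
B = np.arange(16, 32).reshape(4, 4)
print(np.array_equal(strassen(A, B), A @ B))   # True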
Algorithm: Strassen's Algorithm
1. Divide the given matrix A and matrix B into 4 sub-matrices of size N/2 × N/2 each, as
described above.