Decrease and Conquer
Decrease and conquer is a technique for solving problems by reducing the size of the input
at each step of the solution process. It resembles divide-and-conquer in that it breaks a
problem down into a smaller subproblem, but the difference is that decrease-and-conquer
produces a single reduced instance at each step rather than several. The technique is used when
it is easier to solve a smaller version of the problem, and the solution to the smaller problem
can be extended to a solution of the original problem.
“Divide-and-Conquer” vs “Decrease-and-Conquer”:
As per Wikipedia, some authors consider that the name “divide and conquer” should be used
only when each problem may generate two or more subproblems. The name decrease and
conquer has been proposed instead for the single-subproblem class. According to this
definition, Merge Sort and Quick Sort come under divide and conquer (because there are 2
sub-problems) and Binary Search comes under decrease and conquer (because there is one
sub-problem).
Implementations of Decrease and Conquer:
This approach can be implemented either top-down or bottom-up.
Top-down approach: It naturally leads to a recursive implementation of the problem.
Bottom-up approach: It is usually implemented in an iterative way, starting with a solution
to the smallest instance of the problem.
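As a small illustration (the function names are my own, not from the text), computing n! by decrease-by-one can be written in both styles:

```python
# Top-down (recursive): reduce the instance n! to (n-1)! at each step.
def factorial_top_down(n):
    if n <= 1:               # smallest instance of the problem
        return 1
    return n * factorial_top_down(n - 1)

# Bottom-up (iterative): start from the smallest instance and build up.
def factorial_bottom_up(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result
```

Both compute the same value; the bottom-up version avoids the recursion overhead mentioned later in this section.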
Variations of Decrease and Conquer :
There are three major variations of decrease-and-conquer:
1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease
Decrease by a Constant: In this variation, the size of an instance is reduced by the same
constant on each iteration of the algorithm. Typically, this constant is equal to one, although
other constant-size reductions do happen occasionally. Below are example problems:
Insertion sort
Graph search algorithms: DFS, BFS
Topological sorting
Algorithms for generating permutations, subsets
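Insertion sort from the list above is a natural decrease-by-one sketch: each pass assumes the first i elements are already sorted (the smaller instance) and inserts one more element.

```python
def insertion_sort(arr):
    """Decrease-by-one: arr[:i] is sorted; insert arr[i] into place."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements larger than key one slot right to make room.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```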
Decrease by a Constant Factor: This technique suggests reducing a problem instance by the
same constant factor on each iteration of the algorithm. In most applications this constant
factor is equal to two; reductions by other factors are rare. Decrease-by-a-constant-factor
algorithms are very efficient, and even more so when the factor is greater than two, as in
the fake-coin problem. Below are example problems:
Binary search
Fake-coin problems
Russian peasant multiplication
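Russian peasant multiplication from the list above makes a compact sketch of decrease-by-a-factor-of-two: the second operand is halved at every step.

```python
def russian_peasant(a, b):
    """Multiply a * b by repeatedly halving b (decrease by factor 2)."""
    result = 0
    while b > 0:
        if b % 2 == 1:   # an odd b contributes one copy of the current a
            result += a
        a += a           # double a
        b //= 2          # halve b
    return result
```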
Variable-Size Decrease: In this variation, the size-reduction pattern varies from one
iteration of the algorithm to the next. For example, in the problem of finding the gcd of two
numbers via gcd(m, n) = gcd(n, m mod n), the value of the second argument is always smaller
on the right-hand side than on the left-hand side, but it decreases neither by a constant nor
by a constant factor. Below are example problems:
Computing the median and the selection problem
Interpolation Search
Euclid’s algorithm
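Euclid's algorithm from the list above shows the variable-size decrease directly: the step from (m, n) to (n, m mod n) shrinks the second argument by an amount that depends on the inputs.

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n).
    The decrease per step is neither a constant nor a constant factor."""
    while n != 0:
        m, n = n, m % n
    return m
```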
A problem may sometimes be solvable by both the decrease-by-a-constant and decrease-by-a-
factor variations, and the implementations can be either recursive or iterative. Iterative
implementations may require more coding effort, but they avoid the overhead that
accompanies recursion.
Interpolation Search
Given a sorted array of n uniformly distributed values arr[], write a function to search for a
particular element x in the array.
Linear Search finds the element in O(n) time, Jump Search takes O(√n) time, and Binary
Search takes O(log n) time.
Interpolation Search is an improvement over Binary Search for instances where the
values in a sorted array are uniformly distributed. Interpolation constructs new data points
within the range of a discrete set of known data points. Binary Search always probes the
middle element, whereas interpolation search may probe different locations according to the
value of the key being searched. For example, if the value of the key is closer to the last
element, interpolation search is likely to start searching toward the end of the array.
To find the position to be probed, it uses the following formula:

// The idea of the formula is to return a higher value of pos
// when the element to be searched is closer to arr[hi], and a
// smaller value when it is closer to arr[lo]
pos = lo + ((x - arr[lo]) * (hi - lo)) / (arr[hi] - arr[lo])

arr[] ==> Array where elements need to be searched
x     ==> Element to be searched
lo    ==> Starting index in arr[]
hi    ==> Ending index in arr[]
There are many different interpolation methods, and one such method is linear interpolation.
Linear interpolation takes two data points, say (x1, y1) and (x2, y2), and estimates the value
y at a point x on the line between them:

y = y1 + ((x - x1) * (y2 - y1)) / (x2 - x1)

This algorithm works the way we search for a word in a dictionary. Interpolation search
improves on binary search by replacing the fixed midpoint with a probe position. The
proportionality constant used to narrow the search space is:

K = (data - low) / (high - low)

In the case of binary search, the probe position is instead the fixed midpoint (low + high)/2.
Algorithm
The rest of the interpolation search algorithm is the same as binary search except for the above probe logic.
Step 1: In a loop, calculate the value of "pos" using the probe position formula.
Step 2: If it is a match, return the index of the item, and exit.
Step 3: If the item is less than arr[pos], calculate the probe position of the left
sub-array. Otherwise, calculate the same in the right sub-array.
Step 4: Repeat until a match is found or the sub-array reduces to zero.
Time Complexity: O(log2(log2 n)) on average for uniformly distributed data, and O(n) in the worst case
Auxiliary Space Complexity: O(1)
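The steps above can be sketched iteratively in Python (a minimal version; the extra guard against equal endpoint values, which would divide by zero, is my addition):

```python
def interpolation_search(arr, x):
    """Search a sorted, roughly uniformly distributed arr for x.
    Returns the index of x, or -1 if x is absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= x <= arr[hi]:
        if arr[hi] == arr[lo]:               # avoid division by zero
            return lo if arr[lo] == x else -1
        # Probe position: linear interpolation between arr[lo] and arr[hi].
        pos = lo + (x - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == x:
            return pos
        if arr[pos] < x:                     # continue in the right sub-array
            lo = pos + 1
        else:                                # continue in the left sub-array
            hi = pos - 1
    return -1
```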
The Fake Coin Problem
Problem
You are given n identical-looking coins. All genuine coins have the same weight, but one
coin is fake and is made of a lighter metal.
There is an old-fashioned balance scale machine that enables you to compare any two sets of
coins. If it tips either to the left or to the right, you will know that one of the two sets is
heavier than the other. But, the machine charges you each time you weigh anything.
Now, your task is to design an algorithm to find the fake coin in the fewest number of
weighings.
You may well have realized that you can divide the pile in half, weigh the halves, and narrow
your focus to the pile that is lighter. This approach sounds a lot like binary search. Binary
search divides the problem into two sub-problems, but it solves only one of them and discards
the other; for this reason it is considered a decrease-and-conquer approach rather than a
divide-and-conquer one. This binary search approach works for finding the fake coin.
Binary search decreases the problem by a factor of 2, but we can do better for the fake
coin problem.
Suppose we divide the coins into three piles, where at least two of them contain the same
number of coins. After weighing the two equal-sized piles, we can eliminate 2/3 of the coins.
If n mod 3 = 0, we can divide the coins into three piles of exactly n/3 coins each.
If n mod 3 = 1, then n = 3k + 1 for some k. We can divide the coins into three piles of k, k,
and k+1. It will simplify our algorithm, though, if we split them into three piles of k+1, k+1,
and k-1.
And if n mod 3 = 2, then n = 3k + 2 for some k. We can divide the coins into three piles
of k+1, k+1, and k.
if n = 1 then
    the coin is fake
else
    divide the coins into piles of A = ceiling(n/3), B = ceiling(n/3),
        and C = n - 2*ceiling(n/3)
    weigh A against B
    if the scale balances then
        iterate with C
    else
        iterate with the lighter of A and B
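The pseudocode above can be sketched in Python; here the coins are modeled as a list of weights (my own simulation of the balance scale, with the fake coin strictly lighter):

```python
import math

def find_fake_coin(weights):
    """Return the index of the lighter fake coin, eliminating
    roughly 2/3 of the candidates per weighing."""
    coins = list(range(len(weights)))
    while len(coins) > 1:
        third = math.ceil(len(coins) / 3)
        a, b, c = coins[:third], coins[third:2 * third], coins[2 * third:]
        wa = sum(weights[i] for i in a)      # one weighing: A vs B
        wb = sum(weights[i] for i in b)
        if wa == wb:                         # scale balances: fake is in C
            coins = c
        elif wa < wb:                        # A is lighter
            coins = a
        else:                                # B is lighter
            coins = b
    return coins[0]
```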
The Selection Problem
Problem
Find the ith smallest element in a set of n unsorted elements. This problem is referred to as
the selection problem or the ith "order statistic".
If i=1, this is finding the minimum element of a set.
If i=n, this is finding the maximum element of a set.
If i=n/2, this is finding the median or halfway point element of a set.
INPUT: A set of n numbers and a number i, with 1 <= i <= n.
OUTPUT: The element x in the given set of n numbers that is larger than exactly i-1 other
elements in that set.
Designing the algorithm to find the ith smallest element of a given array:
• Select a pivot point, p, from the array.
• Split the array into S1 and S2, where all elements in S1 are less than p and all
elements in S2 are greater than p.
• If i = |S1| + 1, then p is the ith smallest element.
• Otherwise, if i ≤ |S1|, then the ith smallest element is somewhere in S1. Repeat the
process recursively on S1, looking for the ith smallest element.
• Otherwise, i > |S1| + 1 and the ith smallest element is somewhere in S2. Repeat the
process recursively on S2, looking for the (i - |S1| - 1)th smallest element.
The question arises of how to select p (the pivot value). Ideally p is close to
the median of the whole array; if p is the largest element or the smallest, the problem size is
only reduced by 1. We can pick p using one of the following strategies:
• Always pick the pivot from a fixed position, e.g., the 1st or the nth element.
• Pick a random element as the pivot.
• Pick 3 random elements, and then use the median of those 3 elements as the pivot.
Once we have p it is fairly easy to partition the elements.
Pseudo-code for partitioning
// Partitions array A[p..r]
Partition(A, p, r)
    // Choose first element as partition element
    x ← A[p]
    i ← p - 1
    j ← r + 1
    while true
        repeat j ← j - 1 until A[j] ≤ x
        repeat i ← i + 1 until A[i] ≥ x
        if i < j
            then exchange A[i] ↔ A[j]
        else
            // j indicates the index of the partition boundary
            return j
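The selection steps can be sketched in Python. This is a simplified list-based version rather than the in-place partition (an assumption for clarity), and it places elements equal to the pivot in S2 so that duplicates are not lost:

```python
def quickselect(arr, i):
    """Return the ith smallest element of arr (1-indexed, arr non-empty)."""
    if len(arr) == 1:
        return arr[0]
    p = arr[0]                                # pivot: first element
    s1 = [x for x in arr[1:] if x < p]        # elements less than the pivot
    s2 = [x for x in arr[1:] if x >= p]       # the rest (ties go right)
    if i == len(s1) + 1:                      # pivot is the ith smallest
        return p
    if i <= len(s1):                          # answer lies in S1
        return quickselect(s1, i)
    return quickselect(s2, i - len(s1) - 1)   # answer lies in S2
```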
Generating Subsets
Problem
Design an algorithm to generate the subsets of a given set.
Solution
Algorithm for building subsets of the array:
• Choose one element from the input, i.e., subset[len] = S[pos]. We can decide to include
it in the current subset or not.
• Recursively form subsets including it, i.e., allSubsets(pos+1, len+1, subset).
• Recursively form subsets excluding it, i.e., allSubsets(pos+1, len, subset).
• And most importantly for efficiency, make sure each subset is generated exactly once.
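The include/exclude recursion can be sketched in Python (function and variable names are my own; each element is decided once, so each of the 2^n subsets is generated exactly once):

```python
def all_subsets(S):
    """Generate every subset of S by deciding, for each position,
    whether to include that element (decrease-by-one on the input)."""
    result = []

    def helper(pos, subset):
        if pos == len(S):                  # every element has been decided
            result.append(subset[:])
            return
        subset.append(S[pos])              # include S[pos]
        helper(pos + 1, subset)
        subset.pop()                       # exclude S[pos]
        helper(pos + 1, subset)

    helper(0, [])
    return result
```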