
Decrease and Conquer

The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to its smaller instance. Once such a relationship is established, it can be exploited either top down or bottom up. The former leads naturally to a recursive implementation, although, as one can see from several examples in this chapter, an ultimate implementation may well be non-recursive. The bottom-up variation is usually implemented iteratively, starting with a solution to the smallest instance of the problem; it is sometimes called the incremental approach. There are three major variations
of decrease-and-conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease

As a comparison, the divide-and-conquer approach includes the following steps:
1. Divide the problem into a number of subproblems that are smaller instances of the
same problem.
2. Conquer the sub problems by solving them recursively. If the subproblem sizes are
small enough, however, just solve the sub problems in a straightforward manner.
3. Combine the solutions to the sub problems into the solution for the original problem.

The decrease-and-conquer approach works similarly; it includes the following steps:
1. Decrease or reduce problem instance to smaller instance of the same problem and
extend solution.
2. Conquer the problem by solving smaller instance of the problem.
3. Extend solution of smaller instance to obtain solution to original problem.
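As a concrete illustration, the three steps above can be sketched with insertion sort, a classic decrease-by-one algorithm (a minimal Python sketch; the function name is illustrative):

```python
def insertion_sort(a):
    if len(a) <= 1:                   # smallest instance: already sorted
        return a
    # Steps 1-2: decrease to A[0..n-2] and conquer the smaller instance.
    rest = insertion_sort(a[:-1])
    x = a[-1]
    # Step 3: extend the solution by inserting the last element into place.
    i = len(rest)
    while i > 0 and rest[i - 1] > x:
        i -= 1
    return rest[:i] + [x] + rest[i:]

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```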

The basic idea of the decrease-and-conquer technique is to exploit the relationship between a solution to a given instance of a problem and a solution to its smaller instance.

This approach is also known as the incremental or inductive approach.

Decrease and conquer is a technique used to solve problems by reducing the size of the input data at each step of the solution process. This technique is similar to divide-and-conquer in that it breaks a problem down into smaller subproblems, but the difference is that decrease-and-conquer reduces the problem to a single smaller instance at each step rather than to several. The technique is used when it is easier to solve a smaller version of the problem, and the solution to the smaller problem can be used to find the solution to the original problem.

1. Some examples of problems that can be solved using the decrease-and-conquer technique include binary search, finding the maximum or minimum element in an array, and finding the closest pair of points in a set of points.
2. The main advantage of decrease-and-conquer is that it often leads to efficient
algorithms, as the size of the input data is reduced at each step, reducing the time
and space complexity of the solution. However, it’s important to choose the right
strategy for reducing the size of the input data, as a poor choice can lead to an
inefficient algorithm.

“Divide-and-Conquer” vs “Decrease-and-Conquer”:
As per Wikipedia, some authors consider that the name “divide and conquer” should be used
only when each problem may generate two or more subproblems. The name decrease and
conquer has been proposed instead for the single-subproblem class. According to this
definition, Merge Sort and Quick Sort come under divide and conquer (because there are two sub-problems) and Binary Search comes under decrease and conquer (because there is one sub-problem).
Implementations of Decrease and Conquer:
This approach can be implemented either top-down or bottom-up.
Top-down approach: It leads to a recursive implementation of the problem.
Bottom-up approach: It is usually implemented iteratively, starting with a solution to the smallest instance of the problem.
Variations of Decrease and Conquer :
There are three major variations of decrease-and-conquer:
1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease
Decrease by a Constant: In this variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one, although other constant-size reductions do happen occasionally. Below are example problems:
 Insertion sort
 Graph search algorithms: DFS, BFS
 Topological sorting
 Algorithms for generating permutations, subsets
Decrease by a Constant Factor: This technique suggests reducing a problem instance by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two; a reduction by a factor other than two is especially rare. Decrease-by-a-constant-factor algorithms are very efficient, especially when the factor is greater than two, as in the fake-coin problem. Below are example problems:
 Binary search
 Fake-coin problems
 Russian peasant multiplication
Variable-Size Decrease: In this variation, the size-reduction pattern varies from one iteration of the algorithm to another. For example, in the problem of finding the gcd of two numbers, although the value of the second argument is always smaller on the right-hand side than on the left-hand side, it decreases neither by a constant nor by a constant factor. Below are example problems:
 Computing median and selection problem.
 Interpolation Search
 Euclid’s algorithm
There may be cases where a problem can be solved by both the decrease-by-constant and decrease-by-factor variations, and the implementations can be either recursive or iterative. Iterative implementations may require more coding effort; however, they avoid the overhead that accompanies recursion.
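The constant-factor variation is easy to see in binary search, where each probe discards half of the remaining instance. A minimal Python sketch (the function name is illustrative):

```python
def binary_search(arr, x):
    """Decrease by a constant factor: each probe halves the instance size."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1                   # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```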

Advantages of Decrease and Conquer:


1. Simplicity: Decrease-and-conquer is often simpler to implement compared to
other techniques like dynamic programming or divide-and-conquer.
2. Efficient Algorithms: The technique often leads to efficient algorithms as the size
of the input data is reduced at each step, reducing the time and space complexity
of the solution.
3. Problem-Specific: The technique is well-suited for specific problems where it’s
easier to solve a smaller version of the problem.

Disadvantages of Decrease and Conquer:


1. Problem-Specific: The technique is not applicable to all problems and may not be
suitable for more complex problems.
2. Implementation Complexity: The technique can be more complex to implement
when compared to other techniques like divide-and-conquer, and may require
more careful planning.

Russian Peasant (Multiply two numbers using bitwise operators)


Given two integers, write a function to multiply them without using multiplication
operator.
There are many other ways to multiply two numbers. One interesting method is the Russian peasant algorithm. The idea is to repeatedly double the first number and halve the second number until the second number becomes 0. In the process, whenever the second number is odd, we add the first number to the result (the result is initialized as 0).
The following is a simple algorithm.
Let the two given numbers be 'a' and 'b'.
1) Initialize result 'res' as 0.
2) While 'b' is greater than 0:
   a) If 'b' is odd, add 'a' to 'res'.
   b) Double 'a' and halve 'b'.
3) Return 'res'.
Time Complexity: O(log b)
Auxiliary Space: O(1)
How does this work?
The value of a*b is the same as (a*2)*(b/2) if b is even; otherwise it is the same as (a*2)*(b/2) + a. In the while loop, we keep multiplying 'a' by 2 and dividing 'b' by 2. Whenever 'b' is odd in the loop, we add 'a' to 'res'. When 'b' finally reaches 1, it is odd, so 'a' is added to 'res' one last time before 'b' becomes 0, and 'res' then holds the product.
Note that when 'b' is a power of 2, 'res' remains 0 until that final step, at which point the repeatedly doubled 'a' supplies the entire product.
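The algorithm above can be sketched directly in Python, using bitwise shifts for doubling and halving (the function name is illustrative):

```python
def russian_peasant(a, b):
    """Multiply two non-negative integers without the * operator:
    double a and halve b until b reaches 0, adding a to the result
    whenever b is odd."""
    res = 0
    while b > 0:
        if b & 1:           # b is odd
            res += a
        a <<= 1             # double a
        b >>= 1             # halve b (integer division by 2)
    return res

print(russian_peasant(18, 19))  # 342
```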

Interpolation Search
Given a sorted array of n uniformly distributed values arr[], write a function to search for a
particular element x in the array.
Linear Search finds the element in O(n) time, Jump Search takes O(√ n) time and Binary
Search takes O(log n) time.
The Interpolation Search is an improvement over Binary Search for instances where the values in a sorted array are uniformly distributed. Interpolation constructs new data points
within the range of a discrete set of known data points. Binary Search always goes to the
middle element to check. On the other hand, interpolation search may go to different
locations according to the value of the key being searched. For example, if the value of the
key is closer to the last element, interpolation search is likely to start search toward the end
side.
To find the position to be searched, it uses the following formula.
// The idea of the formula is to return a higher value of pos
// when the element to be searched is closer to arr[hi], and a
// smaller value when it is closer to arr[lo].
pos = lo + (x - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo])
arr[] ==> Array where elements need to be searched
x     ==> Element to be searched
lo    ==> Starting index in arr[]
hi    ==> Ending index in arr[]
There are many different interpolation methods, and one such method is known as linear interpolation. Linear interpolation takes two data points, which we assume as (x1, y1) and (x2, y2), and estimates the value at a point (x, y) on the line through them:
y = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
This algorithm works the way we search for a word in a dictionary. The interpolation search algorithm improves on the binary search algorithm. The fraction used to place the probe within the search space is:
K = (data - low) / (high - low)

K is a fraction used to narrow the search space. In the case of binary search, this fraction is fixed at K = 1/2, which always gives the middle position (low + high) / 2.

The formula for pos can be derived as follows.


Let's assume that the elements of the array are linearly distributed.

General equation of line : y = m*x + c.


y is the value in the array and x is its index.

Now, putting the values of lo, hi, and x into the equation:

arr[hi] = m*hi + c ----(1)
arr[lo] = m*lo + c ----(2)
x = m*pos + c ----(3)

Subtracting equation (2) from (1):

m = (arr[hi] - arr[lo]) / (hi - lo)

Subtracting equation (2) from (3):

x - arr[lo] = m * (pos - lo)
lo + (x - arr[lo]) / m = pos
pos = lo + (x - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo])

Algorithm
The rest of the Interpolation algorithm is the same except for the above partition logic.
 Step1: In a loop, calculate the value of “pos” using the probe position formula.
 Step2: If it is a match, return the index of the item, and exit.
 Step3: If the item is less than arr[pos], calculate the probe position of the left
sub-array. Otherwise, calculate the same in the right sub-array.
 Step4: Repeat until a match is found or the sub-array reduces to zero.

Time Complexity: O(log(log n)) for the average case, and O(n) for the worst case
Auxiliary Space Complexity: O(1)
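The probe-position formula and the steps above can be sketched iteratively in Python (the guard against equal boundary values is an added safety check to avoid division by zero):

```python
def interpolation_search(arr, x):
    """Probe position estimated by linear interpolation between arr[lo]
    and arr[hi]; effective when the keys are uniformly distributed."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= x <= arr[hi]:
        if arr[hi] == arr[lo]:              # all remaining keys are equal
            return lo if arr[lo] == x else -1
        # the probe-position formula derived above
        pos = lo + (x - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == x:
            return pos
        elif arr[pos] < x:
            lo = pos + 1                    # probe the right sub-array
        else:
            hi = pos - 1                    # probe the left sub-array
    return -1

print(interpolation_search([10, 20, 30, 40, 50, 60], 40))  # 3
```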
The Fake Coin Problem
Problem
You are given n identical-looking coins. They all have the same weight except one fake coin, which is made of a lighter metal.

There is an old-fashioned balance scale machine that enables you to compare any two sets of
coins. If it tips either to the left or to the right, you will know that one of the two sets is
heavier than the other. But, the machine charges you each time you weigh anything.
Now, your task is to design an algorithm to find the fake coin in the fewest number of
weighings.

Approach and Solution


Since we have discussed the concept of decrease and conquer, it is clear that a decrease-and-conquer approach works here.

You may well have realized that you can divide the pile in half, weigh the halves, and narrow your focus to the pile that is lighter. This approach sounds a lot like the binary search method. Binary search divides the problem into two sub-problems, but it solves only one of them and discards the other; thus, it is considered a decrease-and-conquer approach rather than a divide-and-conquer approach. This binary search approach works for finding the fake coin.

Binary search divides the problem by the factor of 2, but we can do better to solve the fake
coin problem.

Suppose we divide the coins into three piles, where at least two of them contain the same
number of coins. After weighing the equal-sized piles, we can eliminate 2/3rd of the coins.
If n mod 3 = 0, we can divide the coins into three piles of exactly n/3 coins.
If n mod 3 = 1, then n = 3k + 1 for some k. We could divide the coins into three piles of k, k, and k+1. It will simplify our algorithm, though, if we split them into three piles of k+1, k+1, and k-1.

And if n mod 3 = 2, then n = 3k + 2 for some k. We can divide the coins into three piles
of k+1, k+1, and k.

Designing the algorithm for the above approach:


INPUT: integer n

if n = 1 then
    the coin is fake
else
    divide the coins into piles of A = ceiling(n/3), B = ceiling(n/3),
        and C = n - 2*ceiling(n/3)
    weigh A and B
    if the scale balances then
        iterate with C
    else
        iterate with the lighter of A and B
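The pile-splitting algorithm above can be simulated in Python, with the balance scale represented by comparing sums of coin weights (the list-of-weights representation is an assumption for illustration, and the sketch presumes exactly one lighter coin exists):

```python
def find_fake(coins, lo=0, hi=None):
    """Locate the index of the single lighter coin using the three-pile
    decrease-by-a-factor-of-3 scheme. `coins` is a list of weights, a
    hypothetical stand-in for the physical balance scale."""
    if hi is None:
        hi = len(coins)
    n = hi - lo
    if n == 1:
        return lo                           # the remaining coin is fake
    third = -(-n // 3)                      # ceiling(n/3)
    a = coins[lo:lo + third]                # pile A
    b = coins[lo + third:lo + 2 * third]    # pile B (same size as A)
    if sum(a) == sum(b):                    # scale balances: fake is in C
        return find_fake(coins, lo + 2 * third, hi)
    elif sum(a) < sum(b):                   # pile A is lighter
        return find_fake(coins, lo, lo + third)
    else:                                   # pile B is lighter
        return find_fake(coins, lo + third, lo + 2 * third)

coins = [10] * 8
coins[5] = 9                                # coin 5 is the lighter fake
print(find_fake(coins))  # 5
```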
The Selection Problem
Problem
Find the ith smallest element in a set of n unsorted elements. This problem is referred to as the selection problem or the ith "order statistic".
If i = 1, this is finding the minimum element of the set.
If i = n, this is finding the maximum element of the set.
If i = n/2, this is finding the median, or halfway point, of the set.
INPUT: A set of n numbers and a number i, with 1 <= i <= n.
OUTPUT: The element x in the given set of n numbers that is larger than exactly i-1 other elements in that set.

Approach and Solution


We can use a sorting algorithm such as merge sort to sort the array and then return the ith element from the start of the array. This approach completes the job with O(n log n) time complexity.
Another approach is to divide the set of n numbers into two groups: one containing the elements less than a pivot value and one containing the elements greater than the pivot value, where the pivot value p is an element taken from the array. This gives us:
S1: elements < p
S2: elements > p
Note that the elements in S1 are not sorted, but all of them are smaller than element p. We know that p is the (|S1| + 1)th smallest of the n elements, where |S| denotes the size of S. This is the same idea used in the quicksort algorithm.

Designing the algorithm to find the ith smallest element from a given array:
 Select a pivot point, p, from the array.
 Split the array into S1 and S2, where all elements in S1 are less than p and all elements in S2 are greater than p.
 If i = |S1| + 1, then p is the ith smallest element.
 Otherwise, if i ≤ |S1|, then the ith smallest element is somewhere in S1. Repeat the process recursively on S1, looking for the ith smallest element.
 Otherwise, it is somewhere in S2. Repeat the process recursively on S2, looking for the (i - |S1| - 1)th smallest element.
The question arises of how to select p (the pivot value). Ideally, p is close to the median of the whole array; if p is the largest or the smallest element, the problem size is reduced by only 1. We can pick p in one of the following ways:
 Always pick the first or the last element as the pivot.
 Pick a random element as the pivot.
 Pick 3 random elements, and then pick the median of those 3 elements as the pivot.
Once we have p it is fairly easy to partition the elements.
Pseudo-code for partitioning
// Partitions array A[p..r]
Partition(A, p, r)
    // Choose first element as partition element
    x ← A[p]
    i ← p - 1
    j ← r + 1
    while true
        repeat j ← j - 1 until A[j] ≤ x
        repeat i ← i + 1 until A[i] ≥ x
        if i < j
            then exchange A[i] ↔ A[j]
        else
            // j indicates the index of the partition
            return j
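The selection procedure described above can be sketched in Python. For brevity, this sketch builds the S1/S2 groups with list comprehensions rather than the in-place partition above, and it assumes distinct elements:

```python
def quickselect(a, i):
    """Return the i-th smallest element (1-based) of an unsorted list of
    distinct values by partitioning around a pivot and recursing into
    one side only (decrease and conquer, variable size decrease)."""
    p = a[0]                        # pivot: first element (simplest choice)
    s1 = [x for x in a if x < p]    # elements < pivot
    s2 = [x for x in a if x > p]    # elements > pivot
    if i <= len(s1):
        return quickselect(s1, i)   # the answer lies in S1
    elif i == len(s1) + 1:
        return p                    # the pivot itself is the answer
    else:
        # the answer is the (i - |S1| - 1)th smallest element of S2
        return quickselect(s2, i - len(s1) - 1)

print(quickselect([7, 10, 4, 3, 20, 15], 3))  # 7
```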
Generating Subsets
Problem
Design an algorithm to generate the subsets of a given set.

Solution
Algorithm for building the subsets of the array:
 Consider one element from the input, i.e. subset[len] = S[pos]. We can decide either to include it in the current subset or not.
 Recursively form subsets including it, i.e. allSubsets(pos+1, len+1, subset).
 Recursively form subsets excluding it, i.e. allSubsets(pos+1, len, subset).
 Most importantly for efficiency, make sure to generate each set only once.

Pseudo-code for the above approach,


int S[N]
void allSubsets(int pos, int len, int[] subset)
{
    if (pos == N)
    {
        print(subset[0..len-1])   // only the first len entries are valid
        return
    }
    // Try the current element in the subset.
    subset[len] = S[pos]
    allSubsets(pos+1, len+1, subset)
    // Skip the current element.
    allSubsets(pos+1, len, subset)
}
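A runnable Python version of the same include/exclude recursion (names mirror the pseudo-code above):

```python
def all_subsets(s):
    """Generate all subsets of list s by deciding, for each position,
    whether to include that element (decrease by one element per call)."""
    result = []
    def rec(pos, subset):
        if pos == len(s):
            result.append(list(subset))   # record a finished subset
            return
        subset.append(s[pos])             # try the current element
        rec(pos + 1, subset)
        subset.pop()                      # skip the current element
        rec(pos + 1, subset)
    rec(0, [])
    return result

print(all_subsets([1, 2]))  # [[1, 2], [1], [2], []]
```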

Examples of Decrease and conquer

Decrease by one (constant)


Insertion sort
Graph Search – DFS, BFS
Topological sorting
Generating subsets / combinations

Decrease by constant factor


Binary Search
Fake coin
Josephus Problem
Russian Peasant Multiplication

Variable size decrease and conquer


Euclid’s Algorithm
Selection by partition
