Greedy Algorithms "Take What You Can Get Now" Strategy: Examples

Greedy algorithms make locally optimal choices at each step to construct an overall solution. They do not always produce the globally optimal solution. Examples include minimal spanning trees, shortest paths in graphs, and the knapsack problem. Divide-and-conquer and decrease-and-conquer algorithms break problems into smaller subproblems, solve the subproblems, and combine the solutions. Dynamic programming avoids recomputing solutions by storing results of subproblems in a table and building up to the overall solution from the bottom up.


1. Greedy Algorithms: the "take what you can get now" strategy

The solution is constructed through a sequence of steps, each expanding a partially constructed solution obtained so far. At each step the choice must be locally optimal; this is the central point of this technique.
Examples:
Minimal spanning tree
Shortest paths in graphs
Greedy algorithm for the Knapsack problem
The coin exchange problem
Huffman trees for optimal encoding
Greedy techniques are mainly used to solve optimization problems, but they do not always produce the globally optimal solution.
Example:
Consider the knapsack problem with a knapsack of capacity 10 and 4 items given by the <weight:value> pairs <5:6>, <4:3>, <3:5>, <3:4>. A greedy algorithm that always takes the remaining item of highest value will choose item 1 <5:6> and then item 3 <3:5>, for a total value of 11, while the optimal solution is to choose items 2, 3, and 4, obtaining a total value of 12.
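Here is a short Python sketch of this example (a minimal illustration, assuming the greedy criterion "highest value first"; the function name is ours):

    def greedy_knapsack(capacity, items):
        # Greedy 0/1 knapsack: repeatedly take the highest-value item that fits.
        # items is a list of (weight, value) pairs; not guaranteed to be optimal.
        chosen, total_weight, total_value = [], 0, 0
        for weight, value in sorted(items, key=lambda wv: wv[1], reverse=True):
            if total_weight + weight <= capacity:  # locally optimal choice
                chosen.append((weight, value))
                total_weight += weight
                total_value += value
        return chosen, total_value

    # The example above: greedy picks <5:6> then <3:5> for value 11,
    # although items 2, 3 and 4 would give the optimal value 12.
    print(greedy_knapsack(10, [(5, 6), (4, 3), (3, 5), (3, 4)]))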
It has been proven that greedy algorithms for the minimal spanning tree, the shortest
paths, and Huffman codes always give the optimal solution.

2. Divide-and-Conquer, Decrease-and-Conquer
These are methods of designing algorithms that (informally)
proceed as follows:
Given an instance of the problem to be solved, split this into several
smaller sub-instances (of the same problem), independently solve
each of the sub-instances and then combine the sub-instance
solutions so as to yield a solution for the original instance.
With the divide-and-conquer method the size of the problem
instance is reduced by a factor (e.g. half the input size), while with
the decrease-and-conquer method the size is reduced by a
constant.
Examples of divide-and-conquer algorithms:

Computing a^n (a > 0, n a nonnegative integer) by recursion (see the sketch after this list)
Binary search in a sorted array (recursion)
Mergesort algorithm, Quicksort algorithm (recursion)
The algorithm for solving the fake coin problem (recursion)
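A minimal Python sketch of the first example in this list, computing a^n by halving the exponent (the recursion bottoms out at n = 0, the minimal instance):

    def power(a, n):
        # Divide-and-conquer a**n: the instance size n is cut in half each call.
        if n == 0:                   # minimal instance / terminating condition
            return 1
        half = power(a, n // 2)      # solve one sub-instance of half the size
        if n % 2 == 0:
            return half * half       # combine: a^n = (a^(n/2))^2
        return half * half * a       # odd n: one extra factor of a

    print(power(2, 10))  # 1024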
Examples of decrease-and-conquer algorithms:
Insertion sort
Topological sorting
Binary Tree traversals: inorder, preorder and postorder
(recursion)
Computing the length of the longest path in a binary tree
(recursion)
Computing Fibonacci numbers (recursion)
Reversing a queue (recursion)
Warshall's algorithm (recursion)
There are two issues here:
1. How to solve the sub-instances
2. How to combine the obtained solutions
The answer to the second question depends on the nature of the problem.
In most cases the answer to the first question is: using the same method. Here another very important issue arises: when to stop decreasing the problem instance, i.e. what is the minimal instance of the given problem and how to solve it.
When we use recursion, the solution of the minimal instance is called the terminating condition.
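To make the terminating condition concrete, here is a recursive insertion sort in Python (a minimal decrease-by-one sketch; sorting in place is just one way to phrase the combining step):

    def insertion_sort(a, n=None):
        # Decrease-and-conquer: sort the first n-1 elements, then insert element n-1.
        if n is None:
            n = len(a)
        if n <= 1:                    # minimal instance: terminating condition
            return
        insertion_sort(a, n - 1)      # solve the sub-instance of size n-1
        key, i = a[n - 1], n - 2
        while i >= 0 and a[i] > key:  # insert a[n-1] into the sorted prefix
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key

    data = [5, 2, 4, 6, 1, 3]
    insertion_sort(data)
    print(data)  # [1, 2, 3, 4, 5, 6]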

3. Dynamic Programming
One disadvantage of using Divide-and-Conquer is that the process of recursively solving separate sub-instances can result in the same computations being performed repeatedly, since identical sub-instances may arise.
The idea behind dynamic programming is to avoid this pathology by
obviating the requirement to calculate the same quantity twice.
The method usually accomplishes this by maintaining a table of
sub-instance results.
Dynamic Programming is a Bottom-Up Technique in which the smallest sub-instances are explicitly solved first and the results of these are used to construct solutions to progressively larger sub-instances.


In contrast, Divide-and-Conquer is a Top-Down
Technique which logically progresses from the initial instance down
to the smallest sub-instance via intermediate sub-instances.
Examples:
Fibonacci numbers computed by iteration (sketched below)
Warshall's algorithm implemented by iteration
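A minimal Python sketch of the first example, computing Fibonacci numbers bottom-up with a table of sub-instance results (contrast this with the naive recursion, which recomputes identical sub-instances):

    def fib(n):
        # Dynamic programming: fill a table from the smallest sub-instances up.
        table = [0, 1]               # minimal instances F(0) and F(1)
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])  # reuse stored results
        return table[n]

    print(fib(10))  # 55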
