
MODULE 3

GREEDY METHOD

by Prof. Ambika S
Greedy – General Method
• This method is used for solving optimization problems.
• An optimization problem is the problem of finding the best solution from
all feasible solutions.
• An optimization problem demands either a maximum or a minimum result.
• A feasible solution is a subset that satisfies the given constraints.
• The optimal solution is the best and most favorable solution among the
feasible solutions.
• If more than one solution satisfies the given constraints, all of them are
feasible solutions, whereas the optimal solution is the best among them.
• Often it is easy to find a feasible solution but difficult to find the optimal
solution.
The control abstraction for the subset paradigm of the greedy method
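A minimal Python sketch of such a control abstraction is given below; the select, feasible and union helpers are problem-specific placeholders assumed here for illustration, not part of the original pseudocode.

def greedy(candidates, select, feasible, union):
    solution = []                      # start with an empty solution
    remaining = list(candidates)
    while remaining:
        x = select(remaining)          # pick the locally best candidate
        remaining.remove(x)
        if feasible(solution, x):      # keep it only if the constraints still hold
            solution = union(solution, x)
    return solution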
Coin Change Problem
Problem Statement:
Given coins of several denominations, find a way to give a customer an
amount using the fewest number of coins.

• Basic principle:
– At every iteration, take the largest coin that fits into the remaining
amount to be changed at that particular time.
– At the end you will have an optimal solution.
Example:
If the denominations are 1, 5, 10, 25 and 100 and the change required is 30,
some feasible solutions are:
Amount: 30
3 x 10 (3 coins)
6 x 5 (6 coins)
1 x 25 + 5 x 1 (6 coins)
1 x 25 + 1 x 5 (2 coins)
The last solution is the optimal one, as it gives the change with only 2 coins.
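A minimal Python sketch of this greedy rule (the function name coin_change is illustrative; the rule yields an optimal answer for canonical coin systems such as the one above, though not for arbitrary denominations):

def coin_change(denominations, amount):
    # Repeatedly take the largest denomination that fits into the remaining amount.
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

print(coin_change([1, 5, 10, 25, 100], 30))   # [25, 5] -> 2 coins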
Knapsack Problem (Fractional Knapsack Problem)
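A minimal sketch of the standard greedy strategy for the fractional knapsack problem: consider items in nonincreasing order of their profit-to-weight ratio and fill the knapsack greedily, taking a fraction of the last item if needed. The function name and sample values below are illustrative only.

def fractional_knapsack(profits, weights, capacity):
    # Consider items in nonincreasing order of profit/weight ratio.
    items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if capacity == 0:
            break
        take = min(w, capacity)        # take the whole item, or only a fraction of it
        total += p * (take / w)
        capacity -= take
    return total

print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))   # 240.0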
Job sequencing with deadlines
Problem statement
• Given an array of n jobs.
• Every job has a deadline and an associated profit if the job is finished
before the deadline.
• Every job takes a single unit of time for processing.
• How should the jobs be scheduled to maximize total profit if only one job
can be scheduled at a time?

• The greedy strategy to solve the job sequencing problem is:
– At each step, select the job that satisfies the constraints and gives the
maximum profit.
– i.e., consider the jobs in non-increasing order of their profits pi.
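A minimal Python sketch of this strategy; the job data and function name are illustrative, each job is assumed to be a (job_id, deadline, profit) triple, and each accepted job is placed in the latest free time slot on or before its deadline.

def job_sequencing(jobs):
    # Consider jobs in non-increasing order of profit.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(d for _, d, _ in jobs)
    slot = [None] * (max_deadline + 1)            # slot[t] holds the job run in time unit t
    for job_id, deadline, profit in jobs:
        for t in range(min(deadline, max_deadline), 0, -1):
            if slot[t] is None:                   # latest free slot on or before the deadline
                slot[t] = job_id
                break
    return [j for j in slot if j is not None]

print(job_sequencing([('J1', 2, 100), ('J2', 1, 19), ('J3', 2, 27),
                      ('J4', 1, 25), ('J5', 3, 15)]))   # ['J3', 'J1', 'J5']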
Minimum Spanning Tree (MST)
• A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a tree)
that contains all the vertices of the graph.
• A minimum spanning tree of a weighted connected graph is its spanning tree of
the smallest weight, where the weight of a tree is defined as the sum of the
weights on all its edges.
• The minimum spanning tree problem is the problem of finding a minimum
spanning tree for a given weighted connected graph.
Prim’s Algorithm
Analysis of Efficiency
• The efficiency depends on the data structures chosen
– for the graph, and for the priority queue of the set V − VT whose vertex priorities are the
distances to the nearest tree vertices.
• If the graph is represented by its weight matrix and the priority queue is implemented
as an unordered array, the running time will be in Θ(|V|²).
• On each of the |V| − 1 iterations,
– the array implementing the priority queue is traversed to find and delete the minimum
and then to update, if necessary, the priorities of the remaining vertices.
• We can implement the priority queue as a min-heap.
– A min-heap is a complete binary tree in which every element is less than or
equal to its children.
– Deletion of the smallest element from, and insertion of a new element into, a
min-heap of size n are O(log n) operations.
• If the graph is represented by its adjacency lists and the priority queue is
implemented as a min-heap,
– the running time is in O(|E| log |V|).
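A minimal sketch of Prim's algorithm using adjacency lists and Python's heapq min-heap, consistent with the O(|E| log |V|) bound above. The graph format is assumed for illustration: a dict mapping each vertex to a list of (neighbor, weight) pairs.

import heapq

def prim(graph, start):
    mst_edges = []
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]   # fringe edges out of the start vertex
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)                 # cheapest edge to a fringe vertex
        if v in visited:
            continue
        visited.add(v)
        mst_edges.append((u, v, w))
        for x, wx in graph[v]:                        # new fringe edges
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst_edges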
Kruskal's Algorithm
Working
• Sort the graph's edges in nondecreasing order of their weights.
• Then, starting with the empty subgraph,
– scan this sorted list, adding the next edge on the list to the current
subgraph if such an inclusion does not create a cycle, and simply skipping
the edge otherwise.
Analysis

• The efficiency of Kruskal's algorithm is based on the time needed for sorting the
edge weights of a given graph.
• With an efficient sorting algorithm, the time efficiency of Kruskal's algorithm
will be in O(|E| log |E|).
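A minimal sketch of Kruskal's algorithm; cycle detection is done here with a simple union-find structure, and the edge format (weight, u, v) is assumed for illustration.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(x):                          # set representative, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):         # edges in nondecreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                      # the edge joins two different components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst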
Single Source Shortest Path
Problem Statement
• Given a graph and a source vertex in the graph, find the shortest paths from
the source to all vertices in the given graph.
• Dijkstra's Algorithm is the best-known algorithm
• It is similar to Prim’s algorithm
• This algorithm is applicable to undirected and directed graphs with
nonnegative weights only.
Dijkstra's Algorithm
Working
• First, it finds the shortest path from the source to the vertex nearest to it, then to a
second nearest, and so on.
• In general, before its ith iteration commences, the algorithm has already
identified the shortest paths to the i − 1 vertices nearest to the source.
• The next vertex nearest to the source can be found among the vertices adjacent to
the vertices of the current tree Ti; these are called "fringe vertices".
• They are the candidates from which Dijkstra's algorithm selects the next vertex
nearest to the source.
• To identify the ith nearest vertex, the algorithm computes, for every fringe vertex u,
– the sum of the distance to the nearest tree vertex v and the length d of the shortest
path from the source to v (already computed),
• then selects the vertex with the smallest such sum.
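A minimal sketch of Dijkstra's algorithm with adjacency lists and a heapq min-heap, matching the O(|E| log |V|) bound in the analysis below. The graph format is assumed for illustration: a dict mapping each vertex to a list of (neighbor, nonnegative weight) pairs.

import heapq

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)        # next vertex nearest to the source
        if d > dist[u]:
            continue                      # stale heap entry, skip it
        for v, w in graph[u]:
            if d + w < dist[v]:           # shorter path to v found through u
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist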
Analysis
• The efficiency is Θ(|V|²) for graphs represented by their weight matrix and the
priority queue implemented as an unordered array.
• For graphs represented by their adjacency lists and the priority queue
implemented as a min-heap, it is in O(|E| log |V|).
Optimal Tree Problem
Background
• Suppose we have to encode a text that comprises characters from some
n-character alphabet.
• Encode the text by assigning to each of its characters some sequence of bits
called the codeword.
• There are two types of encoding: fixed-length encoding and variable-length
encoding.
Fixed length Coding
• This method assigns to each character a bit string of the same length m.
• Example: ASCII code.

Variable length Coding
• This method assigns codewords of different lengths to different characters.
Huffman Trees and Codes
Huffman's Algorithm
• Step 1: Initialize n one-node trees and label them with the characters of the alphabet. Record the frequency of
each character in its tree's root to indicate the tree's weight.
• Step 2: Repeat the following operation until a single tree is obtained.
– Find the two trees with the smallest weights. Make them the left and right subtrees of a new tree and record the
sum of their weights in the root of the new tree as its weight.
• A tree constructed by the above algorithm is called a Huffman tree. It defines, in the manner described, a
Huffman code.
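A minimal sketch of Huffman's algorithm using a heapq min-heap of trees, with left edges labelled '0' and right edges '1'; the frequencies in the example call are illustrative only.

import heapq
from itertools import count

def huffman_codes(frequencies):
    tie = count()                                       # tie-breaker so tuples compare cleanly
    heap = [(f, next(tie), c) for c, f in frequencies.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)               # two smallest-weight trees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    _, _, root = heap[0]

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):                     # internal node: recurse into subtrees
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:                                           # leaf: a character
            codes[node] = prefix or '0'
    walk(root, '')
    return codes

print(huffman_codes({'A': 0.35, 'B': 0.1, 'C': 0.2, 'D': 0.2, '_': 0.15}))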
Transform and Conquer
Two stages
1. The transformation stage: the problem's instance is modified to be, for one
reason or another, more amenable to solution.
2. The conquering stage: it is solved.
Heap
• A heap is a partially ordered data structure that is especially suitable for
implementing priority queues.
• A priority queue is a multiset of items with an orderable characteristic called an
item's priority, with the following operations:
– finding an item with the highest priority
– deleting an item with the highest priority
– adding a new item to the multiset
• Definition: A heap can be defined as a binary tree with keys assigned to its nodes, one key per node,
provided the following two conditions are met:
– The shape property: the binary tree is essentially complete
(or simply complete), i.e., all its levels are full except
possibly the last level, where only some rightmost leaves
may be missing.
– The parental dominance or heap property: the key in each
node is greater than or equal to the keys in its children.

Properties of Heap
1. There exists exactly one essentially complete
binary tree with n nodes. Its height is equal to
⌊log₂ n⌋.
2. The root of a heap always contains its largest
element.
3. A node of a heap considered with all its
descendants is also a heap.
4. A heap can be implemented as an array by
recording its elements in the top-down, left-to-
right fashion (see the index sketch after this list).
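As a small illustration of property 4, a sketch assuming 0-based array indexing: for the element at index i, its parent is at (i − 1) // 2 and its children are at 2i + 1 and 2i + 2. The sample heap below is illustrative.

heap = [10, 8, 7, 5, 2, 1, 6, 3, 5, 1]   # a heap recorded top-down, left to right

i = 4
print(heap[(i - 1) // 2])   # parent of heap[4] -> 8
print(heap[2 * i + 1])      # left child of heap[4] -> 1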
Construction of Heap
• There are two principal alternatives for constructing a heap:
1. Bottom-up heap construction
2. Top-down heap construction
Bottom-up heap construction
1. Initialize the essentially complete binary tree with n nodes by placing keys in the
order given.
2. Then "heapify" the tree as follows.
1. Starting with the last parental node, check whether the parental
dominance holds for its key K.
• If it does not, exchange the node's key K with the larger key of its children
and check whether the parental dominance holds for K in its new position.
• This process continues until the parental dominance for K is satisfied.
2. After completing the "heapification" of the subtree rooted at the current
parental node, proceed to do the same for the node's immediate predecessor.
3. The algorithm stops after this is done for the root of the tree.
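A minimal sketch of bottom-up heap construction, assuming 0-based array indexing; the sample array is illustrative.

def heapify_bottom_up(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):      # last parental node is at index n//2 - 1
        sift_down(a, i, n)

def sift_down(a, i, n):
    k = a[i]
    while 2 * i + 1 < n:                     # while node i has at least one child in a[0:n]
        j = 2 * i + 1
        if j + 1 < n and a[j + 1] > a[j]:    # pick the larger child
            j += 1
        if k >= a[j]:                        # parental dominance holds for K
            break
        a[i] = a[j]
        i = j
    a[i] = k

h = [2, 9, 7, 6, 5, 8]
heapify_bottom_up(h)
print(h)   # [9, 6, 8, 2, 5, 7]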
Top-down heap construction algorithm
• It constructs a heap by successive insertions of a new key into a previously
constructed heap.
1. First, attach a new node with key K in it after the last leaf of the existing heap.
2. Then sift K up to its appropriate place in the new heap as follows.
a. Compare K with its parent's key: if the latter is greater than or
equal to K, stop (the structure is a heap);
b. Otherwise, swap these two keys and compare K with its new
parent.
This swapping continues until K is not greater than its current parent
or it reaches the root.
Inserting a new key into an already existing heap
• Example: inserting a new key (10)
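A minimal sketch of this insertion (sift-up) step, assuming 0-based array indexing; the starting heap below is illustrative, and the key inserted is 10 as in the example above.

def heap_insert(a, key):
    a.append(key)                            # attach the new key after the last leaf
    i = len(a) - 1
    while i > 0 and a[(i - 1) // 2] < a[i]:  # parent is smaller than the new key
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

h = [9, 6, 8, 2, 5, 7]
heap_insert(h, 10)
print(h)   # [10, 6, 9, 2, 5, 7, 8]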
Delete an item from a heap
Maximum Key Deletion from a heap
1. Exchange the root’s key with the last key K of the heap.
2. Decrease the heap’s size by 1.
3. "Heapify" the smaller tree by sifting K down the tree, exactly as in the
bottom-up heap construction algorithm.
That is, verify the parental dominance for K: if it holds, we are done;
if not, swap K with the larger of its children and repeat this operation until the
parental dominance condition holds for K in its new position.
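A minimal sketch of maximum key deletion, reusing the sift_down helper from the bottom-up construction sketch above; the sample heap is illustrative.

def delete_max(a):
    a[0], a[-1] = a[-1], a[0]                # step 1: exchange the root with the last key
    maximum = a.pop()                        # step 2: decrease the heap's size by 1
    if a:
        sift_down(a, 0, len(a))              # step 3: sift the new root key down
    return maximum

h = [9, 6, 8, 2, 5, 7]
print(delete_max(h), h)   # 9 [8, 6, 7, 2, 5]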
Heap Sort
• This is a two-stage algorithm that works as follows.
Stage 1 (heap construction): Construct a heap for a given array.
Stage 2 (maximum deletions): Apply the root-deletion
operation n−1 times to the remaining heap.
• As a result, the array elements are eliminated in decreasing order.
• Since under the array implementation of heaps an element being deleted is
placed last, the resulting array will be exactly the original array sorted in
increasing order.
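A minimal sketch of heap sort, reusing the heapify_bottom_up and sift_down helpers from the sketches above; the sample array is illustrative.

def heap_sort(a):
    heapify_bottom_up(a)                     # stage 1: heap construction
    for end in range(len(a) - 1, 0, -1):     # stage 2: n - 1 maximum deletions
        a[0], a[end] = a[end], a[0]          # move the current maximum to the end
        sift_down(a, 0, end)                 # restore the heap on a[0:end]
    return a

print(heap_sort([2, 9, 7, 6, 5, 8]))   # [2, 5, 6, 7, 8, 9]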
