GRAPH ALGORITHMS
INTRODUCTION
A simple path is a path with no vertex repeated. A simple cycle is a simple path except that the first and last vertices are the same and, for an undirected graph, the number of vertices is at least 3. A tree is a connected graph with no cycle. A complete graph is a graph in which all edges are present. A sparse graph is a graph with relatively few edges, while a dense graph is a graph with many edges. An undirected graph has no specific direction on its edges. A directed graph has edges that are "one way": we can go from one vertex to another, but not vice versa. A weighted graph has a weight associated with each edge; the weights can represent distances, costs, etc.
An adjacency list is an array that contains a list for each vertex. The list for a given vertex holds all the other vertices that are connected to that vertex by a single edge.
An adjacency matrix is a matrix with a row and a column for each vertex. The entry is 1 if there is an edge between the row vertex and the column vertex, and 0 otherwise. The diagonal may be zero or one. In an undirected graph, every edge is represented twice.
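As a small illustrative sketch (the vertices and edges below are invented for illustration, not taken from any graph in this chapter), both representations of the same undirected graph can be built as follows in Python:

# A small undirected graph on vertices 0..3 with edges (0,1), (0,2), (1,2), (2,3).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency list: an array with one list per vertex, holding its neighbours.
adj_list = [[] for _ in range(n)]
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)              # each undirected edge appears in two lists

# Adjacency matrix: entry [u][v] is 1 if there is an edge between u and v.
adj_matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    adj_matrix[u][v] = 1
    adj_matrix[v][u] = 1               # every edge is represented twice

print(adj_list)    # [[1, 2], [0, 2], [0, 1, 3], [2]]
print(adj_matrix)  # [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]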
The breadth-first search tree T has a nice property: every edge of G can be classified into one of three groups. Some edges are in T themselves; some connect two vertices at the same level of T; and the remaining ones connect two vertices on adjacent levels. It is not possible for an edge to skip a level. Therefore, T really is a shortest-path tree starting from its root: every vertex has a path to the root with path length equal to its level (just follow the tree itself), and since no path can skip a level, this really is a shortest path.
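This shortest-path property is easiest to see in code. The following is a minimal Python sketch of breadth-first search (the graph is assumed to be given as a dictionary mapping each vertex to its neighbour list; the colouring and the d/π fields follow the convention used in the example below):

from collections import deque

def bfs(adj, s):
    # adj: dict mapping each vertex to the list of its neighbours; s: source vertex
    color = {u: 'WHITE' for u in adj}
    d     = {u: float('inf') for u in adj}   # level, i.e. shortest-path distance from s
    pi    = {u: None for u in adj}           # predecessor in the BFS tree
    color[s], d[s] = 'GRAY', 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if color[v] == 'WHITE':          # v is discovered for the first time
                color[v] = 'GRAY'
                d[v] = d[u] + 1              # v sits one level below u
                pi[v] = u
                q.append(v)
        color[u] = 'BLACK'                   # all edges out of u have been explored
    return d, pi

The loop touches each vertex and each edge a constant number of times, so the running time is O(V + E).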
Example: Examine the graph G in the figure and describe the whole process of breadth-first search.
Solution: BFS proceeds level by level from the source; each newly discovered vertex is colored GRAY and its predecessor is recorded (for example, Color[5] = GRAY, π[6] ← 3, π[5] ← 3).
In depth-first search, edges are explored out of the most recently discovered vertex v that still has unexplored edges leaving it. When all of v's edges have been explored, the search backtracks to explore edges leaving the vertex from which v was discovered. This process continues until we have discovered all the vertices that are reachable from the original source vertex. If any undiscovered vertices remain, then one of them is selected as a new source and the search is repeated from that source. This entire process is repeated until all vertices are discovered.
In DFS, each vertex v has two timestamps: the first timestamp d[v], the discovery time, records when v is first discovered (grayed), and the second timestamp f[v], the finishing time, records when the search finishes examining v's adjacency list (blackened). For every vertex u, d[u] < f[u]. The running time of DFS is Θ(V + E).
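A minimal recursive sketch of this process (the graph is assumed to be given as adjacency lists in a dictionary; the shared time counter produces exactly the d[v] and f[v] timestamps described above):

def dfs(adj):
    # adj: dict mapping each vertex to the list of its neighbours
    color = {u: 'WHITE' for u in adj}
    d, f = {}, {}
    pi = {u: None for u in adj}
    time = 0

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time              # discovery time: u is grayed
        color[u] = 'GRAY'
        for v in adj[u]:
            if color[v] == 'WHITE':
                pi[v] = u
                visit(v)
        color[u] = 'BLACK'       # finished examining u's adjacency list
        time += 1
        f[u] = time              # finishing time

    for u in adj:                # restart from a new source while vertices remain
        if color[u] == 'WHITE':
            visit(u)
    return d, f, pi

Note that d[u] < f[u] for every vertex, and each vertex and edge is handled a constant number of times, giving the Θ(V + E) bound.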
Example: Show the progress of the depth-first search (DFS) algorithm on a directed graph.
Solution: The progress of DFS, with the discovery time d[v] and finishing time f[v] recorded for each vertex, is traced in the accompanying figures.
TOPOLOGICAL SORT
Directed acyclic graphs, or DAGs, are used for topological sorts. A topological sort of a directed acyclic graph G = (V, E) is a linear ordering of its vertices such that if (u, v) ∈ E, then u appears before v in this ordering. If G is cyclic, no such ordering exists.
TOPOLOGICAL-SORT (G)
1. Call DFS(G) to determine the finishing time f[v] of each vertex v.
2. As each vertex is finished, insert it onto the front of a linked list.
3. Return the linked list of vertices.
A topological sort can also be viewed as placing all the vertices along a horizontal line so that all directed edges go from left to right. DAGs are used in larger applications to determine precedence. We can perform a topological sort in Θ(V + E) time.
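A direct Python translation of the three steps above (the graph is assumed to be a DAG given as adjacency lists; a Python list plays the role of the linked list):

def topological_sort(adj):
    # adj: adjacency lists of a directed acyclic graph
    visited = set()
    order = []                       # acts as the linked list of finished vertices

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                visit(v)
        order.insert(0, u)           # insert u at the front when it finishes

    for u in adj:
        if u not in visited:
            visit(u)
    return order

# Example: edges a->b, a->c, b->d, c->d
print(topological_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}))
# one valid ordering: ['a', 'c', 'b', 'd']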
SPANNING TREE
A spanning tree of a graph is a subgraph that contains all the vertices and is a tree. A graph may have many spanning trees. We can also assign a weight to every edge, which is a number representing how costly it is, and use this to assign a weight to a spanning tree by calculating the sum of the weights of the edges in the tree. A minimum spanning tree, or minimum-weight spanning tree, is then a spanning tree with weight less than or equal to the weight of every other spanning tree. Any undirected graph has a minimum spanning forest.
To describe how to find a minimum spanning tree, we will look at two algorithms: Kruskal's algorithm and Prim's algorithm. The two differ in their methodology, but both end up with the MST. Kruskal's algorithm uses edges, and Prim's algorithm uses vertex connections in determining the MST. Both are greedy algorithms that run in polynomial time: at every step, one of several possible choices must be made, and the greedy strategy takes the locally best one.
KRUSKAL'S ALGORITHM
Kruskal's algorithm finds a minimum spanning tree for a connected weighted graph. It finds a safe edge to add to the growing forest by finding, of all the edges that connect any two trees in the forest, an edge (u, v) of least weight. In other words, it selects a subset of the edges that forms a tree containing every vertex, where the total weight of all the edges in the tree is minimized. If the graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component).
First, sort the edges by weight using a comparison sort in O(E log E) time. Next, we use a disjoint-set data structure to keep track of which vertices are in which components. We need to execute O(E) operations: two FIND-SET operations and possibly one UNION for every edge.
Even a simple disjoint-set data structure, such as disjoint-set forests with union by rank, can perform these O(E) operations in O(E log V) time. Thus the total time is O(E log E), which is the same as O(E log V) since E < V².
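A compact Python sketch of this strategy, using a disjoint-set forest with union by rank and path compression (the edge-list format (weight, u, v) is an assumption made here for convenience):

def kruskal(vertices, edges):
    # vertices: iterable of vertex names; edges: list of (weight, u, v) triples
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):                       # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                   # UNION by rank
        rx, ry = find(x), find(y)
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1

    mst = []
    for w, u, v in sorted(edges):      # sort edges by weight: O(E log E)
        if find(u) != find(v):         # u and v lie in different trees
            union(u, v)
            mst.append((u, v, w))      # safe edge: add it to the growing forest
    return mst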
Example: Find the minimum spanning tree of the following graph using Kruskal's algorithm.
Solution: First we initialize the set A to the empty set and create |V| trees, one containing each vertex, with the MAKE-SET procedure. Then we sort the edges in E into non-decreasing order by weight.
Now, for each edge (u, v) we check whether the endpoints u and v belong to the same tree. If they do, the edge (u, v) cannot be added. Otherwise, the two vertices belong to different trees, the edge (u, v) is added to A, and the vertices in the two trees are merged by the UNION procedure.
Then the edges (a, b) and (i, g) are considered and added, and the forest grows accordingly.
Now consider edge (h, i). Both h and i are in the same set, so adding it would create a cycle, and this edge is discarded.
Next, the edges (c, d), (b, c), (a, h), (d, e) and (e, f) are considered, and the forest grows accordingly. For edge (e, f), both endpoints e and f already lie in the same tree, so this edge is discarded.
After edge (d, f) is processed, the final spanning tree is shown in dark lines in the figure.
PRIM’S ALGORITHM
The main idea of Prim's algorithm is similar to that of Dijkstra's algorithm (discussed later) for finding shortest paths in a given graph. It has the property that the edges in the set A always form a single tree. We begin with some vertex v in a given graph G = (V, E), defining the initial set of vertices A. Then, in each iteration, we select a minimum-weight edge (u, v) connecting a vertex v in the set A to a vertex u outside of set A. The vertex u is then brought into A. This process is repeated until a spanning tree is formed. As in Kruskal's algorithm, the important fact about MSTs is that we always choose the smallest-weight edge joining a vertex inside set A to one outside of set A.
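A minimal Python sketch of Prim's algorithm using a binary heap as the priority queue (the adjacency-list format of (weight, neighbour) pairs is an assumption; DECREASE-KEY is simulated lazily by pushing duplicate entries and skipping stale ones):

import heapq

def prim(adj, r):
    # adj: dict mapping each vertex to a list of (weight, neighbour) pairs; r: root
    key = {v: float('inf') for v in adj}
    pi = {v: None for v in adj}
    key[r] = 0
    in_tree = set()
    pq = [(0, r)]                      # min-priority queue keyed by key[v]
    while pq:
        k, u = heapq.heappop(pq)       # EXTRACT-MIN
        if u in in_tree:
            continue                   # stale entry: u was already extracted
        in_tree.add(u)
        for w, v in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w             # lazy DECREASE-KEY
                pi[v] = u
                heapq.heappush(pq, (w, v))
    return pi                          # the MST edges are (v, pi[v]) for v != r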
Example: Create a minimum-cost spanning tree for the following graph using Prim's algorithm.
Solution: First we initialize the priority queue Q to hold all the vertices, and set the key of each vertex to ∞ except for the root, whose key is set to 0. Suppose vertex 0 is the root r. EXTRACT-MIN(Q) then gives u = r, with Adj[u] = [5, 1].
We remove u from the set Q and add it to the set V − Q of vertices in the tree. Now, we update the key and π fields of every vertex v adjacent to u but not in the tree.
The subsequent iterations proceed in the same way: the minimum-key vertex is extracted, and the key and π fields of its neighbours are updated (the trace records, for example, Key[5] = ∞, Adj[5] = [4], Adj[4] = [6, 3], key[3] = 22, Adj[3] = {4, 6, 2}, Key[2] = 12, Adj[2] = {3, 1}, 3 ∉ Q, Adj[1] = {0, 6, 2}), until Q is empty and the minimum spanning tree is complete.
DIJKSTRA’S ALGORITHM
Dijkstra's algorithm, named after its discoverer, the Dutch computer scientist Edsger Dijkstra, is a greedy algorithm that solves the single-source shortest path problem for a directed graph G = (V, E) with nonnegative edge weights, i.e., we assume that w(u, v) ≥ 0 for each edge (u, v) ∈ E.
It maintains a set S of vertices whose final shortest-path weights from the source have already been determined; for all vertices v ∈ S, we have d[v] = δ(s, v). The algorithm repeatedly chooses the vertex u ∈ V − S with the minimum shortest-path estimate, inserts u into S, and relaxes all edges leaving u. We maintain a priority queue Q that holds all the vertices in V − S, keyed by their d values. Graph G is represented by adjacency lists.
Because it always selects the "lightest" or "closest" vertex in V − S to insert into set S, we say that it uses a greedy strategy.
Dijkstra's algorithm bears some similarity to both breadth-first search and Prim's algorithm for computing minimum spanning trees. It is like breadth-first search in that the set S corresponds to the set of black vertices in a breadth-first search: just as vertices in S have their final shortest-path weights, black vertices in a breadth-first search have their correct breadth-first distances. It is like Prim's algorithm in that both algorithms use a priority queue to find the "lightest" vertex outside a given set, insert this vertex into the set, and adjust the weights of the remaining vertices outside the set accordingly.
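A short Python sketch of the algorithm with a binary-heap priority queue (adjacency lists of (v, w) pairs are assumed; as in the Prim sketch above, DECREASE-KEY is handled lazily):

import heapq

def dijkstra(adj, s):
    # adj: dict mapping u to a list of (v, w(u, v)) pairs, all weights >= 0; s: source
    d = {v: float('inf') for v in adj}
    pi = {v: None for v in adj}
    d[s] = 0
    S = set()                              # vertices whose shortest-path weight is final
    pq = [(0, s)]                          # min-priority queue keyed by d values
    while pq:
        du, u = heapq.heappop(pq)          # EXTRACT-MIN over V - S
        if u in S:
            continue                       # stale entry
        S.add(u)
        for v, w in adj[u]:                # relax all edges leaving u
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pi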
ANALYSIS
Dijkstra's algorithm performs |V| EXTRACT-MIN operations and at most |E| DECREASE-KEY (relaxation) operations. With the priority queue implemented as a binary min-heap, each operation takes O(lg V) time, giving a total running time of O((V + E) lg V); with a simple array-based queue the running time is O(V²).
Example: The trace on a sample graph initializes d[s] = 0 and d[v] = ∞ for every other vertex, then repeatedly extracts the closest remaining vertex, e.g. "E" ← EXTRACT-MIN(Q), and relaxes the edges leaving it.
INTRODUCTION
The all-pairs shortest path problem can be considered the mother of all routing problems. It aims to compute the shortest path from each vertex v to every other vertex u. Using standard single-source algorithms, a naïve implementation takes O(n³) time if we use Dijkstra, for example, i.e., running an O(n²) procedure n times. Likewise, if we use the Bellman-Ford-Moore algorithm on a dense graph, it will take about O(n⁴), but it handles negative arc lengths too.
Storing all paths explicitly can be very memory-expensive indeed, as we need one spanning tree for each vertex; this is often impractical in terms of memory consumption. So these are usually treated as all-pairs shortest distance problems, which aim to find just the distance from each node to every other node. We want the output in tabular form: the entry in u's row and v's column should be the weight of a shortest path from u to v.
Algorithm               Cost
Matrix multiplication   O(V³ lg V)
Floyd-Warshall          O(V³)
Johnson                 O(V² lg V + VE)
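For reference, the O(V³) Floyd-Warshall entry in the table corresponds to a very short dynamic-programming routine; a sketch (the weight matrix is assumed to use float('inf') for missing edges and 0 on the diagonal):

def floyd_warshall(W):
    # W: n x n matrix of edge weights; W[i][i] = 0, float('inf') where no edge exists
    n = len(W)
    D = [row[:] for row in W]              # D starts as the direct-edge distances
    for k in range(n):                     # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D                               # D[i][j] = shortest distance from i to j

The three nested loops over the n vertices give the O(V³) running time shown in the table.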
NP – COMPLETENESS
CLASSES OF PROBLEMS
16 | P a g e
www.byjusexamprep.com
These problems fall somewhere between class (3) and class (4) given above. However, for every one of the problems in the class, it is known that it is in NP, i.e., each can be solved by at least one non-deterministic Turing machine whose running time is a polynomial function of the size of the problem. Now, we can go still further and categorize the problems as follows.
NP is the set of all problems that can be solved if we always guess correctly which computation path we should follow. Roughly speaking, it includes problems that have exponential-time algorithms but for which it has not been proved that no polynomial-time algorithm exists. A language L ∈ NP if and only if there exists a polynomial-time verification algorithm A for L.
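As an illustration of what such a verification algorithm looks like, here is a hedged sketch of a polynomial-time verifier for the Hamiltonian-cycle language (the certificate is a proposed ordering of the vertices; the function name and input format are hypothetical, chosen only for this example):

def verify_hamiltonian_cycle(adj, certificate):
    # adj: adjacency lists of graph G; certificate: a proposed ordering of the vertices.
    # Accept iff the certificate visits every vertex exactly once and consecutive
    # vertices (including last -> first) are joined by an edge. Runs in polynomial time.
    if sorted(certificate) != sorted(adj):
        return False                       # not a permutation of the vertex set
    n = len(certificate)
    for i in range(n):
        u, v = certificate[i], certificate[(i + 1) % n]
        if v not in adj[u]:
            return False                   # a claimed edge of the cycle is missing
    return True

Verifying a given certificate is easy; the hard part is finding one, which is exactly what the nondeterministic "guessing" step is allowed to do.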
EXAMPLE OF LANGUAGES IN NP
Theorem: If L1 ≤p L2 and L2 ∈ P, then L1 ∈ P.
Proof. L2 ∈ P means that we have a polynomial-time algorithm A2 for L2. Since L1 ≤p L2, we have a polynomial-time transformation f mapping an input x for L1 to an input f(x) for L2. Combining these, we get the following polynomial-time algorithm for solving L1:
1. Given an input x for L1, compute f(x).
2. Run A2 on f(x) and output its answer.
Each of steps (1) and (2) takes polynomial time, so the combined algorithm takes polynomial time. Hence L1 ∈ P.
NOTE: This does not imply that if L1 ≤p L2 and L1 ∈ P, then L2 ∈ P. That statement is not true.
NP – COMPLETENESS
Polynomial-time reductions provide a formal means of showing that one problem is at least as hard as another, to within a polynomial-time factor. That is, if L1 ≤p L2, then L1 is not more than a polynomial factor harder than L2, which is why the "less than or equal to" notation for reductions is mnemonic. We can now define the set of NP-complete languages, which are the hardest problems in NP. A language L is NP-complete if:
1. L ∈ NP; and
2. for every L' ∈ NP, L' ≤p L.
19 | P a g e
www.byjusexamprep.com
Theorem
PROVING NP – COMPLETENESS
To prove that a problem P is NP-complete directly from the definition, show that:
a. P is in NP; and
b. every problem in NP can be reduced to P.
In practice, it is easier to use a problem that is already known to be NP-complete:
a. show that P is in NP;
b. find a problem P' that has already been proven to be NP-complete; and
c. show that P' ≤p P.
21 | P a g e