Advances in Algorithms – Unit 5
The Bellman-Ford algorithm is a classical algorithm used for finding the shortest paths
from a single source node to all other nodes in a weighted graph. Unlike Dijkstra's
algorithm, the Bellman-Ford algorithm can handle graphs with negative weight edges and
can also detect negative weight cycles.
Key Features of Bellman-Ford:
Can handle negative weight edges.
Can detect negative weight cycles in the graph.
Works on both directed and undirected graphs (note that in an undirected graph, any single negative edge already forms a negative cycle).
Suitable for graphs with negative edge weights; however, when a negative weight cycle is reachable from the source, shortest paths are not defined, and the algorithm can only detect the cycle rather than compute distances.
Working of the Bellman-Ford Algorithm
The Bellman-Ford algorithm works by iteratively relaxing all the edges of the graph.
Relaxation means that the algorithm tries to improve the shortest known distance to each
vertex by checking if a shorter path can be found by going through a specific edge.
Steps of the Algorithm:
1. Initialization:
o Set the distance to the source vertex as 0 and the distance to all other vertices
as infinity (∞).
o Let the source node be S, and the distance to S is initialized as d[S] = 0.
2. Relaxation:
o Repeat the following process for V - 1 times (where V is the number of
vertices in the graph):
For each edge (u, v) with weight w(u, v), check if the current known
distance to vertex v can be improved by going through vertex u.
Specifically, check if d[u] + w(u, v) < d[v]. If this is true, update d[v]
to d[u] + w(u, v).
3. Negative Weight Cycle Detection:
o After V - 1 relaxations, repeat the relaxation step once more. If any distance
can still be updated, the graph contains a negative weight cycle. This is
because every shortest path uses at most V - 1 edges, so after V - 1 passes all
true shortest distances have been found; any further improvement can only
come from a cycle whose total weight is negative.
4. Return the Shortest Path Distances:
o After completing all the relaxations, the algorithm returns the shortest path
distances from the source node to all other nodes, provided there are no
negative weight cycles.
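The steps above translate directly into code. Below is a minimal Python sketch (the function name bellman_ford and the edge-list representation are our own illustrative choices, not part of any standard library):

def bellman_ford(num_vertices, edges, source):
    """Return shortest distances from source, or None if a negative
    weight cycle is reachable from the source.
    edges is a list of (u, v, w) tuples over vertices 0..num_vertices-1."""
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[source] = 0          # Step 1: initialization

    # Step 2: relax every edge V - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # Step 3: one more pass; any improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None

    # Step 4: return the shortest path distances.
    return dist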
Time Complexity of Bellman-Ford Algorithm
The time complexity of the Bellman-Ford algorithm is O(V * E), where V is the
number of vertices and E is the number of edges in the graph.
o The algorithm performs V - 1 relaxations, and for each relaxation, it checks all
edges, leading to the time complexity of O(V * E).
Handling Negative Weights and Negative Cycles
The Bellman-Ford algorithm is unique in that it can handle negative weight edges. In
traditional shortest path algorithms like Dijkstra's, negative weights would cause
incorrect results. However, the Bellman-Ford algorithm does not suffer from this
problem and can still compute the correct shortest paths even when negative weights
are present.
The algorithm can also detect negative weight cycles in a graph. If the algorithm can
still relax an edge after V - 1 relaxations, it indicates that there is a negative weight
cycle, because any such cycle would continue to reduce the path length indefinitely.
Application of Bellman-Ford Algorithm
Shortest Path Calculation: In networks or graphs with negative weights (e.g., in
financial transactions, or routing algorithms with fluctuating costs), Bellman-Ford is
used to find the shortest path.
Detecting Negative Weight Cycles: The Bellman-Ford algorithm is used to detect
negative weight cycles in a graph, which can be useful in certain scenarios such as in
arbitrage detection in currency exchange networks.
Used in other algorithms: Bellman-Ford is itself a dynamic-programming algorithm; it underlies distance-vector routing protocols such as RIP and is used as a subroutine in Johnson's algorithm for all-pairs shortest paths.
The Bellman-Ford algorithm is a single-source shortest path algorithm: it finds the
shortest distance from a single source vertex to all the other vertices of a weighted
graph.
There are other algorithms, such as Dijkstra's algorithm, that solve the same problem.
If the weighted graph contains negative weight values, however, Dijkstra's algorithm
is not guaranteed to produce the correct answer.
In contrast to Dijkstra's algorithm, the Bellman-Ford algorithm guarantees the correct
answer even when the weighted graph contains negative weight values, as long as there
is no negative weight cycle.
Consider the example graph below (the original figure is not reproduced; its edges and
weights appear in the walkthrough that follows).
Some of the edge weights in this graph are negative. The graph contains 6 vertices, so we
relax all the edges 5 times: the loop iterates V - 1 = 5 times to get the correct answer. If the
loop were iterated more than 5 times, the answer would stay the same; there would be no
further change in the distances between the vertices.
Relaxing means:
if d(u) + c(u, v) < d(v):
    d(v) = d(u) + c(u, v)
To find the shortest path of the above graph, the first step is note down all the edges which
are given below:
(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)
Let's consider the source vertex to be 'A'; therefore, the distance value at vertex A is 0 and
the distance value at every other vertex is infinity (∞). Since the graph has six vertices,
there will be five iterations.
First iteration
Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing
formula:
d(u) = 0
d(v) = ∞
c(u , v) = 6
Since (0 + 6) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 0 + 6 = 6
Therefore, the distance of vertex B is 6.
Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now use the relaxing
formula:
d(u) = 0
d(v) = ∞
c(u , v) = 4
Since (0 + 4) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 0 + 4 = 4
Therefore, the distance of vertex C is 4.
Consider the edge (A, D). Denote vertex 'A' as 'u' and vertex 'D' as 'v'. Now use the relaxing
formula:
d(u) = 0
d(v) = ∞
c(u , v) = 5
Since (0 + 5) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 0 + 5 = 5
Therefore, the distance of vertex D is 5.
Consider the edge (B, E). Denote vertex 'B' as 'u' and vertex 'E' as 'v'. Now use the relaxing
formula:
d(u) = 6
d(v) = ∞
c(u , v) = -1
Since (6 - 1) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 6 - 1 = 5
Therefore, the distance of vertex E is 5.
Consider the edge (C, E). Denote vertex 'C' as 'u' and vertex 'E' as 'v'. Now use the relaxing
formula:
d(u) = 4 , d(v) = 5 and c(u , v) = 3
Since (4 + 3) is greater than 5, there is no update. The value at vertex E remains 5.
Consider the edge (D, C). Denote vertex 'D' as 'u' and vertex 'C' as 'v'. Now use the relaxing
formula:
d(u) = 5, d(v) = 4 and c(u , v) = -2
Since (5 - 2) is less than 4, we update:
d(v) = d(u) + c(u, v) = 5 - 2 = 3
Therefore, the distance of vertex C is 3.
Consider the edge (D, F). Denote vertex 'D' as 'u' and vertex 'F' as 'v'. Now use the relaxing
formula:
d(u) = 5
d(v) = ∞
c(u , v) = -1
Since (5 - 1) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 5 - 1 = 4
Therefore, the distance of vertex F is 4.
Consider the edge (E, F). Denote vertex 'E' as 'u' and vertex 'F' as 'v'. Now use the relaxing
formula:
d(u) = 5
d(v) = 4
c(u, v) = 3
Since (5 + 3) is greater than 4, there is no update to the distance value of vertex F.
Consider the edge (C, B). Denote vertex 'C' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula:
d(u) = 3
d(v) = 6
c(u , v) = -2
Since (3 - 2) is less than 6, we update:
d(v) = d(u) + c(u, v) = 3 - 2 = 1
Therefore, the distance of vertex B is 1.
Now the first iteration is completed. We move to the second iteration.
Second iteration:
In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is
greater than 1, there is no update to vertex B.
The next edge is (A, C). Since (0 + 4) is greater than 3, there is no update to vertex C.
The next edge is (A, D). Since (0 + 5) equals 5, there is no update to vertex D.
The next edge is (B, E). Since (1 - 1) equals 0, which is less than 5, we update:
d(E) = d(B) + c(B, E) = 1 - 1 = 0
The next edge is (C, E). Since (3 + 3) equals 6, which is greater than the newly updated
value 0 at vertex E, there is no update.
The next edge is (D, C). Since (5 - 2) equals 3, there is no update to vertex C.
The next edge is (D, F). Since (5 - 1) equals 4, there is no update to vertex F.
The next edge is (E, F). Since d(E) has just been updated to 0, (0 + 3) equals 3, which is
less than 4, so we update:
d(F) = d(E) + c(E, F) = 0 + 3 = 3
The next edge is (C, B). Since (3 - 2) equals 1, there is no update to vertex B.
Third iteration
We perform the same steps as in the previous iterations and observe that there are no
further updates to the distances of the vertices. The final distances are:
A: 0
B: 1
C: 3
D: 5
E: 0
F: 3
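As a check, we can run the hypothetical bellman_ford sketch from earlier on this example graph, mapping vertices A–F to indices 0–5:

# Edges of the worked example: (u, v, weight), with A=0, B=1, C=2, D=3, E=4, F=5.
edges = [
    (0, 1, 6), (0, 2, 4), (0, 3, 5),    # (A,B), (A,C), (A,D)
    (1, 4, -1), (2, 4, 3), (3, 2, -2),  # (B,E), (C,E), (D,C)
    (3, 5, -1), (4, 5, 3), (2, 1, -2),  # (D,F), (E,F), (C,B)
]
print(bellman_ford(6, edges, source=0))
# Expected output: [0, 1, 3, 5, 0, 3]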
Drawbacks of the Bellman-Ford algorithm
o The Bellman-Ford algorithm does not produce a correct answer if the sum of the
edge weights around a cycle is negative. Let's understand this property through an
example. Consider the graph below (figure not reproduced).
o In this graph, we take vertex 1 as the source vertex and assign it the distance value 0.
Every other vertex is assigned the value infinity.
Since the graph contains 4 vertices, the Bellman-Ford algorithm performs only 3 iterations.
If we perform a 4th iteration on the graph, the distances of the vertices from the source
should not change; if any distance still changes, the graph contains a negative weight cycle
and the Bellman-Ford algorithm cannot provide correct shortest paths.
4th iteration
The first edge is (1, 3). Since (0 + 5) equals 5, which is greater than -6, there is no change at
vertex 3.
The next edge is (1, 2). Since (0 + 4) is greater than the current distance at vertex 2, there is
no update.
The next edge is (3, 2). Since (-6 + 7) equals 1, which is less than the current distance at
vertex 2, we update:
d(2) = d(3) + c(3, 2) = -6 + 7 = 1
The value at vertex 2 has changed in the 4th iteration. We therefore conclude that when the
graph contains a negative weight cycle, the Bellman-Ford algorithm cannot compute correct
shortest paths; it can only report that such a cycle exists.
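The earlier bellman_ford sketch detects exactly this situation: its extra pass after V - 1 iterations returns None when any edge can still be relaxed. A tiny hypothetical graph with a negative cycle illustrates the behaviour:

# Hypothetical graph: 0 -> 1 -> 2 -> 1, where the cycle 1 -> 2 -> 1
# has total weight 1 + (-3) = -2 (negative) and is reachable from 0.
edges_with_cycle = [(0, 1, 1), (1, 2, 1), (2, 1, -3)]
print(bellman_ford(3, edges_with_cycle, source=0))
# Expected output: None (a negative weight cycle was detected)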
Network Flow: Edmonds-Karp Algorithm
The Edmonds-Karp algorithm finds the maximum flow in a network graph. It is a
variant of the Ford-Fulkerson method in which augmenting paths are identified
using breadth-first search (BFS). First, we define the network flow notions required
throughout the discussion. We assume we're given a directed graph G = (V, E),
where V is the set of vertices and E is the set of edges of the graph. We also assume
that the size of V is N and the size of E is M.
Assume we have the network graph shown below (figure not reproduced); the edge
capacities are given on the edges.
Let's show how the Edmonds-Karp algorithm works by finding the maximum flow
on this graph step by step. Initially, we build the residual graph Gr, which fully
matches G, as there is no flow in G yet. For each edge of G, we track the current
flow on it followed by its capacity.
Next, we find the first shortest path from s to t using BFS. The path consists of
three edges, and the flow increases by 8 along this augmenting path. We update the
flow across the edges of G in parallel as the algorithm progresses.
As the flow changes, new reverse edges may appear in Gr. The second shortest
path from s to t consists of four edges, and the flow increases by 5 this time.
Next, we find another augmenting path consisting of four edges. This path
increases the flow by 2.
We are now in a situation where there are no augmenting paths left in Gr. This
indicates that the algorithm has finished and we have found the maximum flow
from s to t.
The maximum flow value may be obtained either by adding the flows on the
outgoing edges of s or by adding the flows on the incoming edges of t. In either
case, the maximum flow value is 15 for our example.
Example 2:
The Edmonds-Karp algorithm solves the maximum flow problem for a directed
graph. The flow comes from a source vertex (s) and ends up in a sink vertex (t),
and each edge in the graph allows a flow limited by a capacity. The Edmonds-Karp
algorithm is very similar to the Ford-Fulkerson algorithm, except that Edmonds-Karp
uses breadth-first search (BFS) to find augmenting paths along which the flow can
be increased.
The algorithm starts by using BFS to find an augmenting path where flow can be
increased, which in this example (figure not reproduced) is s→v1→v3→t. After
finding the augmenting path, a bottleneck calculation determines how much flow
can be sent through that path; here the bottleneck is 2, so a flow of 2 is sent over
each edge in the augmenting path.
The source vertex s only has an outgoing flow, and the sink vertex t has only
incoming flow. It is easy to see that the following equation holds:
∑_{(s,u) ∈ E} f((s,u)) = ∑_{(u,t) ∈ E} f((u,t))
Let's define one more thing. A residual capacity of a directed edge is the
capacity minus the flow. It should be noted that if there is a flow along some
directed edge (u, v) , then the reversed edge has capacity 0 and we can define the
flow of it as f((v, u)) = -f((u, v)) . This also defines the residual capacity for all
the reversed edges. We can create a residual network from all these edges,
which is just a network with the same vertices and edges, but we use the
residual capacities as capacities.
The Ford-Fulkerson method works as follows. First, we set the flow of each
edge to zero. Then we look for an augmenting path from s to t . An augmenting
path is a simple path in the residual graph where residual capacity is positive for
all the edges along that path. If such a path is found, then we can increase the
flow along these edges. We keep on searching for augmenting paths and
increasing the flow. Once an augmenting path doesn't exist anymore, the flow is
maximal.
Let us specify in more detail what increasing the flow along an augmenting path
means. Let C be the smallest residual capacity of the edges in the path. Then we
increase the flow in the following way: we update f((u, v)) += C and f((v, u)) -= C
for every edge (u, v) in the path. Here is an example to demonstrate the method.
We use the same flow network as above. Initially, we start with a flow of 0.
We can find the path s - A - B - t with the residual capacities 7, 5, and 8. Their
minimum is 5, therefore we can increase the flow along this path by 5. This
gives a flow of 5 for the network.
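The whole procedure fits in a short Python sketch (our own illustrative code, not a library API, using an adjacency matrix of capacities; it also returns the final residual matrix, which is useful for extracting a minimum cut later):

from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum flow from s to t. capacity is an n x n matrix where
    capacity[u][v] is the capacity of edge (u, v), or 0 if absent.
    Returns (max_flow, residual)."""
    n = len(capacity)
    residual = [row[:] for row in capacity]  # residual capacities
    max_flow = 0
    while True:
        # BFS finds the shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path left: the flow is maximal
            break
        # Bottleneck C: the smallest residual capacity along the path.
        bottleneck = float('inf')
        v = t
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Augment: f((u,v)) += C and f((v,u)) -= C along the path.
        v = t
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        max_flow += bottleneck
    return max_flow, residual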
1. Overview
In this section, we'll discuss how to find the minimum cut of a graph by
calculating the graph's maximum flow. We'll describe the max-flow min-cut
theorem and show how a minimum cut can be extracted once the maximum flow
is known.
2. Minimum Cut in a Graph
In general, a cut is a set of edges whose removal divides a connected graph
into two disjoint subsets. There are two variations of a cut: maximum cut and
minimum cut. Before discussing the max-flow min-cut theorem, it’s important
to understand what a minimum cut is.
Let’s assume a cut partition the vertex set into two sets and .
The net flow of a cut can be defined as the sum of
flow , where are two nodes , and . Similarly the
capacity of a cut is the sum of the individual capacities,
where are two nodes , and .
The minimum cut of a weighted graph is defined as the minimum sum of
weights of edges that, when removed from the graph, divide the graph into
two sets.
Let’s see an example:
Here in this graph, is an example of a minimum cut. It removes the
edges and , and the sum of weights of these two edges are minimum
among all other cuts in this graph.
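By the max-flow min-cut theorem, once a maximum flow has been computed, a minimum cut can be read directly off the residual graph: S is the set of vertices still reachable from s, and the cut edges are those leading from S to the rest. A sketch building on the hypothetical edmonds_karp function above:

def min_cut_edges(capacity, residual, s):
    """Edges crossing a minimum s-t cut, given the residual matrix
    returned by the edmonds_karp sketch above."""
    n = len(capacity)
    # S-side: vertices still reachable from s in the residual graph.
    seen = [False] * n
    seen[s] = True
    stack = [s]
    while stack:
        u = stack.pop()
        for v in range(n):
            if not seen[v] and residual[u][v] > 0:
                seen[v] = True
                stack.append(v)
    # Cut edges run from the reachable side S to the unreachable side T.
    return [(u, v) for u in range(n) for v in range(n)
            if seen[u] and not seen[v] and capacity[u][v] > 0]

Calling flow, residual = edmonds_karp(capacity, s, t) followed by min_cut_edges(capacity, residual, s) yields a cut whose total capacity equals the maximum flow value.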
NP-Completeness
Introduction
NP-completeness is a fundamental concept in theoretical computer science that
addresses the inherent difficulty of certain computational problems. The
abbreviation "NP" stands for nondeterministic polynomial time, the class of
decision problems that can be solved in polynomial time by a nondeterministic
Turing machine. NP-completeness makes it possible to identify and classify
problems that are believed to be inherently hard, in the sense that efficient
algorithms for solving them may not exist. Stephen Cook introduced the concept
in 1971 when he showed the existence of problems with this degree of complexity.
The cornerstone of NP-completeness is the notion of a "hardest" problem in NP. A
decision problem is said to be NP-complete if it satisfies two critical conditions:
o Being in NP: solutions to the problem can be quickly checked by a
deterministic polynomial-time algorithm. If someone proposes a solution, its
correctness can be verified relatively efficiently.
o Hardness: the problem is at least as difficult as the hardest problems in NP. If
there were an efficient algorithm for any one NP-complete problem, then there
would be an efficient algorithm for all problems in NP. The Cook-Levin
theorem established the first such problem by proving the NP-completeness of
the Boolean satisfiability problem (SAT).
The importance of NP-completeness lies in its effect on algorithmic efficiency: if
an efficient algorithm could be found for any NP-complete problem, this would
imply the existence of efficient algorithms for all NP problems. Basically, the
question is whether every problem whose solution can be verified quickly can also
be solved quickly. The identification of new NP-complete problems has been a
critical area of research, and many well-known problems in various fields,
including graph theory, optimization, and planning, have been shown to be NP-
complete. NP-completeness has far-reaching consequences for cryptography,
optimization, and understanding the limits of computing. The P vs. NP problem
remains one of the most important unsolved problems in computer science.
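To make the "quickly checked" condition concrete, here is a small Python sketch of a polynomial-time SAT certificate verifier (the clause encoding and function name are our own illustrative choices, not a standard library API):

def verify_sat(clauses, assignment):
    """clauses: list of clauses, each a list of nonzero ints;
    literal k means variable k is true, -k means it is false.
    assignment: dict mapping variable -> bool.
    Runs in time linear in the formula size (a polynomial-time check)."""
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 OR NOT x2) AND (x2 OR x3), with x1=True, x2=True, x3=False.
print(verify_sat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))
# Expected output: True

Finding a satisfying assignment may take exponential time with known algorithms, but checking a proposed one, as above, is fast; that asymmetry is exactly what membership in NP captures.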
History of NP-Completeness
The history of NP-completeness is closely related to the development of
computational complexity theory and the study of the inherent difficulty of
computational problems. Here is a brief overview of the major milestones:
o Boolean satisfiability problem (SAT): The concept of NP-completeness was
introduced by Stephen Cook in his landmark 1971 article, "The Complexity of
Theorem-Proving Procedures". Cook showed the existence of an NP-complete
problem by establishing the NP-completeness of the Boolean satisfiability
problem (SAT). Cook's result was groundbreaking and laid the foundation for
the theory of NP-completeness.
o Cook-Levin theorem: Stephen Cook's paper presented what is now called the
Cook-Levin theorem, which shows that the Boolean satisfiability problem is
NP-complete. This theorem is fundamental to NP-completeness and is the
starting point for proving the NP-completeness of other problems.
o Karp's 21 NP-complete problems: In 1972, Richard Karp published the seminal
paper "Reducibility Among Combinatorial Problems", in which he identified 21
distinct combinatorial problems that are NP-complete. This work showed that
many problems in different fields, including graph theory, set theory, and
optimization, share the same computational difficulty.
o NP-completeness proofs for various problems: After Karp's original set of 21
problems, researchers continued to prove the NP-completeness of additional
problems in various fields, including scheduling, graph coloring, and the
traveling salesman problem.
o Development of complexity classes: As researchers explored the landscape of
computational complexity, additional complexity classes such as P (polynomial
time), NP (nondeterministic polynomial time), and co-NP were defined. The
P vs. NP problem, which asks whether P equals NP, became a central question
in the field and remains unresolved.
o Cook's theorem and Levin's work: Cook's early work was influential, and
Leonid Levin independently developed similar ideas around the same time.
Levin's work contributed to the understanding of NP-completeness and
complexity theory.
o Contributions from other researchers: Many other researchers have made
important contributions to the field of NP-completeness, including Michael
Sipser, Neil Immerman, and others who extended the theory to concepts such
as the polynomial-time hierarchy and space complexity.
o Practical implications: Although NP-completeness implies theoretical hardness,
it also has practical consequences. Many real-world problems are NP-complete
and exhibit the corresponding computational challenges. This led to the
development of approximation algorithms and heuristics to solve these
problems in practice.
The history of NP-completeness is characterized by the study of the nature of
computation, the limits of efficient algorithms, and the classification of problems
based on their inherent difficulty. The P vs. NP problem remains one of the most
important open questions in computer science.
The NP-Completeness Problem
The notion of NP-completeness is a central theme in computational complexity
theory. It refers to a class of problems that are both in the complexity class NP
(nondeterministic polynomial time) and, informally speaking, as hard as the
hardest problems in NP. Here are the key points:
o Definition of NP-completeness: A decision problem is in NP if a proposed
solution can be verified in polynomial time. A problem is NP-complete if it is
in NP and every problem in NP can be reduced to it in polynomial time.
o Cook's theorem: Cook's theorem, proved by Stephen Cook in 1971, was the
first to establish the notion of NP-completeness. It showed that the Boolean
satisfiability problem (SAT) is NP-complete.
o Transitivity of polynomial-time reduction: If a known NP-complete problem A
can be reduced in polynomial time to a problem B, then B is NP-hard; if B is
also in NP, then B is NP-complete. Because polynomial-time reductions
compose, this is the basis for proving the NP-completeness of many problems.
o Consequences of NP-completeness: If any NP-complete problem can be solved
in polynomial time, then every problem in NP can be solved in polynomial
time. Whether this is possible is the P vs. NP question, one of the most famous
open problems in computer science.
o Common NP-complete problems: Many well-known problems are NP-
complete, including the traveling salesman problem, the knapsack problem,
and the clique problem.
o Practical implications: NP-completeness of a problem does not necessarily
mean that every instance is difficult to solve in practice. It means that no
algorithm is known that solves all instances in polynomial time; for the known
deterministic algorithms, the worst-case running time grows exponentially with
the input size.
o Approximation algorithms: For many NP-complete optimization problems,
finding an exact solution in polynomial time is believed to be impossible.
Instead, approximation algorithms are often used to find near-optimal solutions
in a reasonable amount of time.
o Applications: NP-complete problems arise in many fields, including
cryptography, optimization, planning, and logistics.
o Heuristic approaches: Since exactly solving NP-complete problems is difficult,
heuristic approaches and approximation algorithms are often used in practice
to find solutions that are good enough for practical purposes.
Understanding NP-completeness clarifies the limits of efficient computing and
informs the design of algorithms and problem-solving strategies.