MSC Math Graph Theory Unit 1
1. Introduction
In mathematics and computer science, a graph is a structure used to model relationships between
objects. It's a fundamental concept in a field called Graph Theory. Think of it as a visual way of
representing connections. The objects are represented by "dots" (called vertices), and the
connections between them are represented by "lines" (called edges). For example, a social
network can be modeled as a graph where each person is a vertex, and a "friendship" connection
is an edge. Understanding graphs is extremely important because they are used to solve real-
world problems in networking, logistics, biology, computer chip design, and many other areas.
This section will define what a graph is formally and explore the many different types of graphs
that exist, which are classified based on their properties and structure.
A graph is a mathematical structure consisting of two sets: a set of vertices and a set of edges.
Let's break this down with a simple example. Imagine we have four friends: Alice, Bob, Charlie,
and David.
The set of vertices V would be the friends: V={Alice, Bob, Charlie, David}.
Let's say Alice is friends with Bob, Bob is friends with Charlie, Charlie is friends with
David, and David is friends with Alice. The set of edges E represents these friendships:
E={{Alice, Bob},{Bob, Charlie},{Charlie, David},{David, Alice}}.
This graph G=(V,E) represents the friendship circle. The vertices are the fundamental points or
objects in our model, and the edges are the relationships that connect them.
To understand graphs properly, we need to know some basic terms that are used frequently.
Vertex (or Node): A single point or element in the graph. In our example, Alice is a
vertex.
Edge (or Link): A line that connects two vertices. The friendship between Alice and Bob
is an edge. An edge connecting vertices u and v is written as {u,v} for undirected graphs
or (u,v) for directed graphs.
Adjacent Vertices: Two vertices are called adjacent if there is an edge connecting them.
In our example, Alice and Bob are adjacent, but Alice and Charlie are not.
Incident Edge: An edge is said to be incident on the vertices it connects. The edge
{Alice, Bob} is incident on both vertex Alice and vertex Bob.
Degree of a Vertex: The degree of a vertex is the total number of edges incident on it. A
self-loop (an edge connecting a vertex to itself) is usually counted twice. For a vertex v,
its degree is denoted by deg(v). If Bob is only friends with Alice and Charlie, then
deg(Bob)=2.
Isolated Vertex: A vertex with a degree of 0. This means it is not connected to any other
vertex in the graph.
Pendant Vertex: A vertex with a degree of 1. It is also sometimes called a leaf vertex.
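These degree-related definitions can be checked mechanically. Below is a minimal Python sketch of the friendship example; the `degrees` helper is my own illustration, not standard library code:

```python
# Undirected friendship graph from the example above.
vertices = ["Alice", "Bob", "Charlie", "David"]
edges = [("Alice", "Bob"), ("Bob", "Charlie"),
         ("Charlie", "David"), ("David", "Alice")]

def degrees(vertices, edges):
    """Return deg(v) for every vertex; each undirected edge
    contributes 1 to the degree of both of its endpoints."""
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

deg = degrees(vertices, edges)
print(deg["Alice"])  # 2 — every friend in this cycle has degree 2
```

A vertex whose entry here is 0 would be isolated, and one whose entry is 1 would be pendant.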
Graphs can be broadly classified into two main categories based on whether their edges have a
direction.
Undirected Graph: An undirected graph is a graph where the edges do not have a
direction. The relationship between two vertices is mutual. If there is an edge between
vertex A and vertex B, it means you can go from A to B and also from B to A. The edge
is represented as an unordered pair {A,B}, which is the same as {B,A}.
o Example: A social network like Facebook. If you are friends with someone, they
are also friends with you. The friendship is a two-way relationship.
Directed Graph (Digraph): A directed graph, or digraph, is a graph where every edge
has a specific direction, indicated by an arrow. If there is an edge from A to B, it doesn't
necessarily mean there is an edge from B to A. The edge is an ordered pair (A,B), which
is different from (B,A).
o In directed graphs, the degree of a vertex is split into two types:
In-degree: The number of edges coming into the vertex.
Out-degree: The number of edges going out from the vertex.
o Example: A social network like Twitter or Instagram. You can follow someone
(an edge from you to them) without them following you back. Another example is
a network of one-way streets in a city.
Graphs are also categorized based on whether they allow multiple edges between the same two
vertices or edges that connect a vertex to itself.
Simple Graph: A simple graph is an undirected graph that has no self-loops and no
multiple edges. A self-loop is an edge that connects a vertex to itself. Multiple edges (or
parallel edges) are when two or more edges connect the same pair of vertices. Most of the
graphs studied in introductory graph theory are simple graphs.
Multigraph: A multigraph is a graph that allows multiple edges between the same two
vertices but has no self-loops.
o Example: A road network between two cities. There might be several different
highways connecting City A to City B. Each highway would be a separate edge.
Pseudograph: A pseudograph is the most general type of undirected graph. It allows
both multiple edges and self-loops.
o Example: A computer network diagram where a machine might have a
connection back to itself (a self-loop) for diagnostic purposes, and multiple cables
connecting two specific machines.
There are several special types of graphs that have unique properties and are very important in
both theory and application.
Complete Graph (Kn): A complete graph is a simple undirected graph where every
distinct pair of vertices is connected by a unique edge. A complete graph with n vertices
is denoted by Kn. In Kn, the degree of every vertex is n−1. The total number of edges in a
complete graph Kn is calculated by the formula:
Edges = n(n − 1) / 2
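As a quick check of this count, a short sketch (the helper name is mine) can generate K_n explicitly and compare against the formula:

```python
from itertools import combinations

def complete_graph_edges(n):
    """Edge set of K_n: one edge per unordered pair of distinct vertices."""
    return list(combinations(range(n), 2))

# The explicit count matches n(n-1)/2 for several values of n.
for n in range(2, 8):
    assert len(complete_graph_edges(n)) == n * (n - 1) // 2

print(len(complete_graph_edges(5)))  # 10 — K5 has 10 edges
```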
3. Conclusion
To summarize, a graph is a powerful tool for representing objects and their interconnections. The
fundamental components of a graph are its vertices (nodes) and edges (links). The most
important way to classify graphs is based on whether their edges are undirected (mutual
relationships) or directed (one-way relationships). Further classifications, such as simple
graphs, multigraphs, and pseudographs, depend on the rules about self-loops and multiple
edges. Finally, special structures like complete graphs, bipartite graphs, and weighted graphs
provide the foundation for solving a vast range of computational problems. A solid
understanding of these basic definitions and types is the first and most critical step in mastering
graph theory and its applications.
1. Introduction
In graph theory, we often want to describe how to get from one vertex to another by moving
along the edges. The terms Walk, Path, and Circuit are precise mathematical concepts used to
define different ways of traversing a graph. While they might sound similar, they have very
specific rules about repeating vertices and edges. Understanding these distinctions is
fundamental to studying graphs, as they form the basis for many important algorithms. For
example, finding the shortest route in a GPS system involves finding a specific type of "path,"
while determining if a network is fully connected involves checking for the existence of "walks"
between its nodes. This section will carefully define each of these concepts, explain their
properties with clear examples, and establish the relationship between them.
To make the explanations clear, let's use the following sample graph for all our examples:
Our sample graph has vertices V={A,B,C,D,E} and edges connecting them.
2.1 Walk: The Basic Traversal
A walk is the most general and unrestricted way of moving through a graph.
Definition: A walk is a sequence of vertices and edges, starting and ending with a vertex.
In the sequence, each vertex is followed by an edge that connects it to the next vertex. A
walk can be written as v0,e1,v1,e2,v2,...,ek,vk, where the edge ei connects the vertices
vi−1 and vi. For simple graphs, we often just list the sequence of vertices, like v0,v1,v2
,...,vk, since the edge between them is clear.
Key Property: In a walk, you are free to repeat both vertices and edges as many times as
you like.
Example: Using our sample graph, here is a valid walk from vertex A to vertex D:
A→B→C→B→A→C→D
o In this sequence, the vertex B is visited twice.
o The vertex C is visited twice.
o The vertex A is visited twice.
o The edge {A,B} is used twice: once traversed as A→B and once as B→A,
which are the same undirected edge. The same applies to the edge {B,C}.
Length of a Walk: The length of a walk is the number of edges it contains. The example
walk above has a length of 6.
Open and Closed Walks:
o Open Walk: A walk is open if its starting and ending vertices are different. Our
example A→...→D is an open walk.
o Closed Walk: A walk is closed if its starting and ending vertices are the same.
For example, A→B→C→A is a closed walk of length 3.
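These rules can be checked mechanically. A minimal sketch follows; since the figure for the sample graph is not reproduced here, the edge list below is my own assumption chosen to make the example walks valid:

```python
def is_walk(seq, edges):
    """A sequence of vertices is a walk iff every consecutive pair is
    joined by an edge; vertices and edges may repeat freely."""
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((u, v)) in edge_set for u, v in zip(seq, seq[1:]))

# Assumed edge list for the sample graph on vertices A-E.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E")]

print(is_walk(list("ABCBACD"), edges))  # True — the open walk from the example
print(is_walk(list("ABCA"), edges))     # True — the closed walk of length 3
```

The length of a walk is simply `len(seq) - 1`, the number of edges traversed.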
A path is the most restrictive and perhaps the most commonly used type of traversal: a
walk in which no vertex is repeated (and therefore no edge is repeated either). Sitting
between walks and paths is the trail, a walk in which no edge is repeated, although
vertices may be revisited. Just as walks have open and closed versions, trails and paths
have closed counterparts: a closed trail is called a circuit, and a closed path (one that
repeats only its first and last vertex) is called a cycle.
3. Conclusion
In graph theory, the way we traverse a graph is described with precise terminology. A Walk is
the most basic traversal, with no rules against reusing vertices or edges. A Trail tightens this rule
by forbidding the reuse of edges, while still allowing vertices to be revisited. A Path is the most
restrictive, forbidding the reuse of any vertex (and therefore any edge). Their closed counterparts
follow similar rules: a Circuit is a closed trail (no repeated edges), and a Cycle is a "simple"
circuit with no repeated intermediate vertices. These definitions are not just academic exercises;
they are the building blocks for algorithms that determine connectivity, find shortest routes,
detect dependencies, and analyze the very structure of networks. A clear understanding of the
hierarchy—Path ⊂ Trail ⊂ Walk—is essential for any further study in this field.
1. Introduction
One of the most fundamental properties of a graph is its connectivity. In simple terms,
connectivity tells us whether the graph is all in one piece or broken into several separate parts. A
graph is considered connected if you can get from any vertex to any other vertex by following a
sequence of edges. If you can't, the graph is disconnected. Think of a country's road network: if
you can drive from any city to any other city, the network is connected. If some cities are on an
island with no bridges to the mainland, the network is disconnected. This concept is crucial in
many real-world applications, such as computer networks (can all computers communicate?),
social networks (is everyone part of the same community?), and logistics (can goods be
transported between all warehouses?). Understanding connectivity helps us analyze the structure
and reliability of these networks.
2. Detailed Explanation (Key Points Expanded)
An undirected graph is said to be connected if there exists a path between every pair of distinct
vertices in the graph.
In simpler terms: Pick any two dots (vertices) in the graph. If you can always trace a
line (a path of edges) from the first dot to the second, then the entire graph is connected.
It means the graph is a single, unbroken structure.
Formal Condition: For any two vertices u and v in the graph's vertex set V, there is a
path from u to v.
Example:
In the graph above, you can find a path between any two vertices. For instance, to go
from vertex A to vertex D, you can follow the path A→B→C→D. Since a path can be
found between all possible pairs, this graph is connected.
In the graph above, you can travel between A, B, and C. You can also travel between D,
E, and F. However, there is no path from any vertex in the set {A,B,C} to any vertex in
the set {D,E,F}. Therefore, this graph is disconnected.
A disconnected graph is made up of several connected pieces. Each of these pieces is called a
component.
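The definition of a component suggests a direct algorithm: run breadth-first search from every vertex not yet visited, and each search sweeps out exactly one component. A minimal sketch:

```python
from collections import deque

def components(vertices, edges):
    """Split an undirected graph into its connected components using BFS."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, parts = set(), []
    for start in vertices:
        if start in seen:
            continue
        part, queue = set(), deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            part.add(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        parts.append(part)
    return parts

# The disconnected example: a triangle A-B-C and a triangle D-E-F.
parts = components("ABCDEF", [("A", "B"), ("B", "C"), ("C", "A"),
                              ("D", "E"), ("E", "F"), ("F", "D")])
print(len(parts))  # 2 — the graph is disconnected, with two components
```

A graph is connected exactly when this function returns a single component.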
For directed graphs, where edges have direction, connectivity is more complex. We have two
main types:
In the graph where A→B→C→A, you can get from A to B directly. You can get
from B to A by following the path B→C→A. Since this holds true for all pairs,
the graph is strongly connected. Every strongly connected graph is also weakly
connected.
2.5 Cut Vertices and Cut Edges (Bridges)
Cut Vertex (or Articulation Point): A cut vertex is a vertex that, if removed (along
with all edges connected to it), increases the number of connected components in the
graph. It's like a central train station that, if closed, would split the rail network into
separate parts.
o Example: Consider a graph shaped like a dumbbell: two triangles connected by a
single vertex in the middle. That middle vertex is a cut vertex. Removing it would
leave two disconnected triangles.
Cut Edge (or Bridge): A cut edge is an edge that, if removed, increases the number of
connected components. It's a critical link between two parts of a graph. An edge is a
bridge if and only if it is not part of any cycle.
o Example: In a tree, every edge is a bridge. If you have two clusters of nodes
connected by a single edge, that edge is a bridge. Removing it disconnects the
clusters.
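A brute-force way to find bridges follows straight from the definition: delete each edge in turn and see whether the number of components rises. The sketch below is fine for small graphs; dedicated algorithms (e.g. Tarjan's bridge-finding algorithm) are much faster on large ones:

```python
def component_count(vertices, edges):
    """Number of connected components, via iterative DFS."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, count = set(), 0
    for s in vertices:
        if s in seen:
            continue
        count += 1
        stack = [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return count

def bridges(vertices, edges):
    """An edge is a bridge iff removing it raises the component count."""
    base = component_count(vertices, edges)
    return [e for e in edges
            if component_count(vertices, [f for f in edges if f != e]) > base]

# Two triangles joined by the single edge C-D: only that edge is a bridge.
edges = [("A", "B"), ("B", "C"), ("C", "A"),
         ("C", "D"),
         ("D", "E"), ("E", "F"), ("F", "D")]
print(bridges("ABCDEF", edges))  # [('C', 'D')]
```

Note that the triangle edges are never bridges, matching the rule that an edge is a bridge if and only if it lies on no cycle.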
We can measure "how connected" a graph is using the following metrics, which tell us about its
robustness.
Vertex Connectivity κ(G): The vertex connectivity of a graph G, denoted κ(G), is the
minimum number of vertices that need to be removed to make the graph disconnected
(or reduce it to a single vertex). A higher number means the graph is more robust against
vertex failures.
o Example: For a cycle graph with 5 vertices (C5), you need to remove at least two
vertices to break it. So, κ(C5)=2.
Edge Connectivity λ(G): The edge connectivity of a graph G, denoted λ(G), is the
minimum number of edges that need to be removed to make the graph disconnected. A
higher number means the graph is more robust against link failures.
o Example: For the cycle graph C5, you need to remove at least two edges to break
it. So, λ(C5)=2.
These three quantities are linked by Whitney's inequality: κ(G) ≤ λ(G) ≤ δ(G),
where δ(G) is the minimum degree of any vertex in the graph. This tells us that it's always easier
(or equally hard) to disconnect a graph by removing vertices than by removing edges.
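For small graphs, λ(G) can be computed by brute force straight from its definition: try deleting every set of k edges for growing k until the graph falls apart. A sketch verifying λ(C5) = 2 (exponential in the number of edges, so for illustration only):

```python
from itertools import combinations

def connected(vertices, edges):
    """Iterative DFS connectivity test for an undirected graph."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def edge_connectivity(vertices, edges):
    """Smallest k such that deleting some k edges disconnects the graph."""
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            remaining = [e for e in edges if e not in cut]
            if not connected(vertices, remaining):
                return k
    return len(edges)

# Cycle C5: deleting any one edge leaves a path, so two deletions are needed.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(edge_connectivity(list(range(5)), c5))  # 2
```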
3. Conclusion
Connectivity is a core concept in graph theory that classifies graphs based on their wholeness. A
graph is connected if it forms a single piece, allowing a path between any two vertices.
Otherwise, it is disconnected and consists of multiple components. For directed graphs, the
distinction between weak connectivity (connected if directions are ignored) and strong
connectivity (every vertex is reachable from every other and vice-versa) is vital. To measure the
robustness of a network, we identify cut vertices and bridges—critical failure points—and
calculate the graph's vertex and edge connectivity. These concepts are not just theoretical; they
are essential for designing and analyzing resilient real-world networks, from the internet to
transportation systems.
1. Introduction
Graph theory is more than just an abstract mathematical subject; it's a powerful and versatile tool
for solving a huge number of real-world problems. The simple structure of a graph—a set of
vertices connected by edges—provides a universal language to model and analyze systems based
on relationships. Any time you can represent a problem in terms of objects and the connections
between them, you can likely use a graph to understand and solve it. From finding the fastest
route to a destination to understanding the spread of information on social media, graphs are
working behind the scenes. This section explores some of the most important and common
applications of graph theory across various fields, demonstrating its practical significance in
science, technology, and everyday life.
Graphs are at the very heart of computer science, used to model networks, organize data, and
design algorithms.
Computer Networks: The most direct application is modeling computer networks like
the internet.
o Vertices represent: Computers, routers, servers, or other network devices.
o Edges represent: The physical (like ethernet cables, fiber optics) or logical
(wireless) connections between these devices.
o Problems solved: Graph algorithms are used to find the most efficient paths for
data packets to travel (routing), to identify network vulnerabilities (cut vertices),
and to manage network traffic flow. A disconnected graph would mean some
devices cannot communicate.
World Wide Web (WWW): The entire web can be seen as a massive directed graph.
o Vertices represent: Individual web pages.
o Edges represent: Hyperlinks from one page to another. An edge from page A to
page B exists if A has a link to B.
o Problems solved: Search engines like Google use graph algorithms to rank web
pages. The famous PageRank algorithm is based on this concept. It assumes that
more important pages will have more links pointing to them from other important
pages.
Data Structures: Many data structures are fundamentally graph structures.
o Trees: A tree is a special type of connected, acyclic graph used for organizing
hierarchical data, like file systems on a computer or family trees.
o Binary Search Trees (BSTs): Used for efficient searching, inserting, and
deleting of data.
o Heaps: A tree-based data structure used to implement priority queues.
Compilers: In the process of compiling code, graphs are used to manage dependencies
and optimize the program.
o Dependency Graphs: Vertices can represent tasks or modules of code, and a
directed edge from A to B means A must be completed before B can start. An
algorithm called topological sort is used on this graph to find a valid order of
execution.
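A standard way to order a dependency graph is Kahn's algorithm, which repeatedly removes vertices with no incoming edges. A sketch with hypothetical compiler tasks (the task names are illustrative only):

```python
from collections import deque

def topological_sort(vertices, deps):
    """Kahn's algorithm. deps is a list of directed edges (a, b),
    meaning task a must finish before task b can start."""
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for a, b in deps:
        out[a].append(b)
        indeg[b] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != len(vertices):
        raise ValueError("dependency graph contains a cycle")
    return order

order = topological_sort(["parse", "typecheck", "codegen", "lint"],
                         [("parse", "typecheck"),
                          ("typecheck", "codegen"),
                          ("parse", "lint")])
print(order)  # every task appears after all of its prerequisites
```

If the graph contains a cycle, no valid order exists, which is why the function raises an error when some vertices never reach in-degree zero.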
Social media platforms are one of the most visible and relatable examples of graph theory in
action.
This is perhaps the most common application of graphs that people interact with daily.
GPS and Mapping Services: Services like Google Maps, Apple Maps, and Waze use
weighted graphs to find the best route.
o Vertices represent: Intersections, landmarks, or specific locations.
o Edges represent: The roads or paths between these locations.
o Weights on Edges: The edges are "weighted" with values representing distance,
travel time, or even traffic conditions.
o Problem solved: The core problem is to find the shortest path between a starting
vertex and a destination vertex. Famous algorithms like Dijkstra's Algorithm or
the A* search algorithm are used to solve this efficiently. The "shortest" path
could mean the one with the minimum distance, minimum travel time, or a
combination of factors.
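Dijkstra's algorithm itself fits in a few lines with a priority queue. A minimal sketch over a made-up road map (the place names and travel times are illustrative only):

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra's algorithm on a weighted graph given as
    adj[u] = [(v, weight), ...]; returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road map: edge weights are travel times in minutes.
roads = {"home": [("junction", 5), ("mall", 20)],
         "junction": [("mall", 4), ("office", 12)],
         "mall": [("office", 3)]}
print(dijkstra(roads, "home"))
# shortest time to the office is 12 minutes, via junction and mall
```

Note how the direct home→mall road (20 minutes) loses to the detour through the junction (5 + 4 = 9 minutes): the algorithm minimizes total weight, not edge count.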
Airline and Supply Chain Networks:
o Vertices represent: Airports or warehouses.
o Edges represent: Flight routes or transportation links.
o Problems solved: Airlines use graph theory to schedule flights and crews
efficiently. Logistics companies use it to optimize delivery routes, solving
complex problems like the Traveling Salesman Problem (finding the shortest
possible route that visits a set of cities and returns to the origin).
Graphs are essential for modeling complex biological and chemical systems.
Electrical Engineering: The design and analysis of electrical circuits is entirely based on
graph theory.
o Vertices represent: Nodes or junctions in the circuit.
o Edges represent: The components of the circuit (resistors, capacitors, etc.).
o Application: Kirchhoff's laws, which are fundamental to circuit analysis, can be
understood and applied using graph-theoretic principles (cycles and cuts).
Recommendation Engines: Services like Netflix, Amazon, and Spotify use bipartite
graphs to recommend products or content.
o One set of vertices represents users, and the other set represents items (movies,
products, songs). An edge connects a user to an item they have rated or
purchased. The system then recommends items that are connected to "similar"
users.
Game Theory and State Diagrams: In artificial intelligence, graphs can be used to
model the states of a game.
o Vertices represent: Different states or configurations of the game (e.g., the
arrangement of pieces on a chessboard).
o Edges represent: Legal moves that transition from one state to another. AI
algorithms explore this graph to find the best sequence of moves.
3. Conclusion
The applications of graphs are incredibly diverse and widespread, touching almost every field of
modern science and technology. From the structure of the internet and social networks to the
logistics of navigation and the complexities of biological systems, graphs provide a simple yet
powerful framework for modeling and analyzing interconnected data. The core idea is always the
same: represent the entities as vertices and the relationships between them as edges. By doing
so, complex real-world systems can be translated into mathematical structures, allowing us to use
well-established algorithms to find solutions, make predictions, and gain deeper insights.
Understanding graph theory is, therefore, not just an academic exercise but a fundamental skill
for problem-solving in the 21st century.
Topic: Operations on Graphs
1. Introduction
Just as we have arithmetic operations like addition and multiplication for numbers, we have a set
of defined operations for graphs. These operations allow us to create new, more complex
graphs from one or more existing graphs, or to modify a graph to study its properties. By
understanding these fundamental operations, we can construct large and intricate graph structures
from simpler building blocks. This is a crucial concept in graph theory because it helps in
proving theorems, defining new classes of graphs, and understanding the relationships between
different graph structures. This section will cover the most common unary operations (acting on
a single graph) and binary operations (acting on two graphs), such as Union, Intersection,
Complement, and various graph products.
Subgraphs: A subgraph is a graph formed from a subset of the vertices and edges of a
larger graph. Let G=(V,E) be a graph. A graph H=(V′,E′) is a subgraph of G if V′ is a
subset of V and E′ is a subset of E.
o Example: Any path within a larger, more complex graph is a subgraph of that
graph. If you have a graph representing a city's road network, the subgraph of
roads and intersections within a specific neighborhood is a subgraph of the city's
network.
o Induced Subgraph: A common type is a vertex-induced subgraph. Here, you
select a subset of vertices V′⊆V, and the edge set E′ contains all the edges from
the original graph G that connect vertices within V′. You take the vertices and all
the original connections between them.
Complement of a Graph: The complement of a simple graph G, denoted as Gˉ or Gc, is
a graph with the same set of vertices as G, but with a completely opposite set of edges.
o Definition: Let G=(V,E) be a simple graph. The complement Gˉ is a graph with
the same vertex set V. An edge {u,v} exists in Gˉ if and only if the edge {u,v}
does not exist in G.
o In simpler terms: If two vertices are connected in G, they are not connected in
Gˉ. If they are not connected in G, they are connected in Gˉ.
o Example: Consider a path graph on 3 vertices, P3, with vertices {1,2,3} and
edges {{1,2},{2,3}}. Its complement P̄3 will have the same vertices but only the
edge {1,3}, because that was the only pair not connected in P3.
o Interesting Fact: The complement of a complete graph Kn is a null graph (a
graph with no edges). The complement of a disconnected graph is always
connected.
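The definition of the complement translates directly into code. A sketch reproducing the P3 example (helper name is mine):

```python
from itertools import combinations

def complement(vertices, edges):
    """Complement of a simple graph: same vertices, opposite edge set."""
    present = {frozenset(e) for e in edges}
    return [p for p in combinations(sorted(vertices), 2)
            if frozenset(p) not in present]

# P3 has edges {1,2} and {2,3}; the complement keeps only {1,3}.
print(complement([1, 2, 3], [(1, 2), (2, 3)]))  # [(1, 3)]
```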
2.2 Binary Operations (Operations on Two Graphs)
Binary operations combine two graphs, G1=(V1,E1) and G2=(V2,E2), to create a new graph. For
most of these operations, we assume that the vertex sets V1 and V2 are disjoint (they have no
vertices in common).
Union (G1∪G2): The union of two graphs is the simplest combination. It's essentially
placing the two graphs side-by-side.
o Definition: The union G=G1∪G2 is a graph with the combined vertex set V=V1
∪V2 and the combined edge set E=E1∪E2.
o In simpler terms: You just take all the vertices and edges from both graphs and
put them together into a new, larger graph.
o Result: If both G1 and G2 are non-empty, their union will be a disconnected
graph with at least two components (one for each of the original graphs).
o Example: If G1 is a triangle (C3) and G2 is a single edge (K2), their union G1
∪G2 is a graph that looks like a triangle and a separate line segment.
Intersection (G1∩G2): Intersection is typically defined for two graphs that share the
same vertex set.
o Definition: Let G1=(V,E1) and G2=(V,E2). Their intersection G=G1∩G2 is a
graph with the vertex set V and an edge set E=E1∩E2.
o In simpler terms: The resulting graph has an edge between two vertices only if
that edge exists in both of the original graphs.
o Example: If G1 is a cycle on four vertices (C4) and G2 is a complete graph on
the same four vertices (K4), their intersection will be the cycle graph C4, since all
edges of C4 are also present in K4.
Join (G1+G2): The join operation creates a new graph by combining two graphs and
then adding all possible connections between them.
o Definition: The join G=G1+G2 is formed by taking the union G1∪G2 and then
adding edges to connect every vertex in V1 to every vertex in V2.
o In simpler terms:
1. Place the two graphs G1 and G2 next to each other.
2. Draw an edge from each vertex of G1 to every single vertex of G2.
o Example: Let G1 be a graph with two vertices and no edges, and G2 be a graph
with three vertices and no edges. Their join, G1+G2, is the complete bipartite
graph K2,3.
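A sketch of the join operation, reproducing the K2,3 example (the helper name and vertex labels are my own):

```python
def join(v1, e1, v2, e2):
    """Join of two graphs with disjoint vertex sets: their union plus
    every possible edge between the two sides."""
    cross = [(u, v) for u in v1 for v in v2]
    return v1 + v2, e1 + e2 + cross

# Join of edgeless graphs on 2 and 3 vertices gives the complete
# bipartite graph K(2,3): 2 * 3 = 6 cross edges and nothing else.
verts, edges = join(["a", "b"], [], ["x", "y", "z"], [])
print(len(edges))  # 6
```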
Cartesian Product (G1×G2 or G1□G2): The Cartesian product is a very common and
useful operation, especially for constructing grid-like graphs.
o Vertex Set: The vertex set of the product graph is the Cartesian product of the
original vertex sets, V=V1×V2. This means each vertex in the new graph is an
ordered pair (u,v), where u is from V1 and v is from V2.
o Edge Set: Two vertices (u1,v1) and (u2,v2) are connected by an edge if one of
the following is true:
1. u1=u2 AND v1 is adjacent to v2 in G2. (This creates a "vertical" edge, a
copy of G2 for each vertex in G1).
2. v1=v2 AND u1 is adjacent to u2 in G1. (This creates a "horizontal" edge,
a copy of G1 for each vertex in G2).
o Example: The most intuitive example is the product of two path graphs. Let P2
be a path with 2 vertices. The product P2×P2 results in a square (a cycle graph C4
). The product Pm×Pn creates an m×n grid graph. The famous hypercube graph is
also a result of graph products.
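The two edge rules of the Cartesian product translate directly into code. A sketch that builds P2 × P2 and confirms it is a square (4 vertices, 4 edges):

```python
def cartesian_product(v1, e1, v2, e2):
    """Cartesian product G1 x G2: vertices are pairs (u, w); two pairs
    (u1, w1) and (u2, w2) are adjacent iff u1 == u2 and w1 ~ w2 in G2,
    or w1 == w2 and u1 ~ u2 in G1."""
    s1 = {frozenset(e) for e in e1}
    s2 = {frozenset(e) for e in e2}
    verts = [(u, w) for u in v1 for w in v2]
    edges = []
    for a in verts:
        for b in verts:
            if a < b:  # consider each unordered pair once
                (u1, w1), (u2, w2) = a, b
                if (u1 == u2 and frozenset((w1, w2)) in s2) or \
                   (w1 == w2 and frozenset((u1, u2)) in s1):
                    edges.append((a, b))
    return verts, edges

# P2 is a single edge; the product of two copies is the 4-cycle C4.
p2 = ([0, 1], [(0, 1)])
verts, edges = cartesian_product(*p2, *p2)
print(len(verts), len(edges))  # 4 4
```

Iterating the product (P2 × P2 × … × P2) in the same way yields the hypercube graphs mentioned above.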
3. Conclusion
Graph operations are the fundamental tools used to manipulate and construct graphs. Unary
operations like finding a subgraph or the complement allow us to analyze the internal structure
and properties of a single graph. Binary operations, which combine two graphs, are essential for
building larger, more complex networks from simple ones. The Union operation simply places
graphs side-by-side, the Join operation fully connects them, and the Cartesian Product creates
intricate, structured graphs like grids and hypercubes. Mastering these operations is key to a
deeper understanding of graph theory, as they form the basis for constructing examples, proving
theorems about graph properties, and defining important families of graphs.
1. Introduction
A graph is an abstract mathematical concept used to model relationships. However, to use graphs
to solve problems with a computer, we need a concrete way to store their structure—the vertices
and the edges—in the computer's memory. This process is called graph representation.
Choosing the right representation is a critical step because it directly impacts the performance
(both time and memory usage) of the algorithms we run on the graph. For example, some
representations make it very fast to check if two vertices are connected, while others are better
for listing all the neighbors of a particular vertex. The most common methods involve a trade-off
between memory space and the speed of certain operations. This section will detail the primary
methods for representing graphs: the Adjacency Matrix, Adjacency List, and Incidence Matrix.
To compare the different methods, we will use the following simple, unweighted, undirected
graph as our running example. Let's call it G.
Vertices V={0,1,2,3}
Edges E={{0,1},{0,2},{0,3},{1,2}}
An adjacency matrix is a 2D array (a table) used to represent a graph. The size of the matrix is
V×V, where V is the number of vertices.
How it Works: The matrix, let's call it Adj, is a square table of size V×V. Each row and
each column corresponds to a vertex. The value in the cell Adj[i][j] indicates whether
there is an edge between vertex i and vertex j.
o For an unweighted graph, Adj[i][j] = 1 if there is an edge from i to j, and
Adj[i][j] = 0 if there is no edge.
o For a weighted graph, Adj[i][j] = w if there is an edge from i to j with
weight w. If there is no edge, the cell can store a special value like infinity (∞) or
0.
o For an undirected graph, the matrix is symmetric (Adj[i][j] = Adj[j][i]).
o For a directed graph, the matrix is not necessarily symmetric. Adj[i][j] = 1
means there's an edge from i to j, but it doesn't imply an edge from j to i.
Example (for our graph G): Our graph has 4 vertices, so we need a 4×4 matrix.
Adj =
      0  1  2  3
  0 [ 0  1  1  1 ]
  1 [ 1  0  1  0 ]
  2 [ 1  1  0  0 ]
  3 [ 1  0  0  0 ]
Notice the matrix is symmetric because our graph is undirected. For example, the '1' at
Adj[0][1] represents the edge between vertex 0 and 1, and the '1' at Adj[1][0]
represents the same edge.
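Building this matrix from an edge list takes only a few lines. A sketch for the running example graph G:

```python
def adjacency_matrix(n, edges):
    """Build the V x V adjacency matrix of an undirected graph."""
    adj = [[0] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = 1
        adj[v][u] = 1  # set both cells: undirected matrices are symmetric
    return adj

adj = adjacency_matrix(4, [(0, 1), (0, 2), (0, 3), (1, 2)])
for row in adj:
    print(row)
# [0, 1, 1, 1]
# [1, 0, 1, 0]
# [1, 1, 0, 0]
# [1, 0, 0, 0]
```

For a directed graph, one would drop the second assignment and set only `adj[u][v]`.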
Pros:
o Fast Edge Lookup: Checking if an edge exists between two vertices i and j is
extremely fast. You just look at Adj[i][j]. This is a constant time operation,
denoted as O(1).
o Simple to Implement: The matrix representation is straightforward to code.
Cons:
o Space Inefficient for Sparse Graphs: A sparse graph is one with very few
edges. In this case, the adjacency matrix will be filled mostly with zeros, wasting
a lot of memory. The space complexity is always O(V²), regardless of the number
of edges.
o Slow to Iterate Neighbors: To find all the neighbors of a vertex i, you have to
traverse the entire i-th row, checking which cells are 1. This takes O(V) time.
o Adding/Removing Vertices is Hard: Adding a new vertex requires resizing the
entire matrix to (V+1)×(V+1), which is a costly operation.
2.2 Adjacency List
An adjacency list is a collection of lists, one for each vertex in the graph. It is the most common
way to represent graphs.
How it Works: It consists of an array (or list) of size V. Each entry Array[i] in this
array points to a list (often a linked list or a dynamic array) that contains all the vertices
adjacent to vertex i.
o For an unweighted graph, each list simply contains the vertex numbers of its
neighbors.
o For a weighted graph, the list stores pairs of (neighbor, weight).
o For a directed graph, the list for vertex i only contains vertices j for which there
is a directed edge from i to j.
Example (for our graph G): We have 4 vertices, so we have an array of 4 lists.
o List[0] -> [1, 2, 3]
o List[1] -> [0, 2]
o List[2] -> [0, 1]
o List[3] -> [0]
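A sketch that builds exactly this adjacency list for the running example graph G:

```python
def adjacency_list(n, edges):
    """Build the adjacency list of an undirected graph: one list of
    neighbours per vertex."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # record the edge from both endpoints
    return adj

adj = adjacency_list(4, [(0, 1), (0, 2), (0, 3), (1, 2)])
print(adj)  # [[1, 2, 3], [0, 2], [0, 1], [0]]
```

For a weighted graph, each list would hold `(neighbor, weight)` pairs instead of bare vertex numbers.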
Pros:
o Space Efficient for Sparse Graphs: The memory used is proportional to the
number of vertices and edges. The space complexity is O(V+E), which is much
better than O(V²) for graphs that don't have many edges.
o Fast to Iterate Neighbors: Finding all neighbors of a vertex i is efficient. You
just traverse the list at List[i]. The time taken is proportional to the degree of
vertex i.
o Easy to Add/Remove Vertices: Adding a new vertex is simple: you just add a
new entry to the main array with an empty list.
Cons:
o Slower Edge Lookup: To check if an edge exists between vertices i and j, you
have to search through the entire adjacency list of vertex i to see if j is present. In
the worst case, this can take time proportional to the degree of i, which can be up
to O(V).
An incidence matrix relates vertices to edges. It's less common for general-purpose algorithms
but has uses in graph theory proofs.
How it Works: It's a 2D matrix of size V×E, where rows represent vertices and columns
represent edges.
o For an undirected graph, the entry Inc[i][j] = 1 if vertex i is an endpoint of
edge j, and 0 otherwise. Each column will have exactly two 1s.
o For a directed graph, a common convention is Inc[i][j] = 1 if edge j starts at
vertex i, -1 if it ends at vertex i, and 0 otherwise.
Example (for our graph G): Our graph has 4 vertices and 4 edges. Let's label the edges:
e1={0,1},e2={0,2},e3={0,3},e4={1,2}. We need a 4×4 matrix.
Inc =
        e1  e2  e3  e4
    0 [  1   1   1   0 ]
    1 [  1   0   0   1 ]
    2 [  0   1   0   1 ]
    3 [  0   0   1   0 ]
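The construction rule for the undirected case can be sketched in Python (variable names inc and edges are my own):

```python
# Incidence matrix for the example: 4 vertices, edges e1..e4.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
V, E = 4, len(edges)

inc = [[0] * E for _ in range(V)]
for j, (u, v) in enumerate(edges):   # column j corresponds to edge e(j+1)
    inc[u][j] = 1                    # an undirected edge marks exactly
    inc[v][j] = 1                    # its two endpoint rows

for row in inc:
    print(row)
```

Note that each column sums to 2, matching the observation that every column of an undirected incidence matrix has exactly two 1s.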
3. Conclusion
The way a graph is represented in a computer is a fundamental choice that depends on the
problem at hand. The Adjacency Matrix provides the fastest possible lookup to check for a
specific edge, but it pays for this speed with a large memory footprint, making it suitable only for
dense graphs. The Adjacency List, on the other hand, is memory-efficient for sparse graphs—
which are very common in the real world—and is optimized for finding all the neighbors of a
vertex. For these reasons, the Adjacency List is the most popular and generally preferred method
of graph representation in programming. The Incidence Matrix is a third, less common option
with specific theoretical applications. Ultimately, the decision rests on the classic space-time
trade-off: choosing between using more memory for faster operations or saving memory at the
cost of slower performance for certain tasks.
Topic: Isomorphism of Graphs
1. Introduction
In graph theory, two graphs can look very different when drawn on paper but still be identical in
their core structure. The concept that captures this idea of structural sameness is called
isomorphism. Think of it like having two different blueprints for a house that describe the exact
same building; the layout of rooms (vertices) and the doorways between them (edges) are
identical, even if one blueprint is drawn in blue ink and the other in black. Two graphs are
isomorphic if they are just different drawings of the same underlying graph. Determining if two
graphs are structurally the same is a fundamental question in many fields, such as chemistry (are
two molecular diagrams the same compound?) and computer science (are two data structures
equivalent?). While the concept is intuitive, proving whether two graphs are isomorphic can be a
challenging task.
Two graphs are isomorphic if their vertices can be re-labeled so that they become identical.
Formally, this re-labeling is described by a special function.
Let G1=(V1,E1) and G2=(V2,E2) be two simple graphs. They are said to be isomorphic if there
exists a function f:V1→V2 that satisfies two conditions:
1. The function f is a bijection: This means the function is both one-to-one and onto.
o One-to-one: Every vertex in G1 maps to a unique vertex in G2. No two vertices
in G1 map to the same vertex in G2.
o Onto: Every vertex in G2 is mapped to by some vertex in G1. There are no
leftover vertices in G2.
o A direct consequence of this is that both graphs must have the exact same
number of vertices.
2. The function f preserves adjacency: This is the most crucial part. For any two vertices
u and v in G1, the edge {u,v} exists in E1 if and only if the edge {f(u),f(v)} exists in E2.
o In simple terms: If two vertices are connected in the first graph, their
corresponding mapped vertices must be connected in the second graph. And if
they are not connected in the first, their mapped versions must not be connected in
the second.
Example: Consider the following two graphs.
Graph G1:
o Vertices V1={a,b,c,d}
o Edges E1={{a,b},{b,c},{c,d},{d,a}} (A square shape)
Graph G2:
o Vertices V2={1,2,3,4}
o Edges E2={{1,2},{2,3},{3,4},{4,1}} (A diamond shape)
Define the mapping f as follows:
f(a)=1
f(b)=2
f(c)=3
f(d)=4
Now check that f preserves adjacency. Each edge of G1 maps to an edge of G2: {a,b} maps to
{1,2}, {b,c} to {2,3}, {c,d} to {3,4}, and {d,a} to {4,1}, all of which are in E2.
Also, consider a non-edge. {a,c} is not an edge in G1. Is {f(a),f(c)}={1,3} an edge in G2? No.
Since the function f is a bijection and preserves adjacency, we can conclude that G1 and G2 are
isomorphic.
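For small graphs, this definition can be checked mechanically by trying every possible bijection; a minimal brute-force sketch in Python (the function name are_isomorphic is my own, and this factorial-time approach is only practical for tiny graphs):

```python
from itertools import permutations

def are_isomorphic(n, edges1, edges2):
    # Brute-force search over all bijections f of the vertex set {0..n-1}.
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    if len(e1) != len(e2):               # edge counts must match
        return False
    for perm in permutations(range(n)):  # candidate bijection f
        image = {frozenset(perm[v] for v in e) for e in e1}
        if image == e2:                  # f preserves adjacency exactly
            return True
    return False

# The square a-b-c-d-a with vertices relabeled 0,1,2,3,
# compared against another 4-cycle drawn differently.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
relabeled = [(0, 2), (2, 1), (1, 3), (3, 0)]
print(are_isomorphic(4, square, relabeled))  # True
```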
Finding the specific function f to prove isomorphism can be very difficult, like finding a needle
in a haystack. It's often much easier to prove that two graphs are not isomorphic. We do this by
finding a structural property that one graph has and the other doesn't. Such properties, which are
preserved under isomorphism, are called graph invariants.
If two graphs are isomorphic, they must have the same value for every graph invariant.
Therefore, if you find even one invariant that differs, you have proven they are not isomorphic.
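One invariant that is particularly easy to compare programmatically is the degree sequence, the sorted list of vertex degrees (the function name degree_sequence below is my own):

```python
from collections import Counter

def degree_sequence(n, edges):
    # Sorted list of vertex degrees: a graph invariant.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg[i] for i in range(n))

# A 4-cycle and a star on 4 vertices: same vertex count,
# but different degree sequences, so they cannot be isomorphic.
print(degree_sequence(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # [2, 2, 2, 2]
print(degree_sequence(4, [(0, 1), (0, 2), (0, 3)]))          # [1, 1, 1, 3]
```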
Common invariants to check include: the number of vertices, the number of edges, the degree
sequence (the sorted list of vertex degrees), and the number of connected components. If two
graphs differ in even one of these, they cannot be isomorphic. Be careful with the converse,
however: two graphs can agree on all of these simple invariants (for example, have exactly the
same degree sequence) and still fail to be isomorphic, so matching invariants never prove
isomorphism on their own.
The general problem of determining whether any two given graphs are isomorphic is known as
the graph isomorphism problem. It is a famous unsolved problem in computational complexity
theory. While we have algorithms that can solve it, we don't know if there is an "efficient" one—
that is, one that runs in polynomial time for all possible graphs. It is one of the very few
problems that is known to be in the class NP but is not known to be either in P or NP-complete.
3. Conclusion