
Spectrum and Eigenvalues of Random Graphs

Chapter 11 discusses the spectrum of random graphs, focusing on the eigenvalues and eigenvectors derived from the adjacency matrix of a graph, which provide essential information about its structure. It introduces spectral graph theory, detailing properties of adjacency and Laplacian matrices, and explores the empirical eigenvalue distribution (ESD) of random graphs, particularly the Erdős-Rényi model. The chapter also examines the behavior of the spectrum under varying probabilities of edge formation, highlighting the transition from discrete to continuous spectral components as the graph evolves.


Chapter 11

Spectrum of random graphs

Given a graph G with n vertices, its adjacency matrix A(G) is the n × n matrix whose (i, j) entry is 1 if vertices i and j are adjacent, and 0 otherwise. The eigenvalues of the graph G are defined to be the eigenvalues of A(G); the collection of eigenvalues of G is also known as the spectrum of G. Equally important are the eigenvectors corresponding to these eigenvalues. Together, the spectrum and eigenvectors contain all the information about the graph, and certain statistics of the eigenvalues reveal a great deal about its geometry. The study of the spectrum has found utility in community detection, control of epidemics, understanding diffusions on graphs, centrality measures, and so on.

11.1 Spectral graph theory


If the graph is undirected, the eigenvalues are real; if the graph is directed, the eigenvalues can be complex. Two other matrices are just as important as the adjacency matrix: the Laplacian matrices, which can be defined in various ways. Let D(G) = Diag(k_1, . . . , k_n) be the diagonal matrix whose diagonal entries are the degrees (k_i)_{i∈[n]}.
We define the graph Laplacians as

- the (standard or combinatorial) Laplacian L(G) = D(G) − A(G);

- the normalized Laplacian L (G) = D(G)−1/2 L(G)D(G)−1/2 ;

- the PageRank Laplacian L pr = I − D(G)−1 A(G).
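These definitions are easy to check on a small example. The sketch below, assuming NumPy is available, builds all three Laplacians for a path graph on 4 vertices:

```python
import numpy as np

# A small undirected graph (a path on 4 vertices), given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

k = A.sum(axis=1)                  # degree sequence (k_1, ..., k_n)
D = np.diag(k)                     # D(G)
L = D - A                          # combinatorial Laplacian L(G)

D_inv_sqrt = np.diag(1.0 / np.sqrt(k))
L_norm = D_inv_sqrt @ L @ D_inv_sqrt           # normalized Laplacian
L_pr = np.eye(len(k)) - np.diag(1.0 / k) @ A   # PageRank Laplacian I - D^{-1} A

# L(G) annihilates the all-ones vector, as stated in Theorem 11.1 below.
print(np.allclose(L @ np.ones(4), 0))  # True
```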

The adjacency matrix, the graph Laplacians, and their spectra are crucially related to several graph characteristics. We list some features of these matrices. For an n × n symmetric matrix M, denote the eigenvalues by λ_1(M) ≤ λ_2(M) ≤ · · · ≤ λ_n(M).

Theorem 11.1 1. All eigenvalues of A(G) are real if G is undirected.

2. The (i, j)-th entry of A(G)^m is the number of walks on G starting at i and ending at j in m steps.


3. L(G) is symmetric and positive semi-definite (x^t L(G) x ≥ 0 for all x). 0 is an eigenvalue, with

0 = λ_1(L(G)) ≤ λ_2(L(G)) ≤ · · · ≤ λ_n(L(G)) and L(G) 1_n = 0.¹

4. If G is a connected graph, then λ_2(L(G)) > 0.

5. The multiplicity of 0 in the spectrum of L(G) equals the number of connected components of G.

6. L (G) is symmetric, positive semi-definite, and

0 = λ_1(L (G)) ≤ · · · ≤ λ_n(L (G)) ≤ 2.

D(G)^{1/2} 1_n is an eigenvector associated with 0.

7. Let G = (V, E) be a graph with largest degree k_max, and let λ_max(A(G)) be the largest eigenvalue of A(G). Then λ_max(A(G)) ≤ k_max.

8. Let G be a tree with maximum degree k_max. Then

√(k_max) ≤ λ_max(A(G)) ≤ 2√(k_max − 1).
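Both eigenvalue bounds in items 7 and 8 can be checked numerically. The sketch below, assuming NumPy, uses the star K_{1,4}, a tree whose largest eigenvalue is exactly √k_max:

```python
import numpy as np

# Star graph K_{1,4}: vertex 0 joined to vertices 1..4. It is a tree with k_max = 4.
n = 5
A = np.zeros((n, n))
A[0, 1:] = 1
A[1:, 0] = 1

k_max = int(A.sum(axis=1).max())
lam_max = np.linalg.eigvalsh(A)[-1]   # eigvalsh returns eigenvalues in ascending order

# Item 7: lam_max <= k_max.  Item 8 (trees): sqrt(k_max) <= lam_max <= 2 sqrt(k_max - 1).
print(lam_max)                                  # 2.0 for the star (= sqrt(k_max))
print(lam_max <= k_max)                         # True
print(np.sqrt(k_max) <= lam_max + 1e-9 <= 2 * np.sqrt(k_max - 1))  # True
```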

Exercise 11.1 Show that L(G) is symmetric and positive semi-definite (x^t L(G) x ≥ 0 for all x).

Exercise 11.2 Show that λ_max(A(G)) ≤ k_max.

Homework 11.1 Show property 4: if G is a connected graph, then λ_2(L(G)) > 0.

[Begin intermezzo]
The spectrum of the PageRank Laplacian is related to the random walk on the graph and to various expansion properties. The transition probability matrix of a random walk on G is P_G = D(G)^{−1} A(G) (recall this is I − L_pr). Suppose P_G is self-adjoint,² or equivalently reversible with respect to π, the stationary measure of the walk. The mixing time is, roughly, the first time the distribution of the random walk gets close to the stationary measure, and bounds on the mixing time follow from the second largest eigenvalue of P_G and the spectral gap of L_pr. We refer to the Markov chain book [4] for details; we will not go in this direction in this chapter. The name comes from Google's PageRank algorithm, which is closely related to this matrix.
[End intermezzo]
Given a graph G and its adjacency matrix A(G), we define the empirical spectral distribution (ESD) of the graph as

μ_A = (1/n) Σ_{i=1}^{n} δ_{λ_i}.

Note that if the graph G is random then ESD is a random measure. Also if the graph G
is random, A(G) is a matrix with random elements and hence this falls under the area
¹ 1_n is the n × 1 vector with all entries equal to 1.
² A matrix P is self-adjoint if P = P^t, where P^t is the transpose of P.

of random matrices. Random matrix theory is a well-studied subject in physics, mathematics, and computer science. One of the first results in this area is due to E. Wigner, who considered ensembles of random matrices whose entries are independent and identically distributed and studied the limit of the ESD as n → ∞. The most famous result of this type is Wigner's semicircle law: the ESD converges to the semicircle distribution, whose density is given by

ρ_sc(x) = (1/(2π)) √(4 − x²), for −2 ≤ x ≤ 2.
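A quick numerical check, assuming NumPy, confirms that ρ_sc integrates to 1 and that its low even moments are the Catalan numbers C_1 = 1 and C_2 = 2 (Homework 11.2 below):

```python
import numpy as np

# Semicircle density on [-2, 2]; note the 1/(2*pi) normalization.
x = np.linspace(-2, 2, 200001)
rho = np.sqrt(4 - x**2) / (2 * np.pi)

# Simple Riemann sums are accurate enough here: the density vanishes at +-2.
dx = x[1] - x[0]
total = np.sum(rho) * dx          # ~1: rho_sc is a probability density
m2 = np.sum(x**2 * rho) * dx      # ~1 = C_1, the second moment
m4 = np.sum(x**4 * rho) * dx      # ~2 = C_2, the fourth moment
print(total, m2, m4)
```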
The random matrices in Wigner's setup correspond to random graphs where each possible edge appears independently with some probability p; this is the Erdős–Rényi random graph ER_n(p). If p is constant as n grows, the setup is equivalent to Wigner's random matrix setting. In graph theory, however, we are often interested in cases where p decreases with n. In the next section we discuss some results on the limiting ESD of ER_n(p). We shall also consider another model of random graphs. A k-regular graph is a graph in which every vertex has degree k. Let G_{n,k} denote a random k-regular graph on n vertices, chosen uniformly at random from all k-regular graphs on n labelled vertices. Unlike in the Erdős–Rényi random graph, the edges of G_{n,k} are not independent. McKay determined the limiting ESD for each fixed k, and it was shown more recently that if k increases with n, the limiting ESD of G_{n,k} does approach the semicircle distribution. We discuss both these results.
We begin with some plots of the empirical distribution of the ER_n(p) graph with p = α/n. We generally mean-adjust it, that is, we plot the eigenvalues of M_n = (1/(√n σ))(A_n − p J_n), where σ² = p(1 − p) and J_n is the matrix with all entries equal to 1. We begin with a discussion of the case p = ω(1/n) (that is, np → ∞), where Wigner's semicircle law still holds. When p = O(1/n), however, new phenomena begin to emerge (see Figure 11.1): we observe a discrete component in the spectrum of the graph, which is directly related to the geometry of the connected components of ER_n(p) discussed in the previous chapters. We focus on the case p ≤ 1/2, since the other case can be analyzed by taking the graph complement.
We first state the Wigner-type result for a dense ER_n(p). Details can be found in [3, Theorem 3.4] or [7, Appendix A].

Theorem 11.2 Assume p = ω(1/n) and p ≤ 1/2. Let A_n be the adjacency matrix of a random graph ER_n(p). Then, as n → ∞, the empirical spectral distribution of the matrix

(1/√(np(1 − p))) A_n

converges in distribution to the semicircle distribution, which has density ρ_sc(x) with support on [−2, 2].
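Theorem 11.2 can be probed numerically (a sketch assuming NumPy): sample ER_n(p) with constant p, center the entries at zero (as is done with A_n − pJ_n later in this section), normalize, and compare the low moments of the ESD with the semicircle moments m_2 = 1 and m_4 = C_2 = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.3                 # dense regime: p constant, hence p = omega(1/n)

# Sample ER_n(p): symmetric 0/1 adjacency matrix with zero diagonal.
U = np.triu((rng.random((n, n)) < p), 1).astype(float)
A = U + U.T

# Center the off-diagonal entries at 0 and normalize as in Theorem 11.2.
J_off = np.ones((n, n)) - np.eye(n)
M = (A - p * J_off) / np.sqrt(n * p * (1 - p))
lam = np.linalg.eigvalsh(M)

# For the semicircle law: m_2 = 1 and m_4 = C_2 = 2 (up to O(1/n) corrections).
m2, m4 = np.mean(lam**2), np.mean(lam**4)
print(m2, m4)
```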

The method of moments is a key technique in probability theory and random matrix theory for proving convergence of the ESD of random matrices to a limiting spectral distribution (LSD). Briefly, it works as follows: one shows that the moments of the empirical spectral distribution converge to the moments of the limiting spectral distribution as the size of the matrix grows to infinity.
Moments of the empirical spectral distribution: The k-th moment of the ESD μ_{X_n} of an n × n random matrix X_n is defined as

m_k^{(n)} = (1/n) Σ_{i=1}^{n} λ_i^k
where λ_1, . . . , λ_n are the eigenvalues of X_n.

Figure 11.1: Normalized empirical spectral distribution of ER_n(α/n) for various values of α. Taken with n = 1000 using 100 trials.

Exercise 11.3 Show that m_k^{(n)} = (1/n) Tr(X_n^k), where Tr denotes the trace of a matrix, that is, the sum of its diagonal elements.
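This identity is easy to verify numerically for a small symmetric matrix; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
X = (B + B.T) / 2                      # a small symmetric random matrix

lam = np.linalg.eigvalsh(X)
k = 3
moment_from_eigs = np.mean(lam**k)                            # (1/n) sum lambda_i^k
moment_from_trace = np.trace(np.linalg.matrix_power(X, k)) / 6  # (1/n) Tr(X^k)
print(np.isclose(moment_from_eigs, moment_from_trace))  # True
```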

Limiting moments: The goal is to show that for each k ≥ 1, the k-th moment m_k^{(n)} converges to a deterministic limit as n → ∞:

m_k^{(n)} → m_k, as n → ∞,

where the m_k are the moments of the proposed limiting distribution.


Identification of the limiting distribution: If the moments m_k uniquely determine a probability distribution (that is, the moment problem is determinate), then the ESD converges to the limiting distribution characterized by these moments. A key sufficient condition ensuring this is Carleman's condition:

Σ_{k=1}^{∞} m_{2k}^{−1/(2k)} = ∞.

Consider a random matrix X_n. The k-th moment of the ESD can be related to the expected trace of X_n^k:

E[m_k^{(n)}] = (1/n) E[Tr(X_n^k)] = (1/n) E[ Σ_{1≤i_1,...,i_k≤n} X_n(i_1, i_2) X_n(i_2, i_3) · · · X_n(i_k, i_1) ]

By calculating these moments and showing that they converge to the moments
of a known distribution (e.g., the semicircle distribution for Wigner matrices), one
proves the convergence of the ESD to the LSD.

Homework 11.2 Check that the even 2k-th moment of ρ_sc is given by the Catalan numbers:

m_{2k}(ρ_sc) = C_k = (1/(k + 1)) (2k choose k).

Exercise 11.4 Show that the Catalan numbers satisfy the following recursion: C_0 = 1 and, for any n ≥ 0, C_{n+1} = Σ_{i=0}^{n} C_i C_{n−i}.
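Both the closed form and the recursion are easy to check by direct computation (pure Python):

```python
from math import comb

# Catalan numbers via the recursion C_0 = 1, C_{n+1} = sum_i C_i C_{n-i}.
C = [1]
for n in range(8):
    C.append(sum(C[i] * C[n - i] for i in range(n + 1)))

print(C[:6])  # [1, 1, 2, 5, 14, 42]

# They agree with the closed form C_k = (2k choose k) / (k + 1).
print(all(C[k] == comb(2 * k, k) // (k + 1) for k in range(len(C))))  # True
```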

Let

σ = √(p(1 − p))

be the standard deviation of the non-diagonal entries of A_n. Observe that (1/σ)A_n is a random matrix whose non-diagonal entries ζ_{ij}, i < j, are scaled Bernoulli random variables taking value 1/√(p(1 − p)) with probability p and 0 otherwise. Note that the variance of ζ_{ij} is 1. It will be convenient to shift the entries so that the mean is zero. Let J_n be the n × n matrix all of whose entries are 1. It is known (see [6, Lemma 39]) that the eigenvalues of (1/(√n σ))A_n and those of (1/(√n σ))(A_n − p J_n) interlace, so they share the same global spectral properties, and in particular their limiting ESDs are identical. So instead of working with (1/σ)A_n, we consider the centered matrix

M_n = (1/σ)(A_n − p J_n),

whose non-diagonal entries ξ_{ij}, i < j, are random variables with mean 0 and variance 1, taking value (1 − p)/σ with probability p and value −p/σ with probability 1 − p. Note that when p = ω(1/n), |ξ_{ij}| = o(√n). We can now finish the proof essentially in the same way as the method-of-moments proof of Wigner's semicircle law.

11.1.1 ER_n(p) when p = α/n

When p = O(1/n), the empirical spectral distribution of ER_n(p) no longer seems to converge to the semicircle distribution. Let us consider the case p = α/n with α fixed and n → ∞. See Figure 11.1 for the observed (normalized) eigenvalue distribution of ER_n(α/n) when n = 1000 and for various values of α. We make the following observations:

1. The spectrum seems to be a superposition of two components: a "discrete component" consisting of spikes, and a "continuous component."

2. For small values of α, the discrete spectrum is dominant, and for larger values
of α, the continuous spectrum is dominant.

3. The continuous spectrum seems to approach a semicircle as α gets larger.

The third point makes sense given the result of the previous section: if α → ∞, however slowly with n, then the limiting distribution is indeed the semicircle distribution. To explain the presence of the discrete spectrum, we need some information about the structure of ER_n(p), which we discuss next.
One of the key results of Erdős and Rényi concerns the qualitative nature of the structure of ER_n(p) for different regimes of p. The graph breaks into a number of connected components. When p = α/n, with α fixed and n → ∞, the size of the largest connected component exhibits the following "double jump" behavior:

• When p = α/n with α < 1, almost surely all components of ER_n(p) have size O(log n), and most of them are trees.

• When p = 1/n, almost surely the largest component of ER_n(p) has size on the order of n^{2/3}.

• When p = α/n with α > 1, almost surely there is a unique largest connected component (the giant component) of size g(α)n, where g is a continuous function satisfying g(1) = 0 and lim_{α→∞} g(α) = 1. All other components have size O(log n), and most of them are trees. In other words, p = 1/n is a sharp threshold for the existence of a giant component in ER_n(p). It is also known that p = (log n)/n is a sharp threshold for ER_n(p) being connected.

The above characterization of the structure of ER_n(p) lets us attempt to explain the spectra observed above. Note that most of the following claims are speculative in nature, as no rigorous proofs are known (although some "physicist's proofs" exist).

• When p = α/n with α < 1, the spectrum comes entirely from trees. We should be able to approximate the limiting ESD by computing the spectra of small trees. The dominance of trees explains the discrete nature of the spectrum.

• When p = α/n with α > 1, there is a giant component, which contributes the continuous component of the spectrum. There are still small connected components, mostly trees, which contribute to the discrete component. Also contributing to the observed discrete spectrum are trees attached to the giant component by a single vertex.

[Begin intermezzo] There has been a lot of development in recent years on sparse random graphs. In particular, various topologies have been introduced under which one can define a notion of graph convergence. One such theme is local weak convergence, where, roughly speaking, a sequence of graphs (G_n)_{n≥1} converges to a rooted graph (G, o) if B_r^{G_n}(o_n) (the ball of radius r around a uniformly chosen vertex o_n in G_n) is isomorphic to B_r^G(o) for n sufficiently large. Such topologies turn out to be very useful. It can be shown, for example, that ER_n(λ/n) converges locally to a rooted Galton–Watson tree with offspring distribution Poi(λ). The local topology renders many functionals continuous, and in fact many of the above results about the spectrum can be proved using this technology.
[End intermezzo]

11.1.2 Random regular graphs

In this section, we consider a random graph model different from that of the previous section. Recall that a d-regular graph is a graph in which every vertex has degree d. Let G_{n,d} be a random d-regular graph, chosen uniformly at random from all d-regular graphs on n labeled vertices. Note that G_{n,d} and ER_n(d/(n − 1)) have the same edge density, but a key difference is that the entries of the adjacency matrix of ER_n(d/(n − 1)) are independent, while those of G_{n,d} are not.

Figure 11.2: Normalized empirical spectral distribution of a random d-regular graph G_{n,d} for various values of d. Taken with n = 1000 using 100 trials.

Theorem 11.3 Let d ≥ 2 be a fixed integer. As n → ∞, the empirical spectral distribution of a random d-regular graph on n vertices approaches

f_d(x) = d √(4(d − 1) − x²) / (2π(d² − x²)) if |x| ≤ 2√(d − 1), and f_d(x) = 0 otherwise.
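As a sanity check on the density f_d (assuming NumPy), one can verify numerically that it integrates to 1 and that its second moment equals d, matching m_2 = d from the walk-counting argument below. We take d = 3, since for d = 2 the density has (integrable) endpoint singularities:

```python
import numpy as np

d = 3
r = 2 * np.sqrt(d - 1)                 # spectral edge 2*sqrt(d-1)
x = np.linspace(-r, r, 400001)

# Kesten-McKay density; for d >= 3 the poles at x = +-d lie outside the support.
f = d * np.sqrt(np.maximum(4 * (d - 1) - x**2, 0)) / (2 * np.pi * (d**2 - x**2))

dx = x[1] - x[0]
total = np.sum(f) * dx        # ~1: f_d is a probability density
m2 = np.sum(x**2 * f) * dx    # ~d: second moment, matching the recurrence below
print(total, m2)
```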

Here we give a sketch of the proof of Theorem 11.3, following the idea of the original paper [5] but with a somewhat different combinatorial analysis. The idea is to use the method of moments and count walks on trees, similar to the proof of Wigner's semicircle law.
Sketch of the proof: Fix d. Let A_n = (a_{ij}) denote the adjacency matrix of G_{n,d}. Let m_k denote the k-th moment of the limiting ESD, so that

m_k = lim_{n→∞} (1/n) E Tr A_n^k

Note that Tr A_n^k is the number of closed walks of length k in G_{n,d}. When d is fixed and n → ∞, almost surely the graph locally looks like a d-regular tree. So in the limit, (1/n) Tr A_n^k is the number of closed walks of length k in the (infinite) d-regular tree starting at the root; that is, m_k is the number of such closed walks. Since the length of a closed walk on a tree is always even, we have m_k = 0 whenever k is odd.
In a walk of length 2k, suppose the walk returns to the root for the first time after 2(i + 1) steps, for some 0 ≤ i ≤ k − 1. The first step is a down-step to the first level (we call the root the zeroth level), and there are d choices for this step. Subsequently, before returning to the root, the walk stays below the first level. At each step it takes either a down-step (for which there are always d − 1 choices) or an up-step. Between steps 1 and 2i + 1, we thus have a closed walk staying below the first level, so the sequence of up-steps and down-steps corresponds to a Dyck path, and the number of such sequences is the i-th Catalan number C_i. Since there are d − 1 choices for each down-step, the number of possible walks between steps 1 and 2i + 1 is C_i (d − 1)^i. After the walk returns to the root for the first time, there are m_{2(k−1−i)} ways to continue. So we obtain the recurrence relation

m_{2k} = d Σ_{i=0}^{k−1} C_i (d − 1)^i m_{2(k−1−i)}
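The recurrence can be tested against a direct count: powering the adjacency matrix of a d-regular tree truncated at depth k counts closed walks of length up to 2k from the root exactly, since such walks can never use edges below the truncation depth. A sketch assuming NumPy:

```python
import numpy as np
from math import comb

d, depth = 3, 3

# Build a d-regular tree truncated at the given depth: the root has d children,
# every other internal vertex has d - 1 children.
edges, level, next_id = [], [0], 1
for lvl in range(depth):
    new_level = []
    for v in level:
        for _ in range(d if lvl == 0 else d - 1):
            edges.append((v, next_id))
            new_level.append(next_id)
            next_id += 1
    level = new_level

n = next_id
A = np.zeros((n, n), dtype=np.int64)
for u, v in edges:
    A[u, v] = A[v, u] = 1

# (A^{2k})[0,0] = number of closed walks of length 2k from the root.
walks = [int(np.linalg.matrix_power(A, 2 * k)[0, 0]) for k in range(depth + 1)]

# The recurrence m_{2k} = d * sum_i C_i (d-1)^i m_{2(k-1-i)}.
catalan = lambda i: comb(2 * i, i) // (i + 1)
m = [1]
for k in range(1, depth + 1):
    m.append(d * sum(catalan(i) * (d - 1)**i * m[k - 1 - i] for i in range(k)))

print(walks)  # closed-walk counts from the root
print(m)      # recurrence values -- the two lists agree
```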

We turn this recurrence into a relation of generating functions in the variable y. Multiplying by y^k and summing over all k, we find that

Σ_{k=0}^{∞} m_{2k} y^k = 1 + d Σ_{k=1}^{∞} Σ_{i=0}^{k−1} C_i (d − 1)^i m_{2(k−1−i)} y^k = 1 + d y (Σ_{k=0}^{∞} C_k (d − 1)^k y^k)(Σ_{k=0}^{∞} m_{2k} y^k)

Let

M(y) = Σ_{k=0}^{∞} m_{2k} y^k

be the generating function of the m_{2k}. We know from the generating function of the Catalan numbers that
Σ_{k=0}^{∞} C_k (d − 1)^k y^k = (1 − √(1 − 4(d − 1)y)) / (2(d − 1)y)
So we get

M(y) = 1 + (d/(2(d − 1))) (1 − √(1 − 4(d − 1)y)) M(y)

and thus

M(y) = (1 − (d/(2(d − 1))) (1 − √(1 − 4(d − 1)y)))^{−1}
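One can check the closed form against the recurrence numerically (assuming NumPy): evaluate M(y) at a point inside the radius of convergence and compare with a partial sum of the series:

```python
import numpy as np
from math import comb

d = 3
catalan = lambda i: comb(2 * i, i) // (i + 1)

# Moments m_{2k} from the walk-counting recurrence.
m = [1]
for k in range(1, 40):
    m.append(d * sum(catalan(i) * (d - 1)**i * m[k - 1 - i] for i in range(k)))

# Closed form M(y) = (1 - d/(2(d-1)) (1 - sqrt(1 - 4(d-1)y)))^{-1};
# pick y well inside the radius of convergence 1/(4(d-1)).
y = 0.02
closed = 1.0 / (1.0 - d / (2 * (d - 1)) * (1.0 - np.sqrt(1.0 - 4 * (d - 1) * y)))
series = sum(mk * y**k for k, mk in enumerate(m))
print(closed, series)  # the two values agree to high precision
```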
To show that f_d(x) is indeed the limiting distribution, by the method of moments it suffices to check that

∫_{−2√(d−1)}^{2√(d−1)} x^k f_d(x) dx = m_k

for all k (one can check that Carleman's condition applies, giving uniqueness of the limiting distribution). Let M_k denote the left-hand side. Then M_k = 0 for odd k, since the distribution f_d is symmetric about zero. The generating function of the M_{2k} is
Σ_{k=0}^{∞} M_{2k} y^k = Σ_{k=0}^{∞} ∫_{−2√(d−1)}^{2√(d−1)} x^{2k} y^k f_d(x) dx
= ∫_{−2√(d−1)}^{2√(d−1)} f_d(x) / (1 − x² y) dx
= ∫_{−2√(d−1)}^{2√(d−1)} d √(4(d − 1) − x²) / (2π(d² − x²)(1 − x² y)) dx

The final integral can be evaluated by the following procedure: (1) substitute x = 2√(d − 1) cos θ; (2) convert it to a complex contour integral along the unit circle using z = e^{iθ}; (3) evaluate the integral using the residue theorem. The calculation is routine but tedious, so we omit the details. The final result of the calculation shows that we obtain the same generating function M(y) as above, so that M_k = m_k for all k, and thus the method of moments shows that the ESD of G_{n,d} indeed converges to f_d as claimed.

Homework 11.3 A Dyck path is a finite sequence ε = (ε_1, . . . , ε_n) ∈ {+1, −1}^n with

(a) ε_1 + · · · + ε_m ≥ 0 for all 1 ≤ m ≤ n;

(b) ε_1 + ε_2 + · · · + ε_n = 0.

If D_{2k} is the number of Dyck paths of length 2k, show that D_{2k} = C_k, the k-th Catalan number.
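For small k, the claim D_{2k} = C_k can be verified by brute-force enumeration (pure Python):

```python
from itertools import product
from math import comb

def is_dyck(path):
    # Partial sums stay nonnegative and the total sum is zero.
    s = 0
    for step in path:
        s += step
        if s < 0:
            return False
    return s == 0

for k in range(1, 6):
    count = sum(is_dyck(path) for path in product((+1, -1), repeat=2 * k))
    print(k, count, comb(2 * k, k) // (k + 1))  # count equals the Catalan number C_k
```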

11.1.3 Random regular graphs with d increasing

Above we considered the random graphs G_{n,d}, letting n → ∞ while holding d fixed. It was shown recently in [7] that if instead we let d → ∞, however slowly with n, then the limiting ESD is the semicircle distribution, similar to the case of ER_n(p) with p = ω(1/n).

Theorem 11.4 Let d → ∞ with d ≤ n/2. Let A_n be the adjacency matrix of G_{n,d}, and let σ = √((d/n)(1 − d/n)). Then, as n → ∞, the empirical spectral distribution of the matrix (1/(√n σ)) A_n converges in distribution to the semicircle distribution ρ_sc.

It was shown that if d → ∞, then ER_n(d/n) is d-regular with probability at least e^{−O(n√d)}. This is a small probability, but it is bounded from below. We know from Theorem 11.2 that the normalized ESD of ER_n(p) approaches the semicircle distribution. What [7] proved is a quantitative version of this convergence: a high-concentration result showing that the probability that the ESD of ER_n(p) deviates, in a suitable sense, from the semicircle distribution is much smaller than e^{−O(n√d)}, so that with high probability a random d-regular graph also has its ESD close to a semicircle.

[Begin intermezzo] In this chapter we gave a small glimpse of the spectrum of random graphs and its connections to random matrices. Both topics are active areas of research. There are many random graphs for which the spectrum is not known in complete detail, and these results have many applications in statistics, computer science, and physics. On the theoretical side there are deep connections to number theory. In terms of methods, we only saw the method of moments, which has a combinatorial flavour; other methods, such as the use of the Stieltjes transform, rely on more sophisticated machinery. We refer to the book [1] for random matrix details and to [2] for spectral graph theory.
[End intermezzo]
Bibliography

[1] G. W. Anderson, A. Guionnet, and O. Zeitouni. An Introduction to Random Matrices. No. 118. Cambridge University Press, 2010.

[2] F. R. K. Chung. Spectral Graph Theory. Vol. 92. American Mathematical Society, 1997.

[3] F. Chung, L. Lu, and V. Vu. The spectra of random graphs with given expected degrees. Internet Math., 1(3):257-275, 2004.

[4] D. A. Levin and Y. Peres. Markov Chains and Mixing Times. Vol. 107. American Mathematical Society, 2017.

[5] B. D. McKay. The expected eigenvalue distribution of a large regular graph. Linear Algebra Appl., 40:203-216, 1981.

[6] T. Tao and V. Vu. Random matrices: universality of local eigenvalue statistics. Acta Math., 206(1):127-204, 2011.

[7] L. V. Tran, V. H. Vu, and K. Wang. Sparse random graphs: eigenvalues and eigenvectors. Random Structures Algorithms, 42(1):110-134, 2013.
