Communications in Mathematical Physics. 2021 Aug 14;388(1):507–579. doi: 10.1007/s00220-021-04167-y

Delocalization Transition for Critical Erdős–Rényi Graphs

Johannes Alt, Raphael Ducatez, Antti Knowles
PMCID: PMC8550299  PMID: 34720130

Abstract

We analyse the eigenvectors of the adjacency matrix of a critical Erdős–Rényi graph G(N, d/N), where d is of order log N. We show that its spectrum splits into two phases: a delocalized phase in the middle of the spectrum, where the eigenvectors are completely delocalized, and a semilocalized phase near the edges of the spectrum, where the eigenvectors are essentially localized on a small number of vertices. In the semilocalized phase the mass of an eigenvector is concentrated in a small number of disjoint balls centred around resonant vertices, in each of which it is a radial exponentially decaying function. The transition between the phases is sharp and is manifested in a discontinuity in the localization exponent γ(w) of an eigenvector w, defined through ‖w‖_∞/‖w‖_2 = N^{−γ(w)/2}. Our results remain valid throughout the optimal regime √(log N) ≪ d ≤ O(log N).

Introduction

Overview

Let A be the adjacency matrix of a graph with vertex set [N] = {1, …, N}. We are interested in the geometric structure of the eigenvectors of A, in particular their spatial localization. An ℓ²-normalized eigenvector w = (w_x)_{x∈[N]} gives rise to a probability measure ∑_{x∈[N]} w_x² δ_x on the set of vertices. Informally, w is delocalized if its mass is approximately uniformly distributed throughout [N], and localized if its mass is essentially concentrated on a small number of vertices.

There are several ways of quantifying spatial localization. One is the notion of concentration of mass, sometimes referred to as scarring [49], stating that there is some set B ⊂ [N] of small cardinality and a small ε > 0 such that ∑_{x∈B} w_x² = 1 − ε. In this case, it is also of interest to characterize the geometric structure of the vertex set B and of the eigenvector w restricted to B. Another convenient quantifier of spatial localization is the ℓ^p-norm ‖w‖_p for 2 ≤ p ≤ ∞. It has the following interpretation: if the mass of w is uniformly distributed over some set B ⊂ [N] then ‖w‖_p² = |B|^{−1+2/p}. Focusing on the ℓ^∞-norm for definiteness, we define the localization exponent γ(w) through

‖w‖_∞² =: N^{−γ(w)}.  (1.1)

Thus, 0 ≤ γ(w) ≤ 1, and γ(w) = 0 corresponds to localization at a single vertex, while γ(w) = 1 corresponds to complete delocalization.
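As a concrete numerical illustration (not from the paper, synthetic vectors only): the localization exponent of a vector supported on one vertex is 0, that of a flat vector is 1, and a vector spread uniformly over a set B satisfies ‖w‖_p² = |B|^{−1+2/p}.

```python
import numpy as np

N = 10_000

def gamma(w, N):
    """Localization exponent from (1.1): ||w||_inf^2 =: N^(-gamma)."""
    return -np.log(np.max(np.abs(w)) ** 2) / np.log(N)

# Perfectly localized vector: all mass on one vertex -> gamma = 0.
w_loc = np.zeros(N)
w_loc[0] = 1.0

# Perfectly delocalized vector: uniform mass -> gamma = 1.
w_del = np.full(N, N ** -0.5)

# Mass uniform over a set B of size |B|: ||w||_p^2 = |B|^(-1 + 2/p).
B = 100
w_B = np.zeros(N)
w_B[:B] = B ** -0.5
p = 4.0
lhs = np.sum(np.abs(w_B) ** p) ** (2.0 / p)   # ||w_B||_p^2
rhs = B ** (-1.0 + 2.0 / p)
```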

In this paper we address the question of spatial localization for the random Erdős–Rényi graph G(N, d/N). We consider the limit N → ∞ with d ≡ d_N. It is well known that G(N, d/N) undergoes a dramatic change in behaviour at the critical scale d ≍ log N, which is the scale at and below which the vertex degrees do not concentrate. Thus, for d ≫ log N, with high probability all degrees are approximately equal and the graph is homogeneous. On the other hand, for d ≲ log N, the degrees do not concentrate and the graph becomes highly inhomogeneous: it contains for instance hubs of exceptionally large degree, leaves, and isolated vertices. As long as d > 1, the graph has with high probability a unique giant component, and we shall always restrict our attention to it.

Here we propose the Erdős–Rényi graph at criticality as a simple and natural model on which to address the question of spatial localization of eigenvectors. It has the following attributes.

  • (i)

    Its graph structure provides an intrinsic and nontrivial notion of distance.

  • (ii)

    Its spectrum splits into a delocalized phase and a semilocalized phase. The transition between the phases is sharp, in the sense of a discontinuity in the localization exponent.

  • (iii)

    Both phases are amenable to rigorous analysis.

Our results are summarized in the phase diagram of Fig. 1, which is expressed in terms of the parameter b parametrizing d = b log N on the critical scale and the eigenvalue λ of A/√d associated with the eigenvector w. To the best of our knowledge, the phase coexistence for the critical Erdős–Rényi graph established in this paper had previously not been analysed even in the physics literature.

Fig. 1.


The phase diagram of the adjacency matrix A/√d of the Erdős–Rényi graph G(N, d/N) at criticality, where d = b log N with b fixed. The horizontal axis records the location in the spectrum and the vertical axis the sparseness parameter b. The spectrum is confined to the coloured region. In the red region the eigenvectors are delocalized, while in the blue region they are semilocalized. The grey regions have width o(1) and are not analysed in this paper. For b > b_* the spectrum is asymptotically contained in [−2, 2] and the semilocalized phase does not exist. For b < b_* a semilocalized phase emerges in the region (−λ_max(b), −2) ∪ (2, λ_max(b)) for some explicit λ_max(b) > 2

Throughout the following, we always exclude the largest eigenvalue of A, its Perron–Frobenius eigenvalue, which is an outlier separated from the rest of the spectrum. The delocalized phase is characterized by a localization exponent asymptotically equal to 1. It exists for all fixed b > 0 and consists asymptotically of energies in (−2, 0) ∪ (0, 2). The semilocalized phase is characterized by a localization exponent asymptotically less than 1. It exists only when b < b_*, where

b_* := 1/(2 log 2 − 1) ≈ 2.59.  (1.2)

It consists asymptotically of energies in (−λ_max(b), −2) ∪ (2, λ_max(b)), where λ_max(b) > 2 is an explicit function of b (see (1.14) below). The density of states at energy λ ∈ R is equal to N^{ρ_b(λ)+o(1)}, where ρ_b is an explicit exponent defined in (1.14) below and illustrated in Fig. 2. It has a discontinuity at 2 (and similarly at −2), jumping from ρ_b(2−) = 1 to ρ_b(2+) = 1 − b/b_*. The localization exponent γ(w) from (1.1) of an eigenvector w with associated eigenvalue λ satisfies with high probability

γ(w) = 1 + o(1) if |λ| < 2,  γ(w) ≤ ρ_b(λ) + o(1) if |λ| > 2.

This establishes a discontinuity, in the limit N → ∞, in the localization exponent γ(w) as a function of λ at the energies ±2. See Fig. 2 for an illustration; we also refer to Appendix A.1 for a simulation depicting the behaviour of w throughout the spectrum. Moreover, in the semilocalized phase scarring occurs in the sense that a fraction 1 − o(1) of the mass of the eigenvectors is supported in a set of at most N^{ρ_b(λ)+o(1)} vertices.

Fig. 2.


The behaviour of the exponents ρ_b and γ as a function of the energy λ. The dark blue curve is the exponent ρ_b(λ) characterizing the density of states N^{ρ_b(λ)+o(1)} of the matrix A/√d at energy λ. The entire blue region (light and dark blue) is the asymptotically allowed region of the localization exponent γ(w) of an eigenvector of A/√d as a function of the associated eigenvalue λ. Here d = b log N with b = 1 and λ_max(b) ≈ 2.0737. We only plot a neighbourhood of the threshold energy 2. The discontinuity at 2 of ρ_b is from ρ_b(2−) = 1 to ρ_b(2+) = 1 − b/b_* = 2 − 2 log 2

The eigenvalues in the semilocalized phase were analysed in [10], where it was proved that they arise precisely from vertices x of abnormally large degree, D_x ≥ 2d. More precisely, it was proved in [10] that each vertex x with D_x ≥ 2d gives rise to two eigenvalues of A/√d near ±Λ(D_x/d), where Λ(α) := α/√(α−1). The same result for the O(1) largest degree vertices was independently proved in [54] by a different method. We refer also to [14, 15] for an analysis in the supercritical and subcritical phases.

In the current paper, we prove that the eigenvector w associated with an eigenvalue λ in the semilocalized phase is highly concentrated around the resonant vertices at energy λ, which are defined as the vertices x such that Λ(D_x/d) is close to λ. For this reason, we also call the resonant vertices localization centres. With high probability, and after a small pruning of the graph, all balls B_r(x) of a certain radius r ≫ 1 around the resonant vertices are disjoint, and within any such ball B_r(x) the eigenvector w is an approximately radial, exponentially decaying function. The number of resonant vertices at energy λ is comparable to the density of states, N^{ρ_b(λ)+o(1)}, which is much smaller than N. See Fig. 3 for a schematic illustration of the mass distribution of w.

Fig. 3.


A schematic representation of the geometric structure of a typical eigenvector in the semilocalized phase. The giant component of the graph is depicted in pale blue. The eigenvector’s mass (depicted in dark blue) is concentrated in a small number of disjoint balls centred around resonant vertices (drawn in white), and within each ball the mass decays exponentially in the radius. The mass outside the balls is an asymptotically vanishing proportion of the total mass

The behaviour of the critical Erdős–Rényi graph described above has some similarities with, but also differences from, that of the Anderson model [11]. The Anderson model on Z^n with n ≥ 3 is conjectured to exhibit a metal-insulator, or delocalization-localization, transition: for weak enough disorder, the spectrum splits into a delocalized phase in the middle of the spectrum and a localized phase near the spectral edges. See e.g. [8, Figure 1.2] for a phase diagram of its conjectured behaviour. So far, only the localized phase of the Anderson model has been understood rigorously, in the landmark works [4, 39], as well as in many subsequent developments. The phase diagram for the Anderson model bears some similarity to that of Fig. 1, in which one can interpret 1/b as the disorder strength, since smaller values of b lead to stronger inhomogeneities in the graph.

As is apparent from the proofs in [4, 39], in the localized phase the local structure of an eigenvector of the Anderson model is similar to that of the critical Erdős–Rényi graph described above: exponentially decaying around well-separated localization centres associated with resonances near the energy λ of the eigenvector. The localization centres arise from exceptionally large local averages of the potential. The phenomenon of localization can be heuristically understood using the following well-known rule of thumb: one expects localization around a single localization centre if the level spacing is much larger than the tunnelling amplitude between localization centres. It arises from perturbation theory around the block diagonal model where the complement of the balls B_r(x) around the localization centres is set to zero. On a very elementary level, this rule is illustrated by the 2×2 matrix H(t) = [0 t; t 1], whose eigenvectors are localized for t = 0, remain essentially localized for t ≪ 1, where perturbation theory around H(0) is valid, and become delocalized for t ≫ 1, where perturbation theory around H(0) fails.

More precisely, it is a general heuristic that the tunnelling amplitude decays exponentially in the distance between the localization centres [25]. Denoting by β(λ) > 1 the rate of exponential decay at energy λ, the rule of thumb hence reads

β(λ)^{−L} ≪ ε(λ),  (1.3)

where L is the distance between the localization centres and ε(λ) the level spacing at energy λ. For the Anderson model restricted to a finite cube of Z^n with side length N^{1/n}, the level spacing ε(λ) is of order N^{−1} (see [57] and [8, Chapter 4]) whereas the diameter of the graph is of order N^{1/n}. Hence, the rule of thumb (1.3) becomes

β(λ)^{−N^{1/n}} ≪ N^{−1},

which is satisfied, and one therefore expects localization. For the critical Erdős–Rényi graph, the level spacing ε(λ) is N^{−ρ(λ)+o(1)} but the diameter of the giant component is only of order log N / log d. Hence, the rule of thumb (1.3) becomes

N^{−log β(λ)/log d} ≪ N^{−ρ(λ)+o(1)},

which is never satisfied because log β(λ)/log d → 0 as N → ∞. Thus, the rule of thumb (1.3) is satisfied in the localized phase of the Anderson model but not in the semilocalized phase of the critical Erdős–Rényi graph. The underlying reason for this difference is that the diameter of the Anderson model is polynomial in N, while the diameter of the critical Erdős–Rényi graph is logarithmic in N. Thus, the critical Erdős–Rényi graph is far more connected than the Anderson model; this property tends to push it more towards the delocalized behaviour of mean-field systems. As noted above, another important difference between the localized phase of the Anderson model and the semilocalized phase of the critical Erdős–Rényi graph is that the density of states is of order N in the former and a fractional power of N in the latter.
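The two instances of the rule of thumb can be made concrete with a small numerical sketch (illustrative only; the decay rate β = 2 and the exponent ρ = 1/2 are arbitrary choices, not quantities computed in the paper):

```python
import math

# Rule of thumb (1.3): localization is expected when beta^(-L) << eps(lambda).
beta = 2.0
Ns = (10**6, 10**9, 10**12)

# Anderson model on a cube of Z^3: L ~ N^(1/3), level spacing eps ~ 1/N.
anderson = [(beta ** -(N ** (1 / 3)), 1.0 / N) for N in Ns]

# Critical Erdos-Renyi graph with d = log N: L ~ log N / log d, eps ~ N^(-rho).
rho = 0.5
erdos = [(beta ** -(math.log(N) / math.log(math.log(N))), N ** -rho) for N in Ns]

# (1.3) holds for the Anderson model (tunnelling amplitude below level spacing) ...
print(all(t < e for t, e in anderson))   # True
# ... but fails for the critical Erdos-Renyi graph (semilocalization instead).
print(all(t > e for t, e in erdos))      # True
```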

Up to now we have focused on the Erdős–Rényi graph on the critical scale d ≍ log N. It is natural to ask whether this assumption can be relaxed without changing its behaviour. The question of the upper bound on d is simple: as explained above, there is no semilocalized phase for d > b_* log N, and the delocalized phase is completely understood for all d up to N/2, thanks to Theorem 1.8 below and [35, 42]. The lower bound is more subtle. In fact, it turns out that all of our results remain valid throughout the regime

√(log N) ≪ d ≤ O(log N).  (1.4)

The lower bound √(log N) is optimal in the sense that below it both phases are disrupted and the phase diagram from Fig. 1 no longer holds. Indeed, for d ≪ √(log N) a new family of localized states, associated with so-called tuning forks at the periphery of the graph, appears throughout the delocalized and semilocalized phases. We refer to Sect. 1.5 below for more details.

Previously, strong delocalization with localization exponent γ(w) = 1 + o(1) has been established for many mean-field models, such as Wigner matrices [1, 34–37], supercritical Erdős–Rényi graphs [35, 42], and random regular graphs [12, 13]. All of these models are homogeneous and have only a delocalized phase.

Although a rigorous understanding of the metal-insulator transition for the Anderson model is still elusive, some progress has been made for random band matrices. Random band matrices [23, 40, 47, 58] constitute an attractive model interpolating between the Anderson model and mean-field Wigner matrices. They retain the n-dimensional structure of the Anderson model but have proved somewhat more amenable to rigorous analysis. They are conjectured [40] to have a phase diagram similar to that of the Anderson model in dimensions n ≥ 3. As for the Anderson model, dimensions n > 1 have so far seen little progress, but for n = 1 much has been understood both in the localized [48, 50] and the delocalized [20–22, 28–33, 43, 51, 52, 59] phases. A simplification of band matrices is the ultrametric ensemble [41], where the Euclidean metric of Z^n is replaced with an ultrametric arising from a tree structure. For this model, a phase transition was rigorously established in [56].

Another modification of the n-dimensional Anderson model is the Anderson model on the Bethe lattice, an infinite regular tree corresponding to the case n = ∞. For it, the existence of a delocalized phase was shown in [5, 38, 44]. In [6, 7] it was shown that for unbounded random potentials the delocalized phase exists for arbitrarily weak disorder. It extends beyond the spectrum of the unperturbed adjacency matrix into the so-called Lifschitz tails, where the density of states is very small. The authors showed that, through the mechanism of resonant delocalization, the exponentially decaying tunnelling amplitudes between localization centres are counterbalanced by an exponentially large number of possible channels through which tunnelling can occur, so that the rule of thumb (1.3) for localization is violated. As a consequence, the eigenvectors are delocalized across many resonant localization centres. We remark that this analysis was made possible by the absence of cycles on the Bethe lattice. In contrast, the global geometry of the critical Erdős–Rényi graph is fundamentally different from that of the Bethe lattice (through the existence of a very large number of long cycles), which has a defining impact on the nature of the delocalization-semilocalization transition summarized in Fig. 1.

Transitions in the localization behaviour of eigenvectors have also been analysed in several mean-field type models. In [45, 46] the authors considered the sum of a Wigner matrix and a diagonal matrix with independent random entries with a large enough variance. They showed that the eigenvectors in the bulk are delocalized while near the edge they are partially localized at a single site. Their partially localized phase can be understood heuristically as a rigorous (and highly nontrivial) verification of the rule of thumb for localization, where the perturbation takes place around the diagonal matrix. Heavy-tailed Wigner matrices, or Lévy matrices, whose entries have α-stable laws for 0<α<2, were proposed in [24] as a simple model that exhibits a transition in the localization of its eigenvectors; we refer to [3] for a summary of the predictions from [24, 53]. In [18, 19] it was proved that for energies in a compact interval around the origin, eigenvectors are weakly delocalized, and for 0<α<2/3 for energies far enough from the origin, eigenvectors are weakly localized. In [3], full delocalization was proved in a compact interval around the origin, and the authors even established GOE local eigenvalue statistics in the same spectral region. In [2], the law of the eigenvector components of Lévy matrices was computed.

Conventions. Throughout the following, every quantity that is not explicitly constant depends on the fundamental parameter N. We almost always omit this dependence from our notation. We use C to denote a generic positive universal constant, and write X = O(Y) to mean |X| ≤ CY. For X, Y > 0 we write X ≍ Y if X = O(Y) and Y = O(X). We write X ≪ Y or X = o(Y) to mean lim_{N→∞} X/Y = 0. A vector is normalized if its ℓ²-norm is one.

Results—the semilocalized phase

Let G = G(N, d/N) be the Erdős–Rényi graph with vertex set [N] := {1, …, N} and edge probability d/N for 0 ≤ d ≤ N. Let A = (A_{xy})_{x,y∈[N]} ∈ {0,1}^{N×N} be the adjacency matrix of G. Thus, A = A^*, A_{xx} = 0 for all x ∈ [N], and (A_{xy} : x < y) are independent Bernoulli(d/N) random variables.

The entrywise nonnegative matrix A/√d has a trivial Perron–Frobenius eigenvalue, which is its largest eigenvalue. In the following we only consider the other eigenvalues, which we call nontrivial. In the regime d ≫ √(log N / log log N), which we always assume in this paper, the trivial eigenvalue is located at √d (1 + o(1)), and it is separated from the nontrivial ones with high probability; see [14]. Moreover, without loss of generality, in this subsection we always assume that d ≤ 3 log N, for otherwise the semilocalized phase does not exist (see Sect. 1.1).

For x ∈ [N] we define the normalized degree of x as

α_x := (1/d) ∑_{y∈[N]} A_{xy}.  (1.5)

In Theorem 1.7 below we show that the nontrivial eigenvalues of A/√d outside the interval [−2, 2] are in two-to-one correspondence with vertices of normalized degree greater than 2: each vertex x with α_x > 2 gives rise to two eigenvalues of A/√d located with high probability near ±Λ(α_x), where we define the bijective function Λ : [2, ∞) → [2, ∞) through

Λ(α) := α/√(α − 1).  (1.6)
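As a sanity check on (1.6) (code not part of the paper), one can verify numerically that Λ is increasing on [2, ∞), fixes the point 2, and is inverted by the explicit formula (λ²/2)(1 + √(1 − 4/λ²)) appearing in (1.14) below:

```python
import numpy as np

def Lam(alpha):
    """Lambda(alpha) := alpha / sqrt(alpha - 1), a bijection from [2, inf) to [2, inf)."""
    return alpha / np.sqrt(alpha - 1.0)

def Lam_inv(lam):
    """Explicit inverse: Lambda^{-1}(lambda) = (lambda^2 / 2) * (1 + sqrt(1 - 4 / lambda^2))."""
    return 0.5 * lam**2 * (1.0 + np.sqrt(1.0 - 4.0 / lam**2))

alphas = np.linspace(2.0, 10.0, 81)
lams = Lam(alphas)
```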

Our main result in the semilocalized phase is about the eigenvectors associated with these eigenvalues. To state it, we need the following notions.

Definition 1.1

Let λ > 2 and 0 < δ ≤ λ − 2. We define the set of resonant vertices at energy λ through

W_{λ,δ} := {x ∈ [N] : α_x ≥ 2, |Λ(α_x) − λ| ≤ δ}.  (1.7)

We denote by B_r(x) the ball of radius r around the vertex x for the graph distance in G. Define

r := c√(log N);  (1.8)

all of our results will hold provided c > 0 is chosen to be a small enough universal constant. The quantity r will play the role of a maximal radius for balls around localization centres.

We introduce the basic control parameters

ξ := (√(log N)/d) log d,  ξ_u := √(log N)/(d u),  (1.9)

which under our assumptions will always be small (see Remark 1.5 below). We now state our main result in the semilocalized phase.

Theorem 1.2

(Semilocalized phase). For any ν > 0 there exists a constant C such that the following holds. Suppose that

C √(log N) log log N ≤ d ≤ 3 log N.  (1.10)

Let w be a normalized eigenvector of A/√d with nontrivial eigenvalue λ ≥ 2 + Cξ^{1/2}. Let 0 < δ ≤ (λ − 2)/2. Then for each x ∈ W_{λ,δ} there exists a normalized vector v(x), supported in B_r(x), such that the supports of v(x) and v(y) are disjoint for x ≠ y, and

[Displayed estimate rendered only as an image in the source: it bounds the distance between w and its projection onto the span of the vectors v(x), x ∈ W_{λ,δ}, by an error controlled by ξ, ξ_{λ−2}, and δ.]

with probability at least 1 − CN^{−ν}. Moreover, v(x) decays exponentially around x in the sense that for any r ≥ 0 we have

∑_{y ∉ B_r(x)} (v(x))_y² ≲ 1/(α_x − 1)^{r+1}.

Remark 1.3

An analogous result holds for negative eigenvalues λ ≤ −2 − Cξ^{1/2}, with a different vector v(x). See Theorem 3.4 and Remark 3.5 below for a precise statement.

Remark 1.4

The upper bound d ≤ 3 log N in (1.10) is made for convenience and without loss of generality, because if d > 3 log N then, as explained in Sect. 1.1, with high probability the semilocalized phase does not exist, i.e. eigenvalues satisfying the conditions of Theorem 1.2 do not exist.

Theorem 1.2 implies that w is almost entirely concentrated in the balls around the resonant vertices, and in each such ball B_r(x), x ∈ W_{λ,δ}, the vector w is almost collinear with the vector v(x). Thus, v(x) has the interpretation of the localization profile around the localization centre x. Since it has exponential decay, we deduce immediately from Theorem 1.2 that the radius r can be made smaller at the expense of worse error terms. In fact, in Definition 3.2 and Theorem 3.4 below, we give an explicit definition of v(x), which shows that it is radial in the sense that its value at a vertex y depends only on the distance between x and y, in which it is an exponentially decaying function. To ensure that the supports of the vectors v(x) for different x do not overlap, v(x) is in fact defined as the restriction of a radial function around x to a subgraph of G, the pruned graph, which differs from G by only a small number of edges and whose balls of radius r around the vertices of W_{λ,δ} are disjoint (see Proposition 3.1 below). For positive eigenvalues, the entries of v(x) are nonnegative, while for negative eigenvalues its entries carry a sign that alternates in the distance to x. The set of resonant vertices W_{λ,δ} is a small fraction of the whole vertex set [N]; its size is analysed in Lemma A.12 below.
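The radial, exponentially decaying shape of the profile can be illustrated with a back-of-the-envelope computation (a heuristic sketch in the large-d limit of the (αd, d)-regular tree, not the paper's actual definition of v(x)): the mass per sphere decays geometrically with ratio 1/(α − 1), so the mass outside B_r(x) is of order (α − 1)^{−(r+1)}, matching the decay bound in Theorem 1.2.

```python
import numpy as np

# Sphere-by-sphere mass of an idealized radial profile around a vertex of
# normalized degree alpha (heuristic large-d tree limit, assumed for illustration).
alpha = 3.0
R = 40
m = np.empty(R + 1)
m[0] = 1.0                        # mass at the centre x itself
m[1] = alpha / (alpha - 1.0)      # mass on the first sphere
for i in range(1, R):
    m[i + 1] = m[i] / (alpha - 1.0)   # geometric decay, ratio 1 / (alpha - 1)
m /= m.sum()                      # normalize the profile to unit mass

# Mass outside B_r(x), rescaled by (alpha - 1)^(r+1): approximately constant in r.
rs = np.arange(R // 2)
tails = np.array([m[r + 1:].sum() for r in rs])
scaled = tails * (alpha - 1.0) ** (rs + 1)
```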

Remark 1.5

Note that, by the lower bounds imposed on d and λ in Theorem 1.2, we always have ξ, ξ_{λ−2} ≤ 1/C.

Using the exponential decay of the localization profiles, it is easy to deduce from Theorem 1.2 that a positive proportion of the eigenvector mass concentrates at the resonant vertices.

Corollary 1.6

Under the assumptions of Theorem 1.2 we have

∑_{y∈W_{λ,δ}} w_y² = √(λ² − 4)/(λ + √(λ² − 4)) + O( C(ξ + ξ_{λ−2})/δ + C δ λ^{5/2}/(λ − 2) )

with probability at least 1-CN-ν.
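The leading term of Corollary 1.6 can be cross-checked heuristically (again in the idealized large-d tree approximation, which is not the paper's proof): for a radial profile around a vertex of normalized degree α, the fraction of mass at the centre vertex equals √(λ² − 4)/(λ + √(λ² − 4)) at λ = Λ(α).

```python
import numpy as np

def centre_mass(alpha, R=200):
    """Fraction of mass at the centre of the idealized radial profile (heuristic)."""
    m = [1.0, alpha / (alpha - 1.0)]
    for _ in range(R):
        m.append(m[-1] / (alpha - 1.0))   # geometric decay per sphere
    return m[0] / sum(m)

def main_term(lam):
    """Leading term sqrt(lam^2 - 4) / (lam + sqrt(lam^2 - 4))."""
    s = np.sqrt(lam**2 - 4.0)
    return s / (lam + s)

# Compare at lambda = Lambda(alpha) for several alpha > 2.
alphas = (2.5, 3.0, 5.0, 10.0)
pairs = [(centre_mass(a), main_term(a / np.sqrt(a - 1.0))) for a in alphas]
```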

Next, we state a rigidity result on the eigenvalue locations in the semilocalized phase. It generalizes [10, Corollary 2.3] by improving the error bound and extending it to the full regime (1.4) of d, below which it must fail (see Sect. 1.5 below). Its proof is a byproduct of the proof of our main result in the semilocalized phase, Theorem 1.2. We denote the ordered eigenvalues of a Hermitian matrix M ∈ C^{N×N} by λ_1(M) ≥ λ_2(M) ≥ ⋯ ≥ λ_N(M). We only consider the nontrivial eigenvalues of A/√d, i.e. λ_i(A/√d) with 2 ≤ i ≤ N. For the following statements we order the normalized degrees by choosing a (random) permutation σ ∈ S_N such that i ↦ α_{σ(i)} is nonincreasing.

Theorem 1.7

(Eigenvalue locations in the semilocalized phase). For any ν > 0 there exists a constant C such that the following holds. Suppose that (1.10) holds. Let

U := {x ∈ [N] : Λ(α_x) ≥ 2 + ξ^{1/2}}.

Then with probability at least 1 − CN^{−ν}, for all 1 ≤ i ≤ |U| we have

|λ_{i+1}(A/√d) − Λ(α_{σ(i)})| + |λ_{N−i+1}(A/√d) + Λ(α_{σ(i)})| ≤ C (ξ + ξ_{Λ(α_{σ(i)})−2})  (1.11)

and for all |U| + 2 ≤ i ≤ N − |U| we have

|λ_i(A/√d)| ≤ 2 + ξ^{1/2}.  (1.12)

We remark that the upper bound on d from (1.10), which is necessary for the existence of a semilocalized phase, can be relaxed in Theorem 1.7 to obtain an estimate on max_{2≤i≤N} |λ_i(A/√d)| in the supercritical regime d ≥ 3 log N, which is sharper than the one in [10]. The proof is the same and we do not pursue this direction here.

We conclude this subsection with a discussion of the counting function of the normalized degrees, which we use to estimate the number of resonant vertices (1.7). For b ≥ 0 and α ≥ 2 define the exponent

θ_b(α) := [1 − b(α log α − α + 1)]_+.  (1.13)

Define α_max(b) := inf{α ≥ 2 : θ_b(α) = 0}. Thus, θ_b is a nonincreasing function that is nonzero on [2, α_max(b)). Moreover, θ_b(2) = [1 − b/b_*]_+, so that α_max(b) > 2 if and only if b < b_*. From Lemma A.9 below it is easy to deduce that if d ≫ 1 then α_{σ(1)} = α_max(d/log N) + O(ζ/d) with probability at least 1 − o(1) for any ζ ≫ 1. Thus, α_max(d/log N) has the interpretation of the deterministic location of the largest normalized degree. See Fig. 4 for a plot of θ_b.
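The elementary properties of θ_b claimed above are easy to verify numerically (a small sketch, not from the paper; the bisection routine for α_max is ours):

```python
import math

b_star = 1.0 / (2.0 * math.log(2.0) - 1.0)   # (1.2), approximately 2.59

def theta(b, alpha):
    """theta_b(alpha) = [1 - b * (alpha * log(alpha) - alpha + 1)]_+ from (1.13)."""
    return max(0.0, 1.0 - b * (alpha * math.log(alpha) - alpha + 1.0))

def alpha_max(b, hi=100.0, tol=1e-12):
    """alpha_max(b) = inf{alpha >= 2 : theta_b(alpha) = 0}, computed by bisection."""
    lo = 2.0
    if theta(b, lo) == 0.0:
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if theta(b, mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For b = 1 the zero of θ_b sits at α_max(1) = e, and θ_1(2) = 2 − 2 log 2 reproduces the jump of ρ_b at the threshold energy 2 shown in Fig. 2.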

Fig. 4.


A plot of the exponent θ_b(α) as a function of α ≥ 2 for the values b = 0.3 (blue), b = 1.3 (red), and b = 2.3 (green). The graph hits the value 0 at α = α_max(b)

In Appendix A.4 below, we obtain estimates on the density of the normalized degrees (α_x)_{x∈[N]} and combine them with Theorem 1.2 to deduce a lower bound on the ℓ^p-norm of eigenvectors in the semilocalized phase. The precise statements are given in Lemma A.12 and Corollary A.13, which provide quantitative error bounds throughout the regime (1.10). Here, we summarize them, for simplicity, in qualitative versions in the critical regime d ≍ log N. For b < b_* we abbreviate

λ_max(b) := Λ(α_max(b)),  ρ_b(λ) := θ_b(Λ^{−1}(λ)) if |λ| ≥ 2, and ρ_b(λ) := 1 if |λ| < 2,  (1.14)

where Λ^{−1}(λ) = (λ²/2)(1 + √(1 − 4/λ²)) for |λ| ≥ 2. Let d = b log N with some constant b < b_*, and suppose that 2 + κ ≤ λ ≤ λ_max(b) − κ for some constant κ > 0. Then Lemma A.12 (ii) implies (choosing 1/d ≪ δ ≪ 1)

|W_{λ,δ}| = N^{ρ_b(λ)+o(1)}  (1.15)

with probability 1 − o(1). From (1.15) and Theorem 1.2 we obtain, for any 2 ≤ p ≤ ∞,

‖w‖_p² ≥ N^{(2/p − 1) ρ_b(λ) + o(1)}  (1.16)

with probability 1 − o(1) (see Corollary A.13 below). In other words, the localization exponent γ(w) from (1.1) satisfies γ(w) ≤ ρ_b(λ) + o(1). See Fig. 2 for an illustration of the bound (1.16) for p = ∞. We remark that the exponent ρ_b(λ) also describes the density of states at energy λ: under the above assumptions on b and λ, for any interval I containing λ and satisfying ξ ≤ |I| ≤ 1, the number of eigenvalues in I is equal to N^{ρ_b(λ)+o(1)} |I| with probability 1 − o(1), as can be seen from Lemma A.12 (i) and Theorem 1.7.

Results—the delocalized phase

Let A be the adjacency matrix of G(N, d/N), as in Sect. 1.2. For 0 < κ < 1/2 define the spectral region

S_κ := [−2 + κ, −κ] ∪ [κ, 2 − κ].  (1.17)

Theorem 1.8

(Delocalized phase). For any ν > 0 and κ > 0 there exists a constant C > 0 such that the following holds. Suppose that

C √(log N) ≤ d ≤ (log N)^{3/2}.  (1.18)

Let w be a normalized eigenvector of A/√d with eigenvalue λ ∈ S_κ. Then

‖w‖_∞² ≤ N^{−1+κ}  (1.19)

with probability at least 1-CN-ν.

In the delocalized phase, i.e. in S_κ, we also show that the spectral measure of A/√d at any vertex x is well approximated by the spectral measure at the root of T_{dα_x, d}, the infinite rooted (dα_x, d)-regular tree, whose root has dα_x children and all of whose other vertices have d children. This approximation is a local law, valid for intervals containing down to N^κ eigenvalues. See Remark 4.4 as well as Remark 4.3 and Appendix A.2 below for details.

Remark 1.9

In [42] it is shown that (1.19) holds with probability at least 1 − CN^{−ν} for all eigenvectors provided that

C log N ≤ d ≤ N/2.  (1.20)

This shows that the upper bound in (1.18) is in fact not restrictive.

Remark 1.10

(Optimality of (1.18) and (1.20)). Both lower bounds in (1.18) and (1.20) are optimal (up to the value of C), in the sense that delocalization fails in each case if these lower bounds are relaxed. See Sect. 1.5 below.

We note that the domain S_κ is optimal, up to the choice of κ > 0. Indeed, as explained in Sect. 1.5 below, delocalization fails in the neighbourhood of the origin, owing to a proliferation of highly localized tuning fork states. Similarly, we expect delocalization to fail in the neighbourhoods of ±2, where the mass of the eigenvectors becomes concentrated on vertices x with normalized degrees α_x close to 2. The neighbourhoods of 0 and ±2 are also singled out as the regions where the self-consistent equation used to prove Theorem 1.8 (see Lemma 4.16) becomes unstable. This instability is directly related to the appearance of singularities in the spectral measure of the tree T_{dα_x, d} (see (4.11) and Fig. 8 for an illustration). The singularity near 0 occurs when α_x is close to 0, and the singularities near ±2 when α_x is close to 2. See Fig. 10 for a simulation that demonstrates numerically the failure of delocalization outside of S_κ.

Fig. 8.


An illustration of the probability measure μ_α for various values of α. For α > 2, μ_α has two atoms, which we draw using vertical lines. The measure μ_α is the semicircle law for α = 1, the arcsine law for α = 2, and the Kesten–McKay law with d = α/(α − 1) for 1 < α < 2. Note that the density of μ_α is bounded in S_κ, uniformly in α. The divergence of the density near 0 is caused by values of α close to 0, and the divergence of the density near ±2 by values of α close to 2

Fig. 10.


A scatter plot of (λ, ‖w‖_∞) for all eigenvalue-eigenvector pairs (λ, w) of the adjacency matrix A/√d of the critical Erdős–Rényi graph restricted to its giant component, where N = 10000 and d = 0.6 log N

Extension to general sparse random matrices

Our results, Theorems 1.2, 1.7, and 1.8, also hold for the following family of sparse Wigner matrices. Let A = (A_{xy}) be the adjacency matrix of G(N, d/N) as above and W = (W_{xy}) an independent Wigner matrix with bounded entries. That is, W is Hermitian and its upper triangular entries (W_{xy} : x ≤ y) are independent complex-valued random variables with mean zero and variance one, E|W_{xy}|² = 1, and |W_{xy}| ≤ K almost surely for some constant K. Then we define the sparse Wigner matrix M = (M_{xy}) as the Hadamard product of A and W, with entries M_{xy} := A_{xy} W_{xy}. Since the entries of M/√d are centred, it does not have a trivial eigenvalue like A/√d.
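A minimal construction sketch (ours, not the paper's code): with W chosen as a symmetric Bernoulli ±1 matrix, one admissible bounded Wigner matrix with K = 1, the sparse Wigner matrix M = A ∘ W and the normalized degrees (1.21) below can be generated as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
d = np.log(N)

# Adjacency matrix of G(N, d/N): symmetric, zero diagonal, Bernoulli(d/N) above it.
upper = np.triu(rng.random((N, N)) < d / N, k=1)
A = (upper + upper.T).astype(float)

# Bounded Wigner matrix: Hermitian, mean zero, variance one, |W_xy| <= 1.
signs = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), k=1)
W = signs + signs.T

M = A * W                          # entrywise (Hadamard) product
alpha = (M**2).sum(axis=1) / d     # normalized degrees as in (1.21)
```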

Theorem 1.11

Let M = (M_{xy})_{x,y∈[N]} be a sparse Wigner matrix. Define

α_x := (1/d) ∑_{y∈[N]} |M_{xy}|².  (1.21)

Theorems 1.2 and 1.8 hold with (1.21) if A is replaced with M, and Theorem 1.7 holds with (1.21) if λ_{i+1}(A/√d), λ_{N−i+1}(A/√d), and λ_i(A/√d) are replaced with λ_i(M/√d), λ_{N−i+1}(M/√d), and λ_i(M/√d), respectively. Here, the constants C depend on K in addition to ν and κ.

The modifications to the proofs of Theorems 1.2 and 1.7 required to establish Theorem 1.11 are minor and follow along the lines of [10, Section 10]. The modification to the proof of Theorem 1.8 is trivial, since the assumptions of the general Theorem 4.2 below include the sparse Wigner matrix M. We also remark that, with some extra work, one can relax the boundedness assumption on the entries of W, which we shall however not do here.

The limits of sparseness and the scale d ≍ √(log N)

We conclude this section with a discussion of how sparse G can be for our results to remain valid. We show that all of our results (Theorems 1.2, 1.7, and 1.8) fail below the regime (1.4), i.e. if d is smaller than order √(log N). Thus, our sparseness assumptions, the lower bounds on d from (1.10) and (1.18), are optimal (up to the factor log log N in (1.10) and the factor C in (1.18)). The fundamental reason for this change of behaviour will turn out to be that the ratio |S_2(x)|/|S_1(x)| concentrates if and only if d ≫ √(log N), where S_i(x) denotes the sphere in G of radius i around x. This can easily be made precise with a well-known tuning fork construction, detailed below.

In the critical and subcritical regime 1 ≪ d = O(log N), the graph G is in general not connected, but with probability 1 − o(1) it has a unique giant component G_giant with at least N(1 − e^{−d/4}) vertices (see Corollary A.15 below). Moreover, the spectrum of A/√d restricted to the complement of the giant component is contained in the O(√(log N)/d)-neighbourhood of the origin (see Corollary A.16 below). Since we always assume d ≥ C√(log N) and we only consider eigenvalues in R ∖ [−κ, κ], we conclude that all of our results listed above pertain only to the eigenvalues and eigenvectors of the giant component.

For D = 0, 1, 2, …, we introduce a star tuning fork of degree D rooted in G_giant, or D-tuning fork for short, which is obtained by taking two stars with central degree D and connecting their hubs to a common base vertex in G_giant. We refer to Fig. 5 for an illustration and to Definition A.17 below for a precise definition.

Fig. 5.


A star tuning fork of degree 12 rooted in a graph. The tuning fork is highlighted in blue. Its base is filled with red and its two hubs are filled with blue

It is not hard to see that every D-tuning fork gives rise to two eigenvalues ±√(D/d) of A/√d restricted to G_giant, whose associated eigenvectors are supported on the stars (see Lemma A.18 below). We denote by Σ := {±√(D/d) : a D-tuning fork exists} the spectrum of A/√d restricted to G_giant generated by the tuning forks. Any eigenvector associated with an eigenvalue ±√(D/d) ∈ Σ is localized on precisely 2D + 2 vertices. Thus, D-tuning forks provide a simple way of constructing localized states. Note that this is a very basic form of concentration of mass, supported at the periphery of the graph on special graph structures, and is unrelated to the much more subtle concentration in the semilocalized phase described in Sect. 1.2.
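The mechanism behind these localized states is elementary and can be verified directly (an illustrative computation, not the paper's Lemma A.18): for an isolated D-tuning fork, the combination of the two stars that is antisymmetric under swapping them vanishes at the base vertex and is an exact eigenvector of the adjacency matrix with eigenvalue ±√D, supported on 2D + 2 vertices.

```python
import numpy as np

# Isolated D-tuning fork: base vertex 0, hubs 1 and 2, D leaves per hub.
D = 5
n = 3 + 2 * D
A = np.zeros((n, n))
A[0, 1] = A[1, 0] = 1.0                       # first hub attached to the base
A[0, 2] = A[2, 0] = 1.0                       # second hub attached to the base
for i in range(D):
    A[1, 3 + i] = A[3 + i, 1] = 1.0           # leaves of the first star
    A[2, 3 + D + i] = A[3 + D + i, 2] = 1.0   # leaves of the second star

evals, evecs = np.linalg.eigh(A)
idx = int(np.argmin(np.abs(evals - np.sqrt(D))))
fork_vec = evecs[:, idx]   # eigenvector at eigenvalue sqrt(D); zero at the base
# After the normalization A/sqrt(d), these eigenvalues become +-sqrt(D/d).
```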

For d > 0 and D ∈ ℕ we now estimate the number of D-tuning forks in G(N, d/N), which we denote by F(d, D). The following result is proved in Appendix A.6.

Lemma 1.12

(Number of D-tuning forks). Suppose that 1 ≪ d = b log N = O(log N) and 0 ≤ D ≪ log N / log log N. Then F(d, D) = N^{1 − 2b − 2bD + o(1)} with probability 1 − o(1).
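The exponent in Lemma 1.12 can be anticipated by a heuristic first-moment computation (a sketch under simplifying assumptions, suppressing combinatorial factors; the actual proof is in Appendix A.6). With p = d/N and d = b log N, a D-tuning fork consists of a base vertex, two hubs, and 2D leaves, joined by 2D+2 edges, and its 2D+2 peripheral vertices must have no further edges:

```latex
\mathbb{E}\,F(d,D)
\;\approx\;
\underbrace{N^{2D+3}}_{\text{choices of vertices}}
\;\underbrace{p^{\,2D+2}}_{\text{edges of the fork}}
\;\underbrace{e^{-2d(D+1)(1+o(1))}}_{\text{no further edges at peripheral vertices}}
\;=\; N\, d^{\,2D+2}\, e^{-2d(D+1)(1+o(1))}
\;=\; N^{\,1-2b-2bD+o(1)},
```

since d^{2D+2} = N^{o(1)} in the stated range of D, and e^{−2d(D+1)} = N^{−2b(D+1)} = N^{−2b−2bD}.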

Defining D^* := log N/(2d) − 1, we immediately deduce the following result.

Corollary 1.13

For any constant ε > 0, with probability 1 − o(1) the following holds. If D^* ≤ −ε then Σ = ∅. If D^* ≥ ε then Σ = {±√(D/d) : D ∈ ℕ, D ≤ D^*(1 + o(1))}.

We deduce that if d ≤ (1/2 − ε) log N then Σ ≠ ∅, and hence the delocalization of all eigenvectors from Remark 1.9 fails. Hence, the lower bound (1.20) is optimal up to the value of C.

Similarly, for d ≍ log N the set Σ is in general nonempty, but we always have Σ ⊂ [−κ, κ] for any fixed κ > 0, so that the eigenvalues in Σ do not interfere with the statements of Theorems 1.2, 1.7, and 1.8. On the other hand, if d = √(log N / t) for constant t, we find that Σ is asymptotically dense in the interval [−√(t/2), √(t/2)]. Since the conclusions of Theorems 1.2, 1.7, and 1.8 are obviously wrong for any eigenvalue in Σ, they must all be wrong for large enough t. This shows that the lower bounds on d from (1.10) and (1.18) are optimal (up to the factor log log N in (1.10) and the factor C in (1.18)).

In fact, the emergence of the tuning fork eigenvalues of order one and the failure of all of our proofs have the same underlying root cause, which singles out d ≍ √(log N) as the scale below which the concentration of the ratio

|S_2(x)|/|S_1(x)| = d(1 + o(1))    (1.22)

fails for vertices x satisfying D_x ≍ d. Clearly, to have a D-tuning fork with D ≍ d, (1.22) has to fail at the hubs of the stars. Moreover, (1.22) enters our proofs of both the semilocalized and the delocalized phase in a crucial way. For the former, it is linked to the validity of the local approximation by the (D_x, d)-regular tree from Appendix A.2, which also underlies the construction of the localization profile vectors (see e.g. (3.35) below). For the latter, in the language of Definition 4.6 below, it is linked to the property that most neighbours of any vertex are typical (see Proposition 4.8 (ii) below).
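The threshold d ≍ √(log N) for (1.22) can be seen from a simple large-deviation heuristic (our illustration, with constants suppressed; not the paper's argument). Conditionally on |S_1(x)| = D_x, the sphere size |S_2(x)| is approximately a sum of D_x N independent Bernoulli(d/N) variables with mean ≈ dD_x, so a Chernoff bound gives

```latex
\mathbb{P}\Bigl( \bigl|\, |S_2(x)|/|S_1(x)| - d \,\bigr| \ge \varepsilon d \Bigr)
\;\lesssim\; e^{-c\,\varepsilon^2\, d\, D_x}.
```

This is at most N^{−ν} for every fixed ν precisely when d D_x ≫ log N; for the relevant vertices with D_x ≍ d this is the condition d ≫ √(log N).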

Basic Definitions and Overview of Proofs

In this preliminary section we introduce some basic notations and definitions that are used throughout the paper, and give an overview of the proofs of Theorems 1.2 (semilocalized phase) and 1.8 (delocalized phase). These proofs are unrelated and, thus, explained separately. For simplicity, in this overview we only consider qualitative error terms of the form o(1), although all of our estimates are in fact quantitative.

Basic definitions

We write ℕ = {0, 1, 2, …}. We set [n] := {1, …, n} for any n ∈ ℕ and [0] := ∅. We write |X| for the cardinality of a finite set X. We write 1_Ω for the indicator function of the event Ω.

Vectors in ℝ^N are denoted by boldface lowercase Latin letters like u, v and w. We use the notation v = (v_x)_{x∈[N]} ∈ ℝ^N for the entries of a vector. We denote by supp v := {x ∈ [N] : v_x ≠ 0} the support of a vector v. We denote by ⟨· , ·⟩ the Euclidean scalar product on ℝ^N and by ‖·‖ the induced Euclidean norm. For a matrix M ∈ ℝ^{N×N}, ‖M‖ is its operator norm induced by the Euclidean norm on ℝ^N. For any x ∈ [N], we define the standard basis vector 1_x := (δ_{xy})_{y∈[N]} ∈ ℝ^N. To any subset S ⊂ [N] we assign the vector 1_S ∈ ℝ^N given by 1_S := ∑_{x∈S} 1_x. In particular, 1_{{x}} = 1_x.

We use blackboard bold letters to denote graphs. Let H = (V(H), E(H)) be a (simple, undirected) graph on the vertex set V(H) = [N]. We often identify a graph H with its set of edges E(H). We denote by A^H ∈ {0,1}^{N×N} the adjacency matrix of H. For r ∈ ℕ and x ∈ [N], we denote by B_r^H(x) the closed ball of radius r around x in the graph H, i.e. the set of vertices at distance (with respect to H) at most r from the vertex x. We denote the sphere of radius r around the vertex x by S_r^H(x) := B_r^H(x) ∖ B_{r−1}^H(x). We denote by D_x^H the degree of the vertex x in the graph H. For any subset V ⊂ [N], we denote by H|_V the subgraph induced by H on V. If H is a subgraph of G then we denote by G∖H the graph on [N] with edge set E(G)∖E(H). In the above definitions, if the graph H is the Erdős–Rényi graph G, we systematically omit the superscript G.

The following notion of very high probability is a convenient shorthand used throughout the paper. It simplifies considerably the probabilistic statements of the kind that appear in Theorems 1.2, 1.7, and 1.8. It also introduces two special symbols, ν and C, which appear throughout the rest of the paper.

Definition 2.1

Let Ξ ≡ Ξ_{N,ν} be a family of events parametrized by N ∈ ℕ and ν > 0. We say that Ξ holds with very high probability if for every ν > 0 there exists C ≡ C_ν such that

P(Ξ_{N,ν}) ≥ 1 − C_ν N^{−ν}

for all N ∈ ℕ.

Convention 2.2

In statements that hold with very high probability, we use the special symbol C ≡ C_ν to denote a generic positive constant depending on ν such that the statement holds with probability at least 1 − C_ν N^{−ν} provided C_ν is chosen large enough. Thus, the bound |X| ≤ CY with very high probability means that, for each ν > 0, there is a constant C_ν > 0, depending on ν, such that

P(|X| ≤ C_ν Y) ≥ 1 − C_ν N^{−ν}

for all N ∈ ℕ. Here, X and Y are allowed to depend on N. We also write X = O(Y) to mean |X| ≤ CY.

We remark that the notion of very high probability from Definition 2.1 survives a union bound involving N^{O(1)} events. We shall tacitly use this fact throughout the paper. Moreover, throughout the paper, the constant C ≡ C_ν in the assumptions (1.10) and (1.18) is always assumed to be large enough.

Overview of proof in semilocalized phase

The starting point of the proof of Theorem 1.2 is the following simple observation. Suppose that M is a Hermitian matrix with eigenvalue λ and associated eigenvector w. Let Π be an orthogonal projection and write Π̄ := I − Π. If λ is not an eigenvalue of Π̄MΠ̄ then from (M − λ)w = 0 we deduce

Π̄w = −(Π̄MΠ̄ − λ)^{−1} Π̄MΠ w.    (2.1)

If Π is an eigenprojection of M whose range contains the eigenspace of λ (for instance Π = ww^* if λ is simple) then clearly both sides of (2.1) vanish. The basic idea of our proof is to apply an approximate version of this observation to M = A/√d, by choosing Π appropriately, and showing that the left-hand side of (2.1) is small by estimating the right-hand side.

In fact, we choose

Π := ∑_{x∈W_{λ,δ}} v(x) (v(x))^*,    (2.2)

where W_{λ,δ} is the set (1.7) of resonant vertices at energy λ, and v(x) is the exponentially decaying localization profile from Theorem 1.2. The proof then consists of two main ingredients:

  (a) ‖Π̄MΠ‖ = o(1);

  (b) Π̄MΠ̄ has a spectral gap around λ.

Informally, (a) states that Π is close to a spectral projection of M, as Π¯MΠ=[M,Π]Π quantifies the noncommutativity of M and Π on the range of Π. Similarly, (b) states that Π projects roughly onto an eigenspace of M of energies near λ. Plugging (a) and (b) into (2.1) yields an estimate on Π¯w from which Theorem 1.2 follows easily. Thus, the main work of the proof is to establish the properties (a) and (b) for the specific choice of Π from (2.2).

The construction of the localization profile v(x) uses the pruned graph Gτ from [10], a subgraph of G depending on a threshold τ > 1, which differs from G by only a small number of edges and whose balls of radius r around the vertices of V_τ := {x ∈ [N] : α_x ≥ τ} are disjoint (see Proposition 3.1 below). Now we define the vector v(x) := v_+^τ(x), where, for σ = ± and τ > 1,

v_σ^τ(x) := ∑_{i=0}^r σ^i u_i(x) 1_{S_i^{Gτ}(x)} / ‖1_{S_i^{Gτ}(x)}‖ ,   u_i(x) := √(α_x) (α_x − 1)^{−i/2} u_0   (1 ≤ i ≤ r).    (2.3)

The motivation behind this choice is explained in Appendix A.2: with high probability, the r-neighbourhood of x in Gτ looks roughly like that of the root of the infinite tree T_{D_x,d}, whose root has D_x children and all of whose other vertices have d children. The adjacency matrix of T_{D_x,d} has the exact eigenvalues ±√d Λ(α_x), with the corresponding eigenvectors given by (2.3) with Gτ replaced by T_{D_x,d}.
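The tree statement can be made concrete through the radial (tridiagonal) reduction of T_{D_x,d}: level 0 couples to level 1 with √D, deeper levels with √d, and the profile (2.3) is an exact eigenvector with eigenvalue √d Λ(α) away from the truncation depth (an illustrative sketch with arbitrary parameters, not the paper's code):

```python
import numpy as np

D, d, r = 24, 8, 30
alpha = D / d
Lam = alpha / np.sqrt(alpha - 1)        # Lambda(alpha)

# radial Jacobi matrix of the depth-r truncation of the tree T_{D,d}
Z = np.zeros((r + 1, r + 1))
Z[0, 1] = Z[1, 0] = np.sqrt(D)          # root has D children
for i in range(1, r):
    Z[i, i + 1] = Z[i + 1, i] = np.sqrt(d)   # every other vertex has d children

# the radial profile u_i = sqrt(alpha)(alpha-1)^{-i/2} u_0 with u_0 = 1
u = np.array([1.0] + [np.sqrt(alpha) * (alpha - 1) ** (-i / 2)
                      for i in range(1, r + 1)])

resid = Z @ u - np.sqrt(d) * Lam * u
print(np.max(np.abs(resid[:-1])))   # zero up to rounding, except at the cut
```

The only nonzero residual component is at depth r, where the truncation breaks the recursion; this is the boundary term controlled by choosing r large.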

The central idea of our proof is the introduction of a block diagonal approximation of the pruned graph. Define the orthogonal projections

Π^τ := ∑_{x∈V_{2+o(1)}} ∑_{σ=±} v_σ^τ(x) (v_σ^τ(x))^* ,   Π̄^τ := I − Π^τ.

The range of Π from (2.2) is a subspace of the range of Π^τ, i.e. ΠΠ^τ = Π. The interpretation of Π^τ is the orthogonal projection onto the span of all localization profiles around vertices x with normalized degree at least 2+o(1), which is precisely the set of vertices around which one can define an exponentially decaying localization profile. Now we define the block diagonal approximation of the pruned graph as

Ĥ^τ := ∑_{x∈V_{2+o(1)}} ∑_{σ=±} σΛ(α_x) v_σ^τ(x) (v_σ^τ(x))^* + Π̄^τ H^τ Π̄^τ ;    (2.4)

here we defined the centred and scaled adjacency matrix H^τ := A^{Gτ}/√d − E^τ, where E^τ is a suitably chosen matrix that is close to 𝔼A^G/√d and preserves the locality of A^{Gτ} in balls around the vertices of V_τ. In the subspace spanned by the localization profiles {v_σ^τ(x) : σ = ±, x ∈ V_{2+o(1)}}, Ĥ^τ is diagonal with eigenvalues σΛ(α_x). On the orthogonal complement, it is equal to H^τ. The off-diagonal blocks are zero. The main work of our proof consists in an analysis of Ĥ^τ.

In terms of Ĥ^τ, abbreviating H := (A^G − 𝔼A^G)/√d, the problem of showing (a) and (b) reduces to showing

  • (c) ‖H − Ĥ^τ‖ = o(1),

  • (d) ‖Π̄^τ H^τ Π̄^τ‖ ≤ 2 + o(1).

Indeed, ignoring minor issues pertaining to the centring 𝔼A^G, we replace M = A^G/√d with H in (a) and (b). Then (a) follows immediately from (c), since ‖Π̄HΠ‖ = ‖Π̄Ĥ^τΠ‖ + o(1) = o(1), as Π̄Ĥ^τΠ = 0 by the block structure of Ĥ^τ and the relation Π^τΠ = Π. To show (b), we note that the Π^τ-block of Ĥ^τ, namely Π^τĤ^τΠ^τ = ∑_{x∈V_{2+o(1)}} ∑_{σ=±} σΛ(α_x) v_σ^τ(x)(v_σ^τ(x))^*, trivially has a spectral gap: Π̄ Π^τĤ^τΠ^τ Π̄ has no eigenvalues in the δ-neighbourhood of λ, simply because the projection Π̄ removes the projections v_σ^τ(x)(v_σ^τ(x))^* with eigenvalues σΛ(α_x) in the δ-neighbourhood of λ. Moreover, the Π̄^τ-block also has such a spectral gap, by (d) and λ > 2 + o(1). Hence, by (c), we deduce the desired spectral gap (b).

Thus, what remains is the proof of (c) and (d). To prove (c), we prove ‖H − H^τ‖ = o(1) and ‖H^τ − Ĥ^τ‖ = o(1). The bound ‖H − H^τ‖ = o(1) follows from a detailed analysis of the graph G∖Gτ removed from G to obtain the pruned graph Gτ, which we decompose as a union of a graph of small maximal degree and a forest, to which standard estimates on adjacency matrices of graphs can be applied (see Lemma 3.8 below). To prove ‖H^τ − Ĥ^τ‖ = o(1), we first prove that v_σ^τ(x) is an approximate eigenvector of H^τ with approximate eigenvalue σΛ(α_x) (see Proposition 3.9 below). Then we deduce ‖H^τ − Ĥ^τ‖ = o(1) using that the balls B_{2r}(x), x ∈ V_{2+o(1)}, are disjoint, together with the locality of the operator H^τ (see Lemma 3.11 below). Thus we obtain (c).

Finally, we sketch the proof of (d). The starting point is an observation going back to [10, 15]: from an estimate on the spectral radius of the nonbacktracking matrix associated with H from [15] and an Ihara–Bass-type formula relating the spectra of H and its nonbacktracking matrix from [15], we obtain the quadratic form inequality |H| ≤ I + Q + o(1) with very high probability, where Q := diag(α_x : x ∈ [N]), |H| is the absolute value of the Hermitian matrix H, and o(1) is in the sense of operator norm (see Proposition 3.13 below). Using (c), we deduce the inequality

|Ĥ^τ| ≤ I + Q + o(1).    (2.5)

To estimate ‖Π̄^τH^τΠ̄^τ‖, we take a normalized eigenvector w of Π̄^τH^τΠ̄^τ with maximal eigenvalue λ > 0. Thus, w ⊥ v_±^τ(x) for all x ∈ V_{2+o(1)}. We estimate ‖Π̄^τH^τΠ̄^τ‖ from above (an analogous argument yields an estimate from below) using (2.5) to get

λ ≤ 1 + o(1) + ∑_x α_x w_x² ≤ 1 + τ + o(1) + (max_x α_x) ∑_{x∈V_τ} w_x².    (2.6)

Choosing τ = 1 + o(1), we see that (d) follows provided that we can show that

∑_{x∈V_τ} w_x² = o(1/log N),    (2.7)

since max_x α_x ≤ C log N with very high probability.

The estimate (2.7) is a delocalization bound, in the vertex set V_τ, for any eigenvector w of Ĥ^τ that is orthogonal to v_±^τ(x) for all x ∈ V_{2+o(1)} and whose associated eigenvalue is larger than 2√τ + o(1). It crucially relies on the assumption that w ⊥ v_±^τ(x) for all x ∈ V_{2+o(1)}, without which it is false (see Proposition 3.14 below). The underlying principle behind its proof is the same as that of the Combes–Thomas estimate [25]: the Green function ((λ − Z)^{−1})_{ij} of a local operator Z at a spectral parameter λ separated from the spectrum of Z decays exponentially in the distance between i and j, at a rate inversely proportional to the distance from λ to the spectrum of Z. We in fact use a radial form of the Combes–Thomas estimate, where Z is the tridiagonalization of a local restriction of Ĥ^τ around a vertex x ∈ V_τ (see Appendix A.2) and i, j index radii of concentric spheres. The key observation is that, by the orthogonality assumption on w, the Green function ((λ − Z)^{−1})_{ir}, 0 ≤ i < r, and the eigenvector components u_i, 0 ≤ i < r, in the radial basis satisfy the same linear difference equation. Thus we obtain exponential decay for the components u_i, which yields u_0² ≤ o(1/log N) ∑_{i=0}^r u_i². Going back to the original vertex basis, this implies that w_x² ≤ o(1/log N) ‖w|_{B_{2r}^{Gτ}(x)}‖² for all x ∈ V_τ, from which (2.7) follows since the balls B_{2r}^{Gτ}(x), x ∈ V_τ, are disjoint.
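The Combes–Thomas mechanism is easy to observe numerically in a toy model (our illustration; the tridiagonal matrix below is a free one-dimensional hopping operator, not the operator Z from the proof). Its spectrum lies in (−2, 2), and for λ outside the spectrum the Green function row ((λ − Z)^{−1})_{0j} decays geometrically, faster for larger dist(λ, spec Z):

```python
import numpy as np

n = 60
Z = np.zeros((n, n))
for i in range(n - 1):
    Z[i, i + 1] = Z[i + 1, i] = 1.0     # spectrum contained in (-2, 2)

decays = []
for lam in (2.5, 3.0, 4.0):
    G = np.linalg.inv(lam * np.eye(n) - Z)
    # successive ratios along the first row: bounded by a constant < 1
    ratios = np.abs(G[0, 1:20] / G[0, :19])
    decays.append(ratios.max())
print(decays)   # each < 1, and decreasing as lam moves away from the spectrum
```

For this model the decay rate is θ = (λ − √(λ²−4))/2, the small root of θ + 1/θ = λ, mirroring the rate "inversely proportional to the distance from λ to the spectrum" in the text.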

Overview of proof in delocalized phase

The delocalization result of Theorem 1.8 is an immediate consequence of a local law for the matrix A/√d, which controls the entries of the Green function

G ≡ G(z) := (A/√d − z)^{−1}

in the form of high-probability estimates, for spectral scales Im z down to the optimal scale 1/N, which is the typical eigenvalue spacing. Such a local law was first established for d ≥ (log N)^6 in [35] and extended down to d ≥ C log N in [42]. In both of these works, the diagonal entries of G are close to the Stieltjes transform of the semicircle law. In contrast, in the regime (1.4) the diagonal entry G_{xx} is close to the Stieltjes transform of the spectral measure at the root of an infinite (D_x, d)-regular tree. Hence, G_{xx} does not concentrate around a deterministic quantity.

The basic approach of the proof is the same as for any local law: derive an approximate self-consistent equation with very high probability, solve it using a stability analysis, and perform a bootstrapping from large to small values of Im z. For a set T ⊂ [N] denote by A^{(T)} the adjacency matrix of the graph G where the vertices of T (and all incident edges) have been removed, and denote by G^{(T)} := (A^{(T)}/√d − z)^{−1} the associated Green function. In order to understand the emergence of the self-consistent equation, it is instructive to consider the toy situation where, for a given vertex x, all neighbours in S_1(x) lie in different connected components of A^{(x)}. This is for instance the case if G is a tree. On the global scale, where Im z is large enough, this assumption is in fact valid to a good approximation, since the neighbourhood of x is with high probability a tree. Then a simple application of Schur's complement formula and the resolvent identity yields

1/G_{xx} = −z − (1/d) ∑_{y∈S_1(x)} G_{yy}^{(x)} ,   G_{yy}^{(x)} − G_{yy} = −(G_{yy}^{(x)})² G_{xx}/d.    (2.8)

Thus, on the global scale, using that G is bounded, we obtain the self-consistent equation

1/G_{xx} = −z − (1/d) ∑_{y∈S_1(x)} G_{yy} + o(1)    (2.9)

with very high probability.
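Both identities of (2.8) can be checked exactly in the toy situation just described, e.g. on a star graph, where the neighbours of the centre x are indeed in different components of A^{(x)} (a numerical sanity check with arbitrary parameters, not from the paper):

```python
import numpy as np

d, k = 5, 4                        # k = number of neighbours of x
n = k + 1
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1            # star: vertex x = 0 joined to 1..k
z = 0.3 + 0.7j

M = A / np.sqrt(d)
G = np.linalg.inv(M - z * np.eye(n))
Gx = np.linalg.inv(M[1:, 1:] - z * np.eye(n - 1))   # Green function of A^(x)

# first identity of (2.8): Schur's complement formula at x
lhs1, rhs1 = 1 / G[0, 0], -z - np.trace(Gx) / d
# second identity of (2.8), for the neighbour y = 1 (index 0 after removal)
lhs2 = Gx[0, 0] - G[1, 1]
rhs2 = -(Gx[0, 0] ** 2) * G[0, 0] / d
print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))   # both zero up to rounding
```

On a general graph the first identity acquires off-diagonal terms G^{(x)}_{yy'}, which vanish here because the leaves are disconnected once x is removed.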

It is instructive to solve the self-consistent equation (2.9) in the family (G_{xx})_{x∈[N]} on the global scale. To that end, we introduce the notion of typical vertices, which is roughly the set 𝒯 = {x ∈ [N] : α_x = 1 + o(1)}. (In fact, as explained below, the actual definition for local scales has to be different; see (2.12) below.) A simple argument shows that with very high probability most neighbours of any vertex are typical. With this definition, we can try to solve (2.9) on the global scale as follows. From the boundedness of G we obtain a self-consistent equation for the vector (G_{xx})_{x∈𝒯} that reads

1/G_{xx} = −z − ∑_{y∈𝒯} (1/d) A_{xy} G_{yy} + ζ_x ,   ζ_x = o(1).    (2.10)

It is not hard to see that the equation (2.10) has a unique solution, which satisfies G_{xx} = m + o(1) for all x ∈ 𝒯. Here m is the Stieltjes transform of the semicircle law, which satisfies m = 1/(−z − m). Plugging this solution back into (2.9) and using that most neighbours of any vertex are typical shows that for x ∉ 𝒯 we have G_{xx} = m_{α_x} + o(1), where m_α := 1/(−z − αm). One readily finds (see Appendix A.2 below) that m_{α_x} is the Stieltjes transform of the spectral measure of the infinite (D_x, d)-regular tree at the root.

The first main difficulty of the proof is to provide a derivation of identities of the form (2.8) (and hence a self-consistent equation of the form (2.9)) on the local scale Im z ≪ 1. We emphasize that the above derivation of (2.8) is completely wrong on the local scale. Unlike on the global scale, on the local scale the behaviour of the Green function is not governed by the local geometry of the graph, and long cycles contribute to G in an essential way. In particular, eigenvector delocalization, which follows from the local law, is a global property of the graph and cannot be addressed using local arguments; it is in fact wrong outside of the region S_κ, although the above derivation is insensitive to the real part of z.

We address this difficulty by replacing the identities (2.8) with the following argument, which ultimately provides an a posteriori justification of approximate versions of (2.8) with very high probability, provided we are in the region Sκ. We make an a priori assumption that the entries of G are bounded with very high probability; we propagate this assumption from large to small scales using a standard bootstrapping argument and the uniform boundedness of the density of the spectral measure associated with mα. It is precisely this uniform boundedness requirement that imposes the restriction to Sκ in our local law (as explained in Remark 1.10, this restriction is necessary). The key tool that replaces the simpleminded approximation (2.8) is a series of large deviation estimates for sparse random vectors proved in [42], which, as it turns out, are effective for the full optimal regime (1.4). Thus, under the bootstrapping assumption that the entries of G are bounded, we obtain (2.8) (and hence also (2.9)), with some additional error terms, with very high probability.

The second main difficulty of the proof is that, on the local scale and for sparse graphs, the self-consistent equation (2.10), which can be derived from (2.9) as explained above, is not stable enough to be solved for (G_{xx})_{x∈𝒯}. This problem stems from the sparseness of the graphs that we are considering, and does not appear in random matrix theory for denser (or even heavy-tailed) matrices. Indeed, the stability estimates for (2.10) carry a logarithmic factor, which is usually of no concern in random matrix theory but is deadly in the sparse regime of this paper. This is a major obstacle and in fact ultimately dooms the self-consistent equation (2.10). To explain the issue, write the sum in (2.10) as ∑_y S_{xy}G_{yy}, where S is the 𝒯 × 𝒯 matrix with entries S_{xy} = (1/d)A_{xy}. Writing G_{xx} = m + ε_x, plugging this into (2.10), and expanding to first order in ε_x, we obtain, using the definition of m, that ε_x = −m²((I − m²S)^{−1}ζ)_x. Thus, in order to deduce the smallness of ε_x from the smallness of ζ_x, we need an estimate on the norm ‖(I − m²S)^{−1}‖. In Appendix A.10 below we show that for typical S, Re z ∈ S_κ, and small enough Im z, we have

log N / (C (log log N)²) ≤ ‖(I − m²S)^{−1}‖ ≤ C_κ log N    (2.11)

for some universal constant C and some constant C_κ depending on κ. In our context, ζ_x is small but much larger than the reciprocal of the lower bound of (2.11), so such a logarithmic factor is not affordable.

To address this difficulty, we avoid passing through the form (2.10) altogether, as it is doomed by (2.11). The underlying cause of the instability of (2.10) is the inhomogeneous local structure of the matrix S, which is a multiple of the adjacency matrix of a sparse graph. Thus, the solution is to derive a self-consistent equation of the form (2.10) but with an unstructured S, which has constant entries. The basic intuition is to replace the local average (1/d)∑_{y∈S_1(x)} G_{yy}^{(x)} in the first identity of (2.8) with the global average (1/N)∑_{y≠x} G_{yy}^{(x)}. Of course, in general these two are not close, but we can include their closeness in the definition of a typical vertex. Thus, we define the set of typical vertices as

𝒯 := {x ∈ [N] : α_x = 1 + o(1),  (1/d)∑_{y∈S_1(x)} G_{yy}^{(x)} = (1/N)∑_{y≠x} G_{yy}^{(x)} + o(1)}.    (2.12)

The main work of the proof is then to prove the following facts with very high probability.

  (a) Most vertices are typical.

  (b) Most neighbours of any vertex are typical.

With (a) and (b) at hand, we explain how to conclude the proof. Using (a) and the approximate version of (2.8) established above, we deduce the self-consistent equation for typical vertices,

1/G_{xx} = −z − (1/|𝒯|) ∑_{y∈𝒯} G_{yy} + o(1),   x ∈ 𝒯,

which, unlike (2.10), is stable (see Lemma 4.19 below) and can easily be solved to show that G_{xx} = m + o(1) = m_{α_x} + o(1) for all x ∈ 𝒯. Moreover, if x ∉ 𝒯 then we obtain from (2.8) and (b) that

1/G_{xx} = −z − (1/d) ∑_{y∈S_1(x)∩𝒯} G_{yy}^{(x)} + o(1) = −z − α_x m + o(1),

where we used that G_{yy} = m + o(1) for y ∈ 𝒯. This shows that G_{xx} = m_{α_x} + o(1) for all x ∈ [N] with very high probability, and hence concludes the proof.

What remains, therefore, is the proof of (a) and (b); see Proposition 4.8 below for a precise statement. Using the bootstrapping assumption of boundedness of the entries of G, it is not hard to estimate the probability P(x ∈ 𝒯), which we prove to be 1 − o(1), although the event {x ∈ 𝒯} does not hold with very high probability (this characterizes the critical and subcritical regimes). Now if the events {x ∈ 𝒯}, x ∈ [N], were all independent, it would be a simple matter to deduce (a) and (b).

The most troublesome source of dependence among the events {x ∈ 𝒯}, x ∈ [N], is the Green function G^{(x)}_{yy} in the definition of 𝒯. Thus, the main difficulty of the proof is a decoupling argument that allows us to obtain good decay of the probability P(T ∩ 𝒯 = ∅) in the size of the set T ⊂ [N]. This decay can only work up to a threshold in the size of T, beyond which the correlations among the different events kick in. In fact, we essentially prove that

P(T ∩ 𝒯 = ∅) ≤ e^{−o(1) d |T|} + C N^{−ν}   for |T| = o(d);    (2.13)

see Lemma 4.12. Choosing T as large as possible, |T| = o(d), we find that the first term on the right-hand side of (2.13) is bounded by N^{−ν} provided that o(1) d² ≥ ν log N, which corresponds precisely to the optimal lower bound in (1.18). Using (2.13), we may deduce (a) and (b).

To prove (2.13), we need to decouple the events {x ∈ 𝒯}, x ∈ T. We do so by replacing the Green functions G^{(x)} in the definition of 𝒯 by G^{(T)}, after which the corresponding events are essentially independent. The error that we incur depends on the difference G^{(T)}_{yy} − G_{yy}, which we have to show is small with very high probability under the bootstrapping assumption that the entries of G are bounded. For T of fixed size, this follows easily from standard resolvent identities. However, for our purposes it is crucial that T can have size up to o(d), which requires a more careful quantitative analysis. As it turns out, G^{(T)}_{yy} − G_{yy} is small only up to |T| = o(d), which is precisely what we need to reach the optimal scale d ≍ √(log N) from (1.4).

The Semilocalized Phase

In this section we prove the results of Sect. 1.2, namely Theorems 1.2 and 1.7.

The pruned graph and proof of Theorem 1.2

The balls (B_r(x))_{x∈W_{λ,δ}} in Theorem 1.2 are in general not disjoint. For its proof, and in order to give a precise definition of the vector v(x) in Theorem 1.2, we need to make these balls disjoint by pruning the graph G. This is an important ingredient of the proof, and it will also allow us to state a more precise version of Theorem 1.2, namely Theorem 3.4 below. This pruning was previously introduced in [10]; it is performed by cutting edges from G in such a way that the balls (B_r(x))_{x∈W_{λ,δ}} are disjoint for appropriate radii r (in fact up to radius 2r), by carefully cutting in the right places, thus reducing the number of cut edges. This ensures that the pruned graph is close to the original graph in an appropriate sense. The pruned graph, Gτ, depends on a parameter τ > 1, and its construction is the subject of the following proposition.

To state it, we introduce the following notations. For a subgraph Gτ of G we abbreviate

B_i^τ(x) := B_i^{Gτ}(x) ,   S_i^τ(x) := S_i^{Gτ}(x).

Moreover, we define the set of vertices with large degrees

V_τ := {x ∈ [N] : α_x ≥ τ}.

Proposition 3.1

(Existence of pruned graph). Let 1 + ξ^{1/2} ≤ τ ≤ 2 and d ≤ 3 log N. There exists a subgraph Gτ of G with the following properties.

  • (i) Any path in Gτ connecting two different vertices in V_τ has length at least 4r+1. In particular, the balls (B_{2r}^τ(x))_{x∈V_τ} are disjoint.

  • (ii) The induced subgraph Gτ|_{B_{2r}^τ(x)} is a tree for each x ∈ V_τ.

  • (iii) For each edge in G∖Gτ, there is at least one vertex in V_τ incident to it.

  • (iv) For each x ∈ V_τ and each i ∈ ℕ satisfying 1 ≤ i ≤ 2r we have S_i^τ(x) ⊂ S_i(x).

  • (v) The degrees induced on [N] by G∖Gτ are bounded according to

    max_{x∈[N]} D_x^{G∖Gτ} ≤ C log N / ((τ−1)² d)    (3.1)

    with very high probability.

  • (vi) Suppose that √(log N) ≤ d. For each x ∈ V_τ and all 2 ≤ i ≤ 2r, the bound

    |S_i(x) ∖ S_i^τ(x)| ≤ (C log N / (τ−1)²) d^{i−2}    (3.2)

    holds with very high probability.

The proof of Proposition 3.1 is postponed to the end of this section, in Sect. 3.5 below. It is essentially [10, Lemma 7.2], the main difference being that (vi) is considerably sharper than its counterpart, [10, Lemma 7.2 (vii)]; this stronger bound is essential to cover the full optimal regime (1.4) (see Sect. 1.5). As a guide for the reader’s intuition, we recall the main idea of the pruning. First, for every xVτ, we make the 2r-neighbourhood of x a tree by removing appropriate edges incident to x. Second, we take all paths of length less than 4r+1 connecting different vertices in Vτ, and remove all of their edges incident to any vertex in Vτ. Note that only edges incident to vertices in Vτ are removed. This informal description already explains properties (i)–(iv). Properties (v) and (vi) are probabilistic in nature, and express that with very high probability the pruning has a small impact on the graph. See also Lemma 3.8 below for a statement in terms of operator norms of the adjacency matrices. For the detailed algorithm, we refer to the proof of [10, Lemma 7.2].

Using the pruned graph Gτ, we can give a more precise formulation of Theorem 1.2, where the localization profile vector v(x) from Theorem 1.2 is explicit. For its statement, we introduce the set of vertices

𝒱 := V_{2+ξ^{1/4}}    (3.3)

around which a localization profile can be defined.

Definition 3.2

(Localization profile). Let 1 + ξ^{1/2} ≤ τ ≤ 2 and let Gτ be the pruned graph from Proposition 3.1. For x ∈ 𝒱 we introduce positive weights u_0(x), u_1(x), …, u_r(x) as follows. Set u_0(x) > 0 and define, for i = 1, …, r−1,

u_i(x) := √(α_x) (α_x − 1)^{−i/2} u_0(x) ,   u_r(x) := (α_x − 1)^{−(r−1)/2} u_0(x).    (3.4)

For σ = ± we define the radial vector

v_σ^τ(x) := ∑_{i=0}^r σ^i u_i(x) 1_{S_i^τ(x)} / ‖1_{S_i^τ(x)}‖ ,    (3.5)

and choose u_0(x) > 0 such that v_σ^τ(x) is normalized.

Remark 3.3

The family (v_σ^τ(x) : x ∈ 𝒱, σ = ±) is orthonormal. Indeed, if x, y ∈ 𝒱 are distinct, then by Proposition 3.1 (i) the vectors v_σ^τ(x) and v_{σ̃}^τ(y) are orthogonal for any σ, σ̃ = ± because they are supported on disjoint sets of vertices. Moreover, v_+^τ(x) and v_−^τ(x) are orthogonal by the choice of u_r(x) in (3.4), as can be seen by a simple computation.
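The simple computation behind the orthogonality of v_+^τ(x) and v_−^τ(x) is that the alternating sum of the squared weights (3.4) telescopes to zero exactly, which can be checked numerically (a quick sanity check with arbitrary α and r; the overall normalization u_0(x) is irrelevant here):

```python
import numpy as np

alpha, r = 3.7, 12
u = np.empty(r + 1)
u[0] = 1.0                                        # u_0 set to 1 (scale-free check)
for i in range(1, r):
    u[i] = np.sqrt(alpha) * (alpha - 1) ** (-i / 2)
u[r] = (alpha - 1) ** (-(r - 1) / 2)              # the special last weight of (3.4)

# <v_+, v_-> reduces to the alternating sum of the u_i^2
inner = sum((-1) ** i * u[i] ** 2 for i in range(r + 1))
print(inner)   # zero up to rounding, for every r
```

With the geometric weights alone the alternating sum would not vanish; it is precisely the modified u_r(x) that closes the telescope.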

The following result restates Theorem 1.2 by identifying the vector v(x) there as the vector v_+^τ(x) given in (3.5). It easily implies Theorem 1.2, and the rest of this section is devoted to its proof.

Theorem 3.4

The following holds with very high probability. Suppose that d satisfies (1.10). Let w be a normalized eigenvector of A/√d with nontrivial eigenvalue λ ≥ 2 + Cξ^{1/2}. Choose 0 < δ ≤ (λ−2)/2 and set τ := 1 + ((λ−2)/8 ∧ 1). Then

‖w − ∑_{x∈W_{λ,δ}} ⟨v_+^τ(x), w⟩ v_+^τ(x)‖ ≤ C (ξ + ξ_{τ−1}) / δ.    (3.6)

Remark 3.5

An analogous result holds for negative eigenvalues -λ, where λ is as in Theorem 3.4 and v+τ(x) in (3.6) is replaced with v-τ(x).

For the motivation behind Definition 3.2, we refer to the discussion in Sect. 2.2 and Appendix A.2. As explained there, if Gτ is sufficiently close to the infinite tree T_{D_x,d} in a ball of radius r around x, and if r is large enough for u_r(x) to be very small, we expect (3.5) to be an approximate eigenvector of A. This will in fact turn out to be true; see Proposition 3.9 below. That r is in fact large enough is easy to see: the definition of r in (1.8) and the bound ξ ≥ 1/d imply that, for α_x ≥ 2 + C(log d)²/log N, we have

(α_x − 1)^{−(r−2)/2} ≤ ξ.    (3.7)

This means that the last element of the sequence (u_i(x))_{i=0}^r is bounded by ξ. Note that the lower bound on α_x imposed above always holds for x ∈ 𝒱, since, by (1.10),

C (log d)² / log N ≤ ξ^{1/4}.    (3.8)

As a guide to the reader, in Fig. 6, we summarize the three main sets of vertices that are used in the proof of Theorem 3.4. We conclude this subsection by proving Theorem 1.2 and Corollary 1.6 using Theorem 3.4.

Fig. 6

An illustration of the three sets of vertices of increasing size that enter the proof of Theorem 3.4. Each vertex x is plotted as a dot at its normalized degree α_x. The largest set is V_τ from Proposition 3.1, where 1 + ξ^{1/2} ≤ τ ≤ 2. It is used to define the pruned graph Gτ. The intermediate set is 𝒱 ≡ V_{2+ξ^{1/4}} from (3.3). It is the set of vertices for which we can define the localization profile vector v(x) that decays exponentially around x. The smallest set W_{λ,δ} = {x : Λ(α_x) ∈ [λ−δ, λ+δ]} is the set of resonant vertices at energy λ

Proof of Theorem 1.2

The first claim follows immediately from Theorem 3.4, with v(x) = v_+^τ(x). To verify the claim about the exponential decay of v(x), we note that the graph distance in G is bounded by the graph distance in Gτ, which implies that, for 0 ≤ i < r,

∑_{y∈B_i(x)^c} (v_+^τ(x))_y² ≤ ∑_{y∈B_i^τ(x)^c} (v_+^τ(x))_y² = ∑_{j=i+1}^r u_j(x)²,

from which the claim easily follows using the definition (3.4).

Proof of Corollary 1.6

We decompose w = ∑_{x∈W_{λ,δ}} γ_x v_+^τ(x) + e, where γ_x := ⟨v_+^τ(x), w⟩ and e is orthogonal to Span{v_+^τ(x) : x ∈ W_{λ,δ}}. By Theorem 3.4 we have ‖e‖ ≤ C(ξ + ξ_{τ−1})/δ and

∑_{x∈W_{λ,δ}} γ_x² ≥ 1 − C (ξ + ξ_{τ−1}) / δ.    (3.9)

Moreover, since λ − δ ≥ 2 ≥ τ, we have W_{λ,δ} ⊂ V_τ, so that Proposition 3.1 (i) implies (v_+^τ(x))_y = δ_{xy} u_0(x) for x, y ∈ W_{λ,δ}. Thus we have

∑_{y∈W_{λ,δ}} w_y² = ‖w|_{W_{λ,δ}}‖² = ‖∑_{x∈W_{λ,δ}} γ_x v_+^τ(x)|_{W_{λ,δ}}‖² + O(‖e‖) = ∑_{y∈W_{λ,δ}} γ_y² u_0(y)² + O((ξ + ξ_{τ−1})/δ).    (3.10)

Since u_0(y) was chosen such that v_+^τ(y) is normalized, we find

u_0(y)² = (1 + ∑_{i=1}^{r−1} α_y/(α_y−1)^i + (α_y−1)^{−(r−1)})^{−1} = (α_y−2)/(2(α_y−1)) + O((α_y−1)^{−(r−1)}).

Define α := Λ^{−1}(λ), where Λ^{−1} denotes the inverse of Λ on [2,∞). Since |Λ(α_y) − λ| ≤ δ for y ∈ W_{λ,δ}, we obtain

|α_y − α| ≤ δ max_{t∈[λ−δ, λ+δ]} (Λ^{−1})′(t) = O(δ λ^{3/2} (λ−2)^{−1/2}),

where we used that λ ± δ − 2 ≍ λ − 2. Since (d/dα) (α−2)/(2(α−1)) = 1/(2(α−1)²) ≍ λ^{−4}, we find

u_0(y)² = (α−2)/(2(α−1)) + O(δ λ^{−5/2}(λ−2)^{−1/2} + (α_y−1)^{−(r−1)}) = (α−2)/(2(α−1)) + O(δ λ^{−5/2}(λ−2)^{−1/2} + ξ/δ),    (3.11)

where we used (3.7) and the upper bound on δ in the last step. By an elementary computation,

(α−2)/(2(α−1)) = √(λ²−4) / (λ + √(λ²−4)),

and the claim hence follows by recalling (3.7) and plugging (3.9) and (3.11) into (3.10).
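The elementary identity invoked at the end of the proof can be verified numerically, with λ = Λ(α) = α/√(α−1) (arbitrary test values of α > 2; not part of the paper):

```python
import numpy as np

checks = []
for alpha in (2.3, 3.0, 5.5, 20.0):
    lam = alpha / np.sqrt(alpha - 1)      # lam = Lambda(alpha) >= 2
    lhs = (alpha - 2) / (2 * (alpha - 1))
    s = np.sqrt(lam**2 - 4)               # lam^2 - 4 = (alpha-2)^2/(alpha-1) >= 0
    rhs = s / (lam + s)
    checks.append(abs(lhs - rhs))
print(max(checks))   # zero up to rounding
```

The identity follows from λ² − 4 = (α−2)²/(α−1), so that √(λ²−4) = (α−2)/√(α−1) and λ + √(λ²−4) = 2(α−1)/√(α−1).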

Block diagonal approximation of pruned graph and proof of Theorems 3.4 and 1.7

We now introduce the adjacency matrix of Gτ and a suitably defined centred version of it. Then we define a block diagonal approximation of this matrix, called Ĥ^τ in (3.16) below, which is the central construction of our proof.

Definition 3.6

Let A^τ be the adjacency matrix of Gτ. Let H := A̲/√d and H^τ := A̲^τ/√d, where

A̲ := A − 𝔼A ,   A̲^τ := A^τ − χ^τ (𝔼A) χ^τ    (3.12)

and χ^τ is the orthogonal projection onto Span{1_y : y ∉ ⋃_{x∈V_τ} B_{2r}^τ(x)}.

The definition of A̲^τ is chosen so that (i) A̲^τ is close to A̲ provided that A^τ is close to A, since the kernel of χ^τ has a relatively low dimension, and (ii) when restricted to vertices at distance at most 2r from V_τ, the matrix A̲^τ coincides with A^τ. In fact, property (i) is made precise by the simple estimate

‖𝔼A − χ^τ (𝔼A) χ^τ‖ ≤ 2    (3.13)

with very high probability (see [10, Eq. (8.17)] for details). Property (ii) means that A̲^τ inherits the locality of the matrix A, meaning that applying A̲^τ to a vector supported in a small enough neighbourhood of V_τ yields again a vector localized in space. This property will play a crucial role in the proof, and it can be formalized as follows.

Remark 3.7

Let i + j ≤ 2r. Then for any x ∈ V_τ and any vector v we have

supp v ⊂ B_i^τ(x)  ⟹  supp[(H^τ)^j v] ⊂ B_{i+j}^τ(x).

The next result states that H^τ is a small perturbation of H.

Lemma 3.8

Suppose that d ≤ 3 log N. For any 1 + ξ^{1/2} ≤ τ ≤ 2 we have ‖H − H^τ‖ ≤ C ξ_{τ−1} with very high probability.

The next result states that v_σ^τ(x) is an approximate eigenvector of H^τ.

Proposition 3.9

Let d satisfy (1.10). Let x ∈ [N] and suppose that 1 + ξ^{1/2} ≤ τ ≤ 2. If α_x ≥ 2 + C(log d)²/log N then for σ = ± we have

‖(H^τ − σΛ(α_x)) v_σ^τ(x)‖ ≤ C ξ    (3.14)

with very high probability.

The proofs of Lemma 3.8 and Proposition 3.9 are deferred to Sect. 3.3. The following object is the central construction in our proof.

Definition 3.10

(Block diagonal approximation of pruned graph) Define the orthogonal projections

Πτ:=xVσ=±vστ(x)vστ(x),Π¯τ:=I-Πτ, 3.15

and the matrix

H^τ:=xVσ=±σΛ(αx)vστ(x)vστ(x)+Π¯τHτΠ¯τ. 3.16

That Πτ and Π¯τ are indeed orthogonal projections follows from Remark 3.3. Note that H^τ may be interpreted as a block diagonal approximation of Hτ. Indeed, completing the orthonormal family (vστ(x))xV,σ=± to an orthonormal basis of RN, which we write as the columns of the orthogonal matrix R, we have

R⊤H^τR = [[diag((σΛ(αx))x∈V,σ=±), 0], [0, ∗]], where ∗ denotes the remaining block.
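The block structure is easy to verify numerically. The sketch below builds Ĥτ from a hypothetical symmetric matrix and an orthonormal family (stand-ins for Hτ and for the vectors vστ(x); all names, dimensions, and eigenvalues are illustrative assumptions) and checks that the prescribed vectors are exact eigenvectors and that the two blocks do not couple:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 12, 3   # ambient dimension and number of localized vectors (illustrative)

Htau = rng.standard_normal((N, N)); Htau = (Htau + Htau.T) / 2  # stand-in for Htau
V = np.linalg.qr(rng.standard_normal((N, k)))[0]  # orthonormal columns, stand-ins for v's
lams = np.array([3.0, -3.0, 2.5])                 # prescribed eigenvalues

Pi = V @ V.T                # orthogonal projection onto span of the columns of V
Pibar = np.eye(N) - Pi

# Block diagonal approximation in the spirit of (3.16):
Hhat = sum(lams[i] * np.outer(V[:, i], V[:, i]) for i in range(k)) + Pibar @ Htau @ Pibar

# Each prescribed vector is an exact eigenvector, and the two ranges do not couple:
for i in range(k):
    assert np.allclose(Hhat @ V[:, i], lams[i] * V[:, i])
assert np.allclose(Pi @ Hhat @ Pibar, 0)
```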

The following estimate states that H^τ is a small perturbation of Hτ.

Lemma 3.11

Let d satisfy (1.10). If 1+ξ1/2 ≤ τ ≤ 2 then ‖Hτ−H^τ‖ ≤ Cξ with very high probability.

The proof of Lemma 3.11 is deferred to Sect. 3.3. The following result is the key estimate of our proof; it states that on the range of Π¯τ the matrix Hτ is bounded by 2τ+o(1).

Proposition 3.12

Let d satisfy (1.10). If 1+ξ1/2 ≤ τ ≤ 2 then ‖Π¯τHτΠ¯τ‖ ≤ 2τ+C(ξ+ξτ-1) with very high probability.

The proof of Proposition 3.12 is deferred to Sect. 3.4. We now use Lemma 3.11 and Proposition 3.12 to conclude Theorems 3.4 and 1.7.

Proof of Theorem 3.4

Define the orthogonal projections

Πλ,δτ := Σx∈Wλ,δ v+τ(x)v+τ(x), Π¯λ,δτ := I−Πλ,δτ.

By definition, the orthogonal projections Πτ and Πλ,δτ commute. Moreover, under the assumptions of Theorem 3.4 we have the inclusion property

ΠτΠλ,δτ=Πλ,δτ. 3.17

See also Fig. 6. To show (3.17), we note that the condition on δ and the lower bound on λ in Theorem 3.4 imply λ−δ ≥ 2+Cξ1/2. Using Λ(2+x)−2 ≍ x²∧x1/2 for x ≥ 0, we conclude that for any α ≥ 2 we have the implication Λ(α) ≥ λ−δ ⟹ α ≥ 2+ξ1/4, which implies (3.17).
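The asymptotics of Λ near 2 used in this step can be checked numerically, assuming the explicit formula Λ(α) = α/√(α−1) from the introduction:

```python
import math

def Lam(alpha):
    """Lambda(alpha) = alpha / sqrt(alpha - 1); explicit form assumed here,
    taken from the introduction of the paper (valid for alpha >= 2)."""
    return alpha / math.sqrt(alpha - 1)

# Lam(2) = 2, and Lam(2+x) - 2 behaves like x^2/4 for small x and like
# sqrt(x) for large x, i.e. it is comparable to x^2 wedge x^(1/2).
assert abs(Lam(2.0) - 2.0) < 1e-12

x_small = 1e-2
assert abs((Lam(2 + x_small) - 2) / x_small**2 - 0.25) < 0.01

x_large = 1e2
ratio = (Lam(2 + x_large) - 2) / math.sqrt(x_large)
assert 0.5 < ratio < 1.5
```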

Next, we abbreviate Eτ:=χτ(EA/d)χτ and note that ΠτEτ=0 because Πτχτ=0 by construction of vστ(x). From (3.17) we obtain Π¯λ,δτ=Π¯λ,δτΠτ+Π¯τ, which yields

Π¯λ,δτ(H^τ+Eτ)Π¯λ,δτ=Π¯λ,δτΠτH^τΠτΠ¯λ,δτ+(Π¯τH^τΠ¯τ+Eτ), 3.18

where we used that the cross terms vanish because of the block diagonal structure of H^τ.

The core of our proof is the spectral gap

spec(Π¯λ,δτ(H^τ+Eτ)Π¯λ,δτ) ⊂ R∖[λ−δ, λ+δ]. 3.19

To establish (3.19), it suffices to establish the same spectral gap for each term on the right-hand side of (3.18) separately, since the right-hand side of (3.18) is a block decomposition of its left-hand side. The first term on the right-hand side of (3.18) is explicit:

Π¯λ,δτΠτH^τΠτΠ¯λ,δτ = Σx∈V Σσ=± σΛ(αx) 1|σΛ(αx)−λ|>δ vστ(x)vστ(x),

which trivially has no eigenvalues in [λ-δ,λ+δ].

In order to establish the spectral gap for the second term of (3.18), we begin by remarking that Eτ has rank one and, by (3.13), its unique nonzero eigenvalue is d+O(1/d). Hence, by rank-one interlacing and Proposition 3.12, we find

spec(Π¯τ(Hτ+Eτ)Π¯τ) ⊂ [−2τ−C(ξ+ξτ-1), 2τ+C(ξ+ξτ-1)] ∪ {μ} 3.20

for some simple eigenvalue μ = d+O(1). Thus, to conclude the proof of the spectral gap for the second term of (3.18), it suffices to show that

λ-δ>2τ+C(ξ+ξτ-1) 3.21
λ+δ<μ. 3.22

To prove (3.21), we suppose that λ ≥ 2+8Cξ1/2 and, recalling the condition on δ and the choice of τ in Theorem 3.4, obtain

λ−δ ≥ 2+(λ−2)/2 ≥ 2τ+2Cξ1/2 > 2τ+C(ξ+ξτ-1), 3.23

where in the last step we used that ξτ-1<ξ1/2 by our choice of τ and the lower bound on λ. This is (3.21).

For the following arguments, we compare A/d with H^τ+Eτ using the estimate

‖A/d−(H^τ+Eτ)‖ ≤ ‖Hτ−H^τ‖+‖H−Hτ‖+‖EA/d−Eτ‖ ≤ C(ξ+ξτ-1) 3.24

with very high probability, which follows from Lemma 3.8, Lemma 3.11, (3.13) and d−1/2 ≤ Cξ.

Next, we use (3.24) to conclude the proof of (3.22). The only nonzero eigenvalue of Eτ is d(1+O(1/d)), and from Proposition 3.12 and Remark 1.5 we have ‖H^τ‖ ≤ Λ(maxx∈V αx)+O(1) with very high probability, so that Lemma A.7 and the assumption (1.10) yield ‖H^τ‖ ≤ C√(logN/d) with very high probability. Hence, by first order perturbation theory (e.g. Weyl's inequality), (1.10) and (3.24) imply that A/d has one eigenvalue bigger than d−O(1) and that all other eigenvalues are at most C√(logN/d). Since λ is nontrivial, we conclude that λ ≤ C√(logN/d). By the upper bound δ ≤ (λ−2)/2 and the lower bound on d in (1.10), this concludes the proof of (3.22) and, thus, that of the spectral gap (3.19).

Next, from (3.19), and (3.24), we conclude the spectral gap for the full adjacency matrix

spec(Π¯λ,δτ(A/d)Π¯λ,δτ) ⊂ R∖[λ−δ+C(ξ+ξτ-1), λ+δ−C(ξ+ξτ-1)]. 3.25

Using (3.25) we may conclude the proof. The eigenvalue–eigenvector equation (A/d−λ)w = 0 yields

Π¯λ,δτw = −(Π¯λ,δτ(A/d)Π¯λ,δτ−λ)−1 Π¯λ,δτ(A/d)Πλ,δτ w. 3.26

Assuming that δ > C(ξ+ξτ-1), from (3.25) we get

‖(Π¯λ,δτ(A/d)Π¯λ,δτ−λ)−1‖ ≤ 1/(δ−C(ξ+ξτ-1)). 3.27

Moreover, since Π¯λ,δτH^τΠλ,δτ = 0 and EτΠλ,δτ = 0, we deduce from (3.24) that

‖Π¯λ,δτ(A/d)Πλ,δτ‖ ≤ C(ξ+ξτ-1). 3.28

Plugging (3.27) and (3.28) into (3.26) yields

‖Π¯λ,δτw‖ ≤ C(ξ+ξτ-1)/(δ−C(ξ+ξτ-1)) ≤ 2C(ξ+ξτ-1)/δ,

since w is normalized. This concludes the proof if δ>C(ξ+ξτ-1) (after a renaming of the constant C), and otherwise the claim is trivial.
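The mechanism behind (3.26)–(3.28), namely that an eigenvector has small overlap with a spectral subspace separated from its eigenvalue, up to the size of the coupling, can be illustrated on a small block matrix. The model below is a hypothetical toy example, not the matrices of the proof:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 6x6 model: block B0 with an eigenvalue near lambda ~ 5, block B1 with
# spectrum in [-1, 1], and a coupling of size eps (all sizes illustrative).
eps = 1e-2
B0 = np.diag([5.0, 4.0, -4.0])
B1 = rng.standard_normal((3, 3)); B1 = (B1 + B1.T) / 2
B1 *= 1.0 / np.linalg.norm(B1, 2)              # normalize so ||B1|| = 1
C = eps * rng.standard_normal((3, 3))
M = np.block([[B0, C], [C.T, B1]])

evals, evecs = np.linalg.eigh(M)
lam = evals[-1]                                # eigenvalue near 5
w = evecs[:, -1]                               # normalized eigenvector

# From the bottom block rows of (M - lam) w = 0:
#   w_bot = (lam I - B1)^{-1} C^T w_top,
# hence ||w_bot|| <= ||C|| / dist(lam, spec(B1)), the analogue of (3.26)-(3.28).
dist = np.min(np.abs(lam - np.linalg.eigvalsh(B1)))
bound = np.linalg.norm(C, 2) / dist
assert np.linalg.norm(w[3:]) <= bound + 1e-12
```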

Proposition 3.12 is also the main tool to prove Theorem 1.7.

Proof of Theorem 1.7

The proof uses Proposition 3.12, Lemma 3.8, and Lemma 3.11 for τ∈[1+ξ1/2/3, 2]. Note that the lower bound 1+ξ1/2/3 is smaller than the lower bound 1+ξ1/2 imposed in these results, but their proofs carry over verbatim to this regime of τ.

We set Eτ:=χτ(EA/d)χτ with χτ from Definition 3.6. We now compare A/d and H^τ+Eτ, as in the proof of Theorem 3.4, and use some estimates from its proof. For any τ[1+ξ1/2/3,2], we have

spec(H^τ+Eτ) = {±Λ(αx) : x∈U} ∪ spec(Π¯τ(Hτ+Eτ)Π¯τ), 3.29

since Πτχτ=0. By first order perturbation theory and the choice τ=2, we get from (3.29), (3.20) and (3.24) that λ1(A/d)=μ+O(ξ)=d+O(1) and λ1(A/d) is well separated from the other eigenvalues of A/d (see the proof of Theorem 3.4). Combining (3.29), (3.20), and (3.24), choosing τ=1+ξ1/2/3 as well as using C(ξ+ξτ-1)ξ1/2/3 for this choice of τ imply (1.12).

Moreover, we apply first order perturbation theory to (3.29) using (3.20) and (3.24), and obtain

|λi+1(A/d)−Λ(ασ(i))| + |λN−i+1(A/d)+Λ(ασ(i))| ≤ C(ξ+ξτ-1) 3.30

with very high probability for all τ[1+ξ1/2/3,2] and all i[|U|] satisfying

2(τ-1)+C(ξ+ξτ-1)<Λ(ασ(i))-2. 3.31

What remains is choosing τ ≡ τi, depending on i∈[|U|], such that the condition (3.31) is satisfied and the error estimate from (3.30) takes the form of (1.11). Both are achieved by setting

τ := 1+[(Λ(ασ(i))−2)∧3]/3. 3.32

Note that τ∈[1+ξ1/2/3, 2] as σ(i)∈U. From Λ(ασ(i))−2 ≥ 3(τ−1) due to (3.32) and Λ(ασ(i))−2 ≥ ξ1/2 by the definition of U, we conclude that

Λ(ασ(i))−2 ≥ (5/2)(τ−1)+(1/6)ξ1/2 ≥ 2(τ−1)+C(ξτ-1+ξ),

where we used τ-13ξτ-1logd as τ-1ξ1/2/3. This proves (3.31) and, thus, (3.30) for any σ(i)U with the choice of τ from (3.32).

In order to show that the right-hand side of (3.30) is controlled by that of (1.11), we now distinguish the two cases Λ(ασ(i))−2 ≤ 3 and Λ(ασ(i))−2 > 3. In the latter case, τ = 2 by (3.32) and (1.11) follows immediately from (3.30) as ξ1 ≍ ξ. If Λ(ασ(i))−2 ≤ 3 then τ−1 = (Λ(ασ(i))−2)/3 and, thus, ξτ-1 = 3ξ/(Λ(ασ(i))−2). Hence, (3.30) implies (1.11). This concludes the proof of Theorem 1.7.

Proof of Lemma 3.8, Proposition 3.9, and Lemma 3.11

Proof of Lemma 3.8

To begin with, we reduce the problem to the adjacency matrices by using the estimate (3.13). Hence, with very high probability,

d‖H−Hτ‖ ≤ ‖EA−χτ(EA)χτ‖ + ‖A−Aτ‖ ≤ 2+‖ADτ‖,

where ADτ is the adjacency matrix of the graph Dτ := G∖Gτ. Hence, since d−1/2 ≤ Cξτ-1 by d ≤ 3logN and the definition (1.9), it suffices to show that ‖ADτ‖ ≤ Cξτ-1d.

We know from Proposition 3.1 (iii) and (v) that with very high probability Dτ consists of (possibly overlapping) stars4 around vertices x∈Vτ whose central degrees satisfy DxDτ ≤ Cdξτ-1². Moreover, with very high probability,

  • (i)

    any ball B2r(x) around xVτ has at most C cycles;

  • (ii)

    any ball B2r(x) around xVτ contains at most Cdξτ-12 vertices in Vτ.

Claim (i) follows from [10, Corollary 5.6], the definition (1.8), and Lemma A.7. Claim (ii) follows from [10, Lemma 7.3] and h((τ−1)/2) ≥ (τ−1)²/16 for 1 ≤ τ ≤ 2.

Let x∈Vτ. We claim that we can remove at most C edges of Dτ incident to x so that no cycle passes through x. Indeed, if there were more than C cycles in Dτ passing through x, then at least one such cycle would have to leave B2r(x) (by (i)), which would imply that B2r(x) contains at least r vertices of Vτ; by (ii), this is impossible since r ≥ 2Cdξτ-1² by τ ≥ 1+ξ1/2. See Fig. 7 for an illustration of Dτ.

Fig. 7

An illustration of a connected component of Dτ. Vertices of Vτ are drawn in white and the other vertices in black. The ball B2r(x) around a chosen white vertex x is drawn in grey, where 2r=4. The illustrated component of Dτ has three cycles, two of which are in B2r(x). The blue and red cycles pass through x. The purple edge is removed from the blue cycle, i.e. it is put into the graph Uτ. With very high probability, the red cycle cannot appear, because it leaves the ball B2r(x) and therefore contains more white vertices in B2r(x) than allowed by property (ii)

Thus, we can remove a graph Uτ from Dτ such that Uτ has maximal degree C and Dτ∖Uτ is a forest of maximal degree Cdξτ-1² (by (ii)). The claim now follows from Lemma A.4.
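The final step rests on the fact that a forest of bounded maximal degree has small adjacency norm (presumably the content of Lemma A.4, which lies outside this excerpt; below we use the standard bound ‖A‖ ≤ 2√Δ for a forest of maximal degree Δ, valid since such a forest embeds into the infinite Δ-regular tree). A quick numerical check on a random tree:

```python
import numpy as np

def tree_adjacency(parents):
    """Adjacency matrix of a tree given by a parent array (parents[0] is ignored)."""
    n = len(parents)
    A = np.zeros((n, n))
    for child in range(1, n):
        A[child, parents[child]] = A[parents[child], child] = 1.0
    return A

rng = np.random.default_rng(2)
# Random recursive tree on 200 vertices: each vertex attaches to an earlier one.
n = 200
parents = [0] + [int(rng.integers(0, k)) for k in range(1, n)]
A = tree_adjacency(parents)

Delta = int(A.sum(axis=0).max())     # maximal degree
norm = np.linalg.norm(A, 2)          # spectral norm of the adjacency matrix
assert norm <= 2 * np.sqrt(Delta)
```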

Proof of Proposition 3.9

We focus on the case σ=+; trivial modifications yield (3.14) for σ=-. The basic strategy is to decompose (Hτ-Λ(αx))v+τ(x) into several error terms that are estimated separately. A similar argument was applied in [10, Proposition 5.1] to the original graph G instead of Gτ, which however does not yield sharp enough estimates to reach the optimal scale dlogN (see Sect. 1.5).

We omit x from the notation in this proof and write ui, v+τ and Siτ instead of ui(x), v+τ(x) and Siτ(x). We define

siτ:=1Siτ1Siτ,Niτ(y):=|S1τ(y)Siτ|.

Note that (siτ)i=02r form an orthonormal system. Defining the vectors

w2:=i=2ruid|Siτ|ySi-1τ(Niτ(y)-|Siτ||Si-1τ|)1y,w3:=u2|S2τ|d|S1τ|-1s1τ+i=2r-1ui+1|Si+1τ|d|Siτ|-1+ui-1|Siτ|d|Si-1τ|-1siτ,w4:=ur(1-1αx)sr-1τ+ur-1(|Srτ|d|Sr-1τ|-1αx-1)srτ+ur|Sr+1τ|d|Srτ|sr+1τ, 3.33

a straightforward computation using the definition of v+τ yields

(Hτ-Λ(αx))v+τ=w2+w3+w4. 3.34

For a detailed proof of (3.34) in a similar setup, we refer the reader to [10, Lemma 5.2] (note that in the analogous calculation of [10] the left-hand side of (3.34) is multiplied by d). The terms in (3.34) analogous to w0 and w1 in [10] vanish, respectively, because the projection χτ is included in (3.12) and because Gτ|B2rτ is a tree by Proposition 3.1 (ii). The vector w4 from (3.33) differs from the one in [10] due to the special choice of ur in (3.4).

We now complete the proof of (3.14) by showing that each term on the right-hand side of (3.34) is bounded in norm by Cξ with very high probability. We start with w3 by first proving the concentration bound

|Si+1τ|/(d|Siτ|) − 1 = O(√(logN)/d) 3.35

with very high probability, for i=1,,r. To prove this, we use Proposition 3.1 (iv) and (vi), as well as [10, Lemma 5.4], to obtain

|Siτ|/|Si| = 1−|Si∖Siτ|/|Si| ≥ 1−ClogN/((τ−1)²d²) 3.36

with very high probability, where we used that αx1, and the assumption [10, Eq. (5.13)] is satisfied by the definition (1.8). Therefore, invoking [10, Lemma 5.4] in the following expansion yields

|Si+1τ|/(d|Siτ|) = (|Si+1|/(d|Si|))·(|Si|/|Siτ|)·(|Si+1τ|/|Si+1|) = (1+O(√(logN)/d))(1+O(logN/((τ−1)²d²))) 3.37

with very high probability. Hence, recalling the lower bound τ ≥ 1+ξ1/2, we obtain (3.35).

We take the norm in the definition of w3, use the orthonormality of (siτ)i=0r, and end up with

w32|S2τ|d|S1τ|-12u22+2i=2r-1|Si+1τ|d|Siτ|-12ui+12+|Siτ|d|Si-1τ|-12ui-12.

Consequently, (3.35) and i=0rui2=1 yield the desired bound on w3.

In order to estimate w2, we use the definitions

Ni(y):=|S1(y)Si|,Yi:=1|Si-1τ|ySi-1τ(Ni(y)-E[Ni(y)|Bi-1])2

and the Pythagorean theorem to obtain

w22=i=2rui2d|Siτ|ySi-1τ(Niτ(y)-|Siτ||Si-1τ|)24i=2rui2d|Siτ|ySi-1τ[(Ni(y)-E[Ni(y)|Bi-1])2+(E[Ni(y)|Bi-1]-d)2+(d-|Siτ||Si-1τ|)2+(Niτ(y)-Ni(y))2]4max2ir|Si-1τ|d|Siτ|[Yi+ClogN+(maxyDyG\Gτ)2] 3.38

with very high probability. Here, in the last step, we used (3.35), i=0rui2=1 and |d-E[Ni(y)|Bi-1]|=d|Bi-1|/NC with very high probability due to [10, Eq. (5.12b)] and Lemma A.7.

Next, we claim that

YiClogNlogd 3.39

with very high probability, for i=2,,r. The proof of (3.39) is based on a dyadic decomposition analogous to the one used in the proof of [10, Eq. (5.26)]. We distinguish two regimes and estimate

Yid+1|Si-1τ|ySi-1τ1|Ni(y)-E[Ni(y)|Bi-1]|>d1/2Ni(y)-E[Ni(y)|Bi-1]2d+1|Si-1τ|k=kmin0d2ek+1|Ni,kτ| 3.40

with very high probability, where we introduced

kmin:=-logd,Ni,kτ:={ySi-1τ:d2ek<(Ni(y)-E[Ni(y)|Bi-1])2d2ek+1}.

In (3.40), we used that, with very high probability, (Ni(y)-E[Ni(y)|Bi-1])2d2((τ-1/2)21)d2e, because ySi-1τ implies the conditions 0Ni(y)Dyτd due to Proposition 3.1 (i) and d/2E[Ni(y)|Bi-1]d with very high probability. By Proposition 3.1 (iv), we have Ni,kτNki-1, where Nki-1 is defined as in the proof of [10, Eq. (5.26)]. (Note that, in the notation of [10], there is a one-to-one mapping between A(Bi-1) and Bi.) In this proof it is shown that, with very high probability,

|Nki-1|k,k:=Cd(|Si-1|+logN)e-k.

Using (3.36) and (3.37), and then plugging the resulting bound into (3.40) concludes the proof of (3.39).

Thus, starting from (3.38) and using (3.35), (3.39), Proposition 3.1 (v), and the assumption 1+ξ1/2 ≤ τ ≤ 2, we obtain ‖w2‖ ≤ Cξ with very high probability.

Finally, we estimate w4. Since αx ≥ 2 and u0 ≤ 1, we have ur+ur−1 ≤ 3(αx−1)−(r−2)/2. The other coefficients of sr−1τ, srτ and sr+1τ are bounded by C with very high probability, due to αx ≥ 2 and (3.35), respectively. Therefore, (3.7) implies ‖w4‖ ≤ Cξ. This concludes the proof of Proposition 3.9.

Proof of Lemma 3.11

We have to estimate the norm of

Hτ-H^τ=ΠτHτΠτ-xVσ=±σΛ(αx)vστ(x)vστ(x)+Π¯τHτΠτ+(Π¯τHτΠτ). 3.41

Each x∈V satisfies the condition of Proposition 3.9 since ξ1/4 ≥ C(logd)2/logN (see (3.8)). Hence, for any x∈V and σ=±, Proposition 3.9 yields

Hτvστ(x) = σΛ(αx)vστ(x)+eστ(x), supp eστ(x) ⊂ Br+1τ(x), ‖eστ(x)‖ ≤ Cξ

with very high probability, where the second statement follows from the first together with the definition (3.5) of vστ(x) and Remark 3.7. By Proposition 3.1 (i), the balls B2rτ(x) and B2rτ(y) are disjoint for x,y∈Vτ with x≠y. Hence, in this case, vστ(x) and eστ(x) are orthogonal to vστ(y) and eστ(y). For any a = Σx∈V Σσ=± ax,σvστ(x), we obtain

Π¯τHτΠτa = Σx∈V Σσ=± ax,σΠ¯τHτvστ(x) = Π¯τ Σx∈V Σσ=± ax,σeστ(x).

Thus, with very high probability, ‖Π¯τHτΠτa‖² ≤ ‖Σx∈V Σσ=± ax,σeστ(x)‖² ≤ 4C²ξ² Σx∈V Σσ=± ax,σ² = 4C²ξ²‖a‖² by orthogonality. Therefore, ‖Π¯τHτΠτ‖ ≤ 2Cξ with very high probability. Similarly, the representation

ΠτHτΠτ-xVσ=±σΛ(αx)vστ(x)vστ(x)a=ΠτxVσ=±ax,σeστ(x)

yields the desired estimate on the sum of the two first terms on the right-hand side of (3.41).

Proof of Proposition 3.12

In this section we prove Proposition 3.12. Its proof relies on two fundamental tools.

The first tool is a quadratic form estimate, which estimates H in terms of the diagonal matrix of the vertex degrees. It is an improvement of [10, Proposition 6.1]. To state it, for two Hermitian matrices X and Y we use the notation XY to mean that Y-X is a nonnegative matrix, and |X| is the absolute value function applied to the matrix X.

Proposition 3.13

Let 4 ≤ d ≤ 3logN. Then, with very high probability, we have

|H| ≤ I+(1+2d−1/2)Q+C(logN/d² + d−1/2),

where Q is the diagonal matrix with diagonal entries (αx)x∈[N].

The second tool is a delocalization estimate for an eigenvector w of H^τ associated with an eigenvalue λ > 2. Essentially, it says that wx is small for any x∈Vτ unless w happens to be one of the specific eigenvectors v±τ(x) of H^τ, which are by definition localized around x. Thus, in any ball B2rτ(x) around x∈Vτ, all eigenvectors except v±τ(x) are locally delocalized in the sense that their magnitude at x is small. Since the balls (B2rτ(x))x∈Vτ are disjoint, this implies that eigenvectors of Π¯τHτΠ¯τ have negligible mass on the set Vτ.

Proposition 3.14

Let d satisfy (1.10). If 1+ξ1/2 ≤ τ ≤ 2 then the following holds with very high probability. Let λ be an eigenvalue of H^τ with λ > 2τ+Cξ and w=(wx)x∈[N] a corresponding eigenvector.

  • (i)
    If x∈V and v±τ(x) ⟂ w, or if x∈Vτ∖V, then
    |wx| ≤ ‖w|B2rτ(x)‖ (λ²/(λ−2τ−Cξ)²) ((2τ+Cξ)/λ)r.
  • (ii)
    Let w be normalized. If v±τ(x) ⟂ w for all x∈V then
    Σx∈Vτ wx² ≤ (λ⁴/(λ−2τ−Cξ)⁴) ((2τ+Cξ)/λ)2r.

Analogous results hold for λ < −2τ−Cξ.

We may now conclude the proof of Proposition 3.12.

Proof of Proposition 3.12

By Proposition 3.13, Lemma 3.11, and Lemma 3.8 we have

H^τ ≤ I+(1+2d−1/2)Q+C(logN/d²+d−1/2)+‖H−Hτ‖+‖Hτ−H^τ‖ ≤ I+(1+2d−1/2)Q+C(ξ+ξτ-1) 3.42

with very high probability, where we used logN/d²+d−1/2 ≤ C(ξ+ξτ-1).

Arguing by contradiction, we assume that there exists an eigenvalue λ > 2τ+C′(ξ+ξτ-1) of Π¯τHτΠ¯τ for some constant C′ ≥ 2C to be chosen later. By the lower bound in (1.10), we may assume that C′ξ ≤ 1. Thus, by the definition of H^τ, there is an eigenvector w of H^τ corresponding to λ which is orthogonal to v±τ(x) for all x∈V. From (3.42), we conclude

λ = ⟨w, H^τw⟩ ≤ 1+(1+2d−1/2)τ Σx∉Vτ wx² + (1+2d−1/2)(Σx∈Vτ wx²) maxy∈[N]αy + C(ξ+ξτ-1) 3.43

It remains to estimate the two sums on the right-hand side of (3.43).

Since w ⟂ v±τ(x) for all x∈V, we can apply Proposition 3.14 (ii). We find

2rlog2τ+Cξλ2rlog2τ+Cξ2τ+Cξ-2r(C-C)ξ2τ+Cξ-c(C-C)3logNξ, 3.44

where in the last step we recalled the definition (1.8) and used that τ ≤ 2 and C′ξ ≤ 1. Using the estimate

λ⁴/(λ−2τ−Cξ)⁴ ≤ C/((C′−C)⁴ξ⁴),

combined with Proposition 3.14 (ii), (3.44) and Lemma A.7, yields

1ξxVτwx2maxy[N]αyClogN(C-C)4ξ5exp(-c(C-C)3logNξ)Cd5logN(C-C)4exp(-c(C-C)3logNdlogd)Cd5logN(C-C)41d81,

where the third step follows by choosing C′ large enough, depending on C.

Plugging this estimate into (3.43) and using Σx wx² ≤ 1 to estimate the first sum in (3.43), we obtain λ ≤ 2τ+2C(ξ+ξτ-1) ≤ 2τ+C′(ξ+ξτ-1). This contradicts the assumption λ > 2τ+C′(ξ+ξτ-1), and the proof of Proposition 3.12 is therefore complete.

Proof of Proposition 3.13

We only establish an upper bound on H. The proof of the same upper bound on -H is identical and, therefore, omitted.

We introduce the matrices H(t)=(Hxy(t))x,y[N] and M(t)=(δxymx(t))x,y[N] with entries

Hxy(t) := tHxy/(t²−Hxy²), mx(t) := 1+Σy Hxy²/(t²−Hxy²).

By the estimate on the spectral radius of the nonbacktracking matrix associated with H in [15, Theorem 2.5] and the Ihara–Bass-type formula in [15, Lemma 4.1] we have, with very high probability, det(M(t)−H(t)) ≠ 0 for all t ≥ 1+Cd−1/2. Because M(t)−H(t) → I as t → ∞, the matrix M(t)−H(t) is positive definite for large enough t. By continuity of the eigenvalues, we conclude that all eigenvalues of M(t)−H(t) stay positive for all t ≥ 1+Cd−1/2, and hence

H(t)M(t) 3.45

for all t1+Cd-1/2 with very high probability. We now define the matrix Δ=(Δxy)x,y[N] with

Δxy := Hxy(t)−t−1Hxy if x≠y, Δxx := Σy|Hxy(t)−t−1Hxy|.

It is easy to check that Δ is a nonnegative matrix. We also have

Σy|Hxy(t)−t−1Hxy| = Σy |Hxy|³/(t(t²−Hxy²)) ≤ (2/(t³d1/2))(αx+1/d),

where we used that |Hxy| ≤ d−1/2 and Σy Hxy² ≤ αx+d/N by the definition of H. We use this to estimate the diagonal entries of Δ and obtain

0 ≤ Δ ≤ H(t)−t−1H+(2/(t³d1/2))Q+2/(t³d3/2). 3.46

On the other hand, for the diagonal matrix M(t), we have the trivial upper bound

M(t) ≤ I+t−2Q+ClogN/d² 3.47

since αx ≤ C(logN)/d with very high probability due to Lemma A.7. Finally, combining (3.45), (3.46) and (3.47) yields

t−1H ≤ I+(t−2+2/(t³d1/2))Q+ClogN/d²

and Proposition 3.13 follows by choosing t = 1+Cd−1/2.

What remains is the proof of Proposition 3.14. The underlying principle behind the proof is the same as that of the Combes–Thomas estimate [25]: the Green function ((λ-Z)-1)ij of a local operator Z at a spectral parameter λ separated from the spectrum of Z decays exponentially in the distance between i and j, at a rate inversely proportional to the distance from λ to the spectrum of Z. Here local means that Zij vanishes if the distance between i and j is larger than 1. Since a graph is equipped with a natural notion of distance and the adjacency matrix is a local operator, a Combes–Thomas estimate would be applicable directly on the level of the graph, at least for the matrix Hτ. For our purposes, however, we need a radial version of a Combes–Thomas estimate, obtained by first tridiagonalizing (a modification of) H^τ around a vertex xVτ (see Appendix A.2). In this formulation, the indices i and j have the interpretation of radii around the vertex x, and the notion of distance is simply that of N on the set of radii. Since Z is tridiagonal, the locality of Z is trivial, although the matrix H^τ (or its appropriate modification) is not a local operator on the graph Gτ.

To ensure the separation of λ>2τ+o(1) and the spectrum of Z, we cannot choose Z to be the tridiagonalization of H^τ, since λ is an eigenvalue of H^τ. In fact, Z is the tridiagonalization of a new matrix H^τ,x, obtained by restricting H^τ to the ball B2rτ(x) and possibly subtracting a suitably chosen rank-two matrix, which allows us to show H^τ,x2τ+o(1). By the orthogonality assumption on w, we then find that the Green function ((λ-Z)-1)ir, 0i<r, and the eigenvector components in the radial basis ui, 0i<r, satisfy the same linear difference equation. The exponential decay of ((λ-Z)-1)ir in r-i then implies that, for each xVτ, u02o(1/logN)i=0rui2. Going back to the original vertex basis, this implies that wx2o(1/logN)w|B2rτ(x)2 for all xVτ, from which Proposition 3.14 follows since the balls B2rτ(x), xVτ, are disjoint.
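The quantitative content of the Combes–Thomas argument sketched above can be tested directly on a tridiagonal matrix. In the sketch below, the threshold 2 is a stand-in for 2τ+o(1), and the bounds checked are exactly the Neumann-series estimates used later in (3.57) and (3.58):

```python
import numpy as np

rng = np.random.default_rng(3)
r = 20
# Random symmetric tridiagonal Z, rescaled so that ||Z|| = 2 (illustrative).
diag = rng.uniform(-1, 1, r + 1)
off = rng.uniform(0.2, 1, r)
Z = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
Z *= 2.0 / np.linalg.norm(Z, 2)
lam = 2.5                                   # spectral parameter above ||Z||

G = np.linalg.inv(lam * np.eye(r + 1) - Z)  # Green function (lam - Z)^{-1}
normZ = np.linalg.norm(Z, 2)

# Offdiagonal decay: |G_{0r}| <= sum_{k>=r} ||Z||^k / lam^{k+1}
#                            = (||Z||/lam)^r / (lam - ||Z||).
assert abs(G[0, r]) <= (normZ / lam) ** r / (lam - normZ) + 1e-12
# Diagonal lower bound: G_{rr} >= (1 - ||Z||/lam) / lam.
assert G[r, r] >= (1 - normZ / lam) / lam - 1e-12
```

Both inequalities are rigorous consequences of the Neumann series, so the assertions hold for any such Z; the randomness only provides a generic test case.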

Proof of Proposition 3.14

For a matrix M∈RN×N and a set V⊂[N], we use the notation (M|V)xy := 1x,y∈V Mxy.

We begin with part (i). We first treat the case x∈V. To that end, we introduce the matrix

H^τ,x:=H^τ|B2rτ(x)-Λ(αx)v+τ(x)v+τ(x)+Λ(αx)v-τ(x)v-τ(x). 3.48

We claim that, with very high probability,

‖H^τ,x‖ ≤ 2τ+Cξ. 3.49

To show (3.49), we begin by noting that, by Proposition 3.1 (i) and (ii), Gτ restricted to B2rτ(x) is a tree whose root x has αxd children and all other vertices have at most τd children. Hence, Lemma A.5 yields Hτ|B2rτ(x)τΛ(αx/τ2). Using Lemma 3.11 we find

‖H^τ|B2rτ(x) − Hτ|B2rτ(x)‖ ≤ Cξ 3.50

with very high probability, and since v±τ(x) is an eigenvector of H^τ|B2rτ(x) with eigenvalue ±Λ(αx), we conclude

‖H^τ,x‖ ≤ τΛ(αx/τ2)+Cξ 3.51

with very high probability. The estimate (3.51) is rough in the sense that the subtraction of the two last terms of (3.48) is not needed for its validity (since Λ(αx)τΛ(αx/τ2)). Nevertheless, it is sufficient to establish (3.49) in the following cases, which may be considered degenerate.

If αx2τ then (3.51) immediately implies (3.49), since ττ. Moreover, if αx>2τ and Λ(αx)2τ+Cξ, then (3.51) implies

H^τ,xτΛ(αx/τ)+CξτΛ(αx)+Cξ2τ+3Cξ,

which is (3.49) after renaming the constant C.

Hence, to prove (3.49), it suffices to consider the case Λ(αx)>2τ+Cξ. By Proposition 3.1 (i) and (ii), Gτ restricted to B2rτ(x)\{x} is a forest of maximal degree at most τd. Lemma A.4 therefore yields Hτ|B2rτ(x)\{x}2τ. Moreover, the adjacency matrix of the star graph consisting of all edges of Gτ incident to x has precisely two nonzero eigenvalues, ±dαx. By first order perturbation theory, we therefore conclude that Hτ|B2rτ(x) has at most one eigenvalue strictly larger than 2τ and at most one strictly smaller than -2τ. Using (3.50) we conclude that H^τ|B2rτ(x) has at most one eigenvalue strictly larger than 2τ+Cξ and at most one strictly smaller than -2τ-Cξ. Since v+τ(x) (respectively v-τ(x)) is an eigenvector of H^τ|B2rτ(x) with eigenvalue Λ(αx) (respectively -Λ(αx)), and since Λ(αx)>2τ+Cξ, we conclude (3.49).

Next, let (gi)i=0r be the Gram–Schmidt orthonormalization of the vectors ((H^τ,x)i1x)i=0r. We claim that

supp gi ⊂ Br+iτ(x) 3.52

for i=0,…,r. The proof proceeds by induction. The base case i=0 holds trivially. For the induction step, it suffices to prove for 0 ≤ i < r that if supp gi ⊂ Br+iτ(x) then

supp(H^τ,xgi) ⊂ Br+i+1τ(x). 3.53

To that end, we note that by Proposition 3.1 (i) we have H^τ,x=(Π¯τHτΠ¯τ)|B2rτ(x). Hence, by induction assumption, Proposition 3.1 (i), and Remark 3.7,

H^τ,xgi=(I-σ=±vστ(x)vστ(x))Hτ(I-σ=±vστ(x)vστ(x))gi,

and we conclude (3.53), as suppvστ(x)Brτ(x).

Let Z = (Zij)i,j=0r be the tridiagonal representation of H^τ,x up to radius r (see Appendix A.2 below). Owing to (3.49), we have

‖Z‖ ≤ 2τ+Cξ. 3.54

We set ui := ⟨gi, w⟩ for 0 ≤ i ≤ r. Because w is an eigenvector of H^τ that is orthogonal to v±τ(x), for any i < r, (3.52) implies

λui = ⟨gi, (H^τ−Λ(αx)v+τ(x)v+τ(x)+Λ(αx)v−τ(x)v−τ(x))w⟩ = ⟨H^τ,xgi, w⟩ = ⟨Ziigi+Zii+1gi+1+Zii−1gi−1, w⟩ = Ziiui+Zii+1ui+1+Zii−1ui−1 3.55

with the conventions u−1 = 0 and Z0,−1 = 0. Let G(λ) := (λ−Z)−1 be the resolvent of Z at λ; it exists since λ > ‖Z‖ by the assumption on λ and (3.54). Since ((λ−Z)G(λ))ir = 0 for i < r, we find

λGir(λ)=ZiiGir(λ)+Zii+1Gi+1r(λ)+Zii-1Gi-1r(λ).

Therefore, (Gir(λ))i≤r and (ui)i≤r satisfy the same linear recursion (cf. (3.55)); solving it recursively from i=0 to i=r yields

Gir(λ)/Grr(λ) = ui/ur 3.56

for all i ≤ r. Moreover, as λ > ‖Z‖ by the assumption on λ and (3.54), we have the convergent Neumann series G(λ) = (1/λ)Σk≥0(Z/λ)k. Thus, the offdiagonal entries of the resolvent satisfy

G0r(λ) = (1/λ)Σk≥0((Z/λ)k)0r.

Since Z is tridiagonal, we deduce that ((Z/λ)k)0r = 0 if k < r, so that, by (3.54),

|G0r(λ)| ≤ ((2τ+Cξ)/λ)r·1/(λ−2τ−Cξ). 3.57

On the other hand, for the diagonal entries of the resolvent we get, by splitting the summation over k into even and odd values,

Grr(λ) = (1/λ)Σk≥0((Z/λ)k)rr = (1/λ)Σk≥0((Z/λ)k(I+Z/λ)(Z/λ)k)rr ≥ (1/λ)(I+Z/λ)rr ≥ (1/λ)(1−(2τ+Cξ)/λ), 3.58

where in the third step we discarded the terms k > 0 to obtain a lower bound, using that I+Z/λ ≥ 0 by (3.54), and in the last step we used (3.54) again. Hence, the definition of ui and (3.52) imply

|wx|/‖w|B2rτ(x)‖ ≤ |u0|/(Σi=0r ui²)1/2 ≤ |u0|/|ur| = |G0r(λ)|/Grr(λ) ≤ (λ²/(λ−2τ−Cξ)²)((2τ+Cξ)/λ)r.

Here, we used (3.56) in the third step and (3.57) as well as (3.58) in the last step. This concludes the proof of (i) for x∈V.

In the case x∈Vτ∖V, we set H^τ,x := H^τ|B2rτ(x). We claim that (3.49) holds. To see this, we use Proposition 3.1 (i) and (ii) as well as Lemma A.5 with p = d(2+ξ1/4) and q = dτ to obtain

Hτ|B2rτ(x)τΛ((2+ξ1/4)/τ2)2τ.

Here, the last step is trivial if τ1+ξ1/4/2 and, if τ[1+ξ1/2,1+ξ1/4/2], we used that f(τ):=τΛ((2+ξ1/4)/τ)/(2τ) is monotonically decreasing on this interval and f(1+ξ1/2)1, as can be seen by an explicit analysis of the function f. Now we may take over the previous argument verbatim to prove (i) for xVτ\V.

Finally, we prove (ii). By (i) we have

Σx∈Vτ wx² ≤ Σx∈Vτ ‖w|B2rτ(x)‖² (λ⁴/(λ−2τ−Cξ)⁴)((2τ+Cξ)/λ)2r ≤ (λ⁴/(λ−2τ−Cξ)⁴)((2τ+Cξ)/λ)2r,

where we used that the balls {B2rτ(x) : x∈Vτ} are disjoint, which implies Σx∈Vτ ‖w|B2rτ(x)‖² ≤ ‖w‖² = 1.

Proof of Proposition 3.1

We conclude this section with the proof of Proposition 3.1.

Proof of Proposition 3.1

Parts (i)–(v) follow immediately from parts (i)–(iv) and (vi) of [10, Lemma 7.2]. To see this, we remark that the function h from [10] satisfies h((τ−1)/2) ≥ (τ−1)²/16 for 1 < τ ≤ 2. Moreover, by Lemma A.7 and the upper bound on d, we have maxxDx ≤ ClogN with very high probability. Hence, choosing the universal constant c small enough in (1.8) and recalling the lower bound on τ−1, in the notation of [10, Equations (5.1) and (7.2)] we obtain for any x∈Vτ the inequality 2r ≤ (rx/4)∧(r(τ)/2) with very high probability. This yields parts (i)–(v).

It remains to prove (vi), which is the content of the rest of this proof. From now on we systematically omit the argument x from our notation. Part (v) already implies the bound

|S1∖S1τ| = DxG∖Gτ ≤ ClogN/((τ−1)²d) 3.59

with very high probability, which is (3.2) for i=1.

From [10, Eq. (7.13)] we find

|Si\Siτ|yS1\S1τ|Si-1(y)|.

(As a guide to the reader, this estimate follows from the construction of Gτ given in [10, Proof of Lemma 7.2], which ensures that if a vertex zSi is not in Siτ then any path in G of length i connecting z to x is cut in Gτ at its edge incident to x.) Hence, in order to show (vi) for i2, it suffices to prove

Σy∈S1∖S1τ |Si−1(y)| ≤ (ClogN/(τ−1)²)di−2 3.60

with very high probability, for all 2i2r.

We start with the case i=2. We shall use the relation

Σy∈S1∖S1τ |S1(y)| = Σy∈S1∖S1τ N2(y) + Σy∈S1∖S1τ |S1(y)∩S1| + |S1∖S1τ|, 3.61

where, for y∈S1, we introduced N2(y) := |S1(y)∩S2|. Note that N2(y) is the number of vertices in S2 connected to x via a path of minimal length passing through y. The identity (3.61) is a direct consequence of |S1(y)| = |S1(y)∩S2|+|S1(y)∩S1|+|S1(y)∩S0|, the definition of N2, and |S1(y)∩S0| = |S1(y)∩{x}| = 1.

The second and third terms of (3.61) are smaller than the right-hand side of (3.60) for i=2 due to [10, Eq. (5.23)] and (3.59), respectively. Hence, it remains to estimate the first term on the right-hand side of (3.61) in order to prove (3.60) for i=2.

To that end, we condition on the ball B1 and abbreviate PB1(·) := P(·|B1). Since

N2(y) = Σz∈[N]∖B1 Ayz, 3.62

we find that, conditioned on B1, the random variables (N2(y))y∈S1 are independent Binom(N−|B1|, d/N) random variables. We abbreviate Γ := logN/(τ−1)². For given C,C′, we set C″ := C+2C′ and estimate

PB1(Σy∈S1∖S1τ N2(y) ≥ C″Γ) ≤ PB1(Σy∈S1∖S1τ 1N2(y)≥2d N2(y) ≥ (C″−2C′)Γ) + PB1(Σy∈S1∖S1τ 1N2(y)<2d N2(y) ≥ 2C′Γ) ≤ PB1(Σy∈S1 12d≤N2(y)≤N1/4 N2(y) ≥ CΓ) + Σy∈S1 PB1(N2(y) ≥ N1/4) + PB1(|S1∖S1τ| ≥ C′Γd−1). 3.63

In order to estimate the first term on the right-hand side of (3.63), we shall prove that if |B1| ≤ N1/4 then

EB1[exp(12d≤N2(y)≤N1/4 N2(y)t)] ≤ 2 3.64

for all y∈S1 and t ≤ 1/8. To that end, we estimate

EB1[exp(12d≤N2(y)≤N1/4 N2(y)t)] ≤ 1+EB1[12d≤N2(y)≤N1/4 eN2(y)t].

With the Poisson approximation, Lemma A.6 below, we obtain (assuming that 2d is an integer to simplify notation)

EB1[12d≤N2(y)≤N1/4 eN2(y)t] = Σ2d≤k≤N1/4 ((d−d|B1|/N)k etk/k!) e−d+d|B1|/N (1+O(N−1/2))k ≤ Σk≥2d (dk etk/k!) e−d (1+O(N−1/2)) = (d2de2td/(2d)!) e−d Σi≥0 di eti Πj=2d+12d+i (1/j) (1+O(N−1/2)) ≤ (d2de2td/(2d)!) e−d Σi≥0 (et/2)i (1+O(N−1/2)) = (d2de2td/(2d)!) e−d (1+O(N−1/2))/(1−et/2).

By Stirling’s approximation we get

logd2de2td(2d)!e-d=d2t-2log2+1-12log(4πd)+o(1).

The term in the parentheses on the right-hand side is negative for t1/8, and hence

EB1[12dN2(y)N1/4eN2(y)t]1

for large enough d, which gives (3.64). Since the family (N2(y))y∈S1 is independent conditioned on B1, we can now use Chebyshev's inequality to obtain, for 0 ≤ t ≤ 1/8,

PB1(Σy∈S1 12d≤N2(y)≤N1/4 N2(y) ≥ CΓ) ≤ (maxy∈S1 EB1[exp(12d≤N2(y)≤N1/4 N2(y)t)])|S1| e−tCΓ ≤ exp(|S1|log2 − CtlogN/(τ−1)²).

Now we set t = 1/8, recall the bound τ ≤ 2, plug this estimate back into (3.63), and take the expectation. We use Lemma A.7 to estimate |S1|, which in particular implies that |B1| ≤ N1/4 with very high probability; this concludes the estimate of the expectation of the first term of (3.63) by choosing C large enough. Next, the expectation of the second term is easily estimated by Lemma A.7, since N2(y) has law Binom(N−|B1|, d/N) when conditioned on B1. Finally, the expectation of the last term of (3.63) is estimated by (3.59), choosing C′ large enough. This concludes the proof of (3.60) for i=2.
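The Stirling step above can be verified numerically: the exact value of log(d^{2d}e^{2td}e^{−d}/(2d)!), computed via the log-gamma function, is negative for t = 1/8 and is already well approximated by the displayed expression at moderate d:

```python
import math

def log_tail_prefactor(d, t):
    """log( d^(2d) e^(2td) e^(-d) / (2d)! ), computed exactly via lgamma."""
    return 2 * d * math.log(d) + 2 * t * d - d - math.lgamma(2 * d + 1)

def stirling_approx(d, t):
    """The approximation d(2t - 2 log 2 + 1) - (1/2) log(4 pi d)."""
    return d * (2 * t - 2 * math.log(2) + 1) - 0.5 * math.log(4 * math.pi * d)

d, t = 200, 1 / 8
exact = log_tail_prefactor(d, t)
approx = stirling_approx(d, t)
assert exact < 0                    # the prefactor is exponentially small in d
assert abs(exact - approx) < 1e-2   # the o(1) correction is tiny already at d = 200
```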

We now prove (3.60) for i+1 with i ≥ 2 by induction. Using [10, Lemma 5.4 (ii)] combined with Lemma A.7, we deduce that

|Si(y)| ≤ d|Si−1(y)| + C√(d|Si−1(y)|logN)

with very high probability for all y∈S1∖S1τ and all i ≤ r. Therefore, using the induction assumption, i.e. (3.60) for i, we obtain

Σy∈S1∖S1τ |Si(y)| ≤ (ClogN/(τ−1)²)di−1 + C√(dlogN) Σy∈S1∖S1τ √|Si−1(y)| ≤ (ClogN/(τ−1)²)di−1 + C√(dlogN) |S1∖S1τ| (Σy∈S1∖S1τ |Si−1(y)|/|S1∖S1τ|)1/2 ≤ (ClogN/(τ−1)²)di−1 + C√(dlogN) √(logN/((τ−1)²d)) √((logN/(τ−1)²)di−2)

with very high probability, where we used the concavity of x ↦ √x in the second step, and (3.59) and (3.60) for i in the last step. Since √(dilogN) ≤ di/2+1 ≤ di for i ≥ 2 and the sequence (d1−i/2)i∈N is summable, this proves (3.60) for i+1 with a constant C independent of i. This concludes the proof of Proposition 3.1.

The Delocalized Phase

In this section we prove Theorem 1.8. In fact, we state and prove a more general result, Theorem 4.2 below, which immediately implies Theorem 1.8.

Local law

Theorem 4.2 is a local law for a general class of sparse random matrices of the form

M=H+fee, 4.1

where f0 and e:=N-1/2(1,1,,1). Here H is a Hermitian random matrix satisfying the following definition.

Definition 4.1

Let 0 < d < N. A sparse matrix is a complex Hermitian N×N matrix H = H* ∈ CN×N whose entries Hij satisfy the following conditions.

  • (i)

The upper-triangular entries (Hij : 1 ≤ i ≤ j ≤ N) are independent.

  • (ii)

We have EHij = 0 and E|Hij|² = (1+O(δij))/N for all i,j.

  • (iii)

Almost surely, |Hij| ≤ Kd−1/2 for all i,j and some constant K.

It is easy to check that the set of matrices M defined as in (4.1) and Definition 4.1 contains those from Theorem 1.8 (see the proof of Theorem 1.8 below). From now on we suppose that K=1 to simplify notation.

The local law for the matrix M established in Theorem 4.2 below provides control of the entries of the Green function

G(z):=(M-z)-1 4.2

for z in the spectral domain

SSκ,L,N=Sκ×[N-1+κ,L] 4.3

for some constant L1. We also define the Stieltjes transform g of the empirical spectral measure of M given by

g(z) := (1/N)Σi=1N (λi(M)−z)−1 = (1/N)TrG(z). 4.4

The limiting behaviour of G and g is governed by the following deterministic quantities. Denote by C+:={zC:Imz>0} the complex upper half-plane. For zC+ we define m(z) as the Stieltjes transform of the semicircle law μ1,

m(z) := ∫ μ1(du)/(u−z), μ1(du) := (1/(2π))√((4−u²)+) du. 4.5

An elementary argument shows that m(z) can be characterized as the unique solution m ∈ C+ of the equation

1/m(z) = −z−m(z). 4.6

For α ≥ 0 and z∈C+ we define

mα(z) := −1/(z+αm(z)), 4.7

so that m1=m by (4.6). In Lemma A.3 below we show that mα is bounded in the domain S, with a bound depending only on κ.
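The fixed-point equation (4.6) and the reduction m1 = m can be checked numerically against the integral definition (4.5). The closed form used below, m(z) = (−z+√(z²−4))/2 with the branch chosen so that Im m > 0, is the standard solution of the quadratic equation implied by (4.6):

```python
import cmath
import numpy as np

def m_semicircle(z):
    """Stieltjes transform of the semicircle law: the root of 1/m = -z - m
    with Im m > 0 for Im z > 0."""
    s = cmath.sqrt(z * z - 4)
    m = (-z + s) / 2
    return m if m.imag > 0 else (-z - s) / 2

z = 0.5 + 0.1j
m = m_semicircle(z)

# Fixed-point equation (4.6):
assert abs(1 / m + z + m) < 1e-12
# Direct numerical integration of the defining integral (4.5):
u = np.linspace(-2, 2, 400001)
f = np.sqrt(4 - u**2) / (2 * np.pi) / (u - z)
integral = np.sum((f[:-1] + f[1:]) / 2) * (u[1] - u[0])   # trapezoid rule
assert abs(integral - m) < 1e-3
# m_alpha(z) = -1/(z + alpha m(z)) reduces to m at alpha = 1:
assert abs(-1 / (z + m) - m) < 1e-12
```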

For x ∈ [N] we denote the squared Euclidean norm of the x-th row of H by

β_x := Σ_y |H_{xy}|², 4.8

which should be thought of as the normalized degree of x; see Remark 4.3 below.

Theorem 4.2

(Local law for M). Fix 0 < κ ≤ 1/2 and L ≥ 1. Let H be a sparse matrix as in Definition 4.1, define M as in (4.1) for some 0 ≤ f ≤ N^{κ/6}, and define G and g as in (4.2) and (4.4), respectively. Then, with very high probability, for d satisfying (1.18) and for all z ∈ S, we have

max_{x,y∈[N]} |G_{xy}(z) − δ_{xy} m_{β_x}(z)| ≤ C (logN/d²)^{1/3}, 4.9
|g(z) − m(z)| ≤ C (logN/d²)^{1/3}. 4.10

Proof of Theorem 1.8

Under the assumptions of Theorem 1.8 we find that M := A/√d is of the form (4.1) for some H and f satisfying the assumptions of Theorem 4.2. Now Theorem 1.8 is a well-known consequence of Theorem 4.2 and the boundedness of m_α(z) in (A.4) below. For the reader's convenience, we give the short proof. Denoting the eigenvalues of M by (λ_i(M))_{i∈[N]} and the associated eigenvectors by (w_i(M))_{i∈[N]}, and setting z = λ + iη with η = N^{−1+κ}, by (4.9) and (A.4) we have with very high probability

C ≥ Im G_{xx}(z) = Σ_{i∈[N]} η |⟨1_x, w_i(M)⟩|² / ((λ − λ_i(M))² + η²) ≥ η^{−1} Σ_{i : λ_i(M)=λ} |⟨1_x, w_i(M)⟩|²,

where in the last step we omitted all terms except those i satisfying λ_i(M) = λ. The claim follows by renaming κ → κ/2. (Here we used that Theorem 4.2 holds also for random z ∈ S, as follows from a standard net argument; see e.g. [16, Remark 2.7].)

Remark 4.3

(Relation between α_x and β_x). In the special case M = d^{−1/2} A with A the adjacency matrix of G(N,d/N), we have

β_x = (1/d) Σ_y (A_{xy} − d/N)² = α_x + O(d(1 + α_x)/N) = α_x + O((d + logN)/N)

with very high probability, by Lemma A.7.
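Remark 4.3 is easy to verify on a sampled graph: with H_{xy} = (A_{xy} − d/N)/√d one has the exact identity β_x = α_x(1 − 2d/N) + d/N, so β_x − α_x = O((d + logN)/N). A Python sketch (our own illustration; N, d, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 2000, 8.0

# sample the adjacency matrix of G(N, d/N)
upper = np.triu(rng.random((N, N)) < d / N, 1).astype(float)
A = upper + upper.T

H = (A - d / N) / np.sqrt(d)         # centred, rescaled entries
beta = (H ** 2).sum(axis=1)          # beta_x, squared Euclidean norm of row x
alpha = A.sum(axis=1) / d            # alpha_x, normalized degree

print(np.max(np.abs(beta - alpha)))  # small, of order (d + log N)/N
```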

By definition, m_α(z) ∈ ℂ₊ for z ∈ ℂ₊, i.e. m_α is a Nevanlinna function, and lim_{z→∞} z m_α(z) = −1. By the integral representation theorem for Nevanlinna functions, we conclude that m_α is the Stieltjes transform of a Borel probability measure μ_α on ℝ,

m_α(z) = ∫ μ_α(du)/(u − z). 4.11

Theorem 4.2 implies that the spectral measure of M at a vertex x is approximately μ_{β_x} with very high probability.

Inverting the Stieltjes transform (4.11) and using the definitions (4.5) and (4.7), we find after a short calculation

μ_α(du) = g_α(u) du + h_α δ_{s_α}(du) + h_α δ_{−s_α}(du), 4.12

where

g_α(u) := α 1_{|u|<2} √(4 − u²) / (2π((1 − α)u² + α²)),   h_α := 1_{α>2} (α − 2)/(2α − 2) + 1_{α=0}/2,   s_α := 1_{α>2} Λ(α).

The family (μ_α)_{α≥0} contains the semicircle law (α = 1), the Kesten–McKay law of parameter d (α = d/(d−1)), and the arcsine law (α = 2). For rational α = p/q, the measure μ_{p/q} can be interpreted as the spectral measure at the root of the infinite rooted (p, q)-regular tree, whose root has p children and all other vertices have q children. We refer to Appendix A.2 for more details. See Fig. 8 for an illustration of the measure μ_α.
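As a sanity check on the decomposition (4.12), one can verify numerically that the absolutely continuous part and the atoms add up to total mass 1 for any α ≥ 0. The Python sketch below (our own; the substitution u = 2 sin θ removes the square-root singularity at u = ±2, and the grid size is an arbitrary choice) avoids α = 0 and α = 2, where the integrand degenerates:

```python
import numpy as np

def mu_alpha_mass(alpha, n=200_001):
    """Total mass of mu_alpha from (4.12): integral of g_alpha plus the two atoms."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)
    u = 2.0 * np.sin(theta)
    # g_alpha(u) du with du = 2 cos(theta) dtheta and sqrt(4 - u^2) = 2 cos(theta)
    f = alpha * (2.0 * np.cos(theta)) ** 2 / (2.0 * np.pi * ((1.0 - alpha) * u ** 2 + alpha ** 2))
    ac = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(theta))   # trapezoidal rule
    atoms = 2.0 * (alpha - 2.0) / (2.0 * alpha - 2.0) if alpha > 2.0 else 0.0
    return ac + atoms

for alpha in [0.5, 1.0, 2.5, 4.0]:
    print(alpha, mu_alpha_mass(alpha))   # each total is ~1.0
```

For α ≤ 2 all the mass is absolutely continuous; for α > 2 the two atoms at ±Λ(α) carry mass (α−2)/(2α−2) each.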

Remark 4.4

Using a standard application of the Helffer–Sjöstrand formula (see e.g. [16, Section 8 and Appendix C]), we deduce from Theorem 4.2 the following local law for the spectral measure. Denote by ϱ_x the spectral measure of M at vertex x. Under the assumptions of Theorem 4.2, with very high probability, for any interval I ⊂ S_κ, we have

ϱ_x(I) = μ_{β_x}(I) + O(|I| (logN/d²)^{1/3} + N^{κ−1}).

The error is smaller than the left-hand side provided that |I| ≥ C N^{κ−1}.

The remainder of this section is devoted to the proof of Theorem 4.2. For the rest of this section, we assume that M is as in Theorem 4.2. To simplify notation, we consistently omit the z-dependence from quantities that depend on z ∈ S. Unless mentioned otherwise, from now on all statements are uniform in z ∈ S.

For the proof of Theorem 4.2, it will be convenient to single out the generic constant C from (1.18) by introducing a new constant D and replacing (1.18) with

√(D logN) ≤ d ≤ (logN)^{3/2}. 4.13

Our proof will always assume that C ≡ C_ν and D ≡ D_ν are large enough, and the constant C in (1.18) can be taken to be √D. For the rest of this section we assume that d satisfies (4.13) for some large enough D, depending on κ and ν. To guide the reader through the proof, in Fig. 9 we include a diagram of the dependencies of the various quantities appearing throughout this section.

Fig. 9.

Fig. 9

The dependency graph of the various quantities appearing in the proof of Theorem 4.2. An arrow from x to y means that y is chosen as a function of x. The independent parameters, κ and ν, are highlighted in blue

Typical vertices

We start by introducing the key tool in the proof of Theorem 4.2: a decomposition of the vertices into typical vertices and the complementary atypical vertices. Heuristically, a typical vertex x has close to d neighbours, and the spectral measure of M at x is well approximated by the semicircle law. In fact, in order to be applicable to the proof of Proposition 4.18 below, the notion of a typical vertex is somewhat more involved: when counting the neighbours of a vertex x, we also weight them with diagonal entries of a Green function. As a consequence, the notion of a typical vertex depends on the spectral parameter z, which in this subsection we allow to be any complex number with Im z ≥ N^{−1+κ}. The notion is defined precisely using the parameters Φ_x and Ψ_x from (4.18) below. The main result of this subsection is Proposition 4.8 below, which states, in the language of graphs when M = d^{−1/2} A with A the adjacency matrix of G(N,d/N), that most vertices are typical and that most neighbours of any vertex are typical. To state it, we introduce some notation.

Definition 4.5

For any subset T ⊂ [N], we define the minor M^{(T)} of M, obtained by removing the rows and columns indexed by T, as the (N−|T|)×(N−|T|) matrix

M^{(T)} := (M_{xy})_{x,y∈[N]∖T}. 4.14

If T consists only of one or two elements, T = {x} or T = {x,y}, then we abbreviate M^{(x)} and M^{(xy)} for M^{({x})} and M^{({x,y})}. We also abbreviate M^{(Tx)} for M^{(T∪{x})}. The Green function of M^{(T)} is denoted by

G^{(T)}(z) := (M^{(T)} − z)^{−1}. 4.15

We use the notation

Σ_x^{(T)} := Σ_{x∈[N]∖T}. 4.16

Definition 4.6

(Typical vertices). Let a>0 be a constant, and define the set of typical vertices

T_a := {x ∈ [N] : |Φ_x| ∨ |Ψ_x| ≤ φ_a},   φ_a := a (logN/d²)^{1/3}, 4.17

where

Φ_x := Σ_y^{(x)} (|H_{xy}|² − 1/N),   Ψ_x := Σ_y^{(x)} (|H_{xy}|² − 1/N) G_{yy}^{(x)}. 4.18

Note that this notion depends on the spectral parameter z, i.e. T_a ≡ T_a(z). The constant a will depend only on ν and κ; it is fixed in (4.23) below. The constant D from (4.13) is always chosen large enough, depending on a, so that φ_a ≤ 1.

The following proposition holds on the event {θ=1}, where we introduce the indicator function

θ := 1_{max_{x,y} |G_{xy}| ≤ Γ} 4.19

depending on some deterministic constant Γ ≥ 1. In (4.40) below, we shall choose a constant Γ ≡ Γ_κ, depending only on κ, such that the condition θ = 1 can be justified by a bootstrapping argument along the proof of Theorem 4.2 in Sect. 4.3 below.

Throughout the sequel we use the following generalization of Definition 2.1.

Definition 4.7

An event Ξ holds with very high probability on an event Ω if for all ν > 0 there exists C > 0 such that P(Ξ ∩ Ω) ≥ P(Ω) − C N^{−ν} for all N ∈ ℕ.

We now state the main result of this subsection.

Proposition 4.8

There are constants 0 < q ≤ 1, depending only on Γ, and a > 0, depending only on ν and q, such that, on the event {θ=1}, the following holds with very high probability.

  • (i)
    Most vertices are typical:
    |T_a^c| ≤ exp(q φ_a² d) + N exp(−2q φ_a² d).
  • (ii)
    Most neighbours of any vertex are typical:
    Σ_{y∈T_a^c}^{(x)} |H_{xy}|² ≤ C φ_a + C d⁴ exp(−q φ_a² d)
    uniformly for x[N].

For the interpretation of Proposition 4.8 (ii), one should think of the motivating example M = d^{−1/2} A, for which d Σ_{y∈T_a^c}^{(x)} |H_{xy}|² is the number of atypical neighbours of x, up to an error term O((d² + d logN)/N) by Remark 4.3.
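Proposition 4.8 (i) can be probed in a quick simulation. The sketch below (our own; it tests only the condition on Φ_x, since Ψ_x would require the Green functions of minors) measures the empirical fraction of vertices with |Φ_x| > φ_a for a = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 3000, 8.0

upper = np.triu(rng.random((N, N)) < d / N, 1).astype(float)
A = upper + upper.T
H = (A - d / N) / np.sqrt(d)

H2 = H ** 2
Phi = H2.sum(axis=1) - np.diag(H2) - (N - 1) / N   # Phi_x = sum over y != x of (|H_xy|^2 - 1/N)

phi_a = (np.log(N) / d ** 2) ** (1.0 / 3.0)        # phi_a with a = 1
frac_atypical = np.mean(np.abs(Phi) > phi_a)
print(phi_a, frac_atypical)                        # a small but nonzero fraction
```

At these parameters φ_a ≈ 0.5, and a minority of vertices fail the Φ-condition, consistent with the Poisson tail of the degrees at this small value of d.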

The remainder of Sect. 4.2 is devoted to the proof of Proposition 4.8. We need the following version of Ta defined in terms of H(T) instead of H.

Definition 4.9

For any x[N] and T[N], we define

Φ_x^{(T)} := Σ_y^{(Tx)} (|H_{xy}|² − 1/N),   Ψ_x^{(T)} := Σ_y^{(Tx)} (|H_{xy}|² − 1/N) G_{yy}^{(Tx)}

and

T_a^{(T)} := {x ∈ [N]∖T : |Φ_x^{(T)}| ∨ |Ψ_x^{(T)}| ≤ φ_a}.

Note that Φ_x^{(∅)} = Φ_x and Ψ_x^{(∅)} = Ψ_x with the definitions from (4.18), and hence T_a^{(∅)} = T_a. The proof of Proposition 4.8 relies on the following two lemmas.

Lemma 4.10

There are constants 0 < q ≤ 1, depending only on Γ, and a > 0, depending only on ν and q, such that, for any deterministic X ⊂ [N], the following holds with very high probability on the event {θ=1}.

  • (i)

    |X ∩ T_{a/2}^c| ≤ exp(q φ_a² d) + |X| exp(−2q φ_a² d).

  • (ii)

    If |X| ≤ exp(2q φ_a² d) then |X ∩ T_{a/2}^c| ≤ φ_a d.

For any deterministic x ∈ [N], the same estimates hold for (T_{a/2}^{(x)})^c instead of T_{a/2}^c and a random set X ⊂ [N]∖{x} that is independent of H^{(x)}.

Lemma 4.11

With very high probability, for any constant a>0 we have

θ |Φ_y − Φ_y^{(x)}| ≤ φ_a/2,   θ |Ψ_y − Ψ_y^{(x)}| ≤ φ_a/2

for all x,y[N].

Before proving Lemmas 4.10 and 4.11, we use them to establish Proposition 4.8.

Proof of Proposition 4.8

For (i), we choose X = [N] in Lemma 4.10 (i), using that T_{a/2} ⊂ T_a.

We now turn to the proof of (ii). By Lemma 4.11, on the event {θ=1} we have T_a^c ⊂ (T_{a/2}^{(x)})^c with very high probability and hence

θ Σ_{y∈T_a^c}^{(x)} |H_{xy}|² ≤ θ Σ_{y∈(T_{a/2}^{(x)})^c}^{(x)} |H_{xy}|²

with very high probability. Since |H_{xy}|² ≤ 1/d almost surely, we obtain the decomposition

Σ_{y∈(T_{a/2}^{(x)})^c}^{(x)} |H_{xy}|² ≤ Σ_{k=0}^{⌈logN⌉} Σ_{y∈(T_{a/2}^{(x)})^c}^{(x)} |H_{xy}|² 1_{d^{−k−2} ≤ |H_{xy}|² ≤ d^{−k−1}} + 1/N ≤ Σ_{k=0}^{⌈logN⌉} d^{−k−1} Σ_{y∈(T_{a/2}^{(x)})^c}^{(x)} 1_{|H_{xy}|² ≥ d^{−k−2}} + 1/N = Σ_{k=0}^{⌈logN⌉} d^{−k−1} |X_k ∩ (T_{a/2}^{(x)})^c| + 1/N, 4.20

where we defined

X_k := {y ≠ x : |H_{xy}|² ≥ d^{−k−2}}.

Since Σ_y^{(x)} |H_{xy}|² ≤ C d with very high probability by Definition 4.1 and Bennett's inequality, we conclude that

|X_k| ≤ C d^{k+3} 4.21

with very high probability.

We shall apply Lemma 4.10 to the sets X = X_k and (T_{a/2}^{(x)})^c. To that end, note that X_k ⊂ [N]∖{x} is a measurable function of the family (H_{xy})_{y∈[N]}, and hence independent of H^{(x)}. Thus, we may apply Lemma 4.10.

We define K := max{k ≥ 0 : C d^{k+3} ≤ e^{2q φ_a² d}} and decompose the sum on the right-hand side of (4.20) into

Σ_{k=0}^{⌈logN⌉} d^{−k−1} |X_k ∩ (T_{a/2}^{(x)})^c| = Σ_{k=0}^{K} d^{−k−1} |X_k ∩ (T_{a/2}^{(x)})^c| + Σ_{k=K+1}^{⌈logN⌉} d^{−k−1} |X_k ∩ (T_{a/2}^{(x)})^c| ≤ Σ_{k=0}^{K} d^{−k−1} φ_a d + Σ_{k=K+1}^{⌈logN⌉} d^{−k−1} (e^{q φ_a² d} + C d^{k+3} e^{−2q φ_a² d}) ≤ 2 φ_a + C d² e^{−q φ_a² d} logN

with very high probability. Here, we used Lemma 4.10 (ii) to estimate the summands if kK and Lemma 4.10 (i) and (4.21) for the other summands. Since logNd2, this concludes the proof of (ii).

The rest of this subsection is devoted to the proofs of Lemmas 4.10 and 4.11. Let θ be defined as in (4.19) for some constant Γ ≥ 1. For any subset T ⊂ [N], we define the indicator function

θ^{(T)} := 1_{max_{a,b∉T} |G_{ab}^{(T)}| ≤ 2Γ}.

Lemma 4.10 is a direct consequence of the following two lemmas.

The first one, Lemma 4.12, is mainly a decoupling argument for the random variables (Ψ_x)_{x∈[N]}. Indeed, the probability that any fixed vertex x is atypical is only small, o(1), and not very small, O(N^{−ν}); see (4.31) below. If the events of different vertices being atypical were independent, we could deduce that the probability that a sufficiently large set of vertices are atypical is very small. However, these events are not independent. The most serious breach of independence arises from the Green function G_{yy}^{(x)} in the definition of Ψ_x. In order to make this argument work, we have to replace the parameters Φ_x and Ψ_x with their decoupled versions Φ_x^{(T)} and Ψ_x^{(T)} from Definition 4.9. To that end, we have to estimate the errors involved, |Φ_x − Φ_x^{(T)}| and |Ψ_x − Ψ_x^{(T)}|. Unfortunately, the error bound on the latter is proportional to β_x (see (4.32)), which is not affordable for vertices of large degree. The solution to this issue involves the observation that if β_x is too large then the vertex is atypical by the condition on Φ_x, which allows us to disregard the size of Ψ_x. The details are given in the proof of Lemma 4.12 below.

The second one, Lemma 4.13, gives a priori bounds on the entries of the Green function G^{(T)}, which show that if the entries of G are bounded then so are those of G^{(T)} for |T| = o(d). For T of fixed size, this fact is a standard application of the resolvent identities from Lemma A.24. For our purposes, it is crucial that T can have size up to o(d), and such a quantitative estimate requires slightly more care.

Lemma 4.12

There is a constant 0 < q ≤ 1, depending only on Γ, such that, for any ν > 0, there is C > 0 such that the following holds for any fixed a > 0. If x ∈ [N] and T ⊂ [N] are deterministic with |T| ≤ φ_a d/C then

P(T ⊂ T_{a/2}^c, θ = 1) ≤ e^{−4q φ_a² d |T|} + C N^{−ν}, 4.22a
P(T ⊂ (T_{a/2}^{(x)})^c, θ^{(x)} = 1) ≤ e^{−4q φ_a² d |T|} + C N^{−ν}. 4.22b

Lemma 4.13

For any subset T ⊂ [N] satisfying |T| ≤ d/(C Γ²) we have θ ≤ θ^{(T)} with very high probability.

Before proving Lemma 4.12 and Lemma 4.13, we use them to show Lemma 4.10.

Proof of Lemma 4.10

Throughout the proof we abbreviate P_θ(Ξ) := P(Ξ ∩ {θ=1}). Let C be the constant from Lemma 4.12, and set

a := (Cν/(4q))^{1/3}. 4.23

For the proof of (ii), we choose k = ⌈φ_a d/C⌉ and estimate

P_θ(|X ∩ T_{a/2}^c| ≥ k) ≤ Σ_{Y⊂X : |Y|=k} P_θ(Y ⊂ T_{a/2}^c) ≤ binom(|X|, k) (e^{−4q φ_a² d k} + C N^{−ν}) ≤ (|X| e^{−4q φ_a² d})^k + C |X|^k N^{−ν} ≤ e^{−2q φ_a² d k} + C e^{2q φ_a² d k} N^{−ν} = N^{−2qa³/C} + C N^{2qa³/C − ν},

where in the second step we used (4.22a). Thus, by our choice of a, we have P_θ(|X ∩ T_{a/2}^c| ≥ k) ≤ (C+1) N^{−ν/2}, from which (ii) follows after renaming ν and C.

To prove (i) we estimate, for t > 0 and l ∈ ℕ,

P_θ(|X ∩ T_{a/2}^c| ≥ t) ≤ t^{−l} E[(Σ_{x∈X} 1_{x∈T_{a/2}^c} θ)^l] = t^{−l} Σ_{x_1,…,x_l∈X} P_θ(x_1 ∈ T_{a/2}^c, …, x_l ∈ T_{a/2}^c).

Choosing l = ⌈φ_a d/C⌉, regrouping the summation according to the partition of coincidences, and using Lemma 4.12 yield

P_θ(|X ∩ T_{a/2}^c| ≥ t) ≤ t^{−l} Σ_{π∈P_l} |X|^{|π|} (e^{−4q φ_a² d |π|} + C N^{−ν}) ≤ t^{−l} Σ_{k=0}^{l} binom(l, k) l^{l−k} |X|^k (e^{−4q φ_a² d k} + C N^{−ν}) ≤ ((l + |X| e^{−4q φ_a² d})^l + C N^{−ν} (l + |X|)^l) / t^l.

Here, P_l denotes the set of partitions of [l], and we denote by |π| the number of blocks in the partition π ∈ P_l. We also used that the number of partitions of l elements consisting of k blocks is bounded by binom(l, k) l^{l−k}. The last step follows from the binomial theorem. Therefore, using l = ⌈φ_a d/C⌉ and choosing t = e^{q φ_a² d} + |X| e^{−2q φ_a² d} as well as C and ν sufficiently large imply the bound in Lemma 4.10 (i) with very high probability, after renaming C and ν. Here we used (4.13).

To obtain the same statements for T_{a/2}^{(x)} instead of T_{a/2}, we estimate

P_θ(|X ∩ (T_{a/2}^{(x)})^c| ≥ t) ≤ E[P(|X ∩ (T_{a/2}^{(x)})^c| ≥ t, θ^{(x)} = 1 | X)] + P(θ^{(x)} = 0, θ = 1).

For both parts, (i) and (ii), the conditional probability P(|X ∩ (T_{a/2}^{(x)})^c| ≥ t, θ^{(x)} = 1 | X) can be bounded as before using (4.22b) instead of (4.22a) since, by the assumption on X, the set T_{a/2}^{(x)} and the indicator function θ^{(x)} are independent of X. The smallness of P(θ^{(x)} = 0, θ = 1) ≤ P(θ^{(x)} < θ) is a consequence of Lemma 4.13. This concludes the proof of Lemma 4.10.

The rest of this subsection is devoted to the proofs of Lemmas 4.11, 4.12, and 4.13.

Lemma 4.14

There is c ≡ c_ν > 0, depending on ν and κ, such that for any deterministic T ⊂ [N] satisfying |T| ≤ c d/Γ² we have with very high probability

θ max_{x,y∉T} |G_{xy}^{(T)}| ≤ 2Γ. 4.24

Moreover, under the same assumptions on T and for any u ∈ [N]∖T, we have

θ max_{x,y∉T∪{u}} |G_{xy}^{(Tu)} − G_{xy}^{(T)}| ≤ C d^{−1} 4.25

with very high probability.

Before proving Lemma 4.14, we use it to conclude the proof of Lemma 4.13.

Proof of Lemma 4.13

The bound (4.24) of Lemma 4.14 implies that θ = θ θ^{(T)} with very high probability. Since θ θ^{(T)} ≤ θ^{(T)}, the proof is complete.

Proof of Lemma 4.14

Throughout the proof we work on the event {θ=1} exclusively. After a relabelling of the vertices [N], we can suppose that T = [k] with k ≤ c d/Γ². For k ∈ ℕ, we set

Γ_k := 1 ∨ max_{x,y∉[k]} |G_{xy}^{([k])}|.

Note that Γ_0 ≤ Γ by the definition of θ.

We now show by induction on k that there is C>0 such that

Γ_k ≤ Γ_0 (1 + 16CΓ²/d)^k 4.26

for all k ∈ ℕ satisfying k ≤ d/(32CΓ²). Since 1 + x ≤ e^x, (4.26) implies that Γ_k ≤ e^{1/2} Γ_0 ≤ 2Γ. This directly implies (4.24) by the definition of θ.

The initial step with k=0 is trivially correct. For the induction step kk+1, we set T=[k] and u=k+1. The algebraic starting point for the induction step is the identities (A.32a) and (A.32b). We shall need the following two estimates. First, from Lemma A.23 and Cauchy–Schwarz, we get

(f/N) |G_{uy}^{(T)}| |Σ_a^{(Tu)} G_{xa}^{(Tu)}| ≤ f Γ_k √(Γ_{k+1}/(N Im z)) ≤ N^{−κ/3} Γ_k Γ_{k+1}, 4.27

where we used that Γ_{k+1} ≥ 1, f ≤ N^{κ/6}, and Im z ≥ N^{−1+κ}. Second, the first estimate of (A.28) in Corollary A.21 with ψ = Γ_{k+1}/d and γ = Γ_{k+1}/(N Im z), Lemma A.23, and Γ_{k+1} ≥ 1 imply

|Σ_a^{(Tu)} G_{xa}^{(Tu)} H_{au}| ≤ (C/√d) Γ_{k+1} 4.28

with very high probability.

Hence, owing to (A.32a) and (A.32b) with T=[k] and u=k+1, we get, respectively,

Γ_{k+1} ≤ Γ_k + (C/√d) Γ_k Γ_{k+1},   Γ_{k+1} ≤ Γ_k + (C/d) Γ_k Γ_{k+1}² 4.29

with very high probability.

By the induction assumption (4.26) we have C Γ_k/√d ≤ 2CΓ/√d ≤ 1/2, so that the first inequality in (4.29) implies the rough a priori bound

Γ_{k+1} ≤ 2 Γ_k 4.30

with very high probability. From the second inequality in (4.29) and (4.30), we deduce that

Γ_{k+1} ≤ Γ_k (1 + (4C/d) Γ_k²) ≤ Γ_k (1 + 16CΓ²/d),

where in the second step we used Γ_k ≤ 2Γ, by the induction assumption (4.26). This concludes the proof of (4.26), and, hence, of (4.24).

For the proof of (4.25), we start from (A.32b) and use (4.27), (4.28) as well as (4.24). This concludes the proof of Lemma 4.14.

The next result provides concentration estimates for the parameters Φx and Ψx.

Lemma 4.15

There is a constant 0 < q ≤ 1, depending only on Γ, such that the following holds. Let c > 0 be as in Lemma 4.14, and let x ∈ [N] and T ⊂ [N] be deterministic and satisfy |T| ≤ c d/Γ². Then for any 0 < ε ≤ 1 we have

θ^{(T)} P(|Φ_x^{(T)}| > ε | H^{(T)}) ≤ e^{−32qε²d},   θ^{(T)} P(|Ψ_x^{(T)}| > ε | H^{(T)}) ≤ e^{−32qε²d}, 4.31

and, for any u ∉ T,

Φ_x^{(Tu)} − Φ_x^{(T)} = O(1/d),   θ^{(T)} (Ψ_x^{(Tu)} − Ψ_x^{(T)}) = O((1 + β_x)/d) 4.32

with very high probability.

Before proving Lemma 4.15, we use it to conclude the proof of Lemma 4.11.

Proof of Lemma 4.11

Using (A.27b), we find that β_x ≤ C(1 + logN/d) with very high probability. The claim now follows from (4.32) with T = ∅ and the definition of φ_a, choosing the constant D in (4.13) large enough.

Proof of Lemma 4.15

Set q := 2^{−11}(eΓ)^{−2}. We get, using (A.27b) with r := ⌈32qε²d⌉ ≤ d, E|H_{xy}|² = 1/N, and Chebyshev's inequality,

θ^{(T)} P(|Ψ_x^{(T)}| > ε | H^{(T)}) = P(θ^{(T)} |Σ_y^{(Tx)} (|H_{xy}|² − E|H_{xy}|²) G_{yy}^{(Tx)}| > ε | H^{(T)}) ≤ ((8Γ/ε) √(r/d))^r ≤ e^{−32qε²d}

with very high probability for any 0<ε1. This proves the estimate on Ψx(T) in (4.31), and the estimate for Φx(T) is proved similarly.

We now turn to the proof of (4.32). If x = u then the statement is trivial. Thus, we assume x ≠ u. In this case we have

Φ_x^{(Tu)} − Φ_x^{(T)} = −(|H_{xu}|² − 1/N) 4.33

and the claim for Φ follows by Definition 4.1. Next,

Ψ_x^{(Tu)} − Ψ_x^{(T)} = Σ_y^{(Tux)} (|H_{xy}|² − 1/N)(G_{yy}^{(Tux)} − G_{yy}^{(Tx)}) − (|H_{xu}|² − 1/N) G_{uu}^{(Tx)}.

The last term multiplied by θ^{(T)} is estimated by O(Γ/d) since θ^{(T)} |G_{uu}^{(Tx)}| ≤ 4Γ by (4.30). We estimate the first term using (4.25) in Lemma 4.14, which yields

θ^{(T)} |Ψ_x^{(Tu)} − Ψ_x^{(T)}| ≤ Σ_y^{(Tux)} |H_{xy}|² (C/d) + (1/N) Σ_y^{(Tux)} (C/d) + O(Γ/d) = O((1 + β_x)/d)

with very high probability. This concludes the proof of Lemma 4.15.

Proof of Lemma 4.12

Throughout the proof we abbreviate P_θ(Ξ) := P(Ξ ∩ {θ=1}). We have

P(T ⊂ T_{a/2}^c, θ = 1) = P_θ(∩_{x∈T} Ω_x),

where we defined the event

Ω_x := {|Φ_x| > φ_a/2} ∪ {|Ψ_x| > φ_a/2} = {|Φ_x| > φ_a/2} ∪ {|Φ_x| ≤ φ_a/2, |Ψ_x| > φ_a/2}.

We have the inclusions

{|Φ_x| > φ_a/2} ⊂ {|Φ_x^{(T)}| > φ_a/4} ∪ {|Φ_x − Φ_x^{(T)}| > φ_a/4},   {|Φ_x| ≤ φ_a/2, |Ψ_x| > φ_a/2} ⊂ {|Ψ_x^{(T)}| > φ_a/4} ∪ {|Φ_x| ≤ φ_a/2, |Ψ_x − Ψ_x^{(T)}| > φ_a/4}.

Defining the event

Ω_x^{(T)} := {|Φ_x^{(T)}| > φ_a/4} ∪ {|Ψ_x^{(T)}| > φ_a/4},

we therefore deduce by a union bound that

P_θ(∩_{x∈T} Ω_x) ≤ P_θ(∩_{x∈T} Ω_x^{(T)}) + Σ_{x∈T} P_θ(|Φ_x − Φ_x^{(T)}| > φ_a/4) + Σ_{x∈T} P_θ(|Φ_x| ≤ φ_a/2, |Ψ_x − Ψ_x^{(T)}| > φ_a/4). 4.34

We begin by estimating the first term of (4.34). To that end, we observe that, conditioned on H^{(T)}, the family (Ω_x^{(T)})_{x∈T} is independent. Using Lemma 4.13 we therefore get

P_θ(∩_{x∈T} Ω_x^{(T)}) ≤ E[θ^{(T)} P(∩_{x∈T} Ω_x^{(T)} | H^{(T)})] + C N^{−ν} = E[θ^{(T)} Π_{x∈T} P(Ω_x^{(T)} | H^{(T)})] + C N^{−ν},

and we estimate each factor using (4.31) from Lemma 4.15 as

θ^{(T)} P(Ω_x^{(T)} | H^{(T)}) ≤ θ^{(T)} P(|Φ_x^{(T)}| > φ_a/4 | H^{(T)}) + θ^{(T)} P(|Ψ_x^{(T)}| > φ_a/4 | H^{(T)}) ≤ 2 e^{−8q φ_a² d} ≤ e^{−4q φ_a² d},

where in the last step we used that e^{−4q φ_a² d} ≤ 1/2. We conclude that

P_θ(∩_{x∈T} Ω_x^{(T)}) ≤ e^{−4q φ_a² d |T|} + C N^{−ν}.

Next, we estimate the second term of (4.34). After renaming the vertices, we may assume that T = [k] with k ≤ φ_a d/C, so that we get from (4.32) in Lemma 4.15 (using that φ_a d/C ≤ c d/Γ², provided that D in (4.13) is chosen large enough, depending on a), by telescoping and recalling Lemma 4.13,

|Φ_x − Φ_x^{(T)}| ≤ Σ_{i=0}^{k−1} |Φ_x^{([i])} − Φ_x^{([i+1])}| ≤ O(k/d) ≤ φ_a/4 4.35

with very high probability on the event {θ=1}, if the constant C in the upper bound φ_a d/C on k is large enough.

The last term of (4.34) is estimated analogously, with the additional observation that, by the definition of Φ_x and since φ_a/2 ≤ 1/2, on the event {|Φ_x| ≤ φ_a/2} we have β_x ≤ 2. Thus, on the event {θ=1} ∩ {|Φ_x| ≤ φ_a/2} we have, by Lemma 4.13,

|Ψ_x − Ψ_x^{(T)}| ≤ Σ_{i=0}^{k−1} |Ψ_x^{([i])} − Ψ_x^{([i+1])}| ≤ O(k(1 + β_x)/d) ≤ φ_a/4 4.36

with very high probability, for large enough C in the upper bound on k. We conclude that the last two terms of (4.34) are bounded by C N^{−ν}, and the proof of (4.22a) is therefore complete.

The proof of (4.22b) is identical, replacing the matrix M with the matrix M^{(x)}.

Self-consistent equation and proof of Theorem 4.2

In this subsection, we derive an approximate self-consistent equation for the Green function G, and use it to prove Theorem 4.2. The key ingredient is Proposition 4.18 below, which provides a bootstrapping bound stating that if max_x |G_{xx} − m_{β_x}| is smaller than some constant then it is in fact bounded by φ_a with very high probability. It is proved by first deriving and solving a self-consistent equation for the entries G_{xx} indexed by typical vertices x ∈ T_a, and then using the obtained bounds to analyse G_{xx} for atypical vertices x ∈ T_a^c.

We begin with a simple algebraic observation.

Lemma 4.16

(Approximate self-consistent equation). For any x ∈ [N] and z ∈ ℂ₊, we have

1/G_{xx} = −z − Σ_y^{(x)} |H_{xy}|² G_{yy}^{(x)} + Y_x,

where we introduced the error term

Y_x := H_{xx} + f/N − Σ_{a≠b}^{(x)} H_{xa} G_{ab}^{(x)} H_{bx} − Σ_{a,b}^{(x)} ((f/N)(H_{xa} G_{ab}^{(x)} + G_{ab}^{(x)} H_{bx}) + (f²/N²) G_{ab}^{(x)}). 4.37

Proof

The lemma follows directly from (A.31) and the definition (4.1).

Let θ be defined as in (4.19) with some Γ ≥ 1. The following lemma provides a priori bounds on the error terms appearing in the self-consistent equation.

Lemma 4.17

For all z ∈ ℂ with Im z ≥ N^{−1+κ}, with very high probability,

θ max_x |Y_x| ≤ C d^{−1/2}, 4.38a
θ max_{x≠y} |G_{xy}| ≤ C d^{−1/2}, 4.38b
θ max_a max_{x,y≠a} |G_{xy} − G_{xy}^{(a)}| ≤ C d^{−1}. 4.38c

Proof

We first estimate Y_x. From Definition 4.1, the upper bound on f, and (4.13), we conclude that |H_{xx}| + f/N = O(d^{−1/2}) almost surely. Moreover, the Cauchy–Schwarz inequality, Lemma A.23, (4.24), and the upper bound on f imply

θ (f²/N²) |Σ_{a,b}^{(x)} G_{ab}^{(x)}| ≤ C_κ f²/√(N Im z) ≤ C_κ N^{−κ/6} ≤ C/√d,

for some constant Cκ depending only on κ. Next, we use the first estimate of (A.28), Lemma A.23, and the upper bound on f to conclude that

(f/N) θ |Σ_{a,b}^{(x)} H_{xa} G_{ab}^{(x)}| + (f/N) θ |Σ_{a,b}^{(x)} G_{ab}^{(x)} H_{bx}| ≤ C √d f/√(N Im z) ≤ C √d N^{−κ/3} ≤ C/√d

with very high probability (compare the proof of (4.28)). Moreover, from Lemma A.23 and the second estimate of (A.28) we deduce that the remaining term in (4.37) is O(d^{−1/2}). This concludes the proof of (4.38a).

For the proof of (4.38b), we start from (A.29) and use M_{xa} = H_{xa} + f/N to obtain

G_{xy} = −G_{xx} Σ_a^{(xy)} H_{xa} G_{ay}^{(x)} − G_{xx} H_{xy} G_{yy}^{(x)} − (f/N) G_{xx} Σ_a^{(x)} G_{ay}^{(x)}.

Similar arguments as in (4.28) and (4.27) show that the first and third terms, respectively, are bounded by C d^{−1/2} with very high probability. The same bound for the second term follows from Definition 4.1 and (4.24) in Lemma 4.14. This proves (4.38b).

Finally, (4.38c) follows directly from (4.25).

Proposition 4.18 below is the main tool behind the proof of Theorem 4.2. To formulate it, we introduce the z-dependent random control parameters

Λ_d := max_x |G_{xx} − m_{β_x}|,   Λ_o := max_{x≠y} |G_{xy}|,   Λ := Λ_d ∨ Λ_o,

and, for some constant λ ≤ 1, the indicator function

ϕ := 1_{Λ ≤ λ}. 4.39

Proposition 4.18 below provides a strong bound on Λ provided that the a priori condition ϕ = 1 is satisfied. Each step of its proof is valid provided λ is chosen small enough, depending on κ. Note that, owing to (A.4), there is a deterministic constant Γ, depending only on κ, such that, for all z ∈ S, we have

ϕ max_{x,y} |G_{xy}| ≤ Γ. 4.40

In particular, if Γ in the definition (4.19) of θ is chosen as in (4.40) then

ϕ ≤ θ. 4.41

Proposition 4.18

There exists λ > 0, depending only on κ, such that, for all z ∈ S, with very high probability,

ϕ Λ ≤ C φ_a.

For the proof of Proposition 4.18, we employ the results of the previous subsections to show that the diagonal entries (G_{xx})_{x∈T_a} of the Green function of M at the typical vertices satisfy the approximate self-consistent equation (4.42) below. This is a perturbed version of the relation (4.6) for the Stieltjes transform m of the semicircle law, which holds for all z ∈ ℂ₊. The stability estimate (4.43) below then implies that G_{xx} and m are close for all x ∈ T_a. From this we shall, in a second step, deduce that G_{xx} is close to m_{β_x} for all x; this step also includes the atypical vertices.

The next lemma is a relatively standard stability estimate of self-consistent equations in random matrix theory (compare e.g. to [27, Lemma 3.5]). It is proved in Appendix A.9.

Lemma 4.19

(Stability of the self-consistent equation for m). Let X be a finite set, κ > 0, and let z ∈ ℂ₊ satisfy |Re z| ≤ 2 − κ. We assume that, for two vectors (g_x)_{x∈X}, (ε_x)_{x∈X} ∈ ℂ^X, the identities

1/g_x = −z − (1/|X|) Σ_{y∈X} g_y + ε_x 4.42

hold for all x ∈ X. Then there are constants b, C ∈ (0,∞), depending only on κ, such that if max_{x∈X} |g_x − m(z)| ≤ b then

max_{x∈X} |g_x − m(z)| ≤ C max_{x∈X} |ε_x|, 4.43

where m(z) satisfies (4.6).
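The content of Lemma 4.19 can be illustrated numerically: solving the perturbed system (4.42) with small ε_x produces a solution within O(max_x |ε_x|) of m(z). A self-contained Python sketch (our own; the fixed-point iteration, the point z, and the noise size are illustrative choices, not part of the proof):

```python
import numpy as np

def m_semicircle(z):
    r = np.sqrt(z * z - 4.0 + 0j)
    m1, m2 = (-z + r) / 2, (-z - r) / 2
    return m1 if m1.imag > 0 else m2

rng = np.random.default_rng(2)
n, z = 500, 0.5 + 0.5j
eps = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# solve 1/g_x = -z - mean(g) + eps_x by fixed-point iteration started at m(z)
g = np.full(n, m_semicircle(z))
for _ in range(200):
    g = 1.0 / (-z - g.mean() + eps)

residual = np.abs(1.0 / g + z + g.mean() - eps).max()
deviation = np.abs(g - m_semicircle(z)).max()
print(residual)                       # the system (4.42) is solved to high accuracy
print(deviation / np.abs(eps).max())  # stability constant C of order one
```

The iteration converges here because |m(z)|² < 1 at this z; the lemma itself is a deterministic stability statement and does not rely on this scheme.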

Proof of Proposition 4.18

Throughout the proof, we work on the event {ϕ=1}, which, by (4.41), is contained in the event {θ=1}. Fix a as in Proposition 4.8. Throughout the proof we use that d^{−1/2} ≤ φ_a, by the upper bound in (4.13). Owing to (4.38b), it suffices to estimate Λ_d. Let b be chosen as in Lemma 4.19, and set λ := b/2 in the definition (4.39) of ϕ.

For the analysis of G_{xx} we distinguish the two cases x ∈ T_a and x ∉ T_a.

If x ∈ T_a then we write, using Lemma 4.16 and the definition (4.18) of Ψ_x,

1/G_{xx} = −z − Σ_y^{(x)} |H_{xy}|² G_{yy}^{(x)} + Y_x = −z − (1/N) Σ_y^{(x)} G_{yy}^{(x)} + Y_x − Ψ_x = −z − (1/|T_a|) Σ_{y∈T_a} G_{yy} + ε_x,

where the error term εx satisfies

|ε_x| = O(d^{−1/2} + (1/N) exp(q φ_a² d) + exp(−2q φ_a² d) + φ_a) = O(φ_a) 4.44

with very high probability. Here, in the first step of (4.44) we used (4.38a), (4.38c), Proposition 4.8 (i), and the bound on Ψ_x in the definition (4.17) of T_a, and in the second step of (4.44) we used that φ_a² d = a² (logN)^{2/3} d^{−1/3} and (4.13) imply (logN)^{1/6}/C ≤ φ_a² d ≤ C (logN)^{1/2}, which yields

(1/N) exp(q φ_a² d) + exp(−2q φ_a² d) ≤ C d^{−10} ≤ φ_a. 4.45

Thus, for (G_{xx})_{x∈T_a} we get the self-consistent equation (4.42) with g_x = G_{xx} and X = T_a. Moreover, by the bound on Φ_x in the definition (4.17) of T_a, we have β_x = 1 + O(φ_a). Hence, by (A.5), the assumption ϕ = 1, and (4.13), we find that

|G_{xx} − m| ≤ |G_{xx} − m_{β_x}| + |m_{β_x} − m| ≤ b,

choosing the constant D in (4.13) large enough that the right-hand side of (A.5), i.e. C|β_x − 1|, is bounded by b/2. Hence Lemma 4.19 is applicable and we obtain |G_{xx} − m| = O(max_{y∈T_a} |ε_y|). Therefore, we obtain

|G_{xx} − m_{β_x}| ≤ |G_{xx} − m| + |m − m_{β_x}| ≤ C φ_a 4.46

with very high probability. This concludes the proof in the case x ∈ T_a.

What remains is the case x ∉ T_a. In that case, we obtain from Lemma 4.16 that

1/G_{xx} = −z − Σ_{y∈T_a}^{(x)} |H_{xy}|² G_{yy}^{(x)} − Σ_{y∈T_a^c}^{(x)} |H_{xy}|² G_{yy}^{(x)} + Y_x = −z − β_x m + ε_x, 4.47

where the error term ε_x satisfies ε_x = O((1 + β_x) φ_a) with very high probability. Here we used (4.38a) as well as (4.38c), (4.45), (4.46), and Proposition 4.8 (ii) twice to conclude that

Σ_{y∈T_a}^{(x)} |H_{xy}|² G_{yy}^{(x)} = β_x m + O(β_x φ_a),   Σ_{y∈T_a^c}^{(x)} |H_{xy}|² G_{yy}^{(x)} = O(φ_a + d⁴ exp(−q φ_a² d)) = O(φ_a)

with very high probability. From (4.7) and (4.47) we therefore get

G_{xx} − m_{β_x} = −m_{β_x} · (1/(−z − β_x m + ε_x)) · ε_x. 4.48

To estimate the right-hand side of (4.48), we consider the cases β_x ≤ 1 and β_x > 1 separately.

If β_x ≤ 1 then, by (A.4), the first factor of (4.48) is bounded by C. Thus, by (4.7), the second factor is bounded by 2C provided that |ε_x| ≤ 1/(2C), which is ensured by choosing D in (4.13) large enough, and the third factor is bounded by C φ_a. This yields the claim.

If β_x > 1, we use that Im m ≥ c for some constant c > 0 depending only on κ and L. Thus, the right-hand side of (4.48) is bounded in absolute value, again using (A.4), by C · (β_x c/2)^{−1} · C β_x φ_a ≤ C φ_a, provided that D in (4.13) is chosen large enough. This yields the claim.

Proof of Theorem 4.2

After possibly increasing L, we can assume that the constant L in the definition of S in (4.3) satisfies L ≥ 2/λ + 1, where λ is chosen as in Proposition 4.18.

We first show that (4.10) follows from (4.9). Indeed, averaging the estimate on |G_{xx} − m_{β_x}| in (4.9) over x ∈ [N], using that m_{β_x} = m + O(φ_a) for x ∈ T_a by (A.5), and estimating the summands in T_a^c by Proposition 4.8 (i) and (A.4) yield (4.10), due to (4.45).

What remains is the proof of (4.9). Let z_0 ∈ S, set J := min{j ∈ ℕ_0 : Im z_0 + j N^{−3} ≥ 2/λ}, and define z_j := z_0 + i j N^{−3} for j ∈ [J]. We shall prove the bound in (4.9) at z = z_j by induction on j, starting from j = J and going down to j = 0. Since |G_{xy}(z)| ≤ (Im z)^{−1} and |m_{β_x}(z)| ≤ (Im z)^{−1} for all x, y ∈ [N], we have max_x |G_{xx}(z_J) − m_{β_x}(z_J)| ≤ λ and hence ϕ(z_J) = 1.

For the induction step j → j−1, suppose that ϕ(z_j) = 1 with very high probability. Then, by Proposition 4.18, we deduce that Λ(z_j) ≤ C φ_a with very high probability. Since G_{xy} and m_{β_x} are Lipschitz-continuous on S with constant N², we conclude that Λ(z_{j−1}) ≤ C φ_a + N^{−1} with very high probability. If N is sufficiently large and φ_a is sufficiently small, the latter obtained by choosing D in (4.13) large enough, then we deduce that Λ(z_{j−1}) ≤ λ with very high probability and hence ϕ(z_{j−1}) = 1 with very high probability. Using Proposition 4.18, this concludes the induction step, and hence establishes Λ(z_0) ≤ C φ_a with very high probability. Here we used that the intersection of J events of very high probability is an event of very high probability, since J ≤ C N³, where C depends on κ.

Acknowledgements

The authors would like to thank Simone Warzel for helpful discussions. The authors gratefully acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 715539_RandMat) and from the Swiss National Science Foundation through the NCCR SwissMAP grant.

Appendices

In the following appendices we collect various tools and explanations used throughout the paper.

Simulation of the ℓ∞-norms of eigenvectors

In Fig. 10 we depict a simulation of the ℓ∞-norms of the eigenvectors of the adjacency matrix A/√d of the Erdős–Rényi graph G(N,d/N) restricted to its giant component. We take d = b logN with N = 10000 and b = 0.6. The eigenvalues and eigenvectors are drawn using a scatter plot, where the horizontal coordinate is the eigenvalue and the vertical coordinate is the ℓ∞-norm of the associated eigenvector. The higher a dot is located, the more localized the associated eigenvector is. Complete delocalization corresponds to a vertical coordinate of order N^{−1/2} = 0.01, and localization at a single site to a vertical coordinate 1. Note the semilocalization near the origin and outside of [−2,2]. The two semilocalized blips around ±0.4 are a finite-N effect and tend to 0 as N is increased. The Perron–Frobenius eigenvalue is an outlier near 2.8 with a delocalized eigenvector.
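The data behind Fig. 10 can be regenerated along the following lines. This Python sketch is our own reconstruction (with a smaller N for speed, the full graph instead of the giant component, and no plotting); it computes the scatter coordinates, eigenvalue versus ℓ∞-norm of the eigenvector, for M = A/√d:

```python
import numpy as np

rng = np.random.default_rng(3)
N, b = 2000, 0.6
d = b * np.log(N)

upper = np.triu(rng.random((N, N)) < d / N, 1).astype(float)
A = upper + upper.T

evals, evecs = np.linalg.eigh(A / np.sqrt(d))
sup_norms = np.abs(evecs).max(axis=0)   # one ell^infinity norm per (unit) eigenvector

# scatter data: (evals[i], sup_norms[i]); localized eigenvectors sit high in the plot
print(evals.min(), evals.max())
print(sup_norms.min(), sup_norms.max())
```

For the figure itself one would restrict A to its giant component and render the pairs with a scatter plot.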

Spectral analysis of the infinite rooted (p, q)-regular tree

In this appendix we describe the spectrum, eigenvectors, and spectral measure of the following simple graph.

Definition A.1

For p, q ∈ ℕ we define T_{p,q} as the infinite rooted (p, q)-regular tree, whose root has p children and all other vertices have q children.

A convenient way to analyse the adjacency matrix of T_{p,q} is by tridiagonalizing it around its root. To that end, we first review the tridiagonalization5 of a general symmetric matrix X ∈ ℝ^{N×N} around a vertex x ∈ [N]; we refer to [10, Appendices A–C] for details. Let r ∈ ℕ and x ∈ [N]. Suppose that the vectors 1_x, X1_x, X²1_x, …, X^r 1_x are linearly independent, and denote by g_0, g_1, g_2, …, g_r the associated orthonormalized sequence. Then the tridiagonalization of X around x up to radius r is the (r+1)×(r+1) matrix Z = (Z_{ij})_{i,j=0}^{r} with Z_{ij} := ⟨g_i, X g_j⟩. By construction, Z is tridiagonal and conjugate to the restriction of X to the subspace Span{g_0, g_1, …, g_r}.

Let now X = A ≡ A_{T_{p,q}} be the adjacency matrix of T_{p,q}, whose root we denote by o. Then it is easy to see that g_i = 1_{S_i(o)}/‖1_{S_i(o)}‖, where S_i(o) denotes the sphere of radius i around o, and the tridiagonalization of A around the root up to radius ∞ is the infinite matrix √q Z(p/q), where

Z(α) := the symmetric tridiagonal matrix on ℓ²(ℕ) with zero diagonal and off-diagonal entries (√α, 1, 1, 1, …). A.1

If α > 2, a transfer matrix analysis (see [10, Appendix C]) shows that Z(α) has precisely two eigenvalues in ℝ∖[−2,2], which are ±Λ(α). The associated eigenvectors are ((±1)^i u_i)_{i∈ℕ}, where u_0 > 0 and u_i := √α (α−1)^{−i/2} u_0 for i ≥ 1. Note that the eigenvector components are exponentially decaying since α > 2, and hence u_0 can be chosen so that the eigenvectors are normalized. Going back to the original vertex basis of T_{p,q}, setting α = p/q, we conclude that the adjacency matrix A has eigenvalues ±√q Λ(α) with associated eigenvectors Σ_{i∈ℕ} (±1)^i u_i 1_{S_i(o)}/‖1_{S_i(o)}‖.
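This transfer-matrix picture is easy to confirm numerically: truncating Z(α) to a large finite block, the top eigenvalue agrees with Λ(α) = α/√(α−1) and the eigenvector components decay geometrically with ratio (α−1)^{−1/2}. A Python sketch (our own; the truncation size is arbitrary, and the truncation error is negligible because of the exponential decay):

```python
import numpy as np

alpha, n = 3.0, 400
off = np.concatenate(([np.sqrt(alpha)], np.ones(n - 2)))   # (sqrt(alpha), 1, 1, ...)
Z = np.diag(off, 1) + np.diag(off, -1)                     # truncation of Z(alpha)

evals, evecs = np.linalg.eigh(Z)
lam, v = evals[-1], evecs[:, -1]

print(lam, alpha / np.sqrt(alpha - 1))       # top eigenvalue vs Lambda(alpha)
print(v[10] / v[9], 1 / np.sqrt(alpha - 1))  # geometric decay ratio for i >= 1
```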

Next, we show that the measure μ_α from (4.12) is the spectral measure at the root of A_{T_{p,q}}/√q and the spectral measure at 0 of (A.1).

Lemma A.2
  • (i)

    For any α ≥ 0 the measure μ_α is the spectral measure of Z(α) at 0.

  • (ii)

    For any p, q ∈ ℕ the measure μ_{p/q} is the spectral measure of the normalized adjacency operator A_{T_{p,q}}/√q at the root.

Proof

For (i), define the vector e_0 = (1, 0, 0, …) ∈ ℓ²(ℕ). The spectral measure of Z(α) with respect to e_0 is characterized by its Stieltjes transform

⟨e_0, (Z(α) − z)^{−1} e_0⟩ = 1/(−z − α ⟨e_0, (Z(1) − z)^{−1} e_0⟩). A.2

Here, we used Schur's complement formula on the Green function (Z(α) − z)^{−1}, observing that the minor of Z(α) obtained by removing the zeroth row and column is Z(1). Setting α = 1 in (A.2) and recalling the defining relation (4.6) of the Stieltjes transform m of the semicircle law, we conclude that ⟨e_0, (Z(1) − z)^{−1} e_0⟩ = m(z), and hence from (4.7) and (A.2) we get ⟨e_0, (Z(α) − z)^{−1} e_0⟩ = m_α(z), as desired.

The proof of (ii) is analogous. Denote the root of T_{p,q} by o. Again using Schur's complement formula to remove the oth row and column of H = A_{T_{p,q}} / \sqrt{q}, we deduce that

\langle 1_o, (H - z)^{-1} 1_o \rangle = \frac{1}{-z - \frac{p}{q}\, \langle 1_{o'}, (A_{T_{q,q}} / \sqrt{q} - z)^{-1} 1_{o'} \rangle}\,, \qquad o' \text{ the root of } T_{q,q}, \qquad A.3

where we used that Tp,q from which o has been removed consists of p disconnected copies of Tq,q. Setting p=q in (A.3) and comparing to (4.6) implies that the left-hand side of (A.3) is equal to m(z) if p=q, and hence (ii) for general p follows from (4.7).

Finally, we remark that the equality of the spectral measures of Z(p/q) and A_{T_{p,q}} / \sqrt{q} can also be seen directly, by noting that Z(p/q) is the tridiagonalization of A_{T_{p,q}} / \sqrt{q} around the root o.
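As a sanity check of Lemma A.2, one can compare the Green function at 0 of a large truncation of Z(α) with the Schur-complement formula 1/(−z − α m(z)), where m is the semicircle Stieltjes transform; this closed form is an assumption of the sketch below, consistent with the proof above:

```python
import numpy as np

r, alpha, z = 1500, 1.7, 0.5 + 0.1j
off = np.ones(r); off[0] = np.sqrt(alpha)
Z = np.diag(off, 1) + np.diag(off, -1)
G00 = np.linalg.inv(Z - z * np.eye(r + 1))[0, 0]      # Green function at 0
m = (-z + np.sqrt(z * z - 4)) / 2                     # semicircle Stieltjes transform
assert m.imag > 0 and abs(m - 1 / (-z - m)) < 1e-12   # m solves m = 1/(-z - m)
assert abs(G00 - 1 / (-z - alpha * m)) < 1e-6         # Schur-complement formula
```

The truncation error of the Green function decays exponentially in r, so the agreement is far better than the stated tolerance.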

We conclude with some basic estimates for the Stieltjes transform mα of μα used in Sect. 4.

Lemma A.3

For each \kappa > 0 there is a constant C > 0 depending only on \kappa such that for all z \in \mathbf{S} and all \alpha \geq 0 we have

|m_\alpha(z)| \leq C, \qquad A.4
|m_\alpha(z) - m(z)| \leq C\, |\alpha - 1|. \qquad A.5
Proof

The simple facts follow directly from the corresponding properties of the semicircle law and its Stieltjes transform m (see e.g. [16, Lemma 3.3]). We leave the details to the reader.

Bounds on adjacency matrices of trees

In this appendix we derive estimates on the operator norm of the adjacency matrix of a tree. We start with a standard estimate on the operator norm of a graph.

Lemma A.4

Let T be a graph whose vertices have degree at most q + 1 for some q \geq 1. Then \|A_T\| \leq q + 1, and if in addition T is a tree then \|A_T\| \leq 2\sqrt{q}.

Proof

The first claim is obvious by the Schur test for the operator norm. To prove the second claim, choose a root o and denote by C_x the set of children of the vertex x. Then for any vector w = (w_x) we have

\langle w, A_T w \rangle = 2 \sum_x \sum_{y \in C_x} w_x w_y \leq \sum_x \sum_{y \in C_x} \Big( \frac{w_x^2}{\sqrt{q}} + \sqrt{q}\, w_y^2 \Big) \leq \sqrt{q}\, \|w\|^2 + \sqrt{q}\, \|w\|^2 = 2\sqrt{q}\, \|w\|^2,

where in the second step we used Young's inequality and in the third step that each vertex in the sum appears once as a child and at most q times as a parent. This concludes the proof.

The same proof shows that if T is a rooted tree whose root has at most p children and all other vertices at most q children, then \|A_T\| \leq \sqrt{q}\, (p/q \vee 2). This bound is sharp for p \leq 2q but not for p > 2q. The sharp bound in the latter case is established in the following result.

Lemma A.5

Let p, q \in \mathbb{N}. Let T be a tree whose root has p children and all the other vertices have at most q children. Then the adjacency matrix A_T of T satisfies \|A_T\| \leq \sqrt{q}\, \Lambda(p/q \vee 2).

Proof

Let r \in \mathbb{N} and denote by T_{p,q}(r) the rooted (p, q)-regular tree of depth r + 1, whose root x has p children, all vertices at distance 1 \leq i \leq r from x have q children, and all vertices at distance r + 1 from x are leaves. For large enough r, we can exhibit T as a subgraph of T_{p,q}(r). By the Perron–Frobenius theorem,

\|A_T\| = \langle w, A_T w \rangle \qquad A.6

for some normalized eigenvector w whose entries are nonnegative. We extend w to a vector indexed by the vertex set of T_{p,q}(r) by setting w_y = 0 for y not in the vertex set of T. Clearly,

\langle w, A_T w \rangle \leq \langle w, A_{T_{p,q}(r)} w \rangle. \qquad A.7

Abbreviating A \equiv A_{T_{p,q}(r)}, it therefore remains to estimate the right-hand side of (A.7) for large enough r. To that end, we define Z as the tridiagonalization of A around the root up to radius r (see Appendix A.2). The associated orthonormal set g_0, g_1, \ldots, g_r is given by g_i = 1_{S_i(x)} / \| 1_{S_i(x)} \|, and Z = \sqrt{q}\, Z_r(p/q), where Z_r(\alpha) is the upper-left (r+1) \times (r+1) block of (A.1). We introduce the orthogonal projections P_0 := g_0 g_0^* and P := \sum_{i=0}^r g_i g_i^*. Clearly, P_0 P = P_0 and hence (1 - P)(1 - P_0) = 1 - P. For large enough r the vectors g_r and w have disjoint support, and hence (1 - P) A P w = 0, since A g_i \in \operatorname{Span}\{g_{i-1}, g_{i+1}\} for i < r. Thus we have

\langle w, A w \rangle = \langle P w, A P w \rangle + \langle (1 - P) w, A (1 - P) w \rangle. \qquad A.8

From [10, Appendices B and C] we find

\lim_{r \to \infty} \|P A P\| = \lim_{r \to \infty} \|Z\| = \sqrt{q}\, \Lambda(p/q \vee 2). \qquad A.9

Moreover, the operator (1 - P_0) A (1 - P_0) is the adjacency matrix of a forest whose vertices have degree at most q + 1. By Lemma A.4, we therefore obtain \|(1 - P_0) A (1 - P_0)\| \leq 2\sqrt{q}. From (A.8) we therefore get

\langle w, A w \rangle \leq \|P A P\|\, \|P w\|^2 + 2\sqrt{q}\, \|(1 - P) w\|^2 \leq \|P A P\| \vee 2\sqrt{q}.

By (A.6) and (A.7), the proof is complete.

Degree distribution and number of resonant vertices

In this appendix we record some basic facts about the distribution of degrees of the graph G(N,d/N), and use them to estimate the number of resonant vertices Wλ,δ.

The following is a quantitative version of the Poisson approximation of a binomial random variable.

Lemma A.6

(Poisson approximation) If D is a random variable with law \operatorname{Binom}(n, p) then for k \leq \sqrt{n} and p \leq 1/\sqrt{n} we have

\mathbb{P}(D = k) = \frac{(pn)^k}{k!}\, e^{-pn} \Big( 1 + O\Big( \frac{k^2}{n} + p^2 n \Big) \Big).
Proof

Plugging the estimates (1 - p)^{n - k} = e^{(n - k) \log(1 - p)} = e^{-np + O(pk + p^2 n)} and

\frac{n!}{(n - k)!} = n^k \prod_{i=0}^{k-1} \Big( 1 - \frac{i}{n} \Big) = n^k\, e^{\sum_{i=0}^{k-1} \log(1 - i/n)} = n^k\, e^{O(k^2/n)},

into \mathbb{P}(D = k) = \frac{n!}{k!\,(n - k)!}\, p^k (1 - p)^{n - k} yields the claim, since pk \leq k^2/n + p^2 n.
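The quality of this approximation is easy to probe numerically; the following sketch, with hypothetical parameter values, checks that the relative error is of the claimed size k²/n + p²n:

```python
from math import comb, exp, factorial

n, p, k = 10_000, 3e-4, 5
binom = comb(n, k) * p**k * (1 - p)**(n - k)
poisson = (p * n)**k / factorial(k) * exp(-p * n)     # Poisson(pn) mass at k
rel_err = abs(binom / poisson - 1)
assert rel_err < 5 * (k**2 / n + p**2 * n)            # error of the claimed order
```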

Lemma A.7

For G(N, d/N) we have \alpha_x \leq C\big( 1 + \frac{\log N}{d} \big) for all x \in [N] with very high probability.

Proof

This is a simple application of Bennett’s inequality; see [10, Lemma 3.3] for details.

Next, we recall some standard facts about the distribution of the degrees. Define the function f_d : [1, \infty) \to [\tfrac{1}{2} \log(2\pi d), \infty) through

f_d(\alpha) := d\, (\alpha \log \alpha - \alpha + 1) + \frac{1}{2} \log(2\pi \alpha d), \qquad A.10

which is bijective and increasing. For its interpretation, we note that if Y has law \operatorname{Poisson}(d) then by Stirling's formula we have \mathbb{P}(Y = k) = \exp(-f_d(k/d) + O(1/k)) for any k \in \mathbb{N}. There is a universal constant C > 0 such that for 1 \leq l \leq N/\sqrt{Cd} the equation f_d(\beta) = \log(N/l) has a unique solution \beta \equiv \beta_l(d). The interpretation of \beta_l(d) is the typical location of \alpha_{\sigma(l)}. By the implicit function theorem, we find that d \mapsto \beta_l(d) on the interval (0, N^2/(C l^2)] is a decreasing bijective function.
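Since f_d is increasing, β_l(d) can be computed by bisection. The following sketch, with hypothetical values of N, d and l, illustrates the monotonicity of l ↦ β_l(d):

```python
from math import log, pi

def f(d, a):
    # f_d(alpha) = d(alpha log alpha - alpha + 1) + (1/2) log(2 pi alpha d), cf. (A.10)
    return d * (a * log(a) - a + 1) + 0.5 * log(2 * pi * a * d)

def beta(d, N, l):
    # unique solution of f_d(beta) = log(N/l); f_d is increasing on [1, oo)
    target = log(N / l)
    lo, hi = 1.0, 2.0
    while f(d, hi) < target:       # bracket the root
        hi *= 2
    for _ in range(200):           # bisection
        mid = (lo + hi) / 2
        if f(d, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

N = 10**6
d = 0.3 * log(N)                   # the critical regime d = b log N with b = 0.3
b1, b2 = beta(d, N, 1), beta(d, N, 2)
assert b1 > b2 > 1                 # beta_l decreases in l
assert abs(f(d, b1) - log(N)) < 1e-9
```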

Definition A.8

An event ΞΞN holds with high probability if P(Ξ)=1-o(1).

The following result is a slight generalization of [10, Proposition D.1], which can be established with the same proof. We note that the qualitative notion of high probability can be made stronger and quantitative with some extra effort, which we however refrain from doing here.

Lemma A.9

If d \geq 1 and l \geq 1 satisfy \beta_l(d) \geq 3/2 then

|\alpha_{\sigma(l)} - \beta_l(d)| \leq \frac{1}{d} \Big( 1 \wedge \frac{\zeta}{\log \beta_l(d)} \Big) \qquad A.11

with high probability, where ζ is any sequence tending to infinity with N.

The following result6 gives bounds on the counting function of the normalized degrees (αx)x[N].

Lemma A.10

Suppose that ζ satisfies

1 \leq \zeta \leq \frac{d}{C \log\log N} \qquad A.12

for some large enough universal constant C. Then for any \alpha \geq 2 we have with high probability

\big( N e^{-f_d(\alpha)} - 1 \big) (\log N)^{-2\zeta} \leq |\{ x \in [N] : \alpha_x \geq \alpha \}| \leq \big( N e^{-f_d(\alpha)} + 1 \big) (\log N)^{2\zeta}. \qquad A.13
Proof

If d > 3 \log N, then an elementary analysis using Bennett's inequality shows that |\{ x \in [N] : \alpha_x \geq \alpha \}| = 0 with high probability. Since N e^{-f_d(\alpha)} \leq 1 for \alpha \geq 2, the claim follows. Thus, for the following we assume that d \leq 3 \log N.

Abbreviate \Upsilon := \frac{3\zeta}{2d}, which is an upper bound for the right-hand side of (A.11). For the following we adopt the convention that \beta_0(d) = \infty. Choose l \geq 0 such that

\beta_{l+1}(d) < \alpha \leq \beta_l(d), \qquad A.14

and define

\underline{k} := l\, (\log N)^{-2\zeta}, \qquad \overline{k} := (l + 1)\, (\log N)^{2\zeta}.

We shall show that

\beta_{\underline{k}}(d) - \Upsilon \geq \beta_l(d) \qquad A.15

for \underline{k} \geq 1,

\beta_{\overline{k}}(d) + \Upsilon \leq \beta_{l+1}(d), \qquad A.16

and

\overline{k} \leq N e^{-f_d(3/2)}. \qquad A.17

Thus \beta_{\overline{k}}(d) \geq 3/2 and, assuming \underline{k} \geq 1, Lemma A.9 is applicable to the indices \overline{k} and \underline{k}. We obtain, with high probability,

\alpha_{\sigma(\overline{k})} \leq \beta_{\overline{k}}(d) + \Upsilon \leq \beta_{l+1}(d) < \alpha \leq \beta_l(d) \leq \beta_{\underline{k}}(d) - \Upsilon \leq \alpha_{\sigma(\underline{k})}, \qquad A.18

from which we deduce that

\underline{k} \leq |\{ x \in [N] : \alpha_x \geq \alpha \}| \leq \overline{k}, \qquad A.19

which also holds trivially for the case \underline{k} = 0. By applying the function f_d to (A.14) we obtain l \leq N e^{-f_d(\alpha)} < l + 1, so that (A.19) yields (A.13).

Next, we verify (A.17). We consider the cases l = 0 and l \geq 1 separately. If l = 0 then, by the definition of \overline{k}, for (A.17) we require (\log N)^{2\zeta} \leq N e^{-f_d(3/2)}, which holds by the assumption d \leq 3 \log N and the upper bound on \zeta. Let us therefore suppose that l \geq 1. By (A.14), \alpha \geq 2, and the definition of \beta_l(d), we have l \leq N e^{-f_d(2)}, and we have to ensure that (l + 2)(\log N)^{2\zeta} \leq N e^{-f_d(3/2)}. Since l \geq 1, this is satisfied provided that 3 e^{-f_d(2)} (\log N)^{2\zeta} \leq e^{-f_d(3/2)}, which holds provided that f_d(2) - f_d(3/2) \geq 3\zeta \log\log N. This inequality is true because f_d(2) - f_d(3/2) \geq f_d'(3/2)/2 \geq d/C, where we used that f_d'(\alpha) = d \log \alpha + \frac{1}{2\alpha}.

What remains, therefore, is the proof of (A.15) and (A.16). We begin with the proof of (A.15). We get from the mean value theorem that

\beta_{\underline{k}}(d) - \beta_l(d) = f_d^{-1}\Big( \log \frac{N}{\underline{k}} \Big) - f_d^{-1}\Big( \log \frac{N}{l} \Big) \geq \frac{3}{4 d \log \beta_{\underline{k}}(d)}\, \log \frac{l}{\underline{k}}. \qquad A.20

The right-hand side of (A.20) is bounded from below by Υ provided that

\log \frac{l}{\underline{k}} \geq 2\zeta \log \beta_{\underline{k}}(d). \qquad A.21

We estimate \beta_{\underline{k}}(d) \leq \beta_1(d) using the elementary bound f_d(\beta) \geq \frac{d}{10}\, \beta for \beta \geq 2, which yields \log N = f_d(\beta_1(d)) \geq \frac{d}{10}\, \beta_1(d). By assumption on d we therefore get

\beta_1(d) \leq \log N. \qquad A.22

Thus, (A.21) holds by \underline{k} \leq l\, (\log N)^{-2\zeta}. This concludes the proof of (A.15).

Next, we prove (A.16). As in (A.20), we find

\beta_{l+1}(d) - \beta_{\overline{k}}(d) = f_d^{-1}\Big( \log \frac{N}{l+1} \Big) - f_d^{-1}\Big( \log \frac{N}{\overline{k}} \Big) \geq \frac{3}{4 d \log \beta_{l+1}(d)}\, \log \frac{\overline{k}}{l+1}. \qquad A.23

Together with \beta_{l+1}(d) \leq \beta_1(d) \leq \log N from (A.22), we deduce that the right-hand side of (A.23) is bounded from below by \Upsilon provided that \log \frac{\overline{k}}{l+1} \geq 2\zeta \log\log N, which is true by definition of \overline{k}. This concludes the proof of (A.16).

The following result follows easily from Lemma A.10. Recall the definition (1.13) of the exponent θb(α).

Corollary A.11

Suppose that \zeta satisfies (A.12). Write d = b \log N. Then for any \alpha \geq 2 we have

|\{ x \in [N] : \alpha_x \geq \alpha \}| \vee 1 = N^{\theta_b(\alpha) + \varepsilon}, \qquad \varepsilon = O\Big( \frac{\zeta \log\log N}{\log N} \Big)

with high probability.

Using the exponent θb(α) from (1.13) and αmax(b) defined below it, we may state the following estimate on the density of the normalized degrees and the number of resonant vertices.

Lemma A.12

The following holds for a large enough universal constant C. Suppose that ζ satisfies (A.12). Write d=blogN.

  • (i)
    For 2 \leq \alpha < \beta \leq \alpha_{\max}(b) satisfying \beta - \alpha \geq \frac{C \zeta \log\log N}{d \log \alpha}, with high probability we have
    |\{ x \in [N] : \alpha \leq \alpha_x \leq \beta \}| = N^{\theta_b(\alpha) + \varepsilon}, \qquad \varepsilon = O\Big( \frac{\zeta \log\log N}{\log N} \Big). \qquad A.24
  • (ii)
    For \delta \geq \frac{C \zeta \log\log N}{d} and 2 + \delta \leq \lambda \leq \Lambda(\alpha_{\max}(b)), with high probability we have
    |W_{\lambda,\delta}| = N^{\theta_b(\Lambda^{-1}(\lambda - \delta)) + \varepsilon}, \qquad \varepsilon = O\Big( \frac{\zeta \log\log N}{\log N} \Big).

Note that, since \xi \geq d^{-1/2}, if the conclusion of Theorem 1.2 is nontrivial then \delta \geq d^{-1/2}, and hence the assumption on \delta in Lemma A.12 (ii) is automatically satisfied for suitably chosen \zeta.

Proof of Lemma A.12

Part (i) follows from Corollary A.11 by noting that the assumption on \beta implies \theta_b(\alpha) - \theta_b(\beta) \geq \frac{C \zeta \log\log N}{\log N} by the mean value theorem.

Part (ii) follows from Part (i), using that \log(\lambda - \delta) \geq \log 2, that \Lambda' is bounded on [2, \infty), and the mean value theorem.

Corollary A.13

The following holds for large enough universal constants C, C'. Suppose that (1.10) holds. Write d = b \log N. Let w = (w_x)_{x \in [N]} be a normalized eigenvector of A/\sqrt{d} with nontrivial eigenvalue 2 + C' \xi^{1/2} \leq \lambda \leq \Lambda(\alpha_{\max}(b)). Then with high probability for any 2 \leq p \leq \infty we have

\|w\|_p^2 \geq N^{(2/p - 1)\, \theta_b(\Lambda^{-1}(\lambda)) + \varepsilon}, \qquad \varepsilon = O\bigg[ \frac{\log\log N}{\log N} + b\, (\log \lambda) \Big( \lambda + \frac{1}{\lambda - 2} \Big) \Big( \xi + \frac{\xi}{\lambda - 2} \Big) \bigg].
Proof

We choose \delta := C\big( \xi + \frac{\xi}{\lambda - 2} \big). Then by assumption on \lambda we have \delta \leq (\lambda - 2)/2, and hence Theorem 1.2 yields, using that v(x) is supported in B_r(x), \sum_{x \in W_{\lambda,\delta}} \sum_{y \in B_r(x)} w_y^2 \geq \frac{1}{2} with high probability. Using that for any vector \mathrm{x} \in \mathbb{R}^n we have \|\mathrm{x}\|_p^2 \geq n^{2/p - 1} \|\mathrm{x}\|_2^2 (by Hölder's inequality), with the choice n = \sum_{x \in W_{\lambda,\delta}} |B_r(x)|, we get

\|w\|_p^2 \geq \frac{1}{2} \Big( \sum_{x \in W_{\lambda,\delta}} |B_r(x)| \Big)^{2/p - 1} \geq \frac{1}{2} \Big( |W_{\lambda,\delta}|\, N^{C \log\log N / \log N} \Big)^{2/p - 1} \qquad A.25

with high probability, where we used Lemma A.7 to estimate \max_{x \in [N]} |B_r(x)| \leq N^{C \log\log N / \log N} with high probability.

Next, using the mean value theorem and elementary estimates on the derivatives of θb and Λ-1, we estimate

\theta_b(\Lambda^{-1}(\lambda - \delta)) - \theta_b(\Lambda^{-1}(\lambda)) \leq C b\, (\log \lambda) \Big( \lambda + \frac{1}{\lambda - 2} \Big)\, \delta.

Invoking Lemma A.12 (ii) with ζ:=loglogN, and recalling (A.25), therefore yields the claim.

Connected components of G(N,d/N)

In this appendix we give some basic estimates on the sizes of connected components of G(N, d/N). These are needed for the analysis of the tuning forks in Appendix A.6 below. The arguments are standard and are tailored to work well in the regime 1 \ll d = O(\log N) that we are interested in. For smaller values of d, see e.g. [17].

Lemma A.14

Let W_k be the number of connected components that have k vertices and \widehat{W}_k the number of connected components that have k vertices and are not a tree. Then for k \leq N/2 we have

\mathbb{E}[W_k] \leq N e^{-k(d/2 - \log d - 1)}, \qquad \mathbb{E}[\widehat{W}_k] \leq e^{-k(d/2 - \log d - 1)}.
Proof

For a set X \subset [N], denote by \mathcal{T}(X) the set of spanning trees of X. If X is a connected component of G then there exists a spanning tree T \in \mathcal{T}(X) that is a subgraph of G, and no vertex of X is connected to a vertex of [N] \setminus X. Hence,

W_k \leq \sum_{X \subset [N]} \mathbf{1}_{|X| = k} \sum_{T \in \mathcal{T}(X)} \mathbf{1}_{T \subset G} \prod_{x \in X} \prod_{y \in [N] \setminus X} (1 - A_{xy}).

Taking the expectation now easily yields the claim, using that |\mathcal{T}(X)| = |X|^{|X| - 2} by Cayley's theorem, that a tree on k vertices has k - 1 edges, Stirling's approximation, and 1 - x \leq e^{-x}.

The argument to estimate W^k is similar, noting that in addition to a spanning tree T of X, we also have to have at least one edge not in T connecting two vertices of X. Thus,

\widehat{W}_k \leq \sum_{X \subset [N]} \mathbf{1}_{|X| = k} \sum_{T \in \mathcal{T}(X)} \mathbf{1}_{T \subset G} \prod_{x \in X} \prod_{y \in [N] \setminus X} (1 - A_{xy}) \sum_{\{u, v\} \subset X,\ \{u, v\} \notin E(T)} A_{uv},

and we may estimate the expectation as before.
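The counting ingredient of the proof is elementary to check by brute force. The sketch below verifies Cayley's formula |T(X)| = k^{k−2} for k = 5 by enumerating all (k−1)-edge subsets of the complete graph and testing acyclicity with a union-find structure:

```python
from itertools import combinations

def is_spanning_tree(k, edges):
    # k-1 given edges form a spanning tree iff they create no cycle
    parent = list(range(k))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # cycle detected
        parent[ru] = rv
    return True                   # k-1 edges and acyclic => spanning tree

k = 5
all_edges = list(combinations(range(k), 2))
count = sum(is_spanning_tree(k, es) for es in combinations(all_edges, k - 1))
assert count == k ** (k - 2)      # Cayley: 5^3 = 125 labelled trees on 5 vertices
```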

We call a connected component of G small if it is not the giant component. For the following statement we recall the definition of high probability from Definition A.8.

Corollary A.15

Suppose that d \gg 1. All small components of G have at most O\big( \frac{\log N}{d} \big) vertices with very high probability. All small components of G are trees with high probability. The giant component of G has at least N(1 - e^{-d/4}) vertices with high probability.

Proof

Any small component has at most N/2 vertices. Using Lemma A.14 we therefore get that the probability that there exists a small component with at least K vertices is bounded by

\mathbb{P}\big( \exists\, k \in [K, N/2] : W_k \geq 1 \big) \leq \sum_{k=K}^{N/2} \mathbb{E}[W_k] \leq 2 N e^{-K(d/2 - \log d - 1)},

by summing the geometric series. Since d/2 - \log d - 1 \geq cd for some universal constant c, we obtain the first claim. To obtain the second claim, we use Lemma A.14 to estimate the probability that there exists a small component that is not a tree by \sum_{k=1}^{N/2} \mathbb{E}[\widehat{W}_k] \leq C e^{-d/3}. To obtain the last claim, we estimate the expected number of vertices in small components by \mathbb{E}\big[ \sum_{k=1}^{N/2} k W_k \big] \leq N \sum_{k \geq 1} k\, e^{-k(d/2 - \log d - 1)} \leq C N e^{-d/3} using Lemma A.14, and the third claim follows from Chebyshev's inequality.

We may now estimate the adjacency matrix on the small components of G(N,d/N). The following result follows immediately from Corollary A.15 and Lemma A.4.

Corollary A.16

Suppose that d \gg 1. Then the operator norm of A/\sqrt{d} restricted to the small components of G is bounded by O\big( \sqrt{\log N}/d \big) with high probability.

Corollary A.16 makes it explicit that Theorem 1.8 excludes all eigenvectors on small components of G, whose eigenvalues lie outside Sκ precisely under the lower bound from (1.18).

Tuning forks and proof of Lemma 1.12

In this appendix we give a precise definition of the D-tuning forks from Sect. 1.5 and prove Lemma 1.12.

Definition A.17

A star of degree D \in \mathbb{N} consists of a vertex, the hub, and D leaves adjacent to the hub, the spokes. A star tuning fork of degree D is obtained by taking two disjoint stars of degree D along with an additional vertex, the base, and connecting both hubs to the base. We say that a star tuning fork is rooted in a graph H if it is a subgraph of H in which both hubs have degree D + 1 and all spokes are leaves.

Lemma A.18

If a star tuning fork of degree D is rooted in some graph H, then the adjacency matrix of H has eigenvalues \pm \sqrt{D} with corresponding eigenvectors supported on the stars of the tuning fork, i.e. on 2D + 2 vertices.

Proof

Suppose first that D \geq 1. Note first that the adjacency matrix of a star of degree D has rank two and has the two nonzero eigenvalues \pm \sqrt{D}, with associated eigenvector equal to \pm \sqrt{D} at the hub and 1 at the spokes. Now take a star tuning fork of degree D rooted in a graph H. Define a vector on the vertex set of H by setting it to be \pm \sqrt{D} at the hub of the first star, 1 at the spokes of the first star, \mp \sqrt{D} at the hub of the second star, -1 at the spokes of the second star, and 0 everywhere else. Then it is easy to check that this vector is an eigenvector of the adjacency matrix of H with eigenvalue \pm \sqrt{D}. If D = 0 the construction is analogous, defining the vector to be +1 at one hub and -1 at the other. 
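The eigenvector just constructed can be verified directly. The sketch below builds a star tuning fork of degree D = 4 with the base at index 0 (an arbitrary labelling chosen here) and checks the eigenvalue equation for the eigenvalue +√D:

```python
import numpy as np

D = 4
n = 3 + 2 * D                          # base, two hubs, 2D spokes
A = np.zeros((n, n))
for hub, first in ((1, 3), (2, 3 + D)):
    A[0, hub] = A[hub, 0] = 1          # connect hub to the base
    for s in range(first, first + D):
        A[hub, s] = A[s, hub] = 1      # connect hub to its D spokes
v = np.zeros(n)
v[1], v[3:3 + D] = np.sqrt(D), 1.0     # first star:  +sqrt(D) at hub, +1 at spokes
v[2], v[3 + D:] = -np.sqrt(D), -1.0    # second star: -sqrt(D) at hub, -1 at spokes
assert np.allclose(A @ v, np.sqrt(D) * v)   # eigenvalue +sqrt(D); v vanishes at the base
```

The two hub values have opposite signs, so the contributions of the hubs cancel at the base, which is why the eigenvector is supported on only 2D + 2 vertices.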

We recall from Sect. 1.5 that F(d, D) denotes the number of star tuning forks of degree D rooted in G_{giant}.

Lemma A.19

Suppose that 1 \leq d \leq \sqrt{N} and 0 \leq D \ll \sqrt{N}. Then

\mathbb{E}[F(d, D)] = \frac{N d^2 e^{-2d}}{2\, (D!)^2}\, \big( d e^{-d} \big)^{2D}\, (1 + o(1)) \qquad A.26

and \mathbb{E}[F(d, D)^2] \leq \mathbb{E}[F(d, D)]^2\, (1 + o(1)).

Proof of Lemma 1.12

From Lemma A.19 we deduce that if 1 \ll d = b \log N = O(\log N) and D \leq \log N / \log\log N, then \mathbb{E}[F(d, D)] = N^{1 - 2b - 2bD + o(1)}. The claim then follows from the second moment estimate in Lemma A.19 and Chebyshev's inequality.

Proof of Lemma A.19

Let x_1, x_2 \in [N] be distinct vertices and R_1, R_2 \subset [N] \setminus \{x_1, x_2\} be disjoint subsets of size D. We abbreviate U = (x_1, x_2, R_1, R_2) and sometimes identify U with \{x_1, x_2\} \cup R_1 \cup R_2. The family U and a vertex o \in [N] \setminus U define a star tuning fork of degree D with base o, hubs x_1 and x_2, and associated spokes R_1 and R_2. Let C_k(H) denote the vertex set of the kth largest connected component of the graph H. Then F(d, D) = \frac{1}{2} \sum_U \sum_{o \in [N] \setminus U} \mathbf{1}_{o \in C_1(G)}\, S_{o,U}, where

S_{o,U} := \prod_{i=1}^{2} \bigg( \prod_{u \in R_i \cup \{o\}} A_{x_i u} \prod_{u \in [N] \setminus (R_i \cup \{o\})} (1 - A_{x_i u}) \prod_{u \in R_i} \prod_{v \in [N] \setminus \{x_i\}} (1 - A_{uv}) \bigg).

The factor 12 corrects the overcounting from the labelling of the two stars.

For deterministic U, we split the random variables A = (A', A'') into two independent families, where A' := (A_{uv} : u \in U \text{ or } v \in U) and A'' := (A_{uv} : u, v \in [N] \setminus U). Note that S_{o,U} is A'-measurable. We define the event

\Xi := \big\{ |C_1(G|_{[N] \setminus U})| > |C_2(G|_{[N] \setminus U})| + 2D + 2 \big\},

which is A''-measurable. By Corollary A.15 and the assumption on D, the event \Xi holds with high probability. Moreover, we have \mathbf{1}_\Xi \mathbf{1}_{o \in C_1(G)} S_{o,U} = \mathbf{1}_\Xi \mathbf{1}_{o \in C_1(G|_{[N] \setminus U})} S_{o,U}, since the components of o in G and in G|_{[N] \setminus U} differ by at most 2D + 2 vertices. Thus, for fixed o \in [N] \setminus U, using the independence of A' and A'', we get

\mathbb{E}[\mathbf{1}_{o \in C_1(G)} S_{o,U}] = \mathbb{E}[\mathbf{1}_\Xi \mathbf{1}_{o \in C_1(G|_{[N] \setminus U})} S_{o,U}] + \mathbb{E}[\mathbf{1}_{\Xi^c} \mathbf{1}_{o \in C_1(G)} S_{o,U}] = \mathbb{E}[S_{o,U}] \big[ \mathbb{P}(o \in C_1(G|_{[N] \setminus U})) + O(\mathbb{P}(\Xi^c)) \big].

We have P(Ξc)=o(1) and P(oC1(G|[N]\U))=1-o(1) by Corollary A.15 and the assumption on D. Computing E[So,U] and performing the sum over o and U, we therefore conclude that

\mathbb{E}[F(d, D)] = \frac{N (N - 1) \cdots (N - 2D - 2)}{2\, (D!)^2} \Big( \frac{d}{N} \Big)^{2D + 2} \Big( 1 - \frac{d}{N} \Big)^{2(N - D - 1) + 2D(N - 1)} (1 + o(1)),

from which (A.26) follows. The estimate of the second moment is similar; one can even disregard the restriction to the giant component by estimating \mathbb{E}[F(d, D)^2] \leq \frac{1}{4} \sum_{U, \tilde{U}} \sum_{o, \tilde{o} \in [N]} \mathbb{E}[S_{o,U} S_{\tilde{o},\tilde{U}}]; we omit the details.

Multilinear large deviation bounds for sparse random vectors

In this appendix we collect basic large deviation bounds for multilinear functions of sparse random vectors, which are proved in [42]. The following result is proved in Propositions 3.1, 3.2, and 3.5 of [42]. We denote by \|X\|_r := (\mathbb{E}|X|^r)^{1/r} the L^r-norm of a random variable X.

Proposition A.20

Let r be even and 1 \leq d \leq N. Let X_1, \ldots, X_N be independent random variables satisfying

\mathbb{E} X_i = 0, \qquad \mathbb{E}|X_i|^k \leq \frac{1}{N d^{(k-2)/2}}

for all i \in [N] and 2 \leq k \leq r. Let a_i \in \mathbb{C} and b_{ij} \in \mathbb{C} be deterministic for all i, j \in [N]. Suppose that

\Big( \frac{1}{N} \sum_i |a_i|^2 \Big)^{1/2} \leq \gamma, \qquad \max_i |a_i| \leq \sqrt{d}\, \psi,

and

\Big( \max_i \frac{1}{N} \sum_j |b_{ij}|^2 \Big)^{1/2} \vee \Big( \max_j \frac{1}{N} \sum_i |b_{ij}|^2 \Big)^{1/2} \leq \gamma, \qquad \max_{i,j} |b_{ij}| \leq \sqrt{d}\, \psi

for some γ,ψ0. Then

\Big\| \sum_i a_i X_i \Big\|_r \leq \Big( \frac{2r}{1 + 2(\log(\psi/\gamma))_+} \vee 2 \Big) (\gamma \vee \psi), \qquad A.27a
\Big\| \sum_i a_i \big( |X_i|^2 - \mathbb{E}|X_i|^2 \big) \Big\|_r \leq 2 \Big( 1 + \frac{2d}{N} \Big) \max_i |a_i| \Big( \frac{r}{d} \vee \sqrt{\frac{r}{d}} \Big), \qquad A.27b
\Big\| \sum_{i \neq j} b_{ij} X_i X_j \Big\|_r \leq \Big( \frac{4r}{1 + (\log(\psi/\gamma))_+} \vee 4 \Big)^2 (\gamma \vee \psi). \qquad A.27c

The Lr-norm bounds in Proposition A.20 induce bounds that hold with very high probability.

Corollary A.21

Fix \kappa \in (0, 1). Let the assumptions of Proposition A.20 be satisfied. If \psi/\gamma \geq N^{\kappa/4} then with very high probability

\Big| \sum_i a_i X_i \Big| \leq C \psi, \qquad \Big| \sum_{i \neq j} b_{ij} X_i X_j \Big| \leq C \psi. \qquad A.28
Remark A.22

Our proof of Corollary A.21 shows that C can be chosen as a linear function of ν for the first estimate of (A.28) and as a quadratic function of ν for the second estimate of (A.28).

Proof

Fix \nu \geq 1. We choose r = \nu \log N in (A.27a) of Proposition A.20 and obtain from Chebyshev's inequality that

\mathbb{P}\Big( \Big| \sum_i a_i X_i \Big| \geq C \psi \Big) \leq \frac{\big\| \sum_i a_i X_i \big\|_r^r}{(C \psi)^r} \leq N^{-\nu}

for a large enough C depending linearly on \nu, since \kappa \in (0, 1). Similarly, choosing r = \frac{1}{2} \nu \log N in (A.27c) yields

\mathbb{P}\Big( \Big| \sum_{i \neq j} b_{ij} X_i X_j \Big| \geq C \psi \Big) \leq \frac{\big\| \sum_{i \neq j} b_{ij} X_i X_j \big\|_r^r}{(C \psi)^r} \leq N^{-\nu}.

Resolvent identities

In this appendix we record some well-known identities for the Green function (4.2) and its minors from Definition 4.5. We assume throughout that z \in \mathbb{C} \setminus \mathbb{R}.

Lemma A.23

(Ward identity). For T \subset [N] and x \notin T we have

\sum_y^{(T)} |G^{(T)}_{xy}|^2 = \frac{1}{\operatorname{Im} z}\, \operatorname{Im} G^{(T)}_{xx}.
Proof

This is a standard identity for resolvents, see e.g. [16, Eq. (3.6)]. 
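The Ward identity is easy to test numerically; the sketch below checks it for an arbitrary real symmetric matrix (the case T = ∅):

```python
import numpy as np

rng = np.random.default_rng(1)
n, z = 10, 0.3 + 0.2j
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # any real symmetric matrix
G = np.linalg.inv(H - z * np.eye(n))                 # Green function
x = 0
lhs = np.sum(np.abs(G[x, :]) ** 2)
rhs = G[x, x].imag / z.imag
assert np.isclose(lhs, rhs)    # Ward identity: sum_y |G_xy|^2 = Im G_xx / Im z
```

The identity follows from the resolvent identity G G* = (G − G*)/(z − z̄), valid for any Hermitian matrix.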

Lemma A.24

Let T \subset [N]. For x, y \notin T and x \neq y, we have

G^{(T)}_{xy} = -G^{(T)}_{yy} \sum_a^{(Ty)} G^{(Ty)}_{xa} M_{ay} = -G^{(T)}_{xx} \sum_b^{(Tx)} M_{xb} G^{(Tx)}_{by}. \qquad A.29

For x, y, a \notin T and x \neq a \neq y, we have

G^{(Ta)}_{xy} = G^{(T)}_{xy} - \frac{G^{(T)}_{xa} G^{(T)}_{ay}}{G^{(T)}_{aa}}. \qquad A.30

For any x \in [N], we have

\frac{1}{G_{xx}} = M_{xx} - z - \sum_{a,b}^{(x)} M_{xa} G^{(x)}_{ab} M_{bx}. \qquad A.31
Proof

All identities are standard and proved e.g. in [16]: (A.29) in [16, Eq. (3.5)], (A.30) in [16, Eq. (3.4)] and (A.31) in [16, Lemma A.1 and (5.1)]. 

We recall (4.1) and derive two expansions used in Sect. 4. For any T \subset [N] and x, y, u \notin T with x \neq u \neq y, we have

G^{(Tu)}_{xy} = G^{(T)}_{xy} + \sum_a^{(Tu)} G^{(Tu)}_{xa} H_{au} G^{(T)}_{uy} + \frac{f}{N}\, G^{(T)}_{uy} \sum_a^{(Tu)} G^{(Tu)}_{xa}, \qquad A.32a

which follows from (A.30) and (A.29). Under the same assumptions, applying (A.29) to (A.32a) yields

G^{(Tu)}_{xy} = G^{(T)}_{xy} - G^{(T)}_{uu} \sum_a^{(Tu)} G^{(Tu)}_{xa} H_{au} \sum_b^{(Tu)} H_{ub} G^{(Tu)}_{by} - \frac{f}{N}\, G^{(T)}_{uu} \sum_a^{(Tu)} G^{(Tu)}_{xa} H_{au} \sum_b^{(Tu)} G^{(Tu)}_{by} + \frac{f}{N}\, G^{(T)}_{uy} \sum_a^{(Tu)} G^{(Tu)}_{xa}. \qquad A.32b

Stability estimate—proof of Lemma 4.19

In this appendix we prove Lemma 4.19. The estimate in [27, Lemma 3.5] corresponding to (4.43) has logarithmic factors, which are not affordable for our purposes: they have to be replaced with constants. The following proof of Lemma 4.19 is analogous to that of the more complicated bulk stability estimate from [9, Lemma 5.11].

Proof of Lemma 4.19

We introduce the vectors g:=(gx)xX and ε:=(εx)xX. Moreover, with the abbreviation m:=m(z) we introduce the constant vectors m=(m)xX and e:=|X|-1/2(1)xX. We regard all vectors as column vectors. A simple computation starting from the difference of (4.6) and (4.42) reveals that

B(g - m) = m\, (g - m) \big( e e^* (g - m) \big) - (g - m)\, m\, \varepsilon - m^2 \varepsilon, \qquad A.33

where B := 1 - m^2\, e e^*, and column vectors are multiplied entrywise. The inverse of B is

B^{-1} = 1 + \frac{m^2}{1 - m^2}\, e e^*.

For a matrix R \in \mathbb{C}^{X \times X}, we write \|R\|_{\infty \to \infty} for the operator norm induced by the norm \|r\|_\infty = \max_{x \in X} |r_x| on \mathbb{C}^X. It is easy to see that there is c > 0, depending only on \kappa, such that |1 - m(w)^2| \geq c for all w \in \mathbb{C}_+ satisfying |\operatorname{Re} w| \leq 2 - \kappa. Hence, owing to \|e e^*\|_{\infty \to \infty} = 1, we obtain \|B^{-1}\|_{\infty \to \infty} \leq 1 + |1 - m^2|^{-1} \leq 1 + c^{-1}. Therefore, inverting B in (A.33) and choosing b, depending only on \kappa, sufficiently small to absorb the term quadratic in g - m into the left-hand side of the resulting bound yields (4.43) for some sufficiently large C > 0, depending only on \kappa. This concludes the proof of Lemma 4.19.
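The formula for B⁻¹ can be checked numerically; in the sketch below, m is an arbitrary complex number with m² ≠ 1 (not the actual Stieltjes transform), and the key fact used is that e e^T is a rank-one projection:

```python
import numpy as np

n, m = 6, 0.4 + 0.3j                   # any m with m**2 != 1
e = np.ones((n, 1)) / np.sqrt(n)       # normalized constant vector; e e^T is a projection
P = e @ e.T
B = np.eye(n) - m**2 * P
Binv = np.eye(n) + m**2 / (1 - m**2) * P
assert np.allclose(B @ Binv, np.eye(n))
assert np.allclose(Binv @ B, np.eye(n))
```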

Instability estimate—proof of (2.11)

In this appendix we prove (2.11), which shows that the self-consistent equation (2.10) is unstable with a logarithmic factor, which renders it useless for the analysis of sparse random graphs. More precisely, we show that the norm \|(I - m^2 S)^{-1}\|_{\infty \to \infty} is ill-behaved precisely in the situation where we need it. For simplicity, we replace m^2 with a phase \alpha^{-1} \in S^1 separated from \pm 1, since for \operatorname{Re} z \in \mathbf{S}_\kappa we have

|m(z)|^2 = 1 - O(\operatorname{Im} z), \qquad \operatorname{Im} m(z) \asymp 1, \qquad A.34

by [33, Lemma 3.5]. Moreover, for definiteness, recalling that with very high probability most of the d(1+o(1)) neighbours of any vertex in T are again in T, we assume that S is the adjacency matrix of a d-regular graph on T divided by d.

By the spectral theorem and because S is Hermitian, \|(\alpha - S)^{-1}\|_{2 \to 2} is bounded, but, as we now show, the same does not apply to \|(\alpha - S)^{-1}\|_{\infty \to \infty}. Indeed, the upper bound of (2.11) follows from [34, Proposition A.2], and the lower bound from the following result.

Lemma A.25

(Instability of (2.10)). Let S be 1/d times the adjacency matrix of a graph whose restriction to the ball of radius r \in \mathbb{N} around some distinguished vertex is a d-regular tree. Let \alpha \in S^1 be an arbitrary phase. Then

\|(\alpha - S)^{-1}\|_{\infty \to \infty} \geq c \Big( \frac{r}{\log r} \wedge d \Big) \qquad A.35

for some universal constant c>0.

In particular, denoting by N the number of vertices in the tree (which may be completed to a d-regular graph by connecting the leaves to each other), for d \asymp \log N and r \asymp \frac{\log N}{\log d} we find

\|(\alpha - S)^{-1}\|_{\infty \to \infty} \geq \frac{c \log N}{(\log\log N)^2}, \qquad A.36

which is the lower bound of (2.11).

Proof of Lemma A.25

After making r smaller if needed, we may assume that r \leq d \log r. We shall construct a vector u satisfying \|u\|_\infty = 1 and \|(\alpha - S) u\|_\infty = O\big( \frac{\log r}{r} \big), from which (A.35) will follow. To that end, we construct the sequence a_0, a_1, \ldots, a_r by setting

a_0 := 1, \qquad a_1 := \alpha, \qquad a_{k+1} := \frac{d}{d-1}\, \alpha\, a_k - \frac{1}{d-1}\, a_{k-1} \quad \text{for } 1 \leq k \leq r - 1.

A short transfer matrix analysis shows that |a_k| \leq e^{C_1 k / d} for some constant C_1. Now choose \mu := C_2 \frac{\log r}{r} with C_2 := 2 \vee 2 C_1, and define b_k := e^{-\mu k} a_k. Calling o the distinguished vertex, we define u_x := b_k if k := \operatorname{dist}(o, x) \leq r and u_x := 0 otherwise. It is now easy to check that \|(\alpha - S) u\|_\infty = O\big( \frac{\log r}{r} \big), by considering the cases k = 0, 1 \leq k \leq r - 1, and k \geq r separately. The basic idea of the construction is that if \mu were zero, then (\alpha - S) u would vanish exactly on B_{r-1}(o), but it would be large on the boundary S_r(o). The factor e^{-\mu k} introduces exponential decay in the radius which dampens the contribution of the boundary S_r(o) at the expense of introducing errors in the interior B_{r-1}(o).
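Since u is a radial function, the construction can be tested numerically on the radial coefficients alone. The sketch below uses the hypothetical choices α = 1 (for which the transfer recursion gives a_k ≡ 1 when d = 3) and C₂ = 2, and checks that the residual of (α − S)u is O(log r / r) sphere by sphere:

```python
import numpy as np

d, r = 3, 200
alpha = 1.0                              # an admissible phase on the unit circle
mu = 2.0 * np.log(r) / r                 # damping rate, with C_2 = 2 here
a = np.ones(r + 1)                       # radial transfer-matrix solution for alpha = 1
b = np.exp(-mu * np.arange(r + 1)) * a   # u_x = b_k at distance k from o, 0 beyond r
# residual of (alpha - S)u, sphere by sphere (S averages over the d neighbours)
res = [abs(alpha * b[0] - b[1])]                                       # root
res += [abs(alpha * b[k] - (b[k - 1] + (d - 1) * b[k + 1]) / d)
        for k in range(1, r)]                                          # interior spheres
res.append(abs(alpha * b[r] - b[r - 1] / d))                           # sphere S_r(o)
res.append(b[r] / d)                                                   # just outside, u = 0
assert b[0] == 1.0                       # sup norm of u
assert max(res) <= 5 * np.log(r) / r     # residual is O(log r / r)
```

The largest residual occurs at the root, where it is of size 1 − e^{−μ} ≈ μ, in line with the O(log r / r) bound in the proof.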

Funding

Open Access funding provided by Université de Genève.

Footnotes

1

For simplicity we only consider stars, but the same argument can be applied to arbitrary trees.

2

This projection Π is denoted by Πλ,δτ in the proof of Theorem 3.4 below.

3

We write \|\cdot\|_{p \to p} for the operator norm on \ell^p.

4

A star around a vertex x is a set of edges incident to x.

5

The tridiagonalization algorithm that we use is the Lanczos algorithm. Tridiagonalizing matrices in numerical analysis and random matrix theory [26, 55] is usually performed using the numerically more stable Householder algorithm. However, when applied to the adjacency matrix X=A of a graph, the Lanczos algorithm is more convenient because it can exploit the sparseness and local geometry of A.

6

The assumption d \gtrsim \log\log N in Lemma A.10 is tailored so that it covers the entire range \alpha \geq 2, which is what we need in this paper. The assumption on d could also be removed at the expense of introducing a nontrivial lower bound on \alpha.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Johannes Alt, Email: johannes.alt@unige.ch.

Raphael Ducatez, Email: raphael.ducatez@unige.ch.

Antti Knowles, Email: antti.knowles@unige.ch.

References

  • 1.Aggarwal A. Bulk universality for generalized Wigner matrices with few moments. Prob. Theor. Rel. Fields. 2019;173(1–2):375–432. [Google Scholar]
  • 2.Aggarwal, A., Lopatto, P., Marcinek, J.: Eigenvector statistics of Lévy matrices. Preprint arXiv:2002.09355 (2020)
  • 3.Aggarwal, A., Lopatto, P., Yau, H.-T.: GOE statistics for Lévy matrices. Preprint arXiv:1806.07363 (2018)
  • 4.Aizenman M, Molchanov S. Localization at large disorder and at extreme energies: an elementary derivation. Commun. Math. Phys. 1993;157:245–278. [Google Scholar]
  • 5.Aizenman M, Sims R, Warzel S. Absolutely continuous spectra of quantum tree graphs with weak disorder. Commun. Math. Phys. 2006;264(2):371–389. [Google Scholar]
  • 6.Aizenman M, Warzel S. Extended states in a Lifshitz tail regime for random Schrödinger operators on trees. Phys. Rev. Lett. 2011;106(13):136804. doi: 10.1103/PhysRevLett.106.136804. [DOI] [PubMed] [Google Scholar]
  • 7.Aizenman M, Warzel S. Resonant delocalization for random Schrödinger operators on tree graphs. J. Eur. Math. Soc. 2013;15(4):1167–1222. [Google Scholar]
  • 8.Aizenman M, Warzel S. Random Operators, Graduate Studies in Mathematics. Providence: American Mathematical Society; 2015. [Google Scholar]
  • 9.Ajanki OH, Erdős L, Krüger T. Quadratic vector equations on complex upper half-plane. Mem. Am. Math. Soc. 2019;261(1261):v+133. [Google Scholar]
  • 10.Alt J, Ducatez R, Knowles A. Extremal eigenvalues of critical Erdős–Rényi graphs. Ann. Prob. 2021;49(3):1347–1401. [Google Scholar]
  • 11.Anderson PW. Absence of diffusion in certain random lattices. Phys. Rev. 1958;109(5):1492. [Google Scholar]
  • 12.Bauerschmidt R, Huang J, Yau H-T. Local Kesten–McKay law for random regular graphs. Commun. Math. Phys. 2019;369(2):523–636. [Google Scholar]
  • 13.Bauerschmidt R, Knowles A, Yau H-T. Local semicircle law for random regular graphs. Commun. Pure Appl. Math. 2017;70(10):1898–1960. [Google Scholar]
  • 14.Benaych-Georges F, Bordenave C, Knowles A. Largest eigenvalues of sparse inhomogeneous Erdős–Rényi graphs. Ann. Prob. 2019;47(3):1653–1676. [Google Scholar]
  • 15.Benaych-Georges F, Bordenave C, Knowles A. Spectral radii of sparse random matrices. Ann. Inst. Henri Poincaré Probab. Stat. 2020;56(3):2141–2161. [Google Scholar]
  • 16.Benaych-Georges, F., Knowles, A.: Local Semicircle Law for Wigner Matrices, Advanced Topics in Random Matrices, Panor. Synthèses, vol. 53, pp. 1–90. Soc. Math. France, Paris (2017)
  • 17.Bollobás B. Random Graphs. Cambridge: Cambridge University Press; 2001. [Google Scholar]
  • 18.Bordenave C, Guionnet A. Localization and delocalization of eigenvectors for heavy-tailed random matrices. Prob. Theor. Rel. Fields. 2013;157(3–4):885–953. [Google Scholar]
  • 19.Bordenave C, Guionnet A. Delocalization at small energy for heavy-tailed random matrices. Commun. Math. Phys. 2017;354(1):115–159. [Google Scholar]
  • 20.Bourgade P, Erdős L, Yau H-T, Yin J. Universality for a class of random band matrices. Adv. Theor. Math. Phys. 2017;21(3):739–800. [Google Scholar]
  • 21.Bourgade P, Yang F, Yau H-T, Yin J. Random band matrices in the delocalized phase, II: generalized resolvent estimates. J. Stat. Phys. 2019;174(6):1189–1221. [Google Scholar]
  • 22.Bourgade P, Yau H-T, Yin J. Random band matrices in the delocalized phase, I: quantum unique ergodicity and universality. Commun. Pure Appl. Math. 2020;73(7):1526–1596. [Google Scholar]
  • 23.Casati G, Molinari L, Izrailev F. Scaling properties of band random matrices. Phys. Rev. Lett. 1990;64(16):1851. doi: 10.1103/PhysRevLett.64.1851. [DOI] [PubMed] [Google Scholar]
  • 24.Cizeau P, Bouchaud J-P. Theory of Lévy matrices. Phys. Rev. E. 1994;50(3):1810. doi: 10.1103/physreve.50.1810. [DOI] [PubMed] [Google Scholar]
  • 25.Combes JM, Thomas L. Asymptotic behaviour of eigenfunctions for multiparticle Schrödinger operators. Commun. Math. Phys. 1973;34:251–270. [Google Scholar]
  • 26.Dumitriu I, Edelman A. Matrix models for beta ensembles. J. Math. Phys. 2002;43(11):5830–5847. [Google Scholar]
  • 27.Erdős L, Yau H-T, Yin J. Bulk universality for generalized Wigner matrices. Prob. Theor. Rel. Fields. 2012;154(1–2):341–407. [Google Scholar]
  • 28.Erdős L, Knowles A. Quantum diffusion and delocalization for band matrices with general distribution. Ann. H. Poincaré. 2011;12:1227–1319. [Google Scholar]
  • 29.Erdős L, Knowles A. Quantum diffusion and eigenfunction delocalization in a random band matrix model. Commun. Math. Phys. 2011;303:509–554. [Google Scholar]
  • 30.Erdős L, Knowles A. The Altshuler-Shklovskii formulas for random band matrices II: the general case. Ann. H. Poincaré. 2014;16:709–799. [Google Scholar]
  • 31.Erdős L, Knowles A. The Altshuler-Shklovskii formulas for random band matrices I: the unimodular case. Commun. Math. Phys. 2015;333:1365–1416. [Google Scholar]
  • 32.Erdős L, Knowles A, Yau H-T. Averaging fluctuations in resolvents of random band matrices. Ann. H. Poincaré. 2013;14:1837–1926. [Google Scholar]
  • 33.Erdős L, Knowles A, Yau H-T, Yin J. Delocalization and diffusion profile for random band matrices. Commun. Math. Phys. 2013;323:367–416.
  • 34.Erdős L, Knowles A, Yau H-T, Yin J. The local semicircle law for a general class of random matrices. Electron. J. Probab. 2013;18:1–58.
  • 35.Erdős L, Knowles A, Yau H-T, Yin J. Spectral statistics of Erdős–Rényi graphs I: local semicircle law. Ann. Prob. 2013;41:2279–2375.
  • 36.Erdős L, Schlein B, Yau H-T. Local semicircle law and complete delocalization for Wigner random matrices. Commun. Math. Phys. 2009;287:641–655.
  • 37.Erdős L, Yau H-T, Yin J. Rigidity of eigenvalues of generalized Wigner matrices. Adv. Math. 2012;229:1435–1515.
  • 38.Froese R, Hasler D, Spitzer W. Transfer matrices, hyperbolic geometry and absolutely continuous spectrum for some discrete Schrödinger operators on graphs. J. Funct. Anal. 2006;230(1):184–221.
  • 39.Fröhlich J, Spencer T. Absence of diffusion in the Anderson tight binding model for large disorder or low energy. Commun. Math. Phys. 1983;88:151–184.
  • 40.Fyodorov Y, Mirlin A. Scaling properties of localization in random band matrices: a σ-model approach. Phys. Rev. Lett. 1991;67(18):2405. doi: 10.1103/PhysRevLett.67.2405.
  • 41.Fyodorov Y, Ossipov A, Rodriguez A. The Anderson localization transition and eigenfunction multifractality in an ensemble of ultrametric random matrices. J. Stat. Mech. Theor. Exper. 2009;2009(12):L12001.
  • 42.He Y, Knowles A, Marcozzi M. Local law and complete eigenvector delocalization for supercritical Erdős–Rényi graphs. Ann. Prob. 2019;47(5):3278–3302.
  • 43.He Y, Marcozzi M. Diffusion profile for random band matrices: a short proof. J. Stat. Phys. 2019;177(4):666–716.
  • 44.Klein A. Absolutely continuous spectrum in the Anderson model on the Bethe lattice. Math. Res. Lett. 1994;1(4):399–407.
  • 45.Lee JO, Schnelli K. Local deformed semicircle law and complete delocalization for Wigner matrices with random potential. J. Math. Phys. 2013;54(10):103504.
  • 46.Lee JO, Schnelli K. Extremal eigenvalues and eigenvectors of deformed Wigner matrices. Probab. Theory Related Fields. 2016;164(1–2):165–241.
  • 47.Mirlin AD, Fyodorov YV, Dittes F-M, Quezada J, Seligman TH. Transition from localized to extended eigenstates in the ensemble of power-law random banded matrices. Phys. Rev. E. 1996;54(1):3221–3230. doi: 10.1103/physreve.54.3221.
  • 48.Peled R, Schenker J, Shamis M, Sodin S. On the Wegner orbital model. Int. Math. Res. Not. 2019;2019(4):1030–1058.
  • 49.Sarnak, P.: Arithmetic Quantum Chaos. The Schur lectures (1992) (Tel Aviv). Israel Math. Conf. Proc., vol. 8, pp. 183–236 (1995)
  • 50.Schenker J. Eigenvector localization for random band matrices with power law band width. Commun. Math. Phys. 2009;290:1065–1097.
  • 51.Shcherbina, M., Shcherbina, T.: Universality for 1D random band matrices. Preprint arXiv:1910.02999 (2019)
  • 52.Sodin S. The spectral edge of some random band matrices. Ann. Math. 2010;172(3):2223–2251.
  • 53.Tarquini E, Biroli G, Tarzia M. Level statistics and localization transitions of Lévy matrices. Phys. Rev. Lett. 2016;116(1):010601. doi: 10.1103/PhysRevLett.116.010601.
  • 54.Tikhomirov K, Youssef P. Outliers in spectrum of sparse Wigner matrices. Random Struct. Algorithms. 2021;58(3):517–605.
  • 55.Trotter HF. Eigenvalue distributions of large Hermitian matrices; Wigner’s semicircle law and a theorem of Kac, Murdock, and Szegő. Adv. Math. 1984;54(1):67–82.
  • 56.von Soosten P, Warzel S. The phase transition in the ultrametric ensemble and local stability of Dyson Brownian motion. Electron. J. Probab. 2018;23:1–70.
  • 57.Wegner F. Bounds on the density of states in disordered systems. Z. Phys. B Cond. Mat. 1981;44(1):9–15.
  • 58.Wigner EP. Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 1955;62:548–564.
  • 59.Yang F, Yin J. Random band matrices in the delocalized phase, III: averaging fluctuations. Probab. Theory Related Fields. 2021;179(1–2):451–540.

Articles from Communications in Mathematical Physics are provided here courtesy of Springer
