Abstract
For complex Wigner-type matrices, i.e. Hermitian random matrices with independent, not necessarily identically distributed entries above the diagonal, we show that at any cusp singularity of the limiting eigenvalue distribution the local eigenvalue statistics are universal and form a Pearcey process. Since the density of states typically exhibits only square root or cubic root cusp singularities, our work complements previous results on the bulk and edge universality and it thus completes the resolution of the Wigner–Dyson–Mehta universality conjecture for the last remaining universality type in the complex Hermitian class. Our analysis holds not only for exact cusps, but approximate cusps as well, where an extended Pearcey process emerges. As a main technical ingredient we prove an optimal local law at the cusp for both symmetry classes. This result is also the key input in the companion paper (Cipolloni et al. in Pure Appl Anal, 2018. arXiv:1811.04055) where the cusp universality for real symmetric Wigner-type matrices is proven. The novel cusp fluctuation mechanism is also essential for the recent results on the spectral radius of non-Hermitian random matrices (Alt et al. in Spectral radius of random matrices with independent entries, 2019. arXiv:1907.13631), and the non-Hermitian edge universality (Cipolloni et al. in Edge universality for non-Hermitian random matrices, 2019. arXiv:1908.00969).
Introduction
The celebrated Wigner–Dyson–Mehta (WDM) conjecture asserts that local eigenvalue statistics of large random matrices are universal: they only depend on the symmetry type of the matrix and are otherwise independent of the details of the distribution of the matrix ensemble. This remarkable spectral robustness was first observed by Wigner in the bulk of the spectrum. The correlation functions are determinantal and they were computed in terms of the sine kernel via explicit Gaussian calculations by Dyson, Gaudin and Mehta [59]. Wigner’s vision continues to hold at the spectral edges, where the correct statistics was identified by Tracy and Widom for both symmetry types in terms of the Airy kernel [70, 71]. These universality results were originally formulated and proven [17, 35, 36, 67–69] for traditional Wigner matrices, i.e. Hermitian random matrices with independent, identically distributed (i.i.d.) entries, and for their diagonal [55, 57] and non-diagonal [51] deformations. More recently they have been extended to Wigner-type ensembles, where the identical distribution is not required, and even to a large class of matrices with general correlated entries [7, 8, 11]. In different directions of generalization, sparse matrices [1, 32, 47, 56], adjacency matrices of regular graphs [14] and band matrices [19, 20, 66] have also been considered. In parallel developments bulk and edge universal statistics have been proven for invariant β-ensembles [12, 15, 17, 18, 29, 30, 52, 61, 62, 64, 65, 73] and even for their discrete analogues [13, 16, 41, 48], but often with very different methods.
A precondition for the Tracy–Widom distribution in all these generalizations of Wigner’s original ensemble is that the density of states vanishes as a square root near the spectral edges. The recent classification of the singularities of the solution to the underlying Dyson equation indeed revealed that at the edges only square root singularities appear [6, 10]. The density of states may also form a cusp-like singularity in the interior of the asymptotic spectrum, i.e. a single point of vanishing density with a cubic root growth behaviour on either side. Under very general conditions, no other type of singularity may occur. At the cusp a new local eigenvalue process emerges: the correlation functions are still determinantal, but the Pearcey kernel replaces the sine or the Airy kernel.
The Pearcey process was first established by Brézin and Hikami for the eigenvalues close to a cusp singularity of a deformed complex Gaussian Wigner (GUE) matrix. They considered the model of a GUE matrix plus a deterministic matrix (“external source”) having eigenvalues with equal multiplicity [21, 22]. The name Pearcey kernel and the corresponding Pearcey process were coined by [72] in reference to related functions introduced by Pearcey in the context of electromagnetic fields [63]. Similarly to the universal sine and Airy processes, it was later observed that the Pearcey process universality also extends beyond the realm of random matrices. Pearcey statistics have been established for non-intersecting Brownian bridges [3] and in skew plane partitions [60], always at criticality. We remark, however, that a critical cusp-like singularity does not always induce a Pearcey kernel, see e.g. [31].
In random matrix theory there are still only a handful of rather specific models for which the emergence of the Pearcey process has been proven. This has been achieved for deformed GUE matrices [2, 4, 23] and for Gaussian sample covariance matrices [42–44] by a contour integration method based upon the Brézin–Hikami formula. Beyond linear deformations, the Riemann–Hilbert method has been used to prove Pearcey statistics for a certain two-matrix model with a special quartic potential with appropriately tuned coefficients [40]. All these previous results concern only specific ensembles with a matrix integral representation. In particular, Wigner-type matrices are beyond the scope of this approach.
The main result of the current paper is the proof of the Pearcey universality at the cusps for complex Hermitian Wigner-type matrices under very general conditions. Since the classification theorem excludes any other singularity, this is the third and last universal statistics that emerges from natural generalizations of Wigner’s ensemble.
This third universality class has received somewhat less attention than the other two, presumably because cusps are not present in the classical Wigner ensemble. We also note that the most common invariant β-ensembles do not exhibit the Pearcey statistics, as their densities do not feature cubic root cusps but are instead 1/2-Hölder continuous for somewhat regular potentials [28]. The density vanishes either as a 2kth power in the interior or as a (2k + 1/2)th power at the edge, each with its own local statistics (see [26] also for the persistence of these statistics under small additive GUE perturbations before the critical time). Cusp singularities, hence Pearcey statistics, however, naturally arise within any one-parameter family of Wigner-type ensembles whenever two spectral bands merge as the parameter varies. The classification theorem implies that cusp formation is the only possible way for bands to merge, so in that sense Pearcey universality is ubiquitous as well.
The bulk and edge universality is characterized by the symmetry type alone: up to a natural shift and rescaling there is only one bulk and one edge statistic. In contrast, the cusp universality has a much richer structure: it is naturally embedded in a one-parameter family of universal statistics within each symmetry class. In the complex Hermitian case these are given by the one-parameter family of (extended) Pearcey kernels, see (2.5) later. Thinking in terms of fine-tuning a single parameter in the space of Wigner-type ensembles, the density of states already exhibits a universal local shape right before and right after the cusp formation; it features a tiny gap or a tiny nonzero local minimum, respectively [5, 10]. When the local lengthscale of these almost cusp shapes is comparable with the local eigenvalue spacing, the general Pearcey statistics is expected to emerge, with a parameter determined by the ratio of these two scales. Thus the full Pearcey universality typically appears in a double scaling limit.
Our proof follows the three step strategy that is the backbone of the recent approach to the WDM universality, see [38] for a pedagogical exposé and for a detailed history of the method. The first step in this strategy is a local law that identifies, with very high probability, the empirical eigenvalue distribution on a scale slightly above the typical eigenvalue spacing. The second step is to prove universality for ensembles with a tiny Gaussian component. Finally, in the third step this Gaussian component is removed by perturbation theory. The local law is used for precise a priori bounds in the second and third steps.
The main novelty of the current paper is the proof of the local law at optimal scale near the cusp. To put the precision in proper context, we normalize the real symmetric or complex Hermitian Wigner-type matrix H to have norm of order one. As customary, the local law is formulated in terms of the Green function G(z) = (H − z)^{-1} with spectral parameter z in the upper half plane. The local law then asserts that G(z) becomes deterministic in the large N limit as long as Im z is much larger than the local eigenvalue spacing around Re z. The deterministic approximant M(z) can be computed as the unique solution of the corresponding Dyson equation (see (2.2) and (3.1) later). Near the cusp the typical eigenvalue spacing is of order N^{-3/4}; compare this with the N^{-1} spacing in the bulk and the N^{-2/3} spacing near the edges. We remark that a local law at the cusp on a non-optimal scale has already been proven in [8]. In the current paper we improve this result to the optimal scale, which is essential for our universality proof at the cusp.
The main ingredient behind this improvement is an optimal estimate of the error term D (see (3.4) later) in the approximate Dyson equation that G(z) satisfies. The difference G − M is then roughly estimated by inverting the linear stability operator of the Dyson equation on MD. Previous estimates on D (in an averaged sense) were expressed in terms of the local density of states, which is of order one in the bulk but small at the edge and near the cusp. While this estimate cannot be improved in general, our main observation is that, to leading order, we need only the projection of MD onto the single unstable direction of the stability operator. We found that this projection carries an extra hidden cancellation due to a special local symmetry at the cusp, and thus the estimate on D effectively improves by an additional small factor. Customary power counting is not sufficient; we need to compute this error term explicitly, at least to leading order. We call this subtle mechanism cusp fluctuation averaging since it combines the well-established fluctuation averaging procedure with the additional cancellation at the cusp. Similar estimates extend to the vicinity of the exact cusps. We identify a key quantity, denoted by σ (in (3.5b) later), that measures the distance from the cusp in a canonical way: σ = 0 characterizes an exact cusp, while a small non-zero σ indicates that z is near an almost cusp. Our final estimate on D is correspondingly smaller near the cusp. Since the error term D is random and we need to control it in a high moment sense, we need to lift this idea to a high moment calculation, meticulously extracting the improvement from every single term. This is performed in the technically most involved Sect. 4, where we use a Feynman diagrammatic formalism to bookkeep the contributions of all terms. Originally we developed this language in [34] to handle random matrices with slow correlation decay, based on the revival of the cumulant expansion technique in [45] after [50]. In the current paper we incorporate the cusp into this analysis.
We identify a finite set of Feynman subdiagrams, called σ-cells (Definition 4.10), with value σ, that embody the cancellation effect at the cusp. To exploit the full strength of the cusp fluctuation averaging mechanism, we need to trace the fate of the σ-cells along the high moment expansion. The key point is that σ-cells are local objects in the Feynman graphs, thus their cancellation effects act simultaneously and the corresponding gains are multiplicative.
Formulated in the jargon of diagrammatic field theory, extracting the deterministic Dyson equation for M from the resolvent equation corresponds to a consistent self-energy renormalization of G. One way or another, such a procedure is behind every proof of the optimal local law with high probability. Our σ-cells conceptually correspond to a next order resummation of certain Feynman diagrams carrying a special cancellation.
We remark that we prove the optimal local law only for Wigner-type matrices and not yet for general correlated matrices, unlike in [11, 34]. In fact we use the simpler setup only for the estimate on D (Theorem 3.7); the rest of the proof is already formulated for the general case. This simpler setup allows us to present the cusp fluctuation averaging mechanism with the least amount of technicalities. The extension to the correlated case is based on the same mechanism but requires considerably more involved diagrammatic manipulations, which we prefer to develop in a separate work to contain the length of this paper.
Our cusp fluctuation averaging mechanism has further applications. It is used in [9] to prove an optimal cusp local law for the Hermitization of non-Hermitian random matrices with a variance profile, demonstrating that the technique is also applicable in settings where the flatness assumption is violated. The cusp of the Hermitization corresponds to the edge of the non-Hermitian model via Girko’s formula, thus the optimal cusp local law leads to an optimal bound on the spectral radius [9] and ultimately also to edge universality [25] for non-Hermitian random matrices.
Armed with the optimal local law, we then perform the other two steps of the three step analysis. The third step, relying on the Green function comparison theorem, is fairly standard, and previous proofs used in the bulk and at the edge need only minor adjustments. The second step, extracting universality from an ensemble with a tiny Gaussian component, can be done in two ways: (i) Brézin–Hikami formula with contour integration or (ii) Dyson Brownian Motion (DBM). Both methods require the local law as an input. In the current work we follow (i), mainly because this approach directly yields the Pearcey kernel, at least for the complex Hermitian symmetry class. In the companion work [24] we perform the DBM analysis adapting methods of [37, 53, 54] to the cusp. The main novelty in the current work and in [24] is the rigidity at the cusp on the optimal scale provided below. Once this key input is given, the proof of the edge universality from [53] is modified in [24] to the cusp setting, proving universality for the real symmetric case as well. We remark, however, that, to the best of our knowledge, the analogue of the Pearcey kernel for the real symmetric case has not yet been explicitly identified.
We now explain some novelties in the contour integration method. We first note that a similar approach was initiated in the fundamental work of Johansson on the bulk universality for Wigner matrices with a large Gaussian component in [49]. This method was later generalised to Wigner matrices with a small Gaussian component in [35], and it inspired the proof of bulk universality via the moment matching idea [68] once the necessary local law became available. The double scaling regime has also been studied, where the density is very small but the Gaussian component compensates for it [27]. More recently, the same approach was extended to the cusp for deformed GUE matrices [23, Theorem 1.3] and for sample covariance matrices, but only for a large Gaussian component [42–44]. For our cusp universality, we need to perform a similar analysis but with a small Gaussian component. We represent our matrix H as the sum of a small GUE component and an independent Wigner-type matrix. The contour integration analysis (Sect. 5.1) requires a Gaussian component of a certain minimal size, well above the fluctuation scale at the cusp.
The input of the analysis in Sect. 5.1 for the correlation kernel of H is a very precise description of the eigenvalues of the Wigner-type component just above the scale of the typical spacing between eigenvalues—this information is provided by our optimal local law. While in the bulk and in the regime of the regular edge finding an appropriate comparison ensemble is a relatively simple matter, in the vicinity of a cusp point the issue is very delicate. The main reason is that the cusp, unlike the bulk or the regular edge, is unstable under small perturbations; in fact it typically disappears and turns into a small positive local minimum if a small GUE component is added. Conversely, a cusp emerges if a small GUE component is added to an ensemble whose density has a small gap. In particular, even if the density function of H exhibits an exact cusp, the density of the Wigner-type component will have a small gap: in fact the density of H is given by the evolution of the semicircular flow up to the coupling time with initial data given by the density of the Wigner-type component. Unlike in the bulk and edge cases, here one cannot match the densities of H and of the comparison ensemble by a simple shift and rescaling. Curiously, the contour integral analysis for the local statistics of H at the cusp relies on an optimal local law for an ensemble whose density has a small gap, i.e. on spectral information far away from the cusp.
Thus we need an additional ingredient: a precise analysis of the semicircular flow near the cusp up to relatively long times; at the final time the original density with the cusp is recovered. Here the flow acts by free convolution with a semicircular density whose variance is given by the elapsed time s. In Sects. 5.2–5.3 we will see that the edges of the support of the density typically move linearly in the time s while the gap closes at a much slower rate. The relevant times are already beyond the simple perturbative regime of the cusp, whose natural lengthscale is much shorter. Thus we need a very careful tuning of the parameters: the analysis of a cusp for H requires constructing a matrix that is far from having a cusp but that after a relatively long time will develop a cusp exactly at the right location. In the estimates we heavily rely on various properties of the solution to the Dyson equation established in the recent paper [10]. These results go well beyond the precision of the previous work [5] and they apply to a very general class of Dyson equations, including a non-commutative von Neumann algebraic setup.
Notations. We now introduce some custom notations we use throughout the paper. For non-negative functions f(A, B), g(A, B) we use the notation f ≲ g if there exists a constant C(A), depending only on the parameters A, such that f(A, B) ≤ C(A) g(A, B) for all A, B. Similarly, we write f ∼ g if f ≲ g and g ≲ f. We do not indicate the dependence of constants on basic parameters that will be called model parameters later. If the implied constants are universal, we indicate this explicitly. Similarly, we write f ≪ g if f ≤ c g for some tiny absolute constant c > 0.
We denote vectors by bold-faced lower case Roman letters x, y ∈ C^N, and matrices by upper case Roman letters R, S ∈ C^{N×N} with entries R = (R_ij). The standard scalar product and Euclidean norm on C^N will be denoted by ⟨x, y⟩ and ‖x‖, while we also write ⟨R, S⟩ := N^{-1} Tr(R*S) for the scalar product of matrices, and ⟨R⟩ := N^{-1} Tr R. We write diag R for the diagonal vector of a matrix R, Diag(x) for the diagonal matrix obtained from a vector x, and R ⊙ S for the entrywise (Hadamard) product of matrices R, S. The usual operator norm induced by the vector norm will be denoted by ‖R‖, while the Hilbert–Schmidt (or Frobenius) norm will be denoted by ‖R‖_hs. For integers n we define [n] := {1, …, n}.
Main Results
The Dyson equation
Let W ∈ C^{N×N} be a self-adjoint random matrix and A = Diag(a) a deterministic diagonal matrix with entries a = (a_1, …, a_N) ∈ R^N. We say that W is of Wigner-type [8] if its entries w_ij for i ≤ j are centred, E w_ij = 0, independent random variables. We define the variance matrix or self-energy matrix S = (s_ij) by
s_ij := E |w_ij|^2. | 2.1
This matrix is symmetric with non-negative entries. In [8] it was shown that as N tends to infinity, the resolvent G(z) = (H − z)^{-1} of the deformed Wigner-type matrix H = A + W entrywise approaches a diagonal matrix M(z) = Diag(m(z)).
The entries m_i of M have positive imaginary parts and solve the Dyson equation
−1/m_i(z) = z − a_i + Σ_j s_ij m_j(z), for all i ∈ [N] and Im z > 0. | 2.2
We call M, or equivalently the vector m, the self-consistent Green’s function. The normalised trace ⟨M⟩ = N^{-1} Σ_i m_i of M is the Stieltjes transform of a unique probability measure ρ on R that approximates the empirical eigenvalue distribution of H increasingly well as N → ∞, motivating the following definition.
Definition 2.1
(Self-consistent density of states). The unique probability measure ρ on R, defined through ⟨M(z)⟩ = ∫_R (τ − z)^{-1} ρ(dτ) for all z in the upper half plane,
is called the self-consistent density of states (scDOS). Accordingly, its support supp ρ is called the self-consistent spectrum.
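For concreteness, the Dyson equation (2.2) can be solved numerically by a damped fixed-point iteration in the upper half plane, and the scDOS is then read off from the imaginary part of the normalised trace. The following minimal sketch (the function name, damping and tolerance are our own choices, not from the paper) recovers the semicircle law in the pure Wigner case a_i = 0, s_ij = 1/N:

```python
import numpy as np

def solve_dyson(a, S, z, damping=0.3, tol=1e-12, max_iter=10000):
    """Damped fixed-point iteration for -1/m_i = z - a_i + (S m)_i."""
    m = 1j * np.ones(len(a))  # start in the upper half plane
    for _ in range(max_iter):
        m_new = (1 - damping) * m + damping / (a - z - S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m

# Wigner case: a = 0, s_ij = 1/N; the scDOS is the semicircle law,
# whose density at the origin equals 1/pi.
N = 200
a = np.zeros(N)
S = np.ones((N, N)) / N
m = solve_dyson(a, S, z=1e-3j)
rho0 = np.mean(m).imag / np.pi
```

The iterate stays in the upper half plane, and the damping tames the marginally stable fixed point; at z = 10^{-3} i the computed density at the origin agrees with the semicircle value 1/π to about three digits.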
Cusp universality
We make the following assumptions:
Assumption (A)
(Bounded moments). The entries of the Wigner-type matrix W have bounded moments and the expectation A is bounded, i.e. there are positive constants C_0 and C_q, q ∈ N, such that |a_i| ≤ C_0 and E |w_ij|^q ≤ C_q N^{-q/2} for all i, j ∈ [N].
Assumption (B)
(Fullness). If the matrix H = A + W belongs to the complex Hermitian symmetry class, then we assume
E (t_1 Re w_ij + t_2 Im w_ij)^2 ≥ (c/N) (t_1^2 + t_2^2), t = (t_1, t_2) ∈ R^2, | 2.3
as quadratic forms, for some positive constant c. If H belongs to the real symmetric symmetry class, then we assume E w_ij^2 ≥ c/N.
Assumption (C)
(Bounded self-consistent Green’s function). In a neighbourhood of some fixed spectral parameter τ₀ ∈ R the self-consistent Green’s function is bounded, i.e. for positive constants C and δ we have |m_i(z)| ≤ C for all i ∈ [N] and all z with |z − τ₀| ≤ δ.
We call the constants appearing in Assumptions (A)–(C) model parameters. All generic constants C in this paper may implicitly depend on these model parameters. Dependence on further parameters, however, will be indicated.
Remark 2.2
The boundedness of m in Assumption (C) can be ensured by assuming some regularity of the variance matrix S. For more details we refer to [5, Chapter 6].
From the extensive analysis in [10] we know that the self-consistent density ρ is described by explicit shape functions in the vicinity of local minima with a small value of ρ and around small gaps in the support of ρ. The density in such almost cusp regimes is given by precisely one of the following three asymptotics:
- (i) Exact cusp. There is a cusp point τ₀ ∈ supp ρ in the sense that ρ(τ₀) = 0 and ρ(τ) > 0 for 0 < |τ − τ₀| ≪ 1. In this case the self-consistent density is locally around τ₀ given by (2.4a), for some γ > 0.
- (ii) Small gap. There is a maximal interval of size Δ > 0 on which ρ vanishes. In this case the density around this interval is, for some γ > 0, locally given by (2.4b), where the shape function around the edge is given by (2.4c).
- (iii) Non-zero local minimum. There is a local minimum at τ₀ of ρ such that ρ(τ₀) > 0 is small. In this case there exists some γ > 0 such that the density is locally given by (2.4d), where the shape function around the local minimum is given by (2.4e).
We note that the parameter γ in (2.4a) is chosen in a way which is convenient for the universality statement. We also note that the choices for γ in (2.4b)–(2.4d) are consistent with (2.4a) in the sense that in the limits Δ → 0 and ρ(τ₀) → 0 the respective formulae asymptotically agree. Depending on the three cases (i)–(iii), we define the almost cusp point as the cusp in case (i), the midpoint of the gap in case (ii), and the minimum in case (iii). When the local length scale of the almost cusp shape starts to match the eigenvalue spacing, i.e. if the gap size Δ or the minimum value ρ(τ₀) is sufficiently small on the scale of the eigenvalue spacing, then we call the local shape a physical cusp. This terminology reflects the fact that the shape becomes indistinguishable from the exact cusp when resolved with a precision above the eigenvalue spacing. In this case we call the almost cusp point a physical cusp point.
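For orientation we also record the leading behaviour of the three shapes up to constant factors, as distilled from the shape analysis of [5, 10] (a sketch only: the precise shape functions with their γ-dependent constants are those in (2.4); here τ₀ is the cusp or minimum location, Δ the gap length, 𝔢₊ the right endpoint of the gap in our notation, and ≍ denotes two-sided bounds up to constants):

```latex
% (i) exact cusp at \tau_0:
\rho(\tau_0 + x) \asymp |x|^{1/3},
% (ii) to the right of a gap of length \Delta (x \ge 0 small):
\rho(\mathfrak{e}_+ + x) \asymp \frac{x^{1/2}}{(x + \Delta)^{1/6}},
% (iii) non-zero local minimum \rho_0 := \rho(\tau_0) > 0:
\rho(\tau_0 + x) \asymp \rho_0 + \min\Bigl\{ \frac{x^2}{\rho_0^{5}},\, |x|^{1/3} \Bigr\}.
```

In particular, (ii) and (iii) cross over to the cusp behaviour |x|^{1/3} once |x| exceeds Δ and ρ₀³, respectively, which is the sense in which the three formulae asymptotically agree.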
The extended Pearcey kernel with a real parameter α (often denoted by τ in the literature) is given by
| 2.5 |
where Ξ is a contour consisting of rays from ±e^{iπ/4}∞ to 0 and rays from 0 to ±e^{−iπ/4}∞, and Φ is the ray from −i∞ to i∞. The simple Pearcey kernel with parameter α = 0 has first been observed in the context of random matrix theory by [21, 22]. We note that (2.5) is a special case of a more general extended Pearcey kernel defined in [72, Eq. (1.1)].
It is natural to express universality in terms of a rescaled k-point function which we define implicitly by
for test functions f, where the summation is over all subsets of k distinct integers from [N].
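Concretely, with the usual convention the implicit definition reads (a sketch; λ_{i_1}, …, λ_{i_k} denote eigenvalues of H, and the rescaling around the physical cusp point is suppressed):

```latex
\mathbb{E} \sum_{\{i_1, \dots, i_k\} \subset [N]}
  f\bigl(\lambda_{i_1}, \dots, \lambda_{i_k}\bigr)
  = \int_{\mathbb{R}^k} f(\mathbf{x})\,
    p_k^{(N)}(\mathbf{x})\, \mathrm{d}\mathbf{x},
```

so that the summation over subsets of k distinct indices on the left matches the text above.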
Theorem 2.3
Let H be a complex Hermitian Wigner-type matrix satisfying Assumptions (A)–(C). Assume that the self-consistent density ρ within the neighbourhood from Assumption (C) has a physical cusp, i.e. that ρ is locally given by (2.4) for some γ > 0 and either (i) ρ has a cusp point τ₀, or (ii) a small gap of size Δ, or (iii) a small non-zero local minimum at τ₀. Then for any smooth compactly supported test function it holds that
where
| 2.6 |
and c > 0 is a small constant depending only on k.
Local law
We emphasise that the proof of Theorem 2.3 requires a very precise a priori control on the fluctuation of the eigenvalues even at singular points of the scDOS. This control is expressed in the form of a local law with an optimal convergence rate down to the typical eigenvalue spacing. We now define the scale on which the eigenvalues are predicted to fluctuate around a given spectral parameter τ.
Definition 2.4
(Fluctuation scale). We define the self-consistent fluctuation scale η_f = η_f(τ) through
∫_{|x − τ| ≤ η_f} ρ(x) dx = 1/N
if τ ∈ supp ρ. If τ ∉ supp ρ, then η_f is defined as the fluctuation scale at a nearby edge. More precisely, let I be the largest (open) interval with τ ∈ I and I ∩ supp ρ = ∅, and set Δ := min{|I|, 1}. Then we define
η_f := Δ^{1/9} / N^{2/3} | 2.7
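Assuming the standard in-support definition—η_f(τ) is the radius of a window around τ that carries exactly 1/N of the mass of ρ—the cusp scaling η_f ∼ N^{-3/4} can be checked numerically for a model cubic-root density. The sketch below (all names and the normalisation ρ(x) = |x|^{1/3} are ours) solves the defining equation by bisection:

```python
import numpy as np

def fluctuation_scale(window_mass, N, lo=1e-16, hi=1.0):
    """Geometric bisection (to cover many decades) for eta with
    window_mass(eta) = 1/N, where window_mass(eta) is the rho-mass
    of the window [tau - eta, tau + eta]."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if window_mass(mid) > 1.0 / N:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo * hi)

# model cusp density rho(x) = |x|^(1/3) around tau = 0:
# its mass on [-eta, eta] is 1.5 * eta^(4/3)
mass = lambda eta: 1.5 * eta ** (4.0 / 3.0)
etas = {N: fluctuation_scale(mass, N) for N in (10**6, 10**9)}
# effective exponent d(log eta_f)/d(log N); exactly -3/4 for this density
slope = np.log(etas[10**9] / etas[10**6]) / np.log(1000.0)
```

The same computation with a constant density gives slope −1 (bulk) and with a square-root density slope −2/3 (edge), matching the three spacing scales mentioned in the introduction.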
We will see later (cf. (A.8b)) that (2.7) is the fluctuation scale of the edge eigenvalue adjacent to a spectral gap of length Δ, as predicted by the local behaviour of the scDOS. The control on the fluctuation of eigenvalues is expressed in terms of the following local law.
Theorem 2.5
(Local law). Let H be a deformed Wigner-type matrix of the real symmetric or complex Hermitian symmetry class satisfying Assumptions (A)–(C). Fix any ε > 0. Then the local law holds uniformly for all spectral parameters z = τ + iη with η ≥ N^ε η_f(τ) in the form
|⟨x, (G(z) − M(z)) y⟩| ≤ N^ε ‖x‖ ‖y‖ ( √(ρ(z)/(Nη)) + 1/(Nη) ) | 2.8a
with very high probability for any deterministic vectors x, y ∈ C^N, and
|⟨B (G(z) − M(z))⟩| ≤ N^ε ‖B‖ / (Nη) | 2.8b
with very high probability for any deterministic matrix B. Here ρ(z) denotes the harmonic extension of the scDOS to the complex upper half plane. The constants in (2.8) only depend on ε and the model parameters.
We remark that later we will prove the local law also in a form which is uniform in the spectral parameter, albeit with a more complicated error term, see Proposition 3.11. The local law Theorem 2.5 implies a large deviation result for the fluctuation of eigenvalues on the optimal scale uniformly for all singularity types.
Corollary 2.6
(Uniform rigidity). Let H be a deformed Wigner-type matrix of the real symmetric or complex Hermitian symmetry class satisfying Assumptions (A)–(C). Then
for any ε > 0 and D > 0 and some constant C > 0, where we defined the (self-consistent) eigenvalue index k(τ) := ⌈N ∫_{−∞}^{τ} ρ(ω) dω⌉, and where λ_1 ≤ ⋯ ≤ λ_N denote the ordered eigenvalues of H.
In particular, the fluctuation of the eigenvalue whose expected position is closest to the cusp location does not exceed N^ε η_f for any ε > 0 with very high probability. The following corollary specialises Corollary 2.6 to the neighbourhood of a cusp.
Corollary 2.7
(Cusp rigidity). Let H be a deformed Wigner-type matrix of the real symmetric or complex Hermitian symmetry class satisfying Assumptions (A)–(C), and let τ₀ be the location of an exact cusp. Then there is a well-defined index k₀ ∈ [N], that we call the cusp eigenvalue index. For any ε > 0, D > 0 and indices k with |k − k₀| ≤ c N we have
where λ_k are the ordered eigenvalues of H and γ_k are the self-consistent eigenvalue locations, defined through N ∫_{−∞}^{γ_k} ρ(ω) dω = k.
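The N^{-3/4} cusp scale can also be read off from the quantiles. With the (assumed) convention N ∫_{−∞}^{γ_k} ρ = k and a model cusp density ρ(x) = |x|^{1/3} to the right of the cusp, the k-th location beyond the cusp sits at γ ≈ (4k/3N)^{3/4}, i.e. at distance (k/N)^{3/4} up to constants. The sketch below (names and normalisation ours) verifies this by bisection:

```python
import numpy as np

def quantile_location(k, N, hi=10.0):
    """Bisection for gamma >= 0 with N * integral_0^gamma x^(1/3) dx = k,
    i.e. N * 0.75 * gamma^(4/3) = k, for the model cusp density |x|^(1/3)."""
    cdf = lambda g: N * 0.75 * g ** (4.0 / 3.0)
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

N = 10**8
g1, g16 = quantile_location(1, N), quantile_location(16, N)
# gamma_k = (4k/(3N))^(3/4), so increasing k by a factor of 16
# moves the location by 16^(3/4) = 8
ratio = g16 / g1
```

The (k/N)^{3/4} spacing is much wider than the bulk spacing k/N for small k, which is the quantitative content of the cusp rigidity scale.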
We remark that a variant of Corollary 2.7 holds more generally for almost cusp points. It is another consequence of Corollary 2.6 that with high probability there are no eigenvalues much further than the fluctuation scale away from the self-consistent spectrum. We note that the following corollary generalises [11, Corollary 2.3] by also covering small internal gaps.
Corollary 2.8
(No eigenvalues outside the support of the self-consistent density). Under the assumptions of Theorem 2.5 we have
for any ε > 0 and D > 0, where c and C are positive constants depending on the model parameters; the latter also depends on ε and D.
Remark 2.9
Theorem 2.5 and its consequences, Corollaries 2.6, 2.7 and 2.8, also hold for both symmetry classes if Assumption (B) is replaced by the condition that there exist L ∈ N and c > 0 such that (S^L)_{ij} ≥ c/N for all i, j. A variance profile S satisfying this condition is called uniformly primitive (cf. [6, Eq. (2.5)] and [5, Eq. (2.11)]). Note that uniform primitivity is weaker than condition (B) on two accounts. First, it involves only the variance matrix, unlike (2.3) in the complex Hermitian case, which also involves the second moments E w_ij². Second, uniform primitivity allows certain matrix elements of W to vanish. The proof under these more general assumptions follows the same strategy but requires minor modifications within the stability analysis.1
Local Law
In order to directly appeal to recent results on the shape of the solution to the Matrix Dyson Equation (MDE) from [10] and the flexible diagrammatic cumulant expansion from [34], we first reformulate the Dyson equation (2.2) for N-vectors into a matrix equation that will approximately be satisfied by the resolvent G. This viewpoint also allows us to treat diagonal and off-diagonal elements of G on the same footing. In fact, (2.2) is a special case of
−M(z)^{-1} = z − A + S[M(z)], Im z > 0, | 3.1
for a solution matrix M = M(z) ∈ C^{N×N} with positive definite imaginary part, Im M := (M − M*)/(2i) > 0. The uniqueness of the solution M with Im M > 0 was shown in [46]. Here the linear (self-energy) operator is defined as S[R] := E W R W and it preserves the cone of positive definite matrices. Definition 2.1 of the scDOS and its harmonic extension (cf. Theorem 2.5) directly generalise to the solution to (3.1), see [10, Definition 2.2].
In the special case of Wigner-type matrices the self-energy operator is given by
S[R] = Diag(S r) + T ⊙ R^t | 3.2
where r := diag(R) denotes the diagonal vector of R, S was defined in (2.1), T = (t_ij) with t_ij := E w_ij², and ⊙ denotes the entrywise Hadamard product. The solution to (3.1) is then given by M = Diag(m), where m solves (2.2). Note that the action of S[·] on diagonal matrices is independent of T, hence the Dyson equation (2.2) for Wigner-type matrices is solely determined by the matrix S; the matrix T plays no role. However, T plays a role in analyzing the error matrix D, see (3.4) below.
The proof of the local law consists of three largely separate arguments. The first part concerns the analysis of the stability operator
B[R] := R − M S[R] M | 3.3
and shape analysis of the solution M to (3.1). The second part is proving that the resolvent G is indeed an approximate solution to (3.1) in the sense that the error matrix
D := W G + S[G] G | 3.4
is small. In previous works [8, 11, 34] it was sufficient to establish smallness of D in an isotropic and in an averaged form, tested against general bounded deterministic vectors and matrices. In the vicinity of a cusp, however, it becomes necessary to establish an additional cancellation when D is averaged against the unstable direction of the stability operator. We call this new effect cusp fluctuation averaging. Finally, the third part of the proof consists of a bootstrap argument starting far away from the real axis and iteratively lowering the imaginary part of the spectral parameter while maintaining the desired bound on G − M.
Remark 3.1
We remark that the proofs of Theorem 2.5, and Corollaries 2.6 and 2.8 use the independence assumption on the entries of W only very locally. In fact, only the proof of a specific bound on D (see (3.15) later), which follows directly from the main result of the diagrammatic cumulant expansion, Theorem 3.7, uses the vector structure and the specific form of the self-energy operator in (3.2) at all. Therefore, assuming (3.15) as an input, our proof of Theorem 2.5 remains valid also in the correlated setting of [11, 34], as long as the self-energy operator is flat (see (3.6) below), and Assumption (C) is replaced by the corresponding assumption on the boundedness of M.
For brevity we will carry out the proof of Theorem 2.5 only in the vicinity of almost cusps, as the local law in all other regimes was already proven in [8, 11] to optimality. Therefore, within this section we will always assume that the spectral parameter lies inside a small neighbourhood
of the location τ₀ of a local minimum of the scDOS ρ within the self-consistent spectrum. Here c is a sufficiently small constant depending only on the model parameters. We will further assume that either (i) ρ(τ₀) is sufficiently small and τ₀ is the location of a cusp or an internal minimum, or (ii) ρ(τ₀) = 0 and τ₀ is an edge adjacent to a sufficiently small gap of length Δ. The results from [10] guarantee that these are the only possibilities, see (2.4). In other words, we assume that τ₀ is a local minimum of ρ with a shape close to a cusp (cf. (2.4)). For concreteness we will also assume that if τ₀ is an edge, then it is a right edge (with a gap of length Δ to the right) and ρ(τ₀) = 0. The case when τ₀ is a left edge has the same proof.
We now introduce a quantity that will play an important role in the cusp fluctuation averaging mechanism. We define
| 3.5a |
It was proven in [10, Lemma 5.5] that σ extends to the real line as a 1/3-Hölder continuous function wherever the scDOS is smaller than some threshold depending only on the model parameters. In the specific case of S[·] as in (3.2) the definition simplifies to
| 3.5b |
since M is diagonal, where multiplication and division of vectors are understood entrywise. When evaluated at the location τ₀, the scalar σ(τ₀) provides a measure of how far the shape of the singularity at τ₀ is from an exact cusp. In fact, if σ(τ₀) = 0 and ρ(τ₀) = 0, then τ₀ is a cusp location. To see the relationship between the emergence of a cusp and the limit σ → 0, we refer to [10, Theorem 7.7 and Lemma 6.3]. The quantities in (3.5b) have their direct analogues in [10], denoted there by their own symbols. The significance of σ for the classification of singularity types in Wigner-type ensembles was first realised in [5]. Although in this paper we will use only [10] and will not rely on [5], we remark that the definition of σ in [5, Eq. (8.11)] differs slightly from the definition (3.5b). However, both definitions equally fulfil the purpose of classifying singularity types, since the ensuing scalar quantities are comparable inside the self-consistent spectrum. For the interested reader, we briefly relate our notations to the respective conventions in [10] and [5]: the quantity denoted by f in both [10] and [5] is the normalized eigendirection of the saturated self-energy operator F in the respective settings and is related to the quantities in (3.5b) by a simple rescaling, justifying the comparability to σ from (3.5b).
Stability and shape analysis
From (3.1) and (3.4) we obtain the quadratic stability equation
for the difference . In order to apply the results of [10] to the stability operator , we first have to check that the flatness condition [10, Eq. (3.10)] is satisfied for the self-energy operator . We claim that is flat, i.e.
| 3.6 |
as quadratic forms for any positive semidefinite . We remark that in the earlier paper [8] on the Wigner-type case only the upper bound defined the concept of flatness. Here, with the definition (3.6), we follow the convention of the more recent works [10, 11, 34], which is more conceptual. We also warn the reader that in the complex Hermitian Wigner-type case the condition implies (3.6) only if is bounded away from .
However, the flatness (3.6) is an immediate consequence of the fullness Assumption (B). Indeed, (B) is equivalent to the condition that the covariance operator of all entries above and on the diagonal, defined as , is uniformly strictly positive definite. This implies that for some constant , where is the covariance operator of a GUE or GOE matrix, depending on the symmetry class we consider. This means that can be split into , where and are the self-energy operators corresponding to and , respectively. It is now an easy exercise to check that and thus is flat.
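Schematically, denoting the self-energy operator by \(\mathcal{S}\) and the normalized trace by \(\langle R\rangle = N^{-1}\operatorname{Tr}R\) (our notation, in the spirit of the conventions of [10, 11]; a sketch, not a verbatim restatement), the flatness condition and the splitting argument read:

```latex
% Flatness of the self-energy operator:
\[
  c\,\langle R\rangle\,\mathbb{1}
  \;\le\; \mathcal{S}[R]
  \;\le\; C\,\langle R\rangle\,\mathbb{1},
  \qquad R \ge 0, \quad \langle R\rangle := \tfrac{1}{N}\operatorname{Tr} R.
\]
% Fullness (B) splits off a GUE/GOE component,
\[
  \mathcal{S} \;=\; c\,\mathcal{S}^{\mathrm{GUE}} + \widetilde{\mathcal{S}},
  \qquad \mathcal{S}^{\mathrm{GUE}}[R] = \langle R\rangle\,\mathbb{1},
\]
% and since the remainder preserves positive semidefiniteness,
\[
  \mathcal{S}[R] \;\ge\; c\,\mathcal{S}^{\mathrm{GUE}}[R]
  \;=\; c\,\langle R\rangle\,\mathbb{1}
  \qquad\text{for all } R \ge 0.
\]
```

Here the identity \(\mathcal{S}^{\mathrm{GUE}}[R] = \langle R\rangle\,\mathbb{1}\) follows from the GUE covariance \(\mathbb{E}\,w_{ij}w_{kl} = \delta_{il}\delta_{jk}/N\).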
In particular, [10, Proposition 3.5 and Lemma 4.8] are applicable implying that [10, Assumption 4.5] is satisfied. Thus, according to [10, Lemma 5.1] for spectral parameters z in a neighbourhood of the operator has a unique isolated eigenvalue of smallest modulus and associated right and left eigendirections normalised such that . We denote the spectral projections to and to its complement by and . For the convenience of the reader we now collect some important quantitative information about the stability operator and its unstable direction from [10].
Proposition 3.2
(Properties of the MDE and its solution). The following statements hold true uniformly in assuming flatness as in (3.6) and the uniform boundedness of for ,
-
(i)The eigendirections are norm-bounded and the operator is bounded on the complement to its unstable direction, i.e.
3.7a -
(ii)The density is comparable with the explicit function given by
3.7b -
(iii)The eigenvalue of smallest modulus satisfies
and we have the comparison relations3.7c 3.7d -
(iv)The quantities and in (3.7c)–(3.7d) can be replaced by the following more explicit auxiliary quantities
which are monotonically increasing in . More precisely, it holds that and, in the case where is a cusp or a non-zero local minimum, we also have that . For the case when is a right edge next to a gap of size there exists a constant such that in the regime and in the regime .3.7e
Proof
We first explain how to translate the notations from the present paper to the notations in [10]: The operators are simply denoted by S, B, Q in [10]; the matrices here are denoted by there. The bound on in (3.7a) follows directly from [10, Eq. (5.15)]. The bounds on in (3.7a) follow from the definition of the stability operator (3.3) together with the fact that (by Assumption (C)) and , following from the upper bound in flatness (3.6). The asymptotic expansion of in (3.7b) follows from [10, Remark 7.3] and [5, Corollary A.1]. The claims in (iii) follow directly from [10, Proposition 6.1]. Finally, the claims in (iv) follow directly from [10, Remark 10.4].
The following lemma establishes simplified lower bounds on whenever is much larger than the fluctuation scale . We defer the proof of the technical lemma which differentiates various regimes to the Appendix.
Lemma 3.3
Under the assumptions of Proposition 3.2 we have uniformly in with that
We now define an appropriate matrix norm in which we will measure the distance between G and M. The -norm is defined exactly as in [11] and similarly to the one first introduced in [34]. It is a norm comparing matrix elements on a large but finite set of vectors with a hierarchical structure. To define this set we introduce some notations. For second order cumulants of matrix elements we use the short-hand notation . We also use the short-hand notation for the -weighted linear combination of such cumulants. We use the notation that replacing an index in a scalar quantity by a dot () refers to the corresponding vector, e.g. is a short-hand notation for the vector . Matrices with vector subscripts are understood as short-hand notations for , and matrices with mixed vector and index subscripts are understood as with being the a-th normalized standard basis vector. We fix two vectors and some large integer K and define the sets of vectors
Here the cross and the direct part of the 2-cumulants refer to the natural splitting dictated by the Hermitian symmetry. In the specific case of (3.2) we simply have and . Then the -norm is given by
We remark that the set, and hence also the norm, depends on z via . We omit this dependence from the notation as it plays no role in the estimates.
In terms of this norm we obtain the following estimate on in terms of its projection onto the unstable direction of the stability operator . It is a direct consequence of a general expansion of approximate quadratic matrix equations whose linear stability operators have a single eigenvalue close to 0, as given in Lemma A.1.
Proposition 3.4
(Cubic equation for ). Fix , and use . For fixed and on the event that the difference admits the expansion
| 3.8a |
with an error matrix E and the scalar that satisfies the approximate cubic equation
| 3.8b |
Here, the error satisfies the upper bound
| 3.8c |
where R is a deterministic matrix with and the coefficients of the cubic equation satisfy the comparison relations
| 3.8d |
Proof
We first establish some important bounds involving the -norm. We claim that for any matrices
| 3.9 |
The proof of (3.9) follows verbatim as in [11, Lemma 3.4] with (3.7a) as an input. Moreover, the bound on follows directly from the bound on . Obviously, we also have .
Next, we apply Lemma A.1 from the Appendix with the choices
The operator in Lemma A.1 is chosen as the stability operator (3.3). Then (A.1) is satisfied with according to (3.9) and (3.7a). With we verify (3.8a) directly from (A.5), where satisfies
| 3.10 |
Here we used and . The coefficients are defined through (A.4) and R is given by
Now we bound by Young’s inequality, absorb the error terms bounded by
into the cubic term, , by introducing a modified coefficient and use that for any . Finally, we safely divide (3.10) by to verify (3.8b) with and . For the fact on and the comparison relations (3.8d) we refer to (3.7c)–(3.7d).
Probabilistic bound
We now collect bounds on the error matrix D from [34, Theorem 4.1] and Sect. 4. We first introduce the notion of stochastic domination.
Definition 3.5
(Stochastic domination). Let be sequences of non-negative random variables. We say that X is stochastically dominated by Y (and use the notation ) if
for any and some family of positive constants that is uniform in N and other underlying parameters (e.g. the spectral parameter z in the domain under consideration).
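In the standard form used throughout the random matrix literature (cf. [33]), with sequences \(X = (X^{(N)})\) and \(Y = (Y^{(N)})\), the definition reads:

```latex
\[
  X \prec Y
  \quad :\Longleftrightarrow\quad
  \forall\,\varepsilon > 0,\ D > 0:\qquad
  \mathbb{P}\big( X^{(N)} > N^{\varepsilon}\, Y^{(N)} \big)
  \;\le\; C(\varepsilon, D)\, N^{-D}
  \qquad\text{for all } N \in \mathbb{N},
\]
```

with \(C(\varepsilon, D)\) uniform in \(N\) and the other underlying parameters.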
It can be checked (see [33, Lemma 4.4]) that satisfies the usual arithmetic properties, e.g. if and , then also and . Furthermore, to formulate bounds on a random matrix R compactly, we introduce the notations
for random matrices R and a deterministic control parameter . We also introduce high moment norms
for , scalar valued random variables X and random matrices R. To translate high moment bounds into high probability bounds and vice versa we have the following easy lemma [11, Lemma 3.7].
Lemma 3.6
Let R be a random matrix, a deterministic control parameter such that and for some , and let be a fixed integer. Then we have the equivalences
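Behind these equivalences lies the standard Markov-inequality argument; as a sketch in generic notation (with \(X\) a scalar random variable and \(\psi\) a deterministic control parameter), if \(\|X\|_p \le N^{\varepsilon}\psi\) for every fixed \(p\) and every \(\varepsilon > 0\), then

```latex
\[
  \mathbb{P}\big( |X| > N^{2\varepsilon}\,\psi \big)
  \;\le\; \frac{\mathbb{E}\,|X|^{p}}{\big(N^{2\varepsilon}\,\psi\big)^{p}}
  \;\le\; N^{-\varepsilon p},
\]
```

and choosing \(p \ge D/\varepsilon\) yields \(X \prec \psi\); conversely, high probability bounds are turned into high moment bounds using the a priori polynomial bound on the variables.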
Expressed in terms of the -norm we have the following high-moment bounds on the error matrix D. The bounds (3.11a)–(3.11b) have already been established in [34, Theorem 4.1]; we just list them for completeness. The bounds (3.11c)–(3.11d), however, are new; they capture the additional cancellation at the cusp and are the core novelty of the present paper. The additional smallness comes from averaging against specific weights from (3.5b).
Theorem 3.7
(High moment bound on D with cusp fluctuation averaging). Under the assumptions of Theorem 2.5 for any compact set there exists a constant C such that for any , and matrices/vectors it holds that
| 3.11a |
| 3.11b |
Moreover, for the specific weight matrix we have the improved bound
| 3.11c |
and the improved bound on the off-diagonal component
| 3.11d |
where we defined the following z-dependent quantities
and .
Theorem 3.7 will be proved in Sect. 4. We now translate the high moment bounds of Theorem 3.7 into high probability bounds via Lemma 3.6 and use those to establish bounds on and the error in the cubic equation for . To simplify the expressions we formulate the bounds in the domain
| 3.12 |
Lemma 3.8
(High probability error bounds). Fix sufficiently small and suppose that , and hold at fixed , and assume that the deterministic control parameters satisfy . Then for any sufficiently small it holds that
| 3.13a |
as well as
| 3.13b |
where the coefficients are those from Proposition 3.4, and we recall that .
Proof
We translate the high moment bounds (3.11a)–(3.11b) into high probability bounds using Lemma 3.6 and to find
| 3.14 |
In particular, these bounds together with the assumed bounds on guarantee the applicability of Proposition 3.4. Now we use (3.14) and (3.9) in (3.8a) to get (3.13b). Here we used (3.9), translated -bounds into -bounds on and vice versa via Lemma 3.6, and absorbed the factors into by using that K can be chosen arbitrarily large. It remains to verify (3.13a). In order to do so, we first claim that
| 3.15 |
for any sufficiently small .
Proof of (3.15)
We first collect two additional ingredients from [10] specific to the vector case.
- (a) The imaginary part of the solution is comparable to its average in the sense for all and some , and, in particular, .
- (b) The eigendirections are diagonal and are approximately given by
for some constants .3.16
Indeed, (a) follows directly from [10, Proposition 3.5] and the approximations in (3.16) follow directly from [10, Corollary 5.2]. The fact that are diagonal follows from simplicity of the eigendirections in the matrix case, and the fact that is diagonal and that preserves the space of diagonal matrices as well as the space of off-diagonal matrices. On the latter acts stably as . Thus the unstable directions lie inside the space of diagonal matrices.
We now turn to the proof of (3.15) and first note that, according to (a) and (b) we have
| 3.17 |
with errors in the -norm sense, for some constant , to see
where is a deterministic vector with uniformly bounded entries. Since by (3.14), the bound on the first term in (3.15) follows together with (3.11c) via Lemma 3.6. Now we consider the second term in (3.15). We split into its diagonal and off-diagonal components. Since and preserve the space of diagonal and the space of off-diagonal matrices we find
| 3.18 |
with an appropriate deterministic matrix having bounded entries. In particular, the cross terms vanish and the first term is bounded by
| 3.19 |
according to (3.14). By taking the off-diagonal part of (3.8a) and using the fact that M and and therefore also are diagonal (cf. (b) above) we have
for any such that by Young’s inequality in the last step. Together with (3.17), (3.14) and the assumption that we then compute
Thus the bound on the second term on the lhs. in (3.15) follows together with (3.18)–(3.19) by and (3.11d) via Lemma 3.6. This completes the proof of (3.15).
With (3.14) and (3.15) the upper bound (3.8c) on the error of the cubic equation (3.8b) takes the same form as the rhs. of (3.15) if K is sufficiently large depending on . By the first estimate in (3.13b) we can redefine the control parameter on as and the claim (3.13a) follows directly with (3.15), thus completing the proof of Lemma 3.8.
Bootstrapping
Now we will show that the difference converges to zero uniformly for all spectral parameters as defined in (3.12). For convenience we refer to existing bounds on far away from the real line to establish a rough bound on in, say, . We then iteratively lower the threshold on by appealing to Proposition 3.4 and Lemma 3.8 until we establish the rough bound in all of . As a second step we then improve the rough bound iteratively until we obtain Theorem 2.5.
Lemma 3.9
(Rough bound). For any there exists a constant such that on the domain we have the rough bound
| 3.20 |
Proof
The rough bound (3.20) in a neighbourhood of a cusp was first established for Wigner-type random matrices in [8]. For the convenience of the reader we present a streamlined proof that is adapted to the current setting. The lemma is an immediate consequence of the following statement. Let be a sufficiently small step size, depending on . Then for any on the domain we have
| 3.21 |
We prove (3.21) by induction over k. For sufficiently small the induction start holds due to the local law away from the self-consistent spectrum, e.g. [34, Theorem 2.1].
Now as induction hypothesis suppose that (3.21) holds on , and in particular, , for any according to Lemma 3.6. The monotonicity of the function (see e.g. [34, proof of Prop. 5.5]) implies and therefore, according to Lemma 3.6, that on . This, in turn, implies on by (3.11a) and Lemma 3.6, provided is chosen small enough. We now fix and a large integer K as the parameters of for the rest of the proof and omit them from the notation but we stress that all estimates will be uniform in . We find , by using a simple union bound and for some . Thus, for K large enough, we can use (3.8a), (3.8b), (3.8c) and (3.9) to infer
| 3.22 |
on the event , and on . Now we use the following lemma [10, Lemma 10.3] to translate the first estimate in (3.22) into a bound on . For the rest of the proof we keep fixed and consider the coefficients and as functions of .
Lemma 3.10
(Bootstrapping cubic inequality). For let be complex valued functions and be continuous functions such that at least one of the following holds true:
-
(i)
, , and are monotonically increasing, and at ,
-
(ii)
, , and is monotonically increasing.
Then any continuous function that satisfies the cubic inequality
on , has the property
| 3.23 |
By direct arithmetic we can now verify that the coefficients in (3.8b) and the auxiliary coefficients defined in (3.7e) satisfy the assumptions of Lemma 3.10 with the choice of the constant function for any , using only the information on given by the comparison relations (3.8d). As an example, in the regime where is a right edge and , we have and , and both functions are monotonically increasing in . Then Assumption (ii) of Lemma 3.10 is satisfied. All other regimes are handled similarly.
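The elementary arithmetic behind such cubic inequalities can be checked numerically. The sketch below (purely illustrative; it is not Lemma 3.10, whose precise coefficient functions are suppressed here) verifies the standard implication: if \(x \ge 0\) satisfies \(x^3 \le d_1 x + d_2\), then \(x \le \sqrt{2d_1} + (2d_2)^{1/3}\), since either \(d_1 x \ge d_2\) or \(d_2 \ge d_1 x\) and in each case one term dominates.

```python
import numpy as np

rng = np.random.default_rng(0)

def cubic_bound(d1, d2):
    """If x >= 0 satisfies x**3 <= d1*x + d2, then either x**3 <= 2*d1*x
    or x**3 <= 2*d2, hence x <= sqrt(2*d1) + (2*d2)**(1/3)."""
    return np.sqrt(2 * d1) + (2 * d2) ** (1 / 3)

def max_admissible(d1, d2):
    """Largest x >= 0 with x**3 - d1*x - d2 <= 0.  For d2 > 0 the
    polynomial is negative at 0 and increasing for large x, so the
    admissible set on [0, inf) is the interval [0, x*] ending at the
    largest real root x*."""
    roots = np.roots([1.0, 0.0, -d1, -d2])
    real = roots[np.abs(roots.imag) < 1e-7].real
    return real.max()

# the explicit bound dominates the sharp threshold on random coefficients
for _ in range(1000):
    d1, d2 = rng.uniform(0.0, 10.0, size=2)
    assert max_admissible(d1, d2) <= cubic_bound(d1, d2) + 1e-9
```

This is the mechanism by which the comparison relations (3.8d) translate the cubic inequality for into a size bound.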
We now set and
By the induction hypothesis we have with overwhelming probability, so that the condition in (3.23) holds, and we conclude for . For small enough the second bound in (3.22) implies . By continuity and the definition of we conclude , finishing the proof of (3.21).
Proof of Theorem 2.5
The bounds within the proof hold true uniformly for , unless explicitly specified otherwise. We therefore suppress this qualifier in the following statements. First we apply Lemma 3.8 with the choice , i.e. we do not treat the imaginary part of the resolvent separately. With this choice the first inequality in (3.13b) becomes self-improving and after iteration shows that
| 3.24 |
and, in other words, (3.13a) holds with . This implies that if for some arbitrarily small , then
| 3.25 |
holds for all sufficiently small with overwhelming probability, where we defined
| 3.26 |
For this conclusion we used the comparison relations (3.8d), Proposition 3.2(iv) as well as (3.7b), and the bound .
The bound (3.25) is a self-improving estimate on in the following sense. For and let
Then (3.25) with implies that . Applying Lemma 3.10 with , , yields the improvement . Here we needed to check the condition in (3.23) but at we have , so . After a k-step iteration until becomes smaller than , we find , where we used that can be chosen arbitrarily small. We are now ready to prove the following bound which we, for convenience, record as a proposition.
Proposition 3.11
For any we have the bounds
| 3.27 |
where , and are given in (3.26), (3.7b) and (3.7e), respectively.
Proof
Using proven above, we apply (3.24) with to conclude the first inequality in (3.27). For the second inequality in (3.27) we use the estimate on from (3.13b) with and .
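The self-improving iteration used in the proof above can be caricatured numerically. The sketch below is purely illustrative (the update map and the coefficients are made up, not the paper's): a rough a priori bound is fed back into the estimate it produces, and the iteration contracts geometrically to its fixed point, which is the final bound.

```python
import math

def bootstrap(update, x0, tol=1e-12, max_iter=200):
    """Iterate a self-improving bound x_{k+1} = update(x_k) until the
    improvement stalls; returns the (approximate) fixed point."""
    x = x0
    for _ in range(max_iter):
        nx = update(x)
        if abs(nx - x) < tol:
            return nx
        x = nx
    return x

# toy self-improving estimate: Lambda <= sqrt(a * Lambda) + b, with the
# rough a priori bound Lambda <= 1; iteration improves it down to ~a
a, b = 1e-2, 1e-6
limit = bootstrap(lambda x: math.sqrt(a * x) + b, x0=1.0)
```

Each application of the update improves the previous bound by roughly a factor depending on the contraction rate, which is why finitely many (here a few dozen) iterations suffice.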
The bound on from (3.27) implies complete delocalisation of eigenvectors uniformly at singularities of the scDOS. The following corollary was already established in [8, Corollary 1.14] and, given (3.27), the proof follows the same line of reasoning.
Corollary 3.12
(Eigenvector delocalisation). Let be an eigenvector of H corresponding to an eigenvalue for some sufficiently small positive constant . Then for any deterministic we have
The bounds (3.27) simplify in the regime above the typical eigenvalue spacing to
| 3.28 |
using Lemma 3.3 which implies . The bound on is further improved in the case when is an edge and, in addition to , we assume for some , i.e. if is well inside a gap of size . Then we find by the definition of in (2.7) and use Lemma 3.3 and (3.7b), (3.7e) to conclude
| 3.29 |
In the last bound we used and . Using (3.29) in (3.27) yields the improvement
| 3.30 |
The bounds on from (3.28) and (3.30), inside and outside the self-consistent spectrum, allow us to show the uniform rigidity, Corollary 2.6. We postpone these arguments until after we finish the proof of Theorem 2.5. The uniform rigidity implies that for we can estimate the imaginary part of the resolvent via
| 3.31 |
for any normalised , where denotes the normalised eigenvector corresponding to . For the first inequality in (3.31) we used Corollary 3.12 and for the second we applied Corollary 2.6 that allows us to replace the Riemann sum with an integral as .
Using with (3.31), we apply Lemma 3.8, repeating the strategy from the beginning of the proof. But this time we can choose the control parameter . In this way we find
| 3.32 |
where we defined
Note that the estimates in (3.32) are simpler than those in (3.27). The reason is that the additional terms , and in (3.27) are a consequence of the presence of in (3.13a), (3.13b). With these are immediately absorbed into and are no longer present. The second term in the definition of can be dropped since we still have (this follows from Lemma 3.3 if , and directly from (3.7b), (3.7e) if ). This implies , so the first bound in (3.32) proves (2.8a).
Now we turn to the proof of (2.8b). Given the second bound in (3.28), it is sufficient to consider the case when and with . In this case Proposition 3.2 yields . Thus we have
and therefore the second bound in (3.32) implies (2.8b). This completes the proof of Theorem 2.5.
Rigidity and absence of eigenvalues
The proofs of Corollaries 2.6 and 2.8 rely on the bounds on from (3.28) and (3.30). As before, we may restrict ourselves to the neighbourhood of a local minimum of the scDOS which is either an internal minimum with a small value of , a cusp location or a right edge adjacent to a small gap of length . All other cases, namely the bulk regime and regular edges adjacent to large gaps, have been treated prior to this work [8, 11].
Proof of Corollary 2.8
Let us denote the empirical eigenvalue distribution of H by and consider the case when is a right edge, for any and . Then we show that there are no eigenvalues in with overwhelming probability. We apply [8, Lemma 5.1] with the choices
for any and some . We use (3.30) to estimate the error terms and from [8, Eq. (5.2)] by and see that , showing that with overwhelming probability the interval does not contain any eigenvalues. A simple union bound finishes the proof of Corollary 2.8.
Proof of Corollary 2.6
Now we establish Corollary 2.6 around a local minimum of the scDOS. Its proof has two ingredients. First we follow the strategy of the proof of [8, Corollary 1.10] to see that
| 3.33 |
for any , i.e. we have a very precise control on . In contrast to the statement in that corollary we have a local law (3.28) with uniform error and thus the bound (3.33) does not deteriorate close to . We warn the reader that the standard argument inside the proof of [8, Corollary 1.10] has to be adjusted slightly to arrive at (3.33). In fact, when inside that proof the auxiliary result [8, Lemma 5.1] is used with the choice , , for some , this choice should be changed to , , and , where is chosen sufficiently large such that lies far to the left of the self-consistent spectrum.
The control (3.33) suffices to prove Corollary 2.6 for all except for the case when is an edge at a gap of length and for some fixed and , i.e. except for some eigenvalues close to the edge with arbitrarily small . In all other cases, the proof follows the same argument as the proof of [8, Corollary 1.11] using the uniform 1/N-bound from (3.33) and we omit the details here.
The reason for having to treat the eigenvalues very close to the edge separately is that (3.33) gives no information on which side of the gap these eigenvalues are found. Obtaining this information requires the second ingredient, the band rigidity,
| 3.34 |
for any , and large enough N. The combination of (3.34) and (3.33) finishes the proof of Corollary 2.6.
Band rigidity has been shown in [11], as part of the proof of Corollary 2.5, in the case when is bounded from below. We will now adapt this proof to the case of small gap sizes . Since by Corollary 2.8 with overwhelming probability there are no eigenvalues in , it suffices to show (3.34) for . As in the proof of [11, Corollary 2.5] we consider the interpolation
between the original random matrix and the deterministic matrix , for . The interpolation is designed such that the solution of the MDE corresponding to is constant at spectral parameter , i.e. . Let denote the scDOS of . Exactly as in the proof from [11] it suffices to show that no eigenvalue crosses the gap along the interpolation with overwhelming probability, i.e. that for any we have
| 3.35 |
Here is some spectral parameter inside the gap, continuously depending on t, such that . In [11] was chosen independent of t, but the argument remains valid with any other choice of . We call the connected component of that contains and denote the gap length. In particular, and for all by [10, Lemma 8.1(ii)]. For concreteness we choose to be the spectral parameter lying exactly in the middle of . The 1/3-Hölder continuity of , hence and in t follows from [10, Proposition 10.1(a)]. Via a simple union bound it suffices to show that for any fixed we have no eigenvalue in .
Since with overwhelming probability, in the regime for some small constant , the matrix is a small perturbation of the deterministic matrix whose resolvent at spectral parameter is bounded by Assumption (C), in particular . By 1/3-Hölder continuity hence , and for some in this regime with very high probability. Since by [10, Proposition 10.1(a)] there are no eigenvalues of in a neighbourhood of , proving (3.35) for .
For we will now show that for any . In fact, we have . This is a consequence of [10, Lemma D.1]. More precisely, we use the equivalence of (iii) and (v) of that lemma. We check (iii) and conclude the uniform distance to the self-consistent spectrum by (v). Since and we only need to check that the stability operator of has a bounded inverse. We write in terms of the saturated self-energy operator , where and . Afterwards we use that (cf. [7, Eq. (4.24)]) and to first show the uniform bound and then improve the bound to using the trick of expanding in a geometric series from [7, Eqs. (4.60)–(4.63)]. This completes the argument that . Now we apply [34, Corollary 2.3] to see that there are no eigenvalues of around as long as t is bounded away from zero and one, proving (3.35) for this regime.
Finally, we are left with the regime for some sufficiently small . By [10, Proposition 10.1(a)] the self-consistent Green’s function corresponding to is bounded even in a neighbourhood of , whose size only depends on model parameters. In particular, Assumptions (A)–(C) are satisfied for and Corollary 2.8, which was already proved above, is applicable. Thus it suffices to show that the size of the gap in containing is bounded from below by for some . The size of the gap can be read off from the following relationship between the norm of the saturated self-energy operator and the size of the gap: Let H be a random matrix satisfying (A)–(C) and be well inside the interior of the gap of length in the self-consistent spectrum for a sufficiently small . Then
| 3.36 |
where in the first step we used [7, Eqs. (4.23)–(4.25)], in the second step (3.7b), and in the last step that . Applying the analogue of (3.36) for with and using that , we obtain . Combining this inequality with (3.36) and using that for , we have , i.e. . In particular, the gap size never drops below . This completes the proof of the last regime in (3.35).
Cusp Fluctuation Averaging and Proof of Theorem 3.7
We will use the graphical multivariate cumulant expansion from [34] which automatically exploits the self-energy renormalization of to highest order. Since the final formal statement requires some custom notations, we first give a simple motivating example to illustrate the type of expansion and its graphical representation. If is Gaussian, then integration by parts shows that
| 4.1 |
where we recall that is the second cumulant of the matrix entries indexed by double indices , , and denotes the matrix of all zeros except for a 1 in the (a, b)th entry. Since for non-Gaussian or higher powers of the expansion analogous to (4.1) consists of much more complicated polynomials in resolvent entries, we represent them concisely as the values of certain graphs. As an example, the rhs. of (4.1) is represented simply by
| 4.2 |
The graphs retain only the relevant information of the complicated expansion terms and chains of estimates can be transcribed into simple graph surgeries. Graphs also help identify critical terms that have to be estimated more precisely in order to obtain the improved high moment bound on . For example, the key cancellation mechanism behind the cusp fluctuation averaging is encoded in a small distinguished part of the expansion that can conveniently be identified as certain subgraphs, called the -cells, see Definition 4.10 later. It is easy to count, estimate and manipulate -cells as part of a large graph, while following the same operations on the level of formulas would be almost intractable.
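For orientation, the expansion (4.1) rests on the Gaussian integration-by-parts (Stein) identity; in generic notation (a sketch with the common index placement, which may differ from the conventions of (4.1)) it reads

```latex
\[
  \mathbb{E}\big[\, w_{\alpha}\, f(W) \,\big]
  \;=\; \sum_{\beta} \kappa\big(w_{\alpha}, w_{\beta}\big)\,
        \mathbb{E}\big[\, \partial_{w_{\beta}} f(W) \,\big],
  \qquad
  \partial_{w_{ab}} G \;=\; -\, G\, \Delta^{ab}\, G,
  \qquad \big(\Delta^{ab}\big)_{cd} = \delta_{ac}\,\delta_{bd}.
\]
```

Combining the two identities produces exactly the second-cumulant weighted sums of resolvent chains that the graphs represent.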
First we review some of the basic nomenclature from [34]. We consider random matrices with diagonal expectation A and complex Hermitian or real symmetric zero mean random component W indexed by some abstract set J of size . We recall that Greek letters stand for labels, i.e. double-indices from , whereas Roman letters stand for single indices. If , then we set for its transpose. Underlined Greek letters stand for multisets of labels, whereas bold-faced Greek letters stand for tuples of labels, with the counting combinatorics being, for our purposes, their only relevant difference.
According to [34, Proposition 4.4] with it follows from the assumed independence that for general (conjugate) linear functionals , of bounded norm
| 4.3a |
where we recall that
| 4.3b |
and that
| 4.3c |
Some notations in (4.3) require further explanation. The qualifier “if ” is satisfied for those terms in which is a summation variable when the brackets in the product are opened. The notation indicates the union of multisets.
For even p we apply (4.3) with for and for . This is obviously a special case of which was considered in the so-called averaged case of [34] with arbitrary B of bounded operator norm since . It was proved in [34] that
which is not good enough at the cusp. We can nevertheless use the graphical language developed in [34] to estimate the complicated right hand side of (4.3).
Graphical representation via double index graphs
The graphs (or Feynman diagrams) introduced in [34] encode the structure of all terms in (4.3). Their (directed) edges correspond to resolvents G, while vertices correspond to ’s. Loop edges are allowed while parallel edges are not. Resolvents G and their Hermitian conjugates are distinguished by different types of edges. Each vertex v carries a label and we need to sum over all labels. Some labels are summed independently; these are the -labels in (4.3), while the -labels are strongly restricted: in the independent case they can only be of the type or . These graphs will be called “double indexed” graphs since the vertices are naturally equipped with labels (double indices). Here we introduced the terminology “double indexed” for the graphs in [34] to distinguish them from the “single indexed” graphs to be introduced later in this paper.
To be more precise, the graphs in [34] were vertex-coloured graphs. The colours encoded a resummation of the terms in (4.3): vertices whose labels (or their transposes) appeared in one of the cumulants in (4.3) received the same colour. We then first summed over the colours and only afterwards summed over all labels compatible with the given colouring. According to [34, Proposition 4.4] and the expansion of the main term [34, Eq. (49)], for every even p it holds that
| 4.4a |
where is a certain finite collection of vertex coloured directed graphs with p connected components, and , the value of the graph , will be recalled below. According to [34] each graph fulfils the following properties:
Proposition 4.1
(Properties of double index graphs). There exists a finite set of double index graphs such that (4.4) hold. Each fulfils the following properties.
There exist exactly p connected components, all of which are oriented cycles. Each vertex has one incoming and one outgoing edge.
Each connected component contains at least one vertex and one edge. In particular, a single vertex with a loop edge is a legal connected component.
Each colour colours at least two and at most 6p vertices.
If a colour colours exactly two vertices, then these vertices are in different connected components.
The edges represent the resolvent matrix G or its adjoint . Within each component either all edges represent G or all edges represent . Accordingly we call the components either G or -cycles.
Within each cycle there is one designated edge which is represented as a wiggled line in the graph. The designated edge represents the matrix in a G-cycle and the matrix in a -cycle.
For each colour there exists at least one component in which a vertex of that colour is connected to the matrix . According to (f) this means that if the relevant vertex is in a G-cycle, then the designated (wiggled) edge is its incoming edge. If the relevant vertex is in a -cycle, then the designated edge is its outgoing edge.
If V is the vertex set of and for each colour , denotes the c-coloured vertices then we recall that
| 4.4b |
where the ultimate product is the product over all p cycles of the graph. By the notation we indicate a directed cycle with vertices . Depending on whether a given cycle is a G-cycle or a -cycle, it contributes one of the factors indicated after the last curly bracket in (4.4b), with the vertex order chosen in such a way that the designated edge represents the or matrix. As an example illustrating (4.4b) we have
| 4.5 |
Actually, in [34] the graphical representation of the graph is simplified: it does not contain all the information encoded in the graph. First, the directions of the edges are not indicated. In the picture both cycles should be oriented clockwise. Secondly, the types of the edges are not indicated, apart from the wiggled line. In fact, the edges in the second subgraph stand for , while those in the first subgraph stand for G. To translate the pictorial representation directly, let the striped vertices in the first and second cycle be associated with and the dotted vertices with . Accordingly, the wiggled edge in the first cycle stands for , while the wiggled edge in the second cycle stands for . The reason why these details were omitted in the graphical representation of a double index graph is that they do not influence the basic power counting estimate of its value used in [34].
Single index graphs
In [34] we operated with double index graphs that are structurally simple and appropriate for bookkeeping complicated correlation structures, but they are not suitable for detecting the additional smallness we need at the cusp. The contributions of the graphs in [34] were estimated by a relatively simple power counting argument in which only the number of (typically off-diagonal) resolvent elements was recorded. In fact, for many subleading graphs this procedure already gave a very good bound that is sufficient at the cusps as well. The graphs carrying the leading contribution, however, now have to be computed to a higher accuracy, and this leads to the concept of “single index graphs”. These are obtained by a certain refinement and reorganization of the double index graphs via a procedure, called graph resolution, to be defined later. The main idea is to restructure the double index graph in such a way that instead of labels (double indices) its vertices naturally represent single indices a and b. Every double index graph will give rise to a finite number of resolved single index graphs. The double index graphs that require a more precise analysis compared with [34] will be resolved to single index graphs. After we explain the structure of the single index graphs and the graph resolution procedure, double index graphs will not be used in this paper any more. Thus, unless explicitly stated otherwise, by graph we will mean a single index graph in the rest of this paper.
We now define the set of single index graphs we will use in this paper. They are directed graphs, where parallel edges and loops are allowed. Let the graph be denoted by with vertex set and edge set . We will assign a value to each which comprises weights assigned to the vertices and specific values assigned to the edges. Since an edge may represent different objects, we will introduce different types of edges that will be graphically distinguished by different line style. We now describe these ingredients precisely.
Vertices.
Each vertex is equipped with an associated index . Graphically the vertices are represented by small unlabelled bullets
, i.e. in the graphical representation the actual index is not indicated. It is understood that all indices will be independently summed over the entire index set J when we compute the value of the graph.
Vertex weights.
Each vertex carries some weight vector which is evaluated at the index associated with the vertex. We generally assume these weights to be uniformly bounded in N, i.e. . Visually we indicate vertex weights by incoming arrows as in
. Vertices without explicitly indicated weight may carry an arbitrary bounded weight vector. We also use the notation
to indicate the constant vector as the weight; this corresponds to summing up the corresponding index unweighted.
G-edges.
The set of G-edges is denoted by . These edges describe resolvents and there are four types of G-edges. First of all, there are directed edges corresponding to G and G*, in the sense that a directed G- or G*-edge initiating from one vertex and terminating in another represents the corresponding matrix element evaluated in the indices associated with the vertices v and u. Besides these two there are also edges representing G − M and G* − M*. Distinguishing between G and G − M is, for practical purposes, only important if the edge occurs in a loop. Indeed, a diagonal element of G − M is typically much smaller than the corresponding diagonal element of G, while G − M basically acts just like G when a, b are summed independently. Graphically we will denote the four types of G-edges by ![]()
where all these edges can also be loops. The convention is that continuous lines represent G, dashed lines correspond to G*, while the diamond on both types of edges indicates the subtraction of M or M*. An edge carries its type as its attribute, so as a shorthand notation we can simply write for , , and depending on which type of G-edge e represents. Due to their special role in the later estimates, we will separately bookkeep those G − M or G* − M* edges that appear looped. We thus define the subset as the set of G-edges of type G − M or G* − M* such that i(e) = t(e). This notation reflects the fact that looped edges are evaluated on the diagonal of G − M.
(G-)edge degree.
For any vertex v we define its in-degree and out-degree as the numbers of incoming and outgoing G-edges, respectively. Looped edges (v, v) count towards both the in- and the out-degree. We denote the total degree by .
Interaction edges.
Besides the G-edges we also have interaction edges, , representing the cumulants . A directed interaction edge represents the matrix given by the cumulant
| 4.6 |
For all graphs and all interaction edges we have the symmetries and . Thus (4.6) is compatible with exchanging the roles of u and v. For the important case when it follows that the interaction from u to v is given by S if u has one incoming and one outgoing G-edge and T if u has two incoming G-edges, i.e.
Visually we will represent interaction edges as ![]()
Although the interaction matrix is completely determined by the in- and out-degrees of the adjacent vertices i(e), t(e) we still write out the specific S and T names because these will play a special role in the latter part of the proof. As a short hand notation we shall frequently use to denote the matrix element selected by the indices associated with the initial and terminal vertex of e. We also note that we do not indicate the direction of edges associated with S as the matrix S is symmetric.
Generic weighted edges.
Besides the specific G-edges and interaction edges, we additionally allow for generic edges, reminiscent of the generic vertex weights introduced above. They will be called generic weighted edges, or weighted edges for short. To every weighted edge e we assign a weight matrix which is evaluated as when we compute the value of the graph by summing up all indices. To simplify the presentation we will not indicate the precise form of the weight matrix but only its entry-wise scaling as a function of N. A weighted edge presented as
represents an arbitrary weight matrix whose entries scale like
. We denote the set of weighted edges by . For a given weighted edge we record the entry-wise scaling of in an exponent in such a way that we always have
.
Graph value.
For graphs we define their value
| 4.7 |
which differs slightly from that in (4.4b) because it applies to a different class of graphs.
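To make the notion of a graph value concrete, here is a toy numerical sketch (our own illustration with placeholder matrices, not the paper's precise weights): the value of a graph sums, over all assignments of the single indices, the product of vertex weights, interaction-edge entries and G-edge entries.

```python
import numpy as np

# Toy sketch of a graph value in the spirit of (4.7): sum over all
# single-index assignments of the product of vertex weights, an
# interaction-edge entry and the G-edge entries. All matrices below are
# made-up stand-ins for illustration only.
N = 4
rng = np.random.default_rng(0)
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
S = np.abs(rng.standard_normal((N, N)))  # stand-in interaction matrix
w = np.ones(N)                           # constant vertex weight

# Value of two vertices joined by an interaction edge and a G / conj(G)
# pair of edges: sum_{a,b} w_a S_ab G_ab conj(G_ab).
value_loops = sum(
    w[a] * S[a, b] * G[a, b] * np.conj(G[a, b])
    for a in range(N)
    for b in range(N)
)

# The same value with all single indices summed at once.
value_einsum = np.einsum("a,ab,ab,ab->", w, S, G, np.conj(G))
```

The einsum form makes explicit that every vertex corresponds to one freely summed single index.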
Single index resolution
There is a natural mapping from double indexed graphs to a collection of single indexed graphs that encodes the rearranging of the terms in (4.4b) when the summation over labels is reorganized into summation over single indices. Now we describe this procedure.
Definition 4.2
(Single index resolution). By the single index resolution of a double vertex graph we mean the collection of single index graphs obtained through the following procedure.
-
(i)
For each colour, the identically coloured vertices of the double index graph are mapped into a pair of vertices of the single index graph.
-
(ii)
The pair of vertices in the single index graph stemming from a fixed colour is connected by an interaction edge in the single index graph.
-
(iii)
Every (directed) edge of the double index graph is naturally mapped to a G-edge of the single index graph. While mapping equally coloured vertices in the double index graph to vertices u, v connected by an interaction edge there are binary choices of whether we map the incoming edge of to an incoming edge of u and the outgoing edge of to an outgoing edge of v or vice versa. In this process we are free to consider the mapping of (or any other vertex, for that matter) as fixed by symmetry of .
-
(iv)
If a wiggled G-edge is mapped to an edge from u to v, then v is equipped with a weight of . If a wiggled -edge is mapped to an edge from u to v, then u is equipped with a weight of . All vertices with no weight specified in this process are equipped with the constant weight .
We define the set as the set of all graphs obtained from the double index graphs via the single index resolution procedure.
Remark 4.3
-
(i)
We note that some ingredients described in Sect. 4.2 for a typical graph in will be absent for graphs . For example, for all .
-
(ii)
We also remark that loops in double index graphs are never mapped into loops in single index graphs along the single index resolution. Indeed, double index loops are always mapped to edges parallel to the interaction edge of the corresponding vertex.
A few simple facts immediately follow from the single index construction in Definition 4.2. From (i) it is clear that the number of vertices in the single index graph is twice the number of colours of the double index graph. From (ii) it follows that the number of interaction edges in the single index graph equals the number of colours of the double index graph. Finally, from (iii) it is obvious that if for some colour c there are vertices in the double index graph with colour c, then the resolution of this colour gives rise to single index graphs. Since these resolutions are done independently for each colour, we obtain that the number of single index graphs originating from one double index graph is
Since the number of double index graphs in is finite, so is the number of graphs in .
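The counting behind this finiteness can be sketched as follows. Our reading of Definition 4.2(iii) is that every vertex of a given colour except one (which may be considered fixed by the symmetry of the cumulant) contributes an independent binary choice, so a colour appearing on k vertices resolves in 2**(k − 1) ways; this is consistent with the four possibilities in the example (4.9) below, where two colours appear on two vertices each.

```python
from math import prod

# Number of single index resolutions of a double index graph, under the
# reading that each colour on k vertices contributes 2**(k - 1) binary
# choices (one vertex per colour may be considered fixed).
def num_resolutions(colour_counts):
    """colour_counts[c] = number of double index vertices of colour c."""
    return prod(2 ** (k - 1) for k in colour_counts)
```

Since each colour count is finite and there are finitely many colours, the product is finite.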
Let us present an example of single index resolution applied to the graph from (4.5) where we, for the sake of transparency, label all vertices and edges. is a graph consisting of one 2-cycle on the vertices and one 2-cycle on the vertices as in
![]() |
4.8 |
with and being of equal colour (i.e. being associated to labels connected through cumulants). In order to explain steps (i)–(iii) of the construction we first neglect that some edges may be wiggled, but we restore the orientation of the edges in the picture. We then fix the mapping of to pairs of vertices for in such a way that the incoming edges of are incoming at and the outgoing edges from are outgoing from . It remains to map to and for each i there are two choices of doing so that we obtain the four possibilities 
which translates to
| 4.9 |
in the language of single index graphs where the S, T assignment agrees with (4.6). Finally we want to visualize step (iv) in the single index resolution in our example. Suppose that in (4.8) the edges and are G-edges while and are edges with and being wiggled (in agreement with (4.5)). According to (iv) it follows that the terminal vertex of and the initial vertex of are equipped with a weight of while the remaining vertices are equipped with a weight of . The first graph in (4.9) would thus be equipped with the weights 
Single index graph expansion.
With the value definition in (4.7) it follows from Definition 4.2 that
| 4.10 |
We note that in contrast to the value definition for double index graphs (4.4), where each average in (4.4b) contains a 1/N prefactor, the single index graph value (4.7) does not include the prefactor. We chose this convention mainly because the exponent p in the prefactor cannot easily be read off from the single index graph itself, whereas in the double index graph p is simply the number of connected components.
We now collect some simple facts about the structure of these graphs in which directly follow from the corresponding properties of the double index graphs listed in Proposition 4.1.
Fact 1
The interaction edges form a perfect matching of , in particular . Moreover, and therefore the number of vertices in the graph is even and satisfies . Finally, since for we have and , consequently also . The degree furthermore satisfies the bounds for each .
Fact 2
The weights associated with the vertices are some non-negative powers of in such a way that the total power of all ’s is exactly p. The trivial zeroth power, i.e. the constant weight is allowed. Furthermore, the weights are distributed in such a way that at least one non-trivial weight is associated with each interacting edge .
Examples of graphs
We now turn to some examples explaining the relation between the double index graphs from [34] and single index graphs. We note that the single index graphs actually contain more information, because they specify edge directions, specify weights explicitly and differentiate between G and G* edges. This information was not necessary for the power counting arguments used in [34], but it will be crucial for the improved estimates.
We start with the graphs representing the following simple equality following from
We now turn to the complete graphical representation for the second moment in the case of Gaussian entries,
| 4.11 |
where we again stress that the double index graphs hide the specific weights and the fact that one of the connected components actually contains edges. In terms of single index graphs, the rhs. of (4.11) can be represented as the sum over the values of the six graphs
![]() |
4.12 |
The first two graphs were already explained above. The additional four graphs come from the second term in the rhs. of (4.11). Since is non-zero only if or , there are four possible choices of relations among the and labels in the two kappa factors. For example, the first graph in the second line of (4.12) corresponds to the choice , . Written out explicitly with summation over single indices, this value is given by
where in the picture the left index corresponds to , the top index to , the right one to and the bottom one to .
We conclude this section by providing an example of a graph with some degree higher than two which only occurs in the non-Gaussian situation and might contain looped edges. For example, in the expansion of in the non-Gaussian setup there is the term 
where and , in accordance with (4.6).
Simple estimates on
In most cases we aim only at estimating the value of a graph instead of precisely computing it. The simplest power counting estimate on (4.7) uses that the matrix elements of G and those of the generic weight matrix K are bounded by a constant, while the matrix elements of are bounded by . Thus the naive estimate on (4.7) is
| 4.13 |
where we used that the interaction edges form a perfect matching and that , . The somewhat informal notation in (4.13) hides a technical subtlety. The resolvent entries are indeed bounded by a constant in the sense of very high moments, but not almost surely. We will make bounds like the one in (4.13) rigorous in a high moments sense in Lemma 4.8.
The estimate (4.13) ignores the fact that typically only the diagonal resolvent matrix elements of G are of order one; the off-diagonal matrix elements are much smaller. This is manifested in the Ward identity
| 4.14a |
Thus the sum of off-diagonal resolvent elements is usually smaller than its naive size of order N, at least in the relevant regime. This is quantified by the so-called Ward estimates
| 4.14b |
Similarly to (4.13) the inequalities in (4.14b) are meant in a power counting sense ignoring that the entries of might not be bounded by almost surely but only in some high moment sense.
As a consequence of (4.14b) we can gain a factor of for each off-diagonal (that is, connecting two separate vertices) G-factor, but clearly only for at most two G-edges per adjacent vertex. Moreover, this gain can be used only once for each edge, and not twice separately when summing up the indices at both adjacent vertices. Consequently a careful counting of the total number of gains is necessary; see [34, Section 4.3] for details.
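The Ward identity (4.14a) underlying these gains is an exact algebraic fact about resolvents and can be checked numerically. The following sketch (our own illustration, with a stand-in Hermitian matrix) verifies that the row sums of |G_ab|^2 equal Im(G_aa)/η.

```python
import numpy as np

# Numerical check of the Ward identity
#   sum_b |G_ab|^2 = Im(G_aa) / eta   for  G = (H - z)^(-1),  z = E + i*eta,
# which holds exactly for any Hermitian H. H below is a stand-in example.
N, E, eta = 50, 0.3, 0.1
rng = np.random.default_rng(1)
W = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (W + W.conj().T) / (2 * np.sqrt(N))   # Hermitian Wigner-type matrix
G = np.linalg.inv(H - (E + 1j * eta) * np.eye(N))

row_sums = np.sum(np.abs(G) ** 2, axis=1)   # sum_b |G_ab|^2 for each a
diag_imag = np.imag(np.diag(G)) / eta       # Im(G_aa) / eta
```

The identity follows from G − G* = 2iη GG*, so the agreement is exact up to floating-point error.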
Ward bounds for the example graphs from Sect. 4.4. From the single index graphs drawn in (4.12) we can easily obtain the known bound . Indeed, the last four graphs contribute a combinatorial factor of from the summations over four single indices and a scaling factor of from the size of S, T. Furthermore, we can gain a factor of for each G-edge through Ward estimates and the bound follows. Similarly, the first two graphs contribute a factor of from summation and S/T and a factor of from the Ward estimates, which overall gives . As this example shows, the bookkeeping of available Ward-estimates is important and we will do so systematically in the following sections.
Improved estimates on : Wardable edges
For the sake of transparency we briefly recall the combinatorial argument used in [34], which also provides the starting point for the refined estimate in the present paper. Compared to [34], however, we phrase the counting argument directly in the language of the single index graphs. We only aim to gain from the G-edges adjacent to vertices of degree two or three; for vertices of higher degree the most naive estimate is already sufficient as demonstrated in [34]. We collect the vertices of degree two and three in the set and collect the G-edges adjacent to in the set . In [34, Section 4.3] a specific marking procedure on the G-edges of the graph is introduced that has the following properties. For each we put a mark on at most two adjacent G-edges in such a way that those edges can be estimated via (4.14b) while performing the summation. In this case we say that the mark comes from the v-perspective. An edge may have two marks coming from the perspective of each of its adjacent vertices. Later, marked edges will be estimated via (4.14b) while summing up . After doing this for all of we call an edge in marked effectively if it either (i) has two marks, or (ii) has one mark and is adjacent to only one vertex from . While subsequently using (4.14b) in the summation of for (in no particular order) on the marked edges (and estimating the remaining edges adjacent to v trivially) we can gain at least as many factors of as there are effectively marked edges. Indeed, this follows simply from the fact that effectively marked edges are never estimated trivially during the procedure just described, no matter the order of vertex summation.
Fact 3
For each there is a marking of edges adjacent to vertices of degree at most 3 such that there are at least effectively marked edges.
Proof
On the one hand we find from Fact 1 (more specifically, from the equality for ) that
| 4.15 |
On the other hand it can be checked that for every pair with all G-edges adjacent to u or v can be marked from the u, v-perspective. Indeed, this is a direct consequence of Proposition 4.1(d): Because the two vertices in the double index graph being resolved to (u, v) cannot be part of the same cycle it follows that all of the (two, three or four) G-edges adjacent to the vertices with index u or v are not loops (i.e. do not represent diagonal resolvent elements). Therefore they can be bounded by using (4.14b). Similarly, it can be checked that for every edge with at most two G-edges adjacent to u or v can remain unmarked from the u, v-perspective. By combining these two observations it follows that at most
| 4.16 |
edges in are ineffectively marked since those are counted as unmarked from the perspective of one of its vertices. Subtracting (4.16) from (4.15) it follows that in total at least
edges are marked effectively, just as claimed.
In [34] it was sufficient to estimate the value of each graph in by subsequently estimating all effectively marked edges using (4.14b). For the purpose of improving the local law at the cusp, however, we need to introduce certain operations on the graphs of which allow us to estimate the graph value to a higher accuracy. It is essential that during those operations we keep track of the number of edges we estimate using (4.14b). Therefore we now introduce a more flexible way of recording these edges. We first recall a basic definition [58] from graph theory.
Definition 4.4
A graph is called k-degenerate if every induced subgraph has minimal degree at most k.
It is well known that being k-degenerate is equivalent to the following sequential property. We provide a short proof for convenience.
Lemma 4.5
A graph is k-degenerate if and only if there exists an ordering of vertices such that for each it holds that
| 4.17 |
where for , denotes the induced subgraph on the vertex set .
Proof
Suppose the graph is k-degenerate and let . Then there exists some vertex such that by definition. We now consider the subgraph induced by and, by definition, again find some vertex of degree . Continuing inductively we find a vertex ordering with the desired property.
Conversely, assume there exists a vertex ordering such that (4.17) holds for each m. Let be an arbitrary subset and let . Then it holds that
and the proof is complete.
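The sequential characterization of Lemma 4.5 directly suggests a greedy algorithm for testing k-degeneracy. Here is a minimal sketch (our own illustration; for simplicity it ignores loops, which is harmless for the subgraphs of G-edges considered here).

```python
# Greedy test for k-degeneracy, implementing the sequential property of
# Lemma 4.5: repeatedly delete a vertex whose degree in the remaining
# induced subgraph is at most k; the graph is k-degenerate iff this
# procedure empties the vertex set.
def is_k_degenerate(vertices, edges, k):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u != v:  # this sketch ignores loops for simplicity
            adj[u].add(v)
            adj[v].add(u)
    remaining = set(vertices)
    while remaining:
        low = next((v for v in remaining if len(adj[v] & remaining) <= k), None)
        if low is None:
            return False  # current induced subgraph has minimal degree > k
        remaining.remove(low)
    return True
```

For example, a cycle is 2-degenerate but not 1-degenerate, matching the fact that each of its vertices carries exactly two edges that can be Ward-estimated.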
The reason for introducing this graph theoretical notion is that it is equivalent to the possibility of estimating edges effectively using (4.14b). A subset of G-edges in can be fully estimated using (4.14b) if and only if there exists a vertex ordering such that we can subsequently remove vertices in such a way that in each step at most two edges from are removed. Due to Lemma 4.5 this is the case if and only if is 2-degenerate. For example, the graph induced by the effectively marked G-edges is a 2-degenerate graph. Indeed, each effectively marked edge is adjacent to at least one vertex which has degree at most 2 in : Vertices of degree 2 in are trivially at most of degree 2 in , and vertices of degree 3 in are also at most of degree 2 in as they can only be adjacent to 2 effectively marked edges. Consequently any induced subgraph of has to contain some vertex of degree at most 2 and thereby is 2-degenerate.
Definition 4.6
For a graph we call a subset of G-edges Wardable if the subgraph is 2-degenerate.
Lemma 4.7
For each there exists a Wardable subset of size
| 4.18 |
Proof
This follows immediately from Fact 3, the observation that is 2-degenerate and the fact that sub-graphs of 2-degenerate graphs remain 2-degenerate.
For each we choose a Wardable subset satisfying (4.18). At least one such set is guaranteed to exist by the lemma. For graphs with several possible such sets, we arbitrarily choose one, and consider it permanently assigned to . Later we will introduce certain operations on graphs which produce families of derived graphs . During those operations the chosen Wardable subset will be modified in order to produce eligible sets of Wardable edges and we will select one among those to define the Wardable subset of . We stress that the relation (4.18) on the Wardable set is required only for but not for the derived graphs .
We now give a precise meaning to the vague bounds of (4.13), (4.14b). We define the N-exponent, , of a graph as the effective N-exponent in its value-definition, i.e. as
We defer the proof of the following technical lemma to the Appendix.
Lemma 4.8
For any there exists some such that the following holds. Let be a graph with Wardable edge set and at most vertices and at most G-edges. Then for each it holds that
| 4.19a |
where
| 4.19b |
Remark 4.9
-
(i)
We consider and p as fixed within the proof of Theorem 3.7 and therefore do not explicitly carry the dependence on them in quantities like .
-
(ii)
We recall that the factors involving and do not play any role for graphs as those sets are empty in this restricted class of graphs (see Remark 4.3).
- (iii)
Improved estimates on at the cusp: σ-cells
Definition 4.10
For we call an interaction edge a σ-cell if the following four properties hold: (i) , (ii) there are no G-loops adjacent to u or v, (iii) precisely one of u, v carries a weight of while the other carries a weight of , and (iv) e is not adjacent to any other non- edges. Pictorially, possible σ-cells are given by 
For we denote the number of σ-cells in by .
Next, we state a simple lemma estimating the value of the graphs in the restricted class .
Lemma 4.11
For each it holds that
Proof
We introduce the short-hand notations and . Starting from (4.19b) and Lemma 4.7 we find
Using it then follows that
| 4.21 |
It remains to relate (4.21) to the number of σ-cells in . Since each interaction edge of degree two which is not a σ-cell has an additional weight attached to it, it follows from Fact 2 that . Therefore, from (4.21) and we have that
proving the claim.
Using Lemma 4.8 and , the estimate in Lemma 4.11 improves the previous bound (4.20) by a factor (ignoring the irrelevant factors). In order to prove (3.11c), we thus need to remove the from this exponent; in other words, we need to show that from each σ-cell we can multiplicatively gain a factor of . This is the content of the following proposition.
Proposition 4.12
Let be any constant and be a single index graph with at most cp vertices and edges, with a σ-cell . Then there exists a finite collection of graphs with at most one additional vertex and at most 6p additional G-edges such that
| 4.22 |
and all graphs and have exactly one σ-cell fewer than .
Using Lemmas 4.8 and 4.11 together with the repeated application of Proposition 4.12 we are ready to present the proof of Theorem 3.7.
Proof of Theorem 3.7
We remark that the isotropic local law (3.11a) and the averaged local law (3.11b) are proven verbatim as in [34, Theorem 4.1]. We therefore only prove the improved bounds (3.11c)–(3.11d) in the remainder of the section. We recall (4.10) and partition the set of graphs into those graphs with no σ-cells and those graphs with at least one σ-cell. For the latter group we then use Proposition 4.12 for some σ-cell to find
| 4.23 |
where the number of σ-cells is reduced by 1 for and each as compared to . We note that the Ward estimate from Lemma 4.11 together with Lemma 4.8 is already sufficient for the graphs in . For those graphs with exactly one σ-cell the expansion in (4.23) is sufficient because and, according to (4.22), each has a Ward estimate which is already improved by . For the other graphs we iterate the expansion from Proposition 4.12 until no σ-cells are left.
It only remains to count the number of G-edges and vertices in the successively derived graphs to make sure that Lemma 4.8 and Proposition 4.12 are applicable and that the last two factors in (3.11c) come out as claimed. Since each of the applications of Proposition 4.12 creates at most 6p additional G-edges and one additional vertex, it follows that , also in any successively derived graph. Finally, it follows from the last factor in Lemma 4.11 that for each with we gain additional factors of . Since , we easily conclude that if there are more than 4p G-edges, then each of them comes with an additional gain of . Now (3.11c) follows immediately after taking the pth root.
We turn to the proof of (3.11d). We first write out
and therefore can, for even p, write the pth moment as the value
of the graph which is given by p disjoint 2-cycles as 
where there are p/2 cycles of G-edges and p/2 cycles of G*-edges. It is clear that is 2-degenerate and since it follows that
On the other hand each of the p interaction edges in is a σ-cell and we can use Proposition 4.12 p times to obtain (3.11d) just as in the proof of (3.11c).
Proof of Proposition 4.12
It follows from the MDE that
which we use to locally expand a term of the form for fixed a, x, y further. To make the computation local we allow for an arbitrary random function , which in practice encodes the remaining G-edges in the graph. A simple cumulant expansion shows
| 4.24 |
where we introduced the stability operator . The stability operator B appears from rearranging the equation obtained from the cumulant expansion to express the quantity . In our graphical representation the stability operator is a special edge that we can also express as
| 4.25 |
An equality like (4.25) is meant locally, in the sense that the pictures only represent subgraphs of the whole graph, with the empty, labelled vertices symbolizing those vertices which connect the subgraph to its complement. Thus (4.25) holds true for every fixed graph extending x, y consistently in all three graphs. The doubly drawn edge in (4.25) means that the external vertices x, y are identified with each other and the associated indices are set equal via a delta function. Thus (4.25) should be understood as the equality
| 4.26 |
where the outside edges incident at the merged vertices x, y are reconnected to one common vertex in the middle graph. For example, in the picture (4.26) the vertex x is connected to the rest of the graph by two edges, and the vertex y by one.
In order to represent (4.24) in terms of graphs we have to define a notion of differential edge. First, we define a targeted differential edge represented by an interaction edge with a red -sign written on top and a red-coloured target G-edge to denote the collection of graphs
| 4.27 |
The second picture in (4.27) shows that the target G-edge may be a loop; the definition remains the same. This definition extends naturally to edges and is exactly the same for edges (note that this is compatible with the usual notion of derivative as M does not depend on W). Graphs with the differential signs should be viewed only as an intermediate simplifying picture but they really mean the collection of graphs indicated in the right hand side of (4.27). They represent the identities
In other words we introduced these graphs only to temporarily encode expressions with derivatives (e.g. the second term in the rhs. of (4.24)) before the differentiation is actually performed. We can then further define the action of an untargeted differential edge according to the Leibniz rule as the collection of graphs with the differential edge being targeted on all G-edges of the graph one by one (in particular, not only those in the displayed subgraph), i.e. for example
| 4.28 |
Here the union is a union in the sense of multisets, i.e. allows for repetitions in the resulting set (note that also this is compatible with the usual action of derivative operations). The symbol on the rhs. of (4.28) indicates that the targeted edge cycles through all G-edges in the graph, not only the ones in the subgraph. For example, if there are k G-edges in the graph, then the picture (4.28) represents a collection of 2k graphs arising from performing the differentiation
where represents the value of the G-edges outside the displayed subgraph.
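The combinatorics of the untargeted differential edge is just the product rule: differentiating a product of G-edges produces one graph per targeted edge. A minimal sketch (our own illustration; the edge labels are purely hypothetical names, not the paper's notation):

```python
# Sketch of the untargeted differential edge acting by the Leibniz rule:
# the derivative of a product of G-edges yields one term per edge, each
# term consisting of the targeted edge together with the untouched rest.
def leibniz_targets(g_edges):
    """Return (targeted edge, remaining edges) for every choice of target."""
    return [
        (edge, g_edges[:i] + g_edges[i + 1:])
        for i, edge in enumerate(g_edges)
    ]
```

A graph with k G-edges thus expands into a collection of k targeted graphs, or 2k once G- and G*-edges are counted separately, matching the count stated above.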
Finally we introduce the notation of a differential edge which is targeted on all G-edges except for those in the displayed subgraph. This differential edge targeted on the outside will be denoted by .
Regarding values, we define the value of a collection of graphs as the sum of their values. We note that for the collections of graphs encoded by differential edges this definition is consistent with the usual differentiation.
Written in a graphical form (4.24) reads
![]() |
4.29 |
where the last graph encodes the last terms in the last two lines of (4.24).
We worked out the example for the resolution of the quantity , but very similar formulas hold if the order of the fixed indices (x, y) and the summation index a in the resolvents changes, as well as for other combinations of the complex conjugates. In graphical language this corresponds to changing the arrows of the two G-edges adjacent to a, as well as their types. In other words, equalities like the one in (4.29) hold true for any other degree two vertex, but the stability operator changes slightly: in total there are 16 possibilities, four for whether the two edges are incoming or outgoing at a and another four for whether the edges are of type G or of type G*. The general form for the stability operator is
| 4.30 |
where if there is one incoming and one outgoing edge, if there are two outgoing edges and otherwise, and where represent complex conjugations if the corresponding edges are of G*-type. Thus, for example, the stability operator in a for is . Note that the stability operator at a vertex of degree two is exclusively determined by the types and orientations of the two G-edges adjacent to a. In the sequel the letter B will refer to the appropriate stability operator; we will not distinguish its 9 possibilities ( and ) in the notation.
Lemma 4.13
Let be any constant, let be a single index graph with at most cp vertices and edges, and let be a vertex of degree not adjacent to a -loop. The insertion of the stability operator B from (4.30) at a, as in (4.29), produces a finite set of graphs with at most one additional vertex and 6p additional edges, denoted by , such that
and all of them have a Ward estimate
Moreover all -cells in , except possibly a -cell adjacent to a, remain -cells also in each .
Proof
As the proofs for all of the 9 cases of B-operators are almost identical we prove the lemma for the case (4.29) for definiteness. Now we compare the value of the graph 
with the graph on the lhs. of (4.29), i.e. when the stability operator B is attached to the vertex a. We remind the reader that the displayed graphs only show a certain subgraph of the whole graph. The goal is to show that for each graph occurring on the rhs. of (4.29). The forthcoming reasoning is based on comparing the quantities , , and defining the Ward estimate from (4.19b) of the graph and the various graphs occurring on the rhs. of (4.29).
- We begin with the first graph and claim that
Due to the double edge which identifies the x and a vertices it follows that . The degrees of all interaction edges remain unchanged when going from to . As the 2-degenerate set of Wardable edges we choose , i.e. the 2-degenerate edge set in the original graph except for the edge-neighbourhood N(a) of a, i.e. those edges adjacent to a. As a subgraph of it follows that is again 2-degenerate. Thus and the claimed bound follows since and .
- Next, we consider the third and fourth graphs and claim that
Here there is one more vertex (corresponding to an additional summation index), , whose effect in (4.19b) is compensated by one additional interaction edge e of degree 2. Hence the N-exponent remains unchanged. In the first graph we can simply choose , whereas in the second graph we choose , which is 2-degenerate as a subgraph of a 2-degenerate graph together with an additional vertex of degree 2. Thus in both cases we can choose (if necessary, by removing excess edges from again) in such a way that but the number of -loops is increased by 1, i.e. .
- Similarly, we claim for the fifth and sixth graphs that
There is one more vertex whose effect in (4.19b) is compensated by one more interaction edge of degree 2, whence the N-exponent remains unchanged. The number of Wardable edges can be increased by one by setting to be a suitable subset of which is 2-degenerate as a subset of a 2-degenerate graph together with two vertices of degree 2. The number of -loops remains unchanged.
- For the last graph in (4.29), i.e. where the derivative targets an outside edge, we claim that
Here the argument on the lhs., , stands for a whole collection of graphs but we essentially only have to consider two types: the derivative edge either hits a G-edge or a -loop, i.e.
which encodes the graphs
as well as the corresponding transpositions (as in (4.27)). In both cases the N-size of remains constant since the additional vertex is balanced by the additional degree two interaction edge. In both cases all four displayed edges can be included in . So can be increased by 1 in the first case and by 2 in the second case, while the number of -loops remains constant in the first case and is decreased by 1 in the second case. The claim follows directly in the first case and from
in the second case.
- It remains to consider the second graph on the rhs. of (4.29) with the higher derivative edge. We claim that for each it holds that
We prove the claim by induction on k starting from . For any we write . For the action of the last derivative we distinguish three cases: (i) action on an edge adjacent to the derivative edge, (ii) action on a non-adjacent G-edge and (iii) action on a non-adjacent -loop. Graphically this means
We ignored the case where the derivative acts on (a, y) since it is estimated identically to the first graph. We also neglected the possibility that the derivative acts on a g-loop, as this is estimated exactly as the last graph and the result is even better since no -loop is destroyed. After performing the last derivative in (4.31) we obtain the following graphs
| 4.31 |
where we neglected the transposition of the third graph with u, v exchanged because this is equivalent with regard to the counting argument. First, we handle the second, third and fourth graphs in (4.32). In all these cases the set is defined simply by adding all edges drawn in (4.32) to the set . The new set remains 2-degenerate since all these new edges are adjacent to vertices of degree 2. Compared to the original graph, , we thus have increased by at least 1.
| 4.32 |

We now continue with the first graph in (4.32), where we explicitly expand the action of another derivative (notice that this is the only graph where is essentially used). We distinguish four cases, depending on whether the derivative acts on (i) the b-loop, (ii) an adjacent edge, (iii) a non-adjacent edge or (iv) a non-adjacent -loop, i.e. graphically we have
After performing the indicated derivative, the encoded graphs are
| 4.33 |
where we again neglected the version of the third graph with u, v exchanged. We note that both the first and the second graph in (4.33) produce the first graph in (4.34). Now we define how to get the set from for each case. In the first graph of (4.34) we add all three non-loop edges to , in the second graph we add both non-loop edges, and in the third and fourth graphs we add the non-loop edge adjacent to b as well as any two non-loop edges adjacent to a. Thus, compared to the original graph, the number is at least preserved. On the other hand the N-power counting is improved by . Indeed, there is one additional vertex b, yielding a factor N, which is compensated by the scaling factor from the interaction edge of degree 3.
| 4.34 |

To conclude the inductive step we note that additional derivatives (i.e. the action of ) can only decrease the Ward-value of a graph. Indeed, any single derivative can at most decrease the number by 1, by either differentiating a -loop or differentiating an edge from . Thus the number is decreased by at most while the number is not increased. In particular, by choosing a suitable subset of Wardable edges, we can define in such a way that is decreased by exactly . But at the same time each derivative provides a gain of since the degree of the interaction edge is increased by one. Thus we have
just as claimed.
Lemma 4.13 shows that the insertion of the B-operator reduces the Ward-estimate by at least . However, this insertion does not come for free since the inverse
is generally not a uniformly bounded operator. For example, it follows from (2.2) that
and therefore is singular for small with being the unstable direction. It turns out, however, that B is invertible on the subspace complementary to some bad direction . At this point we distinguish two cases. If B has a uniformly bounded inverse, i.e. if for some constant , then we set . Otherwise we define as the spectral projection operator onto the eigenvector of B corresponding to the eigenvalue with smallest modulus:
| 4.35 |
where denotes the normalized inner product and is the corresponding left eigenvector, .
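Since (4.35) is a generic spectral projection onto the smallest-modulus eigenvalue, built from the right and left eigenvectors, it can be illustrated numerically; the matrix and eigenvalues in the sketch below are purely illustrative and are not the paper's operator B:

```python
import numpy as np

def smallest_modulus_projection(B):
    # Spectral projection P = r l* / <l, r> onto the eigendirection of the
    # eigenvalue of B with smallest modulus; r and l are the corresponding
    # right and left eigenvectors (assumes a diagonalizable B with a simple,
    # well-separated smallest-modulus eigenvalue).
    vals, rvecs = np.linalg.eig(B)
    lvals, lvecs = np.linalg.eig(B.conj().T)   # eigenvectors of B* = left ones
    i = np.argmin(np.abs(vals))
    j = np.argmin(np.abs(lvals - np.conj(vals[i])))
    r, l = rvecs[:, i], lvecs[:, j]
    return np.outer(r, l.conj()) / (l.conj() @ r)
```

For such a P one has P² = P and BP = λP, with λ the smallest-modulus eigenvalue.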
Lemma 4.14
For all 9 possible B-operators in (4.30) it holds that
| 4.36 |
for some constant , depending only on model parameters.
Proof
First we remark that it is sufficient to prove the bound (4.36) on as an operator on with the Euclidean norm, i.e. . For this insight we refer to [5, Proof of (5.28) and (5.40a)]. Recall that , or , depending on which stability operator we consider (cf. (4.30)). We begin by considering the complex Hermitian symmetry class and the cases and . We will now see that in this case B has a bounded inverse and thus . Indeed, we have
where . The fullness Assumption (B) in (2.3) implies that for some constant and thus for . Here we used , a general property of the saturated self-energy matrix that was first established in [6, Lemma 4.3] (see also [7, Eq. (4.24)] and [10, Eq. (4.5)]). Now we turn to the case for both the real symmetric and complex Hermitian symmetry classes. In this case B is the restriction to diagonal matrices of an operator , where . All of these operators were covered in [10, Lemma 5.1] and thus (4.36) is a consequence of that lemma. Recall that the flatness (3.6) of ensures the applicability of the lemma.
We will insert the identity , and we will perform an explicit calculation for the component, while using the boundedness of in the other component. We are thus left with studying the effect of inserting B-operators and suitable projections into a -cell. To include all possible cases with regard to edge-direction and edge-type (i.e. G or ), in the pictures below we neither indicate directions of the G-edges nor their type but implicitly allow all possible assignments. We recall that both the R-interaction edge as well as the relevant B-operators (cf. (4.30)) are completely determined by the type of the four G-edges as well as their directions. To record the type of the inserted B, , operators we call those inserted on the rhs. of the R-edge , and in the following graphical representations. Pictorially we first decompose the -cell subgraph of some graph as
| 4.37 |
where we allow the vertices x, y to agree with z or w. In formulas, the insertion in (4.37) means the following identity
since . We first consider the second graph in (4.37), whose treatment is independent of the specific weights, so we have already removed the weight information. We insert the B operator as 
and notice that due to Lemma 4.14 the matrix , assigned to the weighted edge in the last graph, is entry-wise bounded (the transpositions compensate for the opposite orientation of the participating edges). It follows from Lemma 4.13 that
| 4.38 |
where all satisfy and all -cells in except for the currently expanded one remain -cells in . We note that it is legitimate to compare the Ward estimate of with that of because with respect to the Ward-estimate there is no difference between and the modification of in which the R-edge is replaced by a generic -weighted edge.
We now consider the first graph in (4.37) and repeat the process of inserting projections to the other side of the R-edge to find
| 4.39 |
where we already neglected those weights which are of no importance to the bound. The argument for the second graph in (4.39) is identical to the one we used in (4.38) and we find another finite collection of graphs such that
| 4.40 |
where the weighted edge carries the weight matrix , which, according to Lemma 4.14, indeed scales like . The graphs also satisfy and all -cells in except for the currently expanded one remain -cells in .
It remains to consider the first graph in (4.39) in the situation where B does not have a bounded inverse. We compute the weight matrix of the interaction edge as
which we separate into the scalar factor
and the weighted edge
| 4.41 |
which scales like since is -normalised and delocalised. Thus we can write
| 4.42 |
Note that the B and operators are not completely independent: according to Fact 1, for an interaction edge associated with the matrix R the number of incoming G-edges in u is the same as the number of outgoing G-edges from v, and vice versa. Thus, according to (4.30), the B-operator at u comes with an S if and only if the -operator at v also comes with an S. Furthermore, if the B-operator comes with a T, then the -operator comes with an , and vice versa. The distribution of the conjugation operators to in (4.30), however, can be arbitrary. We now use the fact that the scalar factor in (4.42) can be estimated by (cf. Lemma A.2). Summarising the above arguments, from (4.37)–(4.42), the proof of Proposition 4.12 is complete.
Cusp Universality
The goal of this section is the proof of cusp universality in the sense of Theorem 2.3. Let H be the original Wigner-type random matrix with expectation and variance matrix with and with . We consider the Ornstein–Uhlenbeck process starting from , i.e.
| 5.1 |
which preserves expectation and variance. In our setting of deformed Wigner-type matrices the covariance operator is given by
The OU process effectively adds a small Gaussian component to along the flow in the sense that in distribution, with being an independent centred Gaussian matrix with covariance . Due to the fullness Assumption (B) there exist small such that can be decomposed as with and Gaussian and independent of U for . Thus there exists a Wigner-type matrix such that
| 5.2 |
with U independent of . Note that we do not define as a stochastic process and we will use the representation (5.2) only for one carefully chosen . We note that satisfies the assumption of our local law from Theorem 2.5. It thus follows that is well approximated by the solution to the MDE
In particular, by setting , well approximates the resolvent of the original matrix H and is its self-consistent density. Note that the Dyson equation of , and hence also its solution , are independent of t, since they are entirely determined by the first and second moments of , which are the same A and S for any t. Thus the resolvent of is well approximated by the same and the self-consistent density of is given by for any t. While H and have identical self-consistent data, structurally they differ in a key point: has a small Gaussian component. Thus the correlation kernel of the local eigenvalue statistics has a contour integral representation using a version of the Brézin–Hikami formulas, see Sect. 5.2.
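Since the OU flow (5.1) preserves the first two moments, the in-distribution representation behind (5.2) can be sanity-checked in a toy scalar model; the following sketch assumes standard Gaussian entries and an illustrative time t = 0.3, neither taken from the paper:

```python
import numpy as np

# Toy scalar check (assumption: standard Gaussian entries, illustrative t)
# of the representation H_t = e^{-t/2} H_0 + sqrt(1 - e^{-t}) U in
# distribution, with U an independent copy of the Gaussian component,
# so that mean and variance are preserved along the flow.
rng = np.random.default_rng(1)
t, samples = 0.3, 200_000
H0 = rng.standard_normal(samples)   # stand-in for a matrix entry, variance 1
U = rng.standard_normal(samples)    # independent Gaussian component
Ht = np.exp(-t / 2) * H0 + np.sqrt(1 - np.exp(-t)) * U
# variance is preserved: e^{-t} * 1 + (1 - e^{-t}) * 1 = 1
assert abs(Ht.var() - 1.0) < 0.02 and abs(Ht.mean()) < 0.02
```

The same bookkeeping underlies the decomposition of the Gaussian component in (5.2), where only part of the added noise is split off as an independent GUE/GOE piece.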
The contour integration analysis requires a Gaussian component of size at least and a very precise description of the eigenvalues of just above the scale of the eigenvalue spacing. This information will come from the optimal rigidity, Corollary 2.6, and the precise shape of the self-consistent density of states of . The latter will be analysed in Sect. 5.1, where we describe the evolution of the density near the cusp under an additive GUE perturbation . We need to construct with a small gap carefully so that after a relatively long time the matrix develops a cusp exactly at the right location. In fact, the process has two scales in the shifted variable that indicates the time relative to the cusp formation. It turns out that the locations of the edges typically move linearly with , while the length of the gap itself scales like , i.e. it varies much more slowly, and we need to fine-tune the evolution of both.
To understand this tuning process, we fix and we consider the matrix flow for any and not just for . It is well known that the corresponding self-consistent densities are given by the semicircular flow. Equivalently, these densities can be described by the free convolution of with a scaled semicircular distribution . In short, the self-consistent density of is given by , where we omitted t from the notation since we consider t fixed. In particular we have , the density of and , the density of as well as that of H. Hence, as a preparation to the contour integration, in Sect. 5.1 we need to describe the cusp formation along the semicircular flow. Before going into details, we describe the strategy.
Since in the sequel the densities and their local minima and gaps will play an important role, we introduce the convention that properties of the original density will always carry as a superscript for the remainder of Sect. 5. In particular, the points and the gap size from (2.4) and Theorem 2.3 will from now on be denoted by and . In particular a superscript of never denotes a power.
Proof strategy
First we consider case (i) when , the self-consistent density associated with H, has an exact cusp at the point . Note that is also a cusp point of the self-consistent density of for any t.
We set . Define the functions
for any . For denote the gap in the support of close to by and its length by . In Sect. 5.1 we will prove that if has an exact cusp in as in (2.4a), then has a gap of size , and, in particular, has a gap of size , only depending on c, t and . The distance of from the gap is . This overall shift will be relatively easy to handle, but notice that it must be tracked very precisely since the gap changes much more slowly than its location. For with we will similarly prove that no longer has a gap close to but rather a unique local minimum in of size .
Now we consider the case where has no exact cusp but a small gap of size . We parametrize this gap length via a parameter defined by . It follows from the associativity (5.3b) of the free convolution that has a gap of size .
Finally, the third case is where has a local minimum of size . We parametrize it as with ; then it follows that has a gap of size .
Note that these conclusions follow purely from the considerations in Sect. 5.1 for exact cusps and the associativity of the free convolution. We note that in both almost cusp cases should be interpreted as a time (or reverse time) to the cusp formation.
In the final part of the proof in Sects. 5.2–5.3 we will write the correlation kernel of as a contour integral purely in terms of the mesoscopic shape parameter and the gap size of the density associated with . If , then the gap closes after time and we obtain a Pearcey kernel with parameter . If and , then the gap does not quite close at time and we obtain a Pearcey kernel with , while for with the gap after time is transformed into a tiny local minimum and we obtain a Pearcey kernel with . The precise value of in terms of and is given in (2.6). Note that as an input to the contour integral analysis, in all three cases we use the local law only for , i.e. in a situation when there is a small gap in the support of , given by defined as above in each case.
Free convolution near the cusp
In this section we quantitatively investigate the free semicircular flow before and after the formation of a cusp. We first establish the exact rate at which a gap closes to form a cusp, and the rate at which the cusp is transformed into a non-zero local minimum. We now suppose that is a general density with a small spectral gap whose Stieltjes transform can be obtained from solving a Dyson equation. Let be the density of the semicircular distribution and let be a time parameter. The free semicircular convolution of with is then defined implicitly via its Stieltjes transform
| 5.3a |
It follows directly from the definition that is associative in the sense that
| 5.3b |
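The defining relation (5.3a) is a subordination fixed-point equation of the schematic form m_t(z) = m_0(z + t m_t(z)); a minimal numerical sketch, with the semicircle law as an illustrative initial density (an assumption made here for testability, since the paper's initial density is general), also exhibits the associativity (5.3b):

```python
import numpy as np

def m_sc(z, v=1.0):
    # Stieltjes transform of the semicircle law of variance v:
    # m solves v*m^2 + z*m + 1 = 0 with Im m > 0 for Im z > 0.
    s = np.sqrt(z * z - 4 * v)
    if s.imag * z.imag < 0:
        s = -s
    return (-z + s) / (2 * v)

def free_sc_conv(m0, z, t, iters=300):
    # Subordination fixed point m(z) = m0(z + t*m(z)), solved by iteration;
    # this gives the Stieltjes transform of the free semicircular convolution.
    m = m0(z)
    for _ in range(iters):
        m = m0(z + t * m)
    return m
```

Convolving the variance-one semicircle for time 0.3 and then 0.4 agrees with a single convolution for time 0.7 (and with the semicircle of variance 1.7), matching (5.3b).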
Figure 1a illustrates the quantities in the following lemma. We state the lemma for scDOSs from arbitrary data pairs satisfying the conditions in [10], i.e.
| 5.4 |
for any self-adjoint and some constants .
Fig. 1.
(a) illustrates the evolution of along the semicircular flow at two times before and after the cusp. We recall that and . (b) shows the points as well as their distances to the edges
Lemma 5.1
Let be the density of a Stieltjes transform associated with some Dyson equation
with satisfying (5.4). Then there exists a small constant c, depending only on the constants in Assumptions (5.4) such that the following statements hold true. Suppose that has an initial gap of size . Then there exists some critical time such that has exactly one exact cusp in some point with
, and that is locally around given by (2.4a) for some . Considering the time evolution we then have the following asymptotics.
-
(i) After the cusp. For , has a unique non-zero local minimum in some point such that
Furthermore, can approximately be found by solving a simple equation, namely there exists such that
| 5.5a |
| 5.5b |
-
(ii) Before the cusp. For , the support of has a spectral gap of size near which satisfies
In particular we find that the initial gap is related to via .
| 5.5c |
Proof
Within the proof of the lemma we rely on the extensive shape analysis from [10]. We do so not only for the density and its Stieltjes transform, but also for and its Stieltjes transform for . The results from [10] also apply here since can also be realized as the solution
to the Dyson equation with perturbed self-energy . Since it follows that the shape analysis from [10] also applies to for any .
We begin with part (i). Set , then for we want to find such that has a local minimum in near , i.e.
First we show that with these properties exists and is unique by using the extensive shape analysis in [10]. Uniqueness directly follows from [10, Theorem 7.2(ii)]. For the existence, we set
Set with a large constant K. Since , we have and . Recall from [10, Proposition 10.1(a)] that the map is 1/3-Hölder continuous. It then follows that , while . Thus necessarily has a local minimum in if K is sufficiently large. This shows the existence of a local minimum with .
We now study the function in a small neighbourhood around 0. From [10, Eqs. (7.62), (5.43)–(5.45)] it follows that
| 5.6 |
whenever , with appropriate real functions3 and . Moreover, since is an almost cusp point for for any . Thus it follows that whenever . Due to the 1/3-Hölder continuity4 of both and and , it follows that whenever . We can thus conclude that satisfies in some -neighbourhood of 0. As we can conclude that there exists a root , of size
. With we have thus shown the first equality in (5.5b).
Using (2.4a), we now expand the defining equation
for the free convolution in the regime for those x sufficiently close to such that
to find
i.e.
| 5.7 |
Note that (5.7) implies that , i.e. the last claim in (5.5b). We now pick some large K and note that from (5.7) it follows that . Thus the interval contains a local minimum of , but by the uniqueness this must then be . We thus have
, proving the second claim in (5.5b). By 1/3-Hölder continuity of and by from (5.7), we conclude that as well. Using that and from (5.6) and , we conclude that , i.e. the second claim in (5.5a). Plugging this information back into (5.7), we thus find and have also proven the first claim in (5.5a).
We now turn to part (ii). It follows from the analysis in [10] that exhibits either a small gap, a cusp or a small local minimum close to . It follows from (i) that a cusp is transformed into a local minimum, and that a local minimum cannot be transformed into a cusp along the semicircular flow. Therefore the support of has a gap of size between the edges . Evidently , , and for we differentiate (5.3a) to obtain
| 5.8 |
by considering the limit and the fact that has a square root singularity at the edge (for ) and hence blows up at this point. Denoting the derivative by a dot, from
we can thus conclude that . This implies that the gap as a whole moves with linear speed (for non-zero ), and, in particular, the distance of the gap of to is an order of magnitude larger than the size of the gap. It follows that the size of the gap of satisfies
We now use the precise shape of close to according to (2.4b) which is given by
| 5.9 |
where defined in (2.4c) exhibits the limiting behaviour
Using (5.9), we compute
| 5.10 |
where the factor in (5.9) encapsulates two error terms; both are due to the fact that the shape factor of from (2.4b) is not exactly the same as , i.e. the one for . To track this error in we go back to [10]. First, in [10, Eq. (7.5a)] is of size by the fact that vanishes at and is 1/3-Hölder continuous according to [10, Lemma 10.5]. Secondly, according to [10, Lemma 10.5] the shape factor (which is directly related to in the present context) is also 1/3-Hölder continuous and therefore we know that the shape factors of at are at most multiplicatively perturbed by a factor of . By solving the differential equation (5.10) with the initial condition , the claim (5.5c) follows.
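The mechanism behind (5.5c) can be seen in a toy version of the differential equation (5.10): assuming a schematic form dΔ/dt = −Δ^{1/3} (the unit constant and precise coefficients are illustrative, not the paper's), the gap closes in finite time with a (t_* − t)^{3/2} law:

```python
# Toy gap evolution (assumed schematic form of (5.10), not the exact equation):
# dDelta/dt = -Delta^(1/3) integrates to
# Delta(t) = (Delta0^(2/3) - (2/3) t)^(3/2),
# which vanishes at t_* = (3/2) Delta0^(2/3); hence Delta(t) ~ (t_* - t)^(3/2).
def gap(t, delta0=1.0):
    u = delta0 ** (2.0 / 3.0) - (2.0 / 3.0) * t
    return max(u, 0.0) ** 1.5
```

A finite-difference check confirms that this closed form solves the toy equation, and the exponent 3/2 is the one appearing in the gap asymptotics (5.5c).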
Besides the asymptotic expansion for the gap size and the local minimum we also require some quantitative control on the location of , as defined in (5.3a), and some slight perturbations thereof within the spectral gap of . We remark that the point plays a critical role for the contour integration in Sect. 5.2 since it will be the critical point of the phase function. From (5.5c) we recall that the gap size scales as which makes it natural to compare distances on that scale. In the regime where all of the following estimates thus identify points very close to the centre of the initial gap.
Lemma 5.2
Suppose that we are in the setting of Lemma 5.1. We then find that is very close to the centre of in the sense that
| 5.11a |
Furthermore, for we have that
| 5.11b |
Proof
We begin by proving (5.11a). For we denote the distance of to the edges by , cf. Fig. 1b. We have, by differentiating from (5.8), that
| 5.12 |
and by differentiating (5.3a),
We now consider with and compute from (5.9), for any ,
and
Here we used the fact that the error terms in (5.9) become irrelevant in the limit. We conclude, together with (5.12), that
Since and it follows that, to leading order, and more precisely
In particular it follows that . Together with the case from (5.5c) we thus find
proving (5.11a).
We now turn to the proof of (5.11b) where we treat the small gap and small non-zero minimum separately. We start with the first inequality. We observe that (5.11a) in the setting where are replaced by implies
| 5.13 |
Furthermore, we infer from the definition of and the associativity (5.3b) of the free convolution that
and can therefore estimate
just as claimed. In the last step we used (5.13) and the fact that
| 5.14 |
which directly follows from the definition of and the 1/3-Hölder continuity of .
Finally, we address the second inequality in (5.11b) and appeal to Lemma 5.1(i) to establish the existence of such that
| 5.15 |
It thus follows from (5.5b) that
and therefore from (5.14) that
Using (5.15) twice, as well as the associativity (5.3b) of the free convolution and we then further compute
| 5.16 |
By Hölder continuity we can, together with (5.11a) and from (5.5b), conclude that
In the first term we used (5.14) and the second estimate of (5.5b). In the second term we used (5.16) together with from (5.5b) and 1/3-Hölder continuity of . Finally, the last term was already estimated in the exact cusp case, i.e. in (5.11a).
Correlation kernel as contour integral
We denote the eigenvalues of by . Following the work of Brézin and Hikami (see e.g. [22, Eq. (2.14)] or [35, Eq. (3.13)] for the precise version used in the present context), the correlation kernel of can be written as
where is any contour around all , and is any vertical line not intersecting . With this notation, the k-point correlation function of the eigenvalues of is given by
Due to the determinantal structure we can freely conjugate with for to redefine the correlation kernel as
This redefinition does not agree point-wise with the previous definition , but gives rise to the same determinant, and in particular to the same k-point correlation function. Here is the base point chosen in Theorem 2.3. The central result concerning the correlation kernel is the following proposition.
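Before stating it, we note that the invariance used here, namely that conjugating a kernel by e^{g(x)−g(y)} leaves all determinants and hence all correlation functions unchanged, can be checked on a toy discretized kernel (the random 5 × 5 matrix below is purely illustrative):

```python
import numpy as np

# Conjugating a kernel, K'(x_i, x_j) = e^{g_i - g_j} K(x_i, x_j), amounts to
# K' = D K D^{-1} with D = diag(e^{g_i}); every principal minor, and hence
# every k-point correlation function, is unchanged.
rng = np.random.default_rng(0)
n = 5
K = rng.standard_normal((n, n))
g = rng.standard_normal(n)
Kc = np.exp(g)[:, None] * K * np.exp(-g)[None, :]
for idx in ([0, 2], [1, 3, 4], list(range(n))):
    sub, subc = K[np.ix_(idx, idx)], Kc[np.ix_(idx, idx)]
    assert np.allclose(np.linalg.det(sub), np.linalg.det(subc))
```

The kernels differ pointwise, but all principal minors, and thus all determinantal correlation functions, coincide.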
Proposition 5.3
Under the assumptions of Theorem 2.3, the rescaled correlation kernel
| 5.17 |
around the base point chosen in (2.6) converges uniformly to the Pearcey kernel from (2.5) in the sense that
for . Here R is an arbitrarily large threshold, is some universal constant, is a constant depending only on the model parameters and R, and is chosen according to (2.6).
Proof
We now split the contour into two parts, one encircling all eigenvalues to the left of and the other encircling all eigenvalues to the right of , which does not change the value of . We then move the vertical contour so that it crosses the real axis in . This also does not change the value, as the only pole is the one in z, for which the residue reads
We now perform a linear change of variables , in (5.17) to transform the contours into contours
| 5.18 |
to obtain
| 5.19 |
where
Here indicates the length of the gap in the support of . From Lemma 5.1 with and we infer . In order to obtain (5.19) we used the relation .
We begin by analysing the deterministic variant of ,
We separately analyse the large- and small-scale behaviour of f(z). On the one hand, using the 1/3-Hölder continuity of , eq. (5.5c) and
we conclude the large-scale asymptotics
| 5.20 |
We now turn to the small-scale asymptotics. We first specialize Lemmas 5.1 and 5.2 to and collect the necessary conclusions in the following lemma.
Lemma 5.4
Under the assumptions of Theorem 2.3 it follows that has a spectral gap of size
| 5.21a |
Furthermore, in all three cases we have that is very close to the centre of the gap in the support of in the sense that
| 5.21b |
Proof
We prove (5.21a)–(5.21b) separately in cases (i), (ii) and (iii).
-
(i) Here (5.21a) follows directly from (5.5c) with , , and . Furthermore (5.21b) follows from (5.11a) with , and .
-
(ii) We apply (5.5c) with , , to conclude that , and that has an exact cusp in some point . Thus (5.21a) follows from another application of (5.5c) with , , and . Furthermore, (5.21b) follows again from (5.11b), but this time with , , and , and using that for sufficiently small .
-
(iii) We apply (5.5a) with , , to conclude that , and that has an exact cusp in some point . Finally, (5.21b) follows again from (5.11b), but with , , and , and using and for sufficiently small .
Equipped with Lemma 5.4 we can now turn to the small-scale analysis of f(z) and write out the Stieltjes transform to find
Note that these integrals are not singular since vanishes for . We now perform the u integration to find
| 5.22 |
By using the precise shape (5.9) (with ) of close to the edges , and recalling the gap size from (5.21a) and location of from (5.21b) we can then write
| 5.23 |
with
being the leading order contribution. Here ± indicates that the formula holds for all three cases (i), (ii) and (iii) simultaneously, where in case (i). The contribution of the error term in (5.9) to the integral in (5.22) is of order using that and that on the support of . By the explicit integrals
and a Taylor expansion of the logarithm we find that the quadratic term almost cancels and we conclude the small-scale asymptotics
| 5.24 |
Contour deformations
We now argue that we can deform the contours , and thereby via (5.18) the derived contours , in a way which keeps bounded away from zero with a definite sign along the contours. Here g(z) is the N-independent variant of given by
| 5.25 |
The topological aspect of our argument is inspired by the approach in [42–44].
Lemma 5.5
For all sufficiently small there exists such that the following holds true. The contours can then be deformed, without touching or each other, in such a way that the rescaled contours defined in (5.18) satisfy on and on . Furthermore, locally around 0 the contours can be chosen in such a way that
| 5.26 |
Proof
Just as in (5.24) we have the expansion
| 5.27 |
It thus follows that for some small , and
we have and in agreement with Fig. 2c. For large z, however, it also follows from (5.20) together with (5.25) and (5.23) that for some large R, and
we have and , in agreement with Fig. 2a. We denote the connected component of containing some set A by .
Claim 1. are the only two unbounded connected components of . Suppose there were another unbounded connected component A of . Since , we would be able to find some with arbitrarily large . If , then we note that the map is increasing, and otherwise we note that the map is increasing. Thus it follows in both cases that the connected component A actually coincides with or , respectively.
Claim 2. are the only two unbounded connected components of . This follows very similarly to Claim 1.
Claim 3. are unbounded. We note that the map is harmonic on and subharmonic on . Therefore it follows that are unbounded. Since these sets are moreover symmetric with respect to the real axis, it also follows that . This implies that is harmonic on and consequently also that are unbounded.
Claim 4. and . This follows from Claims 1–3.
Claim 5. and . This also follows from Claims 1–3.
Fig. 2.
Representative cusp analysis. (a) and (c) show the level set . On a small scale , while on a large scale . (b) shows the final deformed and rescaled contours and . (c) furthermore shows the cone sections and , where for clarity we do not indicate the precise area thresholds given by and R. We also do not specifically indicate for , as then , cf. Claims 4–5 in the proof of Lemma 5.5
The claimed bounds on now follow from Claims 4–5 and compactness. The claimed small-scale shape (5.26) follows by construction of the sets .
From Lemmas 5.5 and 2.8 it follows that and thereby also remain, with overwhelming probability, invariant under the chosen contour deformation. Indeed, only has poles where or for some i. Due to self-adjointness and Lemma 5.5, can only occur if or . Both probabilities are exponentially small as a consequence of Lemma 2.8, since for the former we have according to (2.7), while .
For it follows from (5.26) that we can estimate
| 5.28 |
Indeed, for (5.28) we used (5.26) to obtain , so that
follows from the local law from (2.8b).
We now distinguish three regimes: , and , which we call microscopic, mesoscopic and macroscopic, respectively. We first consider the latter two regimes, as they only contribute small error terms.
Macroscopic regime.
If either or , it follows from Lemma 5.5 that and/or , and therefore, together with (5.23), (5.25) and (5.28), that and/or with overwhelming probability. Using from (5.21a), we find that and , so that the integrand in (5.19) in the considered regime is exponentially small.
Mesoscopic regime.
If either or , then and/or from (5.27). Thus it follows from (5.23) and (5.25) that also and/or and by (5.28) that with overwhelming probability and/or . Since is integrable over the contours it thus follows that the contribution to , as in (5.19), from z, w with either or is negligible.
Microscopic regime.
We can now concentrate on the important regime where and to do so perform another change of variables , which gives rise to two new contours
as depicted in Fig. 2b, and the kernel
| 5.29 |
We only have to consider w, z with in (5.29) since and the other regime has already been covered in the previous paragraph before the change of variables.
We now separately estimate the errors stemming from replacing first by f(z), then by and finally by . We recall that from (5.21a), from the definition of in (5.21a), and that which will be used repeatedly in the following estimates. According to (5.28), we have
| 5.30a |
Next, from (5.23) we have
| 5.30b |
We then have to estimate the error from replacing by its Taylor expansion with (5.24) and find
| 5.30c |
Finally, from (5.21a) and the definition of from (2.6) we obtain that
| 5.30d |
From (5.30) and the integrability of for small z, w along the contours we can thus conclude
| 5.31 |
Furthermore, it follows from (5.26) that, as , the contours are those depicted in Fig. 2b, i.e.
We recognize (5.31) as the extended Pearcey kernel from (2.5).
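Since the display (2.5) is not reproduced in this excerpt, we record for orientation one standard normalization of the Pearcey kernel with a time (or gap) parameter; the formula below follows the conventions of Brézin–Hikami [21, 22] and Tracy–Widom [72], and signs, scalings and contour orientations vary between references.

```latex
% One common normalization of the Pearcey kernel with parameter \tau
% (cf. [21, 22, 72]); conventions vary between references.
K_\tau(x,y) \;=\; \frac{1}{(2\pi i)^2}
  \int_{\Xi}\mathrm{d}z \int_{\Phi}\mathrm{d}w\,
  \frac{\exp\!\Bigl(-\tfrac{w^4}{4}+\tfrac{\tau w^2}{2}-yw
                    +\tfrac{z^4}{4}-\tfrac{\tau z^2}{2}+xz\Bigr)}{w-z}
```

Here the z-contour may be taken as the four rays from 0 to infinity with arguments ±π/4 and ±3π/4 (so that the factor e^{z^4/4} decays), and the w-contour as the imaginary axis (so that e^{-w^4/4} decays); in the extended, multi-time version an additional Gaussian term appears for unequal time parameters.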
It is easy to see that all error terms in the contour integration are uniform in x, y as these run over any fixed compact set. This proves that converges to uniformly for x, y in a compact set, and completes the proof of Proposition 5.3.
Green function comparison
We will now complete the proof of Theorem 2.3 by demonstrating that the local k-point correlation function at the common physical cusp location of the matrices does not change along the flow (5.1); together with Proposition 5.3, this yields Theorem 2.3. A version of this continuity of the matrix Ornstein–Uhlenbeck process with respect to the local correlation functions, valid in the bulk and at regular edges, is the third step in the well-known three-step approach to universality [38]. We will present this argument in the more general setup of correlated random matrices, i.e. in the setting of [34]. In particular, we assume that the cumulants of the matrix elements satisfy the decay conditions [34, Assumptions (C,D)], an assumption that is obviously fulfilled for deformed Wigner-type matrices.
We claim that the k-point correlation function of and the corresponding k-point correlation function of stay close along the OU-flow in the sense that
| 5.32 |
for , , smooth functions F and some constant , where is the physical cusp point. The proof of (5.32) follows the standard arguments of computing t-derivatives of products of traces of resolvents at spectral parameters z just below the fluctuation scale of eigenvalues, i.e. for . Since the procedure detailed e.g. in [38, Chapter 15] is well established and not specific to the cusp scaling, we keep our explanations brief.
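For orientation (the precise definition is fixed in (5.1)), the flow is a matrix-valued Ornstein–Uhlenbeck process, which in the simplest Wigner normalization (entries with mean zero and variance 1/N, an assumption made only for this sketch) reads

```latex
% Schematic matrix OU flow; the exact normalization is the one fixed in (5.1).
\mathrm{d}H_t \;=\; -\tfrac{1}{2} H_t\,\mathrm{d}t
  \;+\; \frac{\mathrm{d}\mathfrak{B}_t}{\sqrt{N}},
\qquad H_0 = H,
```

with the driving term a Hermitian (resp. symmetric) matrix Brownian motion. In this normalization the flow preserves the first two moments of the entries, so that in the cumulant expansion of the time derivative only third and higher order cumulants contribute.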
The only cusp-specific part of the argument is estimating products of random variables
and we claim that
| 5.33 |
as long as for some . For simplicity we first consider and find from Itô’s Lemma that
| 5.34 |
which we further compute using a standard cumulant expansion, as already done in the bulk regime in [34, Proof of Corollary 2.6] and in the edge regime in [11, Section 4.2]. We recall that , and more generally , denote the joint cumulants of the random variables and , respectively, which accordingly scale like and . Here Greek letters stand for double indices. After the cumulant expansion, the leading term in (5.34) cancels, and the next order contribution is
with being the size of the cumulant . With and we then estimate
where we used the Ward identity and that . We now use that according to [34, Proof of Prop. 5.5], and similarly are monotonically increasing with to find and from the local law from Theorem 2.5 and the scaling of at . Since all other error terms can be handled similarly and give an even smaller contribution, it follows that
| 5.35 |
for some constant . Now (5.33) and therefore (5.32) follow from (5.35) as in [38, Theorem 15.3] using the choice and choosing sufficiently small.
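The cumulant expansion used after (5.34) is the standard generalized Stein formula; for a real random variable h with sufficiently many finite moments and a smooth function f of moderate growth it reads

```latex
\mathbf{E}\bigl[h\,f(h)\bigr]
 \;=\; \sum_{k=0}^{K}\frac{\kappa_{k+1}(h)}{k!}\,
       \mathbf{E}\bigl[f^{(k)}(h)\bigr] \;+\; \Omega_K,
```

where \kappa_k(h) is the k-th cumulant of h and the error term \Omega_K is controlled by the (K+2)-nd moment of h and the (K+1)-st derivative of f. For centred Gaussian h this reduces to Stein's identity E[h f(h)] = E[h^2] E[f'(h)], consistent with the cancellation of the leading term noted above.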
Acknowledgements
Open access funding provided by Institute of Science and Technology (IST Austria). The authors are very grateful to Johannes Alt for numerous discussions on the Dyson equation and for his invaluable help in adjusting [10] to the needs of the present work.
Appendix A. Technical lemmata
Lemma A.1
Let be equipped with a norm . Let be a bilinear form and let be a linear operator with a non-degenerate isolated eigenvalue . Denote the spectral projection corresponding to by , and by the one corresponding to the spectral complement of , i.e.
where is the eigenmatrix corresponding to and a linear functional. Assume that for some positive constant the bounds
| A.1 |
are satisfied, where we denote the induced norms on linear operators, linear functionals and bilinear forms on by the same symbol . Then there exists a universal constant such that for any and any with that satisfies the quadratic equation
| A.2 |
the following holds: The scalar quantity
fulfils the cubic equation
| A.3 |
with coefficients
| A.4 |
Furthermore,
| A.5 |
Here, the constants implicit in the -notation depend on c only.
Proof
We decompose Y as
Then (A.2) takes the form
| A.6 |
We project both sides with , invert and take the norm to conclude
Then we use the smallness of by properly choosing and the definition of to infer , where we introduced the notation
Inserting this information back into (A.6) and using reveals
| A.7 |
In particular, (A.5) follows. Plugging (A.7) into (A.6) and applying the projection yields
For a linear operator and a bilinear form with we use the general bounds
for any and to find
which proves (A.3).
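The general bounds invoked in the last step are elementary; with T a generic linear operator and B a generic bilinear form on the normed space (placeholder names for this sketch, not notation from the lemma), they are of the type

```latex
\|T[x]-T[y]\| \;\le\; \|T\|\,\|x-y\|,
\qquad
\|B[x,x]-B[y,y]\| \;\le\; \|B\|\,\bigl(\|x\|+\|y\|\bigr)\,\|x-y\|,
```

the second estimate following from the decomposition B[x,x]-B[y,y] = B[x-y,x] + B[y,x-y] together with the induced-norm bound on B.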
Proof of Lemma 3.3
Due to the asymptotics and and the classification of singularities in (2.4), we can infer the following behaviour of the self-consistent fluctuation scale from Definition 2.4. There exists a constant depending only on the model parameters such that we have the following asymptotics. First of all, in the spectral bulk we trivially have that as long as is at least a distance of away from local minima of . In the remaining cases we use the explicit shape formulae from (2.4) to compute directly from Definition 2.4.
- Non-zero local minimum or cusp. Let be the location of a non-zero local minimum or a cusp . Then
for . (A.8a)
- Edge. Let be the position of a left/right edge at a gap in of size (cf. (2.4b)). Then
for . (A.8b)
The claimed bounds in Lemma 3.3 now follow directly from (3.7e) and (A.8) by distinguishing the respective regimes.
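To illustrate how the asymptotics in (A.8) arise: up to constants, the self-consistent fluctuation scale of Definition 2.4 is determined by the requirement that a window of that size around the reference point captures a spectral mass of order 1/N (restated here only schematically, since the display of Definition 2.4 is not reproduced in this excerpt). At an exact cusp and at a regular edge this gives

```latex
% Exact cusp: \rho(x)\sim|x-\tau_0|^{1/3}, so
\int_0^{\eta} x^{1/3}\,\mathrm{d}x \;\sim\; \eta^{4/3} \;\sim\; \frac{1}{N}
\;\;\Longrightarrow\;\; \eta_{\mathrm{f}}(\tau_0) \;\sim\; N^{-3/4},
% Regular edge: \rho(x)\sim x^{1/2} inside the spectrum, so
\int_0^{\eta} x^{1/2}\,\mathrm{d}x \;\sim\; \eta^{3/2} \;\sim\; \frac{1}{N}
\;\;\Longrightarrow\;\; \eta_{\mathrm{f}} \;\sim\; N^{-2/3},
```

while in the bulk, where the density is of order one, the same computation yields a fluctuation scale of order N^{-1}.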
Proof of Lemma 4.8
We start from (4.7) and estimate all vertex weights , interaction matrices and weight matrices trivially by
to obtain
We now choose the vertex ordering as in Lemma 4.5. In the first step we partition the set of G-edges into three parts: the edges not adjacent to , , the non-Wardable edges adjacent to , and the Wardable edges adjacent to , . By the choice of ordering it holds that . We introduce the shorthand notation and use the general Hölder inequality for any collection of random variables and indexed by some arbitrary index set
to compute
where we choose in such a way that . Since we can use (4.14a) to estimate
and it thus follows from
that
| A.9 |
for . By using (A.9) inductively times it thus follows that
proving the lemma.
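The general Hölder inequality invoked at the beginning of the proof reads, in one standard form: for random variables indexed by a finite set I and exponents p_a in [1,∞] with reciprocals summing to one,

```latex
\mathbf{E}\,\Bigl|\,\prod_{a\in I} X_a\Bigr|
 \;\le\; \prod_{a\in I} \bigl(\mathbf{E}\,|X_a|^{p_a}\bigr)^{1/p_a},
\qquad \sum_{a\in I}\frac{1}{p_a}=1.
```

In the proof above the exponents are chosen so that the Wardable edges can subsequently be estimated via (4.14a).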
Lemma A.2
For the coefficient in (4.42) we have the expansion
| A.10 |
for some , provided for some large enough constant .
Proof
Recall from the explanation after (4.42) that if , respectively. As we saw in the proof of Lemma 4.14, in the case in the complex Hermitian symmetry class, the operator B as well as has a bounded inverse. Since we assume that is large, we have , which also includes the real symmetric symmetry class. In particular, we also have and all subsequent statements hold simultaneously for B and . We call the normalised eigenvector corresponding to the eigenvalue with largest modulus of , recalling . Since we can use perturbation theory of to analyse spectral properties of B. In particular, we find
| A.11 |
where is the orthogonal projection onto the direction. The error terms are measured in -norm. For the expansions (A.11) we used that F has a spectral gap in the sense that
for some constant , depending only on model parameters. By using (A.11) we see that the left-hand side of (A.10) becomes . To complete the proof of the lemma we note that according to [10, Eq. (5.10)].
Footnotes
See Appendix B of arXiv:1809.03971v2 for details.
This equivalent property is commonly known as having a colouring number of at most , see e.g. [39].
We have , with the notations in [10], where and near the almost cusp, but we refrain from using these letters in the present context to avoid confusion.
See [10, Lemma 5.5] for the 1/3-Hölder continuity of quantities in the definition of .
L. Erdős: Partially supported by ERC Advanced Grant No. 338804.
T. Krüger: Partially supported by the Hausdorff Center for Mathematics.
D. Schröder: Partially supported by the IST Austria Excellence Scholarship.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
László Erdős, Email: lerdos@ist.ac.at.
Torben Krüger, Email: torben-krueger@uni-bonn.de.
Dominik Schröder, Email: dschroed@ist.ac.at, Email: dschroeder@ethz.ch.
References
- 1. Adlam, B., Che, Z.: Spectral statistics of sparse random graphs with a general degree distribution. Preprint (2015). arXiv:1509.03368
- 2. Adler, M., Cafasso, M., van Moerbeke, P.: From the Pearcey to the Airy process. Electron. J. Probab. 16(36), 1048–1064 (2011)
- 3. Adler, M., Ferrari, P.L., van Moerbeke, P.: Airy processes with wanderers and new universality classes. Ann. Probab. 38, 714–769 (2010)
- 4. Adler, M., van Moerbeke, P.: PDEs for the Gaussian ensemble with external source and the Pearcey distribution. Commun. Pure Appl. Math. 60, 1261–1292 (2007)
- 5. Ajanki, O.H., Erdős, L., Krüger, T.: Quadratic vector equations on complex upper half-plane. Mem. Amer. Math. Soc. 261(1261), v+133 (2019)
- 6. Ajanki, O.H., Erdős, L., Krüger, T.: Singularities of solutions to quadratic vector equations on the complex upper half-plane. Commun. Pure Appl. Math. 70, 1672–1705 (2017)
- 7. Ajanki, O.H., Erdős, L., Krüger, T.: Stability of the matrix Dyson equation and random matrices with correlations. Probab. Theory Relat. Fields 173, 293–373 (2019)
- 8. Ajanki, O.H., Erdős, L., Krüger, T.: Universality for general Wigner-type matrices. Probab. Theory Relat. Fields 169, 667–727 (2017)
- 9. Alt, J., Erdős, L., Krüger, T.: Spectral radius of random matrices with independent entries. Preprint (2019). arXiv:1907.13631
- 10. Alt, J., Erdős, L., Krüger, T.: The Dyson equation with linear self-energy: spectral bands, edges and cusps. Preprint (2018). arXiv:1804.07752
- 11. Alt, J., Erdős, L., Krüger, T., Schröder, D.: Correlated random matrices: band rigidity and edge universality. Ann. Probab. (2018). arXiv:1804.07744 (to appear)
- 12. Anderson, P.W.: Absence of diffusion in certain random lattices. Phys. Rev. 109, 1492–1505 (1958)
- 13. Baik, J., Kriecherbauer, T., McLaughlin, K.T.-R., Miller, P.D.: Discrete Orthogonal Polynomials: Asymptotics and Applications. Annals of Mathematics Studies, vol. 64, pp. viii+170. Princeton University Press, Princeton, NJ (2007)
- 14. Bauerschmidt, R., Huang, J., Knowles, A., Yau, H.-T.: Bulk eigenvalue statistics for random regular graphs. Ann. Probab. 45, 3626–3663 (2017)
- 15. Bekerman, F., Figalli, A., Guionnet, A.: Transport maps for -matrix models and universality. Commun. Math. Phys. 338, 589–619 (2015)
- 16. Borodin, A., Okounkov, A., Olshanski, G.: Asymptotics of Plancherel measures for symmetric groups. J. Am. Math. Soc. 13, 481–515 (2000)
- 17. Bourgade, P., Erdős, L., Yau, H.-T.: Edge universality of beta ensembles. Commun. Math. Phys. 332, 261–353 (2014)
- 18. Bourgade, P., Erdős, L., Yau, H.-T.: Universality of general -ensembles. Duke Math. J. 163, 1127–1190 (2014)
- 19. Bourgade, P., Erdős, L., Yau, H.-T., Yin, J.: Universality for a class of random band matrices. Adv. Theor. Math. Phys. 21, 739–800 (2017)
- 20. Bourgade, P., Yau, H.-T., Yin, J.: Random band matrices in the delocalized phase, I: quantum unique ergodicity and universality. Preprint (2018). arXiv:1807.01559
- 21. Brézin, E., Hikami, S.: Level spacing of random matrices in an external source. Phys. Rev. E (3) 58, 7176–7185 (1998)
- 22. Brézin, E., Hikami, S.: Universal singularity at the closure of a gap in a random matrix theory. Phys. Rev. E (3) 57, 4140–4149 (1998)
- 23. Capitaine, M., Péché, S.: Fluctuations at the edges of the spectrum of the full rank deformed GUE. Probab. Theory Relat. Fields 165, 117–161 (2016)
- 24. Cipolloni, G., Erdős, L., Krüger, T., Schröder, D.: Cusp universality for random matrices II: the real symmetric case. Pure Appl. Anal. 1(4), 615–707 (2019)
- 25. Cipolloni, G., Erdős, L., Schröder, D.: Edge universality for non-Hermitian random matrices. Preprint (2019). arXiv:1908.00969
- 26. Claeys, T., Kuijlaars, A.B.J., Liechty, K., Wang, D.: Propagation of singular behavior for Gaussian perturbations of random matrices. Commun. Math. Phys. 362, 1–54 (2018)
- 27. Claeys, T., Neuschel, T., Venker, M.: Boundaries of sine kernel universality for Gaussian perturbations of Hermitian matrices. Random Matrices Theory Appl. 8, 1950011, 50 pp. (2019)
- 28. Deift, P., Kriecherbauer, T., McLaughlin, K.T.-R.: New results on the equilibrium measure for logarithmic potentials in the presence of an external field. J. Approx. Theory 95, 388–475 (1998)
- 29. Deift, P., Kriecherbauer, T., McLaughlin, K.T.-R., Venakides, S., Zhou, X.: Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory. Commun. Pure Appl. Math. 52, 1335–1425 (1999)
- 30. Deift, P., Gioev, D.: Universality at the edge of the spectrum for unitary, orthogonal, and symplectic ensembles of random matrices. Commun. Pure Appl. Math. 60, 867–910 (2007)
- 31. Duse, E., Johansson, K., Metcalfe, A.: The cusp-Airy process. Electron. J. Probab. 21, 50 (2016)
- 32. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Spectral statistics of Erdős–Rényi graphs II: eigenvalue spacing and the extreme eigenvalues. Commun. Math. Phys. 314, 587–640 (2012)
- 33. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: The local semicircle law for a general class of random matrices. Electron. J. Probab. 18(59), 58 (2013)
- 34. Erdős, L., Krüger, T., Schröder, D.: Random matrices with slow correlation decay. Forum Math. Sigma 7, e8, 89 pp. (2019)
- 35. Erdős, L., Péché, S., Ramírez, J.A., Schlein, B., Yau, H.-T.: Bulk universality for Wigner matrices. Commun. Pure Appl. Math. 63, 895–925 (2010)
- 36. Erdős, L., Schlein, B., Yau, H.-T.: Universality of random matrices and local relaxation flow. Invent. Math. 185, 75–119 (2011)
- 37. Erdős, L., Schnelli, K.: Universality for random matrix flows with time-dependent density. Ann. Inst. Henri Poincaré Probab. Stat. 53, 1606–1656 (2017)
- 38. Erdős, L., Yau, H.-T.: A Dynamical Approach to Random Matrix Theory. Courant Lecture Notes in Mathematics, vol. 28, pp. ix+226. American Mathematical Society, Providence, RI (2017)
- 39. Erdős, P., Hajnal, A.: On chromatic number of graphs and set-systems. Acta Math. Acad. Sci. Hung. 17, 61–99 (1966)
- 40. Geudens, D., Zhang, L.: Transitions between critical kernels: from the tacnode kernel and critical kernel in the two-matrix model to the Pearcey kernel. Int. Math. Res. Not. IMRN, 5733–5782 (2015)
- 41. Guionnet, A., Huang, J.: Rigidity and edge universality of discrete -ensembles. Commun. Pure Appl. Math. 72(9), 1875–1982 (2019)
- 42. Hachem, W., Hardy, A., Najim, J.: A survey on the eigenvalues local behavior of large complex correlated Wishart matrices. In: Modélisation Aléatoire et Statistique—Journées MAS 2014, ESAIM Proceedings Surveys, vol. 51, pp. 150–174. EDP Sciences, Les Ulis (2015)
- 43. Hachem, W., Hardy, A., Najim, J.: Large complex correlated Wishart matrices: fluctuations and asymptotic independence at the edges. Ann. Probab. 44, 2264–2348 (2016)
- 44. Hachem, W., Hardy, A., Najim, J.: Large complex correlated Wishart matrices: the Pearcey kernel and expansion at the hard edge. Electron. J. Probab. 21, 36 (2016)
- 45. He, Y., Knowles, A.: Mesoscopic eigenvalue statistics of Wigner matrices. Ann. Appl. Probab. 27, 1510–1550 (2017)
- 46. Helton, J.W., Rashidi Far, R., Speicher, R.: Operator-valued semicircular elements: solving a quadratic matrix equation with positivity constraints. Int. Math. Res. Not. IMRN, Art. ID rnm086, 15 pp. (2007)
- 47. Huang, J., Landon, B., Yau, H.-T.: Bulk universality of sparse random matrices. J. Math. Phys. 56, 123301, 19 pp. (2015)
- 48. Johansson, K.: Discrete orthogonal polynomial ensembles and the Plancherel measure. Ann. Math. (2) 153, 259–296 (2001)
- 49. Johansson, K.: Universality of the local spacing distribution in certain ensembles of Hermitian Wigner matrices. Commun. Math. Phys. 215, 683–705 (2001)
- 50. Khorunzhy, A.M., Khoruzhenko, B.A., Pastur, L.A.: Asymptotic properties of large random matrices with independent entries. J. Math. Phys. 37, 5033–5060 (1996)
- 51. Knowles, A., Yin, J.: Anisotropic local laws for random matrices. Probab. Theory Relat. Fields 169, 257–352 (2017)
- 52. Krishnapur, M., Rider, B., Virág, B.: Universality of the stochastic Airy operator. Commun. Pure Appl. Math. 69, 145–199 (2016)
- 53. Landon, B., Yau, H.-T.: Convergence of local statistics of Dyson Brownian motion. Commun. Math. Phys. 355, 949–1000 (2017)
- 54. Landon, B., Yau, H.-T.: Edge statistics of Dyson Brownian motion. Preprint (2017). arXiv:1712.03881
- 55. Lee, J.O., Schnelli, K.: Edge universality for deformed Wigner matrices. Rev. Math. Phys. 27, 1550018, 94 pp. (2015)
- 56. Lee, J.O., Schnelli, K.: Local law and Tracy–Widom limit for sparse random matrices. Probab. Theory Relat. Fields 171, 543–616 (2018)
- 57. Lee, J.O., Schnelli, K., Stetler, B., Yau, H.-T.: Bulk universality for deformed Wigner matrices. Ann. Probab. 44, 2349–2425 (2016)
- 58. Lick, D.R., White, A.T.: k-degenerate graphs. Can. J. Math. 22, 1082–1096 (1970)
- 59. Mehta, M.L.: Random Matrices and the Statistical Theory of Energy Levels, pp. x+259. Academic Press, New York (1967)
- 60. Okounkov, A., Reshetikhin, N.: Random skew plane partitions and the Pearcey process. Commun. Math. Phys. 269, 571–609 (2007)
- 61. Pastur, L., Shcherbina, M.: Bulk universality and related properties of Hermitian matrix models. J. Stat. Phys. 130, 205–250 (2008)
- 62. Pastur, L., Shcherbina, M.: On the edge universality of the local eigenvalue statistics of matrix models. Mat. Fiz. Anal. Geom. 10, 335–365 (2003)
- 63. Pearcey, T.: The structure of an electromagnetic field in the neighbourhood of a cusp of a caustic. Philos. Mag. (7) 37, 311–317 (1946)
- 64. Shcherbina, M.: Change of variables as a method to study general -models: bulk universality. J. Math. Phys. 55, 043504, 23 pp. (2014)
- 65. Shcherbina, M.: Edge universality for orthogonal ensembles of random matrices. J. Stat. Phys. 136, 35–50 (2009)
- 66. Sodin, S.: The spectral edge of some random band matrices. Ann. Math. (2) 172, 2223–2251 (2010)
- 67. Soshnikov, A.: Universality at the edge of the spectrum in Wigner random matrices. Commun. Math. Phys. 207, 697–733 (1999)
- 68. Tao, T., Vu, V.: Random matrices: universality of local eigenvalue statistics. Acta Math. 206, 127–204 (2011)
- 69. Tao, T., Vu, V.: Random matrices: universality of local eigenvalue statistics up to the edge. Commun. Math. Phys. 298, 549–572 (2010)
- 70. Tracy, C.A., Widom, H.: Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 159, 151–174 (1994)
- 71. Tracy, C.A., Widom, H.: On orthogonal and symplectic matrix ensembles. Commun. Math. Phys. 177, 727–754 (1996)
- 72. Tracy, C.A., Widom, H.: The Pearcey process. Commun. Math. Phys. 263, 381–400 (2006)
- 73. Valkó, B., Virág, B.: Continuum limits of random matrices and the Brownian carousel. Invent. Math. 177, 463–508 (2009)






