Statistical Methods & Applications. 2021 Sep 13;30(5):1365–1398. doi: 10.1007/s10260-021-00590-6

Weighted stochastic block model

Tin Lok James Ng 1, Thomas Brendan Murphy 2

Abstract

We propose a weighted stochastic block model (WSBM) which extends the stochastic block model to the important case in which edges are weighted. We address the parameter estimation of the WSBM by use of maximum likelihood and variational approaches, and establish the consistency of these estimators. The problem of choosing the number of classes in a WSBM is addressed. The proposed model is applied to simulated data and an illustrative data set.

Keywords: Weighted stochastic block model, Variational estimators, Maximum likelihood estimators, Consistency, Model selection

Introduction

Networks are used in many scientific disciplines to represent interactions among objects of interest. For example, in the social sciences, a network typically represents social ties between actors. In biological sciences, a network can represent interactions between proteins.

The stochastic block model (SBM) (Holland et al. 1983; Snijders and Nowicki 1997) is a popular generative model which partitions vertices into latent classes. Conditional on the latent class allocations, the connection probability between two vertices depends only on the latent classes to which the two vertices belong. Many extensions of the classic SBM have been proposed, including the degree-corrected SBM (Karrer and Newman 2011; Peng and Carvalho 2016), the mixed membership SBM (Airoldi et al. 2008) and the overlapping SBM (Latouche et al. 2011).

The SBM and many of its variants are usually restricted to Bernoulli networks. However, many binary networks are produced after applying a threshold to a weighted relationship (Ghasemian et al. 2016) which results in the loss of potentially valuable information. Although most of the literature has focused on binary networks, there is a growing interest in weighted graphs (Barrat et al. 2004; Newman 2004; Peixoto 2018).

In particular, a number of clustering methods have been proposed for weighted graphs, including algorithm-based and model-based methods. Algorithm-based methods for clustering weighted graphs can be further divided into two classes: algorithms which do not explicitly optimize any criterion (Pons and Latapy 2005; von Luxburg 2007) and those that directly optimize a criterion (Clauset et al. 2004; Stouffer and Bascompte 2011). Model-based methods (Mariadassou et al. 2010; Aicher et al. 2013, 2015; Ludkin 2020) attempt to take into account the random variability in the data. A recent review of graph clustering methods is given by Leger et al. (2014).

Mariadassou et al. (2010) presents a Poisson mixture random graph model for integer-valued networks and proposes a variational inference approach for parameter estimation. The model can account for covariates via a regression model. In Zanghi et al. (2010), a mixture modelling framework is considered for random graphs with discrete or continuous edges. In particular, the edge distribution is assumed to follow an exponential family distribution. Aicher et al. (2013) proposed a general class of weighted stochastic block models for dense graphs where edge weights are assumed to be generated according to an exponential family distribution. In particular, their construction produces complete graphs, in which every pair of vertices is connected by some real-valued weight. Since most real-world networks are sparse, the constructed model cannot be applied directly. To address this shortcoming, Aicher et al. (2015) extends the work of Aicher et al. (2013) and models the edge existence using a Bernoulli distribution and the edge weights using an exponential family distribution. The contributions of the edge-existence distribution and the edge-weight distribution to the likelihood function are then combined via a simple tuning parameter. However, their construction does not result in a generative model, and it is not obvious how to simulate network observations from the proposed model. More recently, Ludkin (2020) presents a generalization of the SBM which allows arbitrary edge weight distributions and proposes a reversible jump Markov chain Monte Carlo sampler for estimating the parameters and the number of blocks. However, the use of a continuous probability distribution to model the edge weights implies that the resulting graph is complete, whereby every edge is present. This assumption is unrealistic for many applications in which a certain proportion of the real-valued edges is 0. Haj et al. (2020) presents a binomial SBM for weighted graphs and proposes a variational expectation maximization algorithm for parameter estimation.

In this paper, we propose a weighted stochastic block model (WSBM) with gamma weights which aims to capture the information in the weights directly using a generative model. Both maximum likelihood estimation and variational methods are considered for parameter estimation, and consistency results are derived for both. We also address the problem of choosing the number of classes using the Integrated Completed Likelihood (ICL) criterion (Biernacki et al. 2000). The proposed model and inference methodology are applied to an illustrative data set.

Model specification

In this section, we present the weighted stochastic block model in detail and introduce the main notations and assumptions.

We let $\Omega = (\mathcal{V}, \mathcal{X}, \mathcal{Y})$ denote the set of directed weighted random graphs, where $\mathcal{V} = \mathbb{N}$ is the countable set of vertices, $\mathcal{X} = \{0,1\}^{\mathbb{N}\times\mathbb{N}}$ is the set of edge-existence adjacency matrices, and $\mathcal{Y} = \mathbb{R}_+^{\mathbb{N}\times\mathbb{N}}$ is the set of weighted adjacency matrices. Given a random adjacency matrix $X = \{X_{ij}\}_{i,j\in\mathbb{N}}$, $X_{ij} = 1$ if an edge exists from vertex $i$ to vertex $j$ and $X_{ij} = 0$ otherwise. The associated weighted random adjacency matrix is given by: for $i \neq j$, $Y_{ij} > 0$ if $X_{ij} = 1$, and $Y_{ij} = 0$ otherwise. Let $\mathbb{P}$ be a probability measure on $\Omega$.

Generative model

We now describe the procedure for generating a sample random graph $(V, X, Y)$ with $n$ vertices from $\Omega$.

  • Let $Z_{[n]} = (Z_1, \ldots, Z_n)$ be the vector of latent block allocations for the vertices, and set $\theta = (\theta_1, \ldots, \theta_Q)$ with $\sum_q \theta_q = 1$. For each vertex $v_i$, draw its block label $Z_i \in \{1, \ldots, Q\}$ from a multinomial distribution
    $$Z_i \sim \mathcal{M}(1; \theta_1, \ldots, \theta_Q).$$
  • Let $\pi = (\pi_{ql})_{q,l=1}^Q$ be a $Q \times Q$ matrix with entries in $[0,1]$. Conditional on the block allocations $Z_{[n]}$, the entries $X_{ij}$, $i \neq j$, of the edge-existence adjacency matrix $X_{[n]}$ are generated independently from a Bernoulli distribution
    $$X_{ij} \mid Z_i = q, Z_j = l \sim \mathcal{B}(\pi_{ql}).$$
  • Let $\alpha = (\alpha_{ql})_{q,l=1}^Q$ and $\beta = (\beta_{ql})_{q,l=1}^Q$ be $Q \times Q$ matrices with entries taking values in the positive reals. Conditional on the latent block allocations $Z_{[n]}$ and the edge-existence adjacency matrix $X$, the weighted adjacency matrix $Y_{[n]}$ is generated independently from
    $$Y_{ij} \mid X_{ij} = 1, Z_i = q, Z_j = l \sim \mathrm{Ga}(\alpha_{ql}, \beta_{ql}), \qquad Y_{ij} \mid X_{ij} = 0, Z_i = q, Z_j = l \sim \delta_{\{0\}},$$
    where $\mathrm{Ga}(\cdot,\cdot)$ denotes the gamma distribution and $\delta_{\{\cdot\}}$ is the Dirac delta function.

The generative framework described above is a straightforward extension of the binary stochastic block model, whereby a positive weight is generated according to a gamma distribution for each edge. In particular, $(X, Z)$ is a realization of the binary directed SBM. The gamma distribution is chosen due to its flexibility in the sense that, depending on the value of its shape parameter, it can represent distributions of different shapes.
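To make the generative procedure concrete, the following is a minimal Python simulation sketch (an illustration, not the authors' implementation), assuming numpy; the names mirror the notation above.

```python
# Minimal WSBM simulator following the three generative steps above.
import numpy as np

def sample_wsbm(n, theta, pi, alpha, beta, seed=None):
    """Draw block labels Z, binary edges X and gamma edge weights Y."""
    rng = np.random.default_rng(seed)
    Q = len(theta)
    Z = rng.choice(Q, size=n, p=theta)           # Z_i ~ M(1; theta_1,...,theta_Q)
    X = np.zeros((n, n), dtype=int)
    Y = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            q, l = Z[i], Z[j]
            X[i, j] = rng.binomial(1, pi[q, l])  # X_ij ~ B(pi_ql)
            if X[i, j] == 1:
                # numpy's gamma uses shape and scale = 1/rate
                Y[i, j] = rng.gamma(alpha[q, l], 1.0 / beta[q, l])
    return Z, X, Y
```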

The log-likelihood of the observations $X_{[n]}$ and $Y_{[n]}$ is given by

$$\mathcal{L}_2(Y_{[n]}, X_{[n]}; \theta, \pi, \alpha, \beta) = \log\left(\sum_{z_{[n]}} e^{\mathcal{L}_1(Y_{[n]}, X_{[n]}; z_{[n]}, \pi, \alpha, \beta)}\, \mathbb{P}\{Z_{[n]} = z_{[n]}\}\right), \tag{1}$$

where the sum is over all possible latent block allocations, $\mathbb{P}\{Z_{[n]} = z_{[n]}\} = \prod_{i=1}^n \theta_{z_i}$ is the probability of latent block allocation $z_{[n]}$, and

$$\begin{aligned}
\mathcal{L}_1(Y_{[n]}, X_{[n]}; z_{[n]}, \pi, \alpha, \beta) &= \sum_{i\neq j}\left[X_{ij}\left\{\log\pi_{z_i,z_j} + \log f(Y_{ij}; \alpha_{z_i,z_j}, \beta_{z_i,z_j})\right\} + (1 - X_{ij})\log(1 - \pi_{z_i,z_j})\right] \\
&= \sum_{i\neq j}\left\{X_{ij}\log\pi_{z_i,z_j} + (1 - X_{ij})\log(1 - \pi_{z_i,z_j})\right\} + X_{ij}\left\{\alpha_{z_i,z_j}\log\beta_{z_i,z_j} + (\alpha_{z_i,z_j} - 1)\log Y_{ij} - \beta_{z_i,z_j}Y_{ij} - \log\Gamma(\alpha_{z_i,z_j})\right\} \tag{2}
\end{aligned}$$

is the complete-data log-likelihood, where $f(\cdot\,; a, b)$ is the gamma probability density function with shape parameter $a$ and rate parameter $b$.
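For concreteness, the complete-data log-likelihood (2) can be evaluated directly; a minimal sketch, assuming numpy/scipy and the $(Z, X, Y)$ arrays produced by the simulator above.

```python
# Direct evaluation of L1 in Eq. (2) for given labels z.
import numpy as np
from scipy.special import gammaln

def complete_loglik(X, Y, z, pi, alpha, beta):
    n = X.shape[0]
    ll = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            q, l = z[i], z[j]
            if X[i, j] == 1:
                # log pi_ql plus gamma log-density with shape alpha_ql, rate beta_ql
                ll += np.log(pi[q, l])
                ll += (alpha[q, l] * np.log(beta[q, l])
                       + (alpha[q, l] - 1.0) * np.log(Y[i, j])
                       - beta[q, l] * Y[i, j]
                       - gammaln(alpha[q, l]))
            else:
                ll += np.log(1.0 - pi[q, l])
    return ll
```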

Assumptions

We present several assumptions needed for identifiability and consistency of maximum likelihood estimates. The following four assumptions were presented in Celisse et al. (2012) and are needed in this paper.

Assumption 1

For every $q \neq q'$, there exists $l \in \{1, \ldots, Q\}$ such that

$$\pi_{q,l} \neq \pi_{q',l} \quad \text{or} \quad \pi_{l,q} \neq \pi_{l,q'}.$$

Assumption 2

There exists $\zeta \in (0,1)$ such that for all $(q,l) \in \{1,\ldots,Q\}^2$,

$$\pi_{ql} \in [\zeta, 1-\zeta].$$

Assumption 3

There exists $0 < \gamma < 1/Q$ such that for all $q \in \{1,\ldots,Q\}$,

$$\theta_q \in [\gamma, 1-\gamma].$$

Assumption 4

There exist $0 < \gamma < 1/Q$ and $n_0 \in \mathbb{N}$ such that for all $q \in \{1,\ldots,Q\}$ and all $n \geq n_0$,

$$\frac{N_q(z_{[n]})}{n} \geq \gamma,$$

where $N_q(z_{[n]}) = |\{1 \leq i \leq n : z_i = q\}|$ and $z_{[n]}$ is any realized block allocation under the WSBM.

(A1) requires that no two classes have the same connectivity probabilities. If this assumption is violated, the resulting model has too many classes and is non-identifiable. (A2) requires that the connectivity probability between any two classes lies strictly within a closed subset of the unit interval. Note that this assumption is slightly more restrictive than assumption 2 of Celisse et al. (2012) in that we do not consider the boundary cases where $\pi_{q,l} \in \{0,1\}$. The boundary cases require special treatment and are not pursued in this paper. (A3) ensures that no class is empty with high probability, while (A4) is the empirical version of (A3). We note that (A4) is satisfied asymptotically under the generative framework in Sect. 2.1 since the block allocations are generated according to a multinomial distribution.

In addition to the four assumptions above, we also have the following constraints on the gamma parameters.

Assumption 5

For every $q \neq q'$, there exists $l \in \{1,\ldots,Q\}$ such that

$$(\alpha_{q,l}, \beta_{q,l}) \neq (\alpha_{q',l}, \beta_{q',l}) \quad \text{or} \quad (\alpha_{l,q}, \beta_{l,q}) \neq (\alpha_{l,q'}, \beta_{l,q'}).$$

(A5) requires that no two classes have the same weight distribution. This assumption is the exact counterpart of (A1).

The log-likelihood function (1) contains degeneracies that prevent the direct estimation of the parameters $\theta, \pi, \alpha, \beta$. To see this, we note that the probability density function of a gamma distribution $\mathrm{Ga}(a,b)$ is given by

$$f(y; a, b) = \frac{b^a y^{a-1}\exp(-by)}{\Gamma(a)}.$$

By Stirling's formula, we have

$$\Gamma(a) = \sqrt{2\pi}\, a^{a-1/2}\exp(-a)\left(1 + O(a^{-1})\right).$$

Setting $y = a/b$,

$$f(y; a, b) = \frac{y^{a-1}\exp(-a)(a/y)^a}{\Gamma(a)} = \frac{1}{\sqrt{2\pi}\, y}\cdot\frac{\sqrt{a}}{1 + O(a^{-1})}.$$

Therefore, letting $a \to \infty$ while keeping $a/b = y$ fixed, we have $f(y; a, b) \to \infty$. One can therefore show that the log-likelihood function is unbounded above. To avoid likelihood degeneracy, we compactify the parameter space. That is, we restrict the parameter space to a compact subset which contains the true parameters. Therefore, we have the following assumption.
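The divergence can be checked numerically; a small sketch assuming scipy, fixing $y = 2$ and letting the shape $a$ grow while the rate is held at $b = a/y$, so that the density at $y$ behaves like $\sqrt{a/(2\pi)}/y$.

```python
# Numerical illustration of the likelihood degeneracy at a fixed point y.
import numpy as np
from scipy.stats import gamma

y = 2.0
for a in [10, 100, 1_000, 10_000]:
    b = a / y                               # keep a/b = y fixed
    dens = gamma.pdf(y, a, scale=1.0 / b)   # scipy parameterizes by scale = 1/rate
    approx = np.sqrt(a / (2 * np.pi)) / y   # Stirling-based approximation
    print(f"a={a:>6}: f(y;a,b)={dens:.3f}, sqrt(a/(2*pi))/y={approx:.3f}")
```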

Assumption 6

There exist $0 < \alpha_c < \alpha_C < \infty$ and $0 < \beta_c < \beta_C < \infty$ such that for all $(q,l) \in \{1,\ldots,Q\}^2$,

$$\alpha_c \leq \alpha_{ql} \leq \alpha_C, \qquad \beta_c \leq \beta_{ql} \leq \beta_C.$$

With this assumption, it is easy to see that the log-likelihood function is bounded for any sample size.

Identifiability

Sufficient conditions for identifiability of the binary SBM with two classes were first obtained by Allman et al. (2009). Celisse et al. (2012) show that the SBM parameters are identifiable up to a permutation of class labels under the conditions that the vector $\pi \cdot \theta$ has distinct coordinates and $n \geq 2Q$. The condition on $\pi \cdot \theta$ is mild since the set of vectors violating this assumption has Lebesgue measure 0. The identifiability of the weighted SBM is more challenging: the only known result (Section 4 of Allman et al. (2011)) requires all entries of $(\pi, \alpha, \beta)$ to be distinct. We note that the assumptions in the previous section are not necessarily sufficient, but are necessary, to ensure identifiability of the parameters.

Asymptotic recovery of class labels

We study the posterior probability distribution of the class labels $Z_{[n]}$ given the random adjacency matrix $X_{[n]}$ and weight matrix $Y_{[n]}$, which is denoted by $\mathbb{P}(Z_{[n]} \mid X_{[n]}, Y_{[n]})$. Since $X_{[n]}$ and $Y_{[n]}$ are random, $\mathbb{P}(Z_{[n]} \mid X_{[n]}, Y_{[n]})$ is also random.

Let $\mathbb{P}^\star(X_{[n]}, Y_{[n]}) := \mathbb{P}(X_{[n]}, Y_{[n]} \mid Z_{[n]} = z^\star_{[n]})$ be the true conditional distribution of $(X_{[n]}, Y_{[n]})$, which depends on the true parameters $(\theta^\star, \pi^\star, \alpha^\star, \beta^\star)$. We study the rate at which $\mathbb{P}(Z_{[n]} = z^\star_{[n]} \mid X_{[n]}, Y_{[n]})$ converges towards 1 with respect to $\mathbb{P}^\star$.

The matrices $\pi, \alpha, \beta$ are permutation-invariant if one permutes both their rows and columns according to some permutation $\sigma: \{1,\ldots,Q\} \to \{1,\ldots,Q\}$. Let $\pi^\sigma, \alpha^\sigma, \beta^\sigma$ be the matrices defined by

$$\pi^\sigma_{ql} = \pi_{\sigma(q),\sigma(l)}, \qquad \alpha^\sigma_{ql} = \alpha_{\sigma(q),\sigma(l)}, \qquad \beta^\sigma_{ql} = \beta_{\sigma(q),\sigma(l)},$$

and define the set

$$\Sigma = \left\{\sigma: \{1,\ldots,Q\} \to \{1,\ldots,Q\} \,\middle|\, \pi^\sigma = \pi,\ \alpha^\sigma = \alpha,\ \beta^\sigma = \beta\right\}.$$

Two vectors of class labels $z$ and $z'$ are equivalent if there exists $\sigma \in \Sigma$ such that $z'_i = \sigma(z_i)$ for all $i$. We let $[z]$ denote the equivalence class of $z$ and will omit the square brackets in the equivalence class notation as long as no confusion arises.

The following result extends Theorem 3.1 of Celisse et al. (2012) to the case of WSBM.

Theorem 1

Under assumptions (A1)–(A6), for every $t > 0$,

$$\mathbb{P}^\star\left[\sum_{[z_{[n]}] \neq [z^\star_{[n]}]}\frac{\mathbb{P}([Z_{[n]}] = [z_{[n]}] \mid X_{[n]}, Y_{[n]})}{\mathbb{P}([Z_{[n]}] = [z^\star_{[n]}] \mid X_{[n]}, Y_{[n]})} > t\right] = O\left(n e^{-\kappa n}\right)$$

uniformly with respect to $z^\star$, for some $\kappa > 0$ depending only on $\pi^\star, \alpha^\star, \beta^\star$ but not on $z^\star$. Here $z^\star = (z^\star_i)_{i=1}^\infty$ with $z^\star_i \in \{1,\ldots,Q\}$. Furthermore, $\mathbb{P}^\star$ can be replaced by $\mathbb{P}$ under assumptions (A1)–(A3) and (A5)–(A6).

Maximum likelihood estimation of WSBM parameters

For the binary SBM, consistency of parameter estimation has been shown for profile likelihood maximization (Bickel and Chen 2009), the spectral clustering method (Rohe et al. 2011), the method of moments approach (Bickel et al. 2011), a method based on empirical degrees (Channarond et al. 2012), and others (Choi et al. 2012). Consistency of both maximum likelihood estimation and the variational approximation method is established in Celisse et al. (2012) and Bickel et al. (2013), where asymptotic normality is also established in Bickel et al. (2013). Abbe (2018) reviews recent developments in the stochastic block model and community detection.

Ambroise and Matias (2012) proposes a general class of sparse and weighted SBMs where the edge distribution may exhibit any parametric form, and studies the consistency and convergence rates of the various estimators considered in their paper. However, their model requires the edge-existence parameter to be constant across the graph, that is, $\pi_{ql} = \pi$ for all $q, l$, or $\pi_{ql}$ can be modelled as $\pi_{ql} = a_1 I_{\{q=l\}} + a_2 I_{\{q\neq l\}}$, where $I_{\{\cdot\}}$ is the indicator function and $a_1 \neq a_2$. Furthermore, they also assume that, conditional on the block assignments, the edge weight $Y_{ij} \mid Z_i = q, Z_j = l$ is modelled using a parametric distribution with a single parameter $\theta_{ql}$. They further impose the restriction that

$$\theta_{ql} = \begin{cases}\theta_{\mathrm{in}} & \text{if } q = l,\\ \theta_{\mathrm{out}} & \text{if } q \neq l.\end{cases}$$

These assumptions are more restrictive than those imposed in this paper. Jog and Loh (2015) studies the problem of characterizing the boundary between success and failure of MLE when edge weights are drawn from discrete distributions. More recently, Brault et al. (2020) studies the consistency and asymptotic normality of the MLE and variational estimators for the latent block model which is a generalization of the SBM. However, the model considered in Brault et al. (2020) is restricted to the dense setting and requires the observations in the data matrix to be modelled by univariate exponential family distributions.

This section addresses the consistency of the MLE for the WSBM. In particular, we extend the results obtained in the pioneering paper of Celisse et al. (2012) to the case of weighted graphs; our proof closely follows their proof of consistency of the MLE. The consistency proofs for the MLE of $(\pi, \alpha, \beta)$ and of $\theta$ require different treatments since there are $n(n-1)$ edges but only $n$ vertices. The following result establishes the MLE consistency of $(\pi, \alpha, \beta)$.

Theorem 2

Assume that assumptions (A1), (A2), (A3), (A5), (A6) hold. Let us define the MLE of $(\theta, \pi, \alpha, \beta)$ by

$$(\hat\theta, \hat\pi, \hat\alpha, \hat\beta) := \arg\max_{\theta,\pi,\alpha,\beta} \mathcal{L}_2(Y_{[n]}, X_{[n]}; \theta, \pi, \alpha, \beta).$$

Then for any metric $d(\cdot,\cdot)$ on $(\pi, \alpha, \beta)$,

$$d\left((\hat\pi, \hat\alpha, \hat\beta), (\pi^\star, \alpha^\star, \beta^\star)\right) \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0.$$

Under an additional assumption on the rate of convergence of the estimators $(\hat\pi, \hat\alpha, \hat\beta)$ of $(\pi^\star, \alpha^\star, \beta^\star)$, consistency of $\hat\theta$ can be established.

Theorem 3

Let $(\hat\theta, \hat\pi, \hat\alpha, \hat\beta)$ denote the MLE of $(\theta, \pi, \alpha, \beta)$ and assume that $\|\hat\pi - \pi^\star\| = o_P(\sqrt{\log n}/n)$, $\|\hat\alpha - \alpha^\star\| = o_P(\sqrt{\log n}/n)$, and $\|\hat\beta - \beta^\star\| = o_P(\sqrt{\log n}/n)$. Then

$$d(\hat\theta, \theta^\star) \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0$$

for any metric $d$ in $\mathbb{R}^Q$.

Variational estimators

Direct maximization of the log-likelihood function is intractable except for very small graphs since it involves a sum over Qn terms. In practice, approximate algorithms such as Markov Chain Monte Carlo (MCMC) and variational inference algorithms are often used for parameter inference. For the SBM, both MCMC and variational inference approaches have been proposed (Snijders and Nowicki 1997; Daudin et al. 2008). Variational inference algorithms have also been developed for mixed membership SBM (Airoldi et al. 2008), overlapping SBM (Latouche et al. 2011), and the weighted SBM proposed in Aicher et al. (2013). This section develops a variational inference algorithm for the WSBM which can be considered a natural extension of the algorithm proposed in Daudin et al. (2008) for the SBM.

The variational method consists in approximating $\mathbb{P}(Z_{[n]} = \cdot \mid X_{[n]}, Y_{[n]})$ by a product of $n$ multinomial distributions. Let $\mathcal{D}_n$ denote a set of product multinomial distributions

$$\mathcal{D}_n = \left\{D_{\tau_{[n]}} = \prod_{i=1}^n \mathcal{M}(1; \tau_{i,1}, \ldots, \tau_{i,Q}) \,\middle|\, \tau_{[n]} \in \mathcal{S}_n\right\}$$

where

$$\mathcal{S}_n = \left\{\tau_{[n]} = (\tau_1, \ldots, \tau_n) \in \left([0,1]^Q\right)^n \,\middle|\, \text{for all } i,\ \tau_i = (\tau_{i,1}, \ldots, \tau_{i,Q}),\ \sum_{q=1}^Q \tau_{i,q} = 1\right\}.$$

For any $D_{\tau_{[n]}} \in \mathcal{D}_n$, the variational log-likelihood is defined by

$$\mathcal{J}(Y_{[n]}, X_{[n]}; \tau_{[n]}, \theta, \pi, \alpha, \beta) = \mathcal{L}_2(Y_{[n]}, X_{[n]}; \theta, \pi, \alpha, \beta) - \mathrm{KL}\left(D_{\tau_{[n]}},\ \mathbb{P}(\cdot \mid X_{[n]}, Y_{[n]})\right).$$

Here $\mathrm{KL}(\cdot,\cdot)$ denotes the Kullback–Leibler divergence between two probability distributions, which is nonnegative. Therefore, $\mathcal{J}$ provides a lower bound on the log-likelihood function. We have that

$$\mathcal{J}(Y_{[n]}, X_{[n]}; \tau_{[n]}, \theta, \pi, \alpha, \beta) = \sum_{i\neq j}\sum_{q,l}\tau_{i,q}\tau_{j,l}\left(\log b(X_{ij}; \pi_{ql}) + X_{ij}\log f(Y_{ij}; \alpha_{ql}, \beta_{ql})\right) - \sum_i\sum_q \tau_{iq}\left(\log\tau_{iq} - \log\theta_q\right),$$

where $b(\cdot\,; p)$ denotes the probability mass function of a Bernoulli distribution with parameter $p$, and recall that $f(\cdot\,; \alpha_{ql}, \beta_{ql})$ denotes the density function of a gamma distribution $\mathrm{Ga}(\alpha_{ql}, \beta_{ql})$.

The variational algorithm works by iteratively maximizing the lower bound $\mathcal{J}$ with respect to the approximating distribution $D_{\tau_{[n]}}$ and estimating the model parameters. Maximization of $\mathcal{J}$ with respect to $D_{\tau_{[n]}}$ consists of solving

$$\hat\tau_{[n]} := \arg\max_{\tau_{[n]}} \mathcal{J}(Y_{[n]}, X_{[n]}; \tau_{[n]}, \theta, \pi, \alpha, \beta),$$

where $\theta, \pi, \alpha, \beta$ can be replaced by plug-in estimates. The solution satisfies the fixed-point relation

$$\hat\tau_{iq} \propto \theta_q \prod_{j\neq i}\prod_l b(X_{ij}; \pi_{ql})^{\hat\tau_{jl}}\, f(Y_{ij}; \alpha_{ql}, \beta_{ql})^{\hat\tau_{jl}X_{ij}}. \tag{3}$$

Conditional on $\hat\tau_{[n]}$, the variational estimators of $(\theta, \pi, \alpha, \beta)$ are found by solving

$$(\tilde\theta, \tilde\pi, \tilde\alpha, \tilde\beta) = \arg\max_{\theta,\pi,\alpha,\beta} \mathcal{J}(Y_{[n]}, X_{[n]}; \hat\tau_{[n]}, \theta, \pi, \alpha, \beta).$$

Closed-form updates for $\tilde\theta$ and $\tilde\pi$ exist and are given by

$$\tilde\theta_q = \frac{1}{n}\sum_i \hat\tau_{iq}, \tag{4}$$
$$\tilde\pi_{ql} = \frac{\sum_{i\neq j}\hat\tau_{iq}\hat\tau_{jl}X_{ij}}{\sum_{i\neq j}\hat\tau_{iq}\hat\tau_{jl}}. \tag{5}$$

On the other hand, the updates for $\tilde\alpha$ and $\tilde\beta$ do not have a closed form, since the maximum likelihood estimators of the two parameters of a gamma distribution do not have closed forms. However, using the fact that a gamma distribution is a special case of a generalized gamma distribution, Ye and Chen (2017) derived simple closed-form estimators for the two parameters of the gamma distribution. The estimators were shown to be strongly consistent and asymptotically normal. For $q, l = 1, \ldots, Q$, let us define the quantities

$$\tilde{W}_{ql} = \sum_{\substack{i\neq j,\\ X_{ij}=1}}\hat\tau_{iq}\hat\tau_{jl}, \qquad \tilde{U}_{ql} = \sum_{\substack{i\neq j,\\ X_{ij}=1}}\hat\tau_{iq}\hat\tau_{jl}Y_{ij}, \qquad \tilde{V}_{ql} = \sum_{\substack{i\neq j,\\ X_{ij}=1}}\hat\tau_{iq}\hat\tau_{jl}\log Y_{ij}, \qquad \tilde{S}_{ql} = \sum_{\substack{i\neq j,\\ X_{ij}=1}}\hat\tau_{iq}\hat\tau_{jl}Y_{ij}\log Y_{ij};$$

then the updates for $\alpha_{ql}, \beta_{ql}$ are given by

$$\tilde\alpha_{ql} = \frac{\tilde{W}_{ql}\tilde{U}_{ql}}{\tilde{W}_{ql}\tilde{S}_{ql} - \tilde{V}_{ql}\tilde{U}_{ql}}, \tag{6}$$
$$\tilde\beta_{ql} = \frac{\tilde{W}_{ql}^2}{\tilde{W}_{ql}\tilde{S}_{ql} - \tilde{V}_{ql}\tilde{U}_{ql}}. \tag{7}$$

We obtain the variational estimators $(\tilde\theta, \tilde\pi, \tilde\alpha, \tilde\beta)$ by iterating the updates (3), (4), (5), (6), (7) until convergence.
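A compact sketch of one cycle of these updates, assuming numpy/scipy; this is an illustration of Eqs. (3)–(7), not the authors' code. The E-step performs one pass of the fixed-point relation (3) on the log scale; the M-step applies the closed forms (4)–(7).

```python
import numpy as np
from scipy.special import gammaln

def vem_step(X, Y, tau, theta, pi, alpha, beta, eps=1e-12):
    """One cycle of the variational updates (3)-(7); tau is n x Q."""
    n, Q = tau.shape
    logY = np.where(X == 1, np.log(np.where(X == 1, Y, 1.0)), 0.0)

    # E-step: one pass of the fixed-point relation (3), on the log scale.
    log_tau = np.tile(np.log(theta + eps), (n, 1))
    for q in range(Q):
        for l in range(Q):
            M = (X * np.log(pi[q, l] + eps)
                 + (1 - X) * np.log(1 - pi[q, l] + eps)
                 + X * (alpha[q, l] * np.log(beta[q, l])
                        + (alpha[q, l] - 1) * logY
                        - beta[q, l] * Y
                        - gammaln(alpha[q, l])))
            np.fill_diagonal(M, 0.0)            # exclude j = i
            log_tau[:, q] += M @ tau[:, l]
    tau = np.exp(log_tau - log_tau.max(axis=1, keepdims=True))
    tau /= tau.sum(axis=1, keepdims=True)

    # M-step: closed forms (4)-(7).
    theta = tau.mean(axis=0)                                     # Eq. (4)
    off = 1.0 - np.eye(n)
    pi = (tau.T @ (X * off) @ tau) / (tau.T @ off @ tau + eps)   # Eq. (5)
    W = tau.T @ X @ tau
    U = tau.T @ (X * Y) @ tau
    V = tau.T @ (X * logY) @ tau
    S = tau.T @ (X * Y * logY) @ tau
    denom = W * S - V * U + eps
    alpha = W * U / denom                                        # Eq. (6)
    beta = W ** 2 / denom                                        # Eq. (7)
    return tau, theta, pi, alpha, beta
```

One full fit repeats vem_step until $\mathcal{J}$ (or the parameter values) stabilizes, from several random initializations of tau.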

We now address the consistency of the variational estimators derived above. The following two propositions are the counterparts of Theorems 2 and 3 for the variational estimators. We omit the proofs since they follow similar arguments to the proofs of Corollary 4.3 and Theorem 4.4 of Celisse et al. (2012).

Proposition 1

Assume that assumptions (A1), (A2), (A3), (A5) and (A6) hold, and let $(\tilde\theta, \tilde\pi, \tilde\alpha, \tilde\beta)$ be the variational estimators defined above. Then for any distance $d(\cdot,\cdot)$ on $(\pi, \alpha, \beta)$,

$$d\left((\tilde\pi, \tilde\alpha, \tilde\beta), (\pi^\star, \alpha^\star, \beta^\star)\right) \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0.$$

Proposition 2

Assume that the variational estimators $(\tilde\pi, \tilde\alpha, \tilde\beta)$ converge at rate $1/n$ to $(\pi^\star, \alpha^\star, \beta^\star)$, respectively, and that assumptions (A1), (A2), (A3), (A5), (A6) hold. We have

$$d(\tilde\theta, \theta^\star) \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0,$$

where $d$ denotes any distance between vectors in $\mathbb{R}^Q$.

Note that a stronger assumption on the convergence rate ($1/n$) of $\tilde\pi, \tilde\alpha, \tilde\beta$ is imposed for Proposition 2, compared with $\sqrt{\log n}/n$ in Theorem 3. The same assumption is also used in Theorem 4.4 of Celisse et al. (2012).

Choosing the number of classes

In real-world applications, the number of classes is typically unknown and needs to be estimated from the data. For the SBM, a number of methods have been developed to determine the number of classes, including methods based on the log-likelihood ratio statistic (Wang and Bickel 2017), composite likelihood (Saldaña et al. 2017), the exact integrated complete data likelihood (Côme and Latouche 2015) and a Bayesian framework (Yan 2016). Model selection for variants of the SBM has also been investigated (Latouche et al. 2014).

We apply the integrated classification likelihood (ICL) criterion developed by Biernacki et al. (2000) to choose the number of classes for the WSBM. The ICL is an approximation of the complete-data integrated likelihood. The ICL criterion for the SBM has been derived by Daudin et al. (2008) under the assumption that the prior distribution of $(\theta, \pi)$ factorizes, with a non-informative Dirichlet prior on $\theta$. Here we follow the approach of Daudin et al. (2008) to derive an approximate ICL for the WSBM.

Let $m_Q$ denote the model with $Q$ blocks. The ICL criterion is an approximation of the complete-data integrated likelihood

$$\mathcal{L}(Y_{[n]}, X_{[n]}, Z_{[n]} \mid m_Q) = \int_{\theta,\pi,\alpha,\beta} \mathcal{L}(Y_{[n]}, X_{[n]}, Z_{[n]}; \theta, \pi, \alpha, \beta, m_Q)\, g(\theta, \pi, \alpha, \beta)\, d\theta\, d\pi\, d\alpha\, d\beta,$$

where $g(\theta, \pi, \alpha, \beta)$ is the prior distribution of the parameters. Assuming a non-informative Jeffreys prior $\mathrm{Dir}(0.5, \ldots, 0.5)$ on $\theta$, a Stirling approximation to the gamma function, and finally a BIC approximation to the conditional log-likelihood function, an approximate ICL can be derived. For a model $m_Q$ with $Q$ blocks, the approximate ICL criterion is:

$$\mathrm{ICL}(m_Q) = \max_{\theta,\pi,\alpha,\beta}\log\mathcal{L}(Y_{[n]}, X_{[n]}, \tilde{Z}_{[n]}; \theta, \pi, \alpha, \beta, m_Q) - \frac{3}{2}\,Q(Q+1)\log n(n-1) - \frac{Q-1}{2}\log n,$$

where $\tilde{Z}_{[n]}$ is the estimate of $Z_{[n]}$. The derivation follows exactly the same lines as the proof of Proposition 8 of Daudin et al. (2008).
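In code, the criterion amounts to penalizing the maximized complete-data log-likelihood; a sketch mirroring the displayed formula, where the max_loglik argument is assumed to be computed separately (e.g. from the variational fit).

```python
# Approximate ICL for a Q-block WSBM on n nodes, mirroring the formula above.
import numpy as np

def icl(max_loglik, n, Q):
    penalty_blocks = 1.5 * Q * (Q + 1) * np.log(n * (n - 1))  # pi, alpha, beta
    penalty_theta = 0.5 * (Q - 1) * np.log(n)                 # class proportions
    return max_loglik - penalty_blocks - penalty_theta
```

The model maximizing $\mathrm{ICL}(m_Q)$ over a range of $Q$ is then selected.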

Simulation

We validate the theoretical results developed in previous sections by conducting simulation studies. In particular, we investigate how fast parameter estimates of WSBM converge to their true values and the accuracy of posterior block allocations. Additionally, we investigate the performance of ICL in choosing the number of blocks.

Experiment 1 (two-class model)

For each fixed number of nodes $n$, 50 realizations of the WSBM are generated based on the fixed parameter setting given in (8) and (9). The variational inference algorithm derived in Sect. 5 is then applied to estimate the model parameters and class allocations.

We can see from Table 1 that the estimated model parameters converge to their true values as the number of nodes increases, while the posterior class assignment is accurate across all numbers of nodes. Table 2 shows that the ICL criterion tends to select the correct number of classes, especially when the number of nodes is large.

$$\theta = (0.7, 0.3), \tag{8}$$
$$\pi = \begin{pmatrix}0.8 & 0.2\\ 0.3 & 0.9\end{pmatrix}, \qquad \alpha = \begin{pmatrix}10.0 & 0.3\\ 3.0 & 0.5\end{pmatrix}, \qquad \beta = \begin{pmatrix}2.0 & 1.0\\ 0.2 & 1.0\end{pmatrix}. \tag{9}$$

Table 1.

Convergence analysis of posterior class allocations and parameter estimates under the two-class model

n  $\frac{1}{n}\sum_{i=1}^n I_{\{\hat{z}_i = z_i\}}$  $\|\hat\theta-\theta\|_2$  $\|\hat\pi-\pi\|_F$  $\|\hat\alpha-\alpha\|_F$  $\|\hat\beta-\beta\|_F$
25 1 0.031 0.057 0.855 0.434
50 1 0.029 0.035 0.654 0.322
100 1 0.027 0.012 0.256 0.070
200 1 0.025 0.006 0.116 0.046
500 1 0.021 0.002 0.053 0.024

Table 2.

Frequency of choosing Q blocks by ICL under different number of nodes n under the two-class model

n \ Q  1  2  3  4  5
25 0 44 3 3 0
50 0 48 2 0 0
100 0 50 0 0 0
200 0 50 0 0 0
500 0 50 0 0 0

Experiment 2 (three-class model)

50 network realizations are obtained under the three-class model with parameter values given in (10) and (11) for a range of values of $n$.

$$\theta = (0.5, 0.3, 0.2), \tag{10}$$
$$\pi = \begin{pmatrix}0.60 & 0.20 & 0.30\\ 0.30 & 0.90 & 0.10\\ 0.60 & 0.50 & 0.20\end{pmatrix}, \qquad \alpha = \begin{pmatrix}0.50 & 2.00 & 1.00\\ 0.30 & 0.02 & 6.00\\ 2.00 & 0.05 & 3.00\end{pmatrix}, \qquad \beta = \begin{pmatrix}5.00 & 0.40 & 5.00\\ 3.00 & 12.00 & 0.70\\ 6.00 & 0.20 & 0.60\end{pmatrix}. \tag{11}$$

We can see from Table 3 that the estimated parameters converge to their true values quickly as the number of nodes increases. The ICL criterion tends to overestimate the number of classes when the number of nodes is small, but consistently selects the correct model when the number of nodes is large (Table 4).

Table 3.

Convergence analysis of posterior class allocations and parameter estimates under the three-class model

n  $\frac{1}{n}\sum_{i=1}^n I_{\{\hat{z}_i = z_i\}}$  $\|\hat\theta-\theta\|_2$  $\|\hat\pi-\pi\|_F$  $\|\hat\alpha-\alpha\|_F$  $\|\hat\beta-\beta\|_F$
25 0.961 0.116 0.178 7.86 136.72
50 1 0.039 0.039 1.256 7.866
100 1 0.033 0.026 0.481 1.426
200 1 0.014 0.024 0.228 1.011
500 1 0.003 0.006 0.197 0.222

Table 4.

Frequency of choosing Q blocks by ICL under different number of nodes n under the three-block model

n \ Q  1  2  3  4  5
25 0 3 37 8 2
50 0 0 43 7 0
100 0 0 50 0 0
200 0 0 49 1 0
500 0 0 50 0 0

Computational complexity

The computational complexity of the variational algorithm derived in Sect. 5 scales as $O(n^2)$, the same as that of the variational algorithm developed by Daudin et al. (2008). Therefore, the algorithm may be prohibitively expensive for networks with more than 1000 nodes. The estimated computing time for the two-class model in Sect. 7.1 and for the three-class model in Sect. 7.2 for various values of $n$ is shown in Fig. 1. The estimated computing time at each $n$ is the average running time of the variational algorithm over 20 replications.

Fig. 1. Estimated computing time for the two-class and three-class models

Application: Washington bike data set

We apply the WSBM to analyse the Washington bike sharing scheme data set.1 Information on the start and end stations of trips, as well as the length (travel time) of trips, is available in the data set. We select a time window of one week starting from January 10th, 2016 and construct the adjacency matrix $X$ and weight matrix $Y$ as follows (a construction sketch in code follows the list):

  • $X_{ij} = 1$ if there is a trip starting from station $i$ and finishing at station $j$.

  • $Y_{ij}$ is the total length of trips (in minutes) from station $i$ to station $j$.
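A minimal construction sketch, assuming a pandas DataFrame trips with hypothetical columns start_station, end_station and duration_min for the selected week (the actual column names in the raw data may differ).

```python
# Build the (X, Y) matrices from one week of trip records.
import numpy as np

def build_network(trips):
    stations = sorted(set(trips["start_station"]) | set(trips["end_station"]))
    idx = {s: k for k, s in enumerate(stations)}
    n = len(stations)
    X = np.zeros((n, n), dtype=int)
    Y = np.zeros((n, n))
    totals = trips.groupby(["start_station", "end_station"])["duration_min"].sum()
    for (s, e), total in totals.items():
        i, j = idx[s], idx[e]
        if i != j:
            X[i, j] = 1       # at least one trip from station i to station j
            Y[i, j] = total   # total travel time in minutes
    return stations, X, Y
```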

The resulting network consists of 370 nodes with an average out-degree of 36.14. The average total length of trips between any pair of stations is 42.15 minutes. We apply the ICL criterion to select the number of classes for the WSBM. For each number of classes $Q$, the variational inference algorithm is fitted to the network 20 times with random initializations and the highest value of the ICL is recorded; the six-class model is chosen (Table 5). Each bike station is plotted on the map in Fig. 2, where its colour represents the estimated class assignment. We observe that bike stations in class 6 (colored in brown) tend to be concentrated in the central area of Washington, whereas stations in class 3 (colored in red) tend to be located further from the center. Figure 2 shows some spatial effect in the class assignment of bike stations, whereby stations that are close in distance tend to be in the same cluster, with the exception of class 3 (colored in red). One potential extension of the model is to take the spatial locations of the bike stations into account as covariates.

Table 5.

Model selection for the Washington Bike dataset using ICL criterion

Q ICL
1 − 107157.24
2 − 93505.61
3 − 91688.33
4 − 90813.17
5 − 90300.90
6 − 89061.38
7 − 89326.35

Fig. 2. Bike stations. Class 1: blue. Class 2: green. Class 3: red. Class 4: cyan. Class 5: black. Class 6: brown (color figure online)

The estimated class proportions $\hat\theta$ shown in (12) indicate that class 3 has the largest number of stations whereas class 4 has the smallest. The estimated $\hat\pi$ in (13) shows that within-class connectivity is generally higher than between-class connectivity. We further observe that the connection probabilities between bike stations in classes 1, 4, and 6 are substantially higher. Interestingly, we observe a near symmetry in the matrix $\hat\pi$, indicating that the probability of a trip from a station in class $k$ to a station in class $l$ is similar to the probability of a trip from class $l$ to class $k$.

The estimated densities of travel time between each pair of classes are shown in Fig. 3. We observe that the majority of the estimated densities have mode and mean close to 0, particularly the estimated densities on the diagonal of Fig. 3. This implies that the total travel times between stations in the same class are quite short. In comparison, the total travel times between stations in different classes tend to be longer. This is reasonable, as the distances between bike stations in different classes tend to be larger, which in turn requires longer travel times.

Fig. 3. Estimated densities of total travel time between stations

$$\hat\theta = (0.1985, 0.0975, 0.3256, 0.0270, 0.1377, 0.2136), \tag{12}$$
$$\hat\pi = \begin{pmatrix}
0.2847 & 0.0539 & 0.0036 & 0.4268 & 0.0125 & 0.2811\\
0.0594 & 0.0630 & 0.0172 & 0.1254 & 0.0156 & 0.0791\\
0.0049 & 0.0156 & 0.0185 & 0.0079 & 0.0092 & 0.0029\\
0.4113 & 0.1140 & 0.0136 & 0.8438 & 0.0532 & 0.4494\\
0.0187 & 0.0318 & 0.0120 & 0.0910 & 0.2159 & 0.0241\\
0.2715 & 0.0514 & 0.0017 & 0.4965 & 0.0159 & 0.7268
\end{pmatrix}. \tag{13}$$

Discussion

This paper proposes a weighted stochastic block model (WSBM) for networks. The proposed model is an extension of the stochastic block model. A variational inference strategy is developed for parameter estimation. Asymptotic properties of the maximum likelihood and variational estimators are derived, and the problem of choosing the number of classes is addressed using an ICL criterion. Simulation studies are conducted to evaluate the performance of the variational estimators and the use of the ICL to determine the number of classes. The proposed model and inference methods are also applied to an illustrative data set.

It is straightforward to extend the WSBM to allow node covariates. Let $w_i \in \mathbb{R}^d$ be the covariates for each node $i = 1, \ldots, n$, and let $w_{ij}$ be the covariates for each pair of nodes $i, j = 1, \ldots, n$, $i \neq j$. The edge probability $p_{ij}$ between a pair of nodes $i \neq j$, with node $i$ in block $q$ and node $j$ in block $l$, can be modelled as

$$\log\frac{p_{ij}}{1 - p_{ij}} = \xi_{ql,0} + \xi_{ql,1}^T w_{ij} + \xi_{ql,2}^T w_i + \xi_{ql,3}^T w_j,$$

where $\xi_{ql,0} \in \mathbb{R}$ and $\xi_{ql,1}, \xi_{ql,2}, \xi_{ql,3} \in \mathbb{R}^d$. Conditional on the block assignments and the existence of an edge between a pair of nodes $i \neq j$, $Y_{ij}$ can be modelled as a gamma random variable with mean $\mu_{ij}$ and variance $\sigma_{ij}$, where

$$\mu_{ij} = E(Y_{ij} \mid Z_i = q, Z_j = l, X_{ij} = 1) = \exp\left(\phi_{ql,0} + \phi_{ql,1}^T w_{ij} + \phi_{ql,2}^T w_i + \phi_{ql,3}^T w_j\right), \qquad \sigma_{ij} = \mathrm{Var}(Y_{ij} \mid Z_i = q, Z_j = l, X_{ij} = 1) = \nu_{ql},$$

where $\phi_{ql,0} \in \mathbb{R}$ and $\phi_{ql,1}, \phi_{ql,2}, \phi_{ql,3} \in \mathbb{R}^d$.
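A small sketch of this covariate extension (hypothetical, since the paper does not fit this model), assuming numpy; here xi_ql and phi_ql are tuples $(\xi_{ql,0}, \xi_{ql,1}, \xi_{ql,2}, \xi_{ql,3})$ and $(\phi_{ql,0}, \phi_{ql,1}, \phi_{ql,2}, \phi_{ql,3})$.

```python
# Covariate-adjusted edge probability and conditional weight mean.
import numpy as np

def edge_prob(w_pair, w_i, w_j, xi_ql):
    """Logistic edge probability for nodes in blocks (q, l)."""
    eta = xi_ql[0] + xi_ql[1] @ w_pair + xi_ql[2] @ w_i + xi_ql[3] @ w_j
    return 1.0 / (1.0 + np.exp(-eta))

def weight_mean(w_pair, w_i, w_j, phi_ql):
    """Log-linear conditional mean of Y_ij given an edge and blocks (q, l)."""
    return np.exp(phi_ql[0] + phi_ql[1] @ w_pair + phi_ql[2] @ w_i + phi_ql[3] @ w_j)
```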

Many future extensions are possible. First, it is desirable to investigate further theoretical properties of the maximum likelihood and variational estimators of the WSBM parameters, such as asymptotic normality of the estimators. Furthermore, some of the assumptions imposed in this work to ensure consistency of the estimators may be relaxed. Moreover, the number of blocks is assumed to be fixed in the asymptotic analysis of the estimators. It would be interesting to allow the number of blocks to grow as the number of nodes grows.

Auxiliary results

Definition 1

A random variable $X$ with mean $\mu = E(X)$ is sub-exponential if there are non-negative parameters $(\nu, b)$ such that

$$E\left(e^{t(X-\mu)}\right) \leq e^{\nu^2 t^2/2} \quad \text{for all } |t| < \frac{1}{b}.$$

The following lemma is a straightforward consequence of the definition.

Lemma 1

If the independent random variables $\{X_i\}_{i=1}^n$ are sub-exponential with parameters $(\nu_i, b_i)$, $i = 1, \ldots, n$, then $\sum_{i=1}^n X_i$ is sub-exponential with parameters $\left(\sqrt{\sum_{i=1}^n \nu_i^2},\ \max_{i=1,\ldots,n} b_i\right)$.

The following results show that for a gamma random variable $Y$ and a Bernoulli random variable $X$, both $XY$ and $X\log Y$ are sub-exponential random variables. They are useful for the proofs of the main theorems.

Proposition 3

If $Y \sim \mathrm{Ga}(a, b)$ and $X \sim \mathrm{Ber}(\pi)$ are independent, then $YX$ is a sub-exponential random variable.

Proof

The expectation of $YX$ is given by

$$\mu := E(YX) = \frac{\pi a}{b},$$

and thus

$$E\left(e^{t\mu}\right) = \exp\left(\frac{t\pi a}{b}\right).$$

The moment generating function of $YX$ is given by

$$E\left(e^{tYX}\right) = (1 - \pi) + \pi\left(1 - \frac{t}{b}\right)^{-a},$$

which is defined for $t < b$. Taylor series expansion of $(1 - t/b)^{-a}$ around $t = 0$ gives

$$\left(1 - \frac{t}{b}\right)^{-a} = 1 + \frac{a}{b}t + \frac{a(a+1)}{b^2}\frac{t^2}{2} + O(t^3).$$

Similarly, Taylor series expansion of $\exp(-ta\pi/b)$ around 0 gives

$$\exp\left(-\frac{ta\pi}{b}\right) = 1 - \frac{a\pi}{b}t + \frac{a^2\pi^2}{b^2}\frac{t^2}{2} + O(t^3).$$

Thus, we have

$$E\left(e^{tYX}\right) = 1 - \pi + \pi + \frac{a\pi}{b}t + \frac{a(a+1)\pi}{b^2}\frac{t^2}{2} + O(t^3) = 1 + \frac{a\pi}{b}t + \frac{a(a+1)\pi}{b^2}\frac{t^2}{2} + O(t^3).$$

This leads to

$$E\left(e^{t(YX-\mu)}\right) = \left(1 + \frac{a\pi}{b}t + \frac{a(a+1)\pi}{b^2}\frac{t^2}{2} + O(t^3)\right)\left(1 - \frac{a\pi}{b}t + \frac{a^2\pi^2}{b^2}\frac{t^2}{2} + O(t^3)\right) = 1 + \frac{\left(a(a+1) - a^2\pi\right)\pi}{b^2}\frac{t^2}{2} + O(t^3).$$

Hence, we can choose suitable parameters $(\nu, b')$ such that

$$E\left(e^{t(YX-\mu)}\right) \leq \exp\left(\frac{\nu^2 t^2}{2}\right) \quad \text{for all } |t| < \frac{1}{b'}.$$

Proposition 4

If $Y \sim \mathrm{Ga}(a, b)$ and $X \sim \mathrm{Ber}(\pi)$ are independent, then $X\log Y$ is a sub-exponential random variable.

Proof

The proof is analogous to the proof of Proposition 3. It is straightforward to show that

$$E\left(e^{tX\log Y}\right) = (1 - \pi) + \pi\,\frac{\Gamma(a+t)}{\Gamma(a)\,b^t}$$

and

$$\mu := E(X\log Y) = \pi\left(\Psi(a) - \log b\right),$$

where $\Psi(\cdot)$ is the digamma function. The rest of the proof follows the same lines as the proof of Proposition 3, by considering the Taylor series expansions of $e^{-\mu t}$ and $E(e^{tX\log Y})$ around $t = 0$.

The following lemma provides an upper bound for the tail probability of a sub-exponential random variable.

Lemma 2

(Sub-exponential tail bound) Suppose that $X$ with mean $\mu$ is sub-exponential with parameters $(\nu, b)$. Then

$$P(X \geq \mu + t) \leq \exp\left[-\min\left(\frac{t^2}{2\nu^2}, \frac{t}{2b}\right)\right].$$

The following inequality for suprema of random processes is needed for the proof of Theorem 2.

Proposition 5

(Baraud 2010, Theorem 2.1) Let $(S(g))_{g\in G}$ be a family of real-valued and centered random variables. Fix some $g_0$ in $G$.

Suppose the following two conditions hold:

1.
There exist two norms $\|\cdot\|_2$ and $\|\cdot\|_\infty$ on $G$ and a nonnegative constant $c$ such that for all $g_1, g_2 \in G$ ($g_1 \neq g_2$),
$$E\left[e^{t(S(g_1) - S(g_2))}\right] \leq \exp\left(\frac{t^2\|g_1 - g_2\|_2^2}{2\left(1 - tc\|g_1 - g_2\|_\infty\right)}\right)$$
for all $t \in \left[0, \frac{1}{c\|g_1 - g_2\|_\infty}\right)$.
2.
Let $\mathcal{S}$ be a linear space with finite dimension $D$ endowed with the two norms $\|\cdot\|_\infty$, $\|\cdot\|_2$ defined above; assume that for constants $u_1 > 0$ and $u_2 \geq 0$,
$$G \subset \left\{g \in \mathcal{S} : \|g - g_0\|_2 \leq u_1,\ c\|g - g_0\|_\infty \leq u_2\right\}.$$

Then we have, for all $x > 0$,

$$P\left[\sup_{g\in G}\left|S(g) - S(g_0)\right| \geq \kappa\left(u_1\sqrt{D + x} + u_2(D + x)\right)\right] \leq 2e^{-x},$$

where $\kappa = 18$.

In this section, we prove Theorems 1, 2 and 3 for the special case $\alpha_{q,l} = \alpha_{11}$ for all $(q,l) \in \{1,\ldots,Q\}^2$. The more general case can be proved similarly by using the fact that $X_{ij}\log Y_{ij}$ is a sub-exponential random variable.

Proof of Theorem 1

Proof

Our proof is adapted from Celisse et al. (2012). Using the fact that for every $z'_{[n]} \in [z_{[n]}]$, $\mathbb{P}(z'_{[n]} \mid X_{[n]}, Y_{[n]}) = \mathbb{P}(z_{[n]} \mid X_{[n]}, Y_{[n]})$, we have that

$$\mathbb{P}\left([Z_{[n]}] = [z_{[n]}] \mid X_{[n]}, Y_{[n]}\right) = \sum_{z'_{[n]}\in[z_{[n]}]}\mathbb{P}\left(z'_{[n]} \mid X_{[n]}, Y_{[n]}\right) = \left|[z_{[n]}]\right|\,\mathbb{P}\left(z_{[n]} \mid X_{[n]}, Y_{[n]}\right),$$

where $|[z_{[n]}]|$ is the cardinality of the equivalence class $[z_{[n]}]$.

By applying a similar approach as in Celisse et al. (2012, Appendix B.2), one can derive an upper bound:

$$\mathbb{P}^\star\left[\sum_{[z_{[n]}]\neq[z^\star_{[n]}]}\frac{\mathbb{P}([z_{[n]}] \mid X_{[n]}, Y_{[n]})}{\mathbb{P}([z^\star_{[n]}] \mid X_{[n]}, Y_{[n]})} > t\right] \leq \sum_{r=1}^n\ \sum_{\substack{z_{[n]}\notin[z^\star_{[n]}],\\ \|z_{[n]} - z^\star_{[n]}\|_0 = r}}\mathbb{P}^\star\left[\frac{\mathbb{P}(z_{[n]} \mid X_{[n]}, Y_{[n]})}{\mathbb{P}(z^\star_{[n]} \mid X_{[n]}, Y_{[n]})} > \frac{t}{n^{r+1}(Q-1)^r}\right],$$

where $\|z_{[n]} - z^\star_{[n]}\|_0$ is the number of coordinates at which $z_{[n]}$ and $z^\star_{[n]}$ differ.

To simplify notation, we write $z := z_{[n]}$, $z^\star := z^\star_{[n]}$, $X := X_{[n]}$, and $Y := Y_{[n]}$. We have that

$$\begin{aligned}
\mathbb{P}^\star\left[\frac{\mathbb{P}(z \mid X, Y)}{\mathbb{P}(z^\star \mid X, Y)} > \frac{t}{n^{r+1}(Q-1)^r}\right] &= \mathbb{P}^\star\left[\frac{\mathbb{P}(Y \mid X, z)\,\mathbb{P}(X \mid z)\,\mathbb{P}(z)}{\mathbb{P}(Y \mid X, z^\star)\,\mathbb{P}(X \mid z^\star)\,\mathbb{P}(z^\star)} > \frac{t}{n^{r+1}(Q-1)^r}\right] \\
&= \mathbb{P}^\star\left[\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)} + \log\frac{\mathbb{P}(X \mid z)\mathbb{P}(z)}{\mathbb{P}(X \mid z^\star)\mathbb{P}(z^\star)} > \log\frac{t}{n^{r+1}(Q-1)^r}\right] \\
&\leq \mathbb{P}^\star\left[\left|\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)} - E_{Z=z^\star}\left(\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)}\right)\right| > \frac{1}{2}\,c_n(r,t)\right] \\
&\quad + \mathbb{P}^\star\left[\left|\log\frac{\mathbb{P}(X \mid z)\mathbb{P}(z)}{\mathbb{P}(X \mid z^\star)\mathbb{P}(z^\star)} - E_{Z=z^\star}\left(\log\frac{\mathbb{P}(X \mid z)\mathbb{P}(z)}{\mathbb{P}(X \mid z^\star)\mathbb{P}(z^\star)}\right)\right| > \frac{1}{2}\,c_n(r,t)\right] \\
&=: I_1 + I_2,
\end{aligned}$$

where we write

$$c_n(r,t) := \log\frac{t}{n^{r+1}(Q-1)^r} - E_{Z=z^\star}\left(\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)}\right) - E_{Z=z^\star}\left(\log\frac{\mathbb{P}(X \mid z)\mathbb{P}(z)}{\mathbb{P}(X \mid z^\star)\mathbb{P}(z^\star)}\right).$$

Consider the first term on the RHS of the last inequality:

$$\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)} - E_{Z=z^\star}\left(\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)}\right) = \sum_{i\neq j}\left(\alpha_{11}\log\frac{\beta_{z_i,z_j}}{\beta_{z^\star_i,z^\star_j}}\right)\left(X_{ij} - \pi_{z^\star_i,z^\star_j}\right) + \left(\beta_{z^\star_i,z^\star_j} - \beta_{z_i,z_j}\right)\left(Y_{ij}X_{ij} - \frac{\alpha_{11}}{\beta_{z^\star_i,z^\star_j}}\pi_{z^\star_i,z^\star_j}\right).$$

The summand in the summation above vanishes when $\beta_{z_i,z_j} = \beta_{z^\star_i,z^\star_j}$. For two vectors $z$ and $z^\star$, we define

$$D(z, z^\star) = \left\{(i,j) \mid i\neq j,\ \beta_{z_i,z_j} \neq \beta_{z^\star_i,z^\star_j}\right\}$$

and let $N_r(z) = |D(z, z^\star)|$ denote the number of terms in the summation. Set

$$s = \frac{c_n(r,t)}{2N_r(z)};$$

one can show by Lemma B.3 of Celisse et al. (2012) that there exists some positive constant $c$ such that $s \geq c > 0$. We have that

$$\begin{aligned}
&\mathbb{P}\left[\frac{1}{N_r(z)}\left|\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)} - E_{Z=z^\star}\left(\log\frac{\mathbb{P}(Y \mid X, z)}{\mathbb{P}(Y \mid X, z^\star)}\right)\right| > 2s\right] \\
&\quad\leq \mathbb{P}\left[\frac{1}{N_r(z)}\left|\sum_{i\neq j}\alpha_{11}\log\frac{\beta_{z_i,z_j}}{\beta_{z^\star_i,z^\star_j}}\left(X_{ij} - \pi_{z^\star_i,z^\star_j}\right)\right| > s\right] + \mathbb{P}\left[\frac{1}{N_r(z)}\left|\sum_{i\neq j}\left(\beta_{z^\star_i,z^\star_j} - \beta_{z_i,z_j}\right)\left(Y_{ij}X_{ij} - \frac{\alpha_{11}}{\beta_{z^\star_i,z^\star_j}}\pi_{z^\star_i,z^\star_j}\right)\right| > s\right].
\end{aligned}$$

Assumption (A6) implies that the random variable $\alpha_{11}\left(\log\beta_{z_i,z_j}/\beta_{z^\star_i,z^\star_j}\right)X_{ij}$ is bounded for all $i \neq j$. An application of Hoeffding's inequality yields that

$$\mathbb{P}\left[\frac{1}{N_r(z)}\left|\sum_{i\neq j}\alpha_{11}\log\frac{\beta_{z_i,z_j}}{\beta_{z^\star_i,z^\star_j}}\left(X_{ij} - \pi_{z^\star_i,z^\star_j}\right)\right| > s\right] \leq \exp\left(-\frac{N_r(z)s^2}{L_1}\right) \tag{14}$$

for some constant $L_1 > 0$.

By Proposition 3, the random variable $Y_{ij}X_{ij}$ is sub-exponential, and the tail bound for sub-exponential random variables (Lemma 2) implies that

$$\mathbb{P}\left[\frac{1}{N_r(z)}\left|\sum_{i\neq j}\left(\beta_{z^\star_i,z^\star_j} - \beta_{z_i,z_j}\right)\left(Y_{ij}X_{ij} - \frac{\alpha_{11}}{\beta_{z^\star_i,z^\star_j}}\pi_{z^\star_i,z^\star_j}\right)\right| > s\right] \leq \max\left\{\exp\left(-\frac{N_r(z)s^2}{L_2}\right),\ \exp\left(-\frac{N_r(z)s}{L_3}\right)\right\} \tag{15}$$

for some constants $L_2, L_3 > 0$. Proposition B.4 of Celisse et al. (2012) shows that $N_r(z)$ is bounded below by

$$N_r(z) \geq \frac{\gamma^2}{2}\,n\,\|z_{[n]} - z^\star_{[n]}\|_0. \tag{16}$$

Combining inequalities (14), (15), (16), it is straightforward to show that

$$I_1 = O\left(\exp(-A_1 n)\right)$$

for some constant $A_1 > 0$.

By Theorem 3.1 of Celisse et al. (2012), we have

$$I_2 = O\left(\exp(-A_2 n)\right)$$

for some constant $A_2 > 0$. Therefore, with $A := \min\{A_1, A_2\}$,

$$\mathbb{P}^\star\left[\sum_{[z_{[n]}]\neq[z^\star_{[n]}]}\frac{\mathbb{P}([z_{[n]}] \mid X_{[n]}, Y_{[n]})}{\mathbb{P}([z^\star_{[n]}] \mid X_{[n]}, Y_{[n]})} > t\right] \leq \sum_{r=1}^n\binom{n}{r}(Q-1)^r\,O\left(\exp(-Anr)\right) \longrightarrow 0$$

as $n \to \infty$. Since the upper bound does not depend on $z^\star$, $\mathbb{P}^\star$ can be replaced by $\mathbb{P}$.

Proof of Theorem 2

Proof

As the first step of the proof, we define the normalized complete-data log-likelihood function

$$\phi_n(z_{[n]}, \pi, \alpha, \beta) = \frac{\mathcal{L}_1(X_{[n]}, Y_{[n]}; z_{[n]}, \pi, \alpha, \beta)}{n(n-1)} = \frac{1}{n(n-1)}\left[\sum_{i\neq j}\left\{X_{ij}\log\pi_{z_i,z_j} + (1 - X_{ij})\log(1 - \pi_{z_i,z_j})\right\} + X_{ij}\left\{\alpha_{z_i,z_j}\log\beta_{z_i,z_j} + (\alpha_{z_i,z_j} - 1)\log Y_{ij} - \beta_{z_i,z_j}Y_{ij} - \log\Gamma(\alpha_{z_i,z_j})\right\}\right],$$

and its expectation

$$\Phi_n(z_{[n]}, \pi, \alpha, \beta) = E\left[\phi_n(z_{[n]}, \pi, \alpha, \beta) \mid Z_{[n]} = z_{[n]}\right] = \frac{1}{n(n-1)}\left[\sum_{i\neq j}\left\{\pi^\star_{z_i,z_j}\log\pi_{z_i,z_j} + \left(1 - \pi^\star_{z_i,z_j}\right)\log\left(1 - \pi_{z_i,z_j}\right)\right\} + \pi^\star_{z_i,z_j}\left\{\alpha_{z_i,z_j}\log\beta_{z_i,z_j} + \left(\alpha_{z_i,z_j} - 1\right)\left(\psi(\alpha^\star_{z_i,z_j}) - \log\beta^\star_{z_i,z_j}\right) - \beta_{z_i,z_j}\frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}} - \log\Gamma(\alpha_{z_i,z_j})\right\}\right].$$

The following proposition shows that $\phi_n(z_{[n]}, \pi, \alpha, \beta)$ converges uniformly to $\Phi_n(z_{[n]}, \pi, \alpha, \beta)$. This result is an extension of Proposition 3.5 of Celisse et al. (2012).

Proposition 6

Under assumptions (A1), (A2), (A5), (A6), we have

$$\sup_{\mathcal{P}}\left|\phi_n(z_{[n]}, \pi, \alpha, \beta) - \Phi_n(z_{[n]}, \pi, \alpha, \beta)\right| \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0,$$

where $\mathcal{P} := \{(z_{[n]}, \pi, \alpha, \beta) : \text{(A1), (A2), (A5), (A6) hold}\}$.

Proof

We have that

$$\begin{aligned}
\left|\phi_n(z_{[n]},\pi,\alpha,\beta) - \Phi_n(z_{[n]},\pi,\alpha,\beta)\right| &\leq \rho_n\left|\sum_{i\neq j}\left(X_{ij} - \pi^\star_{z_i,z_j}\right)\left(\log\frac{\pi_{z_i,z_j}}{1 - \pi_{z_i,z_j}} + \alpha_{z_i,z_j}\log\beta_{z_i,z_j} - \log\Gamma(\alpha_{z_i,z_j})\right)\right| \\
&\quad + \rho_n\left|\sum_{i\neq j}\left(X_{ij}Y_{ij} - \pi^\star_{z_i,z_j}\frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\right)\beta_{z_i,z_j}\right| + \rho_n\left|\sum_{i\neq j}\left(X_{ij}\log Y_{ij} - \pi^\star_{z_i,z_j}\left(\psi(\alpha^\star_{z_i,z_j}) - \log\beta^\star_{z_i,z_j}\right)\right)\left(\alpha_{z_i,z_j} - 1\right)\right| \tag{17}
\end{aligned}$$

where $\rho_n = 1/(n(n-1))$. Notice that under Assumptions (A2) and (A6),

$$\Phi_n(z_{[n]}, \pi, \alpha, \beta) < +\infty.$$

Therefore, by Proposition 3.5 of Celisse et al. (2012),

$$\sup_{\mathcal{P}}\ \rho_n\left|\sum_{i\neq j}\left(X_{ij} - \pi^\star_{z_i,z_j}\right)\left(\log\frac{\pi_{z_i,z_j}}{1 - \pi_{z_i,z_j}} + \alpha_{z_i,z_j}\log\beta_{z_i,z_j} - \log\Gamma(\alpha_{z_i,z_j})\right)\right| \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0.$$

To bound the second term on the RHS of inequality (17), we apply Proposition 5 (Theorem 2.1 of Baraud (2010)). We first define

$$S_n(\beta) = \sum_{i\neq j}\left(X_{ij}Y_{ij} - \pi^\star_{z_i,z_j}\frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\right)\beta_{z_i,z_j}.$$

Now, for two parameters $\beta^{(1)}$ and $\beta^{(2)}$,

$$S_n(\beta^{(1)}) - S_n(\beta^{(2)}) = \sum_{i\neq j}\left(X_{ij}Y_{ij} - \pi^\star_{z_i,z_j}\frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\right)\left(\beta^{(1)}_{z_i,z_j} - \beta^{(2)}_{z_i,z_j}\right) = \sum_{q,l}\ \sum_{\substack{i\neq j,\\ z_i=q,\ z_j=l}}\left(X_{ij}Y_{ij} - \pi^\star_{z_i,z_j}\frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\right)\left(\beta^{(1)}_{ql} - \beta^{(2)}_{ql}\right).$$

Since $X_{ij}Y_{ij}$ is sub-exponential with parameters $(\nu, b)$, $\sum_{i\neq j,\ z_i=q,\ z_j=l}X_{ij}Y_{ij}$ is sub-exponential with parameters $(\sqrt{n_{ql}}\,\nu, b)$ by Lemma 1, where $n_{ql} = |\{(i,j) : z_i = q, z_j = l\}|$. Therefore, defining the norms

$$\left\|\beta^{(1)} - \beta^{(2)}\right\|_2^2 = n^2\nu^2\sum_{q,l}\left(\beta^{(1)}_{ql} - \beta^{(2)}_{ql}\right)^2, \qquad \left\|\beta^{(1)} - \beta^{(2)}\right\|_\infty^2 = \sum_{q,l}\left(\beta^{(1)}_{ql} - \beta^{(2)}_{ql}\right)^2,$$

we have

$$E\left[\exp\left(t\left(S_n(\beta^{(1)}) - S_n(\beta^{(2)})\right)\right)\right] = \prod_{q,l}E\left[\exp\left(t\left(\beta^{(1)}_{ql} - \beta^{(2)}_{ql}\right)\sum_{\substack{i\neq j,\\ z_i=q,\ z_j=l}}\left(X_{ij}Y_{ij} - \pi^\star_{z_i,z_j}\frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\right)\right)\right] \leq \prod_{q,l}\exp\left\{\frac{n_{ql}\nu^2 t^2\left(\beta^{(1)}_{ql} - \beta^{(2)}_{ql}\right)^2}{2}\right\} \leq \exp\left\{\frac{n^2\nu^2 t^2\sum_{q,l}\left(\beta^{(1)}_{ql} - \beta^{(2)}_{ql}\right)^2}{2}\right\} = \exp\left\{\frac{t^2\left\|\beta^{(1)} - \beta^{(2)}\right\|_2^2}{2}\right\} \leq \exp\left\{\frac{t^2\left\|\beta^{(1)} - \beta^{(2)}\right\|_2^2}{2\left(1 - tc\left\|\beta^{(1)} - \beta^{(2)}\right\|_\infty\right)}\right\}$$

for all

$$t \in \left[0, \frac{1}{c\left\|\beta^{(1)} - \beta^{(2)}\right\|_\infty}\right),$$

where $c$ is some nonnegative constant. Therefore, the first condition of Proposition 5 is satisfied.

Fix some $\beta^{(0)}$. (A6) implies that there exist $u_1 = O(n)$ and $u_2 = O(1)$ such that $\|\beta - \beta^{(0)}\|_2 \leq u_1$ and $c\|\beta - \beta^{(0)}\|_\infty \leq u_2$. Therefore the second condition of Proposition 5 is also satisfied with $D = Q^2$. Now, we have

$$\mathbb{P}\left(\rho_n\sup_\beta|S_n(\beta)| > \eta\right) \leq \mathbb{P}\left(\rho_n\sup_\beta\left|S_n(\beta) - S_n(\beta^{(0)})\right| > \frac{\eta}{2}\right) + \mathbb{P}\left(\rho_n\left|S_n(\beta^{(0)})\right| > \frac{\eta}{2}\right). \tag{18}$$

Since $\sum_{i\neq j}X_{ij}Y_{ij}\beta^{(0)}_{z_i,z_j}$ is a sub-exponential random variable with parameters $(n\nu, b)$, the second term on the RHS of inequality (18) can be bounded by

$$\mathbb{P}\left(\rho_n\left|S_n(\beta^{(0)})\right| > \frac{\eta}{2}\right) = O\left(\exp(-n^2)\right). \tag{19}$$

To bound the first term on the RHS of inequality (18), we introduce the set $\mathcal{P}_{z_{[n]}} = \{(\pi, \alpha, \beta) : (z_{[n]}, \pi, \alpha, \beta) \in \mathcal{P}\}$ for every $z_{[n]}$, and define the event

$$\Omega_n(z_{[n]}) = \left\{\sup_{\mathcal{P}_{z_{[n]}}}\rho_n\left|S_n(\beta) - S_n(\beta^{(0)})\right| \leq \rho_n\kappa\left(u_1\sqrt{Q^2 + x_n} + u_2\left(Q^2 + x_n\right)\right)\right\}.$$

We have

$$\mathbb{P}\left(\Omega_n(z_{[n]})^c\right) \leq 2e^{-x_n} \tag{20}$$

by Proposition 5.

by Proposition 5. Combining 18, 19, and 20,

P[ρn|ij(XijYij-πzi,zjαzi,zjβzi,zj)βzi,zj|>η]z[n]P[{supP(z[n])ρn|Sn(β)-Sn(β(0))|>η2}]+P[ρn|Sn(β0)|>η2]z[n]P[{supP(z[n])ρn|Sn(β)-Sn(β(0))|>η2}Ωn(z[n])]+z[n]2e-xn+z[n]O(e-n2)z[n]P[ρnκu2(D+xn)+ρnb(D+xn)>η2]+z[n]2e-xn+z[n]O(e-n2)

Since z[n] belongs to a set of cardinality at most Qn, by choosing xn=nlog(n), the three sums converge to 0.

By using the fact that $X_{ij}\log Y_{ij}$ is a sub-exponential random variable, we can similarly show that

$$\mathbb{P}\left[\sup_{\mathcal{P}}\rho_n\left|\sum_{i\neq j}\left(X_{ij}\log Y_{ij} - \pi^\star_{z_i,z_j}\left(\psi(\alpha^\star_{z_i,z_j}) - \log\beta^\star_{z_i,z_j}\right)\right)\left(\alpha_{z_i,z_j} - 1\right)\right| > \eta\right] \xrightarrow[n\to\infty]{} 0.$$

Since the convergence is uniform with respect to $z_{[n]}$, $\mathbb{P}^\star$ can be replaced by $\mathbb{P}$, and the proof is completed.

Proposition 6 allows us to establish the following result concerning the convergence of the normalized log-likelihood function. Proposition 7 extends Theorem 3.6 of Celisse et al. (2012) and allows us to establish the consistency of the MLE of $(\pi, \alpha, \beta)$.

Proposition 7

We assume that assumptions (A1), (A2), (A3), (A5), (A6) hold. For every $(\theta, \pi, \alpha, \beta)$, set

$$M_n(\theta, \pi, \alpha, \beta) = \left(n(n-1)\right)^{-1}\mathcal{L}_2(Y_{[n]}, X_{[n]}; \theta, \pi, \alpha, \beta),$$

and

$$M(\pi, \alpha, \beta, A) = \sum_{q,l}\theta^\star_q\theta^\star_l\sum_{q',l'}a_{q,q'}a_{l,l'}\left[\pi^\star_{ql}\log\pi_{q',l'} + \left(1 - \pi^\star_{ql}\right)\log\left(1 - \pi_{q',l'}\right) + \pi^\star_{ql}\,E_{\alpha^\star_{ql},\beta^\star_{ql}}\left(\log f(\cdot\,; \alpha_{q',l'}, \beta_{q',l'})\right)\right],$$

where $A$ ranges over

$$\mathcal{A} = \left\{A = (a_{q,l})_{1\leq q,l\leq Q} \,\middle|\, a_{q,l} \geq 0,\ \sum_{l=1}^Q a_{ql} = 1\right\}.$$

Then for any $\eta > 0$,

$$\sup_{d\left((\pi,\alpha,\beta),(\pi^\star,\alpha^\star,\beta^\star)\right)\geq\eta}M(\pi, \alpha, \beta) < M(\pi^\star, \alpha^\star, \beta^\star), \qquad \sup_{\theta,\pi,\alpha,\beta}\left|M_n(\theta, \pi, \alpha, \beta) - M(\pi, \alpha, \beta)\right| \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0,$$

where $M(\pi, \alpha, \beta)$ is defined in the proof below.
Proof

We define the following:

$$\hat{z}_{[n]}(\pi, \alpha, \beta) = \arg\max_{z}\phi_n(z_{[n]}, \pi, \alpha, \beta), \qquad \tilde{z}_{[n]}(\pi, \alpha, \beta) = \arg\max_{z}\Phi_n(z_{[n]}, \pi, \alpha, \beta), \qquad \bar{A}_{\pi,\alpha,\beta} = \arg\max_{A\in\mathcal{A}}M(\pi, \alpha, \beta, A), \qquad M(\pi, \alpha, \beta) = M(\pi, \alpha, \beta, \bar{A}_{\pi,\alpha,\beta}).$$

By a similar reasoning as in the proof of Theorem 3.6 of Celisse et al. (2012), we can show that $\bar{A}_{\pi^\star,\alpha^\star,\beta^\star} = I_Q$ and is unique. To show that for all $\eta > 0$, $\sup_{d((\pi,\alpha,\beta),(\pi^\star,\alpha^\star,\beta^\star))\geq\eta}M(\pi,\alpha,\beta) < M(\pi^\star,\alpha^\star,\beta^\star)$, we let $(\bar{a}_{ql})_{q,l}$ denote the coefficients of $\bar{A}_{\pi,\alpha,\beta}$. We have that

$$M(\pi, \alpha, \beta) - M(\pi^\star, \alpha^\star, \beta^\star) = -\sum_{q,l}\theta^\star_q\theta^\star_l\sum_{q',l'}\bar{a}_{q,q'}\bar{a}_{l,l'}\left\{\mathrm{KL}_B\left(\pi^\star_{ql}, \pi_{q',l'}\right) + \pi^\star_{ql}\,\mathrm{KL}_G\left((\alpha^\star_{ql}, \beta^\star_{ql}), (\alpha_{q',l'}, \beta_{q',l'})\right)\right\},$$

where $\mathrm{KL}_B(p, q)$ denotes the Kullback–Leibler divergence between two Bernoulli distributions $\mathrm{Ber}(p)$ and $\mathrm{Ber}(q)$, and $\mathrm{KL}_G((a_1, b_1), (a_2, b_2))$ denotes the Kullback–Leibler divergence between two gamma distributions $\mathrm{Ga}(a_1, b_1)$ and $\mathrm{Ga}(a_2, b_2)$.

Since the set $\left\{(\pi, \alpha, \beta) \mid d\left((\pi,\alpha,\beta),(\pi^\star,\alpha^\star,\beta^\star)\right) \geq \eta\right\}$ is compact by our assumptions, there exists $(\pi_0, \alpha_0, \beta_0) \neq (\pi^\star, \alpha^\star, \beta^\star)$ such that

$$\sup_{d\left((\pi,\alpha,\beta),(\pi^\star,\alpha^\star,\beta^\star)\right)\geq\eta}M(\pi, \alpha, \beta) - M(\pi^\star, \alpha^\star, \beta^\star) = M(\pi_0, \alpha_0, \beta_0) - M(\pi^\star, \alpha^\star, \beta^\star) < 0.$$

Next we show that

$$\sup_{\theta,\pi,\alpha,\beta}\left|M_n(\theta, \pi, \alpha, \beta) - M(\pi, \alpha, \beta)\right| \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0.$$

We first have the following bound:

$$\left|M_n(\theta,\pi,\alpha,\beta) - M(\pi,\alpha,\beta)\right| \leq \left|M_n(\theta,\pi,\alpha,\beta) - \phi_n(\hat{z},\pi,\alpha,\beta)\right| + \left|\phi_n(\hat{z},\pi,\alpha,\beta) - \Phi_n(\tilde{z},\pi,\alpha,\beta)\right| + \left|\Phi_n(\tilde{z},\pi,\alpha,\beta) - M(\pi,\alpha,\beta)\right|. \tag{21}$$

We first consider the first term on the RHS of inequality (21):

$$\sup_{\theta,\pi,\alpha,\beta}\left|M_n(\theta,\pi,\alpha,\beta) - \phi_n(\hat{z},\pi,\alpha,\beta)\right| = \sup_{\theta,\pi,\alpha,\beta}\frac{\left|\mathcal{L}_2(Y_{[n]},X_{[n]};\theta,\pi,\alpha,\beta) - \mathcal{L}_1(Y_{[n]},X_{[n]};\hat{z}_{[n]},\pi,\alpha,\beta)\right|}{n(n-1)} \leq \frac{\log(1/\gamma)}{n-1} \xrightarrow[n\to\infty]{} 0. \tag{22}$$

Consider the second term on the RHS of inequality (21). If $\phi_n(\hat{z},\pi,\alpha,\beta) < \Phi_n(\tilde{z},\pi,\alpha,\beta)$, we have

$$\left|\phi_n(\hat{z},\pi,\alpha,\beta) - \Phi_n(\tilde{z},\pi,\alpha,\beta)\right| = \Phi_n(\tilde{z},\pi,\alpha,\beta) - \phi_n(\hat{z},\pi,\alpha,\beta) \leq \Phi_n(\tilde{z},\pi,\alpha,\beta) - \phi_n(\tilde{z},\pi,\alpha,\beta).$$

On the other hand, if $\phi_n(\hat{z},\pi,\alpha,\beta) \geq \Phi_n(\tilde{z},\pi,\alpha,\beta)$,

$$\left|\phi_n(\hat{z},\pi,\alpha,\beta) - \Phi_n(\tilde{z},\pi,\alpha,\beta)\right| = \phi_n(\hat{z},\pi,\alpha,\beta) - \Phi_n(\tilde{z},\pi,\alpha,\beta) \leq \phi_n(\hat{z},\pi,\alpha,\beta) - \Phi_n(\hat{z},\pi,\alpha,\beta).$$

Therefore, Proposition 6 implies that

$$\sup_{\pi,\alpha,\beta}\left|\phi_n(\hat{z},\pi,\alpha,\beta) - \Phi_n(\tilde{z},\pi,\alpha,\beta)\right| \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0. \tag{23}$$

Last, we notice that under our assumptions (A2) and (A6), and using the strong law of large numbers,

$$\sup_{\pi,\alpha,\beta}\left|\Phi_n(\tilde{z},\pi,\alpha,\beta) - M(\pi,\alpha,\beta)\right| \xrightarrow[n\to\infty]{\mathbb{P}^\star} 0. \tag{24}$$

Combining (22), (23) and (24), we have the desired result.

The consistency of $(\hat\pi, \hat\alpha, \hat\beta)$ follows from Proposition 7 and Theorem 3.4 of Celisse et al. (2012).

Proof of Theorem 3

Proof

We denote by $\hat{\mathbb{P}}(Z_{[n]} = z_{[n]} \mid X_{[n]}, Y_{[n]})$ the conditional distribution of $Z_{[n]}$ under the parameters $(\hat\theta, \hat\pi, \hat\alpha, \hat\beta)$. The following result is an extension of Proposition 3.8 of Celisse et al. (2012) and is needed to establish the consistency of $\hat\theta$.

Proposition 8

Assume that assumptions (A1)–(A6) hold, and that there exist estimators $\hat\pi, \hat\alpha, \hat\beta$ such that $\|\hat\pi - \pi^\star\| = o_P(v_n)$, $\|\hat\alpha - \alpha^\star\| = o_P(v_n)$, $\|\hat\beta - \beta^\star\| = o_P(v_n)$, with $v_n = o(\sqrt{\log n}/n)$. Let also $\hat\theta$ denote any estimator of $\theta$. Then for every $\epsilon > 0$,

$$\mathbb{P}^\star\left[\sum_{z_{[n]}\neq z^\star_{[n]}}\frac{\hat{\mathbb{P}}(Z_{[n]} = z_{[n]} \mid X_{[n]}, Y_{[n]})}{\hat{\mathbb{P}}(Z_{[n]} = z^\star_{[n]} \mid X_{[n]}, Y_{[n]})} > \epsilon\right] \leq \kappa_1 n\,e^{-\kappa_2\frac{(\log n)^2}{n v_n^2}} + \mathbb{P}\left[\|\hat\pi - \pi^\star\| > v_n\right] + \mathbb{P}\left[\|\hat\alpha - \alpha^\star\| > v_n\right] + \mathbb{P}\left[\|\hat\beta - \beta^\star\| > v_n\right]$$

for $n$ large enough, and for some constants $\kappa_1, \kappa_2 > 0$, where

$$\begin{aligned}
\log\left(\frac{\hat{\mathbb{P}}(Z_{[n]} = z_{[n]} \mid X_{[n]}, Y_{[n]})}{\hat{\mathbb{P}}(Z_{[n]} = z^\star_{[n]} \mid X_{[n]}, Y_{[n]})}\right) &= \log\left(\frac{\hat{\mathbb{P}}(X_{[n]} \mid Z_{[n]} = z_{[n]})\,\hat{\mathbb{P}}(Z_{[n]} = z_{[n]})}{\hat{\mathbb{P}}(X_{[n]} \mid Z_{[n]} = z^\star_{[n]})\,\hat{\mathbb{P}}(Z_{[n]} = z^\star_{[n]})}\right) + \log\left(\frac{\hat{\mathbb{P}}(Y_{[n]} \mid X_{[n]}, Z_{[n]} = z_{[n]})}{\hat{\mathbb{P}}(Y_{[n]} \mid X_{[n]}, Z_{[n]} = z^\star_{[n]})}\right) \\
&= \sum_{i\neq j}\left\{X_{ij}\log\left(\frac{\hat\pi_{z_i,z_j}}{\hat\pi_{z^\star_i,z^\star_j}}\right) + (1 - X_{ij})\log\left(\frac{1 - \hat\pi_{z_i,z_j}}{1 - \hat\pi_{z^\star_i,z^\star_j}}\right)\right\} + \sum_i\log\frac{\hat\theta_{z_i}}{\hat\theta_{z^\star_i}} \\
&\quad + \sum_{i\neq j}\left\{X_{ij}\left(\hat\alpha_{z_i,z_j}\log\hat\beta_{z_i,z_j} - \hat\alpha_{z^\star_i,z^\star_j}\log\hat\beta_{z^\star_i,z^\star_j}\right) + \left(\hat\alpha_{z_i,z_j} - \hat\alpha_{z^\star_i,z^\star_j}\right)X_{ij}\log Y_{ij} + X_{ij}Y_{ij}\left(\hat\beta_{z^\star_i,z^\star_j} - \hat\beta_{z_i,z_j}\right) + X_{ij}\left(\log\Gamma(\hat\alpha_{z^\star_i,z^\star_j}) - \log\Gamma(\hat\alpha_{z_i,z_j})\right)\right\} \\
&=: T_0 + T_1.
\end{aligned}$$
Proof

We can write

$$\begin{aligned}
T_1 &= \sum_{D}\left\{\pi^\star_{z^\star_i,z^\star_j}\frac{\alpha^\star_{z^\star_i,z^\star_j}}{\beta^\star_{z^\star_i,z^\star_j}}\left(\beta^\star_{z^\star_i,z^\star_j} - \beta^\star_{z_i,z_j}\right) + \pi^\star_{z^\star_i,z^\star_j}\alpha^\star_{z^\star_i,z^\star_j}\log\frac{\beta^\star_{z_i,z_j}}{\beta^\star_{z^\star_i,z^\star_j}}\right\} \\
&\quad + \sum_{D}\left\{\left(X_{ij}Y_{ij} - \frac{\alpha^\star_{z^\star_i,z^\star_j}}{\beta^\star_{z^\star_i,z^\star_j}}\pi^\star_{z^\star_i,z^\star_j}\right)\left(\beta^\star_{z^\star_i,z^\star_j} - \hat\beta_{z_i,z_j}\right) + \left(X_{ij} - \pi^\star_{z^\star_i,z^\star_j}\right)\alpha^\star_{z^\star_i,z^\star_j}\log\frac{\beta^\star_{z_i,z_j}}{\beta^\star_{z^\star_i,z^\star_j}}\right\} \\
&\quad + \sum_{\hat{D}\setminus D}\left\{\frac{\alpha^\star_{z^\star_i,z^\star_j}}{\beta^\star_{z^\star_i,z^\star_j}}\pi^\star_{z^\star_i,z^\star_j}\left(\hat\beta_{z^\star_i,z^\star_j} - \beta^\star_{z^\star_i,z^\star_j}\right) + \frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\pi^\star_{z^\star_i,z^\star_j}\left(\hat\beta_{z_i,z_j} - \beta^\star_{z_i,z_j}\right) + \pi^\star_{z^\star_i,z^\star_j}\left(\hat\alpha_{z_i,z_j}\log\frac{\hat\beta_{z_i,z_j}}{\hat\beta_{z^\star_i,z^\star_j}} - \alpha^\star_{z_i,z_j}\log\frac{\beta^\star_{z_i,z_j}}{\beta^\star_{z^\star_i,z^\star_j}}\right)\right\} \\
&\quad + \sum_{\hat{D}\setminus D}\left(X_{ij} - \pi^\star_{z^\star_i,z^\star_j}\right)\left(\hat\alpha_{z_i,z_j}\log\frac{\hat\beta_{z_i,z_j}}{\hat\beta_{z^\star_i,z^\star_j}} - \alpha^\star_{z_i,z_j}\log\frac{\beta^\star_{z_i,z_j}}{\beta^\star_{z^\star_i,z^\star_j}}\right) \\
&\quad + \sum_{\hat{D}\setminus D}\left\{\left(X_{ij}Y_{ij} - \frac{\alpha^\star_{z^\star_i,z^\star_j}}{\beta^\star_{z^\star_i,z^\star_j}}\pi^\star_{z^\star_i,z^\star_j}\right)\left(\hat\beta_{z^\star_i,z^\star_j} - \beta^\star_{z^\star_i,z^\star_j}\right) + \left(X_{ij}Y_{ij} - \frac{\alpha^\star_{z_i,z_j}}{\beta^\star_{z_i,z_j}}\pi^\star_{z^\star_i,z^\star_j}\right)\left(\beta^\star_{z_i,z_j} - \hat\beta_{z_i,z_j}\right)\right\} \\
&=: T_{1,1} + T_{1,2} + T_{1,3} + T_{1,4} + T_{1,5},
\end{aligned}$$

where

$$D := \left\{(i,j) : i\neq j,\ \pi^\star_{z_i,z_j} \neq \pi^\star_{z^\star_i,z^\star_j}\right\}, \qquad \hat{D} := \left\{(i,j) : i\neq j,\ \hat\pi_{z_i,z_j} \neq \hat\pi_{z^\star_i,z^\star_j}\right\}.$$

By the proof of Proposition 3.8 of Celisse et al. (2012), we have

$$\mathbb{P}^\star\left[\left\{\sum_{[z_{[n]}]\neq[z^\star_{[n]}]}\frac{\hat{\mathbb{P}}(Z_{[n]} = z_{[n]} \mid X_{[n]}, Y_{[n]})}{\hat{\mathbb{P}}(Z_{[n]} = z^\star_{[n]} \mid X_{[n]}, Y_{[n]})} > \epsilon\right\}\cap\Omega_n\right] \leq \sum_{r=1}^n\ \sum_{\substack{z_{[n]}\notin[z^\star_{[n]}],\\ \|z_{[n]} - z^\star_{[n]}\|_0 = r}}\mathbb{P}^\star\left[\left\{\log\frac{\hat{\mathbb{P}}(Z_{[n]} = z_{[n]} \mid X_{[n]}, Y_{[n]})}{\hat{\mathbb{P}}(Z_{[n]} = z^\star_{[n]} \mid X_{[n]}, Y_{[n]})} > -5r\log n\right\}\cap\Omega_n\right] = \sum_{r=1}^n\ \sum_{\substack{z_{[n]}\notin[z^\star_{[n]}],\\ \|z_{[n]} - z^\star_{[n]}\|_0 = r}}\mathbb{P}^\star\left[\{T_0 + T_1 > -5r\log n\}\cap\Omega_n\right],$$

where $\Omega_n$ denotes the event $\{\|\hat\pi - \pi^\star\| \leq v_n,\ \|\hat\alpha - \alpha^\star\| \leq v_n,\ \|\hat\beta - \beta^\star\| \leq v_n\}$, whose complement accounts for the last three terms in the statement of Proposition 8.

We have that

$$\mathbb{P}^\star\left[\{T_0 + T_1 > -5r\log n\}\cap\Omega_n\right] \leq \mathbb{P}^\star\left[\{T_0 > -10r\log n\}\cap\Omega_n\right] + \mathbb{P}^\star\left[\{T_1 > 5r\log n\}\cap\Omega_n\right].$$

The proof of Proposition 3.8 of Celisse et al. (2012) shows that

$$\mathbb{P}^\star\left[\{T_0 > -10r\log n\}\cap\Omega_n\right] \leq C_1\left\{\exp\left(8n\log n - C_2\frac{(\log n)^2}{n v_n^2}\right)\right\}^r. \tag{25}$$

We have that

$$T_{1,1} = \sum_{D}\pi^\star_{z^\star_i,z^\star_j}\frac{\alpha^\star_{z^\star_i,z^\star_j}}{\beta^\star_{z^\star_i,z^\star_j}}\left(\beta^\star_{z^\star_i,z^\star_j} - \beta^\star_{z_i,z_j}\right) + \pi^\star_{z^\star_i,z^\star_j}\alpha_{11}\log\frac{\beta^\star_{z_i,z_j}}{\beta^\star_{z^\star_i,z^\star_j}} \leq |D|\max_{(q,l)\neq(q',l')}\left\{-\mathrm{KL}_G\left((\alpha^\star_{ql},\beta^\star_{ql})\,\|\,(\alpha^\star_{q'l'},\beta^\star_{q'l'})\right)\right\} =: -|D|K.$$

Since $X_{ij}Y_{ij}$ is a sub-exponential random variable and $X_{ij}$ is bounded for all $i \neq j$, one can show that

$$\mathbb{P}\left[T_{1,2} + \frac{1}{2}T_{1,1} > t\right] \leq \max\left\{\exp\left(-C_1\frac{(t + |D|K)^2}{|D|}\right),\ \exp\left(-C_2\left(t + |D|K\right)\right)\right\}. \tag{26}$$

Using similar techniques as in the proof of Proposition 3.8 of Celisse et al. (2012), one can show that

$$\mathbb{P}\left[\{|T_{1,3}| > t\}\cap\Omega_n\right] \leq \mathbb{P}\left(v_n > C_3\frac{t}{nr}\right), \tag{27}$$
$$\mathbb{P}\left[\{T_{1,4} > t\}\cap\Omega_n\right] \leq \sum_{k}\sum_{\hat{D} : |\hat{D}| = k}Q^2\exp\left(-C_4\frac{t^2}{v_n^2(k + |D|)}\right), \tag{28}$$
$$\mathbb{P}\left[T_{1,5} + \frac{1}{2}T_{1,1} > t\right] \leq \sum_{k}\sum_{\hat{D} : |\hat{D}| = k}Q^2\max\left\{\exp\left[-C_5\frac{t + (|D| + k)K}{v_n}\right],\ \exp\left[-C_6\frac{\left(t + (|D| + k)K\right)^2}{(|D| + k)v_n^2}\right]\right\}. \tag{29}$$

Combining inequalities (25), (26), (27), (28), (29), and using $v_n = o(\sqrt{\log n}/n)$, we have for some $B_1, B_2, B_3 > 0$,

$$\mathbb{P}^\star\left[\left\{\sum_{[z_{[n]}]\neq[z^\star_{[n]}]}\frac{\hat{\mathbb{P}}(z_{[n]} \mid X_{[n]}, Y_{[n]})}{\hat{\mathbb{P}}(z^\star_{[n]} \mid X_{[n]}, Y_{[n]})} > \epsilon\right\}\cap\Omega_n\right] \leq \sum_{r=1}^n\binom{n}{r}(Q-1)^r B_1\left(\exp\left[B_2 n\log n - B_3\frac{(\log n)^2}{n v_n^2}\right]\right)^r = B_1\left[\left(1 + (Q-1)u_n\right)^n - 1\right],$$

where $u_n = \exp\left[B_2 n\log n - B_3\frac{(\log n)^2}{n v_n^2}\right]$; note that $(\log n)^2/(n v_n^2) = \omega(n\log n)$ since $v_n = o(\sqrt{\log n}/n)$. Since $(1 + (Q-1)u_n)^n \to 1$ as $n \to \infty$, the proof is completed.

The consistency of $\hat\theta$ is a consequence of the proposition above and follows the same lines as the proof of Theorem 3.9 of Celisse et al. (2012).

Funding

Open Access funding provided by the IReL Consortium. The funding was provided by Science Foundation Ireland (Grant No. SFI/12/RC/2289-2).

Footnotes

1

Historical data available at https://www.capitalbikeshare.com/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Abbe E. Community detection and stochastic block models: recent developments. J Mach Learn Res. 2018;18:1–86.
  2. Airoldi EM, Blei DM, Fienberg SE, Xing EP. Mixed membership stochastic blockmodels. J Mach Learn Res. 2008;9:1981–2014.
  3. Aicher C, Jacobs AZ, Clauset A (2013) Adapting the stochastic block model to edge-weighted networks. ICML workshop on structured learning
  4. Aicher C, Jacobs AZ, Clauset A. Learning latent block structure in weighted networks. J Compl Netw. 2015;3:221–248. doi: 10.1093/comnet/cnu026.
  5. Allman ES, Matias C, Rhodes JA. Identifiability of parameters in latent structure models with many observed variables. Ann Stat. 2009;37:3099–3132. doi: 10.1214/09-AOS689.
  6. Allman ES, Matias C, Rhodes JA. Parameter identifiability in a class of random graph mixture models. J Stat Plan Inference. 2011;141:1719–1736. doi: 10.1016/j.jspi.2010.11.022.
  7. Ambroise C, Matias C. New consistent and asymptotically normal parameter estimates for random-graph mixture models. J R Stat Soc Ser B Stat Methodol. 2012;74:3–35. doi: 10.1111/j.1467-9868.2011.01009.x.
  8. Baraud Y. A Bernstein-type inequality for suprema of random processes with applications to model selection in non-Gaussian regression. Bernoulli. 2010;16:1064–1085. doi: 10.3150/09-BEJ245.
  9. Barrat A, Barthélemy M, Pastor-Satorras R, Vespignani A. The architecture of complex weighted networks. Proc Natl Acad Sci. 2004;101:3747–3752. doi: 10.1073/pnas.0400087101.
  10. Bickel PJ, Chen A. A nonparametric view of network models and Newman–Girvan and other modularities. Proc Natl Acad Sci USA. 2009;106:21068–21073. doi: 10.1073/pnas.0907096106.
  11. Bickel PJ, Chen A, Levina E. The method of moments and degree distributions for network models. Ann Stat. 2011;39:2280–2301. doi: 10.1214/11-AOS904.
  12. Bickel P, Choi D, Chang X, Zhang H. Asymptotic normality of maximum likelihood and its variational approximation for stochastic blockmodels. Ann Stat. 2013;41:1922–1943. doi: 10.1214/13-AOS1124.
  13. Biernacki C, Celeux G, Govaert G. Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans Pattern Anal Mach Intell. 2000;22:719–725. doi: 10.1109/34.865189.
  14. Brault V, Keribin C, Mariadassou M. Consistency and asymptotic normality of Latent Block Model estimators. Electron J Stat. 2020;14:1234–1268. doi: 10.1214/20-EJS1695.
  15. Celisse A, Daudin J-J, Pierre L. Consistency of maximum-likelihood and variational estimators in the stochastic block model. Electron J Stat. 2012;6:1847–1899. doi: 10.1214/12-EJS729.
  16. Channarond A, Daudin J-J, Robin S. Classification and estimation in the stochastic blockmodel based on the empirical degrees. Electron J Stat. 2012;6:2574–2601. doi: 10.1214/12-EJS753.
  17. Choi DS, Wolfe PJ, Airoldi EM. Stochastic blockmodels with a growing number of classes. Biometrika. 2012;99:273–284. doi: 10.1093/biomet/asr053.
  18. Clauset A, Newman MEJ, Moore C. Finding community structure in very large networks. Phys Rev E. 2004;70:066111. doi: 10.1103/PhysRevE.70.066111.
  19. Côme E, Latouche P. Model selection and clustering in stochastic block models based on the exact integrated complete data likelihood. Stat Model. 2015;15:564–589. doi: 10.1177/1471082X15577017.
  20. Daudin J-J, Picard F, Robin S. A mixture model for random graphs. Stat Comput. 2008;18:173–183. doi: 10.1007/s11222-007-9046-7.
  21. Ghasemian A, Zhang P, Clauset A, Moore C, Peel L. Detectability thresholds and optimal algorithms for community structure in dynamic networks. Phys Rev X. 2016;6:031005.
  22. Haj AE, Slaoui Y, Louis P-Y, Khraibani Z (2020) Estimation in a binomial stochastic blockmodel for a weighted graph by a variational expectation maximization algorithm. Commun Stat Simul Comput:1–20
  23. Holland PW, Laskey KB, Leinhardt S. Stochastic blockmodels: first steps. Soc Netw. 1983;5:109–137. doi: 10.1016/0378-8733(83)90021-7.
  24. Jog V, Loh P-L (2015) Information-theoretic bounds for exact recovery in weighted stochastic block models using the Rényi divergence. CoRR, arXiv:abs/1509.06418
  25. Karrer B, Newman MEJ. Stochastic blockmodels and community structure in networks. Phys Rev E. 2011;83:016107. doi: 10.1103/PhysRevE.83.016107.
  26. Latouche P, Birmelé E, Ambroise C. Overlapping stochastic block models with application to the French political blogosphere. Ann Appl Stat. 2011;5:309–336. doi: 10.1214/10-AOAS382.
  27. Latouche P, Birmelé E, Ambroise C. Model selection in overlapping stochastic block models. Electron J Stat. 2014;8:762–794. doi: 10.1214/14-EJS903.
  28. Leger J-B, Vacher C, Daudin J-J. Detection of structurally homogeneous subsets in graphs. Stat Comput. 2014;24:675–692. doi: 10.1007/s11222-013-9395-3.
  29. Ludkin M. Inference for a generalised stochastic block model with unknown number of blocks and non-conjugate edge models. Comput Stat Data Anal. 2020;152:107051. doi: 10.1016/j.csda.2020.107051.
  30. Mariadassou M, Robin S, Vacher C. Uncovering latent structure in valued graphs: a variational approach. Ann Appl Stat. 2010;4:715–742. doi: 10.1214/10-AOAS361.
  31. Newman MEJ. Analysis of weighted networks. Phys Rev E. 2004;70:056131. doi: 10.1103/PhysRevE.70.056131.
  32. Peixoto TP. Nonparametric weighted stochastic block models. Phys Rev E. 2018;97:012306. doi: 10.1103/PhysRevE.97.012306.
  33. Peng L, Carvalho L. Bayesian degree-corrected stochastic blockmodels for community detection. Electron J Stat. 2016;10:2746–2779. doi: 10.1214/16-EJS1163.
  34. Pons P, Latapy M (2005) Computing communities in large networks using random walks. In: Proceedings of the 20th international conference on computer and information sciences. Springer, Berlin, ISCIS'05, pp 284–293
  35. Rohe K, Chatterjee S, Yu B. Spectral clustering and the high-dimensional stochastic blockmodel. Ann Stat. 2011;39:1878–1915. doi: 10.1214/11-AOS887.
  36. Saldaña DF, Yu Y, Feng Y. How many communities are there? J Comput Graph Stat. 2017;26:171–181. doi: 10.1080/10618600.2015.1096790.
  37. Snijders TAB, Nowicki K. Estimation and prediction for stochastic blockmodels for graphs with latent block structure. J Classif. 1997;14:75–100. doi: 10.1007/s003579900004.
  38. Stouffer DB, Bascompte J. Compartmentalization increases food-web persistence. Proc Natl Acad Sci USA. 2011;108:3648–3652. doi: 10.1073/pnas.1014353108.
  39. von Luxburg U. A tutorial on spectral clustering. Stat Comput. 2007;17:395–416. doi: 10.1007/s11222-007-9033-z.
  40. Wang YXR, Bickel PJ. Likelihood-based model selection for stochastic block models. Ann Stat. 2017;45:500–528.
  41. Yan X (2016) Bayesian model selection of stochastic block models. In: 2016 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), pp 323–328
  42. Ye Z-S, Chen N. Closed-form estimators for the gamma distribution derived from likelihood equations. Am Stat. 2017;71:177–181. doi: 10.1080/00031305.2016.1209129.
  43. Zanghi H, Picard F, Miele V, Ambroise C. Strategies for online inference of model-based clustering in large and growing networks. Ann Appl Stat. 2010;4:687–714. doi: 10.1214/10-AOAS359.
