Abstract
Network models are widely used to represent relational information among interacting units and the structural implications of these relations. Recently, social network studies have focused a great deal of attention on random graph models of networks whose nodes represent individual social actors and whose edges represent a specified relationship between the actors.
Most inference for social network models assumes that the presence or absence of all possible links is observed, that the information is completely reliable, and that there are no measurement (e.g., recording) errors. This is clearly not true in practice, as much network data is collected through sample surveys. In addition, even if a census of a population is attempted, individuals and links between individuals are missed (i.e., do not appear in the recorded data).
In this paper we develop the conceptual and computational theory for inference based on sampled network information. We first review forms of network sampling designs used in practice. We consider inference from the likelihood framework, and develop a typology of network data that reflects their treatment within this frame. We then develop inference for social network models based on information from adaptive network designs.
We motivate and illustrate these ideas by analyzing the effect of link-tracing sampling designs on a collaboration network.
Key words and phrases: Exponential family random graph model, p* model, Markov chain Monte Carlo, design-based inference
1. Introduction
Networks are a useful device to represent “relational data,” that is, data with properties beyond the attributes of the individuals (nodes) involved. Relational data arise in many fields and network models are a natural approach to representing the patterns of the relations between nodes. Networks can be used to describe such diverse ideas as the behavior of epidemics, the interconnectedness of corporate boards, and networks of genetic regulatory interactions. In social network applications, the nodes in a graph typically represent individuals, and the ties (edges) represent a specified relationship between individuals. Nodes can also be used to represent larger social units (groups, families, organizations), objects (airports, servers, locations) or abstract entities (concepts, texts, tasks, random variables). We consider here stochastic models for such graphs. These models attempt to represent the stochastic mechanisms that produce relational ties, and the complex dependencies thus induced.
Social network data typically consist of a set of n actors and a relational tie random variable, Yij, measured on each possible ordered pair of actors, (i, j), i, j = 1, …, n, i ≠ j. In the simplest cases, Yij is a dichotomous variable, indicating the presence or absence of some relation of interest, such as friendship, collaboration, transmission of information or disease, etc. The data are often represented by an n × n sociomatrix Y, with diagonal elements, representing self-ties, treated as structural zeros. In the case of binary relations, the data can also be thought of as a graph in which the nodes are actors and the edge set is {(i, j) : Yij = 1}. For many networks the relations are undirected in the sense that Yij = Yji, i, j = 1, …, n.
In the application in this paper we consider a network formed from the collaborative working relations between n = 36 partners in a New England law firm [Lazega (2001)]. We focus on the undirected relation where a tie is said to exist between two partners if and only if both indicate that they collaborate with the other. The scientific objective is to explain the observed structural pattern of collaborative ties as a function of nodal and relational attributes. The relational data is supplemented by four actor attributes: seniority (the rank number of chronological entry into the firm divided by 36), practice (there are two possible values, litigation = 0 and corporate law = 1), gender (3 of the 36 lawyers are female) and office (there are three different offices in three different cities each of different size).
For large or hard-to-find populations of actors it is difficult to obtain information on all actors and all relational ties. As a result, various survey sampling strategies and methods are applied. Some of these methods make use of network information revealed by earlier stages of sampling to guide later sampling. These adaptive designs allow for more efficient sampling than conventional sampling designs. We consider such designs in Section 2.
Most of the work presented here considers the network over the set of actors to be the realization of a stochastic process. We seek to model that process. An alternative is to view the network as a fixed structure about which we wish to make inference based on partial observation.
In this paper we develop a theoretical framework for inference from network data that are partially-observed due to sampling. This work extends the fundamental work of Thompson and Frank (2000). For purposes of presentation, we focus on the relational data itself and suppress reference to covariates of the nodes. This more general situation is dealt with in Handcock and Gile (2007).
In Section 2 we present a conceptual framework for network sampling. We extend this framework in Section 3 to focus on inference from sampled network data. We first consider the limitations of design-based inference in this setting, then focus on likelihood-based inference. Section 4 presents the rich Exponential Family Random Graph Model (ERGM) family of models that has been applied to complete network data. Section 5 presents a study of the effect of sampling from a known complete network of law firm collaborations. Finally, in Section 6, we discuss the overall ramifications for the modeling of social networks with sampled data and note some extensions.
2. Network sampling design
In this section we consider the conceptual and computational theory of network sampling.
There is a substantial literature on network sampling designs. Our development here follows Thompson and Seber (1996) and Thompson and Frank (2000). Let 𝒴 denote the set of possible networks on the n actors. Note that in most network samples, the unit of sampling is the actor or node, while the unit of analysis is typically the dyad. Let D be the n × n random binary matrix indicating if the corresponding element of Y was sampled or not. The value of the i, jth element is 0 if the (i, j) ordered pair was not sampled and 1 if the element was sampled. Denote the sample space of D by 𝒟. We shall refer to the probability distribution of D as the sampling design. The sampling design is often related to the structure of the graph and a parameter ψ ∈ Ψ, so we posit a model for it. Specifically, let P(D = d|Y = y;ψ) denote the probability of selecting sample d given a network y and parameter ψ.
Under many sampling designs the set of sampled dyads is determined by the set of sampled nodes. Let S represent a binary random n-vector indicating a subset of the nodes, where the ith element is 1 if the ith node is part of the set, and is 0 otherwise. We often consider situations where D is determined by some S which is itself a result of a sample design denoted by P(S|Y, ψ). For example, consider an undirected network where the set of observed dyads are those that are incident on at least one of the sampled nodes. In this case D = S ◦ 1 + 1 ◦ S − S ◦ S, where 1 is the binary n-vector of 1s. A primary example of this is where people are sampled and surveyed to determine all their edges.
We introduce further notation to allow us to refer to the observed and unobserved portions of the relational structures. Denote the observed part of the complete graph Y by Yobs = {Yij : Dij = 1} and the unobserved part by Ymis = {Yij : Dij = 0}. The full observed data is then {Yobs, D}, in contrast to the complete data: {Yobs, Ymis, D}. We will write the complete graph Y = {Yobs, Ymis}. In addition, we make the convention that undefined numbers act as identity elements in addition and multiplication. So a number x plus or multiplied by an undefined number y is x, and hence Y = Yobs + Ymis. For a given network y ∈ 𝒴, denote the corresponding data as {yobs, d} and the other elements by their lower-case versions y = yobs + ymis. Finally denote 𝒴(yobs) = {υ : yobs + υ ∈ 𝒴}, that is, the set of possible unobserved elements which together with yobs result in a valid network. The set yobs + 𝒴(yobs) is then the restriction of 𝒴 to yobs.
A sampling design is conventional if it does not use information collected during the survey to direct subsequent sampling of individuals (e.g., network census and ego-centric designs). Specifically, a design is conventional if P(D = d|Y = y;ψ) = P(D = d|ψ) ∀y ∈𝒴. A simple example of a conventional sampling design for networks is an ego-centric design, consisting of a simple random sampling of a subset of the actors, followed by complete observation of the dyads originating from those actors. A complete census of the network is another. More complex examples include designs using probability sampling of pairs and auxiliary variables. Alternatively, we call a sampling design adaptive if it uses information collected during the survey to direct subsequent sampling, but the sampling design depends only on the observed data. Specifically, a design is adaptive if: P(D = d|Y = y;ψ) = P(D = d|Yobs = yobs, ψ) ∀y ∈ yobs + 𝒴(yobs). Hence a design can be adaptive for a given yobs (rather than all possible observed data), although most common such designs are adaptive for all possible data observed under them. Conventional designs can be considered to be special cases of adaptive designs.
Note that adaptive sampling designs satisfy

P(D = d | Yobs = yobs, Ymis = ymis; ψ) = P(D = d | Yobs = yobs; ψ) for all ymis, (2.1)
a condition called “missing at random” by Rubin (1976) in the context of missing data. Note that this is a bit misleading—it does not say that the propensity to be observed is unrelated to the unobserved portions of the network, but that this relationship can be explained by the data that are observed. The observed part of the data are often vital to equality (2.1). Hence adaptive designs are essentially those for which the unobserved dyads are missing at random.
Denote by [a] the vector-valued function that is 1 if the corresponding element of the vector a is logically true, and 0 otherwise. Let a × b be the elementwise product of the column vector a and the column vector b and a · b be the scalar product ∑j ajbj. Let a ◦ b be the outer product matrix with ijth element aibj. If y is a matrix and b a vector let y · b be the column vector with ith element ∑j yjibj.
2.1. Some adaptive designs for undirected networks
We now consider several examples of adaptive designs for undirected networks.
2.1.1. Example: Ego-centric design
Consider a simple ego-centric design:
Select individuals at random, each with probability ψ.
Observe all dyads involving the selected individuals (i.e., dyads with at least one of the selected individuals as one of the pair of actors).
The sampling design can be determined for this case. First note that the marginal probability that a given dyad is observed is

P(Dij = 1 | ψ) = 1 − (1 − ψ)²,  i ≠ j.

This, however, does not give the joint distribution of D. Let S be the binary n-vector where 1 and 0 indicate that the corresponding individual has been selected, or not, respectively. Within this design, S is determined by D (i.e., S = [D·1 = (n − 1)1]). Then P(S = s|Y, ψ) = ψ^{1·s}(1 − ψ)^{n−1·s}, s ∈ {0, 1}ⁿ. If the ith element of S is 1 then all elements in the ith row and column of D are 1, and Dij = 0 if and only if the ith and jth elements of S are both 0. Hence the probability distribution of D is

P(D = d | Y = y, ψ) = ψ^{1·s}(1 − ψ)^{n−1·s}

for d = 1 ∘ s + s ∘ 1 − s ∘ s, s ∈ {0, 1}ⁿ.
Note that the distribution does not depend on Y; the design is therefore conventional.
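To make the construction of D from S concrete, the following base-R sketch simulates the undirected ego-centric design for an arbitrary network; the function name, the example size n = 10 and the density 0.2 are illustrative choices of ours, not part of the original design description.

```r
## Minimal sketch (base R): simulate the undirected ego-centric design.
## 'y' is an n x n symmetric 0/1 adjacency matrix; 'psi' is the selection probability.
sample_egocentric <- function(y, psi) {
  n <- nrow(y)
  s <- rbinom(n, 1, psi)                       # S: Bernoulli(psi) node selection
  d <- outer(s, rep(1, n)) + outer(rep(1, n), s) - outer(s, s)  # D = S∘1 + 1∘S − S∘S
  diag(d) <- 0                                 # self-ties are structural zeros
  y_obs <- ifelse(d == 1, y, NA)               # unsampled dyads are unobserved
  list(s = s, d = d, y_obs = y_obs)
}

## Example on an arbitrary 10-node network (illustrative only)
set.seed(1)
n <- 10
y <- matrix(rbinom(n * n, 1, 0.2), n, n)
y <- 1 * ((y + t(y)) > 0); diag(y) <- 0        # symmetrize, zero the diagonal
str(sample_egocentric(y, psi = 0.3))
```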
2.1.2. Example: One-wave link-tracing design
We refer to any sample in which subsequent nodes are enrolled based on their observed relations with other sampled nodes as a link-tracing design. Consider the one-wave link-tracing design specified as follows:
Select individuals at random, each with probability ψ.
Observe all dyads involving the selected individuals.
Identify all individuals reported to have at least one relation with the initial sample, and select them with probability 1.
Observe all dyads involving the newly selected individuals.
Let S0 denote the indicator vector for the initial sample and S1 the indicator for the added individuals not in the initial sample. Then the whole sample of individuals is S = S0 + S1. As in the undirected ego-centric design, D = 1 ∘ S + S ∘ 1 − S ∘ S. Note that S1 = [Y·S0 × (1 − S0) > 0] is derivable from S0 and Y. Hence

P(D = d | Y = y, ψ) = ∑ ψ^{1·s0}(1 − ψ)^{n−1·s0}

for d = 1 ∘ s + s ∘ 1 − s ∘ s, where the sum is over all initial samples s0 ∈ {0, 1}ⁿ whose induced sample s = s0 + [y·s0 × (1 − s0) > 0] produces d.
2.1.3. Example: Multi-wave link-tracing design
Consider a multi-wave link-tracing design in which the complete set of partners of the kth wave are enrolled, that is, the link-tracing process described above is carried out k times. If k is fixed in advance this is called k-wave link-tracing.
Let S0 denote the indicator for the initial sample, S1 the indicator for the added individuals in the first wave not in the initial sample, …, Sk the indicator for the added individuals in wave k not in the prior samples. Then the whole sample of individuals is S = S0 + S1 + ⋯ + Sk. As in the ego-centric design D = 1 ∘ S + S ∘ 1 − S ∘ S. Note that Sm = [Y·Sm−1 × (1 − S0 − ⋯ − Sm−1) > 0], m = 1, …, k, is derivable from S0 and Y. Then

P(D = d | Y = y, ψ) = ∑ ψ^{1·s0}(1 − ψ)^{n−1·s0}

for d = 1 ∘ s + s ∘ 1 − s ∘ s, s ∈ {0, 1}ⁿ, where the sum is over all initial samples s0 ∈ {0, 1}ⁿ that produce d. Here Sm = [Yobs·Sm−1 × (1 − S0 − ⋯ − Sm−1) > 0], m = 1, …, k, so that the individuals selected in the successive waves only depend on the observed part of the graph, and not on the unobserved portions of the graph. Clearly, this is also true for one-wave link-tracing as a simple case of k-wave link-tracing. Note that it may be possible that Sm = ∅ for some m < k, so that subsequent waves do not increase the sample size (i.e., Sk = ∅). A variant of the k-wave link-tracing design is the saturated link-tracing design, in which sampling continues until a wave m such that Sm = ∅. We interpret k as the bound on the number of waves sampled imposed by the sampling design. Since saturated link-tracing does not restrict the number of waves sampled, we represent it by setting k = ∞.
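As a concrete illustration, the base-R sketch below implements the k-wave construction just described (one-wave is the case k = 1, saturated link-tracing corresponds to letting k be large). The function name is ours; each wave uses only rows and columns of y incident on already-sampled nodes, which is what makes the design adaptive.

```r
## Minimal sketch (base R): k-wave link-tracing on an undirected network y.
## Waves follow S_m = [ y %*% S_{m-1} * (1 - S_0 - ... - S_{m-1}) > 0 ].
sample_link_tracing <- function(y, psi, k) {
  n <- nrow(y)
  s <- rbinom(n, 1, psi)                     # S0: initial Bernoulli(psi) sample
  s_total <- s
  for (m in seq_len(k)) {
    reached <- as.numeric(y %*% s > 0)       # nodes tied to the latest wave
    s <- 1 * (reached == 1 & s_total == 0)   # newly added individuals only
    if (sum(s) == 0) break                   # saturated: no new nodes to add
    s_total <- s_total + s
  }
  d <- outer(s_total, rep(1, n)) + outer(rep(1, n), s_total) -
       outer(s_total, s_total)               # D = 1∘S + S∘1 − S∘S
  diag(d) <- 0
  list(s = s_total, d = d, y_obs = ifelse(d == 1, y, NA))
}
```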
2.2. Some adaptive designs for directed networks
We can also consider variants of these adaptive designs for directed networks.
2.2.1. Example: Ego-centric design
Consider a simple ego-centric design:
Select individuals at random, each with probability ψ.
Observe all directed dyads originating at the selected individuals.
As before, the sampling design can be determined for this case. Since a directed dyad is observed only if its tail node is sampled, P(Dij = 1 | ψ) = ψ for i ≠ j, and D = S0 ∘ 1, where S0 is the indicator vector of the selected individuals with P(S0 = s | ψ) = ψ^{1·s}(1 − ψ)^{n−1·s}. Hence the probability distribution of D is

P(D = d | Y = y, ψ) = ψ^{1·s}(1 − ψ)^{n−1·s}
for d = s ◦ 1, s ∈ {0, 1}n and the distribution does not depend on Y. As in the undirected case, this design is therefore conventional.
2.2.2. Example: One-wave link-tracing design
Consider a one-wave link-tracing design on a directed network specified as follows:
Select individuals at random, each with probability ψ.
Observe all directed dyads originating at the selected individuals.
Identify all individuals receiving an arc from a member of the initial sample, and select them with probability 1.
Observe all directed dyads originating at the newly selected individuals.
Let S0 denote the indicator vector for the initial sample and S1 = [Y·S0 × (1 − S0) > 0] the indicator for the added individuals not in the initial sample. Then the whole sample of individuals is S = S0 + S1. As in the ego-centric design D = S ∘ 1 and

P(D = d | Y = y, ψ) = ∑ ψ^{1·s0}(1 − ψ)^{n−1·s0}

for d = s ∘ 1, s ∈ {0, 1}ⁿ, where the sum is over all initial samples s0 ∈ {0, 1}ⁿ that induce s = s0 + [y·s0 × (1 − s0) > 0].
2.2.3. Example: Multi-wave link-tracing design
Consider a directed version of the multi-wave link-tracing design in which the complete set of out-partners of the kth wave are enrolled. The whole sample of individuals is S = S0 + S1 + ⋯ + Sk, and Sm = [Y·Sm−1 × (1 − S0 − ⋯ − Sm−1) > 0], m = 1, …, k, is derivable from S0 and Y. Then

P(D = d | Y = y, ψ) = ∑ ψ^{1·s0}(1 − ψ)^{n−1·s0}

for d = s ∘ 1, s ∈ {0, 1}ⁿ, where the sum is over all initial samples s0 that produce d. We note that Sm = [Yobs·Sm−1 × (1 − S0 − ⋯ − Sm−1) > 0], m = 1, …, k, so that the individuals selected in successive waves depend only on the previously observed part of the graph, and not on the unobserved portions. The saturated link-tracing design is represented by k = ∞.
3. Inferential frameworks
In this section we consider two frameworks for inference based on sampled data. In the design-based framework y represents the fixed population and interest focuses on characterizing y based on partial observation. The random variation considered is due to the sampling design alone. A key advantage of this approach is that it does not require a model for the data themselves, although a model may also be used to guide design-based inference [Särndal, Swensson and Wretman (1992)]. Under the model-based framework, Y is stochastic and is a realization from a stochastic process depending on a parameter η. Here interest focuses on η which characterizes the mechanism that produced the complete network Y. We find severe limitations of the design-based framework for data from link-tracing samples, and focus on likelihood inference within the model-based framework.
3.1. Design-based inference for the network
In the design-based frame, the unobserved data values, or some functions thereof, are analogous to the parameters of interest in likelihood inference. The population of data values is treated as fixed, and all uncertainty in the estimates is due to the sampling design, which is typically assumed to be fully known (not just up to the parameter ψ).
Inference typically focuses on identifying design-unbiased estimators for quantities of interest measured on the complete network. In an undirected network analysis setting, for example, we can consider estimating τ = ∑_{i<j} yij, the number of edges in the network. Note that y is a partially-observed matrix of constants in this setting. Then τ̂ is design-unbiased for τ if

E_D[τ̂] = τ,

where the expectation is taken over realizations of the sampling process. Specifically,

E_D[τ̂] = ∑_{d∈𝒟} τ̂(yobs(d), d) P(D = d | ψ, y),

where τ̂(yobs(d), d) is the estimator expressed as a function of the observed network information. Similarly, the variance of the estimator is computed with respect to the variation induced by the sampling procedure:

V_D[τ̂] = ∑_{d∈𝒟} [τ̂(yobs(d), d) − E_D[τ̂]]² P(D = d | ψ, y).

The Horvitz–Thompson estimator is a classic tool of design-based inference, and is based on inverse-probability weighting the sample. In our example, it is

τ̂ = ∑_{i<j : dij = 1} yij / πij,
where the dyadic sampling probability πij = P(Dij = 1| ψ, y) is the probability of observing dyad (i, j).
Consider an estimator of τ based on relations observed through the ego-centric design of Section 2.1.1. Then

πij = P(Dij = 1 | ψ) = 1 − (1 − ψ)²,  i < j.

The classic Horvitz–Thompson estimator τ̂ of τ then weights each observation by the inverse of its sampling probability:

τ̂ = [1 − (1 − ψ)²]⁻¹ ∑_{i<j} dij yij.

Then

V[τ̂] = ∑_{i<j} ∑_{k<l} (πij,kl − πij πkl) (yij / πij)(ykl / πkl),

where πij,kl = P(S0i + S0j > 0, S0k + S0l > 0) or, explicitly,

πij,kl = 1 − 2(1 − ψ)² + (1 − ψ)^m,  m = |{i, j} ∪ {k, l}|,

with πij,ij = πij. Among the many available estimators for the variance of the Horvitz–Thompson estimator is the Horvitz–Thompson variance estimator:

V̂[τ̂] = ∑_{i<j : dij = 1} ∑_{k<l : dkl = 1} [(πij,kl − πij πkl) / πij,kl] (yij / πij)(ykl / πkl).
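The following base-R sketch evaluates these quantities for data collected under the ego-centric design: the point estimate of τ and the Horvitz–Thompson variance estimate, with the joint inclusion probability computed from the inclusion–exclusion expression above. The function name is ours; `y_obs` is an adjacency matrix with NA on unobserved dyads and `psi` is the (known) initial selection probability.

```r
## Minimal sketch (base R): Horvitz-Thompson estimate of the edge count tau and
## its HT variance estimate under the undirected ego-centric design.
ht_edge_count <- function(y_obs, psi) {
  pi_ij <- 1 - (1 - psi)^2                             # dyadic inclusion probability
  obs   <- which(upper.tri(y_obs) & !is.na(y_obs), arr.ind = TRUE)
  tau_hat <- sum(y_obs[obs]) / pi_ij                   # inverse-probability weighting

  ## pi_{ij,kl} = 1 - 2(1-psi)^2 + (1-psi)^m, m = # distinct nodes in {i,j,k,l}
  v_hat <- 0
  for (a in seq_len(nrow(obs))) for (b in seq_len(nrow(obs))) {
    m     <- length(unique(c(obs[a, ], obs[b, ])))
    pi_ab <- 1 - 2 * (1 - psi)^2 + (1 - psi)^m
    v_hat <- v_hat + (pi_ab - pi_ij^2) / pi_ab *
             y_obs[obs[a, , drop = FALSE]] * y_obs[obs[b, , drop = FALSE]] / pi_ij^2
  }
  list(tau_hat = tau_hat, v_hat = v_hat)
}

## Usage with the earlier ego-centric sampling sketch (illustrative only)
samp <- sample_egocentric(y, psi = 0.3)
ht_edge_count(samp$y_obs, psi = 0.3)
```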
Note the importance of the unit sampling probabilities in these estimators. This is a hallmark of design-based inference: estimation relies on full knowledge of the sampling procedure in order to make unbiased inferences without assumptions about the distribution of the unobserved data. This typically requires knowledge of the sampling probability of each unit in the sample. This procedure is complicated in the network context, in that we require the sampling probabilities of the units of analysis, dyads, which are different from the units of sampling, nodes. In fact, even for single-wave link-tracing samples, the dyadic sampling probabilities are not observable.
To see this, define the nodal neighborhood of a dyad (i, j), N(i, j), where k ∈ N(i, j) ⇔ {S0k = 1 ⇒ Dij = 1}. Then πij = P(∃k : S0k = 1, k ∈ N(i, j)).
For the one-wave link-tracing design of Section 2.1.2, N(i, j) = {k : yik = 1 or yjk = 1 or k ∈ {i, j}}. Then if the initial sample S0 is drawn according to the design in Section 2.1.2, πij = 1 − (1 − ψ)^|N(i,j)|. Suppose S0i = 1 and S0j = 0. Then dyad (i, j) is observed, but |N(i, j)| is unknown because it is unknown which k satisfy yjk = 1. The link-tracing sampling structures for which nodal and dyadic sampling probabilities are observable are summarized in Table 1. For directed networks, we assume sampled nodes provide information on their out-arcs only, so that D is not symmetric and Dij = 1 ⇔ Si = 1.
Table 1.
Observable sampling probabilities under various sampling schemes for directed and undirected networks. Nodal and dyadic sampling probabilities are considered separately. “X” indicates observable sampling probabilities, while a blank indicates unobservable sampling probabilities
| Sampling scheme | Nodal probabilities πi (Undirected) | Nodal probabilities πi (Directed) | Dyadic probabilities πij (Undirected) | Dyadic probabilities πij (Directed) |
|---|---|---|---|---|
| Ego-centric | X | X | X | X |
| One-wave | X | | | |
| k-wave, 1 < k < ∞ | | | | |
| Saturated | X | | | |
Of the designs considered here, dyadic sampling probabilities are observable only for ego-centric samples, and never for link-tracing designs. Nodal sampling probabilities are also observable for ego-centric sampling, as well as for one-wave and saturated link-tracing designs in undirected networks. Overall, this table shows strong limitations on the applicability to link-tracing designs of design-based methods requiring knowledge of sampling probabilities. Note that this limitation is not specific to dyad-based network statistics. Estimation of triad-based network statistics such as a triad census would be subject to similar limitations. A Horvitz–Thompson style estimator would rely on a weighted sum of observed triads, weighted according to sampling probabilities. Sampling probabilities for triads would be even more complex, as they would typically require sampling of two of the three nodes involved in an undirected network, and at least two of the three nodes in a directed network, depending on the type of triad. Neither of these sampling probabilities can be computed for link-tracing samples in which the degrees or in-degrees of some involved nodes are unobserved.
Not surprisingly, most of the work on design-based estimators for link-tracing samples has focused on the cases where sampling probabilities are observable: typically for one-wave or saturated samples used to estimate population means of nodal covariates. Frank (2005) presents a good overview and extensive citations to this literature. See also Thompson and Collins (2002); Snijders (1992). Although examples tend to focus on instances where sampling probabilities are observable, the limited applicability of classical design-based methods in estimating structural network features based on link-tracing samples has not been emphasized in the literature.
In the absence of observable sampling probabilities, design-based inference requires a mechanism for estimating sampling probabilities. This is most often necessary in the context of out-of-design missing data, and is addressed with approaches such as propensity scoring [Rosenbaum and Rubin (1983)], which rely on auxiliary information available for the full sampling frame to estimate unknown sampling probabilities. Link-tracing differs from the traditional context of such methods in that the sampling probabilities are unobserved even when the design is executed faithfully, and in that the unknown sampling probabilities result directly from the unobserved variable of interest. In particular, estimating unknown sampling probabilities is equivalent to estimating unobserved relations based on the observed relations. One approach is to augment the sample with sufficient information to allow for determination of the sampling probabilities. However, in most cases this requires a substantial expansion of the sampling design. Therefore, in practice we must rely on a model relating the observed portions of the network structure to the unobserved portions. Lack of reliance on an assumed outcome model is a great advantage of the design-based framework over the model-based framework. By introducing a model to estimate sampling probabilities based on the outcome of interest, we reintroduce this reliance on model form, negating much of the advantage of the design-based framework. Furthermore, the naive use of this approach has an ad hoc flavor, while still requiring complex observation weights and variance estimators.
In the next section we describe an alternative, more flexible, likelihood-based approach to network inference based on link-tracing samples.
3.2. Likelihood-based inference
Consider a parametric model for the random behavior of Y depending on a parameter p-vector η:

Pη(Y = y) = P(Y = y | η),  y ∈ 𝒴, η ∈ Ξ. (3.1)
In the model-based framework, if Y is completely observed, inference for η can be based on the likelihood

L[η | Y = y] ∝ P(Y = y | η).
This situation has been considered in detail in Hunter and Handcock (2006) and the references therein. In the general case, where Y may be only partially observed, we can consider using the (so-called) face-value likelihood based solely on Yobs:

L[η | Yobs = yobs] ∝ P(Yobs = yobs | η) = ∑_{υ∈𝒴(yobs)} P(Y = yobs + υ | η). (3.2)
This ignores the additional information about η available in D. Inference for η and ψ should be based on all the available observed data, including the sampling design information. This likelihood is any function of η and ψ proportional to P(D, Yobs|η, ψ):

L[η, ψ | Yobs = yobs, D = d] ∝ P(D = d, Yobs = yobs | η, ψ) = ∑_{υ∈𝒴(yobs)} P(D = d | Y = yobs + υ; ψ) P(Y = yobs + υ | η).
Thus the correct model is related to the complete data model through the sampling design as well as the observed nodes and dyads.
In likelihood inference, the sampling parameter ψ is a nuisance parameter, and modeling the sampling design along with the data structure adds a great deal of complexity. It is natural to ask when we might consider the simpler face-value likelihood, (3.2), which ignores the sampling design.
In the context of missing data, Rubin (1976) introduced the concept of ignorability to specify when inference based on the face-value likelihood is efficient. We introduce the term amenability to represent the notion of ignorability for network sampling strategies within a likelihood framework.
In many situations where models are used, the parameters η ∈ Ξ and ψ ∈ Ψ are distinct, in the sense that the joint parameter space of (η, ψ) is Ξ × Ψ. If the sampling design is adaptive and the parameters η and ψ are distinct,

P(D = d, Yobs = yobs | η, ψ) = ∑_{υ∈𝒴(yobs)} P(D = d | Y = yobs + υ; ψ) P(Y = yobs + υ | η) = P(D = d | Yobs = yobs; ψ) ∑_{υ∈𝒴(yobs)} P(Y = yobs + υ | η) = P(D = d | Yobs = yobs; ψ) P(Yobs = yobs | η).
Thus if the sampling design is adaptive and the structural and sampling parameters are distinct, then the sampling design is ignorable in the sense that the resulting likelihoods are proportional. When this condition is satisfied likelihood-based inference for η, as proposed here, is unaffected by the (possibly unknown) sampling design. This leads to the following definition and result.
Definition
Consider a sampling design governed by parameter ψ ∈ Ψ and a stochastic network model Pη(Y = y) governed by parameter η ∈ Ξ. We call the sampling design amenable to the model if the sampling design is adaptive and the parameters ψ and η are distinct.
Result
Consider networks produced by the stochastic network model Pη(Y = y) governed by parameter η ∈ Ξ which are observed by a sampling design with parameter ψ ∈ Ψ amenable to the model. Then the likelihood for η and ψ is

L[η, ψ | Yobs = yobs, D = d] ∝ P(D = d | Yobs = yobs; ψ) P(Yobs = yobs | η).
Thus likelihood-based inference for η from L[η, ψ|Yobs, D] will be the same as likelihood-based inference for η based on L[η|Yobs].
This result shows that for standard designs such as the ego-centric, single-wave and multi-wave sampling designs in Section 2, likelihood-based inference can be based on the face-value likelihood L[η|Yobs]. This was first noted in the foundational paper of Thompson and Frank (2000). Explicitly, this is

L[η | Yobs = yobs] ∝ ∑_{υ∈𝒴(yobs)} P(Y = yobs + υ | η).
Hence we can evaluate the likelihood by just enumerating the full data likelihood over all possible values for the missing data.
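For very small networks this enumeration can be carried out directly. The base-R sketch below does so for a toy statistic vector g(y) = (edges, triangles); the function names, the statistic choice and the 4-node example are our illustrative assumptions, and for realistic network sizes the enumeration must be replaced by the MCMC approximations of Section 4.1.

```r
## Minimal sketch (base R): face-value log-likelihood log L[eta | y_obs] for a tiny
## undirected network by brute-force enumeration, with g(y) = (edges, triangles).
g_stats <- function(y) c(edges = sum(y[upper.tri(y)]),
                         triangles = sum(diag(y %*% y %*% y)) / 6)

facevalue_loglik <- function(eta, y_obs) {
  n    <- nrow(y_obs)
  idx  <- which(upper.tri(y_obs), arr.ind = TRUE)   # all dyads, column-major order
  base <- y_obs[idx]
  mis  <- which(is.na(base))                        # positions of missing dyads
  fill <- function(vals) {                          # rebuild a full sociomatrix
    y <- matrix(0, n, n); y[idx] <- vals; y + t(y)
  }
  ## numerator: sum over all completions of the missing dyads
  num <- 0
  for (v in 0:(2^length(mis) - 1)) {
    vals <- base
    vals[mis] <- as.integer(intToBits(v))[seq_along(mis)]
    num <- num + exp(sum(eta * g_stats(fill(vals))))
  }
  ## denominator: the normalizing constant, summing over all networks on n nodes
  den <- 0
  for (v in 0:(2^length(base) - 1)) {
    vals <- as.integer(intToBits(v))[seq_along(base)]
    den <- den + exp(sum(eta * g_stats(fill(vals))))
  }
  log(num) - log(den)
}

## Example: a 4-node network with two unobserved dyads (illustrative only)
y_obs <- matrix(c(0, 1, NA, 0,  1, 0, 1, NA,  NA, 1, 0, 1,  0, NA, 1, 0), 4, 4)
facevalue_loglik(eta = c(-1, 0.5), y_obs)
```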
We may also wish to make inference about the design parameter ψ. The likelihood for ψ based on the observed data is any function of ψ proportional to P(D, Yobs|ψ). For designs amenable to the model this is

L[ψ | Yobs = yobs, D = d] ∝ P(D = d | Y = yobs + υ; ψ)
for any choice of υ in 𝒴(yobs). Hence it can be computed without reference to the network model.
4. Exponential family models for networks
The models we consider for the random behavior of Y rely on a p-vector g(Y) of statistics and a parameter vector η ∈ R^p. The canonical exponential family model is

P(Y = y | η) = exp{η · g(y) − κ(η)},  y ∈ 𝒴, (4.1)

where exp{κ(η)} = ∑_{u∈𝒴} exp{η · g(u)} is the familiar normalizing constant associated with an exponential family of distributions [Barndorff-Nielsen (1978); Lehmann (1983)].
The range of network statistics that might be included in the g(y) vector is vast—see Wasserman and Faust (1994) for the most comprehensive treatment of these statistics—though we will consider only a few in this article. We allow the vector g(y) to include covariate information about nodes or edges in the graph in addition to information derived directly from the matrix y itself.
There has been a great deal of work on models of the form (4.1), to which we refer as exponential family random graph models or ERGMs for short. [We avoid the lengthier EFRGM, for “exponential family random graph models,” both for the sake of brevity and because we consider some models in this article that should technically be called curved exponential families Hunter and Handcock (2006).]
The normalizing constant is usually difficult to compute directly for 𝒴 containing large numbers of networks. Inference for this class of models was considered in the seminal paper by Geyer and Thompson (1992), building on the methods of Frank and Strauss (1986) and the above cited papers. Until recently, inference for social network models has relied on maximum pseudolikelihood estimation [Besag (1974); Frank and Strauss (1986); Strauss and Ikeda (1990); Geyer and Thompson (1992)]. Geyer and Thompson (1992) proposed a stochastic algorithm to approximate maximum likelihood estimates for model (4.1), among other models; this Markov chain Monte Carlo (MCMC) approach forms the basis of the method described in this article. The development of these methods for social network data has been considered by Corander, Dahmström and Dahmström (1998); Crouch, Wasserman and Trachtenberg (1998); Snijders (2002); Handcock (2002); Corander, Dahmström and Dahmström (2002); Hunter and Handcock (2006).
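As an indication of how such MCMC-based maximum likelihood fits are obtained in practice, the sketch below uses the statnet suite cited above. The adjacency matrix `adj`, the nodal covariate `x` and the choice of terms are placeholders of ours, and the term and argument names assume a recent version of the ergm package; it is a sketch rather than the exact code used in this paper.

```r
## Minimal sketch: MCMC maximum likelihood estimation for a model of the form (4.1)
## using the ergm package from the statnet suite [Handcock et al. (2003)].
library(ergm)

nw <- network(adj, directed = FALSE)    # 'adj': placeholder 0/1 adjacency matrix
nw %v% "x" <- x                         # attach a placeholder nodal covariate

## g(y): edge count, a transitivity (GWESP) term and a nodal covariate term
fit <- ergm(nw ~ edges + gwesp(0.5, fixed = TRUE) + nodecov("x"))
summary(fit)                            # approximate MLE of eta with standard errors

## Networks can also be simulated from the fitted model (4.1)
sims <- simulate(fit, nsim = 100)
```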
4.1. Likelihood-based inference for ERGM
In this section we consider likelihood inference for η in the case where Y = Yobs + Ymis is possibly only partially observed.
As the direct computation of the likelihood is difficult when the number of networks in 𝒴 is large, we can approximate the likelihood by using the MCMC approach of randomly sampling from the space of possible values of the missing data and taking the mean. Alternatively, consider the conditional distribution of Y given Yobs:

P(Y = yobs + υ | Yobs = yobs; η) = exp[η · g(yobs + υ) − κ(η|yobs)],  υ ∈ 𝒴(yobs),

where exp[κ(η|yobs)] = ∑_{u∈𝒴(yobs)} exp[η · g(u + yobs)]. This formula gives a simple way to sample from the conditional distribution and hence produce multiple imputations of the full data. Specifically, the conditional distribution of Y given Yobs is an ERGM on a constrained space of networks, and hence one can simulate from it using a variant of the standard MCMC for ERGM [Hunter and Handcock (2006); Handcock et al. (2003)] that restricts the proposed networks to the subset of networks that are concordant with the observed data.
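A toy version of such a constrained sampler, written in base R with the same illustrative two-statistic g used earlier rather than the statnet machinery, is sketched below; it simply proposes toggles of unobserved dyads only, so every state of the chain agrees with yobs.

```r
## Minimal sketch (base R): Metropolis sampler for Y given Yobs under an ERGM,
## toggling only the unobserved dyads. 'eta' is the natural parameter and the
## statistic vector g (edges, triangles) is an arbitrary toy choice.
g_stats <- function(y) c(edges = sum(y[upper.tri(y)]),
                         triangles = sum(diag(y %*% y %*% y)) / 6)

impute_missing <- function(eta, y_obs, n_iter = 10000) {
  mis <- which(upper.tri(y_obs) & is.na(y_obs), arr.ind = TRUE)
  y <- y_obs; y[is.na(y)] <- 0                    # start from an arbitrary completion
  for (t in seq_len(n_iter)) {
    k <- sample(nrow(mis), 1)                     # propose toggling one missing dyad
    i <- mis[k, 1]; j <- mis[k, 2]
    y_prop <- y
    y_prop[i, j] <- y_prop[j, i] <- 1 - y[i, j]
    log_ratio <- sum(eta * g_stats(y_prop)) - sum(eta * g_stats(y))
    if (log(runif(1)) < log_ratio) y <- y_prop    # Metropolis accept/reject
  }
  y                     # approximately one draw from P(Y | Yobs = yobs; eta)
}
```

Repeated calls give the multiple imputations of the full data referred to above.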
Also note that

log L[η | Yobs = yobs] − log L[η₀ | Yobs = yobs] = −log E_{η₀}[exp{(η − η₀) · g(Y)}] + log E_{η₀}[exp{(η − η₀) · g(Y)} | Yobs = yobs],

which can then be estimated by MCMC samples: the first term by a chain on the complete data and the second by a chain conditional on yobs. So the sampled data situation is only slightly more difficult than the complete data case.
5. Two-wave link-tracing samples from a collaboration network
In this section we investigate the effect of network sampling on estimation by comparing network samples to the situation where we observe the complete network. Specifically, we consider the collaborative working relations between 36 partners in a New England law firm introduced in Section 1. These data have been studied by many authors including Lazega (2001), Snijders et al. (2006) and Hunter and Handcock (2006) (whom we follow).
We consider an ERGM (4.1) with two network statistics for the direct effects of seniority and practice of the form

g(y) = ∑_{i<j} yij (Xi + Xj),

where Xi is the seniority or practice of partner i. We also consider three dyadic homophily attributes based on practice, gender and office. These are included as three network statistics indicating matches between the two partners in the dyad on the given attribute:

g(y) = ∑_{i<j} yij ℐ(Xi = Xj),
where ℐ(x) indicates the truth of the condition x and Xi and Xj are the practice, gender or office attribute of partner i and j, respectively. We also include statistics that are purely functions of the relations y. These are the number of edges (essentially the density) and the geometrically weighted edgewise shared partner statistic (denoted by GWESP), a measure of the transitivity structure in the network [Snijders et al. (2006)]. The model is a slightly reparameterized form of Model 2 in Hunter and Handcock (2006) obtained by replacing the alternating k-triangle term with the GWESP term. The scale parameter for the GWESP term is fixed at its optimal value (0.7781). See Hunter and Handcock (2006) for details.
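For readers who use the statnet suite, the model just described corresponds to a formula along the following lines. The network object `lawfirm` and its attribute names are placeholders of ours, and the term names assume a recent version of the ergm package, so this is a sketch rather than the exact code used for the results reported below.

```r
## Minimal sketch: the seven-statistic model for the collaboration network in
## ergm notation; 'lawfirm' is a placeholder undirected network carrying the
## nodal attributes "seniority", "practice", "gender" and "office".
library(ergm)

fit <- ergm(lawfirm ~ edges +
              gwesp(0.7781, fixed = TRUE) +    # transitivity (GWESP, fixed scale)
              nodecov("seniority") +           # direct effect of seniority
              nodecov("practice") +            # direct effect of practice
              nodematch("practice") +          # homophily on practice
              nodematch("gender") +            # homophily on gender
              nodematch("office"))             # homophily on office
summary(fit)
```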
As discussed in Hunter and Handcock (2006), this model provides an adequate fit to the data, and we will use it here to assess the effect of sampling on model fit. A summary of the MLE parameters used is given in the complete data value column of Table 2. Note that we are taking these parameters as “truth” and considering data produced by sampling from this network.
Table 2.
Bias and Root Mean Squared Error (RMSE) of natural parameter MLE based on two-wave samples as percentages of true parameter values and efficiency losses
| Natural parameter | Complete data value | Bias (%) | RMSE (%) | Efficiency loss (%) |
|---|---|---|---|---|
| Structural | | | | |
| Edges | −6.51 | 0.2 | 1.2 | 1.7 |
| GWESP | 0.90 | 0.8 | 3.7 | 5.1 |
| Nodal | | | | |
| Seniority | 0.85 | 0.3 | 3.1 | 1.3 |
| Practice | 0.41 | 0.4 | 5.3 | 3.5 |
| Homophily | | | | |
| Practice | 0.76 | 0.8 | 4.3 | 2.9 |
| Gender | 0.70 | 0.9 | 4.7 | 1.7 |
| Office | 1.15 | 0.7 | 2.9 | 2.8 |
We construct all possible datasets produced by a two-wave link-tracing design starting from two randomly chosen nodes (the “seeds”). This adaptive design is amenable to the model. As there are 36 partners and the sample is deterministic given the seeds, there are (36 choose 2) = 630 possible data sets. The number of actors in each dataset varies from just 2 to all 36 depending on the degree of connectedness of the seeds. The data pattern is shown in Figure 1. Consider a partition of the sampled nodes from the nonsampled nodes and the corresponding 2 × 2 blocking of the sociomatrix, with the four blocks representing dyads from sampled and nonsampled nodes to sampled and nonsampled nodes. The complete data consists of the full sociomatrix. The first three blocks contain the observed data, the dyads involving at least one sampled node, and the last block contains the unobserved data, the dyads between nonsampled nodes.
Fig. 1.
Schematic depiction of sampled and unobserved arc data when the sampling is over an undirected network.
For each of these samples we use the methods of Section 4.1 to estimate the parameters. We can then compare them to the MLE for the complete dataset. For these networks, the MLEs are obtained using statnet [Handcock et al. (2003)], both for the natural parametrization and for the mean value parameterization [see Handcock (2003)].
The mean value parameters are a function of the natural parameters, specifically the expected values of the sufficient statistics given the values of the natural parameters.
There are two isolates, that is nodes with no relations. If these two are selected as the two seeds, only 69 of the 630 dyads are observed, and no edges are observed. Therefore, the MLE associated with this sample includes (negative) infinite values, on the boundary of the convex hull. For this reason, we exclude this sample from our analyses. Practically, this exclusion is reasonable in that it is unlikely any researcher drawing a link-tracing sample including only two isolated nodes will proceed with analysis of that sample.
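The construction of the 630 candidate samples can be sketched in base R as follows; `y` stands for the 36 × 36 collaboration sociomatrix and the function name is ours. Each resulting partially observed network is then refit using the methods of Section 4.1.

```r
## Minimal sketch (base R): the two-wave link-tracing sample for every pair of
## seed nodes in a 36-node undirected network 'y', with the observed dyad count.
two_wave_sample <- function(y, seeds) {
  n <- nrow(y)
  s <- as.numeric(seq_len(n) %in% seeds)       # S0: the two seed nodes
  for (wave in 1:2) {                          # two waves of link-tracing
    new <- as.numeric(y %*% s > 0 & s == 0)    # partners not yet sampled
    s   <- s + new
  }
  d <- outer(s, rep(1, n)) + outer(rep(1, n), s) - outer(s, s)
  diag(d) <- 0
  d
}

seed_pairs <- combn(36, 2)                     # the 630 possible seed pairs
n_obs <- apply(seed_pairs, 2, function(seeds) {
  d <- two_wave_sample(y, seeds)
  sum(d[upper.tri(d)])                         # number of observed dyads
})
## The one sample whose seeds are the two isolated partners observes no edges and
## is excluded, leaving 629 samples to be refit as in Section 4.1.
```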
One way to assess the effect of the link-tracing design is to compare the estimates from the sampled data to that of the complete data. As a measure of the difference between the estimates in the metric of the model, we use the Kullback–Leibler divergence from the model implied by the complete data estimate to that of the sampled data estimate. Recall that the Kullback–Leibler divergence of a distribution with probability mass function p from the distribution with probability mass function q is

KL(p, q) = ∑_y p(y) log[p(y) / q(y)].

Let η and ξ be alternative parameters for the model (4.1). The Kullback–Leibler divergence, KL(ξ, η), of the ERGM with parameter η from the ERGM with parameter ξ is

KL(ξ, η) = (ξ − η) · E_ξ[g(Y)] + κ(η) − κ(ξ).
If ξ is the complete data MLE then Eξ[g(Y)] = g(Yobs) are the observed statistics (given in the complete data value column of Table 3). The divergence can be easily computed using the MCMC algorithms of Section 4.1.
Figure 2 plots the Kullback–Leibler divergence of the MLEs based on the 629 samples from the complete data MLE. The Kullback–Leibler divergences of the two smallest samples, each including only 5 nodes (165 dyads), are about 14 and have not been plotted to reduce the vertical scale. The horizontal axis is the number of observed dyads in the sample. The plot indicates how the information in the data about the complete data MLE approaches that of the complete data as the number of sampled dyads approaches the full number. The key feature of this figure is the variation in information content among samples of the same size, especially for the smaller sample sizes. Different seeds lead to samples that tell us different things about the model even when the number of partners surveyed is the same.
Fig. 2.
Kullback–Leibler divergence of the MLEs based on the samples compared to the complete data MLE. As the number of dyads sampled increases, the information content of the samples approaches that of the complete data. The information loss for the majority of samples is modest.
For more specific information on the individual estimates, we can compute the bias of the estimates based on the samples as the mean difference between the parameter estimates from the samples and that of the complete network. The root mean squared error (RMSE) is the square root of the mean of the squared difference between the parameter estimates from each sample and the complete data estimates. The efficiency loss of the sampled estimate is the ratio of the mean squared error to the variance of the sampling distribution of the estimate based on the full data. This standardizes the error in the sampled estimates by the variation in the complete data estimates. We also conduct a similar comparison of the estimates under the alternative mean value parametrization [Handcock (2003)].
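Given a matrix of sampled-data estimates, these summaries can be computed as in the following base-R sketch. The variable names are placeholders of ours, and the formulas simply transcribe the verbal definitions above, using absolute true values for the percentages.

```r
## Minimal sketch (base R): summaries of the kind reported in Tables 2 and 3.
## 'est': matrix of estimates (one row per sample, one column per parameter);
## 'truth': complete-data estimates; 'full_var': variances of the complete-data MLE.
bias_pct <- 100 * (colMeans(est) - truth) / abs(truth)
rmse_pct <- 100 * sqrt(colMeans(sweep(est, 2, truth)^2)) / abs(truth)
eff_loss <- 100 * colMeans(sweep(est, 2, truth)^2) / full_var
round(cbind(bias_pct, rmse_pct, eff_loss), 1)
```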
The properties of the natural parameter estimates are summarized in Table 2. The bias and root mean squared error are presented in percentages of the complete data parameter estimates.
The bias is very small and the RMSE is modest. The efficiency loss is 2%–3% on average. Note that these population-average figures obscure the variation in loss over individual samples apparent in Figure 2.
Table 3 is the mean value parameterization analog of Table 2. As these parameters are on the same measurement scale as the statistics, they are easier to interpret. Again we see that the estimates are approximately unbiased and the RMSE and efficiency losses are small.
Table 3.
Bias and Root Mean Squared Error (RMSE) of mean value parameter MLE based on two-wave samples as percentages of true parameter values and efficiencies
| Mean value parameter | Complete data value | Bias (%) | RMSE (%) | Efficiency loss (%) |
|---|---|---|---|---|
| Structural | | | | |
| Edges | 115.00 | 0.4 | 2.0 | 1.8 |
| GWESP | 190.31 | 0.4 | 2.8 | 1.9 |
| Nodal | | | | |
| Seniority | 130.19 | 0.3 | 1.8 | 1.4 |
| Practice | 129.00 | 0.2 | 2.6 | 3.4 |
| Homophily | | | | |
| Practice | 72.00 | 0.1 | 2.0 | 1.7 |
| Gender | 99.00 | 0.5 | 2.1 | 1.8 |
| Office | 85.00 | 0.7 | 2.7 | 3.0 |
6. Discussion
In this paper we give a concise and systematic statistical framework for dealing with partially observed network data resulting from a designed sample. The framework includes, but is not restricted to, adaptive network sampling designs. We present a definition of a network design which is amenable to a given model and a result on likelihood-based inference under such designs.
An important simple result of this framework is that sampled networks are not “biased” but can be representative if analyzed correctly. Many authors have confused the ideas of simple random sampling of the dyads with representative designs. The results of this paper indicate that simple random sampling is not necessary for valid inference. In fact, the most commonly used designs can be easily taken into account. Hence, despite their form, inference from adaptive network samples is tractable.
It is illustrative to compare our approach to that of Stumpf, Wiuf and May (2005). These authors highlight the difference between the structure of a network and that of a sub-network induced by Bernoulli sampling of its nodes. The framework in this paper allows valid inference for the properties of the network based on its partial observation. This is because we fit a broad class of models compatible with an arbitrary set of network statistics (e.g., ERGM) for the complete network and use a method of inference that does not rely on equality between the structure of the full and sub-networks. As illustrated by the work of Stumpf, Wiuf and May (2005), treating the observed portion as if it were the full network may lead to invalid inference about characteristics of the full network such as the degree distribution.
We have also shown that likelihood-based inference from an adaptive network sample can be conducted using a complete network model. We have shown that such inference is both principled and practical. The likelihood framework naturally accommodates standard sampling designs. Note that in a design-based frame, principled inference would require a great deal of effort to precisely characterize the sampling designs. The result that link-tracing designs are adaptive and can be analyzed with complex likelihood-based methods is very valuable in practice, as these designs have previously not been analyzed with general exponential family random graph (or similar) models. The only prior work appears to be that of Thompson and Frank (2000), who applied a less complex model class.
In our application we show that an adaptive network sampling of a collaboration network can lead to effective estimates of the model parameters in the vast majority of cases. We find that the MLEs from the samples have only modest bias (compared to the complete data estimate) and an error that only increases slowly with the number of unobserved dyads. We also show that the information content of the sample (with respect to the model), varies greatly even for samples of the same size. For conventional samples of i.i.d. random variables, the Fisher information is simply proportional to the sample size. In the network setting with dependence terms, however, the Fisher information will depend on the specific set of nodes and dyads sampled. For example, the information component corresponding to the GWESP term in the example will be larger for samples in which more pairs of nodes joined by edges are sampled, as GWESP applies only to pairs of nodes joined by edges. If no such dyads were sampled, there would be no information in the sample about the propensity for nodes joined by edges to have relations in common.
In practice the sample is a result of a combination of the sampling design and an out-of-design mechanism. The sampling design is the part of the observation process under the control of the surveyor. When adaptive designs are executed faithfully, the unknown dyads are assumed to be intentionally unobserved, or missing by design. Note that the definition of control may be extended to nonamenable sampling designs, for example by allowing the design to depend on unknown factors, such as the unrecorded values of variables used for stratification. The out-of-design mechanism is the nonintentional nonobservation of network information (e.g., due to the failure to report links, incomplete measurement of links and attrition from longitudinal surveys). This is also referred to, in general, as the non-response mechanism. We consider the joint effect of sampling and missing data in a companion paper [Handcock and Gile (2007)].
Supplementary Material
Acknowledgments
The authors would like to thank the members of the UW Network Modeling Group (Martina Morris, P. I.), Stephen Fienberg and the reviewers for their helpful input.
Footnotes
Supported by NIH Grants R01 DA012831 and R01 HD041877, NSF Grant MMS-0851555 and ONR Grant N00014-08-1-1015.
Supplement: Software used in the simulation study (DOI: 10.1214/08-AOAS221SUPP; .zip). The code used to perform this study is written in the R statistical language [R Development Core Team (2007)] and is based on statnet, an open-source software suite for network modeling [Handcock et al. (2003)]. We provide the code and documentation for it with links to the statnet website.
Contributor Information
Mark S. Handcock, Department of Statistics, University of California, Los Angeles, California 90095-1554, USA, handcock@stat.washington.edu
Krista J. Gile, Nuffield College, University of Oxford, New Road, Oxford OX1 1NF, United Kingdom, krista.gile@nuffield.ox.ac.uk
REFERENCES
- Barndorff-Nielsen OE. Information and Exponential Families in Statistical Theory. New York: Wiley; 1978. MR0489333.
- Besag J. Spatial interaction and the statistical analysis of lattice systems (with discussion). J. Roy. Statist. Soc. Ser. B. 1974;36:192–236. MR0373208.
- Corander J, Dahmström K, Dahmström P. Maximum likelihood estimation for Markov graphs. Research report. Stockholm: Dept. Statistics, Stockholm Univ.; 1998.
- Corander J, Dahmström K, Dahmström P. Maximum likelihood estimation for exponential random graph models. In: Hagberg J, editor. Contributions to Social Network Analysis, Information Theory, and Other Topics in Statistics; A Festschrift in Honour of Ove Frank. Stockholm: Dept. Statistics, Stockholm Univ.; 2002. pp. 1–17.
- Crouch B, Wasserman S, Trachtenberg F. Markov chain Monte Carlo maximum likelihood estimation for p* social network models. The XVIII International Sunbelt Social Network Conference; Sitges, Spain. 1998.
- Frank O. Network Sampling and Model Fitting. In: Carrington PJ, Scott J, Wasserman S, editors. Models and Methods in Social Network Analysis. Cambridge: Cambridge Univ. Press; 2005. pp. 31–56.
- Frank O, Strauss D. Markov Graphs. J. Amer. Statist. Assoc. 1986;81:832–842. MR0860518.
- Geyer CJ, Thompson EA. Constrained Monte Carlo maximum likelihood calculations (with discussion). J. Roy. Statist. Soc. Ser. B. 1992;54:657–699. MR1185217.
- Handcock MS. Degeneracy and inference for social network models. The Sunbelt XXII International Social Network Conference; New Orleans, LA. 2002.
- Handcock MS. Assessing degeneracy in statistical models of social networks. Working Paper 39, Center for Statistics and the Social Sciences, Univ. Washington; 2003. Available at http://www.csss.washington.edu/Papers.
- Handcock MS, Gile KJ. Modeling social networks with sampled or missing data. Working Paper 75, Center for Statistics and the Social Sciences, Univ. Washington; 2007. Available at http://www.csss.washington.edu/Papers.
- Handcock MS, Gile KJ. Supplement to “Modeling social networks from sampled data.” 2010. DOI: 10.1214/08-AOAS221SUPP.
- Handcock MS, Hunter DR, Butts CT, Goodreau SM, Morris M. statnet: Software tools for the statistical modeling of network data. Statnet Project, Seattle, WA; 2003. R package version 2.0. Available at http://statnet.org/ and http://CRAN.R-project.org/package=statnet.
- Hunter DR, Handcock MS. Inference in curved exponential family models for networks. J. Comput. Graph. Statist. 2006;15:565–583. MR2291264.
- Lazega E. The Collegial Phenomenon: The Social Mechanisms of Cooperation Among Peers in a Corporate Law Partnership. Oxford: Oxford Univ. Press; 2001.
- Lehmann EL. Theory of Point Estimation. New York: Wiley; 1983. MR0702834.
- R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2007. ISBN 3-900051-07-0, Version 2.6.1. Available at http://www.R-project.org/.
- Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55. MR0742974.
- Rubin DB. Inference and missing data. Biometrika. 1976;63:581–592. MR0455196.
- Särndal C-E, Swensson B, Wretman J. Model Assisted Survey Sampling. New York: Springer; 1992. MR1140409.
- Snijders TAB. Estimation on the basis of snowball samples: How to weight. Bulletin de Méthodologie Sociologique. 1992;36:59–70.
- Snijders TAB. Markov chain Monte Carlo estimation of exponential random graph models. Journal of Social Structure. 2002;3:1–41.
- Snijders TAB, Pattison PE, Robins GL, Handcock MS. New specifications for exponential random graph models. Sociological Methodology. 2006;36:99–153.
- Strauss D, Ikeda M. Pseudolikelihood estimation for social networks. J. Amer. Statist. Assoc. 1990;85:204–212. MR1137368.
- Stumpf MPH, Wiuf C, May RM. Subnets of scale-free networks are not scale-free: Sampling properties of networks. Proc. Natl. Acad. Sci. USA. 2005;102:4221–4224. DOI: 10.1073/pnas.0501179102.
- Thompson SK, Collins LM. Adaptive sampling in research on risk-related behaviors. Drug and Alcohol Dependence. 2002;68:S57–S67. DOI: 10.1016/s0376-8716(02)00215-6.
- Thompson SK, Frank O. Model-based estimation with link-tracing sampling designs. Survey Methodology. 2000;26:87–98.
- Thompson SK, Seber GAF. Adaptive Sampling. New York: Wiley; 1996. MR1390995.
- Wasserman S, Faust K. Social Network Analysis: Methods and Applications. Cambridge: Cambridge Univ. Press; 1994.