Author manuscript, available in PMC 2019 Dec 28. Published in final edited form as: Uncertainty in Artificial Intelligence (UAI), 2019 Jul; 2019:428.

Identification In Missing Data Models Represented By Directed Acyclic Graphs

Rohit Bhattacharya†,*, Razieh Nabi†,*, Ilya Shpitser, James M. Robins

Abstract

Missing data is a pervasive problem in data analyses, resulting in datasets that contain censored realizations of a target distribution. Many approaches to inference on the target distribution using censored observed data rely on missing data models represented as a factorization with respect to a directed acyclic graph. In this paper we consider the identifiability of the target distribution within this class of models, and show that the most general identification strategies proposed so far retain a significant gap, in that they fail to identify a wide class of identifiable distributions. To address this gap, we propose a new algorithm that significantly generalizes the types of manipulations used in the ID algorithm [14, 16], developed in the context of causal inference, in order to obtain identification.

1. INTRODUCTION

Missing data is ubiquitous in applied data analyses, resulting in target distributions that are systematically censored by a missingness process. A common modeling approach assumes data entries are censored in a way that does not depend on the underlying missing data, known as the missing completely at random (MCAR) model, or depends only on observed values in the data, known as the missing at random (MAR) model. These simple models are insufficient, however, in problems where missingness status may depend on underlying values that are themselves censored. This type of missingness is known as missing not at random (MNAR) [9, 10, 17].

While the underlying target distribution is often not identified from observed data under MNAR, there exist identified MNAR models. These include the permutation model [9], the discrete choice model [15], the no self-censoring model [11, 12], the block-sequential MAR model [18], and others. Restrictions defining many, but not all, of these models may be represented by a factorization of the full data law (consisting of both the target distribution and the missingness process) with respect to a directed acyclic graph (DAG).

The problem of identification of the target distribution from the observed distribution in missing data DAG models bears many similarities to the problem of identification of interventional distributions from the observed distribution in causal DAG models with hidden variables. This observation prompted recent work [3, 4, 13] on adapting identification methods from causal inference to identifying target distributions in missing data models.

In this paper we show that the most general currently known methods for identification in missing data DAG models retain a significant gap, in the sense that they fail to identify the target distribution in many models where it is identified. We show that methods used to obtain a complete characterization of identification of interventional distributions, via the ID algorithm [14, 16], or their simple generalizations [3, 4, 13], are insufficient on their own for obtaining a similar characterization for missing data problems. We describe, via a set of examples, that in order to be complete, an identification algorithm for missing data must recursively simplify the problem by removing sets of variables, rather than single variables, and these must be removed according to a partial order, rather than a total order. Furthermore, the algorithm must be able to handle subproblems where selection bias or hidden variables, or both, are present even if these complications are missing in the original problem. We develop a new general algorithm that exploits these observations and significantly narrows the identifiability gap in existing methods. Finally, we show that in certain classes of missing data DAG models, our algorithm takes on a particularly simple formulation to identify the target distribution.

Our paper is organized as follows. In section 2, we introduce the necessary preliminaries from the graphical causal inference literature. In section 3 we introduce missing data models represented by DAGs. In section 4, we illustrate, via examples, that existing identification strategies based on simple generalizations of causal inference methods are not sufficient for identification in general, and describe generalizations needed for identification in these examples. In section 5, we give a general identification algorithm which incorporates techniques needed to obtain identification in the examples we describe. Section 6 contains our conclusions. We defer longer proofs to the supplement in the interests of space.

2. PRELIMINARIES

Many techniques useful for identification in missing data contexts were first derived in causal inference. Causal inference is concerned with expressing counterfactual distributions, obtained after an intervention operation, in terms of the observed data distribution, using constraints embedded in a causal model, often represented by a DAG.

A DAG is a graph G with a vertex set V connected by directed edges such that there are no directed cycles in the graph. A statistical model of a DAG G is the set of distributions p(V) such that p(V) = ∏V∈V p(V | paG(V)), where paG(V) is the set of parents of V in G. Causal models of a DAG are also sets of distributions, but on counterfactual random variables. Given Y ∈ V and A ⊆ V \ {Y}, a counterfactual variable, or potential outcome, written as Y(a), represents the value of Y in a hypothetical situation where A were set to values a by an intervention operation [6]. Given a set Y ⊆ V, define Y(a) ≡ {Y(a) | Y ∈ Y}. The distribution p(Y(a)) is sometimes written as p(Y | do(a)) [6].

A causal parameter is said to be identified in a causal model if it is a function of the observed data distribution p(V). Otherwise the parameter is said to be non-identified. In all causal models of a DAG G that are typically used, all interventional distributions p({V \ A}(a)) are identified by the g-formula [8]:

p({V \ A}(a)) = ∏V∈V\A p(V | paG(V)) |A=a.   (1)
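As a quick illustration (our own toy example, not from the paper), the following sketch evaluates the g-formula (1) on the hypothetical DAG A → M → Y with binary variables; all probability values are invented.

```python
# A minimal sketch of the g-formula (Eq. 1) on the toy DAG A -> M -> Y.
# The CPT values below are made up for illustration.
from itertools import product

p_A = {0: 0.6, 1: 0.4}                      # p(A); dropped once A is intervened on
p_M_given_A = {(0, 0): 0.7, (1, 0): 0.3,    # p(M | A): keys are (m, a)
               (0, 1): 0.2, (1, 1): 0.8}
p_Y_given_M = {(0, 0): 0.9, (1, 0): 0.1,    # p(Y | M): keys are (y, m)
               (0, 1): 0.4, (1, 1): 0.6}

def g_formula(a):
    """p({M, Y}(a)) = p(M | A = a) p(Y | M): the factor for the intervened
    variable A is removed, and the remaining factors are evaluated at A = a."""
    return {(m, y): p_M_given_A[(m, a)] * p_Y_given_M[(y, m)]
            for m, y in product([0, 1], repeat=2)}

# Marginal interventional distribution p(Y(a = 1))
joint = g_formula(1)
p_Y_do1 = {y: sum(v for (m, yy), v in joint.items() if yy == y) for y in [0, 1]}
print(p_Y_do1)   # {0: 0.5, 1: 0.5}
```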

If a causal model contains hidden variables, only data on the observed marginal distribution is available. In this case, not every interventional distribution is identified, and identification theory becomes more complex. A general algorithm for identification of causal effects in this setting was given in [16], and proven complete in [14, 1]. Here, we describe a simple reformulation of this algorithm as a truncated nested factorization analogous to the g-formula, phrased in terms of kernels and mixed graphs recursively defined via a fixing operator [7]. As we will see, many of the techniques developed for identification in the presence of hidden variables will need to be employed (and generalized) for missing data, even if no variables are completely hidden.

We describe acyclic directed mixed graphs (ADMGs) obtained from a hidden variable DAG by a latent projection operation in section 2.1, and a nested factorization associated with these ADMGs in section 2.2. This factorization is formulated in terms of conditional ADMGs and kernels (described in section 2.2.1), via the fixing operator (described in section 2.2.2). The truncated nested factorization that yields all identifiable functions for interventional distributions is described in section 2.3.

As a prelude to the rest of the paper, we introduce the following notation for some standard genealogic sets of a graph G with a set of vertices V: parents paG(V) ≡ {U ∈ V | U → V}, children chG(V) ≡ {U ∈ V | V → U}, descendants deG(V) ≡ {U ∈ V | V → ⋯ → U}, ancestors anG(V) ≡ {U ∈ V | U → ⋯ → V}, and non-descendants ndG(V) ≡ V \ deG(V). A district D is defined as a maximal set of vertices that are pairwise connected by bidirected paths (paths containing only ↔ edges). We denote the district of V as disG(V), and the set of all districts in G as D(G). By convention, for any V, disG(V) ∩ deG(V) ∩ anG(V) = {V}. Finally, the Markov blanket mbG(V) ≡ disG(V) ∪ paG(disG(V)) is defined as the set that gives rise to the following independence relation through m-separation: V ⊥⊥ ndG(V) \ mbG(V) | mbG(V) [7]. The above definitions apply disjunctively to sets of variables S ⊆ V; e.g. paG(S) = ∪S∈S paG(S).
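The district and Markov blanket computations above can be sketched directly from these definitions. The snippet below is our own illustration on a hypothetical three-vertex ADMG; note that it drops V from its own Markov blanket, which is the usual convention.

```python
# A small sketch of districts and Markov blankets for an ADMG stored as two
# edge sets. The graph A -> M -> Y with A <-> Y is hypothetical.
directed = {("A", "M"), ("M", "Y")}          # A -> M -> Y
bidirected = {frozenset({"A", "Y"})}         # A <-> Y
vertices = {"A", "M", "Y"}

def parents(v):
    return {x for (x, y) in directed if y == v}

def district(v):
    """All vertices reachable from v along paths made only of <-> edges."""
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for e in bidirected:
            if u in e:
                w = next(iter(e - {u}))
                if w not in seen:
                    seen.add(w); stack.append(w)
    return seen

def markov_blanket(v):
    """disG(v) together with the parents of that district, with v itself dropped."""
    dis = district(v)
    return (dis | {p for d in dis for p in parents(d)}) - {v}

print(district("Y"))         # {'A', 'Y'}
print(markov_blanket("Y"))   # {'A', 'M'}
```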

2.1. LATENT PROJECTION ADMGS

Given a DAG G(V ∪ H), where V are observed and H are hidden variables, the latent projection G(V) is the following ADMG with vertex set V. An edge A → B exists in G(V) if there exists a directed path from A to B in G(V ∪ H) with all intermediate vertices in H. Similarly, an edge A ↔ B exists in G(V) if there exists a path from A to B without consecutive edges → ○ ←, with the first edge on the path of the form A ←, the last edge on the path of the form → B, and all intermediate vertices on the path in H. A latent projection represents an infinite class of hidden variable DAGs that share identification theory. Thus, identification algorithms are typically defined on latent projections for simplicity.
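A latent projection can be computed directly from these two rules. The sketch below is our own reading of the definition: a directed edge is added when an observed vertex reaches another through hidden vertices only, and a bidirected edge is added when two observed vertices share a hidden source reached through hidden vertices only (the collider-free path in the definition always has this shape). The four-vertex DAG is invented.

```python
# A sketch of the latent projection of a hidden-variable DAG.
from itertools import combinations

edges = {("U", "A"), ("U", "B"), ("A", "M"), ("M", "B")}   # hypothetical DAG
hidden = {"U", "M"}
observed = {"A", "B"}

def hidden_reachable(src):
    """Vertices reachable from src by directed paths whose intermediate
    vertices are all hidden."""
    out, stack = set(), [src]
    while stack:
        v = stack.pop()
        for (x, y) in edges:
            if x == v and y not in out:
                out.add(y)
                if y in hidden:
                    stack.append(y)
    return out

def latent_projection():
    directed, bidirected = set(), set()
    for a in observed:                       # A -> B edges of G(V)
        directed |= {(a, b) for b in hidden_reachable(a) & observed}
    for h in hidden:                         # A <-> B edges of G(V)
        reach = sorted(hidden_reachable(h) & observed)
        bidirected |= {frozenset(p) for p in combinations(reach, 2)}
    return directed, bidirected

print(latent_projection())
# ({('A', 'B')}, {frozenset({'A', 'B'})}): A -> B via hidden M, A <-> B via hidden U
```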

2.2. NESTED FACTORIZATION

The nested factorization of p(V) with respect to an ADMG G(V) is defined on kernel objects derived from p(V) and conditional ADMGs derived from G(V). The derivations are via a fixing operation, which can be causally interpreted as a single application of the g-formula on a single variable (to either a graph or a kernel) to obtain another graph or another kernel.

2.2.1. Conditional Graphs And Kernels

A conditional acyclic directed mixed graph (CADMG) G(V,W) is an ADMG in which the nodes are partitioned into W, representing fixed variables, and V, representing random variables. Only outgoing directed edges may be adjacent to variables in W.

A kernel qV(V | W) is a mapping from values in W to normalized densities over V [2]. In other words, kernels act like conditional distributions in the sense that ∑v qV(v | w) = 1 for every value w of W. Conditioning and marginalization in kernels are defined in the usual way. For A ⊆ V, we define q(A | W) ≡ ∑V\A q(V | W) and q(V \ A | A, W) ≡ q(V | W) / q(A | W).
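Since kernels behave like conditional distributions, marginalization and conditioning can be sketched as simple dictionary operations; the kernel below, over two binary variables V1, V2 given a binary context W, is made up for illustration.

```python
# Kernel q(V1, V2 | W) as a dict {((v1, v2), w): probability}; each context w
# sums to one, as required of a kernel.
from collections import defaultdict

q = {((v1, v2), w): p
     for w, table in {0: {(0, 0): .1, (0, 1): .2, (1, 0): .3, (1, 1): .4},
                      1: {(0, 0): .4, (0, 1): .1, (1, 0): .1, (1, 1): .4}}.items()
     for (v1, v2), p in table.items()}

def marginalize_v2(q):
    """q(V1 | W) = sum over V2 of q(V1, V2 | W)."""
    out = defaultdict(float)
    for ((v1, v2), w), p in q.items():
        out[(v1, w)] += p
    return dict(out)

def condition_on_v1(q):
    """q(V2 | V1, W) = q(V1, V2 | W) / q(V1 | W)."""
    marg = marginalize_v2(q)
    return {((v1, v2), w): p / marg[(v1, w)] for ((v1, v2), w), p in q.items()}

print(marginalize_v2(q)[(0, 1)])   # q(V1 = 0 | W = 1) = 0.5
```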

2.2.2. Fixability And Fixing

A variable V ∈ V in a CADMG G is fixable if deG(V) ∩ disG(V) = {V}. In other words, V is fixable if paths V ↔ ⋯ ↔ U and V → ⋯ → U do not both exist in G for any U ∈ V \ {V}. Given a CADMG G(V, W) and V ∈ V fixable in G, the fixing operator ϕV(G) yields a new CADMG G(V \ {V}, W ∪ {V}), where all edges with arrowheads into V are removed, and all other edges in G are kept. Similarly, given a CADMG G(V, W), a kernel qV(V | W), and V ∈ V fixable in G, the fixing operator ϕV(qV; G) yields a new kernel qV\{V}(V \ {V} | W ∪ {V}) ≡ qV(V | W) / qV(V | ndG(V), W). Fixing is an operation in which we divide a kernel by a conditional kernel. In some cases this operates as a conditioning operation, in other cases as a marginalization operation, and in yet other cases as neither, depending on the structure of the kernel being divided.
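At the graph level, the fixability test and the operator ϕV can be sketched as follows; the CADMG data structure and the example graph are our own, and the kernel-level division is not implemented here.

```python
# A graph-level sketch of fixability and the fixing operator phi_v.
class CADMG:
    def __init__(self, random, fixed, directed, bidirected):
        self.V, self.W = set(random), set(fixed)
        self.di = set(directed)                       # directed edges (x -> y)
        self.bi = {frozenset(e) for e in bidirected}  # bidirected edges x <-> y

    def descendants(self, v):
        out, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for (x, y) in self.di:
                if x == u and y not in out:
                    out.add(y); stack.append(y)
        return out

    def district(self, v):
        out, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for e in self.bi:
                if u in e:
                    w = next(iter(e - {u}))
                    if w in self.V and w not in out:
                        out.add(w); stack.append(w)
        return out

    def fixable(self, v):
        return v in self.V and self.descendants(v) & self.district(v) == {v}

    def fix(self, v):
        """phi_v(G): drop every edge with an arrowhead at v and move v into W."""
        assert self.fixable(v)
        return CADMG(self.V - {v}, self.W | {v},
                     {(x, y) for (x, y) in self.di if y != v},
                     {e for e in self.bi if v not in e})

# The front-door-style ADMG A -> M -> Y with A <-> Y, used in later sketches:
G = CADMG({"A", "M", "Y"}, set(), {("A", "M"), ("M", "Y")}, [("A", "Y")])
print(G.fixable("M"), G.fixable("A"))   # True False
```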

For a set SV in a CADMG G, if all vertices in S can be ordered into a sequence σS = 〈S1, S2, … 〉 such that S1 is fixable in G, S2 in ϕS1(G), etc., S is said to be fixable in G, V \ S is said to be reachable in G, and σS is said to be valid. A reachable set C is said to be intrinsic if GC has a single district, where GC is the induced subgraph where we keep all vertices in C and edges whose endpoints are in C. We will define ϕσS(G) and ϕσS(qV;G) via the usual function composition to yield operators that fix all elements in S in the order given by σS.
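Building on the CADMG sketch above, a valid fixing sequence for a set S can be found greedily, since fixing any currently fixable element of a fixable set is known not to block the rest of the sequence; under that assumption, checking whether a set is intrinsic is a short test on top of it.

```python
# Greedy search for a valid fixing sequence, plus an intrinsic-set test.
def fix_set(G, S):
    """Return (phi_S(G), a valid order) if S is fixable in G, else None."""
    S, order = set(S), []
    while S:
        v = next((v for v in S if G.fixable(v)), None)
        if v is None:
            return None
        G, S = G.fix(v), S - {v}
        order.append(v)
    return G, order

def is_intrinsic(G, C):
    """C is intrinsic if it is reachable (G.V - C is fixable) and the induced
    subgraph over C has a single district."""
    C = set(C)
    res = fix_set(G, G.V - C)
    if res is None:
        return False
    H, _ = res
    return H.district(next(iter(C))) == C

print(is_intrinsic(G, {"A", "Y"}))   # True: {M} is fixable and A <-> Y is one district
```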

The distribution p(V) is said to obey the nested factorization for an ADMG G if there exists a set of kernels {qC(C | paG(C)) | C is intrinsic in G} such that for every fixable S, and any valid σS, ϕσS(p(V); G) = ∏D∈D(ϕσS(G)) qD(D | paϕσS(G)(D)). All valid fixing sequences for S yield the same CADMG G(V \ S, S), and if p(V) obeys the nested factorization for G, all valid fixing sequences for S yield the same kernel. As a result, for any valid sequence σ for S, we will redefine the operator ϕσ, for both graphs and kernels, to be ϕS. In addition, it can be shown that the above kernel set is characterized as: {qC(C | paG(C)) | C is intrinsic in G} = {ϕV\C(p(V); G) | C is intrinsic in G} [7]. Thus, we can re-express the above nested factorization as stating that for any fixable set S, we have ϕS(p(V); G) = ∏D∈D(ϕS(G)) ϕV\D(p(V); G).

An important result in [7] states that if p(V ∪ H) obeys the factorization for a DAG G with vertex set V ∪ H, then p(V) obeys the nested factorization for the latent projection ADMG G(V).

2.3. IDENTIFICATION AS A TRUNCATED NESTED FACTORIZATION

For any disjoint subsets Y, A of V in a latent projection G(V) representing a causal DAG G(V ∪ H), define Y* ≡ anG(V)V\A(Y), the set of ancestors of Y in the subgraph of G(V) induced on V \ A. Then p(Y(a)) is identified from p(V) in G if and only if every set D ∈ D(G(V)Y*) is intrinsic, where G(V)Y* is the subgraph of G(V) induced on Y*. If identification holds, we have:

p(Y(a)) = ∑Y*\Y ∏D∈D(G(V)Y*) ϕV\D(p(V); G(V)) |A=a.

In other words, p(Y(a)) is identified if and only if it can be expressed as a factorization, where every piece corresponds to a kernel associated with a set intrinsic in G(V). Moreover, no term in this factorization contains elements of A as random variables, just as was the case in (1). The above provides a concise formulation of the ID algorithm [16, 14] in terms of the nested Markov model which contains the causal model of the observed distribution.
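The identification test just described can be sketched on top of the earlier CADMG and is_intrinsic snippets: compute Y* as the ancestors of Y in the induced subgraph over V \ A, split Y* into districts, and check that every district is intrinsic. The sketch decides identifiability only; it does not assemble the identifying functional.

```python
# A sketch of the identifiability test for p(Y(a)) in a latent projection.
def ancestors(G, ys, allowed):
    out = set(ys) & allowed
    stack = list(out)
    while stack:
        v = stack.pop()
        for (x, y) in G.di:
            if y == v and x in allowed and x not in out:
                out.add(x); stack.append(x)
    return out

def identifiable(G, Y, A):
    allowed = G.V - set(A)
    y_star = ancestors(G, Y, allowed)
    # induced subgraph of G over Y*
    H = CADMG(y_star, set(),
              {(x, y) for (x, y) in G.di if {x, y} <= y_star},
              {e for e in G.bi if e <= y_star})
    rest, districts = set(y_star), []
    while rest:
        d = H.district(next(iter(rest)))
        districts.append(d); rest -= d
    return all(is_intrinsic(G, d) for d in districts)

print(identifiable(G, {"Y"}, {"A"}))   # True: this is the front-door graph
```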

If Y = {Y} and A = paG(Y), then the above truncated factorization has a simpler form:

p(Y(a)) = ϕV\{Y}(p(V); G) |A=a.

In words, to identify the interventional distribution of Y where all parents (direct causes) A of Y are set to values a, we must find a total ordering on variables other than Y (V \ {Y}) that forms a valid fixing sequence. If such an ordering exists, the identifying functional is found from p(V) by applying the fixing operator to each variable in succession, in accordance with this ordering. Fig. 1 shows the identification of the functional p(Y (a)) following a total ordering of fixing M, B, A.

Figure 1: Identification of p(Y(a)) by following a total order of valid fixing operations.

Before generalizing these tools to the identification of missing data models, we first introduce the representation of these models using DAGs.

3. MISSING DATA MODELS OF A DAG

Missing data models are sets of full data laws (distributions) p(X(1), O, R) composed of the target law p(X(1), O), and the nuisance law p(R | X(1), O) defining the missingness process. The target law is over a set X(1) ≡ {X1(1), …, Xk(1)} of random variables that are potentially missing, and a set O ≡ {O1, …, Om} of random variables that are always observed. The nuisance law defines the behavior of missingness indicators R ≡ {R1, …, Rk} given values of missing and observed variables. Each missing variable Xi(1) ∈ X(1) has a corresponding observed proxy variable Xi, defined as Xi ≡ Xi(1) if Ri = 1, and Xi ≡ “?” if Ri = 0 (this is the missing data analogue of the consistency property in causal inference). As a result, the observed data law in missing data problems is p(R, O, X), while some function of the target law p(X(1), O), as its name implies, is the target of inference. The goal in missing data problems is to estimate the latter from the former. By the chain rule of probability,

p(X(1), O) = p(X, O, R = 1) / p(R = 1 | X(1), O).   (2)

In other words, p(X(1), O) is identified from the observed data law p(R, O, X) if and only if p(R = 1|X(1), O) is. In general, p(X(1)) is not identified from the observed data law, unless sufficient restrictions are placed on the full data law defining the missing data model.
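As a toy numeric illustration of (2) (our own example, not from the paper), consider a single binary X(1) censored completely at random; the numbers are invented, and the point is only that dividing the complete-case distribution by the propensity recovers the target law.

```python
# Identity (2) on a one-variable MCAR example with made-up numbers.
p_X1 = {0: 0.3, 1: 0.7}          # target law p(X^(1)) (unknown in practice)
p_R1 = 0.8                       # MCAR propensity p(R = 1)

# Observed-data quantities we could actually estimate:
p_X_R1 = {x: p_X1[x] * p_R1 for x in p_X1}    # p(X = x, R = 1)
p_R1_hat = sum(p_X_R1.values())               # p(R = 1), identified under MCAR

# Identity (2): p(X^(1)) = p(X, R = 1) / p(R = 1 | X^(1)) = p(X, R = 1) / p(R = 1)
recovered = {x: p_X_R1[x] / p_R1_hat for x in p_X_R1}
print(recovered)   # {0: 0.3, 1: 0.7}, matching p_X1
```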

Many popular missing data models may be represented as a factorization of the full data law with respect to a DAG [4]. These include the permutation model, the monotone MAR model, the block sequential MAR model, and certain submodels of the no self-censoring model [9, 12, 18].

Given a set of full data laws p(X(1), O, R), a DAG G with the following properties may be used to represent a missing data model: G has a vertex set X(1) ∪ O ∪ R ∪ X; for each Xi ∈ X, paG(Xi) = {Ri, Xi(1)}; and for each Ri ∈ R, deG(Ri) ∩ (X(1) ∪ O) = ∅. Given a DAG G with the above properties, the missing data model associated with G is the set of distributions p(X(1), O, R) that can be written as

∏Xi∈X p(Xi | Ri, Xi(1)) ∏V∈X(1)∪O∪R p(V | paG(V)),   (3)

where the factors of the form p(Xi | Ri, Xi(1)) are deterministic, in order to remain consistent with the definition of Xi. Note that by standard results on DAG models, conditional independences in p(X(1), O, R) may be read off from G by the d-separation criterion [5].

4. EXAMPLES OF IDENTIFIED MODELS

In this section, we describe a set of examples of missing data models that factorize as in (3) for different DAGs, where the target law is identified. We start with simpler examples where sequential fixing techniques from causal inference suffice to obtain identification, then move on to describe more complex examples where existing algorithms in the literature suffice, and finally proceed to examples where no published method known to us obtains identification, illustrating an identifiability gap in existing methods. In these examples, we show how identification may be obtained by appropriately generalizing existing techniques. In these discussions, we concentrate on obtaining identification of the nuisance law p(R | X(1), O) evaluated at R = 1, as this suffices to identify the target law p(X(1), O) by (2). In the course of describing these examples, we will obtain intermediate graphs and kernels. In these graphs, a lower case letter (e.g. υ) indicates that the variable V is evaluated at the value υ (for Ri, at ri = 1). A square vertex indicates that V has been fixed. Drawing the vertex normally with a lower case label indicates that V was conditioned on (creating selection bias in the subproblem). For brevity, we use 1Ri to denote {Ri = 1}.

We first consider the block-sequential MAR model [18], shown in Fig. 2 for three variables. The target law is identified by applying the (valid) fixing sequence 〈R1, R2, R3〉 via the operator ϕ to G and p(R, X). We proceed as follows. p(R1 | paG(R1)) = p(R1 | ndG(R1)) = p(R1) is identified immediately. Applying the fixing operator ϕR1 yields the graph G1 ≡ ϕR1(G) shown in Fig. 2(b), and the corresponding kernel q1(X1(1), X2, X3, R2, R3 | 1R1) ≡ p(X1, X2, X3, R2, R3, 1R1) / p(1R1), where X1(1) is now observed. Thus, in the new subproblem represented by G1 and q1, p(R2 | paG(R2)) |R=1 = q1(R2 | X1(1), 1R1) is identified. Applying the fixing operator ϕR2 to G1 and q1 yields G2 ≡ ϕR2(G1), shown in Fig. 2(c), and q2(X1(1), X2(1), X3, R3 | 1R1, R2) = q1(X1(1), X2, X3, R2, R3 | 1R1) / q1(R2 | X1(1), 1R1). Finally, in the new subproblem represented by G2 and q2, p(R3 | paG(R3)) |R=1 = q2(R3 | X1(1), X2(1), 1R1, R2) is identified. Applying the fixing operator ϕR3 to G2 and q2 yields q3(X1(1), X2(1), X3(1) | 1R1, R2, R3) = p(X1(1), X2(1), X3(1)). The identifying functional for the target law only involves monotone cases (cases where Ri = 0 implies Ri+1 = 0), just as would be the case under the monotone MAR model, although this model does not assume monotonicity and is not MAR. In this simple example, identification may be achieved purely by causal inference methods, by treating variables in R as treatments and finding a valid fixing sequence on them. Each Ri in the sequence is fixable once the previous variables have been fixed, since all parents of each Ri become observed at the time it is fixed.
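Reading the three steps off in sequence (our summary of the example; the identifying functional is not written out explicitly in the text), the nuisance law evaluated at R = 1 is p(R = 1 | X(1)) = p(R1 = 1) p(R2 = 1 | X1, R1 = 1) p(R3 = 1 | X1, X2, R1 = 1, R2 = 1), so that by (2),

p(X1(1), X2(1), X3(1)) = p(X1, X2, X3, R = 1) / [p(R1 = 1) p(R2 = 1 | X1, R1 = 1) p(R3 = 1 | X1, X2, R1 = 1, R2 = 1)],

which uses only complete cases in the numerator and monotone conditioning events in the denominator.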

Figure 2: (a), (b), (c) are intermediate graphs obtained in identification of a block-sequential model by fixing {R1, R2, R3} in sequence. (d) is an MNAR model that is identifiable by fixing all Rs in parallel.

Following a total order to fix is not always sufficient to identify the target law, as noted in [4, 3, 13]. Consider the model represented by the DAG in Fig. 2(d). For any Ri in this model, say R1, we have, by d-separation, that p(R1 | paG(R1)) = p(R1 | X2(1), X3(1), 1R2,R3) = p(R1 | X2, X3, 1R2,R3), which is identified. However, if we were to fix R1 in p(X, R), we would obtain a kernel q1(X1(1), X2, X3, 1R2,R3 | 1R1) where selection bias on R2 and R3 is introduced. The fact that q1 is not available at all levels of R2 and R3 prevents us from sequentially obtaining p(Ri | paG(Ri)), for Ri = R2, R3, due to our inability to sum out those variables from q1.

The model in Fig. 2(d) allows identification of the target law in another way, however. This follows from the fact that p(Ri | paG(Ri)) is identified for each Ri by exploiting conditional independences in p(X, R) displayed by Fig. 2(d). Since p(R | X(1)) = ∏i=1,2,3 p(Ri | paG(Ri)), the nuisance law is identified, which means the target law is also identified, as long as we fix R1, R2, R3 in parallel (as in (2)) rather than sequentially. In other words, the model is identified, but no total order on fixing operations suffices for identification. A general algorithm that aimed to fix indicators in R in parallel, while potentially exploiting causal inference fixing operations to identify each p(Ri | paG(Ri)), was proposed in [13]. Our subsequent examples show that this algorithm is insufficient to obtain identification of the target law in general, and thus is incomplete.

Consider the DAG in Fig. 3. Since R2 is a child of R3 and X2(1) is a parent of R3, we cannot obtain p(R3 | paG(R3)) = p(R3 | X2(1)) by d-separation in any kernel (including the original distribution) where R2 is not fixed. Thus, any total order on fixing operations of elements in R must start with R1 or R2. Fixing either of these variables entails dividing p(X, R) by some factor p(Ri | paG(Ri)), which is identified as either p(R1 | X3(1), 1R3) or p(R2 | X1(1), 1R1). This division induces selection bias in the subsequent kernel q1 for a variable not yet fixed (either R3 or R1). Thus, no total order on fixing operations works to identify the target law in this model. At the same time, attempting to fix all R variables in parallel would fail as well, since we cannot identify p(R3 | X2(1)) either in the original distribution or in any kernel obtained by the standard causal inference operations described in [13]. In particular, in any such kernel or distribution, R3 remains dependent on R2 given X2(1).

Figure 3: (a) A DAG where Rs are fixed according to a partial order. (b) The CADMG obtained by fixing R2.

However, the target law in this model is identified by following a partial order ≺ of fixing operations. In this partial order, R1 is incomparable with R2, and R2 ≺ R3. This results in an identification strategy where we fix each variable only after the variables earlier than it in the partial order have been fixed. That is, the distributions p(R1 | X3(1)) = p(R1 | X3, 1R3) and p(R2 | X1(1), R3) = p(R2 | X1, 1R1, R3) are obtained directly in the original distribution without fixing anything. The distribution p(R3 | paG(R3)), on the other hand, is obtained in the kernel q1(X1, X2(1), X3, 1R1, R3 | 1R2) = p(X, R) / p(R2 | X1, 1R1, R3) after R2 (the variable earlier than R3 in the partial order) is fixed. The graph corresponding to this kernel is shown in Fig. 3(b). Note that in this graph X2(1) is observed, and there is selection bias on R1. However, it easily follows by d-separation that R3 is independent of R1. It can thus be shown that p(R3 | X2(1)) = q1(R3 | X2(1), 1R2) even though q1 is only available at the value R1 = 1. Since all p(Ri | paG(Ri)) are identified, so is the target law in this model, by (2).

Next, we consider the model in Fig. 4. Here, p(R2|X1(1),X3(1),R1)=p(R2|X1,X3,1R1,R3) and p(R3|X2(1),R1)=p(R3|X2,1R2,R1) are identified immediately. However, p(R1|X2(1)) poses a problem. In order to identify this distribution, we either require that R1 is conditionally independent of R2, possibly after some fixing operations, or we are able to render X2(1) observable by fixing R2 in some way. Neither seems to be possible in the problem as stated. In particular, fixing R2 via dividing by p(R2|X1(1),X3(1),R1) will necessarily induce selection bias on R1, which will prevent identification of p(R1|X2(1)) in the resulting kernel.

Figure 4: A DAG where selection bias on R1 is avoidable by following a partial order fixing schedule on an ADMG induced by latent projecting out X1(1).

However, we can circumvent the difficulty by treating X1(1) as an unobserved variable U1, and attempting the problem in the resulting (hidden variable) DAG shown in Fig. 4(b), and its latent projection ADMG G˜ shown in Fig. 4(c), where U1 is “projected out.” In the resulting problem, we can fix variables according to a partial order ≺ where R2 and R3 are incomparable, R2 ≺ R1, and R3 ≺ R1. Thus, we are able to fix R2 and R3 in parallel by dividing by p(R2 | mbG˜(R2)) = p(R2 | X1, R1, X3(1), 1R3) and p(R3 | R1, X2(1)) = p(R3 | R1, X2, 1R2), leading to a kernel q˜1(X1, X2(1), X3(1), R1 | 1R2,R3), and the graph ϕ≺R1(G˜) shown in Fig. 4(d), where the notation ϕ≺R1 means “fix all necessary elements that occur earlier than R1 in the partial order, in a way consistent with that partial order.” In this example, this means fixing R2 and R3 in parallel. We will describe how fixing operates under general fixing schedules given by a partial order later in the paper. In the kernel q˜1 the parent of R1 is observed data, meaning that p(R1 | X2(1)) is identified as q˜1(R1 | X2, 1R2,R3). This implies the target law is identified in this model.

In general, to identify p(Ri | paG(Ri)), we may need to use separate partial fixing orders on different sets of variables for different Ri ∈ R. In addition, the fact that fixing introduces selection bias sometimes results in having to divide by a kernel where a set of variables is random, something that was never necessary in causal inference problems. In general, for a given Ri, the goal of a fixing schedule is to arrive at a kernel where an independence exists allowing us to identify p(Ri | paG(Ri)), even if some elements of paG(Ri) are in X(1) in the original problem. This fixing must be given by a partial order, and sometimes performed on sets of variables. In addition, some elements of X(1) must be treated as hidden variables. These complications are necessary in general to avoid creating selection bias in subproblems, and ultimately to identify the nuisance law. The following example is a good illustration.

Consider the graph in Fig. 5(a). For R1 and R3, the fixing schedules are empty, and we immediately obtain their distributions as p(R1 | X2(1), X4(1), R2, R3) = p(R1 | X2, X4, R3, 1R2,R4) and p(R3 | X4(1), R2) = p(R3 | X4, 1R4, R2). For R2, the partial order is R3 ≺ R1 in a graph where we treat X2(1) as a hidden variable U2. This yields p(R2 | X1(1), R4) = q2(R2 | X1(1), R4, 1R1,R3), where q2(X1(1), X2, X3(1), X4, R2, 1R4 | 1R1,R3) is equal to q1(X1, X2, X3(1), X4, R1, R2, 1R4 | 1R3) / q1(1R1 | X2, X3, X4, R2, 1R3, R4), and q1(X1, X2, X3(1), X4, R1, R2, 1R4 | 1R3) = p(X, R1, R2, 1R3, R4) / p(1R3 | R2, X4, 1R4).

Figure 5: (a) A DAG where the fixing operator must be performed on a set of vertices. (b) A latent projection of a subproblem used for identification of p(R4 | X1(1)).

In order to obtain the propensity score for R4, we must either render X1(1) observable by fixing R1, or perform valid fixing operations until we obtain a kernel in which R4 is conditionally independent of R1 given its parent X1(1). However, no valid fixing schedule given by a partial order on single elements of R exists. All such partial orders on elements in R induce selection bias on variables higher in the order, preventing the identification of the required distribution for R4. For example, choosing a partial fixing order of R1 ≺ R3, where we treat X2(1) and X4(1) as hidden variables, results in selection bias on R3 as soon as we fix R1. Other partial orders fail similarly. However, the following approach is possible in the graph in which we treat X2(1) and X4(1) as hidden variables.

R1 and R3 lie in the same district in the resulting latent projection ADMG, shown in Fig. 5(b). Moreover, the set {R1, R3} is closed under descendants in the district in Fig. 5(b). As a result, R1 and R3 can essentially be viewed as a single vertex from the point of view of fixing. Indeed, we may choose a partial order {R1, R3} ≺ R2, where we fix R1 and R3 as a set. The fixing operation on the set is possible since p(1R1,R3 | mb(R1, R3)) = p(1R1,R3 | R2, R4, X2, X3(1), X4) is a function of the observed data law p(X, R). Specifically, it is equal to p(1R3 | R2, R4, X2, X4) × p(1R1 | R2, R4, X2, X3, X4, 1R3), where the equality holds by d-separation (R3 ⊥⊥ X3(1) | R2, R4, X2, X4). We then obtain p(R4 | X1(1)) = ∑X3(1),X4 q2(X1(1), X3(1), X4, R4 | 1R1,R2,R3) / ∑X3(1),X4,R4 q2(X1(1), X3(1), X4, R4 | 1R1,R2,R3), where q2(. | 1R\R4) = q1(X1(1), X2, X3(1), X4, R2, R4 | 1R1,R3) / q1(R2 | X1(1), R4, 1R1,R3), and q1(. | 1R1,R3) = p(X, R2, R4, 1R1,R3) / p(1R1,R3 | R2, R4, X2, X3(1), X4).

Our final example demonstrates that in order to identify the target law, we may potentially need to fix variables outside R, including variables in X(1) that become observed after fixing or conditioning on some elements of R. Fig. 6(a) contains a generalization of the model considered in [13], where O3 is fully observed. In this model, the distributions for R4 and R1 are identified immediately, while identification of R2 requires a partial order R4 ≺ X4(1) ≺ O3 ≺ R1 in the graph where we treat X1(1), X2(1), X4(1) as latent variables (with the latent projection ADMG shown in Fig. 6(b)) until they are rendered observed by fixing the corresponding missingness indicators. To illustrate fixing operations according to this order, the intermediate graphs that arise are shown in Fig. 6(c), (d), (e), (f).

Figure 6: A DAG where variables besides Rs are required to be fixed.

5. A NEW IDENTIFICATION ALGORITHM

In order to identify the target law in examples discussed in the previous section, we had to consider situations where some variables were viewed as hidden, and marginalized out, and others were conditioned on, introducing selection bias. In addition, fixing operations were performed according to a partial, rather than a total, order as was the case in causal inference problems. Finally, we sometimes fixed sets of variables jointly, rather than individual variables. We now introduce relevant definitions that allow us to formulate a general identification algorithm that takes advantage of all these techniques.

Let V be a set of random variables (and corresponding vertices) consisting of observed variables O, R, X, missing variables X(1), and selected variables S. Let W be a set of fixed observed variables. The following definitions apply to a latent projection G(V \ XU(1), W), for some XU(1) ⊆ X(1), and a corresponding kernel q(V \ XU(1) | W) ≡ ∑XU(1) q(V | W). The graph G can be viewed as a latent variable CADMG for q where XU(1) are latent. Such CADMGs represent intermediate subproblems in our identification algorithm.

For Z ⊆ DZ ∈ D(G), let RZ = {Rj | Xj(1) ∈ Z ∪ mbG(Z), Rj ∉ Z}, and mbG(Z) ≡ (DZ ∪ paG(DZ)) \ Z. We say Z is fixable in G(V \ XU(1), W) if

  1. deG(Z) ∩ DZ ⊆ Z,

  2. S ∩ Z = ∅,

  3. Z ⊥⊥ (S ∪ RZ) \ mbG(Z) | mbG(Z).

In words, these conditions apply to some Z that is a subset of its own district (which is trivial when the set Z is a singleton). The conditions, in the listed order, require that Z is closed under descendants within its district, does not contain any selected variables, and is independent of both the selected variables S and the missingness indicators RZ of the corresponding counterfactual parents, given the Markov blanket of Z. Consider the graph in Fig. 5(b), where S = ∅, and let Z = {R1, R3}. Z is fixable since Z ⊆ DZ = {R1, R3, X2, X4}; deG(Z) = {R1, R3, X1, X3}, so deG(Z) ∩ DZ = {R1, R3} ⊆ Z; and both S and RZ are empty sets.
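A partial sketch of this set-fixability test is given below, reusing the hypothetical CADMG class from the Section 2 sketches. Conditions (1) and (2) are purely graphical; condition (3) is an m-separation test and is left to a user-supplied oracle. The argument r_of, mapping each counterfactual variable Xj(1) to its indicator Rj, is our own device for computing RZ.

```python
# A partial sketch of the set-fixability conditions (1)-(3) above.
def set_fixable(G, Z, selected, r_of, m_separated=None):
    Z = set(Z)
    D_Z = G.district(next(iter(Z)))                   # district containing Z
    if not Z <= D_Z:                                  # Z must lie inside one district
        return False
    de_Z = set().union(*(G.descendants(z) for z in Z))
    if not (de_Z & D_Z <= Z):                         # (1) closed under descendants
        return False
    if Z & set(selected):                             # (2) no selected variables
        return False
    mb_Z = (D_Z | {x for d in D_Z for (x, y) in G.di if y == d}) - Z
    R_Z = {r_of[x] for x in (Z | mb_Z) if x in r_of} - Z
    other = (set(selected) | R_Z) - mb_Z
    if not other:                                     # (3) holds vacuously
        return True
    if m_separated is None:
        raise NotImplementedError("condition (3) needs an m-separation oracle")
    return m_separated(G, Z, other, mb_Z)
```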

A set Z˜ spanning multiple elements in D(G) is said to be fixable if it can be partitioned into a set Z of elements Z, such that each Z is a subset of a single district in D(G) and is fixable.

Given an ordering ≺ on vertices V ∪ W topological in G, and Z˜ fixable in G, define ϕZ˜(q; G) as

q(V \ (XU(1) ∪ RZ), RZ = 1 | W) / ∏Z∈Z q(Z | mbG(Z; anG(DZ) ∩ {⪯Z}), RZ) |RZ=1,   (4)

where mbG(V; S) ≡ mbGS(V) and {⪯Z} is the set of all elements earlier than Z in the order ≺ (this includes Z itself).

Given a set Z ⊆ R ∪ O ∪ X(1), and an equivalence relation ~, let Z/~ be the partition of Z into equivalence classes according to ~. Define a fixing schedule for Z/~ to be a partial order ⊲ on Z/~. For each Z˜ ∈ Z/~, define {⊴Z˜} to be the set of elements in Z/~ earlier than Z˜ in the order ⊲ (including Z˜ itself), and {⊲Z˜} ≡ {⊴Z˜} \ Z˜. Define ⊴Z˜ and ⊲Z˜ to be the restrictions of ⊲ to {⊴Z˜} and {⊲Z˜}, respectively. Both restrictions, ⊴Z˜ and ⊲Z˜, are also partial orders.

We inductively define a valid fixing schedule (a schedule where fixing operations can be successfully implemented), along with the fixing operator on valid schedules. The fixing operator will implement fixing as in (4) on Z˜ within an intermediate problem represented by a CADMG where some XZ˜(1) ⊆ X(1) will become observed after fixing Z˜, with X(1) \ XZ˜(1) treated as latent variables, and a kernel associated with this CADMG defined on the observed subset of variables. We also define X{⊲Z˜}(1) ≡ ∪Z∈{⊲Z˜} XZ(1).

We say ⊲Z˜ is valid for {⊲Z˜} in G if for every ⊲-largest element Y˜ of {⊲Z˜}, ⊴Y˜ is valid for {⊴Y˜}. If ⊲Z˜ is valid for {⊲Z˜}, we define ϕ⊲Z˜(G) to be a new CADMG G(V \ ∪Z∈{⊲Z˜} Z, W ∪ (∪Z∈{⊲Z˜} Z)) obtained from G(V, W) by:

  • Removing all edges with arrowheads into ∪Z∈{⊲Z˜} Z,

  • Marking any Xj(1) in {Xj(1) | Xj(1) ∈ Z ∪ mbϕ⊲Z(G)(Z), Z ∈ {⊲Z˜}} as observed,

  • Marking any element of {RZ ∩ V | Z ∈ {⊲Z˜}} \ ∪Z∈{⊲Z˜} Z as selected to value 1, where RZ is defined with respect to ϕ⊲Z(G),

  • Treating elements of X(1) \ XZ˜(1) as hidden variables.

We say ⊴Z˜ is valid for {⊴Z˜} if ⊲Z˜ is valid for {⊲Z˜} and Z˜ is fixable in ϕ⊲Z˜(G). If ⊴Z˜ is valid, we define

ϕ⊴Z˜(q; G) ≡ ϕZ˜(ϕ⊲Z˜(q; G); ϕ⊲Z˜(G)),   (5)

where ϕ⊲Z˜(q; G) ≡ q(V | W) / ∏Y˜∈{⊲Z˜} qY˜, and the qY˜ are defined inductively as the denominator of (4) for Y˜, ϕ⊲Y˜(G), and ϕ⊲Y˜(q; G).

We have the following claims.

Proposition 1. Given a DAG G(X(1), R, O, X), the distribution p(Ri | paG(Ri)) |paG(Ri)∩R=1 is identifiable from p(R, O, X) if there exist

  1. a set Z ⊆ X(1) ∪ R ∪ O,

  2. an equivalence relation ~ on Z such that {Ri} ∈ Z/~,

  3. a set of elements XZ˜(1) such that X{⊲Z˜}(1) ⊆ XZ˜(1) ⊆ X(1) for each Z˜ ∈ Z/~,

  4. X(1) ∩ paG(Ri) ⊆ (Z \ {Ri}) ∪ X{Ri}(1),

  5. and a valid fixing schedule ⊲ for Z/~ in G such that for each Z˜ ∈ Z/~, Z˜ ⊴ {Ri}.

Moreover, p(Ri | paG(Ri)) |paG(Ri)∩R=1 is equal to q{Ri}, defined inductively as the denominator of (4) for {Ri}, ϕ⊲{Ri}(G), and ϕ⊲{Ri}(q; G), and evaluated at paG(Ri) ∩ R = 1.

Proposition 1 implies that p(Ri | paG(Ri)) is identified if we can find a set of variables that can be fixed according to a partial order (possibly through set fixing) within subproblems where certain variables are hidden. At the end of the fixing schedule, we require that Ri itself is fixable given its Markov blanket in the original DAG. We encourage the reader to view the example provided in Appendix B for a demonstration of valid fixing schedules that may be chosen by Proposition 1.

Corollary 1. Given a DAG G(X(1), R, O, X), the target law p(X(1), O) is identified if p(Ri | paG(Ri)) is identified via Proposition 1 for every Ri ∈ R.

Proof. Follows by Proposition 1 and (2). □

In addition, in special classes of models, the full law, rather than just the target law, is identified.

Proposition 2. Given a DAG G(X(1), R, O, X), the full law p(R, X(1), O) is identifiable from p(R, O, X) if for every Ri ∈ R, all conditions in Proposition 1 (i-v) are met, and in addition, for each Z˜ ∈ Z/~, XZ˜(1) does not contain any elements in {Xj(1) | Rj ∈ paG(Ri)}. Moreover, p(Ri | paG(Ri)) is equal to q{Ri}, defined inductively as the denominator of (4) for {Ri}, ϕ⊲{Ri}(G) and ϕ⊲{Ri}(p; G), and

p(R, X(1), O) = (∏Ri∈R qRi) × p(R = 1, O, X) / (∏Ri∈R qRi) |R=1.

Proof. Under conditions (i-v) in Proposition 1, we are guaranteed to identify the target law and obtain p(Ri | paG(Ri)) where some Rj ∈ paG(Ri) may be evaluated at Rj = 1. Under the additional restriction stated above, all Rj ∈ paG(Ri) can be evaluated at all levels. □

Proposition 2 always fails if a special collider structure Xj(1) → Ri ← Rj, which we call the colluder, exists in G. The following lemma shows that the presence of a colluder always implies the full law is not identified.

Lemma 1. In a DAG G(X(1), R, O, X), if there exist Ri, Rj ∈ R such that {Rj, Xj(1)} ⊆ paG(Ri), then p(Ri | paG(Ri)) |Rj=0 is not identified. Hence, the full law p(X(1), R) is not identified.

Proof. Follows by providing two different full laws that agree on the observed law on a DAG with 2 counterfactual random variables (Appendix C). This result holds for an arbitrary DAG representing a missing data model that contains the colluder structure mentioned above. □

Propositions 1 and 2 do not address a computationally efficient search procedure for a valid fixing schedule ⊲ that permits identification of p(Ri | paG(Ri)) for a particular Ri ∈ R. Nevertheless, the following lemma shows how to easily obtain identification of the target law in a restricted class of missing data DAGs.

Lemma 2. Consider a DAG G(X(1), R, O, X) such that for every Ri ∈ R, {Rj | Xj(1) ∈ paG(Ri)} ∩ anG(Ri) = ∅. Then for every Ri ∈ R, a fixing schedule ⊲ for {{Rj} | Rj ∈ R ∩ deG(Ri)}, given by the partial order induced by the ancestral relations in the induced subgraph GR∩deG(Ri), is valid in G(X(1), R, O, X), by taking each XZ˜(1) = ∪Z∈{⊲Z˜} XZ(1), for every Z˜ ∈ {⊴{Ri}}. Thus the target law is identified.

6. DISCUSSION AND CONCLUSION

In this paper we addressed the significant gap present in identification theory for missing data models representable as DAGs. We showed, by example, that straightforward application of the identification machinery of causal inference with hidden variables does not suffice for identification in missing data problems, and discussed the generalizations required to make it suitable for this task. These generalizations included fixing (possibly sets of) variables according to a partial order, and avoiding selection bias by introducing hidden variables into the problem even though they were not present in the initial problem statement. Proposition 1 gives a characterization of how to utilize these generalized procedures to obtain identification of the target law, while Proposition 2 gives a similar characterization for the full law. While neither of these propositions yields a computationally efficient algorithm for obtaining identification in general, Lemma 2 provides such a procedure for a special class of missing data models where the partial order of fixing operations required for each Ri ∈ R is easy to determine. Providing a computationally efficient search procedure for identification in all DAG models of missing data, and questions regarding the completeness of our proposed algorithm, are left for future work.

Supplementary Material

Appendix

Acknowledgements

This project is sponsored in part by the National Institutes of Health grant R01 AI127271-01 A1 and the Office of Naval Research grant N00014-18-1-2760.

References

  • [1] Huang Yimin and Valtorta Marco. Pearl's calculus of interventions is complete. In Twenty Second Conference on Uncertainty in Artificial Intelligence, 2006.
  • [2] Lauritzen Steffen L. Graphical Models. Oxford, U.K.: Clarendon, 1996.
  • [3] Mohan Karthika and Pearl Judea. Graphical models for recovering probabilistic and causal queries from missing data. In Advances in Neural Information Processing Systems, pages 1520–1528, 2014.
  • [4] Mohan Karthika, Pearl Judea, and Tian Jin. Graphical models for inference with missing data. In Advances in Neural Information Processing Systems, pages 1277–1285, 2013.
  • [5] Pearl Judea. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, 1988.
  • [6] Pearl Judea. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2nd edition, 2009.
  • [7] Richardson Thomas S., Evans Robin J., Robins James M., and Shpitser Ilya. Nested Markov properties for acyclic directed mixed graphs. arXiv:1701.06686v2, 2017. Working paper.
  • [8] Robins James M. A new approach to causal inference in mortality studies with sustained exposure periods – application to control of the healthy worker survivor effect. Mathematical Modelling, 7:1393–1512, 1986.
  • [9] Robins James M. Non-response models for the analysis of non-monotone non-ignorable missing data. Statistics in Medicine, 16:21–37, 1997.
  • [10] Rubin Donald B. Inference and missing data (with discussion). Biometrika, 63:581–592, 1976.
  • [11] Sadinle Mauricio and Reiter Jerome P. Item-wise conditionally independent nonresponse modelling for incomplete multivariate data. Biometrika, 104(1):207–220, 2017.
  • [12] Shpitser Ilya. Consistent estimation of functions of data missing non-monotonically and not at random. In Advances in Neural Information Processing Systems, pages 3144–3152, 2016.
  • [13] Shpitser Ilya, Mohan Karthika, and Pearl Judea. Missing data as a causal and probabilistic problem. In Proceedings of the Thirty First Conference on Uncertainty in Artificial Intelligence (UAI-15), pages 802–811. AUAI Press, 2015.
  • [14] Shpitser Ilya and Pearl Judea. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06). AAAI Press, 2006.
  • [15] Tchetgen Tchetgen Eric J., Wang Linbo, and Sun BaoLuo. Discrete choice models for non-monotone nonignorable missing data: Identification and inference. Statistica Sinica, 28(4):2069–2088, 2018.
  • [16] Tian Jin and Pearl Judea. A general identification condition for causal effects. In Eighteenth National Conference on Artificial Intelligence, pages 567–573, 2002.
  • [17] Tsiatis Anastasios. Semiparametric Theory and Missing Data. Springer-Verlag New York, 1st edition, 2006.
  • [18] Zhou Yan, Little Roderick J. A., and Kalbfleisch John D. Block-conditional missing at random models for missing data. Statistical Science, 25(4):517–532, 2010.
