Abstract
Missing data is a pervasive problem in data analyses, resulting in datasets that contain censored realizations of a target distribution. Many approaches to inference on the target distribution using censored observed data rely on missing data models represented as a factorization with respect to a directed acyclic graph. In this paper we consider the identifiability of the target distribution within this class of models, and show that the most general identification strategies proposed so far retain a significant gap, in that they fail to identify a wide class of identifiable distributions. To address this gap, we propose a new algorithm that significantly generalizes the types of manipulations used in the ID algorithm [14, 16], developed in the context of causal inference, in order to obtain identification.
1. INTRODUCTION
Missing data is ubiquitous in applied data analyses, resulting in target distributions that are systematically censored by a missingness process. A common modeling approach assumes data entries are censored in a way that does not depend on the underlying missing data, known as the missing completely at random (MCAR) model, or only depends on observed values in the data, known as the missing at random (MAR) model. These simple models are insufficient, however, in problems where missingness status may depend on underlying values that are themselves censored. This type of missingness is known as missing not at random (MNAR) [9, 10, 17].
While the underlying target distribution is often not identified from observed data under MNAR, there exist identified MNAR models. These include the permutation model [9], the discrete choice model [15], the no self-censoring model [11, 12], the block-sequential MAR model [18], and others. Restrictions defining many, but not all, of these models may be represented by a factorization of the full data law (consisting of both the target distribution and the missingness process) with respect to a directed acyclic graph (DAG).
The problem of identification of the target distribution from the observed distribution in missing data DAG models bears many similarities to the problem of identification of interventional distributions from the observed distribution in causal DAG models with hidden variables. This observation prompted recent work [3, 4, 13] on adapting identification methods from causal inference to identifying target distributions in missing data models.
In this paper we show that the most general currently known methods for identification in missing data DAG models retain a significant gap, in the sense that they fail to identify the target distribution in many models where it is identified. We show that methods used to obtain a complete characterization of identification of interventional distributions, via the ID algorithm [14, 16], or their simple generalizations [3, 4, 13], are insufficient on their own for obtaining a similar characterization for missing data problems. We show, via a set of examples, that in order to be complete, an identification algorithm for missing data must recursively simplify the problem by removing sets of variables, rather than single variables, and these sets must be removed according to a partial order, rather than a total order. Furthermore, the algorithm must be able to handle subproblems where selection bias or hidden variables, or both, are present, even if these complications are absent from the original problem. We develop a new general algorithm that exploits these observations and significantly narrows the identifiability gap in existing methods. Finally, we show that in certain classes of missing data DAG models, our algorithm takes on a particularly simple formulation to identify the target distribution.
Our paper is organized as follows. In section 2, we introduce the necessary preliminaries from the graphical causal inference literature. In section 3 we introduce missing data models represented by DAGs. In section 4, we illustrate, via examples, that existing identification strategies based on simple generalizations of causal inference methods are not sufficient for identification in general, and describe generalizations needed for identification in these examples. In section 5, we give a general identification algorithm which incorporates techniques needed to obtain identification in the examples we describe. Section 6 contains our conclusions. We defer longer proofs to the supplement in the interests of space.
2. PRELIMINARIES
Many techniques useful for identification in missing data contexts were first derived in causal inference. Causal inference is concerned with expressing counterfactual distributions, obtained after an intervention operation, in terms of the observed data distribution, using constraints embedded in a causal model, often represented by a DAG.
A DAG G is a graph with a vertex set V connected by directed edges such that there are no directed cycles in the graph. A statistical model of a DAG G is the set of distributions p(V) such that p(V) = ∏V∈V p(V | paG(V)), where paG(V) is the set of parents of V in G. Causal models of a DAG are also sets of distributions, but on counterfactual random variables. Given Y ∈ V and A ⊆ V \ {Y}, a counterfactual variable, or potential outcome, written as Y (a), represents the value of Y in a hypothetical situation where A were set to values a by an intervention operation [6]. Given a set Y, define Y(a) ≡ {Y} (a) ≡ {Y (a) | Y ∈ Y}. The distribution p(Y(a)) is sometimes written as p(Y|do(a)) [6].
A causal parameter is said to be identified in a causal model if it is a function of the observed data distribution p(V). Otherwise the parameter is said to be non-identified. In all causal models of a DAG that are typically used, all interventional distributions p({V\A} (a)) are identified by the g-formula [8]:
p({V \ A}(a)) = ∏V∈V\A p(V | paG(V)) |A=a        (1)
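To make (1) concrete, the following is a minimal numeric sketch (ours, not from the paper) for a DAG with edges C → A, C → Y, A → Y: the factor p(A | C) is dropped and the remaining factors are evaluated at A = a. All distributions and variable names are made up for illustration.

```python
# Minimal illustration of the g-formula (1) on the DAG C -> A, C -> Y, A -> Y.
# All probabilities are made-up illustrative numbers; variables are binary.
import itertools

# DAG factorization of p(C, A, Y): p(C) * p(A | C) * p(Y | A, C)
p_C = {0: 0.6, 1: 0.4}
p_A_given_C = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # the factor that gets dropped
p_Y_given_AC = {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.6, 1: 0.4},
                (1, 0): {0: 0.5, 1: 0.5}, (1, 1): {0: 0.2, 1: 0.8}}

def p_Y_do(a):
    """g-formula: p(Y(a) = y) = sum_c p(C = c) * p(Y = y | A = a, C = c)."""
    dist = {0: 0.0, 1: 0.0}
    for c, y in itertools.product([0, 1], repeat=2):
        dist[y] += p_C[c] * p_Y_given_AC[(a, c)][y]
    return dist

print(p_Y_do(0))   # interventional distribution p(Y(0))
print(p_Y_do(1))   # interventional distribution p(Y(1))
```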
If a causal model contains hidden variables, only data on the observed marginal distribution is available. In this case, not every interventional distribution is identified, and identification theory becomes more complex. A general algorithm for identification of causal effects in this setting was given in [16], and proven complete in [14, 1]. Here, we describe a simple reformulation of this algorithm as a truncated nested factorization analogous to the g-formula, phrased in terms of kernels and mixed graphs recursively defined via a fixing operator [7]. As we will see, many of the techniques developed for identification in the presence of hidden variables will need to be employed (and generalized) for missing data, even if no variables are completely hidden.
We describe acyclic directed mixed graphs (ADMGs) obtained from a hidden variable DAG by a latent projection operation in section 2.1, and a nested factorization associated with these ADMGs in section 2.2. This factorization is formulated in terms of conditional ADMGs and kernels (described in section 2.2.1), via the fixing operator (described in section 2.2.2). The truncated nested factorization that yields all identifiable functions for interventional distributions is described in section 2.3.
As a prelude to the rest of the paper, we introduce the following notation for some standard genealogic sets of a graph G with a set of vertices V: parents paG(V), children chG(V), descendants deG(V), ancestors anG(V), and non-descendants ndG(V). A district D is defined as a maximal set of vertices that are pairwise connected by bidirected paths (paths containing only ↔ edges). We denote the district of V as disG(V), and the set of all districts in G as D(G). By convention, V ∈ disG(V) for any V. Finally, the Markov blanket mbG(V) is defined as the set that gives rise to the following independence relation through m-separation: V ⊥ V \ (mbG(V) ∪ {V}) | mbG(V) [7]. The above definitions apply disjunctively to sets of variables S ⊂ V; e.g. deG(S) ≡ ⋃S∈S deG(S).
2.1. LATENT PROJECTION ADMGS
Given a DAG G(V ∪ H), where V are observed and H are hidden variables, the latent projection G(V) is the following ADMG with vertex set V. An edge A → B exists in G(V) if there exists a directed path from A to B in G with all intermediate vertices in H. Similarly, an edge A ↔ B exists in G(V) if there exists a path from A to B with no colliders (no consecutive edges of the form → ○ ←), with the first edge on the path of the form A ←, the last edge on the path of the form → B, and all intermediate vertices on the path in H. Latent projections define an infinite class of hidden variable DAGs that share identification theory. Thus, identification algorithms are typically defined on latent projections for simplicity.
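The latent projection is straightforward to compute directly from its definition. Below is an illustrative sketch (ours, not code from the paper) that encodes a DAG as a parent map; it uses the fact that a collider-free path with arrowheads at both observed endpoints can always be rearranged so that a single hidden vertex has directed paths through H into each endpoint. The encoding and function names are our own.

```python
# Illustrative sketch (not from the paper): the latent projection G(V) of a DAG
# over V ∪ H onto the observed vertices V.  A DAG is encoded as a dict mapping
# each vertex to the set of its parents.

def reachable_through_hidden(dag, hidden, start):
    """Vertices reachable from `start` along directed paths whose intermediate
    vertices all lie in `hidden` (the endpoints themselves may be observed)."""
    children = {v: set() for v in dag}
    for v, pars in dag.items():
        for p in pars:
            children[p].add(v)
    out, stack = set(), [start]
    while stack:
        v = stack.pop()
        for c in children[v]:
            if c not in out:
                out.add(c)
                if c in hidden:      # only continue expanding through hidden vertices
                    stack.append(c)
    return out

def latent_projection(dag, hidden):
    observed = [v for v in dag if v not in hidden]
    directed, bidirected = set(), set()
    # A -> B in G(V) iff a directed path A -> ... -> B exists with intermediates in H
    for a in observed:
        for b in reachable_through_hidden(dag, hidden, a):
            if b in observed:
                directed.add((a, b))
    # A <-> B in G(V) iff some hidden h has directed H-paths into both A and B
    for h in hidden:
        obs_reach = [v for v in reachable_through_hidden(dag, hidden, h) if v not in hidden]
        for i, a in enumerate(obs_reach):
            for b in obs_reach[i + 1:]:
                bidirected.add(frozenset((a, b)))
    return directed, bidirected

# Example: H -> A, H -> B, A -> B yields A -> B and A <-> B in the projection.
dag = {"H": set(), "A": {"H"}, "B": {"H", "A"}}
print(latent_projection(dag, hidden={"H"}))
```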
2.2. NESTED FACTORIZATION
The nested factorization of p(V) with respect to an ADMG G(V) is defined on kernel objects derived from p(V) and conditional ADMGs derived from G(V). The derivations are via a fixing operation, which can be causally interpreted as a single application of the g-formula to a single variable (applied to either a graph or a kernel), yielding another graph or another kernel.
2.2.1. Conditional Graphs And Kernels
A conditional acyclic directed mixed graph (CADMG) is an ADMG in which the nodes are partitioned into W, representing fixed variables, and V, representing random variables. Only outgoing directed edges may be adjacent to variables in W.
A kernel qV(V|W) is a mapping from values in W to normalized densities over V [2]. In other words, kernels act like conditional distributions in the sense that ∑v∈V qV(v|w) = 1, ∀w ∈ W. Conditioning and marginalization in kernels are defined in the usual way. For A ⊆ V, we define q(A|W) ≡ ∑V\A q(V|W) and q(V \ A|A, W) ≡ q(V|W)/q(A|W).
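As a small illustration (ours, not the paper's), a kernel for a fixed context W = w can be stored as a table mapping joint assignments of V to probabilities; marginalization and conditioning are then the usual table operations.

```python
# Illustrative kernel operations (ours): a kernel q_V(V | W) for one fixed w is a
# table mapping assignments of V (sorted tuples of (variable, value) pairs) to
# probabilities that sum to 1.

def marginal(table, keep):
    """q(A | w) = sum over V \\ A of q(V | w), where A = `keep`."""
    out = {}
    for assign, p in table.items():
        key = tuple(sorted((var, val) for var, val in assign if var in keep))
        out[key] = out.get(key, 0.0) + p
    return out

def condition(table, given):
    """q(V \\ A | A = given, w) = q(V | w) / q(A = given | w)."""
    keep = {var for var, _ in given}
    denom = marginal(table, keep)[tuple(sorted(given))]
    out = {}
    for assign, p in table.items():
        if all(pair in assign for pair in given):
            rest = tuple(sorted((var, val) for var, val in assign if var not in keep))
            out[rest] = p / denom
    return out

# q(A, B | w): entries sum to 1, as required of a kernel.
q = {(("A", 0), ("B", 0)): 0.3, (("A", 0), ("B", 1)): 0.2,
     (("A", 1), ("B", 0)): 0.1, (("A", 1), ("B", 1)): 0.4}
print(marginal(q, {"A"}))           # q(A | w)
print(condition(q, {("B", 1)}))     # q(A | B = 1, w)
```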
2.2.2. Fixability And Fixing
A variable V ∈ V in a CADMG G is fixable if deG(V) ∩ disG(V) = {V}. In other words, V is fixable if paths V ↔ ⋯ ↔ U and V → ⋯ → U do not both exist in G for any U ∈ V \ {V}. Given a CADMG G and V ∈ V fixable in G, the fixing operator ϕV(G) yields a new CADMG G(V \ {V}, W ∪ {V}), where all edges with arrowheads into V are removed, and all other edges in G are kept. Similarly, given a CADMG G, a kernel qV(V|W), and V ∈ V fixable in G, the fixing operator ϕV(qV; G) yields a new kernel qV\{V}(V \ {V} | W ∪ {V}) ≡ qV(V|W) / qV(V | mbG(V), W). Fixing is an operation in which we divide a kernel by a conditional kernel. In some cases this operates as a conditioning operation, in other cases as a marginalization operation, and in yet other cases as neither, depending on the structure of the kernel being divided.
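The graph-side part of fixing is easy to state in code. The sketch below (an illustration under our own encoding of CADMGs, not code from the paper) checks the fixability condition deG(V) ∩ disG(V) = {V} and removes all edges with arrowheads into the fixed vertex.

```python
# Illustrative sketch of graph-side fixing in a CADMG (our own encoding, not the
# paper's code).  directed: set of (parent, child) pairs; bidirected: set of
# frozenset pairs; random_v / fixed_w: the random and fixed vertex sets.

def descendants(directed, v):
    out, stack = {v}, [v]
    while stack:
        cur = stack.pop()
        for a, b in directed:
            if a == cur and b not in out:
                out.add(b)
                stack.append(b)
    return out

def district(bidirected, random_v, v):
    """Vertices connected to v by bidirected paths through random vertices."""
    out, stack = {v}, [v]
    while stack:
        cur = stack.pop()
        for e in bidirected:
            if cur in e:
                (other,) = e - {cur}
                if other in random_v and other not in out:
                    out.add(other)
                    stack.append(other)
    return out

def fixable(directed, bidirected, random_v, v):
    # V is fixable iff its descendants and its district intersect only in V itself
    return descendants(directed, v) & district(bidirected, random_v, v) == {v}

def fix(directed, bidirected, random_v, fixed_w, v):
    """Remove all edges with an arrowhead into v, and move v to the fixed set."""
    assert fixable(directed, bidirected, random_v, v)
    return ({(a, b) for a, b in directed if b != v},
            {e for e in bidirected if v not in e},
            random_v - {v}, fixed_w | {v})

# Example: A -> M -> Y with A <-> Y.  M is fixable; A is not (A <-> Y and A -> ... -> Y).
directed = {("A", "M"), ("M", "Y")}
bidirected = {frozenset(("A", "Y"))}
print(fixable(directed, bidirected, {"A", "M", "Y"}, "M"))   # True
print(fixable(directed, bidirected, {"A", "M", "Y"}, "A"))   # False
print(fix(directed, bidirected, {"A", "M", "Y"}, set(), "M"))
```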
For a set S ⊆ V in a CADMG G, if all vertices in S can be ordered into a sequence σS = 〈S1, S2, … 〉 such that S1 is fixable in G, S2 is fixable in ϕS1(G), and so on, then S is said to be fixable in G, V \ S is said to be reachable in G, and σS is said to be valid. A reachable set C is said to be intrinsic if the induced subgraph ϕV\C(G)C has a single district, where the induced subgraph on C keeps all vertices in C and all edges whose endpoints are both in C. We define ϕσS(G) and ϕσS(q; G) via the usual function composition to yield operators that fix all elements in S in the order given by σS.
The distribution p(V) is said to obey the nested factorization for an ADMG G(V) if there exists a set of kernels {qC(C | paG(C)) : C is intrinsic in G(V)} such that for every fixable S, and any valid σS, ϕσS(p(V); G(V)) = ∏D∈D(ϕσS(G(V))) qD(D | paG(D)). All valid fixing sequences for S yield the same CADMG ϕS(G(V)), and if p(V) obeys the nested factorization for G(V), all valid fixing sequences for S yield the same kernel. As a result, for any valid sequence σS for S, we will redefine the operator ϕσS, for both graphs and kernels, to be ϕS. In addition, it can be shown that the above kernel set is characterized as: {qC(C | paG(C)) : C is intrinsic in G(V)} = {ϕV\C(p(V); G(V)) : C is intrinsic in G(V)} [7]. Thus, we can re-express the above nested factorization as stating that for any fixable set S, we have ϕS(p(V); G(V)) = ∏D∈D(ϕS(G(V))) ϕV\D(p(V); G(V)).
An important result in [7] states that if p(V ∪ H) obeys the factorization for a DAG G(V ∪ H), then p(V) obeys the nested factorization for the latent projection ADMG G(V).
2.3. IDENTIFICATION AS A TRUNCATED NESTED FACTORIZATION
For any disjoint subsets Y, A of V in a latent projection G(V) representing a causal DAG, define Y* to be the ancestors of Y in the induced subgraph of G(V) on V \ A. Then p(Y(a)) is identified from p(V) in G(V) if and only if every set D ∈ D(G(V)Y*) is intrinsic, where G(V)Y* is the induced subgraph on Y*. If identification holds, we have:

p(Y(a)) = ∑Y*\Y ∏D∈D(G(V)Y*) ϕV\D(p(V); G(V)) |A=a
In other words, p(Y(a)) is identified if and only if it can be expressed as a factorization, where every piece corresponds to a kernel associated with a set intrinsic in G(V). Moreover, no term in this factorization contains elements of A as random variables, just as was the case in (1). The above provides a concise formulation of the ID algorithm [16, 14] in terms of the nested Markov model, which contains the causal model of the observed distribution.
If Y = {Y}, A = paG(Y), and V \ {Y} is a fixable set, then the above truncated factorization has a simpler form:

p(Y(a)) = ϕV\{Y}(p(V); G(V)) |paG(Y)=a
In words, to identify the interventional distribution of Y where all parents (direct causes) A of Y are set to values a, we must find a total ordering on variables other than Y (V \ {Y}) that forms a valid fixing sequence. If such an ordering exists, the identifying functional is found from p(V) by applying the fixing operator to each variable in succession, in accordance with this ordering. Fig. 1 shows the identification of the functional p(Y (a)) following a total ordering of fixing M, B, A.
Figure 1:
Identification of p(Y (a)) by following a total order of valid fixing operations.
Before generalizing these tools to the identification of missing data models, we first introduce the representation of these models using DAGs.
3. MISSING DATA MODELS OF A DAG
Missing data models are sets of full data laws (distributions) p(X(1), O, R) composed of the target law p(X(1), O) and the nuisance law p(R|X(1), O) defining the missingness process. The target law is over a set X(1) ≡ {X1(1), … , Xk(1)} of random variables that are potentially missing, and a set O ≡ {O1, … , Om} of random variables that are always observed. The nuisance law defines the behavior of missingness indicators R ≡ {R1, … , Rk} given values of missing and observed variables. Each missing variable Xi(1) has a corresponding observed proxy variable Xi, defined as Xi ≡ Xi(1) if Ri = 1, and Xi ≡ “?” if Ri = 0 (this is the missing data analogue of the consistency property in causal inference). As a result, the observed data law in missing data problems is p(R, O, X), while some function of the target law p(X(1), O), as its name implies, is the target of inference. The goal in missing data problems is to estimate the latter from the former. By the chain rule of probability,
p(X(1), O) = p(R = 1, X, O) / p(R = 1 | X(1), O)        (2)
In other words, p(X(1), O) is identified from the observed data law p(R, O, X) if and only if p(R = 1|X(1), O) is. In general, p(X(1)) is not identified from the observed data law, unless sufficient restrictions are placed on the full data law defining the missing data model.
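A quick numeric check of identity (2) (ours, not from the paper), with a single binary missing variable: dividing the complete-case probabilities by the true propensity recovers the target law exactly. The identity itself is algebraic; the substantive question addressed in the rest of the paper is when the propensity p(R = 1 | X(1), O) can itself be expressed in terms of the observed data law.

```python
# Numeric sanity check of identity (2) with one binary missing variable X1 and
# indicator R1; the numbers are arbitrary and the missingness may be MNAR.

p_x1 = {0: 0.3, 1: 0.7}                 # target law p(X1_true)
p_r1_given_x1 = {0: 0.9, 1: 0.6}        # propensity p(R1 = 1 | X1_true = x)

for x in (0, 1):
    # Observed quantity: p(X1 = x, R1 = 1) equals p(X1_true = x, R1 = 1) by consistency.
    p_complete_case = p_x1[x] * p_r1_given_x1[x]
    # Identity (2): target law = complete cases divided by the propensity score.
    recovered = p_complete_case / p_r1_given_x1[x]
    assert abs(recovered - p_x1[x]) < 1e-12
    print(x, recovered)
```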
Many popular missing data models may be represented as a factorization of the full data law with respect to a DAG [4]. These include the permutation model, the monotone MAR model, the block sequential MAR model, and certain submodels of the no self-censoring model [9, 12, 18].
Given a set of full data laws p(X(1), O, R), a DAG G with the following properties may be used to represent a missing data model: G has a vertex set consisting of X(1), O, R, and X; for each Xi ∈ X, paG(Xi) = {Ri, Xi(1)}; and for each Ri ∈ R, paG(Ri) ⊆ {X(1), O, R} \ {Ri}, so that no proxy is a parent of a missingness indicator. Given a DAG G with the above properties, the missing data model associated with G is the set of distributions p(X(1), O, R) that can be written as
p(X(1), O, R, X) = ∏V∈{X(1), O, R, X} p(V | paG(V))        (3)
where the factors of the form p(Xi | Ri, Xi(1)) are deterministic, to remain consistent with the definition of Xi. Note that by standard results on DAG models, conditional independences in p(X(1), O, R) may be read off from G by the d-separation criterion [5].
4. EXAMPLES OF IDENTIFIED MODELS
In this section, we describe a set of examples of missing data models that factorize as in (3) for different DAGs, where the target law is identified. We start with simpler examples where sequential fixing techniques from causal inference suffice to obtain identification, then move on to more complex examples where existing algorithms in the literature suffice, and finally proceed to examples where no published method known to us obtains identification, illustrating an identifiability gap in existing methods. In these examples, we show how identification may be obtained by appropriately generalizing existing techniques. In these discussions, we concentrate on obtaining identification of the nuisance law p(R|X(1), O) evaluated at R = 1, as this suffices to identify the target law p(X(1), O) by (2). In the course of describing these examples, we will obtain intermediate graphs and kernels. In these graphs, a lower case letter (e.g. υ) indicates that the variable V is evaluated at υ (for Ri, at ri = 1). A square vertex indicates V has been fixed. Drawing the vertex normally with a lower case label indicates V was conditioned on (creating selection bias in the subproblem). For brevity, we use ri to denote {Ri = 1}.
We first consider the block-sequential MAR model [18], shown in Fig. 2 for three variables. The target law is identified by applying the (valid) fixing sequence 〈R1, R2, R3〉 via the operator ϕ to the graph in Fig. 2(a) and to p(R, X). We proceed as follows. The propensity score for R1 is identified immediately. Applying the fixing operator yields the graph shown in Fig. 2(b), and a corresponding kernel q1 in which the missing parent of R2 is now observed. Thus, in the new subproblem represented by this graph and q1, the propensity score for R2 is identified. Applying the fixing operator to this graph and q1 yields the graph shown in Fig. 2(c), and a kernel q2. Finally, in the new subproblem represented by this graph and q2, the propensity score for R3 is identified. Applying the fixing operator to this graph and q2 yields the target law. The identifying functional for the target law only involves monotone cases (cases where Ri = 0 implies Ri+1 = 0), just as would be the case under the monotone MAR model, although this model does not assume monotonicity and is not MAR. In this simple example, identification may be achieved purely by causal inference methods, by treating variables in R as treatments and finding a valid fixing sequence on them. Here, each Ri in the sequence is fixable given that the previous variables are fixed, since all parents of each Ri become observed at the time it is fixed.
Figure 2:
(a), (b), (c) are intermediate graphs obtained in identification of a block-sequential model by fixing {R1, R2, R3} in sequence. (d) is an MNAR model that is identifiable by fixing all Rs in parallel.
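The sequential strategy just described can be checked numerically end to end. In the sketch below, the parent sets are our illustrative reading of Fig. 2(a) — R1 has no parents, paG(R2) = {X1(1), R1}, and paG(R3) = {X2(1), R2} — and may differ from the figure in inessential ways; all probability values are made up. Each propensity is computed from observed quantities available once the earlier indicators equal 1, and the target law is then recovered via (2).

```python
# Numeric sketch of the sequential identification strategy on an assumed
# block-sequential-style structure:
#   R1 has no parents; pa(R2) = {X1_true, R1}; pa(R3) = {X2_true, R2}.
# (Our illustrative reading of Fig. 2(a); all probability values are made up.)
import itertools

vals = (0, 1)

# ---- full data law (not available in practice; used only to build the example) ----
def p_x(x1, x2, x3):                       # unnormalized target law
    return 0.05 + 0.10 * x1 + 0.15 * x2 + 0.20 * x3

def p_r1():                                # p(R1 = 1); R1 has no parents
    return 0.8

def p_r2(x1, r1):                          # p(R2 = 1 | X1_true = x1, R1 = r1)
    return (0.6 + 0.3 * x1) if r1 == 1 else 0.3

def p_r3(x2, r2):                          # p(R3 = 1 | X2_true = x2, R2 = r2)
    return (0.5 + 0.2 * x2) if r2 == 1 else 0.4

norm = sum(p_x(*xs) for xs in itertools.product(vals, repeat=3))
joint = {}
for x1, x2, x3, r1, r2, r3 in itertools.product(vals, repeat=6):
    pr1 = p_r1() if r1 == 1 else 1 - p_r1()
    pr2 = p_r2(x1, r1) if r2 == 1 else 1 - p_r2(x1, r1)
    pr3 = p_r3(x2, r2) if r3 == 1 else 1 - p_r3(x2, r2)
    joint[(x1, x2, x3, r1, r2, r3)] = p_x(x1, x2, x3) / norm * pr1 * pr2 * pr3

# ---- observed-data queries: events over proxies and indicators only ----
def obs(pred):
    return sum(p for k, p in joint.items() if pred(*k))

q_r1 = obs(lambda x1, x2, x3, r1, r2, r3: r1 == 1)          # p(R1 = 1)

def q_r2(x1v):   # p(R2 = 1 | X1 = x1v, R1 = 1); X1 is observed once R1 = 1
    return (obs(lambda x1, x2, x3, r1, r2, r3: r1 == 1 and r2 == 1 and x1 == x1v)
            / obs(lambda x1, x2, x3, r1, r2, r3: r1 == 1 and x1 == x1v))

def q_r3(x2v):   # p(R3 = 1 | X2 = x2v, R2 = 1); X2 is observed once R2 = 1
    return (obs(lambda x1, x2, x3, r1, r2, r3: r2 == 1 and r3 == 1 and x2 == x2v)
            / obs(lambda x1, x2, x3, r1, r2, r3: r2 == 1 and x2 == x2v))

# ---- target law via (2): complete cases divided by the product of propensities ----
for x1, x2, x3 in itertools.product(vals, repeat=3):
    complete = obs(lambda a, b, c, r1, r2, r3:
                   (a, b, c) == (x1, x2, x3) and (r1, r2, r3) == (1, 1, 1))
    recovered = complete / (q_r1 * q_r2(x1) * q_r3(x2))
    assert abs(recovered - p_x(x1, x2, x3) / norm) < 1e-9
print("target law recovered exactly from the observed data law")
```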
Following a total order to fix elements of R is not always sufficient to identify the target law, as noted in [4, 3, 13]. Consider the model represented by the DAG in Fig. 2(d). For any Ri in this model, say R1, d-separation yields an independence that allows its propensity score to be expressed in terms of observed data, so that it is identified. However, if we were to fix R1 in p(X, R), we would obtain a kernel q1 in which selection bias on R2 and R3 is introduced. The fact that q1 is not available at all levels of R2 and R3 prevents us from sequentially obtaining the propensity scores for R2 and R3, due to our inability to sum those variables out of q1.
The model in Fig. 2(d) allows identification of the target law in another way, however. This follows from the fact that the propensity score p(Ri = 1 | paG(Ri)) is identified for each Ri by exploiting conditional independences in p(X, R) displayed by Fig. 2(d). Since the nuisance law factorizes into these propensity scores, it is identified, which means the target law is also identified, as long as we fix R1, R2, R3 in parallel (as in (2)) rather than sequentially. In other words, the model is identified, but no total order on fixing operations suffices for identification. A general algorithm that aims to fix indicators in R in parallel, while potentially exploiting causal inference fixing operations to identify each propensity score, was proposed in [13]. Our subsequent examples show that this algorithm is insufficient to obtain identification of the target law in general, and is thus incomplete.
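The parallel strategy admits the same kind of numeric check. The sketch below mirrors the previous one, but assumes our reading of Fig. 2(d): the parents of each Ri are the missing variables Xj(1) for j ≠ i, with no edges among the R's; again, all numbers are illustrative. Each propensity is identified by conditioning on the proxies of the other two variables with their indicators equal to 1, and all three propensities are divided out in parallel.

```python
# Numeric sketch of the parallel ("fix all Rs at once") strategy, assuming our
# reading of Fig. 2(d): pa(R_i) = {X_j_true : j != i}, no edges among the R's.
# All probability values are made up.
import itertools

vals = (0, 1)

def p_x(x1, x2, x3):                         # unnormalized target law (dependence is fine)
    return 1.0 + x1 + 2.0 * x2 * x3
norm = sum(p_x(*xs) for xs in itertools.product(vals, repeat=3))

f1 = lambda x2, x3: 0.5 + 0.2 * x2 + 0.1 * x3   # p(R1 = 1 | X2_true, X3_true)
f2 = lambda x1, x3: 0.4 + 0.3 * x1 + 0.1 * x3   # p(R2 = 1 | X1_true, X3_true)
f3 = lambda x1, x2: 0.6 + 0.1 * x1 + 0.2 * x2   # p(R3 = 1 | X1_true, X2_true)

joint = {}
for x1, x2, x3, r1, r2, r3 in itertools.product(vals, repeat=6):
    pr = ((f1(x2, x3) if r1 else 1 - f1(x2, x3)) *
          (f2(x1, x3) if r2 else 1 - f2(x1, x3)) *
          (f3(x1, x2) if r3 else 1 - f3(x1, x2)))
    joint[(x1, x2, x3, r1, r2, r3)] = p_x(x1, x2, x3) / norm * pr

def obs(pred):                               # observed-data query: proxies + indicators
    return sum(p for k, p in joint.items() if pred(*k))

def prop1(x2v, x3v):   # p(R1 = 1 | X2 = x2v, X3 = x3v, R2 = 1, R3 = 1)
    num = obs(lambda x1, x2, x3, r1, r2, r3: r1 and r2 and r3 and x2 == x2v and x3 == x3v)
    den = obs(lambda x1, x2, x3, r1, r2, r3: r2 and r3 and x2 == x2v and x3 == x3v)
    return num / den

def prop2(x1v, x3v):   # p(R2 = 1 | X1 = x1v, X3 = x3v, R1 = 1, R3 = 1)
    num = obs(lambda x1, x2, x3, r1, r2, r3: r1 and r2 and r3 and x1 == x1v and x3 == x3v)
    den = obs(lambda x1, x2, x3, r1, r2, r3: r1 and r3 and x1 == x1v and x3 == x3v)
    return num / den

def prop3(x1v, x2v):   # p(R3 = 1 | X1 = x1v, X2 = x2v, R1 = 1, R2 = 1)
    num = obs(lambda x1, x2, x3, r1, r2, r3: r1 and r2 and r3 and x1 == x1v and x2 == x2v)
    den = obs(lambda x1, x2, x3, r1, r2, r3: r1 and r2 and x1 == x1v and x2 == x2v)
    return num / den

# Target law: complete cases divided by the product of the three propensities, as in (2).
for x1, x2, x3 in itertools.product(vals, repeat=3):
    complete = obs(lambda a, b, c, r1, r2, r3:
                   (a, b, c) == (x1, x2, x3) and r1 and r2 and r3)
    recovered = complete / (prop1(x2, x3) * prop2(x1, x3) * prop3(x1, x2))
    assert abs(recovered - p_x(x1, x2, x3) / norm) < 1e-9
print("target law recovered by fixing R1, R2, R3 in parallel")
```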
Consider the DAG in Fig. 3. Since R2 is a child of R3, and the missing variable X2(1) is a parent of R3, we cannot obtain the propensity score for R3 by d-separation in any kernel (including the original distribution) where R2 is not fixed. Thus, any total order on fixing operations of elements in R must start with R1 or R2. Fixing either of these variables entails dividing p(X, R) by its propensity score, which is identified from the observed data. This division induces selection bias in the resulting kernel q1 for a variable not yet fixed (either R3 or R1). Thus, no total order on fixing operations identifies the target law in this model. At the same time, attempting to fix all R variables in parallel fails as well, since we cannot identify the propensity score for R3 either in the original distribution or in any kernel obtained by the standard causal inference operations described in [13]. In particular, in any such kernel or distribution, R3 remains dependent on R2 given X2(1).
Figure 3:
(a) A DAG where Rs are fixed according to a partial order. (b) The CADMG obtained by fixing R2.
However, the target law in this model is identified by following a partial order ≺ of fixing operations. In this partial order, R1 is incomparable with R2, and R2 ≺ R3. This results in an identification strategy where we fix each variable only after the variables earlier than it in the partial order are fixed. That is, the propensity scores for R1 and R2 are obtained directly in the original distribution, without fixing anything. The propensity score for R3, on the other hand, is obtained in the kernel q1 after R2 (the variable earlier than R3 in the partial order) is fixed. The graph corresponding to this kernel is shown in Fig. 3(b). Note that in this graph X2(1) is observed, and there is selection bias on R1. However, it easily follows by d-separation that R3 is independent of R1. It can thus be shown that the propensity score for R3 is identified even though q1 is only available at the value R1 = 1. Since all propensity scores are identified, so is the target law in this model, by (2).
Next, we consider the model in Fig. 4. Here, the propensity scores for R2 and R3 are identified immediately. However, the propensity score for R1 poses a problem. In order to identify this distribution, we either require that R1 is conditionally independent of R2, possibly after some fixing operations, or we must be able to render the missing parent of R1 observable by fixing R2 in some way. Neither seems possible in the problem as stated. In particular, fixing R2 by dividing by its propensity score will necessarily induce selection bias on R1, which will prevent identification of the propensity score for R1 in the resulting kernel.
Figure 4:
A DAG where selection bias on R1 is avoidable by following a partial order fixing schedule on an ADMG induced by latent projecting out X1(1).
However, we can circumvent the difficulty by treating X1(1) as an unobserved variable U1, and attempting the problem in the resulting (hidden variable) DAG shown in Fig. 4(b) and its latent projection ADMG shown in Fig. 4(c), where U1 is “projected out.” In the resulting problem, we can fix variables according to a partial order ≺ where R2 and R3 are incomparable, R2 ≺ R1, and R3 ≺ R1. Thus, we are able to fix R2 and R3 in parallel by dividing by their propensity scores, leading to a kernel q1 and the graph shown in Fig. 4(d); here the fixing notation should be read as “fix all necessary elements that occur earlier than R1 in the partial order, in a way consistent with that partial order.” In this example, this means fixing R2 and R3 in parallel. We describe how fixing operates for general fixing schedules given by a partial order later in the paper. In the kernel q1 the parent of R1 is observed data, meaning that the propensity score for R1 is identified. This implies the target law is identified in this model.
In general, to identify p(Ri = 1 | paG(Ri)), we may need to use separate partial fixing orders on different sets of variables for different Ri ∈ R. In addition, the fact that fixing introduces selection bias sometimes forces us to divide by a kernel in which a set of variables remains random, something that was never necessary in causal inference problems. For a given Ri, the goal of a fixing schedule is to arrive at a kernel where an independence exists allowing us to identify the propensity score, even if some parents of Ri are in X(1) in the original problem. This fixing must be given by a partial order, and sometimes on sets of variables. In addition, some elements of X(1) must be treated as hidden variables. These complications are necessary in general to avoid creating selection bias in subproblems, and ultimately to identify the nuisance law. The following example is a good illustration.
Consider the graph in Fig. 5(a). For R1 and R3, the fixing schedules are empty, and we immediately obtain their propensity scores from the observed data. For R2, the fixing schedule follows the partial order R3 ≺ R1, applied in a graph where we treat the relevant missing variable as a hidden variable U2. This yields a kernel in which the propensity score for R2 is identified.
Figure 5:
(a) A DAG where the fixing operator must be performed on a set of vertices. (b) A latent projection of a subproblem used for identification of the propensity score for R4.
In order to obtain the propensity score for R4, we must either render its missing parent observable through fixing R1, or perform valid fixing operations until we obtain a kernel in which R4 is conditionally independent of R1 given its parents. However, no suitable partial order on individual elements of R exists: any such partial order induces selection bias on variables appearing later in the order, preventing identification of the required distribution for R4. For example, choosing a partial fixing order R1 ≺ R3, where we treat X2(1) and X4(1) as hidden variables, results in selection bias on R3 as soon as we fix R1. Other partial orders fail similarly. However, the following approach is possible in the graph in which we treat X2(1) and X4(1) as hidden variables.
R1 and R3 lie in the same district in the resulting latent projection ADMG, shown in Fig. 5(b). Moreover, the set {R1, R3} is closed under descendants within the district in Fig. 5(b). As a result, R1 and R3 can essentially be viewed as a single vertex from the point of view of fixing. Indeed, we may choose a partial order {R1, R3} ≺ R2, where we fix R1 and R3 as a set. The fixing operation on the set is possible since the kernel we divide by is a function of the observed data law p(X, R); the required equality holds by d-separation. We then obtain a kernel in which the propensity score for R4 can be identified.
Our final example demonstrates that in order to identify the target law, we may need to fix variables outside R, including variables in X(1) that become observed after fixing or conditioning on some elements of R. Fig. 6(a) contains a generalization of the model considered in [13], where O3 is fully observed. In this model, the propensity scores for R4 and R1 are identified immediately, while identification of the propensity score for R2 requires a partial order of fixing operations in a graph where certain missing variables are treated as latent (with the latent projection ADMG shown in Fig. 6(b)) until they are rendered observed by fixing the corresponding missingness indicators. The intermediate graphs that arise from fixing according to this order are shown in Fig. 6(c), (d), (e), (f).
Figure 6:
A DAG where variables besides Rs are required to be fixed.
5. A NEW IDENTIFICATION ALGORITHM
In order to identify the target law in examples discussed in the previous section, we had to consider situations where some variables were viewed as hidden, and marginalized out, and others were conditioned on, introducing selection bias. In addition, fixing operations were performed according to a partial, rather than a total, order as was the case in causal inference problems. Finally, we sometimes fixed sets of variables jointly, rather than individual variables. We now introduce relevant definitions that allow us to formulate a general identification algorithm that takes advantage of all these techniques.
Let V be a set of random variables (and corresponding vertices) consisting of observed variables O, R, X, missing variables X(1), and selected variables S. Let W be a set of fixed observed variables. The following definitions apply to a latent projection G obtained by treating some subset of X(1) as hidden, and to a corresponding kernel q over the remaining variables given W. The graph G can be viewed as a latent variable CADMG for q in which those hidden elements of X(1) are latent. Such CADMGs represent intermediate subproblems in our identification algorithm.
For a set Z contained in a single district of G, let DZ denote that district, and let RZ denote the missingness indicators of the counterfactual parents of Z that are treated as latent. We say Z is fixable in G if
deG(Z) ∩ DZ ⊆ Z,
S ⋂ Z = ∅,
Z ⊥ S ∪ RZ | mbG(Z).
In words, these conditions apply to some Z that is a subset of its own district (which is trivial when the set Z is a singleton). The conditions, in the listed order, require that Z is closed under descendants in the district, that Z does not contain any selected variables, and that Z is independent of both the selected variables S and the missingness indicators RZ of the corresponding counterfactual parents, given the Markov blanket of Z. Consider the graph in Fig. 5(b), where S = ∅, and let Z = {R1, R3}. Z is fixable since Z ⊆ DZ = {R1, R3, X2, X4}, Z is closed under descendants in DZ, and both S and RZ are empty sets.
A set spanning multiple districts in G is said to be fixable if it can be partitioned into a collection of sets Z, such that each Z is a subset of a single district in G and is fixable.
Given an ordering ≺ on vertices V ⋃ W that is topological in G, and a set Z fixable in G, define ϕZ(q; G) as

ϕZ(q(V | W); G) ≡ q(V | W) / q(Z | {⪯ Z} \ Z, W)        (4)

where {⪯ Z} is the set of all elements earlier than Z in the order ≺ (this includes Z itself).
Given a set Z ⊆ R ⋃ O ⋃ X(1), and an equivalence relation ~, let Z/~ be the partition of Z into equivalence classes according to ~. Define a fixing schedule for Z/~ to be a partial order ⊲ on Z/~. For each Z ∈ Z/~, define {⊲ Z} to be the set of elements in Z/~ earlier than Z in the order ⊲, and {⊴ Z} ≡ {⊲ Z} ∪ {Z}. Define ⊲Z and ⊴Z to be the restrictions of ⊲ to {⊲ Z} and {⊴ Z}, respectively. Both restrictions, ⊲Z and ⊴Z, are also partial orders.
We inductively define a valid fixing schedule (a schedule whose fixing operations can be successfully implemented), along with the fixing operator on valid schedules. The fixing operator implements fixing as in (4) on each class Z within an intermediate problem represented by a CADMG, in which some elements of X(1) become observed after fixing the elements of {⊲ Z}, the remaining elements of X(1) are treated as latent variables, and the kernel associated with this CADMG is defined on the observed subset of variables. We also take fixing on the empty schedule to be the identity operation.
We say ⊲Z is valid for {⊲ Z} in G if for every ⊲-largest element Z′ of {⊲ Z}, ⊴Z′ is valid for {⊴ Z′}. If ⊲Z is valid for {⊲ Z}, we define ϕ⊲Z(G) to be a new CADMG obtained from G by:
Removing all edges with arrowheads into elements of {⊲ Z},
Marking any Xi(1) such that Ri ∈ {⊲ Z} as observed,
Marking any element of RZ′, for Z′ ∈ {⊲ Z}, as selected to value 1, where RZ′ is defined with respect to the intermediate graph in which Z′ was fixed,
Treating any elements of X(1) not marked as observed as hidden variables.
We say ⊴Z is valid for {⊴ Z} if ⊲Z is valid for {⊲ Z} and Z is fixable in the graph ϕ⊲Z(G) and kernel ϕ⊲Z(q; G). If ⊴Z is valid, we define

ϕ⊴Z(q; G) ≡ ϕ⊲Z(q; G) / qZ        (5)

where ϕ⊲Z(q; G) is defined inductively, and qZ is the denominator of (4) applied to Z, the kernel ϕ⊲Z(q; G), and the graph ϕ⊲Z(G).
We have the following claims.
Proposition 1. Given a DAG G, the distribution p(Ri = 1 | paG(Ri)) is identifiable from p(R, O, X) if there exists
Z ⊆ X(1) ⋃ R ⋃ O,
an equivalence relation ~ on Z such that {Ri} ∈ Z/~,
a choice, for each Z ∈ Z/~, of a subset of X(1) to be treated as hidden variables in the corresponding subproblem,
the requirement that Ri is fixable given its Markov blanket mbG(Ri) at the end of the schedule,
and a valid fixing schedule ⊲ for Z/~ in G such that for each Z ∈ Z/~, ⊴Z is valid for {⊴ Z}.
Moreover, p(Ri = 1 | paG(Ri)) is equal to the kernel defined inductively as the denominator of (4) for {Ri}, ϕ⊲Ri(p(R, O, X); G), and ϕ⊲Ri(G), evaluated at R = 1.
Proposition 1 implies that p(Ri = 1 | paG(Ri)) is identified if we can find a set of variables that can be fixed according to a partial order (possibly through set fixing) within subproblems where certain variables are hidden. At the end of the fixing schedule, we require that Ri itself is fixable given its Markov blanket in the original DAG. We encourage the reader to view the example provided in Appendix B for a demonstration of valid fixing schedules that may be chosen by Proposition 1.
Corollary 1. Given a DAG G, the target law p(X(1), O) is identified if p(Ri = 1 | paG(Ri)) is identified via Proposition 1 for every Ri ∈ R.
Proof. Follows by Proposition 1 and (2). □
In addition, in special classes of models, the full law, rather than just the target law is identified.
Proposition 2. Given a DAG G, the full law p(R, X(1), O) is identifiable from p(R, O, X) if for every Ri ∈ R, all conditions in Proposition 1 (i–v) are met, and in addition, for each Z ∈ Z/~, RZ does not contain any elements of R, so that no selection to R = 1 is ever induced. Moreover, p(Ri | paG(Ri)) is equal to the kernel defined inductively as the denominator of (4) for {Ri}, ϕ⊲Ri(p(R, O, X); G), and ϕ⊲Ri(G), evaluated at all levels of its arguments.
Proof. Under conditions (i–v) in Proposition 1, we are guaranteed to identify the target law, obtaining each p(Ri = 1 | paG(Ri)) as a kernel in which some indicators Rj may be evaluated only at Rj = 1. Under the additional restriction stated above, all such kernels can be evaluated at all levels. □
Proposition 2 always fails if a special collider structure Rj → Ri ← Xj(1), which we call the colluder, exists in G. The following lemma shows that the presence of a colluder implies the full law is not identified.
Lemma 1. In a DAG G, if there exist Ri, Rj ∈ R such that {Rj, Xj(1)} ⊆ paG(Ri), then p(Ri | paG(Ri)) is not identified. Hence, the full law p(X(1), R) is not identified.
Proof. Follows by providing two different full laws that agree on the observed law on a DAG with 2 counterfactual random variables (Appendix C). This result holds for an arbitrary DAG representing a missing data model that contains the colluder structure mentioned above. □
Propositions 1 and 2 do not provide a computationally efficient search procedure for a valid fixing schedule ⊲ that permits identification of p(Ri = 1 | paG(Ri)) for a particular Ri ∈ R. Nevertheless, the following lemma shows how to easily obtain identification of the target law in a restricted class of missing data DAGs.
Lemma 2. Consider a DAG G such that for every Ri ∈ R, every missing parent Xj(1) ∈ paG(Ri) has its indicator Rj as an ancestor of Ri. Then for every Ri ∈ R, the fixing schedule ⊲ given by the partial order induced by the ancestrality relation on R is valid in G, taking each equivalence class to be a singleton {Rj} for every Rj ∈ R. Thus the target law is identified.
6. DISCUSSION AND CONCLUSION
In this paper we addressed a significant gap in identification theory for missing data models representable as DAGs. We showed, by example, that straightforward application of the identification machinery developed for causal inference with hidden variables does not suffice for identification in missing data problems, and discussed the generalizations required to make it suitable for this task. These generalizations included fixing (possibly sets of) variables according to a partial order, and avoiding selection bias by introducing hidden variables into the problem even though they were not present in the initial problem statement. Proposition 1 gives a characterization of how to utilize these generalized procedures to obtain identification of the target law, while Proposition 2 gives a similar characterization for the full law. While neither of these propositions yields a computationally efficient algorithm for identification in general, Lemma 2 provides such a procedure for a special class of missing data models where the partial order of fixing operations required for each Ri is easy to determine. Providing a computationally efficient search procedure for identification in all DAG models of missing data, and questions regarding the completeness of our proposed algorithm, are left for future work.
Acknowledgements
This project is sponsored in part by the National Institutes of Health grant R01 AI127271-01 A1 and the Office of Naval Research grant N00014-18-1-2760.
References
- [1]. Huang Yimin and Valtorta Marco. Pearl's calculus of interventions is complete. In Twenty Second Conference on Uncertainty in Artificial Intelligence, 2006.
- [2]. Lauritzen Steffen L. Graphical Models. Oxford, U.K.: Clarendon, 1996.
- [3]. Mohan Karthika and Pearl Judea. Graphical models for recovering probabilistic and causal queries from missing data. In Advances in Neural Information Processing Systems, pages 1520–1528, 2014.
- [4]. Mohan Karthika, Pearl Judea, and Tian Jin. Graphical models for inference with missing data. In Advances in Neural Information Processing Systems, pages 1277–1285, 2013.
- [5]. Pearl Judea. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, 1988.
- [6]. Pearl Judea. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2nd edition, 2009.
- [7]. Richardson Thomas S., Evans Robin J., Robins James M., and Shpitser Ilya. Nested Markov properties for acyclic directed mixed graphs. arXiv:1701.06686v2, 2017. Working paper.
- [8]. Robins James M. A new approach to causal inference in mortality studies with sustained exposure periods – application to control of the healthy worker survivor effect. Mathematical Modelling, 7:1393–1512, 1986.
- [9]. Robins James M. Non-response models for the analysis of non-monotone non-ignorable missing data. Statistics in Medicine, 16:21–37, 1997.
- [10]. Rubin Donald B. Inference and missing data (with discussion). Biometrika, 63:581–592, 1976.
- [11]. Sadinle Mauricio and Reiter Jerome P. Item-wise conditionally independent nonresponse modelling for incomplete multivariate data. Biometrika, 104(1):207–220, 2017.
- [12]. Shpitser Ilya. Consistent estimation of functions of data missing non-monotonically and not at random. In Advances in Neural Information Processing Systems, pages 3144–3152, 2016.
- [13]. Shpitser Ilya, Mohan Karthika, and Pearl Judea. Missing data as a causal and probabilistic problem. In Proceedings of the Thirty First Conference on Uncertainty in Artificial Intelligence (UAI-15), pages 802–811. AUAI Press, 2015.
- [14]. Shpitser Ilya and Pearl Judea. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06). AAAI Press, 2006.
- [15]. Tchetgen Tchetgen Eric J., Wang Linbo, and Sun BaoLuo. Discrete choice models for non-monotone nonignorable missing data: Identification and inference. Statistica Sinica, 28(4):2069–2088, 2018.
- [16]. Tian Jin and Pearl Judea. A general identification condition for causal effects. In Eighteenth National Conference on Artificial Intelligence, pages 567–573, 2002.
- [17]. Tsiatis Anastasios. Semiparametric Theory and Missing Data. Springer-Verlag New York, 1st edition, 2006.
- [18]. Zhou Yan, Little Roderick J. A., and Kalbfleisch John D. Block-conditional missing at random models for missing data. Statistical Science, 25(4):517–532, 2010.