Published in final edited form as: Prev Sci. 2010 Dec;11(4):384–396. doi: 10.1007/s11121-010-0175-4

Handling Missing Data in Randomized Experiments with Noncompliance

Booil Jo 1, Elizabeth M Ginexi 2, Nicholas S Ialongo 3
PMCID: PMC2912956  NIHMSID: NIHMS196810  PMID: 20379779

Abstract

Treatment noncompliance and missing outcomes at posttreatment assessments are common problems in field experiments in naturalistic settings. Although the two complications often occur simultaneously, statistical methods that address both have not been routinely considered in data analysis practice in the prevention research field. This paper shows that identification and estimation of causal treatment effects considering both noncompliance and missing outcomes can be conducted relatively easily under various missing data assumptions. We review a few assumptions on missing data in the presence of noncompliance, including the latent ignorability proposed by Frangakis and Rubin (Biometrika 86:365–379, 1999), and show how these assumptions can be used in the parametric complier average causal effect (CACE) estimation framework. As a straightforward form of sensitivity analysis, we propose the use of alternative missing data assumptions, which provides a range of causal effect estimates. In this way, we are less likely to settle for a possibly biased causal effect estimate based on a single assumption. We demonstrate how alternative missing data assumptions affect identification of causal effects, focusing on the CACE. The data from the Johns Hopkins School Intervention Study (Ialongo et al., Am J Community Psychol 27:599–642, 1999) are used as an example.

Keywords: Causal inference, Complier average causal effect, Latent ignorability, Missing at random, Missing data, Noncompliance

Introduction

In field experiments, which are frequently employed in prevention research, it is common to have missing data due to dropout or nonresponse (i.e., failing to provide the data) at followup assessments. Along with the extensive development of statistical methods and analysis tools, proper handling of missing data has become a much more manageable task in recent years. Multiple imputation (Schafer 1997) and likelihood-based estimation methods (Little and Rubin 2002) are now considered standard analysis options for handling missing data. Another common complication in field experiments is noncompliance of study participants with the assigned programs or treatments. Although not as widespread as techniques for handling missing outcomes, analytical strategies for handling noncompliance are also increasingly utilized in prevention research (Stuart et al. 2008). Since Angrist et al. (1996) demonstrated the possibility of making causal inferences in the presence of noncompliance, variations of the instrumental variable approach, such as the complier average causal effect (CACE) estimation method, have received much attention in various research fields that involve experiments in naturalistic settings. More recently, Frangakis and Rubin (2002) proposed a broader conceptual framework called “principal stratification,” which generalizes the idea laid out in Angrist et al. (1996). Principal stratification has rapidly become an essential framework for causal inference taking into account heterogeneity in intermediate outcomes (also known as mediators in social science), including, but not limited to, compliance behavior.

As one may suspect from the apparently similar nature of noncompliance and nonresponse at followup assessments, the two complications often occur simultaneously in field experiments. How noncompliance and missing outcomes can collectively affect causal effect estimation has also been studied in the principal stratification framework (Dunn et al. 2003; Frangakis and Rubin 1999; Jo 2008a; Jo and Vinokur 2010; Mattei and Mealli 2007; Mealli et al. 2004; O’Malley and Normand 2004; Peng et al. 2004), although its history is even shorter than that of the CACE estimation method. If treatment compliance is fully observed, noncompliance does not add complexities in handling missing outcome data. However, in reality, compliance information is almost always incomplete. In field experiments, a control condition often does not involve any particular treatments or programs, and therefore we do not observe compliance under the control condition. Some experiments may employ different treatments or programs for different conditions, and in this case, the observed compliance is mostly not comparable across randomized arms. For example, if complying with a standard program is easier than complying with a new program, observed compliance is unlikely to be comparable across the two programs, which leads to a situation where comparable compliance information is partially missing. Given that, co-occurrence of noncompliance and missing outcomes is a concern in causal treatment effect estimation because causal effects can be differently identified depending on what we assume about the unknown relationship between noncompliance and availability of the outcome data.

Although noncompliance and missing outcomes are common problems in prevention research, statistical methods that address both complications have not been routinely considered in analysis practice. As pointed out in Frangakis and Rubin (1999), if we impose an assumption that deviates from the true relationship between noncompliance and outcome missingness, the causal treatment effect estimates can be biased. However, given that we do not observe the true relationship between the two, investigating possible bias mechanisms and establishing plausible ranges of causal treatment effects can be an overwhelming task even for researchers with advanced statistical skills. One practical way of addressing the complication of having both noncompliance and missing outcomes is to consider a few scientifically plausible assumptions about the relationship between the two. The use of alternative assumptions will at least provide a range of causal effect estimates and therefore serves as a form of sensitivity analysis. In this paper, we review a few previously discussed assumptions on missing data in the presence of noncompliance and provide a tutorial on using these assumptions in the parametric CACE estimation framework. In particular, we will use the Mplus program (Muthén and Muthén 1998–2009), which has been increasingly used for parametric CACE estimation.

This paper is organized as follows: Section “Johns Hopkins PIRC School Intervention Study” provides a brief description of the Johns Hopkins School Intervention Study, which will be used as an example throughout the paper. Section “Complier Average Causal Effect (CACE)” defines the complier average causal effect in the presence of both noncompliance and missing outcomes. In Section “Missing Data Assumptions,” we discuss alternative missing data assumptions and identification of causal effects. Section “Application to the PIRC Study” shows how the alternative missing data assumptions can be used in estimating CACE using the data from the Hopkins Study. Section “Conclusion” provides conclusions. Some readers may find the details provided in Sections “Complier Average Causal Effect (CACE)” and “Missing Data Assumptions” somewhat too technical, although these are the key steps for researchers who intend to actively get involved in the missing data modeling process. We intend to provide minimal, but sufficient, information to fully understand the alternative procedures of handling missing data in the presence of noncompliance without going back and forth across related technical papers. Readers who want to first see how these methods can be actually implemented may skip the details and focus on the definition of CACE in Section “Complier Average Causal Effect (CACE)” and definitions of missing data assumptions in Section “Missing Data Assumptions,” and then go to the real data application in Section “Application to the PIRC Study.”

Johns Hopkins PIRC School Intervention Study

We will use the school intervention study conducted by the Johns Hopkins University Preventive Intervention Research Center (PIRC) in 1993–1994 (Ialongo et al. 1999) as an example. The PIRC study was designed to improve academic achievement and to reduce early behavioral problems. First-grade children were randomly assigned to the control condition or to one of the intervention conditions. Two programs were employed in the study: the Classroom-Centered intervention and the Family-School Partnership (FSP) intervention. In this paper, we will compare the control and the FSP intervention groups. In the FSP condition, parents were asked to implement 66 take-home activities related to literacy and mathematics, whereas no special instructions were given to the parents of children in the control condition.

The intervention was provided over the first-grade school year, following a baseline assessment in the early fall. Various outcomes related to academic achievement and behavioral problems were measured at followup assessments. The outcome analyzed in this paper is shy behavior assessed in the spring of the second grade (18 months from the baseline assessment). Shy behavior was measured by the TOCA-R (Teacher Observation of Classroom Adaptation-Revised; Werthamer-Larsson et al. 1991), which was designed to assess children’s adequacy of performance on core tasks in the classroom as rated by the teacher. The shy behavior score is a composite variable that consists of TOCA-R items (scale ranges from 1 to 6) such as being friendly to classmates, interacting with classmates and teachers, playing with classmates, and initiating interactions with classmates. Social interaction/engagement is one of the tasks teachers identified in focus groups in the Woodlawn study (Kellam et al. 1975) as essential to success in the elementary school classroom. In their study, the term “shy behavior” was actually used to describe the maladaptive form of the behavior. Children who are socially disengaged would be less likely to participate in class discussion or seek out the help of their teacher when they do not understand something, which may then lead to academic problems. These shy children would also be less likely to develop healthy peer relations because they fail to interact adequately with their peers.

In the PIRC FSP intervention, many parents failed to complete a sufficient number of the assigned activities (66 total), and over-reporting of completion was also expected (completion was self-reported by parents). Given that, the intervention was not expected to show any desirable effects unless parents had reported a quite high level of completion. When receipt of the intervention treatment is defined as completing about two-thirds of the activities (45 out of 66), about 47% of children in the FSP intervention condition properly received the intervention treatment. The trial also suffered from subsequent missing outcomes. The overall response (i.e., providing outcome data) rate was 0.84 (0.88 in the intervention, 0.80 in the control) at the second grade followup assessment. In the intervention condition, the average response rate was 0.91 for those who completed 45 or more activities and 0.85 for those who completed fewer than 45. Since compliance types could not be observed among individuals in the control condition, how noncompliance with the intervention treatments was related to missing data status at the followup under the control condition was also unknown. We may assume that the same relationship between noncompliance and outcome missing data status we observe under the intervention condition would also apply to the control condition, or we may assume something different. The resulting causal treatment effect estimates may be sensitive to what we assume about this unknown missing data mechanism. The key difficulty here is that these assumptions are not directly testable based on observed information.
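
As a quick arithmetic check, the overall intervention-condition response rate can be recovered (approximately) as a compliance-weighted average of the two rates just reported. The short Python sketch below treats the rounded figures as exact and is not part of the original analyses.

# Consistency check using the response rates reported in the text
# (rounded values treated as exact; purely illustrative).
pi_c = 0.47          # proportion completing 45 or more of the 66 activities
r_complier = 0.91    # response rate among intervention-condition compliers
r_nevertaker = 0.85  # response rate among intervention-condition never-takers

overall = pi_c * r_complier + (1 - pi_c) * r_nevertaker
print(round(overall, 2))  # approximately 0.88, the reported intervention response rate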

Complier Average Causal Effect (CACE)

Following the convention originating in Angrist et al. (1996), we will define compliers as individuals who would receive the treatment if offered. Noncompliers include three types of individuals. Never-takers are individuals who would not receive the treatment regardless of whether it is offered. Always-takers are individuals who would always receive the treatment regardless of whether it is offered. Defiers are individuals who would do the opposite of what they are assigned to do. Among these three types of noncompliance, we will focus on the never-taker category in this paper. When individuals are not allowed to access any treatment other than the one they are assigned to take, as in the JHU PIRC trial, never-takers are the only possible type of noncompliers. In this trial, students and parents assigned to the control condition did not have access to the home learning activity materials.

In line with the JHU PIRC trial, we will consider a randomized experiment where individuals are assigned to one of the two conditions. The assignment status Zi = 1 if individual i (i = 1, …, N) is randomly assigned to the intervention, and Zi = 0 if assigned to the control condition. The observed treatment receipt status Si = 1 if individual i received the treatment (in JHU PIRC, this means completing at least 45 intervention activities), and Si = 0 otherwise. The response indicator Ri = 1 if i provides outcome information at the second year followup assessment, and Ri = 0 if not. Note that Zi, Si, Ri are always observed. The shy behavior outcome Yi is observed if Ri = 1, and unobserved if Ri = 0.
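
To make the notation concrete, the listing below shows a few hypothetical records in Python, with variable names matching the Mplus inputs in Appendix 1; all values are invented and the coding of the covariates is purely illustrative.

# Hypothetical records illustrating the variable coding (values are invented;
# variable names follow the Mplus inputs in Appendix 1).
# Z: treatment assignment, S: observed treatment receipt, R: response indicator,
# shy6: grade-2 shy behavior outcome (999 = missing), remaining columns: baseline covariates.
records = [
    # Z, S, R, shy6, shy0, male, health, black
    (1, 1, 1, 2.1,  2.8, 1, 0, 1),  # assigned to FSP, received treatment, outcome observed
    (1, 0, 0, 999,  3.5, 0, 1, 1),  # assigned to FSP, did not receive treatment, outcome missing
    (0, 0, 1, 3.0,  3.1, 1, 0, 0),  # control condition (S = 0 by design), outcome observed
]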

Let Yi(1) denote the potential outcome for individual i when assigned to the intervention condition, and Yi(0) when assigned to the control condition. Then, the effect of treatment assignment can be defined as Yi(1) − Yi(0). This definition considers potential values of posttreatment outcomes under all compared conditions. This way of defining treatment effects is often referred to as the potential outcomes approach (Holland 1986; Neyman 1923; Rubin 1978, 1980). The individual-level causal effect Yi(1) − Yi(0) cannot be identified because an individual cannot be assigned to both the treatment and control conditions. However, the causal effect of treatment assignment can be identified at the average level under certain assumptions. The overall average causal effect, also known as the intention-to-treat (ITT) effect, is defined as μ1 − μ0, where μ1 is the population mean potential outcome under the treatment condition and μ0 under the control condition. The key advantage of using the concept of potential outcomes is that the underlying assumptions necessary for causal interpretation can be explicitly clarified. This becomes more critical when identifying causal treatment effects that vary across different intermediate posttreatment outcome values, such as compliance status, because of the increased complexity. In recent years, principal stratification (Frangakis and Rubin 2002) has emerged as a popular method of approaching causal inference with intermediate outcomes in the potential outcomes framework (Jo 2008b). Principal stratification refers to classification of individuals on the basis of potential values of intermediate outcomes under all treatment conditions that are compared.

Let Si(1) denote the potential treatment receipt status for individual i when Z = 1, and Si(0) when Z = 0. Since individuals were prohibited from receiving a different treatment than the one that they were assigned to, only two compliance types are possible based on Z and S. The principal strata membership, or latent compliance type Ci = c if individual i would receive the treatment when assigned to the intervention condition, and Ci = n if i would not receive the treatment regardless of the intervention assignment. That is,

$$C_i = \begin{cases} c\ (\text{complier}) & \text{if } S_i(1) = 1 \text{ and } S_i(0) = 0,\\ n\ (\text{never-taker}) & \text{if } S_i(1) = 0 \text{ and } S_i(0) = 0, \end{cases}$$

which implies that Ci is observed if individual i is assigned to the intervention condition, but unobserved if assigned to the control condition.
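
This observational asymmetry can be sketched in a few lines of Python (illustrative only; it simply restates the definition above).

def compliance_type(z, s):
    # Map observed assignment (z) and treatment receipt (s) to compliance type,
    # assuming no always-takers or defiers, as in the JHU PIRC trial.
    if z == 1:
        return "complier" if s == 1 else "never-taker"
    return "latent"  # under the control condition, S = 0 for everyone, so the type is unobserved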

First, let us define CACE without worrying about the missing data indicator Ri. Let Yi(1, Si(1)) denote the potential outcome for individual i when assigned to the intervention condition, and Yi(0, Si(0)) when assigned to the control condition. Then, the effect of treatment assignment can be defined as Yi(1, Si(1)) − Yi(0, Si(0)). In particular, the effect of treatment assignment for complier i is defined as Yi(1, Si(1) = 1) − Yi(0, Si(0) = 0). This effect cannot be identified because an individual cannot be assigned to both the treatment and control conditions, but it can be identified at the average level under certain assumptions. That is, the complier average causal effect (CACE) is defined at the population average level as

$$\mathrm{CACE} = \mu_{1c} - \mu_{0c}, \qquad (1)$$

where μ1c is the population mean potential outcome for compliers when assigned to the treatment condition and μ0c when assigned to the control condition. Some introductory materials on CACE can be found in several places (e.g., Jo 2002a; Jo et al. 2008; Little and Yau 1998).

Recall that compliance type Ci is observed in the intervention condition, but unobserved in the control condition. In other words, in Eq. 1, μ1c is observed from the intervention group data, but μ0c is not observed from the control group data. Instead, we observe the overall control condition mean. The distribution of the potential outcome under the control condition can be thought of as a mixture of the complier and never-taker distributions, so the overall population mean potential outcome under the control condition (μ0) is a weighted average of the two strata means. That is, μ0 = (1 − πc)μ0n + πcμ0c, where πc is the proportion of compliers in the population, and μ0n is the population mean potential outcome for never-takers under the control condition. From this mixture, μ0c = {μ0 − (1 − πc)μ0n}/πc. Then, Eq. 1 can be rewritten as

$$\mathrm{CACE} = \mu_{1c} - \frac{\mu_0 - (1 - \pi_c)\,\mu_{0n}}{\pi_c}. \qquad (2)$$

The following assumptions are commonly used in identifying the CACE, and we will assume in this paper that these assumptions hold.

  • Assumption 1: Random assignment—Individuals are randomly assigned to the intervention (Zi = 1) or to the control (Zi = 0) condition.

  • Assumption 2: Stable unit treatment value (SUTVA)—Potential outcomes (including intermediate outcomes such as compliance) for each person are unrelated to the treatment status of other individuals (Rubin 1978, 1980, 1990). In the JHU PIRC example, the unit of randomization was a classroom. Therefore, it is likely that the level of interaction among individuals across different treatment conditions remains about the same as that observed when the unit of randomization is an individual (Sobel 2006). However, interaction within the same classroom can be a problem. In principle, this interaction can be statistically handled in the principal stratification context (Frangakis et al. 2002; Jo et al. 2008), although how well it can be handled with small numbers of clusters (e.g., nine classrooms in each arm in the JHU PIRC example) is still unclear. In this paper, we assume that this interaction is not considerable.

  • Assumption 3: Outcome Exclusion Restriction (OER)—There is no effect of treatment assignment on the outcome for those who would not change their treatment receipt behavior regardless of the treatment assignment status (i.e., never-takers and always-takers). This assumption can also be thought of as disallowing any direct effect of treatment assignment (Jo 2008b). Although we assume OER in this paper to focus on missing data assumptions, this assumption may be violated if treatment assignment itself has an effect on the outcome (e.g., a psychological effect). In the randomized trial setting we consider in this paper, this assumption means that μ0n = μ1n, where μ1n is the population mean potential outcome for never-takers under the treatment condition.

  • Assumption 4: Monotonicity—There are no defiers. This assumption is automatically satisfied if individuals are not allowed to take other treatments than the one they are assigned to take, as in the JHU PIRC trial.

Under the assumptions 1–4 above, the CACE can be expressed as

$$\mathrm{CACE} = \mu_{1c} - \frac{\mu_0 - (1 - \pi_c)\,\mu_{1n}}{\pi_c}, \qquad (3)$$

where μ0, μ1c, μ1n, and πc are all directly estimable (e.g., using sample statistics, or using maximum likelihood estimation as in this paper) from the observed data. In analyzing data from the JHU PIRC trial using Mplus, we will consistently use maximum likelihood estimation.
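
To make Eq. 3 concrete, the short Python sketch below plugs invented summary statistics (not the PIRC estimates) into the formula; missing outcome data are ignored at this stage.

# Invented summary statistics used only to illustrate Eq. 3 (not the PIRC values).
mu_1c = 2.6   # mean outcome for compliers under the treatment condition
mu_1n = 3.2   # mean outcome for never-takers under the treatment condition (= mu_0n under OER)
mu_0  = 3.0   # overall mean outcome under the control condition
pi_c  = 0.47  # proportion of compliers

mu_0c = (mu_0 - (1 - pi_c) * mu_1n) / pi_c  # complier mean under control, from the mixture
cace = mu_1c - mu_0c                        # Eq. 3
print(round(mu_0c, 3), round(cace, 3))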

Let us now consider availability of the outcome data in defining CACE. Let $\pi_1^R$ denote the proportion of individuals who provide outcome information in the population under the treatment condition, and $\pi_0^R$ under the control condition. As with the outcome itself, $\pi_0^R$ can be thought of as a mixture of the complier and never-taker response rates, which cannot be separately observed. That is, $\pi_0^R = (1 - \pi_c)\pi_{0n}^R + \pi_c\pi_{0c}^R$, where $\pi_{0n}^R$ is the response rate for never-takers under the control condition, and $\pi_{0c}^R$ for compliers. With this information, the observed average outcome of the control condition is

$$\mu_0^{\mathrm{obs}} = \frac{\pi_{0n}^R}{\pi_0^R}(1 - \pi_c)\,\mu_{0n} + \frac{\pi_{0c}^R}{\pi_0^R}\,\pi_c\,\mu_{0c}. \qquad (4)$$

From (4), μ0c can be written as

$$\mu_{0c} = \frac{\mu_0^{\mathrm{obs}}\,\pi_0^R - \mu_{0n}\,\pi_{0n}^R(1 - \pi_c)}{\pi_0^R - \pi_{0n}^R(1 - \pi_c)}. \qquad (5)$$

Then, using (5), Eq. 2 can be rewritten as

$$\mathrm{CACE} = \mu_{1c} - \frac{\mu_0^{\mathrm{obs}}\,\pi_0^R - \mu_{0n}\,\pi_{0n}^R(1 - \pi_c)}{\pi_0^R - \pi_{0n}^R(1 - \pi_c)}, \qquad (6)$$

where $\mu_0^{\mathrm{obs}}$, $\pi_0^R$, and πc are directly estimable from the observed data. Under OER, μ0n can be replaced by μ1n, which is also directly estimable from the observed data. However, further restrictions are necessary to identify $\pi_{0n}^R$ (recall that compliance status is not observed under the control condition), which is needed to fully identify CACE. In the following section, we will review a few previously suggested assumptions that can be imposed on $\pi_{0n}^R$.

Missing Data Assumptions

One way to model the missing data mechanism with incomplete compliance information is to apply the standard missing data assumption that the missing data mechanism is ignorable, or data are missing at random (MAR: Little and Rubin 2002) conditional on observed information, which includes the observed portion of compliance data. Under MAR, the missing-data mechanism is ignorable for likelihood-based inferences. In other words, the missing data mechanism does not even need to be explicitly modeled when likelihood-based estimation methods are used.

  • Missing data assumption 1: Missing At Random (MAR)—The probability of the outcome being recorded is not associated with the outcome conditional on treatment assignment, observed treatment receipt status, and covariates. In the current setting, a sufficient restriction to satisfy this condition is that $\pi_{0c}^R = \pi_{0n}^R$, meaning that compliers and never-takers have the same response rate under the control condition (they may still have different response rates under the treatment condition). Under this assumption, missingness is not attributable to unobserved data, including unobserved compliance status under the control condition.

    Under MAR, $\pi_{0c}^R = \pi_{0n}^R = \pi_0^R$. With MAR and the common assumptions defined in the previous section, Eq. 6 can be rewritten as
    $$\mathrm{CACE}_{\mathrm{MAR}} = \mu_{1c} - \frac{\mu_0^{\mathrm{obs}} - \mu_{1n}(1 - \pi_c)}{\pi_c}, \qquad (7)$$

    where all involved parameters are directly estimable from the observed data, and therefore CACE is fully identified.

    However, if missingness is attributable to unobserved data, the missing-data mechanism is nonignorable. In this case, likelihood-based inferences can be sensitive to whether and how the missing-data mechanism is specified in the statistical model. In the PIRC trial, for example, a low level of completion of intervention activities by parents may indicate family instability: these families are more likely to move from place to place (or their children are more likely to be sent to live with a relative or placed in foster care) due to financial stress or other reasons related to drug or alcohol problems, and it is therefore harder to locate these parents and their children at follow-up assessments. In other words, response probability may be higher among potentially well-complying families even in the absence of intervention treatments (i.e., under the control condition). In this case, missingness is attributable to unobserved compliance status in the control condition. That is, the missing data mechanism is no longer ignorable and cannot be modeled within the MAR framework.

    Frangakis and Rubin (1999) considered a missing data mechanism called “latent ignorability (LI),” which means that potential outcomes and the associated potential outcome missing data indicators are independent within each level of the latent compliance variable (as opposed to observed compliance in MAR). This assumption is weaker than the conventional ignorability (MAR) assumption, and therefore allows for more flexibility in missing data modeling. Since LI is weaker than MAR, if LI is violated, MAR is also violated. Under LI, the missing data mechanism may be attributable not only to the observed compliance data, but also to the unobserved compliance data (i.e., the missing-data mechanism can be nonignorable). However, the LI assumption itself is not sufficient to identify CACE (LI is weaker than MAR). The following are two examples of missing data assumptions that can be imposed under LI to identify CACE.

  • Missing data assumption 2: Response Exclusion Restriction (RER)—For never-takers, the probability of the outcome being recorded is not affected by treatment assignment status. In the current setting, this assumption implies that $\pi_{1n}^R = \pi_{0n}^R$ (Frangakis and Rubin 1999). Some deviation from RER is expected in the PIRC trial. Poorly complying families might have felt some benefit from the intervention and might have felt more obliged to provide information at followup assessments than families in the control condition who would have complied poorly if the intervention had been offered. It is also possible that these families might have been demoralized by failing to comply with the intervention and might have provided data at a lower rate than families in the control condition who would have complied poorly if the intervention had been offered.

    With RER and the common assumptions defined in the previous section, Eq. 6 can be rewritten as
    $$\mathrm{CACE}_{\mathrm{RER}} = \mu_{1c} - \frac{\mu_0^{\mathrm{obs}}\,\pi_0^R - \mu_{1n}\,\pi_{1n}^R(1 - \pi_c)}{\pi_0^R - \pi_{1n}^R(1 - \pi_c)}, \qquad (8)$$

    where all involved parameters are directly estimable from the observed data.

  • Missing data assumption 3: Stable Complier Response (SCR)—For compliers, the probability of the outcome being recorded is unaffected by treatment assignment status. In other words, whether compliant study participants provide the outcome data would not change regardless of intervention assignment. This assumption has been previously considered as a complier version of the response exclusion restriction (Mealli et al. 2004). In the current setting, this assumption implies that $\pi_{0c}^R = \pi_{1c}^R$. In the JHU PIRC trial, compliers’ response behavior is likely to be stable regardless of treatment assignment because good compliance is an indicator of family stability. However, this assumption can be violated if compliers provide the outcome data more often when assigned to the treatment condition than when assigned to the control condition because they feel they have benefited from the intervention and feel obliged to provide the data.

Under SCR, $\pi_{0c}^R = \pi_{1c}^R$. With SCR and the common assumptions defined in the previous section, Eq. 6 can be rewritten as

$$\mathrm{CACE}_{\mathrm{SCR}} = \mu_{1c} - \frac{\mu_0^{\mathrm{obs}}\,\pi_0^R - \mu_{1n}\left(\pi_0^R - \pi_{1c}^R\,\pi_c\right)}{\pi_{1c}^R\,\pi_c}, \qquad (9)$$

where all involved parameters are directly estimable from the observed data.
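
Because Eqs. 7–9 involve only directly estimable quantities once the respective restriction is imposed, they can be compared numerically. The Python sketch below uses invented summary statistics (not the PIRC estimates) to show how the same observed quantities lead to different CACE values under MAR, RER, and SCR; the ordering of the three estimates depends on these inputs and need not match the ordering found in the PIRC analysis.

# Invented summary statistics (not the PIRC estimates), used only to show how
# Eqs. 7-9 resolve the unidentified control-condition response rates differently.
mu_1c, mu_1n = 2.6, 3.2   # complier and never-taker means under the treatment condition
mu_0_obs     = 3.0        # observed (responders only) control-condition mean
pi_c         = 0.47       # complier proportion
pi_0R        = 0.80       # observed control-condition response rate
pi_1nR       = 0.85       # never-taker response rate under the treatment condition
pi_1cR       = 0.91       # complier response rate under the treatment condition

# Eq. 7 (MAR): compliers and never-takers share the control-condition response rate.
cace_mar = mu_1c - (mu_0_obs - mu_1n * (1 - pi_c)) / pi_c

# Eq. 8 (RER): never-takers' response rate is unaffected by assignment.
num = mu_0_obs * pi_0R - mu_1n * pi_1nR * (1 - pi_c)
den = pi_0R - pi_1nR * (1 - pi_c)
cace_rer = mu_1c - num / den

# Eq. 9 (SCR): compliers' response rate is unaffected by assignment.
num = mu_0_obs * pi_0R - mu_1n * (pi_0R - pi_1cR * pi_c)
den = pi_1cR * pi_c
cace_scr = mu_1c - num / den

print(round(cace_mar, 3), round(cace_rer, 3), round(cace_scr, 3))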

In this paper, we focus on identification of CACE by directly imposing restrictions on the relationship between outcome missingness and noncompliance. However, it is also possible to identify CACE relying on auxiliary information such as covariates (Jo 2002b) and proper priors (Hirano et al. 2000), which we do not cover in this paper. For example, in Emsley et al. (2010), causal effects are identified under LI by imposing restrictions on the effects of covariates on the outcome instead of on the relationship between outcome missingness and noncompliance (they also provide Mplus input files for the analyses).

Application to the PIRC Study

In this section, the three missing data assumptions discussed in the previous section will be applied to the PIRC data to identify CACE. The four common assumptions (random assignment, SUTVA, OER, monotonicity) will be consistently assumed in this process. As shown in the previous section, the models discussed here can be easily estimated using the instrumental variable approach if covariates are not present. The difference between the instrumental variable and maximum likelihood estimates is often minimal, although there are situations where one method performs better than the other. In principle, it is possible to incorporate covariates in the instrumental variable framework (Bloom 1984; Little and Yau 1998). However, the estimation procedure can be quite cumbersome even without considering missing outcome data, and little is known about how the method works in simultaneously handling noncompliance and missing data when covariates are present. We will use a maximum likelihood estimation approach, which is known to be often more efficient than the instrumental variable approach in estimating the CACE (Imbens and Rubin 1997; Little and Yau 1998), in particular when covariates are included in the estimation. To carry out maximum likelihood estimation of CACE, the Mplus program (Muthén and Muthén 1998–2009) is used. Mplus input setups used for actual PIRC data analyses are provided in Appendix 1. Corresponding Mplus output files are provided in Appendix 2. For readers who are interested in hands-on experience, we also provide an artificial data set that can be analyzed using the same input files provided in Appendix 1. See Appendix 3 for more details regarding the artificial data.

Let us assume that the following expression represents the true model for the shy behavior outcome at the second grade followup for individual i. Two dummy variables are used to represent compliers and never-takers. That is, ci = 0 and ni = 1 for a never-taker, and ci = 1 and ni = 0 for a complier. A continuous outcome variable Y for individual i with compliance status ci and ni can be expressed as

$$Y_i = \alpha_n^Y n_i + \alpha_c^Y c_i + \gamma_n^Y n_i Z_i + \gamma_c^Y c_i Z_i + \lambda^Y x_i + \varepsilon_i, \qquad (10)$$

where $\alpha_n^Y$ is an outcome intercept for never-takers, and $\alpha_c^Y$ is an outcome intercept for compliers. To represent CACE, we will use $\gamma_c^Y$. The effect of treatment assignment on never-takers is $\gamma_n^Y$, which is zero under the common assumption OER. A vector of covariates x includes baseline shy behavior, gender, parent’s health, and ethnicity. We will assume that the covariate effects $\lambda^Y$ are the same for never-takers and compliers, but this is not an essential assumption and can be relaxed. Alternative ways of imposing restrictions on covariates in identifying CACE can be found, for example, in Jo (2002b). We assume a normally distributed residual εi with zero mean and variance σ². However, in this paper, we do not use normality as an assumption that is central to identification of CACE.

Since we will also model the missing data mechanism, let us also assume the following true model for the missing outcome indicator at the second grade followup for individual i. On the logit scale, the probability of outcome Y being recorded for individual i can be expressed as

$$\mathrm{logit}\!\left(\pi_i^R\right) = \alpha_n^R n_i + \alpha_c^R c_i + \gamma_n^R n_i Z_i + \gamma_c^R c_i Z_i + \lambda^R x_i, \qquad (11)$$

where $\alpha_n^R$ is a logit intercept for never-takers, and $\alpha_c^R$ for compliers. The effect of treatment assignment on outcome missingness is $\gamma_n^R$ for never-takers and $\gamma_c^R$ for compliers. We will assume that the covariate effects $\lambda^R$ are the same for never-takers and compliers, but this assumption can be relaxed.

The third component of modeling CACE is the relationship between compliance status and pretreatment covariates. The logistic regression of compliance on covariates is described as

$$P(c_i = 1 \mid x_i) = \pi_{ci}, \quad P(c_i = 0 \mid x_i) = 1 - \pi_{ci}, \quad \mathrm{logit}(\pi_{ci}) = \beta_0 + \beta_1 x_i, \qquad (12)$$

where πci denotes the probability of being a complier, β0 is a logit intercept, and β1 is a vector of logit coefficients, which represent the association between compliance and pretreatment covariates.
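
To show how the three components in Eqs. 10–12 fit together, the following Python sketch simulates data with the same structure (compliance from Eq. 12, outcome from Eq. 10 with OER imposed, response indicator from Eq. 11). All coefficient values are invented; they are neither the PIRC estimates nor the generating values of the artificial data in Appendix 3, and only two covariates are used for brevity.

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two baseline covariates (the PIRC analyses use four); values and scales are invented.
shy0 = rng.normal(3.0, 1.0, n)          # baseline shy behavior
male = rng.integers(0, 2, n)            # gender indicator
x = np.column_stack([shy0, male])

# Eq. 12: latent compliance status from a logistic regression on covariates.
beta0, beta1 = 0.5, np.array([-0.3, 0.1])            # invented coefficients
p_complier = 1 / (1 + np.exp(-(beta0 + x @ beta1)))
c = rng.binomial(1, p_complier)                      # 1 = complier, 0 = never-taker

z = rng.binomial(1, 0.5, n)                          # random assignment (Assumption 1)
s = z * c                                            # treatment received only by assigned compliers

# Eq. 10: outcome model, with the never-taker assignment effect fixed at zero (OER);
# gamma_c plays the role of the CACE.
alpha_c, alpha_n, gamma_c = 2.2, 1.4, -0.5
lam_y = np.array([0.25, 0.2])
y = np.where(c == 1, alpha_c, alpha_n) + gamma_c * c * z + x @ lam_y + rng.normal(0, 1, n)

# Eq. 11: response (missingness) model on the logit scale.
alpha_cR, alpha_nR, gamma_cR, gamma_nR = 0.9, 0.4, 0.8, 0.3
lam_r = np.array([0.0, -0.2])
logit_r = np.where(c == 1, alpha_cR, alpha_nR) + np.where(c == 1, gamma_cR, gamma_nR) * z + x @ lam_r
r = rng.binomial(1, 1 / (1 + np.exp(-logit_r)))
y_obs = np.where(r == 1, y, 999)                     # outcome coded 999 when missing, as in the appendices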

First, we estimate CACE under the conventional MAR assumption. This procedure can be carried out in Mplus without explicitly modeling the relationship between outcome missing data status and noncompliance. The Mplus input file for this setup is shown in Appendix 1.1. In this input, the outcome (shy6) is regressed on treatment assignment (Z) and four covariates (shy0, male, health, black). The effect of treatment assignment is freely estimated for the complier class (C#1) and fixed at zero for the never-taker class (C#2), according to OER. Compliance status (C) is regressed on the same covariates in a logistic regression. The missing data indicator (R) is not modeled at all in this setup. An alternative setup for CACE estimation under MAR is shown in Appendix 1.2. In this setup, the missing data indicator (R) is explicitly modeled, although that is not necessary. The missing data indicator (R) is regressed on treatment assignment (Z) and the covariates in a logistic regression. The effect of treatment assignment on the missing data indicator (R ON Z) is allowed to vary across compliance classes, but the logit intercept (R$1) is not, meaning that the response rate is the same across compliance classes under the control condition conditional on covariates. We present this setup to show how the MAR assumption can be expressed in the Mplus framework; in this way, how the missing data mechanism is modeled under the different missing data assumptions can be easily compared. The two Mplus setups shown in Appendices 1.1 and 1.2 result in identical estimates.

Second, we estimate CACE under the RER assumption. The Mplus input file for this setup is shown in Appendix 1.3. Compared to the MAR setup shown in Appendix 1.2, a slight change is made in the missing data model. In this input, the effect of treatment assignment on the missing data indicator (R ON Z) is freely estimated for the complier class, but fixed at zero for the never-taker class. In other words, the response rate is allowed to vary across assignment arms for compliers, but not for never-takers. The logit intercept (R$1) is freely estimated in both compliance classes, meaning that the response rate can differ across compliance classes conditional on covariates.

Finally, CACE is estimated under the SCR assumption. The Mplus input file for this setup is shown in Appendix 1.4. Whereas the restriction on the treatment assignment effect on the outcome missingness is imposed for never-takers in the RER model, the same restriction is instead imposed for compliers in the SCR model. In this input, the effect of treatment assignment on the missing data indicator (R ON Z) is freely estimated for the never-taker class, but fixed at zero for the complier class.

Table 1 shows the CACE estimates obtained under the different missing data assumptions. In addition to the three missing data models discussed in this paper, we also present the result from the complete-case-only analysis. In Appendix 1.1, if we restore the command “USEOBS = (shy6 NE 999),” which is currently commented out, the analysis will be conducted using complete cases only (i.e., excluding cases with missing outcome data). Although this strategy is often used in practice, excluding cases with missing outcome data is unnecessary for identification of CACE.

Table 1.

JHU PIRC: CACE estimates under different missing data assumptions (standard error in parentheses)

CC MAR RER SCR
−0.545 (0.229) −0.553 (0.229) −0.586 (0.251) −0.477 (0.205)

CC complete cases only, MAR missing at random, RER response exclusion restriction, SCR stable complier response

In terms of the logistic regression of compliance on covariates, baseline shy behavior, parent’s health, and ethnicity (African-American or not) were significant predictors of compliance in the analyses assuming MAR and SCR. Parents with limited health, African-American parents, and parents whose children had higher baseline shy behavior complied less with the intervention activities. In the analysis assuming RER, parent’s health and ethnicity were significant predictors of compliance. In the analysis with complete cases only, only ethnicity was a significant predictor of compliance.

In the results reported in Table 1, negative values mean positive effects of treatment assignment (i.e., assignment to the treatment condition lowers the level of shy behavior). Given that, let us evaluate the magnitude of the CACE estimates in terms of their absolute values. The estimator assuming SCR resulted in the smallest CACE estimate and the estimator assuming RER in the largest. The resulting range of CACE estimates is quite narrow, and all of them suggest a significant positive effect of treatment assignment on the shy behavior outcome at the followup. However, both of these assumptions may be violated, and we cannot exclude the possibility that the true CACE is still not captured within the range of estimates obtained under the considered missing data assumptions. The ideal situation in this case would be that SCR actually yields the lower bound and RER the upper bound for CACE. In particular, the lower bound is often of more concern in prevention studies. As discussed earlier, in the JHU PIRC trial, compliers’ response behavior is likely to be stable, but if not, it is very likely that compliers provide the data more under the treatment condition than under the control condition. We could analytically derive how this directional knowledge would affect the CACE estimate, but a quick and easy alternative is to empirically estimate CACE with a higher response rate for compliers under the treatment condition. In Appendix 1.4, this can be done by changing R ON Z@0 to, for example, R ON Z@1. Whereas the response rate remains the same across treatment arms when this coefficient is fixed at zero (i.e., SCR), the response rate is higher under the treatment condition if we fix it at a positive value in the logistic regression. The absolute value of the CACE estimate with this modified setting is 0.555 (compared to 0.477 under SCR), meaning that the CACE estimate gets larger if the response rate for compliers is higher under the treatment condition. In other words, it is very unlikely that the magnitude of the true CACE is smaller than the one estimated under SCR.

Conclusion

It was demonstrated in this paper that identification and estimation of causal treatment effects considering both noncompliance and missing outcomes can be conducted relatively easily under various missing data assumptions. In particular, parametric maximum likelihood estimation of CACE can be carried out using the mixture analysis feature embedded in widely used latent variable modeling software such as Mplus (Muthén and Muthén 1998–2009). Given that, there is little reason to settle for a single solution or to exclude cases with missing outcome data when estimating CACE in practice. Nonetheless, the methods we discussed in this paper are more complicated than the standard ways of handling missing data. When we handle missing outcome data by listwise deletion or in the likelihood-based estimation framework assuming MAR (Little and Rubin 2002), there is no need to explicitly model the missing data mechanism. In this paper, we discussed missing data assumptions that require explicit modeling. This necessitates more active involvement of researchers in missing data modeling and in the identification of causal effects, which has not been advocated much in the prevention research field.

As discussed in the context of the PIRC trial, we usually do not have one missing data assumption we are absolutely confident about. In fact, we rarely have a clear idea about which assumption is more likely than others. Even if we know the relative plausibility of these assumptions, it is still difficult to predict relative performance of CACE estimators that operate under different assumptions because the bias mechanism can be quite complex when missing data are accompanied by treatment noncompliance (Jo 2008a). Given the complexities at hand, as a practical way of sensitivity analysis, we proposed the use of alternative missing data assumptions, which will at least provide a range of causal effect estimates. This is already a significant improvement compared to the usual practice of CACE estimation with no sensitivity analysis. However, researchers may go one step further and check if the true CACE is likely to fall within the range of CACE estimates obtained with alternative assumptions. This is possible when researchers are confident about the directionality of missing data assumptions. We showed a simple way of checking bounds in this paper. More elaborate ways of establishing bounds for CACE utilizing all considered assumptions can be found in Jo and Vinokur (2010).

In this paper, we assumed that the exclusion restriction on the outcome (OER) always holds and focused on how different missing data assumptions affect CACE estimation. However, in field experiments where blinding is rarely possible, this assumption may also be violated, requiring sensitivity analysis that considers both missing data and outcome assumptions. In the PIRC trial, never-takers can partially receive the treatment if assigned to the treatment condition (recall that never-takers were defined as those who completed fewer than 45 out of 66 intervention activities), and therefore the plausibility of OER is questionable. To maintain simplicity in introducing the use of missing data assumptions, which is already somewhat complicated in the CACE estimation context, we chose to maintain the same outcome assumption (i.e., OER) throughout. However, it is possible to conduct sensitivity analysis considering variations of both missing data and outcome assumptions (Jo and Vinokur 2010). This paper also limited its discussion to CACE estimation. However, the methods discussed here can be applied to more general causal inference problems involving intermediate outcomes (not limited to compliance) and outcome missing data.

Acknowledgments

We appreciate helpful feedback from the Prevention Science Methodology Group. The study of the first author was supported by MH066319 and MH066247 from the National Institute of Mental Health. The work of the second author was conducted while she was a postdoctoral student at the George Washington University. Elizabeth M. Ginexi is now at the National Institute on Drug Abuse, Bethesda, MD.

Appendix 1: PIRC Example Mplus Input Files

1.1 CACE Estimation Under MAR

TITLE: CACE estimation under MAR
DATA: FILE = ps09jhu.dat;
VARIABLE:
NAMES = Z S R shy6 shy0 male health black;
USEV = Z S shy6 shy0 male health black;
!USEOBS = (shy6 NE 999);
CATEGORICAL = S;
!binary compliance indicator S (0/1, missing=999)
CLASSES = C(2); !two compliance strata
MISSING = all (999); !missing values coded as 999
ANALYSIS:TYPE = MIXTURE;
MODEL:
%OVERALL%
shy6 ON Z shy0-black;
!shy6 regressed on randomization Z and covariates
C#1 ON shy0-black;
!compliance class C regressed on covariates
%C#1%
 [S$1@-15]; !compliers
shy6 ON Z;
!compliers’ outcome varies across Z
%C#2%
 [S$1@15]; !never-takers
shy6 ON Z@0;
!never-takers’ outcome is stable across Z (OER)

1.2 CACE Estimation Under MAR II

TITLE: CACE estimation under MAR II
DATA: FILE = ps09jhu.dat;
VARIABLE:
NAMES = Z S R shy6 shy0 male health black;
USEV = Z S R shy6 shy0 male health black;
CATEGORICAL = S R;
! binary missing indicator R for shy6
CLASSES = C(2); !two compliance strata
MISSING = all (999); ! missing values coded as 999
ANALYSIS:TYPE = MIXTURE;
MODEL:
%OVERALL%
shy6 ON Z shy0-black;
C#1 ON shy0-black;
R ON Z shy0-black;
!R is related to observed information
%C#1%
 [S$1@-15]; !compliers
[R$1] (1);
!R stable across C under the control (MAR)
shy6 ON Z;
R ON Z;
!compliers’ R status varies across Z
%C#2%
 [S$1@15]; !never-takers
[R$1] (1);
!R stable across C under the control (MAR)
shy6 ON Z@0; !OER
R ON Z;
!never-takers’ R status varies across Z

1.3 CACE Estimation Under RER

TITLE: CACE estimation under RER
DATA: FILE = ps09jhu.dat;
VARIABLE:
NAMES = Z S R shy6 shy0 male health black;
USEV = Z S R shy6 shy0 male health black;
CATEGORICAL = S R;
CLASSES = C(2); !two compliance strata
MISSING = all (999); ! missing values coded as 999;
ANALYSIS:TYPE = MIXTURE;
MODEL:
%OVERALL%
shy6 ON Z shy0-black;
C#1 ON shy0-black;
R ON Z shy0-black;
!R is related to observed information;
%C#1%
 [S$1@-15]; !compliers
[R$1];
!R varies across C under the control
shy6 ON Z;
R ON Z;
!compliers’ R varies across Z
%C#2%
 [S$1@15]; !never-takers
[R$1];
!R varies across C under the control
shy6 ON Z@0; !OER
R ON Z@0;
!never-takers’ R stable across Z (RER)

1.4 CACE Estimation Under SCR

TITLE: CACE estimation under SCR
DATA: FILE = ps09jhu.dat;
VARIABLE:
NAMES = Z S R shy6 shy0 male health black;
USEV = Z S R shy6 shy0 male health black;
CATEGORICAL = S R;
CLASSES = C(2); !two compliance strata
MISSING = all (999); ! missing values coded as 999;
ANALYSIS:TYPE = MIXTURE;
MODEL:
%OVERALL%
shy6 ON Z shy0-black;
C#1 ON shy0-black;
R ON Z shy0-black;
%C#1%
 [S$1@-15]; !compliers
[R$1];
!R varies across C under the control
shy6 ON Z;
R ON Z@0;
!compliers’ R stable across Z (SCR)
%C#2%
 [S$1@15]; !never-takers
[R$1];
!R varies across C under the control
shy6 ON Z@0; !OER
R ON Z;
!never-takers’ R varies across Z

Appendix 2. PIRC Example Mplus Output Files (Key Model Parameter Estimates Only)

2.1 CACE Estimation Under MAR

Estimate S.E. Est./S.E. P-Value
Latent class 1 (complier)
SHY6 ON
 Z (CACE) −0.553 0.229 −2.417 0.016
 SHY0 0.228 0.052 4.372 0.000
 MALE 0.223 0.112 1.982 0.047
 HEALTH 0.305 0.211 1.443 0.149
 BLACK 0.019 0.162 0.115 0.908
Intercepts
 SHY6 2.256 0.253 8.908 0.000
Residual variances
 SHY6 0.924 0.083 11.173 0.000
Latent class 2 (never-taker)
SHY6 ON
 Z (OER) 0.000 0.000 999.000 999.000
Intercepts
 SHY6 1.431 0.212 6.764 0.000
Logistic regression of C on X
C#1 ON
 SHY0 −0.300 0.151 −1.987 0.047
 MALE 0.135 0.290 0.465 0.642
 HEALTH −1.094 0.538 −2.035 0.042
 BLACK −1.044 0.412 −2.536 0.011
Intercepts
 C#1 1.437 0.499 2.878 0.004

2.2 CACE Estimation Under MAR II

Estimate S.E. Est./S.E. P-Value
Latent class 1 (complier)
SHY6 ON
 Z (CACE) −0.553 0.229 −2.417 0.016
 SHY0 0.228 0.052 4.372 0.000
 MALE 0.223 0.112 1.982 0.047
 HEALTH 0.305 0.211 1.443 0.149
 BLACK 0.019 0.162 0.115 0.908
Intercepts
 SHY6 2.256 0.253 8.908 0.000
Residual variances
 SHY6 0.924 0.083 11.173 0.000
R (missing indicator) ON
 Z 0.952 0.395 2.409 0.016
 SHY0 0.014 0.123 0.114 0.909
 MALE −0.226 0.278 −0.814 0.415
 HEALTH −0.212 0.402 −0.528 0.597
 BLACK 0.781 0.324 2.412 0.016
Thresholds (MAR: [R$1] (1) in Input)
 R$1 −0.902 0.453 −1.989 0.047
Latent class 2 (never-taker)
SHY6 ON
 Z (OER) 0.000 0.000 999.000 999.000
Intercepts
 SHY6 1.431 0.212 6.764 0.000
R (missing indicator) ON
 Z 0.298 0.324 0.920 0.357
Thresholds (MAR: [R$1] (1) in input)
 R$1 −0.902 0.453 −1.989 0.047

2.3 CACE Estimation Under RER

Estimate S.E. Est./S.E. P-Value
Latent class 1 (complier)
SHY6 ON
 Z (CACE) −0.586 0.251 −2.332 0.020
 SHY0 0.227 0.052 4.368 0.000
 MALE 0.225 0.112 2.004 0.045
 HEALTH 0.306 0.212 1.441 0.149
 BLACK 0.016 0.162 0.098 0.922
Intercepts
 SHY6 2.291 0.277 8.266 0.000
Residual variances
 SHY6 0.920 0.083 11.033 0.000
R (missing indicator) ON
 Z 1.171 0.537 2.178 0.029
 SHY0 −0.003 0.124 −0.025 0.980
 MALE −0.219 0.280 −0.780 0.435
 HEALTH −0.294 0.417 −0.704 0.481
 BLACK 0.717 0.343 2.092 0.036
Thresholds
 R$1 −0.757 0.525 −1.442 0.149
Latent class 2 (never-taker)
SHY6 ON
 Z (OER) 0.000 0.000 999.000 999.000
Intercepts
 SHY6 1.437 0.210 6.848 0.000
R (missing indicator) ON
Z(RER) 0.000 0.000 999.000 999.000
Thresholds
 R$1 −1.276 0.548 −2.328 0.020

2.4 CACE Estimation Under SCR

Estimate S.E. Est./S.E. P-Value
Latent class 1 (complier)
SHY6 ON
 Z (CACE) −0.477 0.205 −2.325 0.020
 SHY0 0.228 0.053 4.332 0.000
 MALE 0.215 0.113 1.899 0.058
 HEALTH 0.301 0.210 1.436 0.151
 BLACK 0.025 0.161 0.155 0.877
Intercepts
 SHY6 2.179 0.231 9.423 0.000
Residual variances
 SHY6 0.934 0.082 11.380 0.000
R (missing indicator) ON
Z (SCR) 0.000 0.000 999.000 999.000
 SHY0 0.065 0.128 0.507 0.612
 MALE −0.300 0.300 −0.999 0.318
 HEALTH 0.001 0.443 0.002 0.999
 BLACK 1.028 0.385 2.670 0.008
Thresholds
 R$1 −1.620 0.516 −3.139 0.002
Latent class 2 (never-taker)
SHY6 ON
 Z (OER) 0.000 0.000 999.000 999.000
Intercepts
 SHY6 1.419 0.217 6.533 0.000
R (missing indicator) ON
 Z 0.866 0.413 2.097 0.036
Thresholds
 R$1 −0.021 0.605 −0.035 0.972

Appendix 3. Artificial Data Analyses

For readers who are interested in hands-on experience, we provide an artificial data set, which can be obtained from the Prevention Science website (www.preventionresearch.org). The same Mplus input files provided in Appendix 1 can be used after changing the data file name (i.e., DATA: FILE = artif.dat;). The results of the artificial data analyses are provided below in Table 2.

Table 2.

Artificial data: CACE estimates under different missing data assumptions (standard error in parentheses)

CC MAR RER SCR
−0.584 (0.169) −0.617 (0.174) −0.632 (0.178) −0.546 (0.180)

Contributor Information

Booil Jo, Email: booil@stanford.edu, Department of Psychiatry & Behavioral Sciences, Stanford University, Stanford, CA 94305-5795, USA.

Elizabeth M. Ginexi, Center for Family Research, George Washington University, Washington, DC, USA

Nicholas S. Ialongo, Department of Mental Health, Johns Hopkins University, Baltimore, MD, USA

References

  1. Angrist JD, Imbens GW, Rubin DB. Identification of causal effects using instrumental variables. Journal of the American Statistical Association. 1996;91:444–455.
  2. Bloom HS. Accounting for no-shows in experimental evaluation designs. Evaluation Review. 1984;8:225–246.
  3. Dunn G, Maracy M, Dowrick C, Ayuso-Mateos JL, Dalgard OS, Page H, et al. Estimating psychological treatment effects from a randomized controlled trial with both non-compliance and loss to follow-up. British Journal of Psychiatry. 2003;183:323–331. doi: 10.1192/bjp.183.4.323.
  4. Emsley R, Dunn G, White IR. Mediation and moderation of treatment effects in randomised controlled trials of complex interventions. Statistical Methods in Medical Research. 2010. doi: 10.1177/0962280209105014.
  5. Frangakis CE, Rubin DB. Addressing complications of intention-to-treat analysis in the presence of all-or-none treatment-noncompliance and subsequent missing outcomes. Biometrika. 1999;86:365–379.
  6. Frangakis CE, Rubin DB. Principal stratification in causal inference. Biometrics. 2002;58:21–29. doi: 10.1111/j.0006-341x.2002.00021.x.
  7. Frangakis CE, Rubin DB, Zhou XH. Clustered encouragement design with individual noncompliance: Bayesian inference and application to advance directive forms. Biostatistics. 2002;3:147–164. doi: 10.1093/biostatistics/3.2.147.
  8. Hirano K, Imbens GW, Rubin DB, Zhou XH. Assessing the effect of an influenza vaccine in an encouragement design. Biostatistics. 2000;1:69–88. doi: 10.1093/biostatistics/1.1.69.
  9. Holland PW. Statistics and causal inference. Journal of the American Statistical Association. 1986;81:945–960.
  10. Ialongo NS, Werthamer L, Kellam SG, Brown CH, Wang S, Lin Y. Proximal impact of two first-grade preventive interventions on the early risk behaviors for later substance abuse, depression and antisocial behavior. American Journal of Community Psychology. 1999;27:599–642. doi: 10.1023/A:1022137920532.
  11. Imbens GW, Rubin DB. Bayesian inference for causal effects in randomized experiments with non-compliance. Annals of Statistics. 1997;25:305–327.
  12. Jo B. Statistical power in randomized intervention studies with noncompliance. Psychological Methods. 2002a;7:178–193. doi: 10.1037/1082-989x.7.2.178.
  13. Jo B. Estimating intervention effects with noncompliance: Alternative model specifications. Journal of Educational and Behavioral Statistics. 2002b;27:385–420.
  14. Jo B. Bias mechanisms in intention-to-treat analysis with data subject to treatment noncompliance and missing outcomes. Journal of Educational and Behavioral Statistics. 2008a;33:158–185. doi: 10.3102/1076998607302635.
  15. Jo B. Causal inference in randomized experiments with mediational processes. Psychological Methods. 2008b;13:314–336. doi: 10.1037/a0014207.
  16. Jo B, Asparouhov T, Muthén BO, Ialongo NS, Brown CH. Cluster randomized trials with treatment non-compliance. Psychological Methods. 2008;13:1–18. doi: 10.1037/1082-989X.13.1.1.
  17. Jo B, Vinokur A. Sensitivity analysis and bounding of causal effects with alternative identifying assumptions. Journal of Educational and Behavioral Statistics. 2010, in press. doi: 10.3102/1076998610383985.
  18. Kellam SG, Branch JD, Agrawal KC, Ensminger ME. Mental health and going to school: The Woodlawn program of assessment, early intervention, and evaluation. Chicago: University of Chicago Press; 1975.
  19. Little RJA, Rubin DB. Statistical analysis with missing data. New York: Wiley; 2002.
  20. Little RJA, Yau L. Statistical techniques for analyzing data from prevention trials: Treatment of no-shows using Rubin’s causal model. Psychological Methods. 1998;3:147–159.
  21. Mattei A, Mealli F. Application of the principal stratification approach to the Faenza randomized experiment on breast self-examination. Biometrics. 2007;63:437–446. doi: 10.1111/j.1541-0420.2006.00684.x.
  22. Mealli F, Imbens GW, Ferro S, Biggeri A. Analyzing a randomized trial on breast self-examination with noncompliance and missing outcomes. Biostatistics. 2004;5:207–222. doi: 10.1093/biostatistics/5.2.207.
  23. Muthén LK, Muthén BO. Mplus user’s guide. Los Angeles: Muthén & Muthén; 1998–2009.
  24. Neyman J. On the application of probability theory to agricultural experiments (Section 9, translated). Statistical Science. 1990;5:465–480. (Original work published 1923.)
  25. O’Malley AJ, Normand SLT. Likelihood methods for treatment noncompliance and subsequent nonresponse in randomized trials. Biometrics. 2004;61:325–334. doi: 10.1111/j.1541-0420.2005.040313.x.
  26. Peng Y, Little RJ, Raghunathan TE. An extended general location model for causal inferences from data subject to noncompliance and missing values. Biometrics. 2004;60:598–607. doi: 10.1111/j.0006-341X.2004.00208.x.
  27. Rubin DB. Bayesian inference for causal effects: The role of randomization. Annals of Statistics. 1978;6:34–58.
  28. Rubin DB. Discussion of “Randomization analysis of experimental data in the Fisher randomization test” by D. Basu. Journal of the American Statistical Association. 1980;75:591–593.
  29. Rubin DB. Comment on “Neyman (1923) and causal inference in experiments and observational studies.” Statistical Science. 1990;5:472–480.
  30. Schafer JL. Analysis of incomplete multivariate data. London: CRC; 1997.
  31. Sobel ME. What do randomized studies of housing mobility demonstrate? Causal inference in the face of interference. Journal of the American Statistical Association. 2006;101:1398–1407.
  32. Stuart EA, Perry DF, Le H-N, Ialongo NS. Estimating intervention effects of prevention programs: Accounting for noncompliance. Prevention Science. 2008;9:288–298. doi: 10.1007/s11121-008-0104-y.
  33. Werthamer-Larsson L, Kellam SG, Wheeler L. Effect of first-grade classroom environment on child shy behavior, aggressive behavior, and concentration problems. American Journal of Community Psychology. 1991;19:585–602. doi: 10.1007/BF00937993.
