Author manuscript; available in PMC: 2019 Oct 15.
Published in final edited form as: J Pain Symptom Manage. 2016 May 21;52(3):446–452. doi: 10.1016/j.jpainsymman.2016.01.016

Understanding Treatment Effect Terminology in Pain and Symptom Management Research

Melissa M Garrido a, Bryan Dowd b, Paul L Hebert c, Matthew L Maciejewski d
PMCID: PMC6794006  NIHMSID: NIHMS1054339  PMID: 27220944

Abstract

Within health services and medical research, there is a wide variety of terminology related to treatment effects. Understanding differences in types of treatment effects is especially important in pain and symptom management research where non-experimental observational data analysis is common. We use the example of a palliative care consultation team leader considering implementation of a medication reconciliation program and a care-coordination intervention reported in the literature to illustrate population-level and conditional treatment effects and to highlight the sensitivity of values of treatment effects to sample selection and treatment assignment. Our goal is to facilitate appropriate reporting and interpretation of study results and to help investigators understand what information a decision-maker needs when deciding whether to implement a treatment. Greater awareness of the reasons why treatment effects may differ across studies of the same patients in the same treatment settings can help policymakers and clinicians understand to whom a study’s results may be generalized.

Keywords: treatment effect, randomized controlled trial, health services research, terminology, observational

Introduction

Within health services and medical research, there is a wide variety of terminology related to treatment effects. There are different terms to describe the same treatment effect (synonyms), and there also is a wide variety of treatment effects of interest as defined by the specific population of interest. Understanding differences in types of treatment effects is especially important in pain and symptom management research where non-experimental observational data analysis is common.

As an example, consider the case of an inpatient palliative care consultation team that is planning to implement a new medication reconciliation program that a previous study had shown to reduce the incidence of adverse drug events and inpatient costs. Clinicians who are considering implementing this program for their patients must ask whether the results of the previous study are applicable to patients seen in their institution. The answer will depend on the research design, the analytic methods, and the type of treatment effect that is reported. Clear understanding of different types of treatment effects will enhance investigators’ ability to provide decision-makers with the information required for appropriate interpretation of study results, resulting in more accurate expectations about the effect of applying a treatment in a new setting or population. Different disciplines use different terms for the same concept and the same term for different concepts,1,2 and clarifying those differences will facilitate multidisciplinary collaboration and improve the scientific process of manuscript and grant review. In the following sections, we define treatment effects and then pose questions that an investigator should answer when writing up the results of a research study so that the clinician in our example has enough information to interpret the findings appropriately.

Treatment effects and counterfactuals

A treatment effect for a specific individual is the change in outcome that results from the subject or unit of observation receiving a treatment. For the sake of exposition, we will use individuals as the unit of observation, but the unit of observation also could refer to groups of individuals (e.g., patient-caregiver dyads) or to other units, such as states, hospitals, or organizations. In addition, “treatment” could refer to any medication or policy experienced by some individuals (or other units) being studied. In our example, the patient is the unit of observation and the treatment is the medication reconciliation program. The outcome is whether the patient experiences an adverse drug event while hospitalized.

To know the causal effect of a treatment on an individual with absolute certainty, one would have to observe the same individual receiving and not receiving the treatment at the same point in time. However, only one treatment state can be observed at any given time for an individual, so a plausible counterfactual (an estimate of the individual’s outcome in the unobserved state) must be identified. This framework is known as the potential outcomes model; more formal definitions are available elsewhere.3–6

For a treated individual, a plausible counterfactual is the average outcome of individuals in the untreated group who are similar to the treated individual on all observed characteristics except for the receipt of treatment. There are two types of strategies for identifying a plausible counterfactual. The first is to assign a large number of individuals randomly to the treatment and control groups. Randomization does not guarantee a perfect match in the control group for each individual in the treatment group, but in large samples, it does ensure that the distributions of both observed and unobserved characteristics of the individuals in the treatment and control groups are similar. The second strategy is careful analysis of observational data, where there is no a priori assumption of balance in observed or unobserved characteristics across treatment groups in the absence of randomization, with methods such as difference-in-differences, regression discontinuity, sample selection models, or instrumental variable analysis. Here, we use instrumental variables to illustrate differences in treatment effects (see Question 3).

Question 1: Does the reported treatment effect apply to all hospitalized patients or only to those who consented to the medication reconciliation program?

Two general treatment effects typically are considered in causal inference: the average treatment effect (ATE) and the average treatment effect on the treated (ATT). The ATE is the average effect of the treatment across the entire eligible sample. This refers to the average effect one would expect for a group of patients chosen randomly from patients eligible for the treatment, sometimes referred to as the “target group”. The ATT refers to the treatment effect for those who actually received treatment. We focus on a binary treatment, but the ATT concept generalizes to comparisons to a second treatment or to several other treatments.4,7 In our medication reconciliation example, the ATT would be the estimated average difference in adverse drug events resulting from the intervention, calculated only among those who received the intervention. In a study of the relationship between palliative care consultations and hospitalization costs, the ATT represents cost-savings among patients who received palliative care, while the ATE would represent cost-savings among all patients, regardless of palliative care receipt.8 The ATE represents a weighted average of the ATT and the treatment effect for those who did not receive the treatment, which is referred to as the average treatment effect on the untreated (ATU). See Table 1 for treatment effect definitions in other example research studies.

Table 1. Examples of Treatment Effects in Observational Data and RCTs.

Example A: Consider the case of researchers who are interested in using observational data to estimate the effect of a hospital-based care coordination intervention on likelihood of 30-day hospital readmission. The treatment variable is binary (1 = care coordination intervention, 0 = usual care).

ATE (Question 1): Estimated average difference in likelihood of 30-day readmission between patients who did and did not receive the care coordination intervention.
ATT (Question 1): Estimated average difference in likelihood of 30-day readmission resulting from the care coordination intervention, calculated only among those who received the intervention.
ATU (Question 1): Estimated average difference in likelihood of 30-day readmission resulting from the care coordination intervention, calculated only among those who did not receive the intervention.
CATE (Question 2): Estimated average difference in likelihood of 30-day readmission between patients who did and did not receive the intervention, conditional on the patient being 65 years of age or older.
LATE (Question 3): Assume that treatment likelihood and 30-day readmission likelihood are both related to unobserved patient comorbidity (the treatment is endogenous). Assume that treatment likelihood is correlated with observed attending physician identity (and that physicians are randomly assigned to patients), but that physician identity is not correlated with 30-day readmission rates. The LATE is the estimated average difference in 30-day readmission likelihood for patients whose treatment assignment was sensitive to attending physician identity. (It would not apply to patients who were too healthy to ever warrant a care coordination intervention or to those who were so ill that they would always require the intervention.)
PeT (Question 4): Assume the treatment is endogenous and that treatment likelihood (but not outcome) is correlated with several observed instrumental variables, including attending physician identity and day of week of admission. The PeT is the estimated difference in readmission likelihood for a single patient and is based on observed principal diagnosis and age, propensity for receiving treatment (conditional on diagnosis, age, and instrumental variables), and observed treatment status.

Example B: Consider a randomized controlled trial of a new medication (“Newmed”) versus a placebo to reduce pain. Despite the researchers’ best efforts, some patients in the Newmed group did not take a single dose of study medication. Some patients in the placebo group received Newmed by mistake.

ATE (Question 1): Difference in pain score reduction between patients assigned to Newmed versus placebo, in an intent-to-treat analysis.
ATT (Question 1): Degree of pain score reduction among patients who received Newmed, regardless of treatment assignment.
ATU (Question 1): Degree of pain score reduction among patients who received placebo, regardless of treatment assignment.
CATE (Question 2): Among a subgroup of patients with rheumatoid arthritis, difference in pain score reduction between patients assigned to Newmed versus placebo, in an intent-to-treat analysis.
LATE (Question 3): Difference in pain score reduction between patients assigned to Newmed and patients assigned to placebo, among those who complied with treatment assignment, in a per-protocol analysis.

ATE = Average Treatment Effect, ATT = Average Treatment Effect on the Treated, ATU = Average Treatment Effect on the Untreated, CATE = Conditional Average Treatment Effect, LATE = Local Average Treatment Effect, PeT = Person-Centered Treatment Effect
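The weighted-average relationship between the ATE, ATT, and ATU described above can be checked directly in a small simulation. This is a hypothetical sketch (simulated potential outcomes, not data from any cited study), in which treatment uptake depends on the baseline outcome so that the ATT and ATU differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes for a binary treatment:
# y0 = outcome without treatment; y1 = outcome with treatment.
y0 = rng.normal(10.0, 2.0, n)
y1 = y0 - 2.0 - 0.5 * (y0 > 10)          # sicker patients benefit more

# Treatment uptake depends on the baseline outcome, so ATT != ATU.
treated = rng.random(n) < np.where(y0 > 10, 0.6, 0.2)

ate = np.mean(y1 - y0)                    # average effect, whole sample
att = np.mean(y1[treated] - y0[treated])  # effect among the treated
atu = np.mean(y1[~treated] - y0[~treated])  # effect among the untreated
p = treated.mean()                        # share of the sample treated

# The ATE is exactly the treatment-share-weighted average of ATT and ATU.
assert abs(ate - (p * att + (1 - p) * atu)) < 1e-10
```

With simulated data, both potential outcomes are visible for every patient, which is precisely what a real study never observes; an estimator only ever sees one of `y0` or `y1` per person.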

One can distinguish further between sample and population versions of the ATE and ATT. Sample and population treatment effects will be the same if the sample was chosen through simple random sampling or if sampling weights are used to estimate the effect of a treatment reported in a survey.9,10 Otherwise, the sample ATE (SATE) or sample ATT (SATT) is likely to differ from the population ATE (PATE) or population ATT (PATT). For this article, we focus on sample treatment effects and drop the “S” and “P” prefixes for “sample” and “population”, respectively.

In a well-constructed RCT with perfect compliance with treatment assignment, the ATE, ATT, and ATU should have the same value, and all represent the treatment’s efficacy, because randomization should ensure that the control group is similar to the treatment group. Further, the generalizability of these treatment estimates should increase as the population under consideration broadens due to less restrictive inclusion and exclusion criteria in terms of patient characteristics, treatment settings, and geography.

In an RCT with non-perfect compliance that is analyzed with an intention-to-treat analysis, the treatment effect calculated for the whole sample will be an ATE, but it will represent effectiveness, or the effect of assignment to treatment. For instance, Currow et al. report the ATE from an intention-to-treat analysis of a randomized study of octreotide, and it reflects the effect of randomization to the octreotide arm on incidence of vomiting.11 With non-perfect compliance, the values of ATEs, ATTs, and ATUs likely diverge (Table 1). The ATT will represent the effect of the treatment among patients who received it, regardless of treatment or control group membership. The values of these treatment effects are likely to vary more in an observational study, where distributions of confounders are likely to differ across treatment groups. For instance, the ATT can differ from the ATE if those who selected the treatment did so for some reason that is related to subsequent outcomes, a phenomenon known as selection bias.12 Selection bias is discussed in greater detail under Questions 3 and 4. To enable appropriate interpretation of study results, analysts of observational data should state whether they are presenting ATEs or ATTs.
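A brief simulation sketch of why these quantities diverge under non-perfect compliance. All numbers are hypothetical (this is not a reanalysis of any cited trial), compliance is one-sided for simplicity, and sicker patients are made less likely to comply so that a naive as-treated contrast is confounded:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical RCT with imperfect compliance.
assigned = rng.random(n) < 0.5                 # random assignment
severity = rng.normal(0.0, 1.0, n)             # unobserved illness severity
# Sicker patients are less likely to comply when assigned to treatment.
complies = rng.random(n) < 1 / (1 + np.exp(severity))
received = assigned & complies                 # one-sided noncompliance only

true_effect = -1.0                             # treatment lowers the outcome by 1
outcome = 2.0 * severity + true_effect * received + rng.normal(0.0, 1.0, n)

# ITT: effect of assignment (an effectiveness-style ATE of assignment).
itt = outcome[assigned].mean() - outcome[~assigned].mean()
# Naive as-treated contrast: confounded by severity, so it overstates benefit.
as_treated = outcome[received].mean() - outcome[~received].mean()
```

With roughly half of assigned patients complying, the ITT estimate is diluted toward half the true effect, while the as-treated contrast mixes the treatment effect with the fact that compliers were healthier to begin with.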

Question 2: Are the magnitude and direction of the treatment effect expected to be the same for all patients?

When reviewing either a randomized trial or analysis of observational data, the clinician might be interested in whether the magnitude and direction of the treatment effect were constrained to be the same for patients or allowed to vary across subgroups of patients. Models that constrain the treatment effect to be the same across all patients are referred to as homogeneous treatment effect models (see Figure 1). It is important to note that the model is not homogeneous in outcomes – different individuals can have different expected post-treatment values of the outcome because they have different pre-treatment values of the outcome. However, the change in the outcome due to the treatment is the same for all individuals if the treatment effect is homogeneous.

Figure 1: A homogeneous treatment effects model. The magnitude and direction of the treatment effect are the same for all patients, regardless of any other patient characteristics.

Models that allow the treatment effect to be different for different individuals are referred to as heterogeneous treatment effect models. For example, the impact of the medication reconciliation program may be null in some patients and quite protective against adverse drug events in others. This is reflected in the conditional ATE (CATE), which is the average of the difference in potential outcomes of treated and untreated individuals for a subgroup of individuals defined by one or more covariates other than the treatment variable.13,14 For instance, one could estimate a CATE in the medication reconciliation study for patients with more comorbidities and a CATE for patients with fewer comorbidities (see Figure 2). Other examples of homogeneous and heterogeneous treatment effects are provided in Table 2. In the simplest heterogeneous treatment effects model, the average treatment effects depend on the values of observed covariates (such as age, number of comorbidities, or gender), causing ATEs, ATTs, and ATUs (and CATEs, conditional ATTs [CATTs], and conditional ATUs [CATUs]) to differ.

Figure 2: A heterogeneous treatment effects model. The magnitude and direction of the treatment effect may differ with patient characteristics.

Table 2. Heterogeneity of Treatment Effects.

Consider the case of researchers who are interested in using observational data to estimate the effect of a hospital-based care coordination intervention on likelihood of 30-day hospital readmission. The treatment variable is binary (1=care coordination intervention, 0=usual care). The population is patients who are Medicare beneficiaries in an acute care hospital.

None (homogeneous): All patients in the sample experience the same change in likelihood of 30-day readmission after participating in the care coordination intervention.
Heterogeneous within the treatment group due to observed variables: Older patients experience a greater reduction in likelihood of 30-day readmission than younger patients after both groups participate in the care coordination intervention (the estimated treatment effect depends on the value of an exogenous, observed covariate).
Heterogeneous within the treatment group due to unobserved variables:
  Non-essential: Within the group of patients who receive the care coordination intervention, the likelihood of 30-day readmission differs with respect to each hospital’s prevalence of Clostridium difficile infections, which neither the patient nor the researcher observes.
  Essential: Assume that patients with better health literacy are both more likely to participate in the care coordination intervention and less likely to have a 30-day readmission than patients with poor health literacy. Health literacy is observed by the patient and influences both the participation decision and the outcome, but it is not observed by the researcher.

For each outcome, there is one ATE but there could be many CATEs. Similarly, one could estimate a CATT for treated patients who had more comorbidities and a CATT for treated patients who had fewer comorbidities. CATEs can be calculated via either interaction terms between the treatment variable and variables representing the subgroup of interest or separate analyses of subsamples of interest. Conditional treatment effects are found in reports of RCTs (e.g., Yarlas et al. report the results of a buprenorphine transdermal system treatment among a subgroup of patients with depression15) and observational data analyses (e.g., Miller et al. report the relationship between increased Medicaid nursing home per diem rates and nursing home hospice use, conditional on urban or rural nursing home location16).
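The subgroup route to a CATE can be sketched in a few lines. This is a hypothetical simulation (not data from the cited studies), using the Table 1 readmission example with an age-65 subgroup; subgroup-wise differences in means stand in for a regression with a treatment-by-age interaction term:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40_000

# Hypothetical heterogeneous effect: the intervention helps older patients more.
age65 = rng.random(n) < 0.5            # subgroup indicator (65 or older)
treated = rng.random(n) < 0.5          # treatment independent of age here
effect = np.where(age65, -0.30, -0.10) # subgroup-specific risk reduction
readmitted = rng.random(n) < (0.40 + effect * treated)

def diff(mask):
    """Treated-vs-untreated difference in readmission rate within a subgroup."""
    return readmitted[mask & treated].mean() - readmitted[mask & ~treated].mean()

ate = diff(np.ones(n, dtype=bool))     # one ATE for the whole sample
cate_old = diff(age65)                 # CATE conditional on age >= 65
cate_young = diff(~age65)              # CATE conditional on age < 65
```

Here the single ATE (about a 20-point reduction) masks the fact that older patients benefit roughly three times as much as younger ones.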

In homogeneous treatment effect models, the ATE and CATE are equivalent. In a well-conducted randomized trial with homogeneous treatment effects, the ATE and CATE also are equal to the ATT, ATU, CATT, and CATU. To facilitate accurate generalization of results, investigators should explain whether the treatment effects are expected to differ with observed patient characteristics.

Question 3: If patients were not randomized to the treatment group, were there patient characteristics that influenced both the decision to participate in treatment and the outcome that the investigator was unable to measure?

If patients are not randomized to treatment groups, observed and unobserved patient characteristics may be associated with treatment choice. When patients are not randomized and unobserved variables influence both participation in treatment as well as subsequent outcomes of interest, the result is endogenous selection bias. Endogenous selection into the treatment versus comparison group can occur within the context of either homogeneous treatment effects (Figure 1) or heterogeneous treatment effects (Figure 2), and can result in biased ATE, ATT, ATU, CATE, CATT and CATU estimates, depending on the inference that the analyst attempts to draw from the study.

Suppose, for example, that unobserved levels of patients’ trust in their health care providers influenced both the patient’s willingness to participate in a new medication reconciliation program and subsequent outcomes (through the likelihood of adhering to prescribed medication regimens). The analyst has a choice. She can report treatment effects conditional on the treatment selection mechanism remaining stable, that is, trust remaining an important variable influencing voluntary participation in the intervention. In that case, her estimates of the treatment effects are not biased. However, if she wishes to estimate the treatment effect for non-participants or even for participants who undergo the intervention for any reason other than voluntary participation under the current conditions, then her estimated treatment effects may be biased. She will need to find a way to separate the “true” effect of the treatment from the confounding influence of trust.

One way to isolate the true treatment effect is to identify variables that, like randomization, influence membership in the treatment group but have no other effect on outcomes. Such variables are referred to as instruments.17–19 Instrumental variables that satisfy these assumptions play a special role in the analysis of models with selection bias because they mimic randomization. For example, one individual might happen to live closer to the site where the non-elective treatment is being administered, making participation in the treatment more convenient. Similarly, participation in the treatment might cost some individuals less than others for reasons unrelated to outcomes.

Individuals vary in their responsiveness to an instrumental variable, which affects the treatment effect that may be estimated. For instance, the effect of distance to the site at which the treatment is administered will vary among individuals. For some individuals it may have a decisive effect on membership in the treatment group, while for others it may have little effect. In our example, the local average treatment effect (LATE) is the treatment effect for individuals who could be persuaded to change membership in the medication reconciliation group during a study in response to the instrumental variable.18 Elsewhere, in a study of the association between hospice use and survival, Saito et al. used availability of hospices within Health Care Service Areas as an instrumental variable.20 In this case, the LATE reflects the effect of hospice on survival among patients who may be persuaded to use hospice because it is readily available. (In an RCT, the LATE is the treatment effect for individuals who comply with random treatment assignment, the result of a “per protocol” analysis.21 With perfect compliance and no other problems, such as contamination, the RCT LATE is equal to the ATE.) These individuals belong to the “marginal population” who reasonably could receive either the treatment or the comparison condition (as opposed to those who would almost certainly receive or not receive the treatment in real life).22 In a homogeneous treatment effect model the LATE is always equal to the ATE because there is only one treatment effect. When the treatment effect is heterogeneous, the LATE and ATE are not necessarily equal. This is discussed in greater detail under Question 4.
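The instrumental-variable logic above can be illustrated with the standard Wald estimator. This is a hypothetical sketch (not any cited study's analysis): "lives near the program site" plays the role of the instrument, compliance types (always-takers, never-takers, compliers) are generated directly from an unobserved frailty variable, and frailty also drives the outcome so that a naive comparison is badly biased:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical IV setup: unobserved frailty drives both treatment uptake
# and the outcome (the treatment is endogenous).
frailty = rng.normal(0.0, 1.0, n)
near = rng.random(n) < 0.5           # instrument: lives near the program site
always = frailty > 1.0               # frailest patients enroll regardless
never = frailty < -1.0               # healthiest patients never enroll
complier = ~always & ~never          # the "marginal population"
treated = always | (complier & near)

true_late = -1.0                     # homogeneous effect, for simplicity
outcome = 2.0 * frailty + true_late * treated + rng.normal(0.0, 1.0, n)

# Naive treated-vs-untreated contrast: confounded by frailty (wrong sign here).
naive = outcome[treated].mean() - outcome[~treated].mean()

# Wald estimator: effect of the instrument on the outcome, divided by its
# effect on treatment uptake; this identifies the LATE for compliers.
wald = (outcome[near].mean() - outcome[~near].mean()) / \
       (treated[near].mean() - treated[~near].mean())
```

Because the effect is homogeneous in this sketch, the Wald LATE also equals the ATE; under heterogeneity that equivalence breaks down, as Question 4 discusses.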

In sum, selection bias in observational studies may cause the estimated ATE or ATT to differ from the ATE or ATT that would be obtained from an RCT on the same patients in the same geographic area and treatment settings. Investigators examining observational data should report the extent to which they have accounted for potential selection bias in their analyses and explain the interpretation of the findings in detail.

Question 4: Did the patients know more about their potential outcomes than the researchers?

When there is treatment effect heterogeneity within the treated group, the ability of the LATE to approximate the ATE depends on whether the heterogeneity is essential or non-essential.12,22,23 In non-essential heterogeneity, treated individuals respond differently to a given treatment due to some unobserved baseline factor, but they are unable to anticipate their outcome (they have no more information about their likely outcome than the researcher when they choose a treatment).24 For instance, in our example, the percent reduction in adverse drug events as a result of the medication reconciliation program might differ with respect to pharmacy staffing levels, which neither the patient nor the researcher observes. Instrumental variables or selection models can be used to estimate the LATE and the ATE in these cases.

In essential heterogeneity, however, treated individuals respond differently to a given treatment, anticipate this different response, and choose their treatment according to this knowledge that is not available to the researcher.24 These individuals are said to be “sorting on the gain.”12 In our example, if individuals choose to participate in the medication reconciliation program because they expect they will derive relatively greater benefit from the program and their expectations are correct, inference regarding the effect of the program on these individuals is likely to be biased because the researcher was not able to adjust for this person-specific expected gain. This is referred to as expected gains bias. The term “expected gains” refers to the patient’s assessment of the benefits she is likely to derive from treatment. Instrumental variables can be used to estimate the LATE in these cases. However, the LATE may not approximate the ATE in models with essential heterogeneity, because the unobserved variables influencing treatment effect estimates also are influencing treatment selection.12,22,23
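Sorting on the gain can be made concrete in a simulation. This is a hypothetical sketch (illustrative only): each simulated patient privately knows her own gain and enrolls when the expected benefit exceeds a participation cost that the instrument lowers. The instrument still identifies a LATE, but because compliers are defined by their gains, that LATE differs from the ATE:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Hypothetical essential heterogeneity: each patient knows her own gain.
gain = rng.normal(-1.0, 1.0, n)        # individual effect; ATE = mean(gain)
near = rng.random(n) < 0.5             # instrument: lowers the cost of uptake
cost = np.where(near, 0.2, 1.0)        # participation "hassle" cost
treated = -gain > cost                 # enroll only if expected benefit > cost

outcome = gain * treated + rng.normal(0.0, 1.0, n)

ate = gain.mean()                      # ~ -1.0; observable only in simulation
# Wald/IV estimator recovers the LATE: the mean gain among compliers
# (those with 0.2 < -gain <= 1.0), not the population ATE.
wald = (outcome[near].mean() - outcome[~near].mean()) / \
       (treated[near].mean() - treated[~near].mean())
```

Here the LATE is noticeably smaller in magnitude than the ATE, because the patients whose participation the instrument sways are, by construction, those with moderate gains; patients with the largest gains enroll regardless of the instrument.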

Person-centered treatment effects (PeT) may better portray individual variation in treatment effects in cases of essential heterogeneity.25 PeTs are weighted average treatment effects that account for sorting on the gain and differential probabilities of treatment choice by considering both unobserved and observed confounders. For instance, in a study of the effect of surgery versus surveillance on survival among patients with prostate cancer, Basu calculated PeTs to account for the possibility that patients who choose to undergo surgery are systematically more likely to believe surgery will improve life expectancy than patients who do not choose surgery, and that patients’ beliefs about surgery benefits are correct.25 More information about PeTs is available elsewhere.25 Investigators should clearly describe to whom their research results are generalizable when essential heterogeneity is suspected.26

Conclusion

The palliative care consultation team leader in our example would first need to determine whether patients were randomized to the treatment or whether they entered the treatment group through some other process. If treatment was randomly assigned, if the study sample is representative of the clinician’s patients, and if the organizational context of the previous study is similar to the context in which the treatment will be implemented, the ATE from the RCT is likely generalizable to the clinician’s patients. However, the clinician should pay close attention to any heterogeneity in the estimated effects. If the results were from an observational study, the clinician would need to be aware of the potential for bias due to endogenous choice of the treatment. She would need to know whether the reported treatment effect refers to all hospitalized patients or only to those who participated in the medication reconciliation program, and whether the analyst was able to observe all patient characteristics that were likely to be associated with both treatment choice and outcome. If not, she should assess whether the analyst attempted to control for unobserved variables affecting both treatment choice and outcomes, perhaps through instrumental variables. She should be concerned that the decision to participate in the program might have been based on the expected benefit from the program. When instrumental variables are employed, she should determine if the analyst attempted to describe which patients’ participation decisions may have been influenced by the instrument.

Greater awareness and greater author disclosure of the subgroup to whom a treatment effect refers and the potential for treatment effect heterogeneity may help those disseminating and using research to choose and interpret the most appropriate treatment.27–29 Greater awareness of the reasons why treatment effects may differ across studies of the same patients in the same treatment settings and same geography can help policymakers and clinicians understand to whom a study’s results may be generalized. Our hope is that a common set of terms to describe various treatment effects of interest across disciplines could contribute to better communication, more appropriate dissemination of research findings across different settings, and ultimately, improved patient outcomes.

Acknowledgments

Financial Support: Dr. Garrido (CDA 11-201/CDP 12-255) and Dr. Maciejewski (RCS 10-391) receive support from the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development Service.

Footnotes

The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

References

1. Maciejewski ML, Diehr P, Smith MA, Hebert P. Common methodological terms in health services research and their symptoms. Medical Care 2002; 40: 477–484.
2. Maciejewski ML, Weaver EM, Hebert PL. Synonyms in health services research methodology. Medical Care Research and Review 2011; 68: 156–176.
3. Cameron AC, Trivedi PK. Microeconometrics: Methods and Applications. New York: Cambridge University Press; 2005.
4. StataCorp. Stata 13 Treatment Effects Reference Manual. College Station, TX: Stata Press; 2013.
5. Splawa-Neyman J, Dabrowska DM, Speed TP. On the application of probability theory to agricultural experiments: Essay on principles. Statistical Science 1990; 5: 465–472.
6. Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 1974; 66: 688–701.
7. McCaffrey DF, Griffin BA, Almirall D, Slaughter ME, Ramchand R, Burgette LF. A tutorial on propensity score estimation for multiple treatments using generalized boosted models. Statistics in Medicine 2013; 32: 3388–3414.
8. Garrido MM, Deb P, Burgess JF, et al. Choosing models for health care cost analyses: Issues of nonlinearity and endogeneity. Health Services Research 2012; 47(6): 2377–2397.
9. Imai K, King G, Stuart E. Misunderstandings between experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society, Series A 2008; 171(2): 481–502.
10. DuGoff EH, Schuler M, Stuart E. Generalizing observational study results: Applying propensity score methods to complex surveys. Health Services Research 2014; 49(1): 284–303.
11. Currow DC, Quinn S, Agar M, et al. Double-blind, placebo-controlled, randomized trial of octreotide in malignant bowel obstruction. Journal of Pain and Symptom Management 2015; 49(5): 814–821.
12. Heckman J, Urzua S, Vytlacil E. Understanding instrumental variables in models with essential heterogeneity. The Review of Economics and Statistics 2006; 88: 389–432.
13. Rothwell PM. Subgroup analysis in randomised controlled trials: Importance, indications, and interpretation. Lancet 2005; 365: 176–186.
14. Sussman JB, Kent DM, Nelson JP, Hayward RA. Improving diabetes prevention with benefit based tailored treatment: Risk based reanalysis of Diabetes Prevention Program. BMJ 2015; 350: h454.
15. Yarlas A, Miller K, Wen W, et al. A subgroup analysis found no diminished response to buprenorphine transdermal system treatment for chronic low back pain patients classified with depression. Pain Practice 2015; doi: 10.1111/papr.12298.
16. Miller SC, Gozalo P, Lima JC, et al. The effect of Medicaid nursing home reimbursement policy on Medicare hospice use in nursing homes. Medical Care 2011; 49(9): 797–802.
17. Blundell R, Costa Dias M. Alternative approaches to evaluation in empirical microeconometrics. Institute for the Study of Labor (IZA) Discussion Paper No. 3800; 2008.
18. Imbens GW, Angrist JD. Identification and estimation of local average treatment effects. Econometrica 1994; 62: 467–475.
19. Penrod JD, Goldstein NE, Deb P. When and how to use instrumental variables in palliative care research. Journal of Palliative Medicine 2009; 12(5): 471–474.
20. Saito AM, Landrum MB, Neville BA, et al. Hospice care and survival among elderly patients with lung cancer. Journal of Palliative Medicine 2011; 14(8): 929–939.
21. Currow DC, Plummer JL, Kutner JS, et al. Analyzing phase III studies in hospice/palliative care. A solution that sits between intent-to-treat and per protocol analyses: The palliative-modified ITT analysis. Journal of Pain and Symptom Management 2012; 44(4): 595–603.
22. Harris KM, Remler DK. Who is the marginal patient? Understanding instrumental variables estimates of treatment effects. Health Services Research 1998; 33: 1337–1360.
23. Heckman J. Instrumental variables: A study of implicit behavioral assumptions used in making program evaluations. Journal of Human Resources 1997; 32: 441–462.
24. Basu A, Heckman JJ, Navarro-Lozano S, Urzua S. Use of instrumental variables in the presence of heterogeneity and self-selection: An application to treatments of breast cancer patients. Health Economics 2007; 16: 1133–1157.
25. Basu A. Estimating person-centered treatment (PeT) effects using instrumental variables: An application to evaluating prostate cancer treatments. Journal of Applied Econometrics 2014; 29: 671–691.
26. Brooks JM, Fang G. Interpreting treatment-effect estimates with heterogeneity and choice: Simulation model results. Clinical Therapeutics 2009; 31: 902–919.
27. Altman DG, Schulz KF, Moher D, et al. The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine 2001; 134(8): 663–694.
28. Schulz KF, Altman DG, Moher D, et al. CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340: c332.
29. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for reporting observational studies. Annals of Internal Medicine 2007; 147(8): 573–577.
