Abstract
Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt, non-randomized interventions. Quasi-experimental studies can be categorized into three major types: interrupted time series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship, including study design and analytic approaches that help avoid selection bias and other common pitfalls of quasi-experimental studies.
Introduction
The fields of healthcare epidemiology and antimicrobial stewardship (HE&AS) frequently apply interventions at the unit level (e.g., an entire intensive care unit [ICU]). These are often rapid responses to outbreaks or other patient safety problems requiring prompt, non-randomized interventions. Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned.1, 2 Quasi-experimental studies can be used to measure the impact of large-scale interventions or policy changes for which data are reported in aggregate and multiple measures of an outcome over time (e.g., monthly rates) are collected.
Quasi-experimental studies vary widely in methodological rigor and can be categorized into three types: interrupted time series designs, designs with control groups, and designs without control groups. The HE&AS literature contains many uncontrolled before-and-after studies (also called pre-post studies), but more advanced quasi-experimental study designs should be considered to overcome the biases inherent in uncontrolled before-and-after comparisons.3 In this article, we highlight methods to improve quasi-experimental study design, including use of a control group that does not receive the intervention2 and use of the interrupted time series design, in which multiple equally spaced observations are collected before and after the intervention.4
Advantages and Disadvantages (Table 1)
Table 1.
| Advantages | Notes |
|---|---|
| Less expensive and time consuming than RCTs or cluster randomized trials | Do not need to randomize groups |
| Pragmatic | Include patients who are often excluded from RCTs; test effectiveness more than efficacy; may have good external validity |
| Can retrospectively analyze policy changes | Even if policy implementation is out of the researcher's control |
| Meet some requirements of causality | Quasi-experimental studies meet some requirements for causality, including temporality, strength of association and dose response2 |
| Designs can be strengthened with control groups, multiple measures over time and cross-overs | Not the gold standard for establishing causation, but can be the next level below an RCT if well designed |
| **Disadvantages** | **Notes** |
| Retrospective data are often incomplete or difficult to obtain | Need processes to assess availability, accuracy and completeness during the baseline phase, before implementation |
| Not randomized | Nonrandomized designs tend to overestimate effect size;3 do not meet all requirements to determine causality; lack internal validity |
| **Potential pitfalls** | **Notes** |
| Selection bias | Occurs when the group receiving the intervention differs from the baseline group.2 |
| Maturation bias | Occurs when natural changes over time influence the study outcome.1 Examples include seasonality, fatigue, aging, maturity or boredom.2 |
| Hawthorne effect | Can bias quasi-experimental studies in which baseline rates are collected retrospectively and intervention rates are collected prospectively, because the intervention group may be more likely to improve when aware of being observed.3 |
| Historical bias | A threat when other events that may affect the outcome occur during the study period.2 |
| Regression to the mean | A statistical phenomenon in which extreme measures tend to revert naturally toward normal.2 |
| Instrumentation bias | Occurs when a measuring instrument changes over time (e.g., improved sensitivity of laboratory tests) or when data are collected differently before and after an intervention.2 |
| Ascertainment bias | Systematic error or deviation in the identification or measurement of outcomes. |
| Reporting bias | Especially prevalent in retrospective quasi-experimental studies, when researchers publish only quasi-experimental studies with positive findings and leave null or negative findings unpublished. |
| Need for advanced statistical analysis with more complex designs | Time series designs should be analyzed with interrupted time series analysis, not single measurements before and after a response to an outbreak; power calculations should account for intracluster correlation |
Note: RCT, randomized controlled trial.
The greatest advantages of quasi-experimental studies are that they are less expensive and require fewer resources than individual randomized controlled trials (RCTs) or cluster randomized trials. Quasi-experimental studies are appropriate when randomization is deemed unethical (e.g., studies of hand hygiene effectiveness).1 Quasi-experimental studies are often performed at the population level rather than the individual level, and thus they can include patients who are often excluded from RCTs, such as those too ill to give informed consent or those undergoing urgent surgery, with IRB approval as appropriate.5 Quasi-experimental studies are also pragmatic because they evaluate the real-world effectiveness of an intervention implemented by hospital staff, rather than the efficacy of an intervention implemented by research staff under research conditions.5 Therefore, quasi-experimental studies may be more generalizable and have better external validity than RCTs.
The greatest disadvantage of quasi-experimental studies is that randomization is not used, limiting the study’s ability to conclude that an intervention is causally associated with an outcome. A practical challenge may also arise when some patients or hospital units are encouraged to introduce an intervention while other units retain the standard of care and may feel excluded.2 Importantly, researchers need to be aware of the biases that may occur in quasi-experimental studies and lead to a loss of internal validity, especially selection bias, in which the intervention group differs from the baseline group.2 Other threats to internal validity in quasi-experimental studies include maturation bias, regression to the mean, historical bias, instrumentation bias, and the Hawthorne effect.2 Lastly, reporting bias is prevalent in retrospective quasi-experimental studies, in which researchers publish only quasi-experimental studies with positive findings and leave null or negative findings unpublished.
Pitfalls and Tips
Key study design and analytic approaches can help avoid common pitfalls of quasi-experimental studies. Quasi-experimental studies can be as small as an intervention in a single ICU or as large as implementation of an intervention across multiple countries.6 Multisite studies generally have stronger external validity. Subtypes of quasi-experimental study designs are shown in Table 2 and the Supplemental Figure.1, 2, 7 In general, the designs assigned higher numbers in the table are more methodologically rigorous. Quasi-experimental studies meet some requirements for causality, including temporality, strength of association and dose response.1, 8 The addition of concurrent control groups, time series measurements, sensitivity analyses and other advanced design elements can further support the hypothesis that the intervention is causally associated with the outcome. These design elements help limit the number of alternative explanations that could account for the association between the intervention and the outcome.2
Table 2.
| Type and Subtype | Description | Notation |
|---|---|---|
| **A. INTERRUPTED TIME-SERIES QUASI-EXPERIMENTAL DESIGNS** | | |
| #15 | Interrupted time series that uses switching replications and a control group | A1c A2c A3c X A4t A5t A6t removeX A7c A8c A9c A10c |
| | | B1c B2c B3c B4c B5c B6c X B7t B8t B9t B10t |
| #14 | Interrupted time series with repeated treatment design13 | A1c A2c A3c X A4t A5t removeX A6c A7c X A8t A9t |
| #13 | Interrupted time series removing the treatment at a known time | A1c A2c A3c A4c X A5t A6t A7t A8t removeX A9c A10c |
| #12 | Interrupted time series with a nonequivalent dependent variable14 | (A1cv, A1cn) (A2cv, A2cn) (A3cv, A3cn) X (A4tv, A4tn) (A5tv, A5tn) |
| #11 | Interrupted time series with an untreated control group12 | A1c A2c A3c A4c A5c X A6t A7t A8t A9t A10t |
| | | B1c B2c B3c B4c B5c B6c B7c B8c B9c B10c |
| #10 | Simple interrupted time series11, 15 | A1c A2c A3c A4c A5c X A6t A7t A8t A9t A10t |
| **B. QUASI-EXPERIMENTAL DESIGNS THAT USE CONTROL GROUPS** | | |
| #9 | The control group design that uses dependent pretest and posttest samples and switching replications | A1c X A2t removeX A3c |
| | | B1c B2c X B3t |
| #8 | The untreated-control group design that uses dependent pretest and posttest samples and a double pretest | A1c A2c X A3t |
| | | B1c B2c B3c |
| #7 | The untreated-control group design that uses dependent pretest and posttest samples | A1c X A2t |
| | | B1c B2c |
| #6 | The posttest-only design that uses an untreated control group | X A1t |
| | | B1c |
| **C. QUASI-EXPERIMENTAL DESIGNS THAT DO NOT USE CONTROL GROUPS** | | |
| #5 | The repeated-treatment design | A1c X A2t removeX A3c X A4t |
| #4 | The removed-treatment design | A1c X A2t A3t removeX A4c |
| #3 | The 1-group pretest-posttest design that uses a nonequivalent dependent variable | (A1cv, A1cn) X (A2tv, A2tn) |
| #2 | The 1-group pretest-posttest design that uses a double pretest | A1c A2c X A3t |
| #1 | The 1-group pretest-posttest design | A1c X A2t |
Note: Classification types adapted from prior publications.1, 2 A, B = groups; 1, 2, 3, etc. = observations for a group; X = intervention; removeX = removal of the intervention; v = variable of interest; n = non-equivalent dependent variable; t = treatment group; c = control group. Time moves from left to right. Citations are published examples from the literature.
Quasi-experimental studies can use observations that were collected retrospectively, prospectively, or a combination thereof. In prospective quasi-experimental studies, baseline measurements are collected prospectively for the purposes of the study; the intervention is then implemented and additional measurements are collected. It is often necessary to use retrospective data when the intervention is outside the researcher’s control (e.g., a natural disaster response) or when hospital epidemiologists are encouraged to intervene quickly in response to external pressure (e.g., high central line-associated bloodstream infection [CLABSI] rates).2 However, retrospective quasi-experimental studies have a higher risk of bias than prospective quasi-experimental studies.2
The first major consideration in quasi-experimental studies is the addition of a control group that does not receive the intervention (Table 2 subtypes 6–9, 11, 15). Control groups help account for seasonal and historical bias. If an effect is seen in the intervention group but not in the control group, causal inference is strengthened. Careful selection of the control group can also strengthen causal inference. Detection bias can be avoided by blinding those who collect and analyze the data to which group received the intervention.2
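To make the controlled comparison concrete, the sketch below fits a group-by-period interaction model (a difference-in-differences style analysis) to hypothetical monthly infection counts from one treated and one untreated control unit. The column names, counts, and patient-day denominators are illustrative assumptions, not data or code from the studies cited here.

```python
# Minimal sketch (hypothetical data): comparing an intervention unit with an
# untreated control unit using a group-by-period interaction. The interaction
# term estimates the intervention effect beyond any change shared by both units.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "infections":   [12, 10, 11, 6, 5, 4, 11, 12, 10, 11, 12, 10],  # monthly counts
    "patient_days": [3000] * 12,                                     # denominators
    "group":        ["treated"] * 6 + ["control"] * 6,
    "post":         [0, 0, 0, 1, 1, 1] * 2,                          # 1 = after intervention
})

# Poisson model with a patient-day offset; 'group:post' is the term of interest.
model = smf.glm(
    "infections ~ group * post",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["patient_days"]),
).fit()
print(model.summary())
```

If an apparent improvement also appears in the control unit, the interaction term shrinks toward zero, which is precisely the protection against historical and seasonal bias described above.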
The second major consideration is designing the study in a way that reduces bias, either by including a non-equivalent dependent variable or by using a removed-treatment design, a repeated-treatment design or a switching replications design. A non-equivalent dependent variable should be similar to the outcome variable except that it is not expected to be influenced by the intervention (Table 2 subtypes 3, 12). In a removed-treatment design, the intervention is implemented and then taken away, and observations are made before, during and after implementation (Table 2 subtypes 4, 5, 13). This design can only be used for interventions that do not have a lasting effect on the outcome that could contaminate the study. For example, once staff have been educated, that knowledge cannot be removed.2 Researchers must clearly explain before implementation that the intervention will be removed; otherwise, its removal can lead to frustration or demoralization among the hospital staff implementing the intervention.2 In the repeated-treatment design (Table 2 subtypes 5, 14), the intervention is implemented, removed, then implemented again. Like the removed-treatment design, the repeated-treatment design should only be used if the intervention does not have a lasting effect on the outcome. In a switching replications design, also known as a cross-over design, one group implements the intervention while the other group serves as the control; the intervention is then stopped in the first group and implemented in the second group (Table 2 subtypes 9, 15). The cross-overs can occur multiple times. If the outcomes change only during intervention observations, and not during control observations, there is support for causality.2
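As an illustration of how these designs translate into analyzable data, the hypothetical sketch below encodes a switching replications (cross-over) design as a time-varying treatment indicator, one row per group per month; the groups, months, and switch points are assumed for illustration only.

```python
# Minimal sketch (hypothetical design): group A receives the intervention in
# months 4-6; when it is removed from A, group B receives it for months 7-10.
import pandas as pd

months = pd.period_range("2023-01", periods=10, freq="M")
design = pd.DataFrame({
    "month": list(months) * 2,
    "group": ["A"] * 10 + ["B"] * 10,
    # 1 while the intervention is active in that group, 0 otherwise
    "treated": [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]    # group A
             + [0, 0, 0, 0, 0, 0, 1, 1, 1, 1],   # group B
})
print(design.pivot(index="month", columns="group", values="treated"))
```

The `treated` column can then enter a regression or time series model directly, so the estimated effect is identified from the within-group switches rather than from a single before-after contrast.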
A third key consideration for quasi-experimental studies with an interrupted time series design is to collect many evenly spaced observations in both the baseline and intervention periods. Multiple observations are used to estimate and control for underlying trends in the data, such as seasonality and maturation.2 The frequency of the observations (e.g., weekly, monthly, quarterly) should have clinical or seasonal meaning so that a true underlying trend can be established. Recommendations for the minimum number of observations needed for a time series design conflict, ranging from 20 observations before and 20 after intervention implementation to 100 observations overall.2–4, 9 The interrupted time series design is the most effective and powerful quasi-experimental design, particularly when supplemented by other design elements.2 However, time series designs are still subject to biases and threats to validity.
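As a practical illustration of preparing evenly spaced observations, the sketch below aggregates hypothetical event dates into a monthly count series; the DataFrame and its `culture_date` column are assumptions for illustration, not part of the cited studies.

```python
# Minimal sketch (hypothetical data): aggregate event-level dates into evenly
# spaced monthly counts before fitting any time series model.
import pandas as pd

events = pd.DataFrame({
    "culture_date": pd.to_datetime([
        "2023-01-05", "2023-01-20", "2023-02-11",
        "2023-04-02", "2023-04-28", "2023-05-15",
    ]),
})

# resample() builds a regular monthly grid, so months with no events (here
# March 2023) appear as zero counts and the series stays evenly spaced.
monthly = (
    events.set_index("culture_date")
          .resample("MS")   # month-start frequency
          .size()
          .rename("count")
)
print(monthly)
```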
The final major consideration is ensuring an appropriate analysis plan. Time series designs collect multiple observations of the same population over time, which results in autocorrelated observations.2 For instance, carbapenem-resistant Enterobacteriaceae (CRE) counts collected one month apart are more similar to one another than CRE counts collected two months apart.4 Basic statistics (e.g., the chi-square test) should not be used to analyze time series data because they cannot account for trends over time and they rely on an assumption of independent observations. Time series data should be analyzed using either regression analysis or interrupted time-series analysis (ITSA).4 Linear regression models or generalized linear models can be used to evaluate the slopes of the observed outcomes before and during implementation of an intervention. However, unlike standard regression models, ITSA relaxes the independence assumption by combining a correlation model with a regression model, effectively removing seasonality effects before addressing the impact of the intervention.2, 4 ITSA assesses the impact of the intervention by evaluating the changes in intercept and slope before and after the intervention. ITSA can also include a lag effect if the intervention is not expected to have an immediate result, and additional sensitivity analyses can be performed to test the robustness of the findings. We recommend consulting a statistician while designing the study to determine which model may be appropriate and to help perform power calculations that account for correlation.
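The sketch below shows one common way to operationalize such an analysis: a segmented regression on simulated monthly rates with terms for baseline trend, immediate level change, and change in slope, using heteroskedasticity- and autocorrelation-consistent (Newey-West) standard errors as a simple acknowledgment of autocorrelation. The data are simulated and the model is only one of several reasonable choices; a full ITSA as described above would model the correlation structure explicitly (e.g., ARIMA errors), a decision best made with a statistician.

```python
# Minimal sketch (simulated data): segmented regression for an interrupted
# time series with 24 monthly observations before and after the intervention.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

n_pre, n_post = 24, 24
df = pd.DataFrame({"time": np.arange(1, n_pre + n_post + 1)})
df["post"] = (df["time"] > n_pre).astype(int)          # 1 after the intervention
df["time_since"] = np.maximum(0, df["time"] - n_pre)   # months since intervention

rng = np.random.default_rng(0)
df["rate"] = 10 - 0.02 * df["time"] - 2.0 * df["post"] + rng.normal(0, 0.8, len(df))

# 'post' estimates the immediate level change; 'time_since' the change in trend.
fit = smf.ols("rate ~ time + post + time_since", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 3}
)
print(fit.params)
```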
Key considerations for designing, analyzing and writing a quasi-experimental study can be found in the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement and are summarized in Table 3.10
Table 3.
CONSIDERATIONS FOR RETROSPECTIVE AND PROSPECTIVE QUASI-EXPERIMENTAL STUDIES
1. Determine PICO: population, intervention, control group, outcomes (specify primary vs. secondary outcomes)
2. What is the hypothesis?
3. Is it ethical or feasible to randomize patients to the intervention?
4. Will this be a retrospective or prospective study or a combination of both?
5. What are the main inclusion and exclusion criteria?
6. Will anyone (participants, study staff, research team, analyst) be blinded to the intervention assignment?
7. Consider options for control group
8. Consider options for nonequivalent dependent variable
9. How will the observations (outcomes) be measured?
10. How many observations can be measured pre and post intervention?
11. How should the observations be spaced to account for seasonality? Weekly? Monthly? Quarterly?
12. Do you hypothesize that the intervention will diffuse quickly or slowly? (e.g. are changes in the outcomes expected right away or only after a phase-in period?)
13. Do you hypothesize that the intervention will have a lasting effect on the outcome? (If yes, do not use cross-over design)
14. What is the analysis plan? (Consult a statistician)
15. If the unit of analysis differs from the unit of assignment, what analytical method will be used to account for this (e.g. adjusting the standard error estimates by the design effect or using multilevel analysis; see the sketch following this table)?
16. What sample size is needed to be powered to see a significant difference? (Consult a statistician)
17. Will the analysis strategy be intention to treat or how will non-compliers be treated in the analysis?
ADDITIONAL CONSIDERATIONS FOR QUASI-EXPERIMENTAL STUDIES WITH PROSPECTIVE COMPONENTS
18. What will be the unit of delivery? (e.g. individual patient or unit or hospital)
19. How will the units of delivery be allocated to the intervention?
20. Who will deliver the intervention? (e.g. study team or healthcare workers)
21. How and when will the intervention be delivered?
22. How will compliance with the intervention be measured?
23. Will there be activities to increase compliance or adherence? (e.g. incentives, coaching calls)
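For item 15 in the table, the sketch below illustrates the standard design-effect adjustment, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation; the ICC and cluster size shown are assumed values for illustration only, and a statistician should guide the actual calculation.

```python
# Minimal sketch (assumed ICC and cluster size): inflate an individually
# powered sample size by the design effect to account for clustering.

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Variance (and sample size) inflation factor: 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

n_unadjusted = 400   # sample size from a standard (non-clustered) power calculation
deff = design_effect(avg_cluster_size=25, icc=0.05)
print(f"Design effect = {deff:.2f}; clustered design needs ~{n_unadjusted * deff:.0f} patients")
```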
Examples of Published Quasi-Experimental Studies in HE&AS
Recent quasi-experimental studies illustrate strengths and weaknesses that require attention when employing these designs.
A recent prospective quasi-experimental study (Table 2 subtype 10) implemented a multicenter bundled intervention to prevent complex Staphylococcus aureus surgical site infections.11 The study exemplified the strengths of quasi-experimental design: a pragmatic approach in a real-world setting that even enabled identification of a dose response to bundle compliance. To optimize validity, the authors included numerous observation points before and after the intervention and used time series analysis. However, this study did not include a concurrent control group, and outcomes were collected retrospectively for the baseline group and prospectively for the intervention group, which may have led to ascertainment bias.
Quach and colleagues performed a quasi-experimental study (Table 2 subtype 11) to evaluate the impact of an infection prevention and quality improvement intervention of daily chlorhexidine gluconate (CHG) bathing to reduce CLABSI rates in the neonatal ICU.12 The primary strength of this study was the use of a non-bathed concurrent control group. The baseline CLABSI rates exceeded the National Healthcare Safety Network (NHSN) pooled mean, yet the concurrent control group did not experience a reduction in rates after the intervention; together, these observations suggest that the observed effect was more likely due to the intervention than to regression to the mean, seasonal effects, or secular trends.
Yin and colleagues performed a quasi-experimental study (Table 2 subtype 14) to determine whether universal gloving reduced healthcare-associated infections (HAIs) in hospitalized children.13 This retrospective study compared the winter respiratory syncytial virus (RSV) season, during which healthcare workers (HCWs) were required to wear gloves for all patient contact, with the non-winter, non-RSV season, when HCWs were not required to wear gloves. Because the study period extended over many calendar years, the design enabled multiple crossovers in which the intervention was removed and reintroduced, as well as the use of time series analysis. However, this study did not have a control group (another hospital or unit that did not require universal gloving during RSV season), nor did it have a non-equivalent dependent variable.
Major Points
Quasi-experimental studies are less resource intensive than RCTs, test real-world effectiveness, and can support the hypothesis that an intervention is causally associated with an outcome. These studies are subject to biases that can be limited by carefully planning the design and analysis. Key strategies to limit bias include adding a control group, including a non-equivalent dependent variable or a removed-treatment design, collecting adequate observations before and during the intervention, and using appropriate analytic methods (e.g., interrupted time series analysis).
Conclusion
Quasi-experimental studies are important for HE&AS because practitioners in those fields often need to perform non-randomized studies of interventions at the unit level. Quasi-experimental studies should not always be considered methodologically inferior to RCTs, because they are pragmatic and can evaluate interventions that cannot be randomized due to ethical or logistic concerns.10 Currently, too many quasi-experimental studies are uncontrolled before-and-after studies that use suboptimal research methods. Advanced techniques, such as control groups and non-equivalent dependent variables, as well as interrupted time series design and analysis, should be used in future research.
Acknowledgments
Financial support. MLS is supported through a VA Health Services Research and Development (HSR&D) Career Development Award (CDA 11-215).
Footnotes
Potential conflicts of interest. None.
References
1. Harris AD, Bradham DD, Baumgarten M, Zuckerman IH, Fink JC, Perencevich EN. The use and interpretation of quasi-experimental studies in infectious diseases. Clin Infect Dis. 2004;38:1586–91. doi:10.1086/420936.
2. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin; 2002.
3. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract. 2000;17(Suppl 1):S11–6. doi:10.1093/fampra/17.suppl_1.s11.
4. Shardell M, Harris AD, El-Kamary SS, Furuno JP, Miller RR, Perencevich EN. Statistical analysis and application of quasi experiments to antimicrobial resistance intervention studies. Clin Infect Dis. 2007;45:901–7. doi:10.1086/521255.
5. Thorpe KE, Zwarenstein M, Oxman AD, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62:464–75. doi:10.1016/j.jclinepi.2008.12.011.
6. Lee AS, Cooper BS, Malhotra-Kumar S, et al. Comparison of strategies to reduce meticillin-resistant Staphylococcus aureus rates in surgical patients: a controlled multicentre intervention trial. BMJ Open. 2013;3:e003126. doi:10.1136/bmjopen-2013-003126.
7. Harris AD, Lautenbach E, Perencevich E. A systematic review of quasi-experimental study designs in the fields of infection control and antibiotic resistance. Clin Infect Dis. 2005;41:77–82. doi:10.1086/430713.
8. Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58:295–300.
9. Crabtree BF, Ray SC, Schmidt PM, O'Connor PJ, Schmidt DD. The individual over time: time series applications in health care research. J Clin Epidemiol. 1990;43:241–60. doi:10.1016/0895-4356(90)90005-a.
10. Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94:361–6. doi:10.2105/ajph.94.3.361.
11. Schweizer ML, Chiang HY, Septimus E, et al. Association of a bundled intervention with surgical site infections among patients undergoing cardiac, hip, or knee surgery. JAMA. 2015;313:2162–71. doi:10.1001/jama.2015.5387.
12. Quach C, Milstone AM, Perpete C, Bonenfant M, Moore DL, Perreault T. Chlorhexidine bathing in a tertiary care neonatal intensive care unit: impact on central line-associated bloodstream infections. Infect Control Hosp Epidemiol. 2014;35:158–63. doi:10.1086/674862.
13. Yin J, Schweizer ML, Herwaldt LA, Pottinger JM, Perencevich EN. Benefits of universal gloving on hospital-acquired infections in acute care pediatric units. Pediatrics. 2013;131:e1515–20. doi:10.1542/peds.2012-3389.
14. Popoola VO, Colantuoni E, Suwantarat N, et al. Active surveillance cultures and decolonization to reduce Staphylococcus aureus infections in the neonatal intensive care unit. Infect Control Hosp Epidemiol. 2016;37:381–7. doi:10.1017/ice.2015.316.
15. Waters TM, Daniels MJ, Bazzoli GJ, et al. Effect of Medicare's nonpayment for hospital-acquired conditions: lessons for future policy. JAMA Intern Med. 2015;175:347–54. doi:10.1001/jamainternmed.2014.5486.