Introduction
The quasi-experimental design is a research methodology that lies between the rigor of a true experimental design, which includes random assignment to at least one control and one experimental (intervention) group (Hulley, 2013), and the flexibility of observational studies (Maciejewski, 2020). This method is often used in social science, education, and public health research. Quasi-experimental designs include the posttest-only design with a control group, the one-group pretest-posttest design, and the pretest-posttest design with a control group (Gray, 2023; Harris et al., 2006). Please see Figure 1. The quasi-experimental method is often used when classic experimental designs are not feasible or ethical, thus bridging the gap between observational studies and true experiments. This paper builds upon the other manuscripts in this series, introduces the basics of quasi-experimental designs, and provides examples of each research method.
Figure 1.

Posttest-only Design with Control Group, One Group Pretest-Posttest Design, Pretest and Posttest Design with Control Group
Overview – Quasi-experimental Design
Quasi-experimental design strategies are those that, while not incorporating every component of a true experiment, can be developed to support some inferences by managing potential risks to internal validity. Internal validity represents the level of confidence that a cause-and-effect relationship observed in a study is not influenced by other variables (Patino & Ferreira, 2018). Essentially, it answers the question: can a direct causal connection be established between the independent (experimental) variable and the outcome (dependent) variable without interference from external factors?
Quasi-experimental designs are often utilized when the investigator cannot implement a control group or randomize study groups. If it is not feasible to randomize an intervention or establish a control group, additional factors can be included in the design to strengthen internal validity (Gallin, 2018). As with other research designs, such as randomized controlled trials and cohort studies, investigators must define the study aims, develop the eligibility criteria for participation in the quasi-experimental research study, and select appropriate measurement tools to assess outcomes.
Quasi-experimental studies are often used in real-world settings and can leverage events, such as examining health outcomes following a hurricane or other natural disaster (de Vocht et al., 2021; Gallin, 2018). If an investigator is studying health outcomes after a natural disaster, the study will likely be an observational quasi-experimental design in which no intervention is given to the study participants. Quasi-experimental designs are also employed for studying behavioral interventions in natural settings (e.g., a walking initiative developed in a local city). When conducting these studies, the investigator must consider potential sources of bias and threats to validity and select the most appropriate research design (Gallin, 2018). For example, suppose an investigator wants to assess stress levels associated with surviving a hurricane. The investigator could not randomize participants in this situation and might instead use a quasi-experimental design with the hurricane's geographical location as an eligibility criterion. Another eligibility criterion for this example might be damage to or loss of a residential dwelling.
For additional information about the quasi-experimental design, the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais, Lyles, & Crepaz, 2004) is a 22-item checklist investigators can review when using a quasi-experimental design. The TREND guideline was developed to improve the reporting quality of nonrandomized behavioral and public health intervention studies; it is also an excellent resource for learning about the research design.
Posttest-Only Design with a Control Group
In the posttest-only design with a control group, there are two groups: an experimental group that receives an intervention and a control (comparison) group that does not (Gray, 2023). Both groups are measured after the intervention. For example, a posttest-only design with a control group might be employed to assess the impact of a new hand hygiene intervention among hospital staff on rates of healthcare-associated infections. For this study, two hospitals of similar size and patient acuity would be selected. One would implement the new hand hygiene protocol for its staff, while the other would not. Infection rates would be measured in both groups after three months. Because partner institutions might have restrictive policies or competing interests that prevent implementing new procedures (e.g., vendor contracts for certain supplies, or hospital management disagreeing with the product used for the study), the posttest-only design was the best option for this sample study.
It is essential to acknowledge that this design does not allow definitive conclusions regarding causality or efficacy due to various threats to validity (Gray, 2023). There is a potential risk of selection bias affecting both groups, and including a control group could create a false sense of certainty regarding the study results. Additionally, the absence of a pretest (measuring infection rates before implementing the new hand hygiene intervention) makes it impossible to determine whether any observed differences in infection rates between the two facilities are attributable to the intervention itself or to pre-existing discrepancies, such as staffing differences. Consequently, the observed differences in posttest measures, specifically the changes in infection rates, may be attributable to the new hand hygiene practices or to other factors preceding the intervention (e.g., increased attention to hand washing due to COVID-19).
One Group Pretest-Posttest Design
The one-group pretest-posttest structure is a frequently used quasi-experimental design. Participants are selected based on convenience and suitability to the study (Gray, 2023). In this design, participants are measured before (pretest) and after (posttest) the intervention, and the effect of the intervention is inferred from the difference between pretest and posttest results. For example, a study of high-intensity training for weight loss (30 minutes, five times a week, for three months) would weigh participants before and after the intervention.
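To make the pretest-posttest difference concrete, the sketch below analyzes a one-group design with a paired t-test computed from the standard library. The weights are hypothetical illustration data, not results from any real trial.

```python
# Minimal sketch: one-group pretest-posttest analysis via a paired t-test.
# The weights below are hypothetical illustration data.
import math
import statistics

pre_kg = [92.1, 88.4, 101.3, 95.0, 110.2, 87.6, 99.8, 105.5]
post_kg = [89.0, 87.1, 97.9, 94.2, 106.8, 86.0, 97.5, 102.3]

# Each participant serves as their own comparison: work with within-person
# changes and test whether the mean change differs from zero.
changes = [post - pre for pre, post in zip(pre_kg, post_kg)]
n = len(changes)
mean_change = statistics.mean(changes)
sd_change = statistics.stdev(changes)
t_stat = mean_change / (sd_change / math.sqrt(n))  # df = n - 1

print(f"mean change = {mean_change:.2f} kg, t = {t_stat:.2f} (df = {n - 1})")
```

Note that, as the surrounding text explains, a significant within-group change alone does not establish that the intervention caused it; history, maturation, and regression to the mean remain uncontrolled in this design.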
Even with the inclusion of a pretest, this design has significant limitations that may render the results difficult to interpret (Gray, 2023). Pretest results cannot effectively act as a control group. Events that influence the posttest responses might occur between the pretest and posttest, so posttest scores could be affected by factors such as historical events and maturation (Gray, 2023). A historical event threatens internal validity when an external event unrelated to the study takes place and influences the outcome of the dependent variable. Continuing with the high-intensity training example, a historical event could be a new over-the-counter dietary supplement for weight loss promoted on social media; participants could take the new supplement while completing high-intensity training. Maturation threatens internal validity when normal changes resulting from the passage of time affect the outcome of the dependent variable. In this study, for example, high-intensity training may increase muscle mass and, therefore, body weight.
Additionally, suppose participants have high pretest scores (i.e., lower body weight, neither overweight nor obese). In that case, the tool may not be sensitive enough to detect progress, or scores may gravitate toward the mean (Gray, 2023). In clinical research, gravitating toward the mean, or regression to the mean, refers to the statistical phenomenon in which extreme initial measurements, unusually high or low, tend to move closer to the average in subsequent measurements (Barnett, van der Pols, & Dobson, 2005). Regression to the mean can cause misleading interpretations of research results, primarily if not adequately accounted for during study design and analysis (Gray, 2023; Barnett et al., 2005). Incorporating a comparison or control group into this design would enhance the investigator's capacity to draw causal inferences (Gray, 2023).
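Regression to the mean can be demonstrated with a small simulation; the sketch below uses arbitrary numbers (a population mean of 100 and a selection cutoff of 115) chosen purely for illustration. Participants selected for extreme first measurements score closer to the average the second time, even with no intervention at all.

```python
# Simulation sketch of regression to the mean (illustrative numbers only).
import random

random.seed(1)

# Each person has a stable true score; each observed score adds noise.
true_scores = [random.gauss(100, 10) for _ in range(10_000)]
first = [t + random.gauss(0, 10) for t in true_scores]
second = [t + random.gauss(0, 10) for t in true_scores]

# Select an "extreme" group using the first measurement only, mimicking
# enrollment based on an extreme pretest value.
extreme = [(f, s) for f, s in zip(first, second) if f > 115]
mean_first = sum(f for f, _ in extreme) / len(extreme)
mean_second = sum(s for _, s in extreme) / len(extreme)

# With no intervention at all, the group's second measurement drifts back
# toward the population mean of 100.
print(f"extreme group: first = {mean_first:.1f}, second = {mean_second:.1f}")
```

In a one-group pretest-posttest study that enrolls participants because of extreme baseline values, this drift can masquerade as a treatment effect, which is why a comparison group strengthens the design.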
Pretest and Posttest Design with a Control Group
The pretest and posttest design with a control group is a widely used quasi-experimental design. In this design, the researcher selects a group to receive the treatment and another with similar characteristics to serve as the control group (Alessandri, Zuffianò, & Perinelli, 2017; Gray, 2023). Both groups complete a pretest, after which the treatment group receives the intervention, and finally, both groups complete a posttest. It is ideal if the groups' mean scores on the pretest are similar (p-value > .05). Additionally, the investigator compares demographic characteristics and other variables that could influence posttest scores, such as disease status or time since diagnosis. By ensuring similarity between the treatment and control groups, differences in posttest scores can more plausibly be attributed to the intervention received by the treatment group (Gray, 2023).
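The baseline-equivalence check described above can be sketched as a two-sample comparison of pretest means. The example below uses Welch's t-test computed from the standard library; the memory-style scores are hypothetical illustration data, not from any real study.

```python
# Sketch: checking pretest (baseline) equivalence between two nonrandomized
# groups with Welch's two-sample t-test. Scores are hypothetical.
import math
import statistics

treatment_pre = [24, 27, 22, 25, 26, 23, 28, 24, 25, 26]
control_pre = [25, 23, 26, 24, 27, 25, 22, 26, 24, 25]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

t_stat = welch_t(treatment_pre, control_pre)
# A small |t| (roughly corresponding to p > .05) suggests the groups start
# out similar, supporting attribution of posttest differences to the
# intervention rather than to baseline imbalance.
print(f"baseline t = {t_stat:.2f}")
```

A nonsignificant baseline difference does not guarantee comparability on unmeasured variables, which is why the text also recommends comparing demographics and clinical characteristics.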
To illustrate this design, investigators recruited older adults from two senior centers: Senior Center A and Senior Center B. The objective was to assess the impact of an app-based game on the memory of healthy, ambulatory older adults aged 75 and older from the same city. Participants from Senior Center A were provided with the app-based game and asked to attend the senior center five days a week for a month, dedicating 30 minutes to playing the game while at the center and participating in the usual activities. Meanwhile, participants from Senior Center B engaged in their usual activities, such as crafting, dancing, chair yoga, and board games, and also attended the senior center five days a week for a month. Both groups underwent memory tests before and after the 30 days to measure the efficacy of the app-based game.
The above example study, using the pretest and posttest design with a control group, has design limitations that impact internal validity. One of the main weaknesses is that participants are not randomized into the treatment and control groups (Naci, 2017). Therefore, any differences observed in the posttest scores of the treatment group may be attributable to an unmeasured confounding variable, a situation where a third variable affects both the independent and dependent variables, leading to a distorted association (Capili, 2021). Additionally, external events or unrelated changes between the pretest and posttest may positively affect the dependent variable in the control group or reduce positive changes in the dependent variable in the treatment group (Gray, 2023; Harris et al., 2006). Potential issues that could impact the outcomes include, for example, the use of memory-enhancing nutritional supplements or other memory-based games by participants in the control group.
Conclusion
In conclusion, while quasi-experimental research designs have limitations, they offer an alternative when fully randomized controlled clinical trials are not feasible or ethical. When choosing a research design, investigators should consider the trade-offs between internal validity and external validity (the generalizability of results beyond the study). Despite their potential shortcomings, quasi-experimental designs can provide valuable insights into causal relationships in real-world settings, provided steps are taken to minimize potential confounders and biases.
Acknowledgments
This manuscript is supported in part by grant # UL1TR001866 from the National Center for Advancing Translational Sciences (NCATS), the National Institutes of Health (NIH) Clinical and Translational Science Award (CTSA) program.
Contributor Information
Bernadette Capili, Heilbrunn Family Center for Nursing Research, Rockefeller University, New York, NY.
Joyce K. Anastasi, New York University Rory Meyers College of Nursing, New York, NY.
References
- Alessandri G, Zuffianò A, & Perinelli E (2017). Evaluating Intervention Programs with a Pretest-Posttest Design: A Structural Equation Modeling Approach. Frontiers in Psychology, 8. doi: 10.3389/fpsyg.2017.00223
- Barnett AG, van der Pols JC, & Dobson AJ (2005). Regression to the mean: what it is and how to deal with it. Int J Epidemiol, 34(1), 215–220. doi: 10.1093/ije/dyh299
- Capili B (2021). Selection and Implementation of Outcome Measurements. Am J Nurs, 121(8), 63–67. doi: 10.1097/01.Naj.0000767840.30291.31
- de Vocht F, Katikireddi SV, McQuire C, Tilling K, Hickman M, & Craig P (2021). Conceptualising natural and quasi experiments in public health. BMC Medical Research Methodology, 21(1), 32. doi: 10.1186/s12874-021-01224-x
- Des Jarlais DC, Lyles C, & Crepaz N (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health, 94(3), 361–366. doi: 10.2105/ajph.94.3.361
- Gallin J, Ognibene FP, & Johnson LL (Eds.) (2018). Principles and Practice of Clinical Research (4th ed.). Cambridge, MA: Elsevier Inc.
- Gray JGS (2023). Burns & Grove's The Practice of Nursing Research: Appraisal, Synthesis, and Generation of Evidence. St. Louis, MO: Elsevier.
- Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, & Finkelstein J (2006). The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc, 13(1), 16–23. doi: 10.1197/jamia.M1749
- Hulley S, Cummings SR, Browner WS, Grady DG, & Newman TB (2013). Designing Clinical Research (4th ed.). Philadelphia, PA: Wolters Kluwer/Lippincott Williams & Wilkins.
- Maciejewski ML (2020). Quasi-experimental design. Biostatistics & Epidemiology, 4(1), 38–47. doi: 10.1080/24709360.2018.1477468
- Naci H (2017). What to do (or not to do) when randomization is not possible. J Heart Lung Transplant, 36(11), 1174–1177. doi: 10.1016/j.healun.2017.07.008
- Patino CM, & Ferreira JC (2018). Internal and external validity: can you apply research study results to your patients? Jornal Brasileiro de Pneumologia, 44(3), 183. doi: 10.1590/s1806-37562018000000164
