I thank Gargani for his remarks on my recent commentary, which launched the series “Evaluating Public Health Interventions” in AJPH. The series addresses critical methodological issues that arise in the course of evaluating public health interventions. In it, I have considered1,2 and will continue to consider study design and analysis, describe the latest and most innovative emerging methodologies, and provide an overview of best practices. In the first column,1 the one to which Gargani’s letter responds, I defined four overlapping focal areas of inquiry: implementation science, impact evaluation, comparative effectiveness research, and program evaluation. Based on my review of the literature defining “program evaluation,” it appeared that the goal of program evaluation is typically specific to the program being evaluated, rather than aspiring to broader generalizability.
Gargani disagrees and makes a convincing case that program evaluation may also aim for generalizability beyond the index program. I thank him for providing further evidence that unifying the methods of implementation science, impact evaluation, program evaluation, and comparative effectiveness research will be a useful exercise, and that exceptions to the unity of methods across these closely related disciplines will likely be rare. In future columns, I will be mindful to point out such exceptions when they occur.
REFERENCES
1. Spiegelman D. Evaluating public health interventions: 1. Examples, definitions, and a personal note. Am J Public Health. 2016;106(1):70–73. doi:10.2105/AJPH.2015.302923.
2. Spiegelman D. Evaluating public health interventions: 2. Stepping up to routine public health evaluation with the stepped wedge design. Am J Public Health. 2016;106(3):453–457. doi:10.2105/AJPH.2016.303068.