Editorial. Am J Public Health. 2016 Sep;106(Suppl 1):S17–S18. doi: 10.2105/AJPH.2016.303354

Replication Typology and Guidelines for Adolescent Pregnancy Prevention Initiatives

Kenneth R. McLeroy, Kelly L. Wilson, Jennifer Farmer, Whitney R. Garney

This Office of Adolescent Health and AJPH supplement on adolescent pregnancy prevention illustrates the practical lessons and challenges that behavioral scientists and health educators encounter in the large-scale replication of evidence-based adolescent pregnancy prevention programs.

The program models replicated during the 2010 to 2015 initiative represented a variety of approaches, including abstinence, sexual health education, youth development, and programs for clinical settings and specific populations. Throughout these evaluations, the focus on replication was largely aimed at ensuring program fidelity, which was measured using facilitators' self-reported adherence to program models and observations by independent observers. Findings from the research presented in this theme issue suggest that there is valuable information to be gained through replication studies. Information provided by this first cohort of adolescent pregnancy prevention grantees can inform the evidence base and provide insight into what is needed for program replication in other fields.

WHAT IS REPLICATION?

Funders, researchers, and practitioners have paid greater attention to scientific replication in recent years through a variety of efforts, ranging from

  1. attempting to replicate theoretical findings, such as the work conducted since 2008 by the American Psychological Association, which replicated (or failed to replicate) the findings of 100 prominent articles in the discipline;

  2. summarizing the effectiveness of interventions for specific health problems (e.g., systematic reviews);

  3. expanding the evidence base and identifying interventions with demonstrated effectiveness;

  4. documenting the continued effectiveness of interventions;

  5. working to expand the generalizability of interventions to new settings, populations, and implementation models; and

  6. attempting to identify core and modifiable elements of existing interventions.

REPLICATION TYPOLOGY

There may be multiple types of replication research, many of which are represented in this special issue. Valentine et al.1 identify five types of replication:

  1. statistical replications, which aim to replicate an existing study with a new sample and test whether the original results are attributable to random effects;

  2. generalizability replications, in which one aspect of the study design, such as the target population, is altered to determine the extent to which results generalize from one population or setting to another;

  3. implementation replications, in which some implementation details are varied, such as the number of sessions or contacts;

  4. theory development replications, in which variations in the intervention, such as variations in mediators or modifiers, allow for a better understanding of how the intervention works; and

  5. ad hoc replications, in which interventions may vary from each other in multiple and usually unsystematic ways.

Why and how a replication is conducted, that is, its place in the typology of replication research, matters because research models and standards vary according to the purpose of the replication. For example, statistical replications generally require the most fidelity to an implementation model because the purpose is to replicate previous findings. Generalizability replications, on the other hand, allow for systematic variations in program settings or populations.

This typology of replication is useful for interpreting the study designs and results of adolescent pregnancy prevention research. In peer reviews of the articles in this issue, one of the most frequently raised criticisms was that although the studies adhered to the original program model, they were implemented in new settings and with new populations. This criticism reflects a common misconception about replication studies: that a replication must be conducted in precisely the same manner as the previous study. However, as noted by Valentine et al., exact replication is virtually impossible, and the type of replication attempted will affect judgments about the quality of the study.1 As a result, readers are encouraged to take into account the type of replication being attempted and how the replication adds to our understanding of the effectiveness of specific approaches, strategies, and theories.

Moreover, decisions about the effectiveness of interventions should be based on more than one replication study. We need to use all available evidence to make decisions about intervention effectiveness.1

REPORTING GUIDELINES

In the past, the evaluation studies used to determine whether a program should be deemed “evidence-based” have frequently lacked transparency. Reporting guidelines for scientific research have been developed to remedy this issue2 and to improve transparency by requiring the detailed information others need to replicate the research.

Hundreds of reporting guidelines, ranging in scope and purpose, now exist; the EQUATOR Network maintains an inventory of guidelines, which is available online (http://www.equator-network.org).3 Reporting guidelines address specific study designs, including parallel group randomized trials (CONSORT 20104), observational studies (STROBE), and qualitative research (SRQR5). In recent years, guideline extensions have been developed to emphasize important aspects of interventions (TIDieR6). A checklist and an explanatory document typically describe a guideline’s components and characteristics. Guidelines help authors document their studies thoroughly in the peer-reviewed literature, thus improving other researchers’ ability to replicate studies and interpret findings.

Several reporting guidelines apply to the adolescent pregnancy prevention replication studies; three of these, along with two extensions, are geared toward rigorous evaluation studies and interventions:

  1. CONSORT (Consolidated Standards of Reporting Trials), revised in 2010 to offer guidance on reporting parallel group randomized trials4;

  1a. the CONSORT extension for social and psychological interventions (SPI), which is specifically relevant to public health;

  1b. TIDieR (Template for Intervention Description and Replication), a CONSORT extension used for reporting interventions6;

  2. TREND (Transparent Reporting of Evaluations with Nonrandomized Designs), used for reporting nonrandomized designs7; and

  3. STROBE (Strengthening the Reporting of Observational Studies in Epidemiology), used for reporting observational studies in epidemiology.

REPLICATION AHEAD

Initiatives supporting adolescent pregnancy prevention are moving the field forward in productive ways. However, an emphasis on replication that embraces the various types and purposes of replication research will benefit both the science and the practice of adolescent pregnancy prevention and will strengthen our ability to apply the social and behavioral sciences to public health problems.

Embracing replication research is not without challenges. The development of the knowledge and intervention base for adolescent pregnancy prevention will require considerable resources, collective effort, and a systematic approach to replication that addresses the multiple goals and strategies for building the evidence.

REFERENCES

1. Valentine JC, Biglan A, Boruch RF, et al. Replication in prevention science. Prev Sci. 2011;12(2):103–117. doi: 10.1007/s11121-011-0217-6.
2. Gottfredson DC, Cook TD, Gardner FE, et al. Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: next generation. Prev Sci. 2015;16(7):893–926. doi: 10.1007/s11121-015-0555-x.
3. Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman D. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010;8(1):24. doi: 10.1186/1741-7015-8-24.
4. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332. doi: 10.1136/bmj.c332.
5. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–1251. doi: 10.1097/ACM.0000000000000388.
6. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. doi: 10.1136/bmj.g1687.
7. Vlahov D. Transparent reporting of evaluations with nonrandomized designs (TREND). J Urban Health. 2004;81(2):163–164. doi: 10.1093/jurban/jth099.
