American Journal of Public Health
Editorial
Am J Public Health. 2007 Apr;97(4):630–633. doi: 10.2105/AJPH.2006.094169

Reporting Implementation in Randomized Trials: Proposed Additions to the Consolidated Standards of Reporting Trials Statement

Evan Mayo-Wilson 1
PMCID: PMC1829360  PMID: 17329641

Abstract

Randomized controlled trials of public health interventions are often complex: practitioners may not deliver interventions as researchers intended, participants may not initiate interventions and may not behave as expected, and interventions and their effects may vary with environmental and social context.

Reports of randomized controlled trials can be misleading when they omit information about the implementation of interventions, yet such data are frequently absent in trial reports, even in journals that endorse current reporting guidelines.

Particularly for complex interventions, the Consolidated Standards of Reporting Trials (CONSORT) statement does not include all types of information needed to understand the results of randomized controlled trials. CONSORT should be expanded to include more information about the implementation of interventions in all trial arms.


REPORTING THE DESIGN OF an intervention tells part of a complex story, but public health interventions may involve multiple sites and practitioners, clinical decisions, and patient preferences. Practitioners may not deliver all parts of interventions or may add components; experimental participants may not take up interventions completely, and control participants may receive unintended services; and experimental interventions themselves may change according to contextual demands.

Trial reporting has improved since the introduction of guidelines that emphasize transparent reporting of methods and results1; however, evidence demonstrates that trial reports continue to lack information about the implementation of interventions—their actual delivery by practitioners and uptake by participants.

Implementation data increase the external validity of trials and aid the application of results by practitioners.2,3 Policymakers, administrators, and researchers need these data to assess the generalizability of findings, to synthesize literature,4 to design future trials, to determine the feasibility of interventions,5 and to develop treatment guidelines.6 The importance of implementation data is emphasized in the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement,7 a guide for reporting nonrandomized controlled trials that complements the Consolidated Standards of Reporting Trials (CONSORT) statement,8 a guide for reporting randomized controlled trials. Implementation data are needed to understand the results and implications of both randomized and nonrandomized trials, but unlike TREND, CONSORT gives little attention to practitioner actions and participant experiences.

On the basis of previous research findings, I propose that CONSORT be expanded to encourage the inclusion of implementation data in reports of randomized controlled trials.

IMPLEMENTATION DATA

There is extensive behavioral literature about operationally defining and measuring dependent and independent variables (e.g., 9,10). Reviews have consistently shown that independent variables are poorly defined and infrequently measured in trial reports; reports would be more useful if they contained richer information about actual similarities and differences between trial arms. These reviews also demonstrate that the quality of implementation reporting has not improved in recent years despite improvements in overall report quality.

One review of 539 studies published in the Journal of Applied Behavior Analysis between 1968 and 1980 found that among the surveyed studies presenting operational definitions, only an average of 16% (range 3%–34%) also performed some check on the accuracy of the implementation of the independent variable.11 A similar review of school-based studies found that 64 of 181 (35%) operationally defined the intervention and 45 (25%) monitored or measured its implementation.12 In a review of studies involving people with learning disabilities, 12 of 65 (18%) measured implementation of the independent variable.13 A review of 148 studies on parent training research published in 18 journals between 1975 and 1990 found that almost all reports failed to examine differences between program design and implementation.14

In a broader review, fewer than 6% of 359 psychosocial trials included a treatment manual, implementer supervision, and an adherence check; 55% did not report using any of these methods to promote and verify implementation.15 An analysis of 162 prevention studies found that 39 (24%) reported a method for verifying intervention delivery,16 and reviews of the 1990 editions of Behavior Therapy and the Journal of Consulting and Clinical Psychology found that 9 of 25 (36%) and 7 of 22 (32%) articles, respectively, assessed treatment delivery directly.17

In 2005, the National Institutes of Health Behavior Change Consortium published one of the most comprehensive analyses of implementation data from 342 health behavior intervention studies; 71% of studies reported theoretical models, whereas only 27% reported mechanisms to monitor adherence.18

Recently, systematic reviews have been used to highlight the omission of implementation information in trial reports.19–21 For example, a review about smoking cessation concluded that studies should describe “the intervention in sufficient detail for its replication even if the detail requires a separate paper.”22(p10) A review of interventions to promote smoke detectors identified what later authors labeled “systematic deficiencies in the literature in reporting context, methods, and details of implementation.”23(p150)

Recurrent omissions of implementation data may prevent readers from acting on the results of trials. Worse, results can be misleading when implementation data are not considered. A review of tap water for wound cleansing concluded that tap water might be as effective as sterile water or sterile saline for preventing infection and promoting healing24; however, most trials took place in settings with sanitary tap water. The results applied only to similar settings.25

EXISTING GUIDELINES

The CONSORT statement includes practical, evidence-based recommendations for reporting randomized trials. Since their introduction, the quality of trial reports has improved,1 but only 1 of 22 CONSORT items (item 4) explicitly mentions the design and administration of interventions. Even articles in journals that have adopted CONSORT frequently report implementation inadequately26 and omit the number of participants receiving the treatment allocated.27 These omissions may occur because CONSORT focuses on the examination rather than the implementation of interventions. For example, CONSORT asks researchers to report evidence that blinding occurred as planned, but it does not ask researchers to report evidence that interventions occurred as planned.28

The Transparent Reporting of Evaluations with Nonrandomized Designs statement complements CONSORT and strongly emphasizes the importance of implementation data2: “Sufficient detail and clarity in the report allow readers to understand the conduct and findings of the intervention study and how the study was different from or similar to other studies in the field.”8(p361) The same logic surely applies to the reporting of randomized trials.

Implementation data may not be collected for practical and scientific reasons (e.g., monitoring adherence might confound a trial); however, information about implementation is generally undervalued. Researchers may exclude implementation information because it seems unimportant, or to create a more favorable impression of interventions that encountered problems in delivery or compliance. Journal editors may not demand implementation data because of space restrictions. Furthermore, funding bodies neglect mixed-methods research about putting interventions into practice.29 Expanding CONSORT would signal the importance of implementation information, expose its frequent omission, and encourage its measurement and reporting.

Guidelines in Action

A review of a 2006 series of reports of the Women’s Health Initiative Randomized Controlled Dietary Modification Trial30–32 shows that reports of well-conducted trials in journals endorsing CONSORT could be improved by including data about the implementation of interventions.

The Women’s Health Initiative trial tracked nearly 49 000 women for more than 8 years to investigate the impact of “18 group sessions in the first year and quarterly maintenance sessions thereafter”30(p631) on cardiovascular disease, breast cancer, and colorectal cancer. Although the trial tested a behavioral intervention, reports implied that the study was designed to “directly address the health effects of a low-fat eating pattern”31(p644); an early paper said “the intervention is a dietary pattern.”33(pS95) The trial allowed substantial variation across sites,34 and some aspects of delivery were monitored,35 but the reports neither included nor referenced information about the actual delivery of the intervention by program staff.

The titles of these reports (“Low-fat dietary pattern and . . .”) and popular media accounts of them (e.g., 36) unintentionally confused diet (what people actually eat) with a diet (a behavioral modification program). In the first year of the study, 68.6% of women in the intervention group did not reduce their fat consumption to the target level (20% of total energy intake); in the sixth year, 85.6% exceeded their target, failing to reduce their fat consumption to less than 20% of their total energy intake.31 One abstract reported that “women in the comparison group continued their usual eating pattern,”31(p643) but in the first year of the study, women in the control group reduced both their energy and fat intakes.30 Furthermore, self-reported food intake was inconsistent with changes in weight for both groups.37
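To make these percentages concrete, the fat target can be expressed with the standard conversion of 9 kcal per gram of dietary fat; the figures in the following sketch are hypothetical and are not drawn from the trial reports:

\[
\text{percentage of energy from fat} = \frac{9\ \text{kcal/g} \times \text{fat intake (g/day)}}{\text{total energy intake (kcal/day)}} \times 100
\]

A woman consuming 1800 kcal and 50 g of fat per day would derive 9 × 50 / 1800 = 25% of her energy from fat and would miss the 20% target however many sessions she attended; session attendance and dietary attainment are therefore distinct implementation measures, and both deserve reporting.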

Information about participants33 and recruitment38 has been published elsewhere, but participant uptake (e.g., attendance at sessions) was mentioned only in a definition of dropout and as a statistical variable in an analysis comparing women who attended specified numbers of sessions. Reports considered the impact of compliance on statistical power, but they did not consider why the intervention failed to produce the expected behavioral changes.
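A simplified sketch (my own illustration, not an analysis presented in the trial reports) shows how low uptake dilutes the effect a trial can detect. If a proportion c of the intervention group attains the target behavior and nonattainers gain no benefit (a simplifying assumption), then

\[
\Delta_{\text{ITT}} \approx c \times \Delta_{\text{attainers}}, \qquad n \propto \frac{1}{\Delta^{2}} \;\Rightarrow\; n_{\text{ITT}} \approx \frac{n_{\text{attainers}}}{c^{2}},
\]

where \(\Delta_{\text{ITT}}\) is the intention-to-treat effect and \(n\) is the sample size required for a given power. With roughly 31% of intervention participants reaching the fat target in the first year, such dilution is substantial, which is why reporting the reasons for low uptake is at least as informative as adjusting statistical power for it.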

Although reports suggest that the Women’s Health Initiative trial was well designed and internally valid, more implementation data would increase the utility of its results. Implementation data would help readers understand the trial and help health professionals design and improve other dietary interventions.

PROPOSALS

The original CONSORT statement was criticized for lack of process data (e.g., sessions attended)38 and was revised accordingly.7 CONSORT now includes deviations from protocol as part of participant flow (item 13); when a participant does not receive or complete treatment, “the nature of protocol deviation and the exact reason for excluding participants after randomization should always be reported.”28(p679) Protocol deviations involving participants who are included in analyses are equally relevant.

In reports of randomized trials, authors often report what they intended rather than what actually happened. Information about intentional and unintentional deviations in the delivery of interventions by practitioners, as well as information about deviations because of external factors, helps readers understand and apply the results of randomized trials.

The Transparent Reporting of Evaluations with Nonrandomized Designs statement includes incentives used as part of the intervention (items 4 and 21) and asks for “discussion of the success of and barriers to implementing the intervention”8(p365) (item 20). This information is similarly important in reports of randomized trials.

Adding such items to CONSORT would encourage authors to include, as far as possible, information needed to replicate trials as they happened, including delivery of nonspecific treatment components and receipt of interventions outside trial protocols.

CONCLUSION

The implementation of interventions demands more attention than it receives in most trial reports. Moncher and Prinz argued in 1991 that journal editors should give more attention to implementation.15 In 2004, Bellg et al. expressed hope that funders and publishers would require information about the delivery of interventions.2 With the explosive growth of evidence-based practice and the introduction of CONSORT, it is both unfortunate and surprising that so little has changed.

Critics of evidence-based practice are right to argue that many researchers value statistical outcomes at the expense of other types of data; limited reporting of qualitative and descriptive data prevents researchers, practitioners, and policymakers from applying the results of randomized controlled trials appropriately.

CONSORT has helped improve the quality of trial reports, yet it remains true that “in many clinical trials we must still guess what treatment was actually tested.”17(p1) Implementation data are essential to users of trial reports. Although it may be impossible to include complete data in all printed reports, implementation data could be included in extended online versions of journals, in referenced papers or Web sites, or in trial registries. CONSORT should ask researchers to include or reference implementation data in reports of randomized controlled trials or to justify their absence.

Acknowledgments

I owe special thanks to Paul Montgomery for his comments on the article and for his guidance. I am grateful to Frances Gardner, Janet Harris, Don Operario, and Kristen Underhill for their ideas and suggestions and to Todd Berzon for his editorial assistance.

Peer Reviewed

References

1. Moher D, Jones A, Lepage L, for the CONSORT Group. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285:1992–1995.
2. Bellg A, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23:443–451.
3. Resnick B, Bellg A, Borrelli B, et al. Examples of implementation and evaluation of treatment fidelity in the BCC studies: where we are and where we need to go. Ann Behav Med. 2005;29(Suppl 2):46–54.
4. Herbert RD, Bø K. Analysis of quality of interventions in systematic reviews. BMJ. 2005;331:507–509.
5. Dusenbury L, Brannigan R, Falco M, Hansen W. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18:237–256.
6. Jackson N, Waters E, for the Guidelines for Systematic Reviews of Health Promotion and Public Health Interventions Taskforce. Guidelines for Systematic Reviews of Health Promotion and Public Health Interventions. Version 1.2. Victoria, Australia: Deakin University; 2005. Available at: http://www.vichealth.vic.gov.au/cochrane/activities/Guidelines%20for%20HPPH%20reviews.pdf. Accessed January 14, 2007.
7. Moher D, Schulz KF, Altman DG, for the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Ann Intern Med. 2001;134:657–662.
8. Des Jarlais D, Lyles C, Crepaz N, for the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94:361–366.
9. Billingsley F, White O, Munson R. Procedural reliability: a rationale and an example. Behav Assess. 1980;2:229–241.
10. Bond G, Evans L, Salyers M, Williams J, Kim HW. Measurement of fidelity in psychiatric rehabilitation. Ment Health Serv Res. 2000;2:75–87.
11. Peterson L, Homer A, Wonderlich S. The integrity of independent variables in behavior analysis. J Appl Behav Anal. 1982;15:477–492.
12. Gresham F, Gansle K, Noell G, Cohen S, Rosenblum S. Treatment integrity of school-based behavioral intervention studies: 1980–1990. School Psychol Rev. 1993;22:254–272.
13. Gresham F, MacMillan D, Beebe-Frankenberger M, Bocian K. Treatment integrity in learning disabilities intervention research: do we really know how treatments are implemented? Learn Disabil Res Pract. 2000;15:198–205.
14. Rogers-Wiese MR. A critical review of parent training research. Psychol Sch. 1992;29:229–236.
15. Moncher F, Prinz R. Treatment fidelity in outcome studies. Clin Psychol Rev. 1991;11:247–266.
16. Dane A, Schneider B. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18:23–45.
17. Lichstein K, Riedel B, Grieve R. Fair tests of clinical trials: a treatment implementation model. Adv Behav Res Ther. 1994;16:1–29.
18. Borrelli B, Sepinwall D, Ernst D, et al. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. J Consult Clin Psychol. 2005;73:852–860.
19. Jackson N, Waters E. Criteria for the systematic review of health promotion and public health interventions. Health Promot Int. 2005;20:367–374.
20. Montgomery P, Gardner F, Operario D, Mayo-Wilson E, Tamayo S, Underhill K. The Oxford Implementation Reporting Index: the development of an indicator of treatment fidelity in systematic reviews of psychosocial interventions. Paper presented at: XIII Cochrane Colloquium; October 22–26, 2005; Melbourne, Australia.
21. Underhill K, Operario D, Montgomery P. Reporting deficiencies in trials of abstinence-only programmes for HIV prevention. AIDS. 2007;21(2):266–268.
22. Lumley J, Oliver S, Chamberlain C, Oakley L. Interventions for promoting smoking cessation during pregnancy. Cochrane Database Syst Rev. 2004;(4):CD001055.
23. Arai L, Roen K, Roberts H, Popay J. It might work in Oklahoma but will it work in Oakhampton? Context and implementation in the effectiveness literature on domestic smoke detectors. Inj Prev. 2005;11:148–151.
24. Fernandez R, Griffiths R, Ussia C. Water for wound cleansing. Cochrane Database Syst Rev. 2002;(4):CD003861.
25. Tharyan P. Doing the right reviews: how do we ensure our reviews bring about answers to important questions? Paper presented at: XIII Cochrane Colloquium; October 24, 2005; Melbourne, Australia.
26. Mills E, Wu P, Gagnier J, Devereaux P. The quality of randomized trial reporting in leading medical journals since the revised CONSORT statement. Contemp Clin Trials. 2005;26:480–487.
27. Egger M, Jüni P, Bartlett C, for the CONSORT Group. Value of flow diagrams in reports of randomized controlled trials. JAMA. 2001;285:1996–1999.
28. Altman DG, Schulz KF, Moher D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134:663–694.
29. Sanders D, Haines A. Implementation research is needed to achieve international health goals. PLoS Med. 2006;3(6):e186.
30. Prentice R, Caan B, Chlebowski R, et al. Low-fat dietary pattern and risk of invasive breast cancer: the Women’s Health Initiative Randomized Controlled Dietary Modification Trial. JAMA. 2006;295:629–642.
31. Beresford S, Johnson K, Ritenbaugh C, et al. Low-fat dietary pattern and risk of colorectal cancer: the Women’s Health Initiative Randomized Controlled Dietary Modification Trial. JAMA. 2006;295:643–654.
32. Howard B, Van Horn L, Hsia J, et al. Low-fat dietary pattern and risk of cardiovascular disease: the Women’s Health Initiative Randomized Controlled Dietary Modification Trial. JAMA. 2006;295:655–666.
33. Ritenbaugh C, Patterson R, Chlebowski R, et al. The Women’s Health Initiative Dietary Modification trial: overview and baseline characteristics of participants. Ann Epidemiol. 2003;13(9):S87–S97.
34. Tinker L, Burrows E, Henry H, Patterson R, Rupp J, Van Horn L. The Women’s Health Initiative: overview of the nutrition components. In: Krummel D, Etherton P, eds. Nutrition in Women’s Health. Gaithersburg, Md: Aspen Publishers Inc; 1996:510–542.
35. Anderson GL, Manson J, Wallace R, et al. Implementation of the Women’s Health Initiative study design. Ann Epidemiol. 2003;13(9):S5–S17.
36. Kolata G. Low-fat diet does not cut health risks, study finds. New York Times. February 16, 2006:A1.
37. Greene PJ. Low-fat diet and weight change in postmenopausal women. JAMA. 2006;296:394.
38. Hays J, Hunt J, Hubbell F, et al. The Women’s Health Initiative recruitment methods and results. Ann Epidemiol. 2003;13(9):S18–S77.
