Abstract
Background
Careful consideration of intervention fidelity is critical to establishing the validity and reliability of research findings, yet such reports are often lacking in the research literature. It is imperative that intervention fidelity be methodically evaluated and reported in order to promote the translation of effective interventions into sound evidence-based practice.
Purpose
The purpose of this paper is to explore strategies used to promote intervention fidelity, incorporating examples from a multi-site clinical trial that illustrate the National Institutes of Health Behavior Change Consortium’s five domains of recommended treatment fidelity practices: (1) study design, (2) facilitator training, (3) intervention delivery, (4) intervention receipt, and (5) intervention enactment. The examples are drawn from a multi-site randomized clinical trial testing the efficacy of a computer-assisted cognitive rehabilitation intervention for adults with multiple sclerosis.
Methods
Data derived from audiotapes of intervention classes, audits of computer exercises completed by participants, participant class attendance, and goal attainment scaling suggested relatively high fidelity to the intervention protocol.
Conclusion
This study illustrates how to report intervention fidelity in the literature guided by best-practice strategies, which may serve to promote fidelity monitoring and reporting in future studies.
Keywords: evidence-based practice, reproducibility of results, research design, intervention fidelity
Introduction
Intervention fidelity (also referred to as treatment or implementation fidelity in the literature) describes the degree to which an intervention study is carried out as planned.1–3 Intervention fidelity is integral to both the interpretation (internal validity) and generalization (external validity) of the research findings; it encompasses crucial methodological strategies to enhance the rigor of behavioral interventions. Great effort and resources are expended throughout the life span of an intervention study to ensure that the study’s hypotheses are supported by theory, that measurements are accurate, and that the testing of the intervention is both valid and reliable.4 Strategies and practices must be embedded into every phase of the intervention so that the study will accurately measure what it is designed to measure and will be replicable. Yet although evaluating an intervention’s fidelity is critical to determining its quality and suitability for clinical use, the chronicling of intervention fidelity is sparse in the research literature.5 Reviewers have cited low rates of fidelity measurement in many fields of research, including social psychology, health behavior, and social work.6–8 At stake are the scientific community’s confidence in a study’s findings and the risk that interventions that may or may not be effective will be translated into clinical practice.
In response to the growing demand for evidence-based practice in the social and behavioral sciences, the Treatment Fidelity Workgroup of the National Institutes of Health Behavior Change Consortium (BCC) has published guidelines for assessing and reporting treatment fidelity.4 Embedding, executing, and monitoring treatment fidelity strategies increases a study’s power by strengthening internal validity, reducing error variance, and enhancing generalizability.9 The purpose of this article is to describe the fidelity strategies incorporated within an ongoing multi-site randomized controlled trial and the methods used to assess and monitor strategy adherence throughout the study. The presentation that follows is organized around the BCC framework, which comprises best-practice treatment fidelity recommendations in five domains: (1) study design, (2) facilitator training, (3) intervention delivery, (4) intervention receipt, and (5) enactment of intervention skills (performance/demonstration).
Methods
Study Design Strategies
The design of an intervention study should clearly operationalize the intervention’s theoretical framework, rigorously describe the uniform intervention dose, and identify potential fidelity threats. The Memory, Attention, & Problem Solving Skills for Persons with Multiple Sclerosis (MAPSS-MS) study, which is used here to illustrate fidelity strategies, is a multi-site randomized clinical trial testing the efficacy of a computer-assisted cognitive rehabilitation intervention for adults (age 18 to 60) with multiple sclerosis (MS) who have concerns about their cognitive functioning.10 The MAPSS-MS intervention is based on a conceptual model integrating concepts from the health belief model,11 Pender’s model of health promotion,12 and self-efficacy theory.13 The 8-week intervention and its manual were pilot tested with a sample of 61 persons with MS and then refined using input from a post-intervention focus group.10 The intervention dose was defined by the number of classes attended and the number of online computer exercises completed prior to post-intervention outcome assessment. All intervention classes were audiotaped and reviewed by the principal investigator to monitor consistency and adherence to the intervention protocol. Class attendance and completion of the intervention’s online computer exercises were monitored weekly and reported to the principal investigators and each cohort’s facilitator. To accommodate absences, participants who were unable to attend one or two classes in person were invited to call in and participate by phone. These monitoring strategies promoted early detection of protocol deviations and allowed “course corrections” to be made promptly, as sketched below.
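To make the weekly dose monitoring concrete, the following is a minimal sketch of the kind of tracking report described above. It is not the study’s actual tooling: the record layout, field names, and the 45-exercise weekly target (taken from the assignment described under Intervention Delivery Strategies) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DoseRecord:
    """Hypothetical per-participant dose summary; field names are assumptions."""
    participant_id: str
    classes_attended: int      # weekly sessions attended, in person or by phone
    exercises_completed: int   # cumulative count from the online program

def weekly_dose_report(records: list[DoseRecord], week: int,
                       weekly_exercise_target: int = 45) -> list[str]:
    """Summarize each participant's dose as of a given intervention week,
    flagging anyone behind on attendance or exercises so that 'course
    corrections' can be made promptly."""
    lines = []
    for r in sorted(records, key=lambda rec: rec.participant_id):
        behind = (r.classes_attended < week or
                  r.exercises_completed < week * weekly_exercise_target)
        flag = "  <-- follow up" if behind else ""
        lines.append(f"{r.participant_id}: {r.classes_attended}/{week} classes, "
                     f"{r.exercises_completed} exercises{flag}")
    return lines
```

In practice, a report of this kind would be generated weekly and circulated to the principal investigators and facilitators, mirroring the monitoring cycle described above.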
Facilitator Training Strategies
Standardized training of the intervention facilitators is critical to fidelity. The goal is to standardize the intervention protocol so that each facilitator delivers, and each participant receives, the same core content or “dose” of the intervention. The MAPSS-MS intervention was led by four different facilitators, in three different cities, over 34 months. Facilitator qualifications included a master’s degree or higher in nursing, psychology, or a related field, as well as experience facilitating groups of persons with chronic health conditions. One member of the research team, a master’s-prepared nurse who had been the facilitator in the pilot study, trained all of the facilitators. Training was done in person, approximately 1 month prior to the first intervention class. Training strategies included didactic materials, modeling, video demonstrations, and a training manual. The training manual was mailed to each facilitator 1 week before the training, with instructions to read it prior to the in-person training sessions. Components of the training were standardized using a training protocol with handouts, the MAPSS-MS manual, a skills checklist, and a demonstration video from the pilot study. Delivery of the training was tailored to each facilitator based on the facilitator’s qualifications, experience in delivering group interventions, and prior knowledge of MS. The training emphasized the delicate balance in delivering this intervention: covering the content outlined in the manual while providing opportunities for informal discussion that would enable group members to tailor the content to their individual needs and experiences.
In vivo observation and frequent communication with facilitators throughout each session of the 8-week intervention were aimed at minimizing decay of facilitation skills. One booster session to review the intervention protocol was conducted by conference call with the facilitators from two sites when there was an interval of more than 1 month between cohorts. The booster session focused on areas of the protocol that had presented challenges in intervention delivery.
Intervention Delivery Strategies
The MAPSS-MS intervention, approved by The University of Texas at Austin Institutional Review Board, was delivered in eight weekly 2-hour sessions to 10 cohorts in three cities. The delivery of the content in all intervention sessions was carefully monitored and recorded in order to verify that the intervention was delivered as planned. A hardcopy manual was given to each participant, and in each session the facilitator presented the manual’s contents using session-specific PowerPoint slides. To monitor facilitators’ adherence to the training protocol, all intervention sessions were audiotaped. Samples from the audiotaped sessions were reviewed by one of the principal investigators, who had developed a fidelity checklist to monitor intervention adherence. Facilitator trainers conducted in vivo observation of at least one session per facilitator. Breitenstein et al.2 cite several advantages of this strategy, which enables assessment of key issues such as the overall class environment, nonverbal communication patterns, and participant engagement and attention. Ongoing support and timely feedback, key components of effective training, were provided to the facilitators to cultivate intervention delivery fidelity.14
The online computer-delivered content was monitored each week by analyzing data provided by Lumos Labs, Inc. documenting participants’ completion of study-specific computer exercises. This method enabled ongoing assessment of (a) participants’ exposure to the study’s computer-based content and (b) their adherence to the weekly training assignment (45 exercises each week, completed as 15 exercises per day on three different days), as well as (c) diagnosis of computer or Internet problems and (d) timely support from the facilitators and research staff.
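The weekly assignment amounts to a simple, checkable rule. Below is a minimal sketch of such an adherence check, assuming daily completion counts have already been extracted per participant; the dictionary layout is an assumption, not the actual Lumos Labs export format.

```python
from datetime import date

def met_weekly_assignment(daily_counts: dict[date, int],
                          per_day: int = 15, days_required: int = 3) -> bool:
    """True if the participant completed at least `per_day` exercises on at
    least `days_required` distinct days -- the protocol's 45-exercise weekly
    assignment (15 exercises/day on three different days)."""
    qualifying_days = sum(1 for n in daily_counts.values() if n >= per_day)
    return qualifying_days >= days_required

# Example with made-up dates: three days of 15+ exercises meets the assignment.
week = {date(2015, 3, 2): 15, date(2015, 3, 4): 20, date(2015, 3, 6): 15}
assert met_weekly_assignment(week)
```

Flagging participants who miss this rule in a given week is what enabled facilitators and research staff to follow up with timely support.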
Contamination between the intervention and control conditions was possible because multiple cohorts were recruited at each intervention site. Although the sites were located in large metropolitan communities with populations ranging from 2.2 to 6.3 million, persons with MS, at an estimated prevalence of 1 in 1,000, constitute only a small fraction of the total population (roughly 2,200 to 6,300 persons per site).15 Therefore, upon enrollment, each participant was asked to refrain from sharing or discussing details of the study activities with others with MS.
Intervention Receipt Strategies
Several strategies have been proposed for monitoring and improving the degree to which participants receive the core intervention components specified in study protocols.4,16 Facilitators recorded participants’ session attendance to measure their exposure to core content. At the beginning of each class, the facilitator led a discussion of participants’ use of compensatory strategies to improve cognitive functioning during the previous week, along with their use of the intervention’s cognitive computer program outside of class. This discussion served both to assess the extent to which participants understood the intervention components and to promote their practice of the behavioral content. To promote self-monitoring of behavior, participants were asked to keep a log of their computer practice, which they turned in after completing the 8-week intervention. Additionally, the manufacturer of the online computer training program, Lumos Labs, Inc., provided weekly reports documenting the number of exercises completed by each participant. The reports were analyzed each week, and the results were communicated to the principal investigators and facilitators. The facilitators incorporated this information into weekly intra-intervention phone calls to participants. The calls were intended to improve participants’ performance by encouraging them to practice core intervention components (i.e., compensatory strategies and computer exercises); they also served to ensure that participants were not encountering problems in accessing or practicing the intervention’s web-based computer exercises. Although access to a computer and the Internet outside of the intervention classes was a study inclusion criterion, participants varied widely in their experience using computers and accessing the Internet, as well as in how readily they communicated problems to the research staff.
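Because the self-report logs record practice minutes while the vendor reports record exercise counts, agreement between these two receipt measures is best checked by correlation rather than raw comparison (the resulting coefficient is reported under Results). A minimal sketch, assuming per-participant totals already paired in order; requires Python 3.10+ for statistics.correlation:

```python
import statistics

def log_report_agreement(logged_minutes: list[float],
                         vendor_exercise_counts: list[float]) -> float:
    """Pearson correlation between self-reported practice minutes and
    vendor-documented exercise counts, paired by participant. The two
    measures use different units, so correlation (not raw agreement)
    is the appropriate convergence check."""
    return statistics.correlation(logged_minutes, vendor_exercise_counts)

# Illustrative call with made-up values:
print(log_report_agreement([120.0, 300.0, 90.0, 240.0],
                           [140.0, 360.0, 88.0, 250.0]))
```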
Enactment Strategies
Enactment of the behaviors taught in the intervention is arguably the most challenging and critical strategic domain in the BCC framework. Enactment has been described as the application of “what has been learned in the intervention to real life situations”17(p612); in rehabilitation programs such as the MAPSS-MS intervention, it can be viewed “as restoration of function that requires the client to practice exercises and skills in their daily life.”17(p612) Enactment of the core components of the intervention was assessed in a number of innovative ways throughout the MAPSS-MS study. In addition to the weekly in-class discussions of participants’ use of compensatory strategies, their practice on the cognitive computer program, and the weekly phone calls from facilitators, goal attainment scaling (GAS) was used to measure participants’ progress on self-identified behavioral goals to improve cognitive functioning at 3 and 6 months after completion of the intervention. GAS is a meaningful method for monitoring goal-directed behavior change.18,19
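As background (the exact scoring used in MAPSS-MS is not reproduced here), the conventional Kiresuk–Sherman GAS summary statistic converts a participant’s attainment ratings $x_i$ (typically scored from $-2$, much less than expected, to $+2$, much more than expected) and goal weights $w_i$ into a T-score:

$$T = 50 + \frac{10\sum_i w_i x_i}{\sqrt{(1-\rho)\sum_i w_i^2 + \rho\left(\sum_i w_i\right)^2}}$$

where $\rho$ (conventionally set to 0.3) is the assumed correlation among goal scores. A score of $T = 50$ indicates that goals were, on average, met exactly as expected; values above 50 indicate better-than-expected attainment.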
Results
Review of the audiotapes suggested that the facilitators maintained relatively high fidelity to the intervention protocol as detailed in the MAPSS-MS manual. The intervention dose was defined as the number of online computer exercises completed (of 360 exercises organized into four cognitive domains) and the number of classes attended. The number of exercises completed ranged from 0 to 438, with an average of 288.9 (SD = 106.8). Over half of the participants (67.7%) completed at least 80% (288) of the exercises. Participants’ self-report logs of minutes spent practicing exercises were significantly correlated (r = .60, p < .05) with the data provided by Lumos Labs, Inc. The average class attendance was 6.4 sessions (SD = 2.3; range, 0–8), and a majority of participants (88.2%) attended 6 or more of the 8 sessions. Lastly, participants defined one or two goals related to improving cognitive function at the final class or, if absent, by phone. Attainment ratings assessed thus far indicate that participants with both 3- and 6-month data have made progress toward attaining their goals.
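These dose descriptives are straightforward to reproduce from per-participant completion counts. The following is a minimal sketch, assuming a simple list of counts; the study’s actual data are not reproduced here.

```python
import statistics

def dose_summary(completions: list[int], total_exercises: int = 360,
                 threshold: float = 0.80) -> dict:
    """Descriptive dose statistics of the kind reported above: mean, SD,
    range, and the percentage of participants completing at least 80%
    (0.80 * 360 = 288) of the available exercises. Counts above 360 can
    occur when exercises are repeated (see Discussion)."""
    cutoff = threshold * total_exercises
    return {
        "mean": statistics.mean(completions),
        "sd": statistics.stdev(completions),
        "range": (min(completions), max(completions)),
        "pct_at_or_above_cutoff":
            100 * sum(c >= cutoff for c in completions) / len(completions),
    }
```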
Discussion
Assessment of treatment fidelity strategies provides evidence that an intervention has been delivered as planned and that its outcomes are more likely the result of the intervention itself rather than of extraneous, unmeasured variables. Appraisal of the fidelity strategies incorporated into the MAPSS-MS study revealed a number of “lessons learned” and identified steps taken to strengthen the fidelity of the intervention as it was being delivered. Key fidelity strategies, such as monitoring the intervention dose by auditing the number of computer exercises completed, revealed that several participants were simply repeating exercises offered by the computer program. Communication between the facilitator and the participants revealed that those participants were clicking a small “repeat” icon rather than the large “next” icon as intended. This fidelity strategy allowed staff to make timely adjustments to the protocol and instruct participants to refrain from repeating exercises, because repetition stalled their progress through the program. The review of the intervention tapes also revealed that although all facilitators covered the same topics, diversity in group discussions meant that some groups discussed certain strategies more than others. Conversely, fidelity strategies to directly assess participants’ receipt of the intervention, such as pre-/post-intervention knowledge testing, were not included in our design. Conceivably, some behaviors, such as compensatory strategy use, are so individualized to each participant that they are unsuited for group measurement. Nevertheless, by incorporating GAS into the design, we were able to assess enactment of intervention behaviors months after the intervention was completed. With respect to participant feedback about the intervention, we monitored the audio recordings and facilitator feedback, asked an open-ended question about strategy use in the immediate post-intervention survey, and conducted focus groups with participants in the pilot of the intervention. In summary, as the demand for evidence-based practice continues to expand, this study illustrates how to report intervention fidelity in the literature, and it may serve to promote fidelity monitoring and reporting in future studies.
Acknowledgments
This research was supported by grants from the National Institute of Nursing Research of the National Institutes of Health (1R01NR0114362 and F31 NR014601). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors also wish to thank John Bellquist, Ph.D., for his editorial assistance and Andrea Hanson, Anne Meng, Kim Forcier, and Ileana Velazquez for their group facilitation skills. Editorial support was provided by the Cain Center for Nursing Research and the Center for Transdisciplinary Collaborative Research in Self-management Science (P30 NR015335) at The University of Texas at Austin School of Nursing.
Footnotes
The authors report no conflicts of interest.
Contributor Information
Janet D. Morrison, School of Nursing, The University of Texas at Austin.
Heather Becker, School of Nursing, The University of Texas at Austin.
Alexa K. Stuifbergen, School of Nursing, The University of Texas at Austin.
References
1. Horner S, Rew L, Torres R. Enhancing intervention fidelity: a means of strengthening study impact. J Spec Pediatr Nurs. 2006;11(2):80–89. doi:10.1111/j.1744-6155.2006.00050.x
2. Breitenstein SM, Fogg L, Garvey C, Hill C, Resnick B, Gross D. Measuring implementation fidelity in a community-based parenting intervention. Nurs Res. 2010;59(3):158–165. doi:10.1097/NNR.0b013e3181dbb2e2
3. Gearing RE, El-Bassel N, Ghesquiere A, Baldwin S, Gillies J, Ngeow E. Major ingredients of fidelity: a review and scientific guide to improving quality of intervention research implementation. Clin Psychol Rev. 2011;31(1):79–88. doi:10.1016/j.cpr.2010.09.007
4. Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443–451. doi:10.1037/0278-6133.23.5.443
5. Stone JA. Why does treatment fidelity matter? Altern Ther Health Med. 2015;21(4):24–25.
6. Borrelli B, Sepinwall D, Ernst D, et al. A new tool to assess treatment fidelity and evaluation of treatment fidelity across 10 years of health behavior research. J Consult Clin Psychol. 2005;73(5):852–860. doi:10.1037/0022-006X.73.5.852
7. McArthur BA, Riosa PB, Preyde M. Treatment fidelity in psychosocial intervention for children and adolescents with comorbid problems. Child Adolesc Ment Health. 2012;17(3):139–145. doi:10.1111/j.1475-3588.2011.00635.x
8. Corley NA, Kim I. An assessment of intervention fidelity in published social work intervention research studies. Res Soc Work Pract. 2016;26(1):53–60. doi:10.1177/1049731515579419
9. Moncher FJ, Prinz RJ. Treatment fidelity in outcome studies. Clin Psychol Rev. 1991;11(3):247–266. doi:10.1016/0272-7358(91)90103-2
10. Stuifbergen AK, Becker H, Perez F, Morrison J, Kullberg V, Todd A. A randomized controlled trial of a cognitive rehabilitation intervention for persons with multiple sclerosis. Clin Rehabil. 2012;26(10):882–893. doi:10.1177/0269215511434997
11. Becker MH, editor. The health belief model and personal health behavior. Health Educ Monogr. 1974;2(4, theme issue):324–508.
12. Pender NJ. Health Promotion in Nursing Practice. 2nd ed. Norwalk, CT: Appleton & Lange; 1987.
13. Bandura A. Human agency in social cognitive theory. Am Psychol. 1989;44(9):1175–1184. doi:10.1037/0003-066X.44.9.1175
14. Goense PB, Boendermaker L, van Yperen T. Support systems for treatment integrity. Res Soc Work Pract. 2016;26(1):69–73. doi:10.1177/1049731515579205
15. Mayr WT, Pittock SJ, McClelland RL, Jorgensen NW, Noseworthy JH, Rodriguez M. Incidence and prevalence of multiple sclerosis in Olmsted County, Minnesota, 1985–2000. Neurology. 2003;61(10):1373–1377. doi:10.1212/01.WNL.0000094316.90240.EB
16. Borrelli B. The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. J Public Health Dent. 2011;71(suppl 1):S52–S63. doi:10.1111/j.1752-7325.2011.00233.x
17. Poltawski L, Norris M, Dean S. Intervention fidelity: developing an experience-based model for rehabilitation research. J Rehabil Med. 2014;46(7):609–615. doi:10.2340/16501977-1848
18. Strecher VJ, Seijts GH, Kok GJ, et al. Goal setting as a strategy for health behavior change. Health Educ Q. 1995;22(2):190–200. doi:10.1177/109019819502200207
19. Becker H, Stuifbergen A, Rogers S, Timmerman G. Goal attainment scaling to measure individual change in intervention studies. Nurs Res. 2000;49(3):176–180. doi:10.1097/00006199-200005000-00011