This article reports the process of adapting a research intervention to improve medication adherence among cardiovascular disease patients and translating it into community-based clinical programs.
Keywords: Medication adherence, Implementation, Chronic disease, Diabetes, Hypertension, Diffusion of innovation
Abstract
Relatively few successful medication adherence interventions are translated into real-world clinical settings. The Cholesterol, Hypertension, And Glucose Education (CHANGE) intervention was originally conceived as a randomized controlled trial to improve cardiovascular disease-related medication adherence and health outcomes among African American patients with diabetes. The purpose of this study was to describe the translation of the CHANGE trial into two community-based clinical programs. CHANGE 2 was available to Medicaid patients with diabetes and hypertension whose primary care homes were part of a care management network in the Northern Piedmont region of North Carolina. CHANGE 3 was available to low-income patients with multiple chronic conditions at low or moderate risk for developing cardiovascular disease who received care in three geographical areas. Adaptations were made to ensure fit with available organizational resources and the patient population’s health needs. Data available for evaluation are presented. For CHANGE 2, we evaluated improvement in A1c control using a paired t test. For both studies, we describe feasibility as measured by the percentage of patients who completed the curriculum. CHANGE 2 involved 125 participants; CHANGE 3 had 127 participants. In CHANGE 2, 69 participants had A1c measurements at baseline and 12-month follow-up; A1c improved from 8.4 to 7.8 (p = .008). In CHANGE 3, interventionists completed 47% (n = 45) of calls to enroll participants at the 4-month encounter, and among those eligible for a 12-month call (n = 52), 21% of 12-month calls were completed with participants. In CHANGE 2, 40% of participants (n = 50) completed all 12 encounters. Thoughtful adaptation is critical to translate clinical trials into community-based clinic settings. Successful implementation of adapted evidence-based interventions may be feasible and can positively affect patients’ disease control.
Implications
Practice: Our experience with the CHANGE projects demonstrates that thoughtful adaptation is critical to translate clinical trials into community-based clinic settings. Successful implementation of adapted evidence-based interventions may be feasible and can positively affect patients’ disease control.
Policy: Given the significance of medication adherence in the CMS five-star rating system, there is a need to translate successful research interventions into real-world settings.
Research: There is a need to design and report adherence improvement interventions in a way that allows them to be practically translated into community-based settings.
BACKGROUND
Developing and testing behavioral interventions is time- and resource-intensive. A minority of effective interventions, approximately 14%, are implemented in “real-world” settings, taking an average of 17 years to be translated into practice [1, 2]. Even when interventions are translated into practice, they may not be implemented in the way that they were designed [3]. This challenge of translating and implementing interventions creates tension between research and practice. Clinicians need timely, evidence-based solutions to improve health care delivery and patients’ health outcomes. When evidence-based behavioral interventions that have been proven effective in trials are not implemented in community-based practices, the potential to improve patients’ health and well-being is lost [4]. Emerging health information technology and evolving health policies necessitate more efficient implementation of proven interventions. If we continue to rely on the traditional research pipeline, effective interventions may be obsolete before they reach the “real world” [4].
Evidence-based practices are often adapted when they are translated into the real world [4, 5]. These adaptations may be purposeful and planned or unintentional deviations from the intervention as it was designed [6]. Adaptations vary in their extensiveness and therefore have varying degrees of impact on fidelity (i.e., the degree to which core components are implemented as originally designed) and health outcomes [3, 7]. Arguments have been made both for strict fidelity to the original intervention and for adaptation [3, 8, 9]. It is known that adaptation of intervention content and delivery may improve an intervention’s impact [10] and that there is a continuum of adaptations with different impacts on fidelity [8], but relatively little is known about the impact of the full range of adaptations on the effectiveness of interventions implemented in community settings. One reason is that few interventions report and classify their adaptations in a way that can be meaningfully applied to other interventions.
Stirman et al. developed a system for classifying modifications to interventions across a variety of settings [6]. We selected this framework because it is applicable to interventions requiring coordination of multiple staff and innovative procedures to encourage or support individual behavioral change [6, 11]. We applied this classification system to adaptations made to the Cholesterol, Hypertension, And Glucose Education (CHANGE) study. CHANGE was originally conceived as a tightly controlled randomized controlled trial to improve cardiovascular disease (CVD)-related medication adherence and health outcomes for African American patients [12]. Statistically significant improvements in biological outcomes were not achieved within the time constraints of the clinical trial, partly because of substantial missing data in the electronic health records; however, CHANGE was associated with statistically significant improvements in medication adherence, an outcome that was not hampered by missing data as the biological outcomes were. This finding was notable because medication management is strongly associated with clinically significant improvements in health in these chronic illness patient populations. The improvement in medication adherence was not only statistically significant but also attracted attention from local clinical stakeholders who were eager for immediate tools to support their patients’ chronic disease self-management.
The original CHANGE trial was funded as a clinical trial and took place in an academic medical center. CHANGE was then implemented in two different environments—community-based clinics serving a predominately Medicaid population (CHANGE 2) and a Centers for Medicare & Medicaid Services (CMS) demonstration project (CHANGE 3). The objective of this article is to describe adaptations of the CHANGE intervention in various environments using an established classification system.
Original CHANGE study (Phase I): traditional academic medical center
CHANGE was a tightly controlled randomized clinical trial testing the delivery of an evidence-based program to improve medication adherence and clinical outcomes among African American patients with diabetes receiving primary care at Duke University Medical Center [12]. The methods and curriculum have been previously described [13]. The intervention was delivered by nurses over the telephone. Over 12 months, participants received monthly calls designed to engage them in self-management. Nurse interventionists also communicated with participants’ primary care providers regarding participants’ self-reported readiness to change their behaviors and medication management needs. Outcome measures included both process measures (e.g., self-reported medication adherence) and clinical outcome measures obtained from individuals’ medical records (e.g., systolic blood pressure, HbA1c, low-density lipoprotein cholesterol).
At the conclusion of the study, participants in the intervention arm were more likely to report improved medication adherence (odds ratio 4.4, 95% confidence interval: 1.8–10.6, p < .001), but this did not translate into improvements in clinical outcome measures, possibly because clinical measures were often missing from the electronic medical record [12]. Our previous work suggests that interventions may have a differential impact for specific patient subpopulations [14]. Despite its mixed success, the CHANGE intervention curriculum filled an important clinical need by providing self-management support and was adopted by two nonacademic groups.
Because of differing resource availability and populations’ health needs, CHANGE was adapted before it was implemented by nonacademic groups. Adaptation required consideration of core components (Fig. 1), the essential program elements that are believed to make the program effective and should be kept intact to maintain the intervention’s effectiveness [15]. Core components included content helping patients evaluate the risk to benefit ratio of medication adherence, and skill-building for confidence and self-efficacy with making health behavior changes.
Fig 1. | Core components of CHANGE.
METHODS
CHANGE Phase II: adaptation for community-based clinics serving Medicaid population
Population
The Northern Piedmont Community Care (NPCC) primary care case management program consists of 53 community-based clinics serving a predominately low-income, Medicaid population in six counties (Durham, Granville, Vance, Warren, Person, and Franklin) of North Carolina. These community-based clinics sought an easily accessible, evidence-based tool that required fewer resources than in-person visits to support patients struggling with hypertension and diabetes control.
Patients were referred for CHANGE 2 through several paths. The nurse care manager referred potential patients at the point of care. She also reviewed existing administrative and clinical records to identify potential patients and then telephoned them to ask about their interest in participation. In-service training was conducted at the NPCC community-based clinics to encourage providers to refer additional patients.
Intervention
A team of clinicians at these clinics developed a case management program targeting high-risk, complex patients (e.g., poor cardiovascular disease control and/or recent hospitalizations), but the program was not effecting large enough improvements in disease control to affect health outcomes. A team of providers approached the research team (who were also the developers of the original intervention) and asked whether the CHANGE intervention materials could be adapted for use in clinical care. Revisions were made based on suggestions from the Northern Piedmont Community Care case managers.
Adaptations are reported in Table 1 and were made at the population and cohort level to the intervention content, context, and training of personnel. CHANGE was focused on African American patients with diabetes. In CHANGE 2, the population shifted to patients of all races with comorbid diabetes and hypertension. As CHANGE 2 expanded the number of patients served, the number and type of interventionists also shifted from dedicated research nurses in the original trial to nurse case managers balancing other job duties.
Table 1. | Differences between projects

| | Delivered by | Content | Patient identification approach | Patient population | Location of sites | Control group | Funding source |
|---|---|---|---|---|---|---|---|
| CHANGE | Research nurses (n = 2) | Empirically based medication adherence and behavioral lifestyle content | Electronic health record screening | Low-income African Americans with diabetes receiving Duke primary care (n = 359) | 2 community-based primary care clinics in Durham, NC | Usual care | Research grant |
| CHANGE 2 | Research nurses transitioned to nurse case managers (n = 2) | Original content modified to remove research-specific language and reflect clinical guideline changes | Referred by clinic case manager | Low-income complex patients (n = 125)^a,b,c | Northern Piedmont Community Care clinics | None (pre/post) | DCHN |
| CHANGE 3 | CHWs (n = 14) | CHANGE 2 content further modified to reflect delivery by non-clinically trained CHWs: side-effects module removed entirely, medication module restricted, and language added to guide patient understanding | Telephone consent process; the Cabarrus site IRB required a letter be mailed before CHW contact, while the other sites fell under the Duke IRB and no letter was required | Low-income complex patients who scored moderate on the health-risk assessments (n = 127) | Cabarrus Co, NC; Durham Co, NC; Quitman Co, MS; Mingo Co, WV | None (pre/post) | Research grant supplement |

(a) Participants were required to have diagnoses of both hypertension and diabetes to be eligible for participation.
(b) Because the program was delivered to Medicaid patients initially, the requirement that all patients be African American was waived.
(c) The patient population was Medicaid based; however, referrals could include patients with Medicare, private insurance, or no insurance.

CHW = community health worker.
The intervention content was adapted in several ways. First, the content was shortened and research-specific language was removed. This reduced content made the intervention more practical for implementation in a clinical setting and easier to disseminate in clinical practice than the original CHANGE program. Additional information about hypertension was added, and the content was simplified for patients with lower health literacy skills. Cultural adaptations were also made. For example, suggestions regarding healthy dietary behaviors and physical activity were modified based on the cultural preferences of the rural North Carolina populations. Additional examples of modifications include changing images in printed materials to more closely represent characteristics of patients enrolled in CHANGE 2 (e.g., both sexes, all races, overweight) and changing dietary guidance to address healthier options for food preparation and menu selection at popular eateries in rural North Carolina. Because North Carolina tends to be very hot during the summer, materials stressed free ways to engage in physical activity in air-conditioned community spaces (e.g., walking in an air-conditioned shopping mall). Local resources were introduced, particularly resources available through Medicaid coverage and federally qualified health centers in the area, including access to prescription medications and provision of mirrors for diabetic patients to conduct self-examinations of their feet.
CHANGE 2 occurred over a 3-year period, but there were periods of time when delivery of the intervention was interrupted because of staff turnover and other administrative changes (Fig. 2). During the 3-year period, we encountered unique challenges and made adjustments accordingly. First, patients were referred from multiple sources (e.g., different clinic sites and case managers) who had differing familiarity with the program’s goals. To minimize program drift, the research staff provided multiple on-the-job and as-needed training sessions with CHANGE 2 clinic staff. Because of staff turnover and hiring delays, CHANGE 2 was suspended for 7 months. Such interruptions are often a reality when working with “real-world” clinics serving traditionally underserved patient populations. The research team maintained contact with the clinical partners during this interlude. When the program resumed, new interventionists required training. Because CHANGE 2 was an applied clinical program and not a research study, there was no target sample size or definitive end date; thus, the program expanded to fit demand. This expansion required the research team to remain flexible in training and retraining clinic staff, modifying the patient-tracking database to accommodate clinics’ needs on an ongoing basis, and maintaining communication between the research and clinical teams. The nurse case manager regularly visited clinic managers and providers and conducted in-service training for her NPCC colleagues to encourage referrals. This lack of specific target goals put strain on resources because the interventionists had to adjust their effort and rebalance priorities in order to meet the increased patient load and the extended intervention delivery timeframe.
Finally, because the overall goal of CHANGE 2 was to improve patients’ chronic disease control, many patients were offered support services in addition to CHANGE 2; however, the clinical staff did not systematically collect information about co-intervention. The potential for co-intervention made it challenging to know the precise impact of CHANGE 2 on health outcomes.
Fig 2.
Classification of adaptations
CHANGE Phase III: adaptation for CMS demonstration project
Population
The CHANGE program was also adapted and integrated into an ongoing intervention—the Southeastern Diabetes Initiative (SEDI) [16]. SEDI was a CMS demonstration project that aimed to improve diabetes control and wellness in four counties spanning three states (i.e., North Carolina, Mississippi, and West Virginia). SEDI used a risk algorithm to classify patients as being at high, moderate, or low risk of experiencing a severe cardiovascular health outcome within the next year (i.e., 90%–100% high risk, 65%–89% moderate risk). The data feeding the algorithm came from two sources: the electronic health record and a manual data collection sheet completed by a patient’s health care provider. Patients could be identified through the algorithm and/or identified by their provider and then scored. Patients who scored high risk were enrolled in the primary SEDI project. Those at low or moderate risk were offered the SEDI-adapted CHANGE program, which was named CHANGE 3 (Table 1 and Fig. 2).
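As a concrete illustration, the tiering rule described above can be sketched as a small function. The thresholds come from the text; the function name and the handling of scores below 65% are assumptions, since the article does not state the low-risk cutoff explicitly.

```python
def risk_tier(score_pct: float) -> str:
    """Map a predicted likelihood (as a percentage) of a severe
    cardiovascular outcome in the next year to a SEDI risk tier.

    Thresholds follow the article: 90-100% = high, 65-89% = moderate.
    Treating anything below 65% as low risk is an assumption.
    """
    if score_pct >= 90:
        return "high"      # enrolled in the primary SEDI project
    if score_pct >= 65:
        return "moderate"  # offered the adapted CHANGE 3 program
    return "low"           # also offered CHANGE 3
```

In practice the score came from the EHR plus a provider-completed data sheet; this sketch only shows the final classification step.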
Intervention
While the first two iterations of CHANGE used nurse interventionists, CHANGE 3 relied on community health workers without formal clinical training. As a result, the program content had to be adapted to stay within the community health workers’ scope of professional practice. For example, intervention content about side-effect reporting and medication management was removed because community health workers were not equipped to address these issues.
In addition, CHANGE 3 was a larger-scale program with 14 interventionists and participants in large and diverse geographic areas. Therefore, adaptations were made to the training program. Instead of receiving individualized on-the-job training, interventionists were trained through a series of five video conferencing calls lasting 60–90 min each. For reference after training, the research team provided a standard operating procedures manual with step-by-step directions on delivery of intervention content and how to use the intervention software.
Despite training and documentation efforts, because CHANGE 3 was integrated into a larger demonstration project, the original research team was not well integrated into operational decisions and was unable to evaluate fidelity. Local adaptation was necessary, and there was also program drift. For example, CHANGE 3 involved four study sites, and these sites had different operating procedures for approaching potentially eligible patients. One site was required to mail letters before telephoning patients about the project. The other three sites could telephone patients directly (i.e., without mailing letters) after receiving agreement from their primary care provider. In addition, one CHANGE 3 site had difficulty contacting patients over the telephone. Their solution was to deliver the intervention content in person when the patients presented in the clinic. Finally, based on clinical need, patient enrollment continued up to 2 months prior to the end of the primary SEDI project. Thus, participants enrolled later had exposure to the intervention for only 2–9 months instead of the planned 12-month period.
RESULTS
The methods and outcomes of the original CHANGE research study have been previously described [12, 13].
CHANGE 2: adaptation for community-based clinics serving Medicaid population
For CHANGE 2, anecdotal reports suggest that loss to follow-up was a problem, but drop-out was not routinely measured or quantified. However, there was a striking difference between patients who completed the CHANGE 2 curriculum and those who did not. At baseline, most participants were African American (80.2%, n = 97) and women (75%, n = 94), with an average age of 57. The baseline median A1c was 8.03 among patients who completed the curriculum, compared with 11.1 for those who did not (n = 78 with baseline A1c; Fig. 3). Patients with a presumed greater need for the intervention were less likely to complete the program. However, among patients for whom both baseline and follow-up A1c laboratory values were available (n = 69), the median A1c improved from 8.4 at baseline to 7.8 (p = .008). In CHANGE 2, 40% of participants (n = 50) completed all 12 encounters; however, because it was not a controlled study, some participants took longer than 12 months to complete all encounter calls.
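The pre/post comparison above is a paired t test on patients with both baseline and follow-up A1c values. A minimal sketch of that computation is below, using illustrative values rather than the study’s data; in practice a library routine such as scipy.stats.ttest_rel would be used.

```python
import math

def paired_t(baseline, follow_up):
    """Paired t statistic and degrees of freedom for before/after
    measurements on the same patients (e.g., baseline vs. 12-month A1c).
    """
    assert len(baseline) == len(follow_up) and len(baseline) > 1
    diffs = [b - f for b, f in zip(baseline, follow_up)]  # positive = A1c fell
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    t = mean / math.sqrt(var / n)                         # t = mean diff / SE
    return t, n - 1

# Illustrative values only -- not the CHANGE 2 data
t, df = paired_t([8.0, 9.0, 10.0, 11.0], [7.0, 7.0, 7.0, 7.0])
```

The p value then comes from the t distribution with n − 1 degrees of freedom.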
Fig 3. | Study flow diagram.
Sustainability of interventions is important. While a formal sustainability assessment was not conducted, clinic staff and leadership reported that the CHANGE 2 curriculum supported their clinic goals. Despite local stakeholder engagement and support, the CHANGE 2 program was discontinued as a result of changing North Carolina state legislative priorities. This underscores the importance of gaining stakeholder and policy support at multiple levels.
CHANGE 3: adaptation for CMS demonstration project
Ownership of the CHANGE 3 demonstration project was held by the clinic. The research team could not access patient-level data; thus, health outcome data were unavailable. We report participants’ demographic characteristics and measures of feasibility (e.g., percentage of participants completing each monthly telephone encounter). Most participants were African American (59%, n = 55), women (60%, n = 46), and between 26 and 64 years of age (69%, n = 64). The majority of participants originated from North Carolina (Durham County 46%, n = 44; Cabarrus County 41%, n = 39). Rural counties had difficulty enrolling participants. Clinic staff cited transportation problems and poor telephone infrastructure (e.g., poor cellular service) as barriers.
The majority of participants (82%, n = 77) completed at least two telephone-based encounters. There was significant drop-out, which occurred for many reasons. The patient population targeted in CHANGE 3 (low-income patients at low or moderate risk of CVD) may respond better to more intensive intervention programs. Thirty-four participants were presented with the opportunity to receive home-based care and withdrew from CHANGE 3 in favor of that program. The clinic staff involved in CHANGE 3 often had difficulty reaching patients because they came to the clinic infrequently and were challenging to reach by phone (e.g., disconnected phone numbers, poor cellular service, no answers), which resulted in completion of fewer telephone encounters and drop-out. Of the remaining participants (n = 88), 51% (n = 45) completed 4 months of the CHANGE 3 telephone encounters. Among those eligible (n = 52), only 21% (n = 11) completed all 12 possible telephone encounters. Sites continued enrolling participants knowing that the intervention would end before these individuals would have an opportunity to complete all 12 telephone encounters.
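The feasibility percentages in this paragraph are simple completion rates. A one-line helper (hypothetical, for illustration only) reproduces them from the reported counts:

```python
def completion_rate(completed: int, eligible: int) -> int:
    """Percentage of eligible participants completing a milestone, rounded
    to the nearest whole percent, as reported in the article."""
    return round(100 * completed / eligible)

# Counts reported above for CHANGE 3
four_month = completion_rate(45, 88)    # 4-month telephone encounters
twelve_month = completion_rate(11, 52)  # all 12 telephone encounters
```

Note that the denominators differ (88 remaining participants vs. 52 eligible for a 12-month call), which is why the two percentages are not directly comparable.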
DISCUSSION
The ultimate objective of health service interventions and clinical research is to improve patients’ health and the delivery of health care. While fidelity is important to ensure that an intervention remains effective and works in the way that it was designed, adaptation may also be important to ensure fit with organizational resources and patients’ clinical needs [8, 17, 18]. There is a tension between fidelity and fit, and a careful balance must be struck.
In addition, it is important to identify the right population for the intervention. In previous studies, we have found that an intervention may be more effective in a specific patient subpopulation [14]. The CHANGE adaptations aimed to deliver the intervention to individuals with moderate and high CVD risk, targeting the patient subpopulations with the most potential to benefit. When adapting the CHANGE curriculum, the core components identified a priori by the research team remained intact. These included (a) tailoring of the behavioral content based on patients’ self-reported level of motivation and readiness to change and (b) the focus on the initiation and persistence phases of medication adherence. Adaptations were made not only to maximize the potential program impact but also because of sites’ resource availability. Because the research team was only peripherally involved in CHANGE 3, there was concern that sites’ need for adaptation might trump fidelity to the core components, allowing an inappropriate degree of program drift. That was not the case: in both adaptations, the core content remained intact. In CHANGE 2, participants received the majority of the intervention content (i.e., near-full dosing of the intervention). Full dosing was not achieved in CHANGE 3, however, since people were still being enrolled in the program 2 months before it was discontinued.
Measures of feasibility were favorable across all three iterations of the CHANGE intervention programs. When data were available, the evidence tended to suggest that participants’ chronic disease control improved. However, we did not have information about self-reported medication adherence which is a process measure that often leads to improvements in clinical outcomes. Improvements in self-reported medication adherence, but not in clinical outcomes, were demonstrated in the initial CHANGE clinical trial [12]. In the absence of self-reported medication adherence information, we sought pharmacy claims data to calculate medication adherence (e.g., medication possession ratio) from the CHANGE 2 and CHANGE 3 study sites but were unable to access these data for research purposes. When beginning collaborations like CHANGE 2 and CHANGE 3, proactively establishing agreements for data sharing is best practice.
Our understanding of the true health implications of the intervention is limited by the unavailability of health outcome data (CHANGE 3) and by the lack of a control group (CHANGE 2 and CHANGE 3). The absence of a control group makes evaluating regression to the mean challenging. However, threats to internal validity such as regression to the mean are typical of many “real-world” clinic settings. When there is a clinical demand, it may be most ethical and acceptable to clinic administrators to treat all-comers.
Implementing interventions in real-world clinics is challenging. We learned several lessons that could be informative to other researchers. Specifically, we found differences in language and expectations between researchers and nonresearch clinical staff. Because clinical needs trump research agendas and timelines, during CHANGE 2 and 3, we relied on electronic health record reports of clinically collected laboratory values. We would suggest that future studies collect their own research-ordered laboratory values and outcomes data outside of the clinic environment. Collection of these data would ensure data completeness and availability. All three iterations of the CHANGE study had a noteworthy amount of missing data because of our reliance on clinical outcome data occurring naturally in the EHR. Data could be missing for several reasons, such as when patients sought care at time intervals longer than our observation window or when patients sought care at a different health care clinic that was not encompassed in the EHR. We found that limited data access was a challenge. Future studies could develop contractual agreements and/or better embed themselves into the clinic environment to ensure data access. In addition, we found that participating sites had limited research infrastructure in the clinic. It is critical to proactively determine who will conduct the study evaluation, define expectations before beginning the project, and ensure that the party conducting the evaluation has adequate data access. When failures occur, it may be difficult to disentangle whether the intervention was ineffective in the new setting (intervention failure) or whether the intervention was effective but incorrectly implemented (implementation failure) [19]. While the CHANGE intervention had successes, including the improvement in A1c for CHANGE 2, there were also mixed results.
A minority of participants in CHANGE 3 completed the intervention, and the health outcomes are unknown. Challenges may occur when implementing interventions in clinical settings. Unlike in a structured clinical trial, changes in practice are more difficult to control. For example, the CHANGE research team had no control over competing demands on clinical staff members’ time or the evolving priorities of key stakeholders, and lacked access to identifiable patient-level data and the ability to measure fidelity on a regular basis. However, the potential rewards of implementation work outweigh these challenges. Implementing evidence-based interventions has the potential to produce real-world impact and improve population health outcomes. We assert that researchers should work with clinicians and with administrative and policy leaders outside of traditional research settings. Our goal should be to extend the translation of evidence-based, clinically effective interventions beyond the traditional academic setting to where patients receive the majority of their care—the community-based practice.
Acknowledgments
The findings reported here have not been previously published, and the manuscript is not being simultaneously submitted elsewhere. The authors have full control of all primary data and, within the limitations of their IRB-approved protocols, agree to allow the journal to review their data if requested. This article does not contain any studies with animals performed by any of the authors.
Funding
Research reported in this publication was supported by the National Institute of Diabetes, Digestive and Kidney Diseases of the National Institutes of Health under Award Number P30DK096493. Dr. Zullig is supported by a VA Health Services Research and Development (HSR&D) Career Development Award (CDA 13–025). Dr. Bosworth is supported by a VA Research Career Scientist Award (RCS 08-027). Funders were not involved in the design of the study and collection, analysis, interpretation of data, or writing of the manuscript.
Compliance with Ethical Standards Statements
Conflict of Interest: The authors have no actual or potential conflicts of interest to report.
Authors’ Contributions: All authors were involved in the preparation of this manuscript and read and approved the final version.
Ethical Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed Consent: CHANGE and CHANGE 3 were approved by the Duke University Medical Center Institutional Review Board and informed consent was obtained from all individual participants included in those studies. CHANGE 2 was a quality improvement project and participant consent was not required.



