Preventive Medicine Reports. 2017 Apr 4;6:322–328. doi: 10.1016/j.pmedr.2017.04.002

Development and application of the RE-AIM QuEST mixed methods framework for program evaluation

Jane Forman a, Michele Heisler a,b, Laura J Damschroder a, Elizabeth Kaselitz a,c,, Eve A Kerr a,b
PMCID: PMC5402634  PMID: 28451518

Abstract

To increase the likelihood of successful implementation of interventions and promote dissemination across real-world settings, it is essential to evaluate outcomes related to dimensions other than Effectiveness alone. Glasgow and colleagues' RE-AIM framework specifies four additional types of outcomes that are important to decision-makers: Reach, Adoption, Implementation (including cost), and Maintenance. To further strengthen RE-AIM, we propose integrating qualitative assessments in an expanded framework: RE-AIM Qualitative Evaluation for Systematic Translation (RE-AIM QuEST), a mixed methods framework. RE-AIM QuEST guides formative evaluation to identify real-time implementation barriers and explain how implementation context may influence translation to additional settings.

RE-AIM QuEST was used to evaluate a pharmacist-led hypertension management intervention at 3 VA facilities in 2008–2009. We systematically reviewed each of the five RE-AIM dimensions, created open-ended companion questions to accompany quantitative measures, and identified qualitative and quantitative data sources, measures, and analyses.

To illustrate use of the RE-AIM QuEST framework, we provide examples of real-time, coordinated use of quantitative process measures and qualitative methods to identify site-specific issues, and retrospective use of these data sources and analyses to understand variation across sites and explain outcomes. For example, in the Reach dimension, we conducted real-time measurement of enrollment across sites and used qualitative data to better understand and address barriers at a low-enrollment site.

The RE-AIM QuEST framework may be a useful tool for improving interventions in real-time, for understanding retrospectively why an intervention did or did not work, and for enhancing its sustainability and translation to other settings.

Keywords: RE-AIM, Mixed methods, Program evaluation, Clinical pharmacist intervention

Highlights

  • The RE-AIM QuEST framework provides qualitative questions in each RE-AIM dimension.

  • RE-AIM QuEST allows investigators to improve implementation during the intervention.

  • It expands retrospective evaluation to fully review why an intervention worked or failed.

  • It expands Maintenance by explicating whether and how an intervention was maintained.

1. Background

There are multiple challenges to implementing practices that have demonstrated efficacy for improving healthcare quality and value (Balas and Boren, 2000, McGlynn et al., 2003). Randomized controlled trials (RCTs) have been the traditional mode of evaluating interventions. Yet it is often difficult to successfully replicate an intervention in different settings without understanding the processes by which, and the context in which, the intervention worked or did not work (Tunis et al., 2003, Ware and Hamel, 2011). The RE-AIM framework was developed to assess key dimensions of an intervention essential for consistent use in diverse real-world clinical settings: the intervention's Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM). RE-AIM is used to systematically assess the robustness of interventions across settings and patient subgroups and the potential for scaling up and spreading to additional settings (see Fig. 1). The original framework includes quantitative measures, such as baseline characteristics of non-participants versus participants, that can be used for retrospective evaluation of interventions (Estabrook et al., 2012, Glasgow et al., 2006) and prospective intervention design and planning (Klesges et al., 2005; Estabrooks & Allen, 2013).

Fig. 1.

Fig. 1

RE-AIM: five dimensions.

RE-AIM developers acknowledge the value of qualitative data as a complement to quantitative measures (Kessler et al., 2013), and a number of studies applying RE-AIM have included qualitative data. A systematic review of studies using the RE-AIM framework found that of the 82 included studies, 69% used quantitative methods, 30% used mixed methods, and 1 study used qualitative methods (Harden et al., 2015). In Kessler et al.'s (2013) review of what constitutes “fully developed use” of the RE-AIM framework, use of qualitative methods was added as a criterion for each dimension (Kessler et al., 2013). Estabrooks and Allen (2013) suggested, however, that while it is useful to include qualitative data across RE-AIM dimensions, it is necessary to delineate the specific types of qualitative data relative to each RE-AIM dimension and intervention type (Estabrooks & Allen, 2013).

Toward that end, we developed the RE-AIM Qualitative Evaluation for Systematic Translation, or RE-AIM QuEST mixed methods framework. The framework proposes open-ended questions in each RE-AIM dimension that are applicable across interventions, study types, and settings. RE-AIM QuEST uses qualitative data, collected before, during, and after implementation (Stetler et al., 2006) (Fig. 2), for retrospective evaluation to explain outcomes, including variation across sites, and how the context of implementation may influence generalizability and translation to other settings (Damschroder & Hagedorn, 2011). Further, RE-AIM QuEST can be used formatively in real time to help guide implementation. It incorporates quantitative and qualitative data and analyses during an intervention to identify and address barriers in real time, allowing adjustment and improvement of implementation and tailoring to maximize results. Our work builds on that of Bakken et al., who described application of an expanded RE-AIM framework specific to assessing clinical informatics interventions that included qualitative questions in each RE-AIM dimension (Bakken & Ruland, 2009), by: 1) proposing general qualitative questions applicable across intervention types in each dimension, then illustrating their adaptation in a particular study; and 2) adding illustrative qualitative and mixed methods data sources and analyses to the framework.

Fig. 2.

Fig. 2

How are qualitative methods used in implementation evaluation?

We applied the framework in our formative evaluation of the Adherence and Intensification of Medications (AIM) program for both real-time and retrospective evaluation. AIM was a pragmatic cluster RCT designed, among patients with high blood pressure (BP) and diabetes, to improve patient adherence to BP medications and, for adherent patients, to promote appropriate increases in the number and/or doses of medications (i.e., intensification).

2. Methods

2.1. Intervention design and setting

Design, methods, and results of the AIM trial are detailed elsewhere (Heisler et al., 2010, Heisler et al., 2012). Table 1 summarizes key intervention components. Briefly, AIM was a multi-site cluster randomized controlled effectiveness study of an intervention whose components had been shown in efficacy trials to improve adherence and BP control. Sixteen primary care teams at 5 medical centers were randomized to either the AIM intervention or usual care. Clinical pharmacists in the teams randomized to AIM were trained in motivational interviewing (MI) and proactively contacted patients with diabetes and persistent poor BP control identified through pharmacy data as having either adherence problems or lacking recent medication management. Using MI techniques, pharmacists worked with patients to adjust medications and/or increase adherence to treatment. Pharmacists were provided with an outline (or ‘roadmap’) to help structure the flow of encounters and as a tool to reinforce MI techniques. The pharmacists used a computer-based “Medication Management Tool” (MMT) to help structure and support pharmacists' work processes. The tool enabled pharmacists to track patient contacts and recruitment, provided a “roadmap” for conducting phone or in-person clinical encounters as well as a visual display of gaps in fills of medications, and was used to document encounter notes and capture quantitative measures. The research team had regular contact with the AIM pharmacists, including bi-monthly webinars to discuss clinical, implementation, and MI issues.

Table 1.

Key components of the Adherence and Intensification of Medications intervention.

Key component Description
Proactive patient identification and outreach
  • Patients systematically identified through electronic clinical databases

  • Pharmacists call patients to recruit them into the program

Motivational Interviewing (MI)-based adherence counseling
  • Use of motivational interviewing (MI) during patient encounters to improve medication adherence and clinical outcomes

  • Roadmap guides MI during patient encounters

Pharmacists authorized to change medications
  • Titrate medication according to pre-specified algorithms

Home blood pressure monitoring
  • Provide patients with reliable means to regularly monitor blood pressure at home

  • Patients asked to provide 3 BP readings/day for 2 days before each encounter.

Frequent patient follow-up
  • Follow up patients as needed until BP is at target or continued treatment is contraindicated

Medication Management Tool (MMT)
  • Track, schedule, and document patient contacts and encounters

Conducted at 3 VA facilities in 2008–2009.

The intervention took place in 2008–2009 within primary care clinics at three Veterans Affairs (VA) facilities (2 academically affiliated centers [sites 1 and 3] and one community-based outpatient clinic [site 2]) and 2 Kaiser Permanente (KP) clinics. The development, testing, and reported findings of RE-AIM QuEST were based on experience in the three VA facilities and did not include the KP clinics. One pharmacist covered VA sites 1 and 2 (about 50 miles apart), and another covered VA site 3.

2.2. Development of the RE-AIM QuEST framework

We created general, open-ended companion questions to quantitative measures in each RE-AIM dimension to guide real-time and retrospective evaluations. Questions sought to explain quantitative results in each dimension. To tailor the framework to the AIM study, we created study-specific questions where needed, identified information sources to address each question, and described analysis plans for each RE-AIM dimension (see Table 2). Tailoring occurred, for example, by including study-specific questions in the Implementation dimension to understand reasons for different levels of fidelity achieved for AIM's key components both within and across sites.

Table 2.

RE-AIM QuEST quantitative and qualitative components: general and applied.

Dimension Quantitative measures Qualitative inquiry
Reach
 Questions * How many and what proportion of the target population is participating in the intervention? * What are the barriers to enrollment, and how can they be addressed in real time?
* What explains variation in Reach, number of patients enrolled, and the decline rate across study sites, retrospectively?
* What are the barriers to participation for patients?
* What are patients' reasons for not participating?
 Quantitative measures
  • # of enrollees (weekly and cumulative)

  • # participating/# eligible

 Data sources
  • Medication Management Tool (MMT)

  • MMT: Reasons for declining participation (survey and free text)

  • Webinars with AIM Pharmacists: pharmacists' description of their interactions with patients.

  • Semi-structured and informal interviews with Key Informants, including AIM Pharmacists

  • Observations of AIM pharmacist work environments

  • Semi-structured phone interviews with patients.

 Analyses
  • Use measures to track patient contact, declines, and participation weekly

  • Look at variation across sites

  • Real-time review of RE-AIM Reach measures

  • Real-time and retrospective mixed methods analysis using quantitative and qualitative inquiry to identify reasons for variation of measures across sites.

  • Modification of implementation in real time.

  • Site-specific and cross-site matrix and qualitative content analyses

Effectiveness
 Questions * What are the effects of the intervention in eligible patients? * What are the conditions and mechanisms that lead to effectiveness?
* What explains variation in outcome measures across sites?
 Quantitative measures
  • Relative change in systolic blood pressure measurements over time

 Data sources
  • Administrative data

  • MMT (patient BP measures, pharmacist encounter notes)

  • Semi-structured and informal interviews with Key Informants, including AIM Pharmacists

  • Observations of AIM pharmacist work environments

  • Webinars with AIM Pharmacists: pharmacists' description of their interactions with patients.

  • Semi-structured phone interviews with patients.

 Analyses
  • Comparison of the relative change in systolic BP measurements over time.

  • Case-based (site and individual patient levels) qualitative analysis using multiple data sources

  • Mixed methods analysis to explain selected quantitative analysis results, e.g., understand variation across patients in maintenance of target BP

Adoption
 Questions * What is the percentage of providers participating in the program? * What affects provider participation?
 Quantitative measures
  • Participating providers and % of providers asked to participate

 Data sources
  • Primary care clinic operations

  • Semi-structured interviews with adopters and non-adopters

  • Site visits and observations

 Analyses
  • Look at variation across sites

  • Matrix and qualitative content analysis

Implementation
 Questions * Was the intervention implemented as intended? (fidelity)
* How consistent was delivery across settings and staff?
Questions specific to key components:
* Did patients who didn't already have a BP cuff receive one?
* Did pharmacists reach minimum MI competency levels?
* What were the modifications to the intervention and why did they occur?
* What were the barriers to fidelity?
* What are the contextual factors and processes underlying barriers to implementation and how do we address them?
Questions specific to key components:
* What are the barriers at each site to providing patients with blood pressure cuffs?
* What is the process through which pharmacists improve their MI skills?
 Quantitative measures
  • # blood pressure cuffs received / # patients needing cuffs

  • Expert ratings and self-evaluation of MI skills

 Data sources
  • Motivational interviewing expert scoring sheet

  • Pharmacist self-evaluation

  • Semi-structured and informal interviews with Key Informants, including AIM Pharmacists

  • Observations of AIM pharmacist work environments

  • Webinars with AIM Pharmacists: pharmacists' description of their interactions with patients.

  • Semi-structured phone interviews with patients.

 Analyses
  • Calculate and compare MI scores of pharmacists on expert scoring sheet and pharmacist self-evaluation.

  • Real-time review of RE-AIM Implementation measures.

  • Real-time site-specific mixed methods analysis using quantitative and qualitative inquiry to identify reasons for variation of measures across sites.

  • Modification of implementation in real time.

  • Site-specific cross-site matrix and qualitative content analysis of intervention and contextual factors that influence implementation using the CFIR framework.

  • Explain selected quantitative analysis results retrospectively using qualitative and quantitative data, e.g., understand deficiencies in MI skills and how to address them.

Maintenance
 Questions * Is the intervention maintained after the study period?
* To what degree are key components sustained?
* What proportion of the target population is being enrolled?
* What proportion of pharmacist time is spent on AIM?
* In what form are the components of the intervention sustained?
* What are the modifications made at each site after the study?
* What are the barriers to maintaining the program?
* To what degree and how are pharmacists using MI?
* What is the pharmacist's role in adjusting medications? Why?
 Quantitative measures
  • # of enrollees (weekly and cumulative)

  • Pharmacist time spent on AIM/all pharmacist time

 Data sources
  • MMT

  • Administrative data

  • Time study
  • Post-implementation key informant interviews

  • Post-implementation observation

 Analyses
  • Use measures to track: patient contact, declines, and participation; proactive vs. clinician-initiated referrals; percentage of pharmacist time spent on AIM
  • Compare during and post-intervention measures

  • Retrospective site-specific qualitative content analysis, including by AIM component

Conducted at 3 Veterans Affairs (VA) facilities in 2008–2009.

Note: Bolded questions are applicable across intervention types (i.e., not specific to AIM).

We also created questions to expand on topics not measured quantitatively. In particular, we expanded the Implementation dimension to structure identification and understanding of contextual factors and processes underlying barriers to implementation and how to address them in real time, using the Consolidated Framework for Implementation Research (CFIR). The CFIR provides a taxonomy of contextual dimensions that influence implementation (Damschroder et al., 2009) to inform data collection and analysis.

As illustrated in Table 2, we drew from multiple data sources to address questions in the RE-AIM QuEST framework. This allowed data triangulation and provided a holistic and robust understanding of implementation at each site (Patton, 2001). We analyzed data from the MMT, semi-structured interviews with stakeholders and pharmacists, site visit field notes, pharmacist webinar notes, semi-structured phone interviews with patients, and research team notes and discussions of emerging issues. Analysis techniques were tailored to each question and included real-time and retrospective mixed methods analyses (Fetters et al., 2013), case-specific and cross-case matrix (Averill, 2002), and qualitative content analyses (Forman & Damschroder, 2007) at the site and patient levels.
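The cross-case matrix analyses described above can be thought of as a site-by-theme grid of coded excerpts. The sketch below is purely illustrative — the site names, themes, and excerpts are hypothetical placeholders, not study data — and shows one minimal way such a matrix might be organized for cross-site comparison.

```python
# Illustrative sketch of a cross-case (site x theme) matrix for qualitative
# content analysis. All sites, themes, and excerpts below are hypothetical.
from collections import defaultdict


def build_matrix(coded_excerpts):
    """Organize (site, theme, excerpt) tuples into a site-by-theme matrix."""
    matrix = defaultdict(lambda: defaultdict(list))
    for site, theme, excerpt in coded_excerpts:
        matrix[site][theme].append(excerpt)
    return matrix


def compare_across_sites(matrix, theme):
    """Return, per site, how many coded excerpts support a given theme."""
    return {site: len(themes.get(theme, [])) for site, themes in matrix.items()}


excerpts = [
    ("site_1", "workspace barriers", "No permanent office or phone line."),
    ("site_1", "workspace barriers", "Confidential notes could not be stored."),
    ("site_3", "workspace barriers", "Pharmacist has a lockable office."),
    ("site_3", "patient engagement", "Clinic felt like a 'second home'."),
]

matrix = build_matrix(excerpts)
counts = compare_across_sites(matrix, "workspace barriers")
# counts -> {"site_1": 2, "site_3": 1}
```

In practice this kind of matrix is typically built in qualitative analysis software rather than code; the sketch only makes the underlying structure explicit.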

3. Results

To illustrate use of the RE-AIM QuEST framework, we provide examples of real-time coordinated use of quantitative and qualitative data collection and analysis to systematically identify and address issues at each site. In addition, we illustrate retrospective use of these data sources in analyses to understand variation across sites and explain study outcomes.

3.1. Reach

3.1.1. Is the intervention reaching the target population?

RE-AIM measures of Reach include number and percentage of eligible patients enrolled and comparison of participants versus non-participants with respect to key characteristics. We expanded upon these by including the qualitative questions: What are barriers to enrollment, and how can they be addressed? What explains variation in Reach across study sites? We conducted direct observations of the pharmacists' work environments (Taylor-Powell & Steele, 1996) and interviews in real time to understand barriers to enrollment. We also conducted a post-implementation, retrospective assessment of the variation in enrollment across sites.

3.1.1.1. Real-time formative evaluation

We reviewed enrollment trajectories by site (# patients contacted, # new enrollees, # and % of eligible patients enrolled) on a weekly basis, and used these data to detect potential issues with enrollment. These weekly reports showed that the number of patients contacted and the yield of enrollees from patients contacted was much lower at one site than the other two sites. From the quantitative data alone, we could not determine why. To explore this issue, we used qualitative data from informal contacts with the AIM pharmacists, direct observations in their clinics and formal interviews with them and other clinical stakeholders. We discovered differences in resources across sites that affected the efficiency of pharmacists' work and the time available to recruit and enroll patients. The pharmacist at the site with slow enrollment had no permanent workspace or personal phone extension and voicemail; thus, patients had difficulty returning her calls. She had to search for a room to work daily, and could not store confidential records:

“She may go to the trouble of getting notes from CPRS onto her scrap paper in preparation for a call but then the patient may be a no-show…she has to trash her notes and start all over again whenever the appointment is rescheduled.” (Site 1 Field notes).

The other pharmacist, in contrast, had her own office:

“It is clear that the space and resources that this AIM Pharmacist has are far superior to those of [the site 1 pharmacist]…[The site 3 pharmacist] has the ability to lock up sensitive information…which allows her to create things like the individual patient folders, a system that seems to work well for her.” (Site 3 field notes).

The study team and the pharmacist worked with clinic managers to improve the space situation and procure a permanent phone number and pager. Follow-up observations and interviews revealed that in addition to these improvements, the pharmacist also made her work processes more efficient and increased enrollment rates. Triggered by real-time review of quantitative data, our examination of Reach allowed us to identify contextual factors and implementation issues near the beginning of the intervention.
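The weekly Reach review described above — tracking contacts, enrollees, and yield per site, and flagging sites whose enrollment lags — can be sketched in a few lines. This is a minimal illustration with hypothetical counts and an assumed flagging threshold, not the study's actual reporting code.

```python
# Minimal sketch of a weekly Reach review: per-site enrollment yield plus a
# flag for sites falling below a chosen yield threshold. All numbers and the
# threshold are hypothetical.
def weekly_reach_report(site_counts, min_yield=0.25):
    """site_counts: {site: (contacted, enrolled, eligible)} for one week."""
    report = {}
    for site, (contacted, enrolled, eligible) in site_counts.items():
        yield_rate = enrolled / contacted if contacted else 0.0
        report[site] = {
            "yield": round(yield_rate, 2),
            "pct_eligible_enrolled": round(enrolled / eligible, 2) if eligible else 0.0,
            "flag_low_enrollment": yield_rate < min_yield,
        }
    return report


week = {
    "site_1": (40, 6, 300),   # low yield -> would trigger qualitative follow-up
    "site_2": (35, 14, 120),
    "site_3": (50, 20, 200),
}
report = weekly_reach_report(week)
# report["site_1"]["flag_low_enrollment"] -> True
```

A flagged site would then prompt the qualitative inquiry (observations, interviews) described above to explain the shortfall.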

3.1.1.2. Retrospective interpretive evaluation (Stetler et al., 2006)

To identify patient factors that may have contributed to relatively low enrollment and high decline rates at site 1, we examined tracking data: reasons for declining captured by a closed-ended survey administered to individuals who declined to participate, AIM pharmacists' open-ended documentation within the tracking tool, and interviews with pharmacists. Survey results showed variation across sites in several reasons for declining (e.g., the percentage who were “not worried about blood pressure”). Qualitative data surfaced additional reasons for declining participation that were not captured in the survey; several of these varied widely across the three sites. For example, at site 3, which had the lowest decline rate, the pharmacist noted that a high proportion of patients regarded the clinic as a “second home,” and that they especially appreciated the AIM program:

“…free programs offered to low-income areas are usually a hit… the Veterans, they hang around, this is like their second home more than any VA I've ever seen.” (Pharmacist B interview).

In the same vein, one pharmacist cited good continuity of care between patients and their PCPs and a “close-knit” feeling at site 2 as potential reasons for its high enrollment rate. Site 1, which had the lowest enrollment rate, was a much larger facility that included residents who rotated every 3 years. Many patients did not feel the same sense of continuity:

“I just noticed that…Site 2 is a pretty small close-knit clinic…I would think about people who live, let's say 30 min from either facility [Site 1 or Site 2]…those people in Site 2 tended to not only want to participate but they'd want to come in [person rather than just talk on the phone], versus people in Site 1.” (Pharmacist A interview).

Real-time tracking of enrollment, coupled with qualitative inquiry, was effective for identifying context-specific reasons for low enrollment and variation in enrollment across sites during the intervention and for explaining results retrospectively. It is important to think about not only patient characteristics and needs but also implementation processes and contextual factors to more fully understand how to maximize Reach.

3.2. Effectiveness

3.2.1. Does the intervention accomplish its goals?

3.2.1.1. Retrospective assessment

To expand upon the traditional RE-AIM measure of effectiveness that often includes effect size and associated specified outcomes, we added the qualitative question: What are the conditions and mechanisms that contributed to effectiveness?

Our primary quantitative effectiveness outcome was the relative change in systolic blood pressure (SBP) among all eligible patients in intervention teams compared to control teams. Using multi-level mixed-effects linear regression models in an intention-to-treat analysis, we examined longitudinal differences in SBP among participants immediately after receiving the intervention and up to six months later (Heisler et al., 2012). Briefly, in this intention-to-treat population-based analysis we found that patients allocated to the AIM program achieved improved SBP control more quickly than control-team patients, but by six months after the intervention, control-team patients had achieved similar improvements in SBP (Heisler et al., 2012). One explanation is that only 53% of eligible patients actually participated in AIM. However, qualitative analyses and site observations revealed that clinical leadership at all three sites implemented a range of other programs that, like AIM, targeted improved BP control in a similar cohort of patients in order to meet a performance measure; this affected patients in both the intervention and control groups.
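The study's effectiveness analysis used multi-level mixed-effects regression; as a greatly simplified illustration of the underlying between-group comparison, the sketch below computes the mean SBP change in intervention versus control patients. All values are hypothetical, and this two-group arithmetic deliberately omits the clustering and longitudinal structure the actual models accounted for.

```python
# Greatly simplified sketch of the effectiveness comparison: mean change in
# systolic BP (SBP) for intervention vs. control patients. The study itself
# used multi-level mixed-effects regression; the values here are hypothetical.
from statistics import mean


def mean_sbp_change(patients):
    """patients: list of (baseline_sbp, followup_sbp) pairs, in mm Hg."""
    return mean(follow - base for base, follow in patients)


intervention = [(158, 140), (162, 149), (155, 146)]
control = [(157, 150), (160, 154), (156, 151)]

diff = mean_sbp_change(intervention) - mean_sbp_change(control)
# A negative diff means a larger SBP drop in the intervention group.
```

The real analysis additionally modeled site and team clustering and repeated measures over time, which simple group means cannot capture.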

To understand why only some patients maintained an improved SBP 6 months after AIM, we analyzed patient interview and MMT data, including BP readings during and after enrollment in AIM and pharmacist documentation of each AIM encounter (e.g., medication adherence, frequency of at-home BP monitoring, and medical record BP data), for a purposive sample of 31 patients. We conducted in-depth analyses to create patient case studies, as well as cross-case analyses to identify patient characteristics and program mechanisms that affected whether patients maintained their target BP. We found that all patients who maintained BP control had moderate or high engagement in AIM, did not prioritize managing co-morbidities over managing their BP, and had fewer psychosocial issues. One mechanism that appeared to contribute to sustained BP improvement was a theme we labeled “linkage,” characterizing patients who made the connection between adherence and BP control. Because they monitored their BP regularly at home, they could see it drop over multiple pharmacist encounters as they improved medication adherence. One patient said:

“I had been [taking]…some medicine…and I was only taking it twice so [AIM pharmacist] convinced me to take it three times a day and it showed an improvement…I was keeping track of my blood pressure and I could see the improvement.”

Several program elements helped contribute to patients seeing this linkage: multiple encounters with the pharmacist over time, medication counseling, and home BP self-monitoring.

3.3. Adoption

3.3.1. To what extent are those targeted to deliver the intervention participating?

3.3.1.1. Retrospective assessment

The Adoption measure of RE-AIM typically includes the absolute number and proportion of intervention agents willing to initiate a program. Because our three sites had already agreed to participate in this trial of the AIM intervention, and because pharmacists only had contact with primary care providers (PCPs) around specific patients, adoption of AIM by PCPs was not as much of an issue as in other interventions that rely on provider participation. All but one provider across the sites agreed to allow their patients to participate. However, adding the qualitative component to the RE-AIM framework allowed us to hear through our interviews what the PCPs liked about the program. Based on the AIM pharmacists' encounter notes and on feedback from their patients, PCPs perceived that AIM helped their patients. One PCP, who gave several examples of patients who had improved adherence, said:

“I thought it was really helpful to have another individual spending time with the patient, perhaps more than I could to try to elicit some of the things that cause them to either take or not take their blood pressure medicines.”

3.4. Implementation

3.4.1. To what extent was the intervention consistently implemented?

3.4.1.1. Real-time formative evaluation and retrospective assessment

In the Implementation dimension, RE-AIM measures traditionally focus on fidelity to the intervention protocol and consistency of delivery. We added qualitative methods to our fidelity tracking for the key components of the intervention. For example, we assessed pharmacists' MI skills through self-assessments and, halfway through the intervention, through an expert rater. These ratings showed that pharmacists' self-efficacy in using MI techniques increased over time, and that they met or exceeded the minimum proficiency and competency levels. Longitudinal interviews with pharmacists corroborated these findings; pharmacists reported that their skills and self-efficacy increased. These findings were especially relevant because the pharmacists were new to MI and had been trained only just prior to the intervention.
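The fidelity check described above — comparing expert ratings and pharmacist self-assessments against a minimum proficiency level — can be sketched as follows. The skill names, scores, and threshold are hypothetical placeholders, not the study's actual MI rating scale.

```python
# Sketch of the MI fidelity check: compare expert ratings and pharmacist
# self-assessments against a minimum proficiency threshold. Skill names,
# scores, and the threshold are hypothetical, not the study's actual scale.
def meets_proficiency(scores, threshold):
    """Return True if every rated skill meets or exceeds the threshold."""
    return all(score >= threshold for score in scores.values())


expert = {"open_questions": 4.2, "reflections": 3.9, "mi_spirit": 4.0}
self_eval = {"open_questions": 4.0, "reflections": 3.5, "mi_spirit": 4.5}

ok_expert = meets_proficiency(expert, threshold=3.5)
ok_self = meets_proficiency(self_eval, threshold=3.5)
# Both True here, mirroring the finding that pharmacists met or exceeded
# minimum proficiency levels.
```

A skill falling below threshold on either rating would, in the study's approach, prompt targeted follow-up such as the expert-led webinar described next.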

Qualitative data from bi-monthly webinars and pharmacist interviews provided insight into the pharmacists' feelings about their own MI skills. We used this information in real time to improve these skills. For example, even after gaining some experience with MI techniques, pharmacists were at a loss when patients were resistant to talking about their BP. As a result of this insight, revealed through interviews and project meetings, the issue was discussed during a webinar with an MI expert who led role-playing. The MI trainer suggested that they see that “there's a benefit in stepping back and letting the patient just be and see what evolves instead of pushing too hard in a particular visit.” (webinar notes). After this session, the pharmacists felt more comfortable with resistant patients whom they thought had potential to do well in the program, rather than feeling they had failed. Thus, using qualitative data, we identified lack of fidelity to MI principles as a barrier to success and were able to address it in real time.

3.5. Maintenance

3.5.1. To what extent did the intervention become part of routine organizational practices and maintain effectiveness?

3.5.1.1. Retrospective assessment

The RE-AIM Maintenance measure is typically defined by studying the long-term effects and outcomes after 6 or more months. We expanded on this by assessing prospects for maintaining the program through post-intervention interviews and site visits. The program was not maintained in its entirety at any of the three sites. Crucial barriers included the perceived high cost of intensive patient management and lack of integration of the AIM program into the electronic medical record. However, some program components were integrated into patient care (see Table 1). The AIM pharmacists were each hired to run their own clinics after the intervention. They continued to use MI when counseling patients, trained other staff in MI techniques, and were authorized to adjust medications. The site 2 pharmacist described the value she saw in MI:

“…I really learned a lot of, as a clinician, things that are going to help me throughout my whole career. I'm never going to forget the impact of motivational interviewing and… I think I would've been less of a pharmacist if I wouldn't have had that experience.”

She also described the benefit of being able to adjust medications independently:

“When the nurses see a patient and their blood pressure is high…then they come and staff with a pharmacist for those changes…usually they would collect all this data and say, ‘Alright, I'll talk to your doctor and call you back.’ So now me being able to do…on-the-spot changes, patients leave the clinic knowing what's going to be different instead of [waiting] for a phone call.”

Based on these qualitative data, maintenance may be re-conceptualized to include both integration of an intervention into routine practice and integration of a limited number of program components that may improve patient care.

4. Discussion

The RE-AIM QuEST framework, which proposes general qualitative questions for each RE-AIM dimension that are applicable across intervention types, along with tailored data sources and analyses matched to these questions, expands RE-AIM in three fundamental ways. First, RE-AIM QuEST allows investigators to understand not only whether Reach, Adoption, and Implementation vary across and within sites but also how and why, permitting researchers to improve implementation during the intervention. For example, real-time tracking of enrollment, coupled with qualitative inquiry, is a useful tool to identify context-specific reasons for variation in enrollment across sites and to address them. Second, RE-AIM QuEST expands retrospective evaluation of effectiveness by examining why the intervention worked or failed to work. Understanding the underlying conditions and mechanisms for success or failure can help inform the design of future interventions or explain which components of the intervention, or of the context in which it was implemented, may have been barriers. Finally, the RE-AIM QuEST framework expands the Maintenance dimension by explicating whether and in which ways the intervention was maintained. This could offer valuable lessons for future sites planning to implement a similar intervention.

Our experience using a RE-AIM mixed methods framework to assess a health system intervention in a pragmatic RCT, together with the addition of general questions that can be adapted to specific evaluation studies, illustrates and facilitates varied uses of this type of framework across intervention types, populations, study designs, and settings. However, further work is needed to develop the framework and to adapt it across different contexts and types of projects.

In summary, the RE-AIM QuEST framework expands each RE-AIM dimension to allow systematic evaluation of how the intervention works through an understanding of contextual factors and mechanisms that link context, process and outcomes. In this way, RE-AIM QuEST may enhance potential for an intervention's effectiveness and successful translation. The framework can be adapted to quality improvement efforts, effectiveness and quasi-experimental studies to improve interventions in real-time, to assess implementation retrospectively, and to enhance sustainability and translation to other settings.

Competing interests

All authors declare no competing interests.

Authors' contributions

All authors have made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data, have been involved in drafting the manuscript or revising it critically for important intellectual content, and have given final approval of the manuscript.

Acknowledgments

Funding was provided by the Department of Veterans Affairs (VA) Health Services Research and Development (HSR&D) Service (grant number SDP 06-128) and the National Institute of Diabetes and Digestive and Kidney Diseases (grant numbers 5 R18 DK076622 and P30DK092926 (MCDTR)). The opinions expressed are the authors' and do not represent those of the US Department of Veterans Affairs.

Contributor Information

Jane Forman, Email: jane.forman@va.gov.

Michele Heisler, Email: mheisler@umich.edu.

Laura J. Damschroder, Email: laura.damschroder@va.gov.

Elizabeth Kaselitz, Email: emaccorm@umich.edu.

Eve A. Kerr, Email: ekerr@umich.edu.
