Journal of the American Medical Informatics Association (JAMIA). 2009 Nov-Dec;16(6):889–897. doi: 10.1197/jamia.M3085

Translating Clinical Informatics Interventions into Routine Clinical Care: How Can the RE-AIM Framework Help?

Suzanne Bakken a,b, Cornelia M Ruland b,c
PMCID: PMC3002125  PMID: 19717799

Abstract

Objective

Clinical informatics intervention research suffers from a lack of attention to external validity in study design, implementation, evaluation, and reporting. This hampers the ability of others to assess the fit of a clinical informatics intervention with demonstrated efficacy in one setting for implementation in their setting. The objective of this model formulation paper is to demonstrate the applicability of the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework with proposed extensions to clinical informatics intervention research and describe the framework's role in facilitating the translation of evidence into practice and generation of evidence from practice. Both aspects are essential to reap the clinical and public health benefits of clinical informatics research.

Design

We expanded RE-AIM through the addition of assessment questions relevant to clinical informatics intervention research including those related to predisposing, enabling, and reinforcing factors and validated it with two case studies.

Results

The first case study supported the applicability of RE-AIM to inform real world implementation of a clinical informatics intervention with demonstrated efficacy in randomized controlled trials (RCTs) - the Choice (Creating better Health Outcomes by Improving Communication about Patients' Experiences) intervention. The second, an RCT of a personal digital assistant–based decision support system for guideline-based care, illustrated how RE-AIM can be used to inform the design of an efficacy RCT that captures essential contextual details typically lacking in RCT design and reporting.

Conclusion

The case studies validate, through example, the applicability of RE-AIM to inform the design, implementation, evaluation, and reporting of clinical informatics intervention studies.

Introduction

Translational research has received heightened attention since the publication and initial implementation of the National Institutes of Health (NIH) Roadmap for Medical Research. 1,2 Although more focus has centered on breaking down the barriers between basic science and clinical science, several authors have emphasized the importance of removing the barriers between clinical science and translation of discoveries into routine clinical practice and healthcare policy. 3,4 Woolf, in particular, has suggested the need for additional resources to support the latter given the likelihood of greater impact on the public health. 5 The NIH Clinical and Translational Science Awards (CTSAs) 6 partially address this need through the funding of Community Engagement Resources and, in some instances, through supplemental funding to conduct pilot work related to the creation of a national network of community-based research sites. 7

Recognition of the importance of studying the real world implementation of efficacious interventions to address their effectiveness preceded these NIH initiatives. Models, such as the Veterans Affairs QUERI 8,9 and the RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, and Maintenance), 10–13 have been proposed as methods to facilitate translation of research into practice. In addition, federal agencies 14 and others 15 have called for the conduct of so-called practice-based or pragmatic trials so the evidence generated through research is more likely to be appropriate for implementation in practice. The number of practice-based research networks (PBRNs) is on the rise, and research conducted in practice settings has been identified as “a crucial scientific step, the blue highway, between the great medical advances of the next 25 years and the millions of Americans who want to live a long and healthy life.” 4 Common across these two approaches—using a theoretically based implementation model to apply evidence to practice and generating evidence from practice-based research—is an increased consideration of external validity in addition to the internal validity of the study design.

Our premise is that, as with clinical science, clinical informatics intervention research suffers from a lack of attention to external validity (i.e., generalizability) in study design, implementation, evaluation, and dissemination. Moreover, lack of attention to external validity hampers the ability of others to assess the fit of a clinical informatics intervention with demonstrated efficacy in one setting for implementation in their setting. The RE-AIM framework addresses these concerns.

The purpose of this model formulation paper is to demonstrate the applicability of the RE-AIM framework to clinical informatics intervention research. First, we discuss the importance of such a framework for planning, implementing, evaluating, and reporting clinical informatics intervention studies. Second, we describe the RE-AIM framework and suggest additional assessment questions for clinical informatics intervention research. Third, we validate the use of the RE-AIM framework with its extension for clinical informatics intervention research through two clinical informatics intervention case studies. The first case study is focused on the real world implementation of a clinical informatics intervention with demonstrated efficacy in randomized controlled trials (RCTs)—the Choice (Creating better Health Outcomes by Improving Communication about Patients' Experiences) intervention. In this instance, the dimensions of the RE-AIM framework provide a model for describing the process of implementing Choice into routine care (i.e., translation of evidence into practice). The second case study is an RCT of a personal digital assistant (PDA)-based decision support system (DSS) for guideline-based screening and management of depression, obesity, and smoking cessation. This case study illustrates how the RE-AIM framework can be used to inform the design of an efficacy RCT that will generate evidence from practice through capture of essential contextual details typically lacking in RCT design and reporting.

Need for a Framework to Facilitate Translation of Clinical Informatics Evidence into Practice

There is a considerable gap between existing clinical research-based knowledge and its implementation into clinical practice. 16 This gap also exists for clinical informatics interventions. There are several reasons for this gap—some are related to insufficient capture and reporting of contextual details that can help others determine if the findings of the efficacy study are relevant and replicable in their setting. 17,18 This is particularly important in clinical informatics intervention research because, in some instances, most of the evidence for a particular intervention (e.g., computer-based provider order entry) can be attributed to only a few organizations whose organizational cultures may differ from the settings in which others wish to implement such technology. In addition, because of their emphasis on internal validity, traditional RCTs have characteristics that hamper implementation of the results into practice. These include strict inclusion and exclusion criteria and ignoring or trying to control for variations in clinical practice. 18–21

Less attention has been given to the study of interventions under typical and varying conditions than under the optimal conditions of an RCT. 11,22 The need for studying clinical informatics interventions under such conditions has been recognized in the informatics literature. For example, in a model that matches stage of system development with level of evaluation, Stead, et al conceptualized the study of an informatics innovation under typical and varying conditions as the fifth level of evaluation, in which the focus was determination of intervention effectiveness and the reason for its effect. 23

Historically, many information systems have failed to achieve their potential because of lack of organizational leadership, organizational attitudes toward innovation, and inadequate attention to users' judgments about system feasibility, time requirements, and usefulness in clinical practice. 24–27 Thus, there is a need for research that combines intervention effectiveness with research about effective ways to integrate interventions into existing organizational contexts. This type of research is characterized by multiple names in the literature including implementation science, dissemination science, translational research, and knowledge transfer, 28–30 and the RE-AIM framework provides a model to inform such research.

Model Formulation and Description

The RE-AIM framework was designed to assist in the planning, conduct, evaluation, and reporting of studies with a goal of translation of research into practice. 31 As a complement to criteria that focus on internal validity such as the CONSORT criteria for RCTs 32–34 and the recommendations by Shcherbatykh, et al 35 for handling methodologic issues in health informatics trials, RE-AIM dimensions reflect a primary emphasis on external validity. 12 The framework has been mainly applied in the area of health promotion, 11,36 but has also been used more broadly as a framework for eHealth evaluation and dissemination 37 and translational research. 22

The RE-AIM framework differs from existing theories or models related to technology adoption or technology acceptance in several ways. First, although it is based on work in diffusion theory, the breadth of the framework is beyond technology adoption or acceptance. Second, the focus of the framework is research and its translation into practice. Third, the framework is intended to assist in the planning, conduct, evaluation, and reporting of research studies rather than to only guide the implementation of a specific innovation. Fourth, the dimensions of the RE-AIM framework emphasize external validity in study design and evaluation as a method for increasing the likelihood that a particular intervention will work either across settings or in a particular setting. The first aspect relates to the generalizability or robustness of findings across situations and populations. The second, which is paramount for organizational decision making, addresses the relevance of the evidence generated through research in a particular setting, population, and circumstance to another.

Table 1 summarizes the dimensions discussed below with associated definitions and questions for assessment of research studies with the aim of translating research into practice. Table 1 also includes our proposed questions for assessment of clinical informatics intervention studies; these expansions apply particularly to the Effectiveness and Implementation dimensions. Metrics have been developed to operationalize some RE-AIM dimensions 38 and, conceptually, the impact of an intervention on an individual is a function of the Reach dimension multiplied by the Effectiveness dimension. 22
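
The multiplicative relationship can be expressed in a few lines. The sketch below simply restates the published Reach × Effectiveness impact metric; the function name and numeric values are hypothetical illustrations, not figures from the framework literature.

```python
# Schematic restatement of the RE-AIM individual-level impact metric:
# impact = Reach (proportion who participate) x Effectiveness (average
# effect among participants). All values below are hypothetical.

def individual_impact(reach: float, effectiveness: float) -> float:
    """reach is a proportion in [0, 1]; effectiveness is the average
    effect among participants, in the outcome's units."""
    if not 0.0 <= reach <= 1.0:
        raise ValueError("reach must be a proportion between 0 and 1")
    return reach * effectiveness

# A highly effective program with narrow reach can have less population
# impact than a moderately effective program with broad reach.
narrow_but_strong = individual_impact(reach=0.10, effectiveness=0.8)  # 0.08
broad_but_modest = individual_impact(reach=0.60, effectiveness=0.3)   # 0.18
```

Because clinical informatics interventions often reach patients only through clinicians, effective reach may itself be a product of clinician reach and patient reach, compounding the importance of this dimension.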

Table 1. RE-AIM Dimensions, Definitions, and Questions for Assessing Applicability of Clinical Informatics Interventions for New Settings, Populations, and Circumstances

Reach (individual level)
  Definition: Absolute number, proportion, and representativeness of individuals who are willing to participate in a given initiative, intervention, or program
  Questions for assessing translation of findings into practice: What percentage of the target population came into contact with or began program? Did program reach those most in need? Were participants representative of your practice setting?

Efficacy/effectiveness (individual level)
  Definition: Impact of an intervention on important outcomes, including potential negative effects, quality of life, and economic outcomes
  Questions for assessing translation of findings into practice: Did program achieve key targeted outcomes? Did it produce unintended adverse consequences? How did it affect quality of life? What did the program cost as implemented and what would it cost in your setting?
  Additional questions for clinical informatics intervention studies: Did the informatics intervention produce unintended positive consequences? How did the informatics intervention affect quality of care?

Adoption (setting and/or organizational level)
  Definition: Absolute number, proportion, and representativeness of settings and intervention agents (people who deliver the program) who are willing to initiate a program
  Questions for assessing translation of findings into practice: Did low-resource organizations serving high-risk populations use it? Did program help the organization address its primary mission? Is program consistent with your values and priorities?

Implementation (setting and/or organizational level)
  Definition: Setting level—intervention agents' fidelity to the various elements of an intervention's protocol, including consistency of delivery as intended and the time and cost of the intervention; individual level—clients' use of the intervention strategies
  Questions for assessing translation of findings into practice: How many staff members delivered the program? Did different levels of staff implement the program successfully? Were different program components delivered as intended?
  Additional questions for clinical informatics intervention studies:
  • What barriers to implementation (predisposing factors at individual, setting/organizational levels) were identified and how were they addressed?
  • What enabling factors (individual, setting/organizational levels) were required to support the informatics intervention?

Maintenance (individual and setting and/or organizational levels)
  Definition: Setting level—extent to which a program or policy becomes institutionalized or part of the routine organizational practices and policies; individual level—long-term effects of a program on outcomes for 6 or more months after the most recent intervention contact
  Questions for assessing translation of findings into practice: Did program produce lasting effects at individual level? Did organizations sustain the program over time? How did the program evolve? Did those persons and settings that showed maintenance include those most in need?
  Additional questions for clinical informatics intervention studies: What reinforcing factors (individual, setting/organizational levels) were required to maintain the informatics intervention?

RE-AIM = Reach, Effectiveness, Adoption, Implementation, and Maintenance.

Reach

In the original RE-AIM framework the Reach dimension is an individual level measure (e.g., patient or employee) of participation, referring to the percentage of persons who receive a program. 11,36 However, there is a considerable difference between interventions that are administered to patients by a few dedicated program officers or research staff, and clinical informatics interventions that often involve at least two large target groups: clinicians who use the clinical informatics intervention as part of their care processes and their patients. This interdependency among user groups adds an additional layer of complexity that affects other RE-AIM dimensions.

Efficacy/Effectiveness

Efficacy/Effectiveness refers to behavioral, quality of life, and participant satisfaction outcomes as well as physiological endpoints on an individual level, usually the patient. 11,36 In terms of Efficacy/Effectiveness for clinical informatics intervention studies, it is important to examine the impact on quality of care from both the process and outcome perspectives and to monitor unintended negative and positive consequences. For many clinical informatics interventions, the potential effect on patient outcomes is mediated by clinician behavior, i.e., if the clinician does not use the intervention, the patient cannot reap the benefits. Furthermore, when multiple clinicians use a clinical informatics intervention, the dose and consistency with which it is applied may vary greatly. In studies of these types of nested interventions it is crucial to measure outcomes on all user group levels, as well as monitor potential confounders, practice variations, and heterogeneity of patients and clinicians. 39 While this may require larger sample sizes than are usually necessary in RCTs, inclusion of these data can greatly contribute to understanding how and why a clinical informatics intervention works, for whom and under what conditions, and how clinician and system variables may mediate effects on patient outcomes. Our proposed extension of the RE-AIM assessment questions for the Efficacy/Effectiveness dimension focuses on quality of care and unintended positive consequences. The latter is becoming increasingly important as Web 2.0 social technologies are being implemented into health care and have the potential to enable unanticipated but positive effects through the community of users.

Adoption

Adoption relates to the absolute number, proportion, and representativeness of settings and intervention agents (people who deliver the program) who are willing to initiate a program. This differs from the typical way that the words adoption and acceptance are used in most innovation or technology acceptance models; these are integrated as part of the RE-AIM Implementation dimension and discussed in the following paragraph. 40 Multi-site studies are a rarity in the clinical informatics intervention literature. Moreover, study reports often provide insufficient information about the nature of the organization or organizations in which the intervention was implemented, which precludes an assessment of representativeness of the setting for the Adoption criterion.

Implementation

The Implementation dimension refers to the extent and consistency to which a program is delivered across programs and settings as intended after it is implemented. 22,36 At the individual level, Implementation is the individual's use of the intervention strategies. 31 The framework does not specifically address the processes required to actually implement the intervention. Successful implementation depends on several factors at the individual, group, and organizational levels. 41 Our expansion of the assessment questions in the RE-AIM framework through addition of predisposing and enabling factor concepts from the PRECEDE-PROCEED model 42 is consistent with the need to capture the breadth of clinical informatics interventions and also to carefully document the clinical informatics intervention and its subsequent adaptations in new contexts. Predisposing factors occur before a behavior and influence motivation to undertake a particular behavior; for example, knowledge, attitudes, beliefs, values, self-efficacy, behavioral intentions, and existing skills. 43 Enabling factors also precede behavior and make it possible for individuals or populations to change their behavior or their environment. 43 Enabling factors for clinical informatics interventions include institutional commitment and central leadership support, extent of integration of the system into its organizational context, time to allow learning, investment in change processes, and adequate user training. 24,26,44,45

Maintenance

Maintenance refers to the degree to which a program becomes routine and part of the everyday culture and norms of an organization. 11,36 At the individual level, Maintenance is defined as the long-term effects of a program on outcomes for 6 or more months after the most recent intervention contact. 31 Due to the multidimensionality of clinical informatics interventions, reinforcing factors 42 are crucial to maintain the intervention. 41 For example, high turnover rates among clinicians, practice changes, and other individual and organizational factors may diminish commitment and use of a clinical informatics intervention after implementation. Conversely, factors such as remuneration, rewards, and awareness of positive attitudes and behaviors of others reinforce behaviors related to use of clinical informatics interventions. We have expanded the RE-AIM framework Maintenance assessment questions by adding a question related to reinforcing factors at the individual and setting/organizational levels.

Validation through Examples

Case Study 1: The Choice Intervention

Choice is an interactive tailored patient assessment tool designed to help clinicians (nurses and physicians) elicit and integrate patient-reported symptoms and health problems into patient care. Using a portable tablet computer, patients rate their symptoms and health problems according to physical, functional, and psychosocial dimensions, denote degree of bother, and prioritize need for symptom-related care. Choice creates an assessment summary that displays patient symptoms and distress ratings in rank-order of prioritized need for care. Choice's effectiveness in improving patient-centered care and patient outcomes has been repeatedly demonstrated in clinical trials with cancer and rehabilitation patients. 46–48 Use of Choice resulted in significantly increased frequency and precision of symptom and problem reporting, 49 higher preference achievement, better functional status, and greater patient satisfaction, 47,48 and greater symptom improvement and reduction in patients' prioritized needs. 50 In addition, when nurses and physicians in experimental groups had assessment summaries available for care planning, there was significantly greater congruence between patient-reported symptoms and those addressed in clinician documentation.

Based on the evidence for Choice efficacy, the five hospital units that had participated in the most recent RCT requested to use Choice in routine patient care. Supported by the nursing and medical leadership, the decision was made to apply for funding to implement and test the effects of Choice as part of routine clinical practice. Since previous study results had been obtained under the controlled conditions of an RCT, applicability and replication of study results in routine care were unknown.

The following paragraphs illustrate the application of the extended RE-AIM framework in the context of the Choice intervention. The specific research questions associated with each dimension are outlined in Table 2. To present a more time-oriented flow related to the implementation of Choice in routine practice, description of the Adoption, Implementation, and Maintenance dimensions precedes that of Reach and Effectiveness.

Table 2. Illustration of Research Questions Assessing RE-AIM Dimensions in Two Case Studies

RE-AIM Dimension Choice PDA DSS
Reach (individual level)

Choice:
  • What proportion of eligible patients was offered the Choice intervention at outpatient consultations, at admission, during hospital stay, or in preparation for discharge?
  • Which patients did not receive the intervention and why?
  • Were the patients who used or did not use the intervention representative of those eligible to use it?
  • What proportion of nurses and physicians actively use Choice assessment summaries to support patient-centered care and patient-provider communication?
  • Does Choice use vary by practice settings, and if so, why?

PDA DSS:
  • What proportion of nurses eligible to use the DSS actually used it?
  • In what proportion of eligible patient encounters was the DSS used?
  • Were the nurses who used the DSS representative of those eligible to use it?
  • Were the patient encounters in which the DSS was used representative of the eligible patient encounters?

Efficacy/effectiveness (individual level)

Choice:
  • What is the effect of Choice on system outcomes (e.g., work processes; organizational change; interdisciplinary collaboration)?
  • What is the effect of Choice on provider outcomes (e.g., quality of care, congruence between symptoms reported and addressed, patient-provider communication, satisfaction)?
  • What is the effect of Choice on patient outcomes (e.g., symptom distress, quality of life, self-efficacy, satisfaction, participation in care)?

PDA DSS:
  • In what proportion of eligible encounters did screening for depression, obesity, or smoking occur?
  • Were there differences in the number of guideline-related diagnoses in DSS versus no DSS group?
  • Were there differences in the number of guideline-related interventions in DSS versus no DSS group?

Adoption (setting and/or organizational level)

Choice:
  • What are the characteristics of the settings that decided to adopt Choice?
  • How well did the goals and values of Choice fit with the values and expectations of patients, nurses, and physicians?
  • How well did the goals of Choice fit with the values and expectations of the practice settings?

PDA DSS:
  • What proportion of patient encounters in DSS versus no DSS groups involved those who were Hispanic, African-American, or lacked private medical insurance?
  • Did DSS use help the Columbia University School of Nursing achieve its educational and practice missions?
  • Did DSS use help specific clinical practice sites achieve their practice missions?

Implementation (setting and/or organizational level)

Choice:
  • How many nurses and physicians used Choice?
  • Did Choice use vary by unit?
  • Was Choice used as originally intended?
  • Did users perceive Choice as easy to use? (predisposing)
  • Did users perceive Choice as useful? (predisposing)
  • Was there sufficient leadership support and user buy-in? (predisposing)
  • What measures were needed to improve readiness for Choice adoption, commitment, and buy-in of practice settings? (enabling)
  • How were end-users involved in the Choice implementation process? (enabling)
  • What workflow adjustments needed to be made to streamline Choice into routines of daily clinical practice? (enabling)
  • What adjustments needed to be made to the Choice application itself? (enabling)
  • What were the confidentiality and data security issues when integrating Choice into routine practice and how were they addressed? (enabling)
  • What support, resources and outside collaborations were needed to implement Choice? (enabling)
  • Were the necessary resources and support available? (enabling)
  • What were the educational needs of Choice users? (enabling)
  • What were the potential barriers to successful Choice implementation and how were they addressed? (enabling)

PDA DSS:
  • How many nurses used the DSS?
  • Did DSS use vary according to time in Master's educational program?
  • Did DSS use vary by nursing specialty?
  • Did DSS use vary by guideline (depression versus obesity versus smoking cessation)?
  • Were DSS functions (screening, assessment, diagnosis, guideline-based plan of care template) used as intended?
  • What level of general PDA knowledge and DSS-specific knowledge was needed to use the DSS? (predisposing)
  • Did users perceive the DSS as easy to use? (predisposing)
  • Did users perceive the DSS as useful? (predisposing)
  • What user training and support services were needed by DSS users? (enabling)
  • What technical infrastructure was required to implement the DSS? (enabling)

Maintenance (individual and setting levels)

Choice:
  • How did Choice evolve over time?
  • Did Choice produce lasting effects at individual level?
  • Did the units sustain Choice use over time?
  • What efforts were needed to maintain participation rate and effectiveness (e.g., repeated educational sessions concordant with staff turnover)? (reinforcing)

PDA DSS:
  • How did the DSS evolve over time?
  • Which reinforcing factors were useful (e.g., individual reports, aggregate reports by specialty, booster training sessions)?

DSS = decision support system; PDA = personal digital assistant; RE-AIM = Reach, Effectiveness, Adoption, Implementation, and Maintenance.

Adoption

An important aspect of adoption is the match between the clinical informatics intervention and the values or expectations of the organization or of the users. We conducted a set of focus groups with nurses and physicians, asking them about their experiences with Choice from the RCT and their future expectations. These evaluations demonstrated consistency between clinicians' values and the purpose of the Choice intervention. Moreover, another indication of matching values was the user request to implement Choice in routine care. 51

Implementation

During the implementation process, we paid careful attention to predisposing and enabling factors. Predisposing factors measured as part of the implementation process focused on clinicians' willingness to adopt Choice in their practice. Clinicians (65 nurses, 12 physicians) who had participated in the RCT completed questionnaires that addressed: (1) use, perceived usefulness and ease of use of the Choice assessment summaries to support patient care; (2) perceptions about Choice's ability to improve care planning, understanding of patient perspectives, and patient-provider communication; and (3) attitudes towards patient involvement in planning patient care. 51

In terms of patient predisposing factors, we also conducted in-depth interviews with 17 cancer patients who had used the Choice intervention during the RCT. These patients unanimously endorsed its use in practice. They reported that Choice was easy to use and that it enhanced their preparation for consultations, provided them with knowledge and understanding about their illness, and helped them to ask questions and participate more in their care. Additionally, patients felt that symptoms and problems were more “appreciated” and addressed by providers, and that they received more individualized care.

Factors that enabled implementation included: formation of an internal PBRN consisting of researchers and clinicians, adjustment of workflow processes, and modifications to Choice.

The internal PBRN consisted of the nurses from the five units, their head nurses, and members of the Choice research team. During the planning phase, network meetings were held regularly. The PBRN members identified potential barriers and benefits and defined clinician need for education and other enabling factors required to make Choice fit with their workflow.

When moving an informatics application from an RCT into routine practice, it is crucial to streamline it into the daily routines of the clinical practice setting so that its use is not perceived as a disruptive, additional task. Consequently, we needed to identify and describe processes related to outpatient consultations, patient admissions, discharges, and other situations where the Choice application could potentially be used. Flowcharts were created for different use situations and delineated how patients moved through the system. This included whom they encountered, for what purpose, when, where, for how long, and at which points the Choice assessment would be most helpful and easy to administer. Based on this work, a preliminary set of workflow procedures was described and pilot tested, and additional adjustments were made.

A clinical informatics intervention that works well in an RCT is not necessarily technically robust enough for large-scale clinical application and integration into existing systems such as an electronic health record (EHR). Modifications for Choice included:

  • Additional instructions and help screens to make the application more self-explanatory to patients

  • Integration of assessment summaries into the EHR, necessitating tools for synchronizing, accessing, and viewing patients' assessment data through the EHR, creation of an administrative interface, development of procedures (e.g., data storage, charging and cleaning tablet computers between patients), and improvement of usability

  • Diagnosis and management of data security risks

Maintenance

In terms of reinforcing factors, we identified an unintended consequence of Choice use in routine care that required clinician education beyond mastering use of the application itself. Through focus groups, feedback from the PBRN, and monitoring of participation rates, we discovered that clinicians also needed training to engage in a more patient-centered communication style, address patient problems from the patient perspective, and treat patients as equal partners. Choice challenged traditional roles, which created insecurity among clinicians and some resistance to use. As a result, not all nurses offered the application to their patients, and some nurses and physicians did not use the assessment summaries or ignored them when talking to patients. The patients who completed assessment summaries, by contrast, expected their problems to be acknowledged, and they were disappointed when this did not happen. Due to close monitoring and assistance from the internal PBRN, we were able to identify and address the problem. In response, we conducted a series of communication training sessions for nurses and physicians with experts in communication with oncology patients. Furthermore, 4–5 nurses from each unit participated in a 4-day train-the-trainer workshop to become “experts” in patient-provider communication. Subsequently, they learned how to conduct one-to-one training sessions and support their peers.

Reach

In the context of Choice, the question about level of participation applies to two clinician user groups (physicians and nurses) and to patients. At the patient level, we regularly collect data over a 2-week period on the ratio of Choice assessments in the EHR to the number of admitted patients. If this ratio drops below a predefined acceptable rate, we plan to further study the processes by which nurses offer Choice to their patients and thus identify potential problems.
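The reach metric described above is simple to operationalize. The following is a minimal sketch; the 0.7 threshold and the function names are illustrative assumptions, not values taken from the Choice deployment.

```python
def reach_ratio(n_choice_assessments: int, n_admitted: int) -> float:
    """Proportion of admitted patients with a Choice assessment in the EHR."""
    if n_admitted == 0:
        return 0.0
    return n_choice_assessments / n_admitted


def needs_follow_up(ratio: float, threshold: float = 0.7) -> bool:
    """Flag a monitoring period whose reach falls below the predefined
    acceptable rate; 0.7 is an illustrative value, not the study's."""
    return ratio < threshold


# Example: 42 Choice assessments among 70 admissions in a 2-week period
ratio = reach_ratio(42, 70)    # 0.6
flag = needs_follow_up(ratio)  # True: below the illustrative 0.7 cutoff
```

In practice, such a check would run against each 2-week data pull, with flagged periods triggering the process study described above.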

Effectiveness

The purpose of measuring the effectiveness of Choice in routine care is to (1) assess whether effects of previous RCTs can be repeated or even strengthened under conditions of routine practice; (2) measure additional system, clinician and patient outcomes; (3) explore the relationships between these outcomes; (4) monitor unexpected outcomes that may occur; and (5) discover new, interesting research questions. We are examining these research questions using a pre-post implementation design with two cohorts.

To achieve the first purpose, we are using the same outcome measures as in the RCT plus additional outcomes such as patient-provider communication. For example, we are audio-taping and analyzing patient-provider communication with and without the Choice intervention. Our experience to date suggests that implementing a clinical informatics intervention in routine care can provide new, exciting opportunities for research and knowledge discovery. Furthermore, using Choice with all patients in the participating units generates a sufficient sample size to study its effects under different practice conditions and in subgroups of patients with varying cancer diagnoses, stages of disease, or demographic characteristics. This allows us to address aspects of internal and external validity and to answer new research questions that address important problems in clinical practice.

This example demonstrates the applicability of the RE-AIM framework to inform the translation of an efficacious clinical informatics intervention into routine care and to study its effectiveness under typical rather than tightly controlled conditions.

Case Study 2: A PDA DSS for Guideline-based Care

The PDA DSS was designed to support guideline-based care for the screening, diagnosis, and management of obesity, depression, and tobacco dependence and was evaluated in an efficacy RCT. The design included diverse participants and heterogeneous practice settings, two features identified by Tunis et al 52 as important for increasing the value of research for clinical practice and policy, and incorporated data collection related to all RE-AIM dimensions.

The DSS was built on top of an existing PDA application for documenting patient encounters and provided: (1) a reminder to screen for one of the three conditions, (2) a guideline-based screening assessment, (3) automated generation of a diagnosis based upon screening, (4) documentation of a patient goal (e.g., weight loss, quit smoking), and (5) a guideline-based care plan template tailored to selected demographic characteristics, the assessment, diagnosis, and patient goal. 53 Registered nurses in Advanced Practice Nurse training were randomized within clinical specialty (e.g., Adult Nurse Practitioner, Pediatric Nurse Practitioner) to receive the DSS for one of the three conditions and the control PDA application for the remaining two conditions. 54 The control application for each condition and its associated guideline contained all the diagnoses and interventions that were in the DSS. 55
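The five-step flow above (screen, automated diagnosis, patient goal, tailored plan) can be sketched as follows. This is purely illustrative: the BMI cut-point, function names, and template text are placeholders, not the study's actual guideline content or logic.

```python
def screen_bmi(weight_kg: float, height_m: float) -> float:
    """Guideline-based screening assessment for obesity (adult BMI)."""
    return weight_kg / (height_m ** 2)


def auto_diagnosis(bmi: float) -> str:
    """Automated diagnosis from the screening result.
    The >= 30 adult cut-point is a placeholder for guideline logic."""
    return "obesity" if bmi >= 30 else "no obesity diagnosis"


def care_plan(diagnosis: str, goal: str, age_years: int) -> str:
    """Care plan template tailored to diagnosis, patient goal, and
    a demographic characteristic (age); text is illustrative only."""
    if diagnosis == "obesity":
        return f"Plan (age {age_years}, goal: {goal}): diet and activity counseling"
    return "No obesity care plan indicated"


bmi = screen_bmi(95.0, 1.75)   # about 31.0
dx = auto_diagnosis(bmi)       # "obesity"
plan = care_plan(dx, "weight loss", 45)
```

The actual DSS drew diagnoses and interventions from the condition-specific guidelines; this sketch only shows how the screening, diagnosis, goal, and tailored template stages chain together.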

For consistency with the Choice case study, Reach and Efficacy are described following Adoption, Implementation, and Maintenance. Research questions for each dimension of the framework for this study are shown in the accompanying table. The intent of this example is to illustrate how the RE-AIM dimensions were incorporated into the study design as guidance for other investigators wishing to integrate the framework, not to report the study findings.

Adoption

Because we wished to recruit participants from heterogeneous practice settings, we selected guidelines that were appropriate across clinical populations, age groups, and settings. The nurse participants represented six specialties spanning acute care and ambulatory care settings in four states (Adult Nurse Practitioner, Acute Care Nurse Practitioner, Family Nurse Practitioner, Oncology Nurse Practitioner, Pediatric Nurse Practitioner, and Women's Health Nurse Practitioner). Settings ranged from small private practices in suburban settings to urban academic medical centers.

Implementation and Maintenance

We built several additional processes into the study design to facilitate the implementation and maintenance of the intervention and the associated capture of data related to the Implementation and Maintenance dimensions of RE-AIM. For example, we logged training and booster training sessions, asked nurse participants to complete questionnaires regarding their perceptions of the DSS's ease of use and usefulness, and conducted focus groups to discuss barriers to use. 56

In terms of automatically capturing data related to the Implementation and Maintenance dimensions of RE-AIM, all data were time-stamped at entry and downloaded by users from the PDA DSS to a central database on a regular basis. The database contained information about the users such as specialty, semester in the educational program, setting, and the guideline to which they were randomized. 55 Consequently, we could determine patterns of use and ascertain whether use varied according to patient characteristics, Nurse Practitioner specialty, time, setting, or guideline. Database reports were run at the nurse and specialty levels approximately once per month, and we iteratively refined the reports, which were intended to reinforce use of the PDA DSS.
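The nurse-level and specialty-level use counts described above amount to simple aggregation over the time-stamped encounter records. A minimal sketch follows; the field names (nurse, specialty, guideline, timestamp) are assumptions about the study database, not its actual schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical downloaded encounter records from the central database
encounters = [
    {"nurse": "A", "specialty": "Adult NP", "guideline": "obesity",
     "timestamp": datetime(2007, 3, 2, 10, 15)},
    {"nurse": "A", "specialty": "Adult NP", "guideline": "obesity",
     "timestamp": datetime(2007, 3, 9, 14, 30)},
    {"nurse": "B", "specialty": "Pediatric NP", "guideline": "depression",
     "timestamp": datetime(2007, 3, 5, 9, 0)},
]

# Monthly report: use counts at the nurse and specialty levels
by_nurse = Counter(e["nurse"] for e in encounters)
by_specialty = Counter(e["specialty"] for e in encounters)
```

Cross-tabulating the same records by guideline, setting, or time period would answer the pattern-of-use questions named in the text.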

Reach

The PDA DSS was available for use in adult and pediatric patient encounters by nurse participants. We established the age for screening eligibility at 2 years for obesity, 8 years for depression, and 9 years for smoking, and captured data on the proportion of those eligible for screening who were screened. The PDA DSS also facilitated collection of data regarding patient age, gender, insurance status, and race/ethnicity, thus facilitating an assessment of the representativeness of those who were screened, diagnosed, and treated for the guideline-related conditions.
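The eligibility rule and the screened-of-eligible proportion stated above can be expressed directly. The function names are illustrative; the minimum ages come from the text.

```python
# Minimum ages for screening eligibility, as stated in the study design
MIN_SCREENING_AGE = {"obesity": 2, "depression": 8, "smoking": 9}


def eligible_for_screening(condition: str, age_years: int) -> bool:
    """True if a patient of the given age meets the study's minimum
    screening age for the condition."""
    return age_years >= MIN_SCREENING_AGE[condition]


def screening_reach(n_screened: int, n_eligible: int) -> float:
    """Proportion of eligible encounters in which screening occurred."""
    return n_screened / n_eligible if n_eligible else 0.0
```

Applying `eligible_for_screening` to each encounter yields the denominator for `screening_reach`, the Reach measure captured in the study.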

Efficacy

The primary outcome of the RCT was guideline adherence, which we measured with regard to three health problems of high public health significance: obesity, depression, and cigarette smoking. Because the study design included randomization within specialty to guideline, subgroup analyses by specialty and by guideline were possible. Such analyses contribute to understanding under what circumstances an intervention is efficacious.

This case study illustrates the usefulness of the RE-AIM framework for informing the capture of relevant contextual data regarding the Adoption, Implementation, and Maintenance dimensions within the context of an efficacy RCT. Including data collection and analysis related to RE-AIM dimensions often requires a variety of methods not typically integrated into efficacy RCTs and, consequently, additional resources. For example, we conducted focus groups to assess predisposing, enabling, and reinforcing factors. Where appropriate, data related to the RE-AIM dimensions should be captured automatically as part of the clinical informatics intervention.

Discussion

Several authors have provided guidance on designing and reporting RCT results in a manner that attends to the internal validity of the research design. 32–35 This is essential for determining the efficacy of clinical informatics interventions. As a complement to criteria that focus on internal validity, we have described an expansion of the RE-AIM framework to address the external validity of clinical informatics intervention studies. This is a critical step in moving from measuring efficacy to measuring effectiveness of a particular clinical informatics intervention, i.e., Stead et al's fifth level of evaluation. 23 As with clinical science, 3,4 data related to the RE-AIM dimensions are vital to understanding and removing the barriers between clinical informatics intervention efficacy research and its translation into routine clinical practice and healthcare policy. Moreover, such data are likely to be critical for informing comparative effectiveness research of clinical informatics interventions.

Footnotes

The research described and the preparation of the manuscript were supported by Mobile Decision Support for Advanced Practice Nursing (R01NR008903, S. Bakken, Principal Investigator) and the Center for Evidence-based Practice in the Underserved (P30NR010677, S. Bakken, Principal Investigator), both funded by the National Institute of Nursing Research, and by Communication and Information Sharing between Patients and Their Care Providers (Grant 176823/S10, C. Ruland, Principal Investigator), funded by the Norwegian Research Council.

References

  • 1.Zerhouni E. The NIH Roadmap for Medical Research, 2003. http://nihroadmap.nih.gov/. Accessed Jan 5, 2008.
  • 2.Zerhouni E. The NIH roadmap Science 2003;302(5642):63-72. [DOI] [PubMed] [Google Scholar]
  • 3.Sung NS, Crowley Jr WF, Genel M, et al. Central challenges facing the national clinical research enterprise J Am Med Assoc 2003;289(10):1278-1287. [DOI] [PubMed] [Google Scholar]
  • 4.Westfall JM, Mold J, Fagnan L. Practice-based research—“Blue Highways” on the NIH roadmap J Am Med Assoc 2007;297(4):403-406. [DOI] [PubMed] [Google Scholar]
  • 5.Woolf SH. The meaning of translational research and why it matters J Am Med Assoc 2008;299(2):211-213. [DOI] [PubMed] [Google Scholar]
  • 6.Clinical and Translational Science Awards, 2008. http://www.ctsaweb.org/. Accessed Aug 30, 2008.
  • 7.Bakken S, Lantigua RA, Busacca L, Bigger JT. Perceived barriers, enablers, and incentives for research among current and potential members of an urban PBRN: A mixed methods analysis J Am Board Fam Med 2009;22:436-445. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality enhancement research initiative (QUERI): A collaboration between research and clinical practice Med Care 2000;38(6):I17-I25(Suppl 1). [PubMed] [Google Scholar]
  • 9.McQueen L, Mittman BS, Demakis JG. Overview of the Veterans Health Administration (VHA) quality enhancement research initiative (QUERI) J Am Med Inform Assoc 2004;11(5):339-343. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Glasgow RE, McKay HG, Piette JD, Reynolds KD. The RE-AIM framework for evaluating interventions: What can it tell us about approaches to chronic illness management? Patient Educ Couns 2001;44(2):119-127. [DOI] [PubMed] [Google Scholar]
  • 11.Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition Am J Pub Health 2003;93(8):1261-1267. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Glasgow RE, Green LW, Klesges LM, et al. External validity: We need to do more Ann Behav Med 2006;31(2):105-108. [DOI] [PubMed] [Google Scholar]
  • 13.Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, Vogt TM. Evaluating the impact of health promotion programs: Using the RE-AIM framework to form summary measures for decision making involving complex issues Health Educ Res 2006;21(5):688-694. [DOI] [PubMed] [Google Scholar]
  • 14.Practice-based research networks (PBRNs) and the translation of research into practice, 2003. http://grants.nih.gov/grants/guide/pa-files/PAR-04-041.html, 2006. Accessed Nov 22, 2008.
  • 15.Glasgow RE, Davidson KW, Dobkin PL, Ockene J, Spring B. Practical behavioral trials to advance evidence-based behavioral medicine Ann Behav Med 2006;31(1):5-13. [DOI] [PubMed] [Google Scholar]
  • 16.McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States N Engl J Med 2003;348(26):2635-2645. [DOI] [PubMed] [Google Scholar]
  • 17.Glasziou P, Guyatt GH, Dans AL, et al. Applying the results of trials and systematic reviews to individual patients ACP J Club 1998;129(3):A15-A16. [PubMed] [Google Scholar]
  • 18.Walker JS, Bruns EJ. Building on practice-based evidence: Using expert perspectives to define the wraparound process Psychiatr Serv 2006;57(11):1579-1585. [DOI] [PubMed] [Google Scholar]
  • 19.Weisz J, Kazdin A. Concluding thoughts: Present and future of evidence-based psychotherapies for children and adolescents. In: Kazdin A, Weisz J, et al., editors. Evidence-Based Psychotherapies for Children and Adolescents. New York: Guilford; 2003.
  • 20.Hoagwood K, Burns BJ, Kiser L, Ringeisen H, Schoenwald SK. Evidence-based practice in Child and Adolescent Mental Health Services Psychiatr Serv 2001;52(9):1179-1189. [DOI] [PubMed] [Google Scholar]
  • 21.Roy-Byrne PP, Sherbourne CD, Craske MG, et al. Moving treatment research from clinical trials to the real world Psychiatr Serv 2003;54(3):327-332. [DOI] [PubMed] [Google Scholar]
  • 22.Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology Eval The Health Care Prof 2006;29(1):126-153. [DOI] [PubMed] [Google Scholar]
  • 23.Stead WW, Haynes RB, Fuller S, et al. Designing medical informatics research and library—Resource projects to increase what is learned J Am Med Inform Assoc 1994;1(1):28-33. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Anderson J, Aydin C, Jay S. Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications; 1994.
  • 25.Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology MIS Q 1989;13(3):319-340. [Google Scholar]
  • 26.Rogers E. Diffusion of Innovations. New York: Free Press; 1995.
  • 27.Ruland C. Integrating patient preferences for self-care capability in nursing care: Effects on nurses' care priorities and patient outcomes. Doctoral Dissertation, Cleveland, Ohio: Case Western Reserve University; 1998.
  • 28.Sobo EJ, Bowman C, Gifford AL. Behind the scenes in health care improvement: The complex structures and emergent strategies of implementation science Soc Sci Med 2008;67(10):1530-1540. Epub 2008 Aug 27. [DOI] [PubMed] [Google Scholar]
  • 29.Chambers DA. Advancing the science of implementation: A workshop summary Adm Policy Ment Health 2008;35(1–2):3-10. [DOI] [PubMed] [Google Scholar]
  • 30.Madon T, Hofman KJ, Kupfer L, Glass RI. Public health. Implementation science. Science 2007;318(5857):1728-1729. [DOI] [PubMed] [Google Scholar]
  • 31.RE-AIM.org, 2007. http://www.re-aim.org. Accessed Oct 4, 2008.
  • 32.Altman DG. Better reporting of randomised controlled trials: The CONSORT statement BMJ 1996;313(7057):570-571. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: Explanation and elaboration Ann Intern Med 2008;148(4):295-309. [DOI] [PubMed] [Google Scholar]
  • 34.Moher D, Schulz KF, Altman DG. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials Lancet 2001;357(9263):1191-1194. [PubMed] [Google Scholar]
  • 35.Shcherbatykh I, Holbrook A, Thabane L, Dolovich L. Methodologic issues in health informatics trials: The complexities of complex interventions J Am Med Inform Assoc 2008;15(5):575-580. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: The RE-AIM framework Am J Pub Health 1999;89(9):1322-1327. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Glasgow RE. eHealth evaluation and dissemination research Am J Prev Med 2007;32(5):S119-S126(Suppl 1). [DOI] [PubMed] [Google Scholar]
  • 38.Glasgow RE, Nelson CC, Strycker LA, King DK. Using RE-AIM metrics to evaluate diabetes self-management support interventions Am J Prev Med 2006;30(1):67-73. [DOI] [PubMed] [Google Scholar]
  • 39.Horn SD, Gassaway J. Practice-based evidence study design for comparative effectiveness research Med Care 2006;45(10):50-57(Suppl 2). [DOI] [PubMed] [Google Scholar]
  • 40.Pinto BM, Friedman R, Marcus BH, et al. Effects of a computer-based, telephone-counseling system on physical activity Am J Prev Med 2002;23(2):113-120. [DOI] [PubMed] [Google Scholar]
  • 41.Kukafka R, Johnson SB, Linfante A, Allegrante JP. Grounding a new information technology implementation framework in behavioral science: A systematic analysis of the literature on IT use J Biomed Inform 2003;36(3):218-227. [DOI] [PubMed] [Google Scholar]
  • 42.Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 4th edn. Boston, MA: McGraw-Hill; 2005.
  • 43.Encyclopedia of Public Health, 2005. http://www.enotes.com/public-health-encyclopedia/. Accessed Nov 4, 2008.
  • 44.Holmes-Rovner M, Valade D, Orlowski C, et al. Implementing shared decision-making in routine practice: Barriers and opportunities Health Expect 2000;3(3):182-191. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Aronsky D, Chan KJ, Haug PJ. Evaluation of a computerized diagnostic decision support system for patients with pneumonia: Study design considerations J Am Med Inform Assoc 2001;8(5):473-485. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Ruland C, White T, Stevens M, Fanciullo G, Khilani S. Effects of a computerized system to support shared decision making in symptom management of cancer patients: Preliminary results J Am Med Inform Assoc 2003;10(6):573-579. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Ruland CM. Handheld technology to improve patient care: Evaluating a support system for preference-based care planning at the bedside J Am Med Inform Assoc 2002;9(2):192-201. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Ruland C. Decision support for patient preference-based care planning: Effects on nursing care and patient outcomes J Am Med Inform Assoc 1999;6(4):304-312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Ruland C, Roslien J, Bakken S, Kristiansen J. Comparing tailored computerized symptom assessments to interviews and questionnaires AMIA Annu Symp Proc 2006:1081. [PMC free article] [PubMed]
  • 50.Ruland CM. Unpublished data.
  • 51.Ruland CM. Clinicians' perceived usefulness of a support system for patient-centered cancer care Stud Health Technol Inform 2006;124:624-630. [PubMed] [Google Scholar]
  • 52.Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy JAMA 2003;290(12):1624-1632. [DOI] [PubMed] [Google Scholar]
  • 53.Bakken S, Currie LM, Lee NJ, et al. Integrating evidence into clinical information systems for nursing decision support Int J Med Inform 2008;77(6):413-420. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Bakken S, Roberts WD, Chen E, et al. PDA-based informatics strategies for tobacco use screening and smoking cessation management: A case study Stud Health Technol Inform 2007;129(2):1447-1451. [PubMed] [Google Scholar]
  • 55.Lee NJ, Chen E, Mendonca EA, Velez O, Bakken S. Database design and implementation for a PDA-based decision support system for screening and tailored care planning AMIA Annu Symp Proc 2007:1025. [PubMed]
  • 56.John R, Buschman P, Chaszar M, et al. Development and evaluation of a PDA-based decision support system for pediatric depression screening Stud Health Technol Inform 2007;129(2):1382-1386. [PubMed] [Google Scholar]

Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
