Front Public Health. 2018 Mar 22;6:71. doi: 10.3389/fpubh.2018.00071

Table 1.

Examples of applying RE-AIM dimension(s) in different settings across different phases of projects.

Columns: Project stage | Clinical | Community | Corporate | Overall
Before implementation

Consider the project impact on all RE-AIM dimensions and prioritize the focus for planning and evaluation. Example: stakeholders' interest in intervention reach and representativeness within the setting.

Clinical:
Measure: identify potentially eligible patients through the electronic medical record.
Considerations: sensitivity analyses may be needed to determine sample size, because coding inconsistencies may influence the numerator or denominator and not all desired data may be available.
Prioritization: although reach is an important dimension to consider, in this example the team prioritizes the effect on the behavioral outcome.

Community:
Measure: estimate and compare eligible participants to demographics using Behavioral Risk Factor Surveillance System or Census data.
Considerations: the reach proportion may seem extremely small when county-level data are used to determine the denominator. Reach and representativeness within each delivery site, and comparisons across sites, may help clarify for whom the intervention is working (or not).
Prioritization: because the anticipated outcomes of evidence-based programs are known, delivering programs at multiple sites places additional emphasis on training and fidelity monitoring (to ensure outcomes are achieved).

Corporate:
Measure: identify potentially eligible participants from customers who signed up for the intervention via a wellness card.
Considerations: gain "buy in" from corporate leadership. Use existing corporate infrastructure to identify participants.
Prioritization: implementation factors should be prioritized and carefully considered, as they play a key role in the program's success and ongoing sustainability. Organizations with multiple sites/locations may require local "buy in".

Overall:
Keep the target population as large and as diverse or representative as possible for greater public health impact.
Consider ways to enhance recruitment of those most vulnerable and most at risk.
Use a team-based approach to decide which dimension is the priority for the work, and allocate resources accordingly.
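To make the reach discussion above concrete, here is a minimal Python sketch of computing a reach proportion under a clinic-level (EMR-based) versus a county-level (Census/BRFSS-based) denominator. All counts are invented for illustration, and `reach_proportion` is a hypothetical helper, not part of any RE-AIM tooling.

```python
# Hypothetical illustration of the RE-AIM "reach" denominator caveat.
# Reach = participants enrolled / potentially eligible population.

def reach_proportion(participants: int, eligible: int) -> float:
    """Return the fraction of the eligible population that was reached."""
    if eligible <= 0:
        raise ValueError("eligible population must be positive")
    return participants / eligible

# Clinic-level denominator (e.g., eligible patients identified in the EMR)
clinic_reach = reach_proportion(participants=120, eligible=800)

# County-level denominator (e.g., from BRFSS/Census estimates): the same
# program looks far less successful, which is the caveat noted above.
county_reach = reach_proportion(participants=120, eligible=25_000)

print(f"clinic-level reach: {clinic_reach:.1%}")   # 15.0%
print(f"county-level reach: {county_reach:.2%}")   # 0.48%
```

Comparing the two denominators side by side, as here, is one way to present reach to stakeholders without overstating or understating it.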

Determine how each dimension will be included in the project: describe, assess, and/or intervene. Example: a decision is made to intervene to improve adoption, describe the effect, and assess implementation fidelity.

Clinical:
Intervene: a health-care organization implements a new protocol for nursing rounds; some clinics receive an additional intervention to improve adoption of the protocol.
Describe: measure the effect of the new rounding protocol (i.e., did it achieve the outcome of interest?).
Assess: the degree to which the new nurse rounding protocol was delivered consistently over time and across clinics.

Community:
Intervene: improve adoption rates of a diabetes prevention intervention across YMCA centers.
Describe: rates of diabetes reduction or other proximal outcomes (weight loss, physical activity improvements).
Assess: the degree to which the diabetes prevention program was delivered consistently across YMCA sites.

Corporate:
Intervene: improve adoption rates of a wellness program at a local grocery store within a national chain.
Describe: outcomes, including unintended negative consequences of the wellness program.
Assess: the degree to which the wellness program was delivered consistently across grocery stores in that chain.

Overall:
Avoid the publication bias of reporting solely on the effect of an intervention on the desired outcome/behavior change without describing or assessing the other dimensions.
Consider a hybrid design when intervening on or assessing both the clinical/behavioral intervention and the implementation strategy.

Develop data collection and reporting procedures and timelines for selected RE-AIM dimensions.

Clinical:
Consider the metrics of interest and how data will be transferred.
Consider whether HIPAA compliance or a BAA/DUA* is needed.
Determine the appropriate timeline for observing outcomes (e.g., a full year of observation may be needed to see change in clinical outcomes).

Community:
Pragmatically consider what is feasible to collect based on the intended purpose of the intervention.
Consider who, in which community organization, has the time and skills necessary to deliver a program.

Corporate:
Weigh the pros and cons of subjective versus objective measures, primary versus secondary data, and self-reported data from participants versus administrative measures.
Consider the messages important to key stakeholders and the data that will drive such messages.
Determine the time and resources needed to obtain such measures and the formats/modalities for disseminating findings to leadership and consumers.

Overall:
Consider "balancing metrics" and unintended outcomes, as well as assessing and reducing potential health inequities.

Engage all project staff and partners in processes to ensure transparency, equity, compliance with regulations, and support (ongoing throughout the project). Example: determine appropriate stakeholders and where, when, how, and why they will be engaged.

Clinical:
Consider the structure of the clinical health-care organization and potential stakeholders, including nurses, nurse assistants, physicians, patients/family, and administrators.
Consider that it may not be appropriate to engage patients in an electronic medical record update.

Community:
Bring together stakeholders from diverse sectors (e.g., government, academia, faith-based, aging) to allow each to vocalize their "pain points" and definitions of success.
Form a comprehensive set of variables based on stakeholder priorities and use those elements to measure outcomes relevant to each stakeholder.
Consider the time course of putative effects.

Corporate:
Engaging multiple employee types (leadership, different divisions/roles) in conversations about new initiatives brings a sense of ownership, which can bolster initial and ongoing support. Including multiple employee perspectives in the planning phase helps identify the logistics of implementation and the anticipated outcomes, which increases initial adoption and the potential for long-term maintenance.

Overall:
Diverse perspectives allow all parties to provide feedback about processes and procedures so that a coordinated approach can be devised and executed with fidelity.
Construct a logic model to understand content, activities, and short- and long-term impact.

Plan for sustainability and generalizability from the outset.

Clinical:
Consider how intervention and assessment components can be implemented in settings with different histories, resources, and workflows.
Plan to communicate results with stakeholders, providing guidance, and align reporting with the data needed for sustainability decision-making.

Community:
Develop a coalition or advisory board to be engaged throughout the process, including those not directly involved in the project, to identify the information and resources needed to increase the likelihood of sustainability.

Corporate:
Include staff with clinical expertise to be engaged throughout the process, including those not directly involved in the project.

Overall:
Design for feasibility, success, and dissemination in ways that address each of the RE-AIM dimensions.
Design the intervention to be broadly applicable within and across settings.

During implementation/iterative assessment and adjustment

Monitor data periodically and at key points for each dimension (emphasis on priority dimensions).

Clinical:
Have brief (perhaps "automated"), ongoing data collection. Use rapid, pragmatic assessments to identify reasons for initial results.

Community:
Conduct training for program delivery staff on data collection procedures, including data completion and quality checks. Routinely export available data from administrative records and secondary sources to track real-time changes.

Corporate:
Have brief, "automated", ongoing data collection from routine company records. When supplementary outcome measures are used, conduct training for program delivery staff on data collection procedures, including data completion and quality checks. Routinely export available data from administrative records and secondary sources to track real-time changes.

Overall:
Use pragmatic, timely, and low-resource data collection for ongoing decision-making and engagement in the PDSA (Plan-Do-Study-Act) cycle over time and across dimensions.

Track implementation and costs, as well as fidelity to core components, if those are priority dimensions.

Clinical:
Discuss and implement low-burden cost assessments (interviews, tracking, observations) at key time points.

Community:
Develop systems for fidelity monitoring (observation) and adherence to the delivery protocol. Programs that breach fidelity are subject to additional unplanned costs (e.g., cost per participant increases if workshops are not filled to capacity).

Corporate:
Track implementation and variability across sites. Routinely compare outcomes across a random sample of sites to identify unanticipated fluctuations and potential protocol deviations.

Overall:
Real-time issues can be addressed more rapidly. This avoids a type 3 error (concluding that an intervention did not work when delivery may not have been consistent with the evidence-based components).
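The fidelity-cost point above (cost per participant rising when workshops run below capacity) can be illustrated with a small arithmetic sketch. The dollar figures and the `cost_per_participant` helper are invented for illustration only.

```python
# Hypothetical sketch: fixed delivery costs spread over fewer participants
# when a workshop is under-enrolled, raising the per-participant cost.

def cost_per_participant(fixed_cost_per_workshop: float, attendees: int) -> float:
    """Divide a workshop's fixed delivery cost across its attendees."""
    if attendees <= 0:
        raise ValueError("a workshop needs at least one attendee")
    return fixed_cost_per_workshop / attendees

full = cost_per_participant(1500.0, attendees=15)  # filled to capacity
half = cost_per_participant(1500.0, attendees=7)   # under-enrolled session

print(f"at capacity: ${full:.2f} per participant")     # $100.00
print(f"under-enrolled: ${half:.2f} per participant")  # $214.29
```

Tracking this ratio per site over time is one low-burden way to surface the fidelity breaches the table describes.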

Perform ongoing assessments of project evolution and adaptations.

Clinical:
Probe adaptations to address each RE-AIM dimension. Track implementation and impact over time and across settings and staff.

Community:
Routinely export available data from administrative records and secondary sources to track real-time progress. Regularly debrief with program deliverers and organizational partners to identify (and adapt to address) unforeseen challenges.

Corporate:
Track implementation and impact over time and across settings and staff. Collect stories and "positive deviance" examples to inspire other settings.

Overall:
Capture real-world adaptations by systematically collecting data on how, why, when, and by whom changes are being implemented in the field.

Reconsider the intervention impact on (and priorities for) all RE-AIM dimensions.

Clinical:
Use both quantitative and qualitative assessments. In applied cases, use "good enough" methods: ballpark estimates may suffice when "gold standard" methods are not feasible.

Community:
Assess whether the number of participants reached will enable meaningful outcomes to be observed, and adjust recruitment/delivery accordingly. Discuss project progress regularly with program deliverers, partnering organizations, and other key stakeholders to ensure transparency and identify changes in project priorities.

Corporate:
Assess program impact on the "bottom line" and estimated return on investment. Discuss project progress regularly with program deliverers, different locations, and other key stakeholders to ensure transparency and identify changes in project priorities. Continued discussion with stakeholders ensures that the appropriate impact is being achieved.

Overall:
Maintain ongoing consideration of which dimensions to intervene on, describe, or assess, particularly for long-term intervention work.

Decide if adaptations are needed to address problems with outcomes on one or more RE-AIM dimensions.

Clinical:
Pilot and then implement the intervention or implementation-strategy adaptations needed to improve performance, and track their impact.

Community:
Assess the participants engaged in the intervention to determine whether appropriate and equitable outcomes are observed. Depending on what is seen, there may be implications for refining participant recruitment and retention procedures.

Corporate:
Test different intervention or implementation-strategy adaptations needed to improve performance, and track their impact. Track innovations.

Overall:
Prioritize adaptations and test their impact across dimensions (see Figure 1).

After implementation/summative

Evaluate the impact on all relevant RE-AIM dimensions.

Clinical:
Consider subgroup as well as overall effects. Consider overall impact on quality of life and patient-centered outcomes. Include balancing measures.

Community:
Begin with priority dimensions and "low-hanging fruit": reach and implementation measures may be easily assessed, whereas adoption and maintenance may require more in-depth processes to identify.

Corporate:
Consider subgroup effects in addition to overall outcomes. Based on findings, target the intervention to streamline resources and impact.

Overall:
Return to the RE-AIM plan and summarize accordingly. For a retrospective RE-AIM evaluation, use existing tools to ensure consideration of the concepts and elements within each dimension.

Calculate costs and cost-effectiveness for each RE-AIM dimension.

Clinical:
Report costs from the perspectives of multiple stakeholders: adopting settings, the clinical team, and patients. Estimate replication costs in different settings or under different conditions.

Community:
Consider the benefits of cost and cost-effectiveness in terms of expanding the initiative geographically versus scaling up locally (or both). Costs may differ for new initiatives relative to ongoing ones.

Corporate:
Summarize return on investment and the expected rate of return. Consider how cost-saving procedures can be employed in future roll-outs.

Overall:
Communicating and evaluating costs contributes to the generalizability of the intervention.
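As a rough illustration of the return-on-investment summary mentioned for the corporate column, a simple ROI calculation, ROI = (benefit - cost) / cost, might look like the sketch below. The dollar figures and the `roi` helper are invented for illustration; real analyses would also discount costs and benefits over time.

```python
# Hypothetical return-on-investment (ROI) sketch for a workplace program.
# ROI = (total benefit - total cost) / total cost.

def roi(total_benefit: float, total_cost: float) -> float:
    """Return ROI as a fraction (0.40 means a 40% return)."""
    if total_cost <= 0:
        raise ValueError("total cost must be positive")
    return (total_benefit - total_cost) / total_cost

# e.g., reduced absenteeism and health-plan spending vs. delivery cost
program_roi = roi(total_benefit=84_000.0, total_cost=60_000.0)
print(f"estimated ROI: {program_roi:.0%}")  # 40%
```

Reporting the same figure as both a fraction and a dollar net benefit can help when communicating with leadership.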

Determine why and how the observed RE-AIM results occurred.

Clinical:
Consider using mixed methods to blend objective data (the "what") and impressionistic data (the "why and how") to gain a more comprehensive understanding of the context of intervention successes and challenges.

Community:
Share findings with stakeholders within and external to organizations to contextualize and interpret findings. Multiple perspectives will drive decisions about impact, needed adaptations, and grand-scale dissemination (if appropriate).

Corporate:
Collect stories and reports about keys to success and share these at meetings, on company websites, etc.

Overall:
Contribute to the understanding of the mechanisms that achieved the effect for multiple populations, settings, and staff.

Disseminate findings for accountability, future projects, and policy change.

Clinical:
Base statistical findings on clinically significant outcomes valued by clinicians. Cost findings may be appropriate for leadership and health plans.

Community:
General findings about improvements among participants, along with testimonials, may be appropriate for community residents and partnering organizations.

Corporate:
Metrics related to productivity and staff absenteeism may be most appropriate for leadership to assess the cost-benefits of employee-level interventions. Staff outcomes and program feedback may be indicative of overall employee engagement.

Overall:
Determine the most appropriate format for distributing findings and which messages are most meaningful for each audience.

Plan for replication in other settings based on results.

Clinical:
Summarize lessons learned and provide guides for implementation and adaptation in different types of settings.

Community:
Consider reporting venues and organizations for sharing results (e.g., community-based organizations, governmental agencies).

Corporate:
Consider issues of scalability and how to efficiently implement successful programs company-wide (with appropriate adaptations).

Overall:
Develop implementation and adaptation guides for future applications and new settings.

*HIPAA, Health Insurance Portability and Accountability Act; BAA, business associate agreement; DUA, data use agreement.