Author manuscript; available in PMC: 2021 May 1.
Published in final edited form as: Am J Bioeth. 2020 May;20(4):83–85. doi: 10.1080/15265161.2020.1730511

Implementation Science Can Do Even More for Translational Ethics

Katherine W Saylor 1, Megan C Roberts 2
PMCID: PMC7164609  NIHMSID: NIHMS1578949  PMID: 32208087

In their paper “The ‘Ought-Is’ Problem,” Sisk and colleagues present a strong case for translating bioethics research and show how implementation science can help identify requirements and barriers for bioethics findings to be incorporated into practice (Sisk et al. 2020). However, by showcasing only one type of implementation science framework, they leave out other potentially useful contributions implementation science can make to bioethics. In this commentary, we present three categories of implementation science frameworks: process models, frameworks for understanding implementation barriers and facilitators (discussed by Sisk and colleagues), and evaluative frameworks. We propose that process models and evaluative frameworks have much to contribute to translational bioethics because they give specific recommendations for how to design and evaluate implementation efforts.

As the authors write, the purpose of ethical deliberation is to enable ethical practice; and yet, ethical norms often fail to be translated into practice. In addition to bioethicists’ personal motivation to find meaning in theoretical work, constraints from funders, policy-makers, and universities drive bioethicists to demonstrate practical value and impact from ethics research investments (Mathews et al. 2016). Theoretical bioethicists and practitioners have specific roles and responsibilities for bridging the theory-practice gap, and in fact there can be value to translation in both directions – from theory to practice, and from practice to theory (Bærøe 2014). Empirical bioethicists see a role for their research in enabling translation from ethical analysis to ethically justifiable behavior—including through identifying implementation barriers and facilitators (Solomon 2005). However, as with any implementation process, when ideal ethical norms are specified and applied to messy, real-world settings, there is a risk that the initial content of the ideal norms will be distorted (Cribb 2010). Methods to assess and enhance efforts to bridge the translation gap are needed.

While Sisk and colleagues are not the first to call for coordinated, interdisciplinary efforts to address this ethics translation gap, they may be the first to propose the use of implementation science. Originally developed for translation of public health applications, implementation science has natural applications to translational ethics. Sisk and colleagues describe in detail how one framework, the Consolidated Framework for Implementation Research (CFIR), can help researchers systematically study the requirements and barriers to implementing known best practices for informed consent. As they describe, empirical bioethics researchers have made great strides in developing and testing best practices to enhance comprehension during informed consent processes—for example, including graphics, using plain language, and taking advantage of formatting to improve readability. Yet these best practices have not become widespread. Working through CFIR provides a systematic way to ensure that all potentially relevant factors influencing implementation are examined: it can put existing empirical evidence about the implementation process in context, may reveal knowledge gaps, and can surface problems and targets that need to be addressed. In short, CFIR helps identify the factors that should be taken into account when implementing evidence into practice.

Yet, this is just one framework of many theoretical lessons to be learned from implementation science. Implementation science, as a field, aims to understand, evaluate, and facilitate the translation of evidence-based interventions into practice. Theoretical implementation science consists of theories of causal relationships between constructs, models of how all important constructs interact within a system, and frameworks to systematically organize constructs descriptively without ascribing causal relationships (Bauer et al. 2015). Importantly, implementation scientists have developed theories, models and frameworks that can be used for three primary goals: “(1) describing and/or guiding the process of translating research into practice, (2) understanding and/or explaining what influences implementation outcomes and (3) evaluating implementation” (Nilsen 2015, 2). All of these processes are relevant to translational bioethics.

First, the goal of describing or guiding the implementation process is supported by process models. Process models aim to explain causal processes and specify steps in the translational process. Early models were typically linear, but more recent models allow for iterative, reflexive, and context-dependent processes. Some of these models provide instructions on how to develop an implementation process, and they emphasize deliberate implementation planning at early stages (Nilsen 2015). One example of a process model, the Quality Implementation Framework (QIF), provides guidance on the steps that should be taken during four phases of implementation: (1) initial considerations regarding the host setting; (2) creating a structure for implementation; (3) ongoing structure once implementation begins; and (4) improving future applications (Meyers, Durlak, and Wandersman 2012).

Second, the goal of understanding what influences translation is supported by determinant frameworks, classic theories, and implementation theories. CFIR, described above, is a determinant framework: determinant frameworks categorize, understand, and explain the factors that inhibit or enable implementation. Some also aim to describe (but not necessarily explain) relationships between determinants. Determinant frameworks take a systems approach, meaning that they include characteristics of the intervention, the users (providers and/or consumers/patients), the context, and the desired implementation outcomes. Using accepted determinant frameworks ensures consistency across studies and helps reduce the risk of ignoring important determinants (Nilsen 2015). However, determinant frameworks do not provide theoretical explanations of how change occurs, and therefore do not produce direct recommendations. Within the broad umbrella of understanding the facilitators and barriers to implementation, there are also theories that provide deeper information about the relationships among a smaller number of constructs. Some of these theories are borrowed from other fields, such as nursing and psychology, and some are born from the field of implementation science itself.

Finally, the third goal that Nilsen describes, evaluating implementation, is supported by evaluation frameworks. These frameworks provide a structure for evaluation and lists of implementation outcomes that can be measured. For example, outcomes measured under one common evaluation framework include acceptability of the intervention, adoption or uptake rate, appropriateness to the setting, costs, feasibility, fidelity of the practice to the intended intervention, penetration, and sustainability (Proctor et al. 2011). Another common framework, the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) evaluation framework, is a tool for assessing both implementation outcomes and the effectiveness of the intervention in real-world settings (Glasgow et al. 2006).

Turning back to the case study of implementing best practices in informed consent, we can now explore how the QIF process model and the RE-AIM evaluation framework can provide useful guidance in addition to the CFIR determinant framework. QIF would be useful for bioethics researchers and practitioners (in this case, clinical researchers) working together to develop an implementation plan for informed consent practices. In phase 1, clinical researchers would need to, for example, get buy-in from research coordinators and PIs, assess how researchers are currently doing informed consent, determine whether proposed informed consent templates need to be adapted for the setting, and identify training needs for those conducting the informed consent process. In phase 2, they would need to establish a team of administrators, PIs and research coordinators responsible for implementing the new informed consent process, and develop a setting-specific strategy for how the new process will work. In phase 3, researchers would need to provide ongoing training and support for people conducting informed consent, and monitor and evaluate how implementation is going. Finally, in phase 4, they would use lessons learned from the early implementation efforts to optimize uptake and fidelity or reduce the burdens on key participants. Thus, QIF provides step-by-step guidance that can help researchers and practitioners bridge the translation gap for informed consent best practices.

Once an implementation strategy for improved informed consent is in place, the RE-AIM framework can be used to evaluate implementation and effectiveness outcomes. To evaluate “reach”, implementation researchers could monitor the percentage of PIs or research coordinators who are using the new informed consent process, and the percentage of research participants who received the new approach. To evaluate “effectiveness” of the new processes in a real-world setting, researchers could assess research participant comprehension and retention (the desired outcomes of improved informed consent processes). Under “adoption”, researchers could look at uptake at a higher organizational level—how many departments or clinics now use or even require the new informed consent process? For “implementation”, researchers would evaluate how close actual practices are to the intended best practices—for example, whether new informed consent documents avoid jargon and use graphics effectively. Finally, for “maintenance”, researchers could evaluate what structures and institutional supports are in place to enable long-term maintenance of the improved informed consent processes. Working through this evaluation framework will help researchers know how successful their implementation process is and identify whether additional implementation support is needed.

In our commentary, we have described three types of implementation frameworks and illustrated how they are relevant to guiding, understanding, and evaluating best practices in support of ethical norms, specifically improved informed consent processes. We commend Sisk and colleagues for bringing implementation science lessons to bear on bridging the theory-practice gap in bioethics, and we hope that our commentary extends their efforts to show how much implementation science has to offer translational ethics. Sisk and colleagues also make a valuable contribution to understanding translational bioethics by describing a hierarchy or translational pipeline of claims ranging from high-level aspirational norms down to tested interventions. In the case study of improved informed consent processes, they focus on translating evidence-based interventions into practice, which is an obvious application of implementation science approaches. We believe that implementation science approaches may also be useful in more abstract discovery stages earlier in the pipeline. Exploring the application of implementation science thinking to translation from aspirational norms to specific norms and to ethical interventions would be a fascinating avenue for future consideration.

References

  1. Bærøe Kristine. 2014. “Translational Ethics: An Analytical Framework of Translational Movements between Theory and Practice and a Sketch of a Comprehensive Approach.” BMC Medical Ethics 15 (1): 71. doi: 10.1186/1472-6939-15-71.
  2. Bauer Mark S., Damschroder Laura, Hagedorn Hildi, Smith Jeffrey, and Kilbourne Amy M. 2015. “An Introduction to Implementation Science for the Non-Specialist.” BMC Psychology 3 (1): 32. doi: 10.1186/s40359-015-0089-9.
  3. Cribb A. 2010. “Translational Ethics? The Theory-Practice Gap in Medical Ethics.” Journal of Medical Ethics 36 (4): 207–10. doi: 10.1136/jme.2009.029785.
  4. Glasgow Russell E., Klesges Lisa M., Dzewaltowski David A., Estabrooks Paul A., and Vogt Thomas M. 2006. “Evaluating the Impact of Health Promotion Programs: Using the RE-AIM Framework to Form Summary Measures for Decision Making Involving Complex Issues.” Health Education Research 21 (5): 688–94. doi: 10.1093/her/cyl081.
  5. Mathews Debra J. H., Hester D. Micah, Kahn Jeffrey, McGuire Amy, McKinney Ross, Meador Keith, Philpott-Jones Sean, Youngner Stuart, and Wilfond Benjamin S. 2016. “A Conceptual Model for the Translation of Bioethics Research and Scholarship.” Hastings Center Report 46 (5): 34–39. doi: 10.1002/hast.615.
  6. Meyers Duncan C., Durlak Joseph A., and Wandersman Abraham. 2012. “The Quality Implementation Framework: A Synthesis of Critical Steps in the Implementation Process.” American Journal of Community Psychology 50 (3–4): 462–80. doi: 10.1007/s10464-012-9522-x.
  7. Nilsen Per. 2015. “Making Sense of Implementation Theories, Models and Frameworks.” Implementation Science 10 (1): 53. doi: 10.1186/s13012-015-0242-0.
  8. Proctor Enola, Silmere Hiie, Raghavan Ramesh, Hovmand Peter, Aarons Greg, Bunger Alicia, Griffey Richard, and Hensley Melissa. 2011. “Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda.” Administration and Policy in Mental Health and Mental Health Services Research 38 (2): 65–76. doi: 10.1007/s10488-010-0319-7.
  9. Sisk Bryan, Mozersky Jessica, Antes Alison L., and DuBois James M. 2020. “The ‘Ought-Is’ Problem: An Implementation Science Framework for Translating Ethical Norms into Practice.” American Journal of Bioethics.
  10. Solomon Mildred Z. 2005. “Realizing Bioethics’ Goals in Practice: Ten Ways ‘Is’ Can Help ‘Ought.’” Hastings Center Report 35 (4): 40–47. doi: 10.1353/hcr.2005.0048.
