Abstract
Bioethicists today are taking a greater role in the design and implementation of emerging technologies by “embedding” within development teams and providing direct guidance and recommendations. Ideally, these collaborations allow ethical considerations to be addressed in an active, iterative, and ongoing process through regular exchanges between ethicists and members of the technological development team. In this article, we discuss a challenge to this embedded ethics approach—namely, that bioethical guidance, even if embraced by the development team in theory, is not easily actionable in situ. Many of the ethical problems at issue in emerging technologies are associated with preexisting structural, socioeconomic, and political factors, making compliance with ethical recommendations sometimes less a matter of choice and more a matter of feasibility (at least at the local level at which an embedded bioethicist operates). Moreover, incentive structures within these systemic factors sustain them against reform efforts. We recommend that embedded bioethicists utilize principles from behavioral science (such as behavioral economics) to better understand and account for these incentive structures so as to encourage ethically responsible uptake of technological innovations.
As the pace of innovation increases, bioethics is attracting greater attention than ever before across academic, public, and political spheres (Adashi, Walters, and Menikoff 2018; Bishop and Jotterand 2006; Jongsma, Bredenoord, and Lucivero 2018; Dave and Dastin 2021). Discoveries in therapeutic implants, robotics, genetics, neurotechnologies, and the integration of artificial intelligence and machine learning (AI/ML) into health care are on the rise, generating new frontiers for bioethicists to consider how to effectively and equitably translate emerging insights and technologies into health care. As greater numbers of bioethicists cast their attention toward emerging technologies, developers are increasingly encouraged to “embed” bioethicists in early development stages to ensure that the final design elements of a system or tool reflect stakeholders’ (particularly patients’) values and interests (McLennan et al. 2020). This embedded arrangement often entails close collaboration between bioethicists and bio- and mechanical engineers, computer scientists, AI/ML specialists, and industry. As some of the pioneers of the embedded approach suggest, these collaborations allow ethical considerations to be addressed in an active, iterative, and ongoing process through regular exchanges (both formal and informal) between ethicists and technical members of the team (Bezuidenhout and Ratti 2021; Bonnemains, Saurel, and Tessier 2018; McLennan et al. 2020). The goal is that ethical recommendations will inform the pipeline from scientific/medical discovery and development to implementation, resulting in more just, equitable, and beneficial outcomes for patients and for society more broadly.
In our own work as bioethicists, we engage in such diverse collaborations to identify bio- and neuroethical issues raised by emerging technologies, including therapeutic neuroimplants, mechanical circulatory support implants, genomic and polygenic embryo screening, and AI/ML-based personalized risk prediction, to name a few (Barlevy et al. 2022; Kostick and Blumenthal-Barby 2021; Kostick-Quenet et al. 2018, 2022a, 2022b, 2022c; Muñoz et al. 2020, 2021; Pereira et al. 2022). Through our empirical and applied bioethics research in these areas, as well as our observations of the wider field of bioethics, we have found that multidisciplinary research collaborations, while necessary, are often not sufficient to ensure the integration of ethical recommendations in practice. True embedded ethics requires understanding audience receptivity and whether ethicists’ recommendations can be acted upon.
Limitations of Applied Ethics Without Embedding
While bioethicists’ greater proximity to medical technology developers may enhance the developers’ motivation to integrate our ethical recommendations, whether these recommendations can be acted upon is often governed by a multitude of socioeconomic, structural, and political factors. McLennan and colleagues (2022) suggest that in embedded ethics, the decision-making structure and allocations of responsibility should be clearly established in collaborative teams from the beginning. They point out that collaborations can also involve power differentials and diverse understandings—such as the idea that ethics can stymie innovation—that may present challenges to implementing ethical considerations. While helping to shape the attitudes, understandings, and buy-in of developers in service of ethical principles is one of the key roles of an embedded bioethicist, the obstacles encountered can render it difficult for any one entity or team member to simply “decide” to implement them.
Accordingly, the embedded ethics approach by itself does not address an enduring challenge: that development decisions are often constrained by systemic rather than localized factors. Because these systemic factors are often numerous and distributed, they are difficult to pinpoint, let alone influence. However, we do not view this as an invitation not to try—only as a goad to reconsider the appropriate targets of our efforts. To offer an illustrative example from one of the domains we work in, there is growing consensus among bioethicists and other scholars that the predictive models (often involving AI/ML) used to generate personalized clinical insights (prognostic risk estimates) depend on the accuracy and relevance of the data sets from which they were derived (Jobin, Ienca, and Vayena 2019). To guard against ethical pitfalls such as context bias and harmful or discriminatory application of predictive algorithms, ethics and legal scholars working at the level of government, nongovernmental organizations, and industry have proposed high-level standards to improve algorithmic fairness and accuracy, many revolving around improving data quality (Cohen et al. 2020; Friedman and Nissenbaum 1996; Heer 2018; Microsoft 2022; White House 2019). For example, the European Commission’s proposed AI Act defines “high-quality” data sets as data that are “sufficiently relevant, representative,” and that have “the appropriate statistical properties … as regards the persons or groups of persons on which the high-risk AI system is intended to be used” (European Commission 2021). This guideline is based on extensive research into the disparate impacts of algorithmic predictions on certain vulnerable subgroups and a collective desire to prevent further harms associated with the fast proliferation of predictive technologies.
However, as we illustrate elsewhere, applying these ethical criteria is often outside of developers’ jurisdiction (Kostick-Quenet et al. 2022b). It remains unclear who should claim responsibility for ensuring a “high-quality” data set. To say that developers should by default assume this responsibility—or that bioethicists should be charged with convincing them to—is to ignore the complex factors involved in manifesting such a data set from the start. Collecting high-quality, representative data—at least for use in health care—often begins with researchers and collaborators such as medical device or pharmaceutical companies, who design a study with specific questions in mind—typically whether a clinical intervention is safe and generates the intended effects. Clinical samples are often opportunistic rather than demographically balanced, with eligible patients enrolled on a first come, first served basis. Many if not most of these data sets are never intended for the purpose of training predictive algorithms and are notoriously rife with missing data and annotative gaps. Often, they do not meet strict data quality criteria to allow for balanced and well-indexed data sets. This is more often due to the bustling and time-scarce contexts in which data are collected, not to the lack of ethical foresight or good faith of clinical collaborators. Still, in many circumstances, these real-world data sets are the best we have.
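To make these data-quality concerns more concrete, the following is a minimal sketch (in Python) of the kind of representativeness and missingness audit a development team might run on a candidate training set before fitting a predictive model. The data frame, column names, reference distribution, and thresholds are hypothetical illustrations, not artifacts of the projects or regulations discussed here; the point is that such an audit can surface the gaps described above, but it cannot, on its own, supply the better data.

```python
# Minimal sketch of a training-data audit: checks demographic representativeness
# against a reference population and flags columns with excessive missing data.
# All names, numbers, and thresholds below are illustrative assumptions.
import pandas as pd

def audit_training_data(df: pd.DataFrame,
                        group_col: str,
                        reference_shares: dict[str, float],
                        max_missing_frac: float = 0.10,
                        max_share_gap: float = 0.05) -> dict:
    """Flag coarse representativeness and missing-data problems."""
    report = {}

    # 1. Missingness: fraction of missing values per column.
    missing = df.isna().mean()
    report["columns_with_excess_missingness"] = (
        missing[missing > max_missing_frac].round(3).to_dict()
    )

    # 2. Representativeness: compare observed group shares with a reference population.
    observed = df[group_col].value_counts(normalize=True)
    gaps = {
        group: round(observed.get(group, 0.0) - ref_share, 3)
        for group, ref_share in reference_shares.items()
    }
    report["underrepresented_groups"] = {
        g: gap for g, gap in gaps.items() if gap < -max_share_gap
    }
    return report

# Illustrative usage with a toy, opportunistically collected clinical sample.
records = pd.DataFrame({
    "age": [64, 71, None, 58, 69, 75],
    "ejection_fraction": [0.25, None, 0.30, None, 0.20, 0.35],
    "race": ["White", "White", "White", "Black", "White", "White"],
})
census_shares = {"White": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06}
print(audit_training_data(records, "race", census_shares))
```

Running this on the toy sample flags both missing clinical values and the absence of entire demographic groups; fixing either typically requires new data collection or data-sharing arrangements that lie outside a single developer's control, which is precisely the problem discussed above.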
But, further complicating our mission, even these suboptimal data sets are often proprietary, with access governed by strict data sharing agreements and ownership clauses to guard their monetization potential as scarce assets. With data now said to have surpassed even oil in value (Economist 2017), and considering the substantial resources and strategic collaborations needed to generate volumes of health data, it is no wonder corporations seek to minimize access and maintain exclusive control. The proprietary nature of the health information exchange (HIE) economy thus encourages data monetization and disincentivizes forms of data sharing that are not accompanied by a promise of shared profits. As such, bioethicists’ recommendations to implement more relevant and representative data sets can prove naïve to the complexities of data exchange markets and sharing agreements driven by an entirely different set of incentives.
This is only one example of a wider range of incentives and disincentives at play when attempting to align development goals with patient-centered values and design criteria. The further we progress in our work as bioethicists, the more apparent it becomes that it is not enough for us to call attention to ethical and normative considerations or to propose design solutions to prevent potential harms. Instead, if we wish to be effective, we must also be aware of, and responsive to, the range of incentives and disincentives that influence key stakeholders’ motivation and capacity to act on ethical guidelines, no matter how concrete or well operationalized we fashion those guidelines to be.
Integrating Behavioral Science into Embedded Ethics
To address this challenge in our work, we turn toward the framework of behavioral economics, a field that combines elements of cognitive psychology, anthropology, and economics to understand how and why individuals come to make the choices they do, as well as to identify sociocultural and environmental factors shaping human judgment and decision-making (Kahneman 2011). Employing Herbert Simon’s (1990) notion of “bounded rationality,” the idea that decision-making is constrained by limits on time, information, and cognitive capacity and so relies on shortcuts that can depart from strictly rational thought, behavioral economists attempt to map these decisional heuristics to help increase the effectiveness of human decision-making and even, in some cases, to “nudge” positive decision-making by actively manipulating the “choice architectures” that structure those decisions (Thaler and Sunstein 2008).
Applying these principles to our embedded ethics work, we have successfully used behavioral economics to encourage the responsible uptake of innovations in real clinical settings. We start by using qualitative and mixed methods to identify which cognitive and decisional criteria, shortcuts, and constraints (including positive and negative influences or reward systems) are at stake in a given decision. Once stakeholders’ motivations, incentives, and understandings are identified, we can draw on an extensive empirical literature outlining strategies to address these particular biases and cognitive shortcuts and to use established techniques—including framing, use of defaults, and normative influence—to positively shape desired decisional outcomes.
In a concrete example from our work, we developed a decision aid for patients with advanced heart failure who are considering a life-altering—and most often, lifesaving—intervention (Kostick et al. 2016). Working in an embedded ethics fashion, in close collaboration with cardiologists and nurse coordinators, we showed that the decision aid facilitates informed, values-congruent decision-making. Part of our approach involved ensuring that the tool, and our recommendations for its use, could be taken up easily by a busy clinical team. To that end, we identified positive and negative incentives among clinicians for using the aid as intended and found that, in some cases, using it was perceived as a disincentivizing loss of independence and self-direction in how clinical staff choose to conduct patient education. Recognizing from the literature that individuals are motivated more by avoiding loss—for example, of resources or power—than by being rewarded with gains, we sought to counter this perception and to incentivize uptake by promoting ownership and self-direction, encouraging key staff to make reflective choices about how, when, and where to use the aid in practice (Allcott and Rogers 2014; Kahneman 2011). In some ways, this constituted a tradeoff between standardization (and thus ideal ethical use) and increased uptake. However, we considered this a key takeaway from the project: by using insights from behavioral science, the range of such tradeoffs can be explicitly modeled, considered, and prioritized in a systematic fashion in light of project goals. Further, we learned that because behavioral change can fade once incentives are removed, our task was to better understand how to sustain stakeholders’ intrinsic motivations, so that over time they continue to implement and internalize our recommendations on their own (Uy et al. 2014).
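For readers unfamiliar with the loss-aversion asymmetry we drew on here, a standard formalization from prospect theory (the value function discussed in Kahneman 2011) can be written as follows; the parameters are empirical estimates from that literature, not quantities we measured in this project:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{for gains } (x \ge 0) \\
-\lambda(-x)^{\beta} & \text{for losses } (x < 0)
\end{cases}
\qquad \lambda > 1,\quad 0 < \alpha, \beta \le 1
\]

A loss-aversion coefficient λ greater than 1 (typical empirical estimates are roughly 2) means a loss is felt more strongly than an equivalent gain, which is why a perceived loss of clinical independence can outweigh the promised benefits of a standardized tool unless that perception is directly addressed.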
Proximate versus Ultimate Incentives
In the example above, our collaborating stakeholders were “near,” in the sense that they were located at partnering institutions whose motivational dynamics and infrastructure we came to know and understand via our collaborative and ethnographic approach. However, identifying the intended audience or “targets” of behavioral or attitudinal change may sometimes require looking further upstream than one’s immediate collaborators. Returning to our earlier example regarding high-quality data sets, if we as bioethicists were to encourage developers of an ML-based risk prediction tool to improve the quality of their training data set, this would be akin to asking a tree to drink more heartily from the soil absent a good rainfall. Often, the critical decisions we encourage collaborators and stakeholders to make may be outside of their immediate control. And while it would be ideal to suggest that bioethicists therefore have a role in seeking ultimate rather than proximate sources of power and influence, this is no easy task. Nor is it clear whether this falls within the scope of a bioethicist’s role. It certainly lies beyond the traditional boundaries of bioethical training and can stray into realms of social, economic, or political activism.
It is undeniable, however, that truly effective and sustainable embedded ethics often depends on going the extra mile to identify ultimate rather than proximate actors with the capacity to implement our proposed guidelines. While it may be beyond our bioethics training and expertise to strategize or implement systems-level incentives for achieving certain ethical guidelines or policy frameworks, this does not mean we should not “embed” in our already-embedded teams a range of other experts—social psychologists, cognitive and decision scientists—with the relevant knowledge and methodologies (such as game theory or ethnography) to guide our strategies.
Conclusion
In order for bioethicists to issue normative recommendations that actually translate into practice, they must embed within development teams (as a first step), they must work with those who understand behavioral and implementation science, and they must sometimes be willing to orient away from ideal normative ethics and towards non-ideal ethics (ethical recommendations that are feasible and actionable for the context or system in question; Victor and Guidry-Grimes 2021). Combining embedded ethics with behavioral science insights provides a concrete way for bioethicists to maximize their potential effectiveness. This proposal is a complement to the “bottom-up” AI ethics implied by the embedded approach, in its emphasis that “top-down” forces still exist and require attention (Annoni et al. 2018; Mittelstadt 2019). Our hope is that these approaches can meet in the middle, around a theoretical and methodological framework that helps to give bioethicists back their “teeth” in making actionable and effective ethical recommendations.
References
- Adashi EY, Walters LB, and Menikoff JA. 2018. “The Belmont Report at 40: Reckoning with Time.” Am J Public Health 108 (10): 1345–48.
- Allcott H, and Rogers T. 2014. “The Short-Run and Long-Run Effects of Behavioral Interventions: Experimental Evidence from Energy Conservation.” Am Econ Rev 104 (10): 3003–37.
- Annoni A, et al. 2018. Artificial Intelligence: A European Perspective. Luxembourg: Publications Office of the European Union. DOI: 10.2760/11251.
- Barlevy D, et al. 2022. “Capacities and Limitations of Using Polygenic Risk Scores for Reproductive Decision Making.” Am J Bioethics 22 (2): 42–45.
- Bezuidenhout L, and Ratti E. 2021. “What Does It Mean to Embed Ethics in Data Science? An Integrative Approach Based on Microethics and Virtues.” AI Soc 36 (3): 939–53.
- Bishop JP, and Jotterand F. 2006. “Bioethics as Biopolitics.” J Med Philos 31 (3): 205–12.
- Bonnemains V, Saurel C, and Tessier C. 2018. “Embedded Ethics: Some Technical and Ethical Challenges.” Ethics Info Technol 20 (1): 41–58.
- Cohen IG, et al. 2020. “The European Artificial Intelligence Strategy: Implications and Challenges for Digital Health.” Lancet Digit Health 2 (7): e376–e379.
- Dave P, and Dastin J. 2021. “Money, Mimicry and Mind Control: Big Tech Slams Ethics Brakes on AI.” Reuters, Sept. 8. https://www.reuters.com/technology/money-mimicry-mind-control-big-tech-slams-ethics-brakes-ai-2021-09-08/.
- Economist. 2017. “The World’s Most Valuable Resource Is No Longer Oil, but Data.” Economist, May 6. https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.
- European Commission. 2021. “Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.” Commission Staff Working Document. Brussels: European Commission. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=SWD:2021:0085:FIN:EN:PDF.
- Friedman B, and Nissenbaum H. 1996. “Bias in Computer Systems.” ACM Trans Info Syst 14 (3): 330–47.
- Heer J. 2018. “The Partnership on AI.” AI Matters 4 (3): 25–26.
- Jobin A, Ienca M, and Vayena E. 2019. “The Global Landscape of AI Ethics Guidelines.” Nat Mach Intell 1 (9): 389–99.
- Jongsma KR, Bredenoord AL, and Lucivero F. 2018. “Digital Medicine: An Opportunity to Revisit the Role of Bioethicists.” Am J Bioethics 18 (9): 69–70.
- Kahneman D. 2011. Thinking, Fast and Slow. New York: Macmillan.
- Kostick KM, and Blumenthal-Barby JS. 2021. “Avoiding ‘Toxic Knowledge’: The Importance of Framing Personalized Risk Information in Clinical Decision-Making.” Per Med 18 (2): 91–95. DOI: 10.2217/pme-2020-0174.
- Kostick KM, et al. 2016. “Development and Validation of a Patient-Centered Knowledge Scale for Left Ventricular Assist Device Placement.” J Heart Lung Transplant 35 (6): 768–76.
- Kostick KM, et al. 2018. “A Multisite Randomized Controlled Trial of a Patient-Centered Ventricular Assist Device Decision Aid (VADDA Trial).” J Cardiac Fail 24 (10): 661–71.
- Kostick-Quenet KM, et al. 2022b. “Mitigating Racial Bias in Machine Learning.” J Law Med Ethics 50 (1): 92–100.
- Kostick-Quenet K, et al. 2022a. “Integrating Personalized Risk Scores in Decision Making About Left Ventricular Assist Device (LVAD) Therapy: Clinician and Patient Perspectives.” J Heart Lung Transplant 41 (4): S230.
- Kostick-Quenet K, et al. 2022c. “Researchers’ Ethical Concerns About Using Adaptive Deep Brain Stimulation for Enhancement.” Front Hum Neurosci. DOI: 10.3389/fnhum.2022.813922.
- McLennan S, et al. 2020. “An Embedded Ethics Approach for AI Development.” Nat Mach Intell 2 (9): 488–90.
- McLennan S, et al. 2022. “Embedded Ethics: A Proposal for Integrating Ethics into the Development of Medical AI.” BMC Med Ethics 23 (1): 1–10.
- Microsoft. 2022. “Microsoft Responsible AI Standard, v2.” https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4ZPmV.
- Mittelstadt B. 2019. “Principles Alone Cannot Guarantee Ethical AI.” Nat Mach Intell 1 (11): 501–7.
- Muñoz KA, et al. 2020. “Researcher Perspectives on Ethical Considerations in Adaptive Deep Brain Stimulation Trials.” Front Hum Neurosci. DOI: 10.3389/fnhum.2020.578695.
- Muñoz KA, et al. 2021. “Pressing Ethical Issues in Considering Pediatric Deep Brain Stimulation for Obsessive-Compulsive Disorder.” Brain Stimulation 14 (6): 1566–72.
- Pereira S, et al. 2022. “Polygenic Embryo Screening: Four Clinical Considerations Warrant Further Attention.” Hum Reprod 37 (7): 1375–78. DOI: 10.1093/humrep/deac110.
- Simon HA. 1990. “Bounded Rationality.” In Utility and Probability, ed. Eatwell J, Milgate M, and Newman P, 15–18. London: Palgrave Macmillan. DOI: 10.1007/978-1-349-20568-4_5.
- Thaler RH, and Sunstein CR. 2008. Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.
- Uy V, et al. 2014. “Barriers and Facilitators to Routine Distribution of Patient Decision Support Interventions: A Preliminary Study in Community Based Primary Care Settings.” Health Expect 17 (3): 353–64.
- Victor E, and Guidry-Grimes LK. 2021. Applying Nonideal Theory to Bioethics: Living and Dying in a Nonideal World. London: Springer.
- White House. 2019. Executive Order on Maintaining American Leadership in Artificial Intelligence. Executive Order 13859. Federal Register. federalregister.gov/d/2019-02544.