Abstract
Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence is emerging of bias in their design, problems with implementation, and potential harm to patients. To achieve the promise of using AI-based tools to improve health, healthcare organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of AI-based tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures that will be required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.
Keywords: artificial intelligence, organizational capabilities, AI competencies, healthcare management, algorithms, AI implementation
INTRODUCTION
Artificial intelligence (AI) refers to computer science techniques that employ machine learning, probabilistic methods, fuzzy systems, expert systems, and evolutionary computing.1 These technologies hold great potential and are being implemented in health care, even as evidence is emerging of bias in their design, problems with implementation, and potential harm to patients.2–4 It would be naïve to expect that potentially transformative AI technologies will be scaled back or abandoned in response to these concerns. Rather, healthcare organizations must identify approaches to mitigate and prevent harm as AI technologies are implemented. We assert that to achieve the promise of using AI-based tools to improve health, healthcare organizations will need to be AI-capable. AI-capable organizations are those with internal systems that function effectively with external infrastructures to ensure the safe, ethical, and effective use of AI-based tools. The purpose of this perspective is to advocate for the use of an organizational capabilities perspective to frame the resourcing of responsible AI in healthcare by managers, regulators, researchers, professional organizations, and other responsible institutions.
What are organizational capabilities?
An organization has a capability when it can construct systems that produce a particular outcome (Figure 1).5 For example, for a hospital to claim that it is capable of providing safe medication management for patients, it must do more than employ competent pharmacists. Systems for safe ordering, fulfillment, administration, and monitoring involve a variety of clinical personnel, procedures, and technologies.
Scholarship in organizational sciences has framed organizational capabilities in terms of management’s ability to leverage resources and competencies to create routines6 that can accomplish the work required to produce the desired outcome.7 In addition, infrastructures8 exist beyond the organization that can both enable and constrain the development of routines. This multilevel vision of organizational capability could guide healthcare leaders as they seek to identify risks, minimize harms, and maximize rewards of deploying AI-based tools.
How does an organizational capabilities perspective complement other frameworks?
Numerous other framings build on seminal work in technology adoption9 and diffusion of innovations.10 The recent work of Greenhalgh et al11 is particularly notable; their synthesis of existing conceptual resources produced the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework to help predict and evaluate the success of technology-supported programs in health care. It unifies several streams of research into a single framework that considers various forms of context, technological features, and individual actors. Similarly, the Consolidated Framework for Implementation Research (CFIR) brings together multiple conceptual resources for the implementation of evidence-based medicine,12 as does the sociotechnical framework proposed by Sittig and Singh,13 which focuses on technology and patient safety.
There is a body of scholarship that has been more specific to AI in health care, including the translational path for AI tool development into clinical care delivery proposed by Sendak et al14 that uses real-world implementations to describe approaches that teams have taken in 4 key phases of activity: design and develop, evaluate and validate, diffuse and scale, and continuing monitoring and maintenance. Other work has looked specifically at strategies to manage barriers to adoption of AI tools15 and at the characterization of emerging options for technical deployment.16 Finally, in addition to the work of individual researchers, groups such as the Coalition for Health AI (CHAI) have produced reports that provide thoughtful resources to organizations seeking to develop and implement health-related AI.17
Organizational capabilities are a well-described and foundational theoretical framework in the organization sciences. However, possibly because of the framework's roots in market-focused economic theory, and because only a relatively recent stream of papers has explored dynamic capabilities and the relationship between capabilities and routines,18,19 there are few examples of the use of organizational capabilities to theorize sociotechnical change in health care.20 The work on dynamic capabilities, combined with more recent scholarship on digital transformation and organizational capabilities,21 has produced conceptual resources for capturing the complexity of healthcare processes, social organization, regulatory infrastructure, and societal role. The goal of our organizational capabilities framing is to provide a way of organizing the creation, maintenance, and interrelationships of the resources that healthcare organizations will need to be AI-capable. In concrete terms, our aim is to help healthcare managers, professional organizations, regulatory bodies, and other responsible actors understand how they should engage in the work of enacting safe and responsible health-related AI.
ORGANIZATIONAL CAPABILITIES FOR AI IN HEALTHCARE
The previous section described useful frameworks and resources for AI implementation. These frameworks have been complemented by numerous perspectives on how biomedicine should wrestle with the impacts and potential risks posed by AI technologies.22–26 This paper offers a framing, along with detailed examples, to help guide the implementation and ongoing use of AI. The framing foregrounds the elements managers can put in place and the interrelationships with external infrastructures that those elements implicate.
Routines
In the organization sciences, routines are widely understood to be the way most work is accomplished.6,27–30 Routines are defined as “repetitive, recognizable patterns of interdependent actions carried out by multiple actors.” They have been shown to be a source of stability, enabling planning and knowledge sharing across disparate organizational units. They also serve as a vehicle for change, providing pathways for changes to propagate across an organization.31 Routines standardize practices, reducing uncertainty and enabling measurement. An example of a highly standardized routine in health care is medication administration. This routine includes standardized language and defined competencies for the tasks that make up the routine, extensive measurement of its elements, and clarity around the other routines with which it intersects (eg, pharmacy inventory management).29 The automation of these routines enables control over key variables such as efficiency, safety, and clinical practice.
Organizations will need to establish integrated routines for the assessment, implementation, monitoring, and evaluation of AI-based resources. To the extent that these processes can be standardized, the organization has an opportunity to impose an ethical framework on the management of AI-based resources. For example, personnel responsible for assessment of proposed AI tools can use a standardized protocol to evaluate the sponsorship, quality, performance, and planned use of every AI tool, whether it is a relatively minor feature built into a radiological device or a predictive risk scoring tool that will impact patient triage and care.
AI algorithm performance can drift when underlying datasets change as a result of new clinical practices and programs, altered organizational structures, or changes in data structures and processes.32,33 Because of this drift, organizations will need routines not only for the initial evaluation and implementation of new AI-based algorithms and tools, but also for maintaining them and monitoring their outputs. Embi has described this maintenance and monitoring work as algorithmovigilance, defined as “the scientific methods and activities relating to the evaluation, monitoring, understanding, and prevention of adverse effects of algorithms in health care.”34 Because AI-based tools are engineered to answer specific questions, routines will be required in every clinical domain where such tools are deployed. For example, in clinical predictive analytics, the data used for prediction are often derived from ongoing clinical operations, so content expertise will be needed to maintain calibration over time. Establishing and continuously improving the necessary routines will require allocation of organizational resources, including the development and support of competent personnel.35 These routines can be implemented optimally by building on existing organizational structures and routines (eg, regulatory and quality improvement departments and existing IT monitoring functions).
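To make the monitoring element of such routines concrete, the sketch below illustrates one way a recurring calibration check might be encoded. It is a minimal illustration under assumptions of our own (a binary-outcome risk model, a locally chosen drift threshold, and hypothetical names such as review_window), not a prescribed algorithmovigilance method.

```python
# Minimal sketch of a recurring calibration check, one small piece of an
# algorithmovigilance routine. The data source, threshold, and escalation
# hook are hypothetical placeholders an organization would define locally.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Prediction:
    risk_score: float  # model output in [0, 1]
    outcome: bool      # observed outcome after follow-up


def calibration_gap(window: list[Prediction]) -> float:
    """Absolute gap between mean predicted risk and the observed event rate."""
    predicted = mean(p.risk_score for p in window)
    observed = mean(1.0 if p.outcome else 0.0 for p in window)
    return abs(predicted - observed)


def review_window(window: list[Prediction], threshold: float = 0.05) -> bool:
    """Flag a monitoring window whose calibration gap exceeds the local threshold."""
    gap = calibration_gap(window)
    if gap > threshold:
        # In practice this would open a review with the governance committee,
        # not just print; escalation paths are organization-specific.
        print(f"Calibration drift flagged: gap={gap:.3f} exceeds {threshold}")
        return True
    return False
```

In practice, a check like this would run on a schedule, feed a governance dashboard, and trigger the maintenance and recalibration routines described above.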
Competencies
In response to the anticipated surge in deployment of AI-based tools, numerous authors have called for systematic learning programs to prepare the clinicians who will use them.1,22,36–46 To identify the competencies that currently guide the preparation of clinicians as they test, provide requirements for, and implement new AI-based technologies in practice settings, the authors completed a scoping review, which identified large gaps in knowledge and training resources.47,48 Although many publications investigated the clinical impact of new AI-based tools, few reported on observed clinician competencies, and even fewer reported on the education or training processes used to assure effective, safe, and ethical technology deployment.
In response to this gap, we subsequently conducted a series of semistructured interviews with subject matter experts in health professions education and healthcare AI, aiming to generate a set of competencies needed by frontline clinicians to evaluate and safely use AI-based tools.49 The expertise of the participants included informatics, data science, medical education, public health, medical imaging, bioethics, and social sciences. The professional domains of the experts included nursing, medicine, surgery, social medicine, business, pharmacy, and bioethics. Although they reported multiple roles, most were selected primarily for their expertise in healthcare AI, biomedical informatics, and/or the ethical application of AI in the workplace; others for their expertise in health professions education; and two for their dual expertise in health professions education and healthcare AI. Participants outlined core technical knowledge and skills, but ethical considerations emerged as paramount. This result is not surprising, considering the prevalence of systemic bias and inequities in healthcare and the unjust outcomes related to technology interventions in other social contexts (eg, social services and criminal justice).50,51 Participants made it clear that safe and fair implementation of AI-based tools in healthcare will require more than frontline clinical competencies, and they further underscored the need for supportive organizational capabilities.
Resources
Effective routines and competent clinicians will require evidence-based frameworks and resources. Examples of such resources are the Model Facts Labels and Ecosystem Impact Statements outlined below. The importance of appropriately classifying and labeling AI-based models, similar to a nutrition label, has been described previously and is summarized below with a unique set of guiding questions.52 Health Ecosystem Impact Statements are proposed here as a novel application of an existing public governance model.53 Both resources can provide accessible, standardized information to individuals and organizations.
AI software documentation
Ideally, templates and rubrics that identify key elements of AI-driven tools would prompt essential considerations for their adoption while promoting organizational and technological transparency. Clear, targeted, and standardized information about healthcare algorithms would guide all those who interface with them, including clinicians, administrators, data managers, public health officers, resource managers, and patients. Scholars have begun to work on mechanisms and approaches to this challenge.52 These Model Facts Labels should be structured to allow continuous learning for each of the target audiences, and ideally would be required for regulatory approval. See Box 1 for questions that Model Facts Labels should address; a machine-readable sketch of such a label follows the box.
Box 1:
Standard facts labels could answer questions such as:
What is the performance of the model?
How is the model managed, evaluated, and updated?
What populations were used to develop the model?
What patient populations are within the appropriate boundaries for use?
What are the potential impacts of using the model outside of boundary conditions?
How does one interpret typical and atypical outputs?
Is the model approved by the U.S. Food and Drug Administration?
What are the potential harms and safeguards for this model?
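As one illustration of how organizations might operationalize such labels, the sketch below renders the Box 1 questions as a machine-readable structure. The field names are our own hypothetical choices, not a schema defined by Sendak et al52 or any regulator.

```python
# Hypothetical, machine-readable rendering of a Model Facts Label covering
# the Box 1 questions. Field names are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class ModelFactsLabel:
    model_name: str
    intended_use: str                      # clinical question the model answers
    development_population: str            # who the training data represent
    approved_populations: list[str]        # boundaries for appropriate use
    performance_summary: dict[str, float]  # eg, {"AUROC": 0.81}
    update_policy: str                     # how and when the model is revalidated
    out_of_boundary_risks: str             # potential impacts of use beyond boundaries
    interpretation_notes: str              # reading typical and atypical outputs
    fda_status: str                        # eg, "cleared", "not reviewed"
    known_harms_and_safeguards: list[str] = field(default_factory=list)
```

A structure like this could be validated at procurement time and surfaced to clinicians alongside model output, supporting the continuous learning the labels are meant to enable.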
Health ecosystem impact statements
Health systems are complex ecosystems, and similar to natural ecosystems, changes in one aspect of the environment can lead to cascading perturbations in others.54 Recognizing that an anticipatory analysis of risks, benefits, alternatives, and mitigation strategies could prevent harmful consequences, the National Environmental Policy Act was enacted in 1969, requiring that a structured environmental impact statement (EIS) be completed before implementation of any major federal project. A standardized process for EIS creation assures consideration of a range of potential impacts and includes a period of public comment.55
We propose that similar impact statements should be created before AI-based tool deployment decisions are finalized. Whereas a Model Facts Label provides information about the design and appropriate use of an AI-based tool, a health ecosystem impact statement would facilitate evaluation of the tool at the local organizational level and could be embedded into the organization’s existing quality improvement structure. See Box 2 for questions that health ecosystem impact statements should address; a sketch of a completeness check keyed to those questions follows the box.
Box 2:
Standard impact statements could answer questions such as:
What local populations would benefit from the tool and what are those benefits?
What local populations could be harmed by the tool and what are those harms?
What alternatives (if any) could be used to solve this problem?
Which teams of professionals will use the tools, how will this affect their workflows, and how should the workflows be modified?
What types of clinician education are needed before the tools are deployed in this environment?
What will be the impact on learners in all phases of education and what might be the impact on clinical learning environments?
How should the outputs and outcomes of the tools be monitored?
What safeguards should be in place to assure appropriate use of the tool?
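As with the Model Facts Label sketch above, the following is a minimal, hypothetical illustration of how the Box 2 questions could be operationalized: a completeness check that withholds deployment sign-off until every section of a proposed impact statement has a documented answer. The section names and the sign-off rule are our own assumptions, not an established standard.

```python
# Minimal sketch of a completeness check for a health ecosystem impact
# statement, keyed to the Box 2 questions. Section names and the sign-off
# rule are hypothetical; a real process would live in the organization's
# quality improvement workflow.
REQUIRED_SECTIONS = [
    "local_populations_benefited",
    "local_populations_at_risk",
    "alternatives_considered",
    "affected_teams_and_workflows",
    "clinician_education_plan",
    "impact_on_learners",
    "monitoring_plan",
    "safeguards",
]


def ready_for_signoff(statement: dict[str, str]) -> tuple[bool, list[str]]:
    """Return whether every required section has a nonempty answer, plus any gaps."""
    missing = [s for s in REQUIRED_SECTIONS if not statement.get(s, "").strip()]
    return (not missing, missing)
```

An organization’s quality improvement committee could run such a check before an AI deployment decision is finalized, returning incomplete statements to the sponsoring team.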
Infrastructures
All individual clinicians, as well as their health systems, are embedded in structural systems that both enable and constrain activity. Societal infrastructures that emphasize systemic debiasing, provide effective regulatory oversight, and foster interorganizational collaboration are needed to support AI-capable organizations. Several areas are overdue for infrastructure development, including standards for AI reporting (eg, Model Facts Labels) and evaluation (eg, Health Ecosystem Impact Statements), a transparent process for U.S. Food and Drug Administration (FDA) review and approval of algorithms, and ethical and legal frameworks for patient rights and use of data. Such oversight structures would facilitate balance and fairness in the overall landscape of technology development and would promote alignment with health professions ethics and societal expectations.
Addressing the manifestation of structural inequities in healthcare data
The delivery of health services takes place in a society characterized by structural inequities. These inequities manifest in dysfunctional interpersonal interactions, unequal access to services, and disparities in outcomes. Demographic data in health records are often of low quality,56 and in some cases services are delivered based on a skewed or inaccurate evidence base.57 Given that AI algorithms require large amounts of data (often from electronic health records) to achieve valid results, bias in underlying data is coming under wider scrutiny.58 Commentators continue to raise concerns about the transfer of algorithms built from narrow datasets into common practice, and urgent concerns are frequently raised about the impact on vulnerable groups and individuals, because biases result from structural inequities.59–61 Addressing dataset bias therefore requires awareness of both its historical roots and its evolving branches: debiasing efforts must attend to how historical patterns, human behavior, and social organization (including technological systems) create and perpetuate structural forms of inequity.62,63 Shared debiasing infrastructures can be built to support evidence-based practice, systemic evaluation, and continuous improvement.
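As one concrete example of a check that a shared debiasing infrastructure could standardize, the sketch below compares a model’s error rate across demographic subgroups and flags outliers. The group labels, metric, and disparity tolerance are illustrative assumptions; real audits would use validated fairness metrics and locally appropriate groupings.

```python
# Minimal sketch of a subgroup performance audit. The 'group' labels, the
# error-rate metric, and the tolerance are illustrative assumptions, not a
# standard; they stand in for whatever a shared infrastructure would define.
from collections import defaultdict


def subgroup_error_rates(records: list[dict]) -> dict[str, float]:
    """records: dicts with 'group', 'prediction', and 'outcome' keys."""
    errors: dict[str, list[int]] = defaultdict(list)
    for r in records:
        errors[r["group"]].append(int(r["prediction"] != r["outcome"]))
    return {g: sum(v) / len(v) for g, v in errors.items()}


def flag_disparities(rates: dict[str, float], tolerance: float = 0.05) -> list[str]:
    """Flag groups whose error rate exceeds the best-performing group's by tolerance."""
    best = min(rates.values())
    return [g for g, r in rates.items() if r - best > tolerance]
```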
Regulatory frameworks
Inequities and bias must be understood within wider systems of power imbalance. The structures and patterns that lead to disparate outcomes operate within hierarchies of power involving business and finance, professional guilds, national priorities, scientific norms, and cultural hegemony. Therefore, it is necessary to have counterbalancing structures organized to ensure ethical boundaries and legal protections are respected. National regulatory infrastructures are beginning to operate through the FDA and with support from professional organizations.53,64 These will require continual investment and iterative improvements to keep pace with AI-based tool development and deployment in health care.
Interorganizational collaboration
Technical and health professionals should work together to design and improve AI functionality. Ideally, patients, administrators, and peer institutions would continuously provide feedback, organize resources, and share outcomes. For example, in 2019 the Radiological Society of North America launched Radiology: Artificial Intelligence, a journal to engage with the rapidly emerging applications of machine learning in health-related imaging.65 Position papers from the American Medical Informatics Association and Nature Medicine outline the need to work collaboratively at national and international levels to evaluate and improve AI-based technologies across diverse clinical domains.66,67 In addition, AI-based tools benefit from broader and more inclusive datasets with greater population diversity, which can be achieved through interorganizational collaboration as well as targeted investments, such as the All of Us research program by the U.S. National Institutes of Health.68
AI-CAPABLE ORGANIZATIONS
AI-supportive internal and external structures enable organizations to deploy and use AI tools safely and effectively (ie, to become AI-capable). These organizations can in turn support AI-competent frontline clinicians. Infrastructures, organizations, and individuals will interact and shape outcomes. Learner-centric resources such as clear Model Facts Labels and Ecosystem Impact Statements can help drive continuous improvement.
Legitimate concerns exist about the ability of all healthcare organizations to become AI-capable. Unequal distribution of expertise in managing the complex ethical and technical challenges may put some organizations at existential risk for negative quality, safety, and legal outcomes. To prevent this negative consequence, thought should be given to innovative ways of sharing expertise across systems or across regions.
It is time to prioritize the development of infrastructures, organizational capabilities, and individual competencies in the deployment of AI-based solutions to optimize benefits and minimize harms for the entire healthcare ecosystem. Tactics to move forward in achieving these aims include establishing ownership of AI management issues in health-related professional organizations to enable thought leadership and policy guidance; creating curricular elements in healthcare management training programs; and developing practical tools for the creation of Model Facts Labels and Ecosystem Impact Statements at the organizational level. All these activities will require dedicated resources to implement effectively.
ACKNOWLEDGMENTS
The authors would like to acknowledge the assistance of Nandini Ovalasumuthovu in project management.
Contributor Information
Laurie Lovett Novak, Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
Regina G Russell, Department of Medical Education and Administration and Office of Undergraduate Medical Education, Vanderbilt University School of Medicine, Nashville, Tennessee, USA.
Kim Garvey, Department of Anesthesiology and the Center for Advanced Mobile Healthcare Learning, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
Mehool Patel, Department of Internal Medicine, Northeastern Ohio Medical University (NEOMED), Rootstown, Ohio, USA; Department of Internal Medicine, Western Reserve Hospital, Cuyahoga Falls, Ohio, USA.
Kelly Jean Thomas Craig, Clinical Evidence Development, Aetna®, Medical Affairs CVS Health®, Wellesley, Massachusetts, USA.
Jane Snowdon, Corporate Technical Strategy, IBM® Corporation, Yorktown Heights, New York, USA.
Bonnie Miller, Department of Medical Education and Administration and Center for Advanced Mobile Healthcare Learning, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
FUNDING
This work was funded by IBM Watson Health® and the National Alliance Against Disparities in Patient Health/NIH Research Centers in Minority Institutions (RCMI) Partnership AIM-AHEAD Infrastructure Core 1 OT2 OD032581-01.
AUTHOR CONTRIBUTIONS
All authors contributed to conception and design of the work, drafting or critical revision of the manuscript, and final approval of the manuscript. No new data were generated or analyzed for this manuscript. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
CONFLICT OF INTEREST STATEMENT
The authors have no competing interests to report other than employment (JS and KJTC are or were employed by IBM® Corporation; KJTC is employed by CVS Health® Corporation).
DATA AVAILABILITY
No new data were generated or analyzed in support of this research.
REFERENCES
1. Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020; 323 (6): 509–10.
2. Obermeyer Z, Emanuel EJ. Predicting the future—big data, machine learning, and clinical medicine. N Engl J Med 2016; 375 (13): 1216–9.
3. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019; 366 (6464): 447–53.
4. Char DS, Shah NH, Magnus D. Implementing machine learning in health care—addressing ethical challenges. N Engl J Med 2018; 378 (11): 981–3.
5. Dosi G, Nelson RR, Winter SG. The Nature and Dynamics of Organizational Capabilities. Oxford: Oxford University Press; 2000.
6. Pentland BT, Feldman MS. Organizational routines as a unit of analysis. Ind Corp Change 2005; 14 (5): 793–815.
7. Winter SG. Understanding dynamic capabilities. Strat Mgmt J 2003; 24 (10): 991–5.
8. Edwards P, Bowker G, Jackson S, et al. Introduction: an agenda for infrastructure studies. J Assoc Inform Syst 2009; 10 (5): 364–74.
9. Davis FD. User acceptance of information technology: system characteristics, user perceptions and behavioural impacts. Int J Hum Comput Stud 1993; 38: 475–87.
10. Rogers EM. Diffusion of Innovations. 4th ed. New York: The Free Press; 1995.
11. Greenhalgh T, Wherton J, Papoutsi C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res 2017; 19 (11): e8775.
12. Damschroder L, Reardon CM, Widerquist MAO, et al. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci 2022; 17 (1): 1–16.
13. Sittig DF, Singh H. A new socio-technical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010; 19 (Suppl 3): i68–74.
14. Sendak MP, D’Arcy J, Kashyap S, et al. A path for translation of machine learning products into healthcare delivery. EMJ Innov 2020; 10: 19-00172. doi: 10.33590/emjinnov/19-00172.
15. Watson J, Hutyra CA, Clancy SM, et al. Overcoming barriers to the adoption and implementation of predictive modeling and machine learning in clinical care: what can we learn from US academic medical centers? JAMIA Open 2020; 3 (2): 167–72.
16. Kashyap S, Morse KE, Patel B, et al. A survey of extant organizational and computational setups for deploying predictive models in health systems. J Am Med Inform Assoc 2021; 28 (11): 2445–50.
17. Coalition for Health AI. Blueprint for Trustworthy AI: Implementation Guidance and Assurance for Healthcare. 2022. https://www.coalitionforhealthai.org/papers/blueprint-for-trustworthy-ai_V1.0.pdf.
18. Eisenhardt KM, Martin JA. Dynamic capabilities: what are they? Strat Mgmt J 2000; 21 (10–11): 1105–21.
19. Feldman MS, Pentland BT, D’Adderio L, et al. Beyond routines as things: introduction to the special issue on routine dynamics. Org Sci 2016; 27 (3): 505–13.
20. Leung RC. Health information technology and dynamic capabilities. Health Care Manag Rev 2012; 37 (1): 43–53.
21. Konopik J, Jahn C, Schuster T, et al. Mastering the digital transformation through organizational capabilities: a conceptual framework. Digit Bus 2022; 2 (2): 100019.
22. Stead WW, Searle JR, Fessler HE, et al. Biomedical informatics: changing what physicians need to know and how they learn. Acad Med 2011; 86 (4): 429–34.
23. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artif Intell Healthc 2020: 295–336. doi: 10.1016/b978-0-12-818438-7.00012-5.
24. Guo Y, Hao Z, Zhao S, et al. Artificial intelligence in health care: bibliometric analysis. J Med Internet Res 2020; 22 (7): e18228.
25. Harish V, Morgado F, Stern AD, et al. Artificial intelligence and clinical decision making: the new nature of medical uncertainty. Acad Med 2021; 96 (1): 31–6.
26. Eaneff S, Obermeyer Z, Butte AJ. The case for algorithmic stewardship for artificial intelligence and machine learning technologies. JAMA 2020; 324 (14): 1397–8.
27. March JG, Simon HA. Organizations. New York: Wiley; 1958.
28. Nelson RR, Winter SG. An Evolutionary Theory of Economic Change. Cambridge, MA: Belknap Press; 1982.
29. Novak L, Brooks J, Gadd C, et al. Mediating the intersections of organizational routines during the introduction of a health IT system. Eur J Inform Syst 2012; 21 (5): 552–69.
30. Greenhalgh T. Role of routines in collaborative work in healthcare organisations. BMJ 2008; 337: a2448.
31. Feldman MS, Pentland BT. Reconceptualizing organizational routines as a source of flexibility and change. Adm Sci Q 2003; 48 (1): 94–118.
32. Davis SE, Greevy RA Jr, Fonnesbeck C, et al. A nonparametric updating method to correct clinical prediction model drift. J Am Med Inform Assoc 2019; 26 (12): 1448–57.
33. Finlayson SG, Subbaswamy A, Singh K, et al. The clinician and dataset shift in artificial intelligence. N Engl J Med 2021; 385 (3): 283–6.
34. Embi PJ. Algorithmovigilance—advancing methods to analyze and monitor artificial intelligence–driven health care for effectiveness and equity. JAMA Netw Open 2021; 4 (4): e214622.
35. Park Y, Jackson GP, Foreman MA, et al. Evaluating artificial intelligence in medicine: phases of clinical research. JAMIA Open 2020; 3 (3): 326–31.
36. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med 2018; 93 (8): 1107–9.
37. Wiljer D, Hakim Z. Developing an artificial intelligence–enabled health care practice: rewiring health care professions for better care. J Med Imaging Radiat Sci 2019; 50 (4 Suppl 2): S8–14.
38. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25 (1): 44–56.
39. Tolsgaard MG, Boscardin CK, Park YS, et al. The role of data science and machine learning in Health Professions Education: practical applications, theoretical contributions, and epistemic beliefs. Adv Health Sci Educ Theory Pract 2020; 25 (5): 1057–86.
40. Schwartz WB. Medicine and the computer. The promise and problems of change. N Engl J Med 1970; 283 (23): 1257–64.
41. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ 2020; 6 (1): e19285.
42. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med 2019; 380 (14): 1347–58.
43. Lomis KD, Jeffries P, Palatta A, et al. Artificial intelligence for health professions educators. NAM Perspect 2021. https://nam.edu/perspectives/.
44. Masters K. Artificial intelligence in medical education. Med Teach 2019; 41 (9): 976–80.
45. Hodges BD. Ones and zeros: medical education and theory in the age of intelligent machines. Med Educ 2020; 54 (8): 691–3.
46. James CA, Wheelock KM, Woolliscroft JO. Machine learning: the next paradigm shift in medical education. Acad Med 2021; 96 (7): 954–7.
47. Garvey KV, Craig KJT, Russell RG, et al. The potential and the imperative: the gap in AI-related clinical competencies and the need to close it. Med Sci Educ 2021; 31 (6): 2055–60.
48. Garvey KV, Thomas Craig KJ, Russell R, et al. Considering clinician competencies for the implementation of artificial intelligence–based tools in health care: findings from a scoping review. JMIR Med Inform 2022; 10 (11): e37478.
49. Russell RG, Novak LL, Patel M, et al. Competencies for the use of artificial intelligence–based tools by healthcare professionals. Acad Med 2022; 98 (3): 348–56. doi: 10.1097/ACM.0000000000004963.
50. Eubanks V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St Martin’s Press; 2018.
51. Farahany NA. Neuroscience and behavioral genetics in US criminal law: an empirical analysis. J Law Biosci 2015; 2 (3): 485–509.
52. Sendak MP, Gao M, Brajer N, et al. Presenting machine learning model information to clinical end users with model facts labels. NPJ Digit Med 2020; 3: 41.
53. Digital Health Innovation Action Plan. Silver Spring, MD: United States Food and Drug Administration, Center for Devices and Radiological Health; 2017. https://www.fda.gov/media/106331/download. Accessed March 10, 2023.
54. Meadows D. Thinking in Systems: A Primer. White River Junction, VT: Chelsea Green Publishing Company; 2008.
55. United States Environmental Protection Agency. National Environmental Policy Act Review Process. https://www.epa.gov/nepa/national-environmental-policy-act-review-process. Accessed March 10, 2023.
56. Klinger EV, Carlini SV, Gonzalez I, et al. Accuracy of race, ethnicity, and language preference in an electronic health record. J Gen Intern Med 2015; 30 (6): 719–23.
57. Marzinke MA, Greene DN, Bossuyt PM, et al. Limited evidence for use of a black race modifier in eGFR calculations: a systematic review. Clin Chem 2022; 68 (4): 521–33.
58. Crawford K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press; 2020.
59. Bailey ZD, Krieger N, Agenor M, et al. Structural racism and health inequities in the USA: evidence and interventions. Lancet 2017; 389 (10077): 1453–63.
60. Amutah C, Greenidge K, Mante A, et al. Misrepresenting race—the role of medical schools in propagating physician bias. N Engl J Med 2021; 384 (9): 872–8.
61. Evans M, Rosenbaum L, Malina D, et al. Editorial: diagnosing and treating systemic racism. N Engl J Med 2020; 383 (3): 274–6.
62. Aronson L. A tale of two doctors—structural inequalities and the culture of medicine. N Engl J Med 2017; 376 (24): 2390–3.
63. Stonington S, Holmes S, Hansen H, et al. Case studies in social medicine—attending to structural forces in clinical practice. N Engl J Med 2018; 379 (20): 1958–61.
64. Parikh R, Obermeyer Z, Navathe A. Regulation of predictive analytics in medicine: algorithms must meet regulatory standards of clinical benefit. Science 2019; 363 (6429): 810–2.
65. Reyes M, Meier R, Pereira S, et al. On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiol Artif Intell 2020; 2 (3): e190043.
66. Petersen C, Smith J, Freimuth RR, et al. Recommendations for the safe, effective use of adaptive CDS in the US healthcare system: an AMIA position paper. J Am Med Inform Assoc 2021; 28 (4): 677–84.
67. He J, Baxter SL, Xu J, et al. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019; 25 (1): 30–6.
68. All of Us Research Program Investigators. The “All of Us” research program. N Engl J Med 2019; 381: 668–76.