Abstract
Recent advances in the science and technology of artificial intelligence (AI) and growing numbers of deployed AI systems in healthcare and other services have called attention to the need for ethical principles and governance. We define and provide a rationale for principles that should guide the commission, creation, implementation, maintenance, and retirement of AI systems as a foundation for governance throughout the lifecycle. Some principles are derived from the familiar requirements of practice and research in medicine and healthcare: beneficence, nonmaleficence, autonomy, and justice come first. A second set of principles follows from the creation and engineering of AI systems: explainability of the technology in plain terms; interpretability, that is, plausible reasoning for decisions; fairness and absence of bias; dependability, including “safe failure”; provision of an audit trail for decisions; and active management of the knowledge base to remain up to date and sensitive to any changes in the environment. In organizational terms, the principles require benevolence—aiming to do good through the use of AI; transparency, ensuring that all assumptions and potential conflicts of interest are declared; and accountability, including active oversight of AI systems and management of any risks that may arise. Particular attention is drawn to the case of vulnerable populations, where extreme care must be exercised. Finally, the principles emphasize the need for user education at all levels of engagement with AI and for continuing research into AI and its biomedical and healthcare applications.
Keywords: artificial intelligence, machine learning, ethical principles, Belmont principles, transparency, trustworthiness, bias, patient-centered
INTRODUCTION
Significant advances in artificial intelligence (AI), especially in machine learning (ML), have raised hopes of biomedical discovery and of improvements in the quality, timeliness, and consistency of care. While there is much to celebrate in the successes,1 there are serious concerns about the impact of the social, professional, and systemic changes that will inevitably result, particularly with respect to unintended or unanticipated consequences. A growing body of literature urges the development, refinement, and application of ethical principles for the design and use of AI systems developed through ML.2,3 This is particularly relevant in healthcare where such models and tools inform decision-making and clinical care practices, which can have a significant effect on patients, populations, providers, payers, organizations, and potentially all aspects of the healthcare system.
This paper describes the American Medical Informatics Association’s (AMIA’s) position on the use of AI in healthcare, highlighting both potential benefits and risks, and defines the principles and rules that must govern the development and adoption of AI in healthcare to ensure its safe, effective, just, unbiased, and patient-centered application.
Definition
Merriam-Webster defines AI as (1) a branch of computer science dealing with the simulation of intelligent behavior in computers or (2) the capability of a machine to imitate intelligent human behavior.4 We define AI broadly as the discipline that creates computer systems capable of activities normally associated with cognitive effort.5 ML is a critical component of many AI systems and refers to the ability of a system to “learn” and adapt over time based on exposure to data that reflect changes in underlying conditions, populations, or events, such as changing consumer or professional behavior or increasing numbers of patients with a specific diagnosis or complication. ML is generally an integral component of AI, and the combination is sometimes referred to as AI/ML. However, AI encompasses not only the ability to learn but also to act on what it has learned and change its behavior, such as revising a recommendation based on newly evolving data patterns.
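For readers unfamiliar with the mechanics, a minimal sketch of this training step, using scikit-learn on synthetic data with purely hypothetical “clinical” features, looks as follows:

```python
# A minimal sketch of the ML "learn from data" step using scikit-learn.
# The features and the outcome are hypothetical, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # eg, 5 clinical measurements per patient
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # eg, presence of a complication

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not the particular model but the workflow: the system’s behavior is determined by the data it is exposed to, which is why the provenance and quality of that data matter so much in what follows.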
Activities performed by an AI system normally require human intelligence (eg, driving a car, operating a robot), but some systems may also utilize an extensive knowledge base that would generally be beyond human capacity to recall, maintain, or interpret. AI activities include, but are not limited to, reasoning; decision-making; deliberation; visual perception; speech, text, and pattern recognition; prediction; argumentation; and judgment. Systems capable of the above activities with or without human intervention are described as AI systems,5 whether developed by ML or otherwise. We retain the term “AI” for the science of artificial intelligence. For this paper, we define stakeholders broadly as all individuals affected by or involved with an AI system. In the next section on Application of AI, we elaborate on this definition.
Application of AI
In principle, AI has the capacity to benefit patients and healthcare in countless ways. An AI system may take on onerous tasks that humans find difficult or tedious, perform tasks of great cognitive complexity, or complete tasks that may be highly repetitive and yet require undistracted attention. For example, AI systems have demonstrated value in the diagnostic process through superior and more consistent feature identification in pathology and radiology images.6 Combined with genetic data, AI has the potential to improve the prediction of outcomes in cancer patients.7 AI in the form of facial recognition technology is utilized for early diagnosis of rare diseases and allows for disease-specific and timely interventions, which reduce morbidity and mortality.8 AI systems can generate plain-language notes from medical notes that are perceived as helpful by patients.9 Recent AI developments in the form of digital assistants to support healthcare delivery during the COVID-19 pandemic have included the screening of symptoms, disease forecasting and triage, medical imaging-based diagnosis and prognosis, early detection and prognosis (nonimaging), and drug repurposing and discovery.10 While virtually all these exemplars performed optimally in well-controlled conditions, significant questions remain concerning the scalability and portability of these solutions from one setting to another, as well as within the same setting with higher volumes or more diverse populations.
Given the breadth of potential AI applications, it must be understood that there are many different stakeholders, whose needs may vary dramatically depending on the context of use. Some of the most obvious stakeholders are the end-users and the system developers themselves, but there are organizational and societal stakeholders as well. In the case of healthcare, stakeholders include, but are not limited to, patients, caregivers, providers, payers, healthcare organizations, emergency services, pharmaceutical and medical device manufacturers, health information technology vendors, administrators, researchers, legal professionals, government officials, policymakers, and regulators. In addition, stakeholders include the IT staff who must maintain systems, quality officers, individuals responsible for oversight of clinical decisions, and many others who may interact with the system—or its output—in some way.
AI risks
Although the potential benefit of AI is real, there are many apparent challenges and risks at all stages of AI system development and use in the context of healthcare. One of the most fundamental problems has been the development of AI systems with insufficient understanding of, or attention to, the implications of the input and output variables used, the context in which those variables are or will be collected, or the context in which the AI system will be used—yielding potentially incorrect, misleading, or biased results. One example is an algorithm designed to predict complex health needs of patients to allocate resources with the intent to reduce future care costs.11 In this case, bias occurred because the algorithm’s programming used health expenditure as a proxy for health status (“the more spent on healthcare, the worse a person’s health must be”) so that instead of predicting healthcare needs, the algorithm simply predicted future healthcare expenditures. In this specific example, since Black patients with the same level of illness were less likely to be able to afford and access needed services, the algorithm predicted lower future costs, incorrectly assessing better health and fewer needed services for this population.
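This failure mode is easy to reproduce in a toy simulation. The Python sketch below is not the published algorithm, and all of its numbers are invented; it only shows how a cost-based “high-risk” threshold systematically misses high-need patients in a group with reduced access to care:

```python
# Toy simulation of the proxy-label failure described above: training or
# thresholding on cost instead of need underestimates need for any group
# with reduced access to care. All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need
reduced_access = rng.random(n) < 0.3             # group facing access barriers
access = np.where(reduced_access, 0.6, 1.0)      # spends less per unit of need
cost = need * access + rng.normal(0, 0.1, n)     # observed expenditure

# A model trained to predict `cost` effectively ranks patients by cost, not need:
flagged = cost >= np.quantile(cost, 0.9)         # "high risk" = top 10% of cost
high_need = need >= np.quantile(need, 0.9)       # truly high-need patients

print("share of truly high-need patients flagged, by group:")
for grp, name in [(~reduced_access, "full access"), (reduced_access, "reduced access")]:
    print(f"  {name}: {np.mean(flagged[grp & high_need]):.2f}")
```

Running the sketch shows the reduced-access group being flagged far less often despite identical underlying need, mirroring the disparity reported in the study.11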
While most AI systems are trained on data, some AI systems have the ability to train themselves dynamically, modifying recommendations based on their own accumulated experience over time. Such a clinical decision support (CDS) system, often referred to as Adaptive CDS, could have real advantages in some circumstances (eg, detection of emerging antibiotic resistance or potential adverse drug reactions). However, algorithms may also evolve to behave in ways that are unanticipated or unclear to developers and unintuitive to users, including the patients whose care they will influence. Thus, they require additional management and regulation to ensure that they do not create unforeseen patient safety issues, introduce bias, or exacerbate disparities already inherent in healthcare.12
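As a minimal sketch of what “training itself dynamically” can mean in practice, the following Python fragment updates a model incrementally as new batches of cases arrive; all data are synthetic, and the weekly-batch drift scenario is hypothetical:

```python
# A minimal sketch of an adaptive model that updates incrementally as new
# cases arrive, rather than being trained once and frozen.
# All data here are synthetic and the drift scenario is hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(random_state=1)
classes = np.array([0, 1])

for week in range(10):                        # simulate weekly batches of cases
    X = rng.normal(size=(200, 4))             # eg, 4 clinical features per case
    drift = 0.1 * week                        # underlying conditions slowly change
    y = (X[:, 0] + drift * X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=classes)  # update without full retraining
```

Each `partial_fit` call silently changes the model’s behavior, which is precisely why Adaptive CDS calls for the audit trails and active oversight described in the principles below.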
In some cases, AI systems may outperform human abilities, particularly with respect to consistency and attention to nuances and subtle details. This may prove challenging in a completely different way by disrupting established practices and systems, changing job descriptions or responsibilities, and potentially even eliminating some jobs. Although it is argued that highly trained professionals will be freed to focus on more demanding decision-making,13 the ripple effects of such disruptive technology may affect individuals, institutions, and whole communities.
The examples above describe ways that an AI system intended for beneficial purposes can go awry. Unfortunately, developers may also design and use AI systems intentionally for unethical or immoral purposes. For example, a deepfake (synthetically generated from an existing image or video, in which a person is replaced by another person’s likeness) was used to scam a company out of $243,000 in a widely reported case,14 and a student used natural language generation to complete his required writing assignments.15 In 2020, an AI tool that alters photos of dressed women into realistic nude images was launched.16 AI can be used to commit crimes by predicting the behavior of individuals or organizations to discover and exploit vulnerabilities.17 An AI system can also be deployed for less sinister but still disruptive tasks such as bot-based telemarketing.18 The unauthorized use of AI in devices that can listen to individuals who are unaware of the intrusion can quickly slide into unethical use of AI.19 Considerations for the implementation of voice assistants and conversational AI in healthcare have recently been described for this new field.20
Although not clearly unethical, the use of AI by businesses to understand consumer purchasing habits can also create privacy violations and social disruption due to insufficient attention to potential downstream effects. In a well-publicized case, a major retailer identified a teenage shopper through her purchases as likely pregnant and began sending baby product advertisements to her home, thereby unwittingly signaling her pregnancy to her father.21
Bias in AI
As discussed, much of modern AI is based on a “machine learning” (ML) model under which software is “trained” using data to arrive at conclusions such as a diagnosis or a prediction. AI has been proposed as a means of reducing or eliminating bias and other variability in decision-making by using presumably untainted data-driven decisions to replace subjective human judgments. Unfortunately, there is clear evidence that prejudices are often deeply embedded in the data used to train an AI system. Before a dataset may be used for ML, the data often need to be preprocessed and transformed into a suitable format, in part because some data may be missing at random, or worse, missing in some systematic way. Errors may arise from incomplete understanding of the characteristics of the data and their provenance, or from biased analytic assumptions. The dataset may be too small or unrepresentative, with a racial, ethnic, or socioeconomic profile that differs markedly from that of the population in which the AI system will be used, or just inherently imperfect because it reflects the impact of historical biases. Any of these biases in the data upon which the model is trained or in the analytic decisions may be unintentionally codified into the ML system and may be perpetuated undetected in the resulting technology.
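One concrete, if partial, safeguard is to compare the demographic mix of a training set against the intended target population before any model is trained. The sketch below illustrates the idea in Python; the column name, group labels, and reference proportions are all hypothetical:

```python
# Minimal sketch of a pre-training representativeness check: compare the
# demographic mix of a training set against the target population.
# Column name, group labels, and reference proportions are hypothetical.
import pandas as pd

train = pd.DataFrame({"race": ["white"] * 800 + ["black"] * 150 + ["hispanic"] * 50})
target_population = {"white": 0.60, "black": 0.18, "hispanic": 0.22}

observed = train["race"].value_counts(normalize=True)
for group, expected in target_population.items():
    share = observed.get(group, 0.0)
    flag = "  <-- underrepresented" if share - expected < -0.05 else ""
    print(f"{group}: train={share:.2f} target={expected:.2f}{flag}")
```

A check like this catches only one kind of bias (sampling imbalance); it says nothing about labels that encode historical inequities, as the examples below show.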
A good example of underlying bias in data is the case of an AI-informed violence and recidivism risk assessment that uses ML on past data about violent events and their resulting outcomes (arrests, incarcerations, and convictions). Such systems may inadvertently learn from racial or social bias present in law enforcement and the judicial system and thus may identify people as violent or likely to reoffend when those individuals actually pose little or no risk.22,23 Many other examples demonstrate how inadvertent bias in AI has affected insurability, employment, and housing. Amazon discovered that its employment AI tool preferred men over women and had to abandon the tool’s deployment.24 Apple’s credit algorithm extended lower credit limits to wives than to their husbands.25 Hispanics, who are less likely to have bank accounts, are more likely to have their prepaid, legal transactions reported to the Financial Crimes Enforcement Network.26 Facebook’s AI application discriminated by race and gender in housing advertisements.27 In short, real-life data are generally either sampled imperfectly or fundamentally skewed by systemic, social, economic, and/or historical biases.28 These biases can adversely affect what is learned through ML and, most importantly, the recommendations that the AI system generates.29
We previously described the study by Obermeyer et al11 of a healthcare algorithm that led to preferential treatment of White patients over Black patients. Another example is an AI tool designed to identify patients ready for hospital discharge that demonstrated bias against people from poorer, predominantly African-American neighborhoods.30
As we do not yet understand, nor can fully imagine, the extent of the effect AI has and will have on society, on culture, on laws, on policing and law enforcement, or on the practice of medicine and the entire healthcare ecosystem, principles to govern AI systems are essential. This realization should counsel caution and a principled approach to AI.
AI leading to unethical human behavior
The role of AI in leading humans to unethical behavior has recently been analyzed in terms of 4 roles: Advisor (makes recommendations), Role Model (models behavior that a user can adopt), Partner (collaborates with the user on a common goal), and Delegate (executes decisions on behalf of a user).31 As advisor or role model, an AI system can nudge humans toward unethical behavior by suggesting or recommending antisocial actions or actions harmful to others, exploiting psychological mechanisms that include conformity, complacency, inertia, and diffusion of responsibility (advisor), and observation, imitation, and conformity to social norms (role model). In an enabler or partner role, as in the example of a student using AI to generate essays, AI can corrupt behavior psychologically through collaboration or shared benefits and responsibilities. Using AI as a delegate can put distance between the human and the inflicted harm, providing anonymity, relieving conscience, and displacing responsibility.32
Many, perhaps most, of the AI systems being developed for deployment in healthcare are focused on improving quality of care and outcomes. However, especially in the domain of billing and claims review, AI could potentially be used to intentionally game or manipulate the system. There is also a risk of discrimination or denial of services when AI is used a priori to regulate access to care based upon insurance provider or to determine a patient’s ability to pay for required medical procedures and interventions.32
PRINCIPLES TO GOVERN AI
The proposed principles for AI governance are displayed in Table 1 and discussed in more detail in the following sections.
Table 1.
Summary of principles governing AI
| Rule | Principle | Definitions |
|---|---|---|
| AI systems | | |
| I. | Autonomy | AI systems must protect the autonomy of all people and treat them with courtesy and respect, including facilitating informed consent. |
| II. | Beneficence | AI systems must be helpful to people, modeled after compassionate, kind, and considerate human behavior. |
| III. | Nonmaleficence | AI systems shall “do no harm” by avoiding, preventing, and minimizing harm or damage to any stakeholder. |
| IV. | Justice | AI systems must provide equity in representation and in access to AI, its data, and its benefits. AI must support social justice. |
| V. | Explainability | AI developers must describe AI systems in context-appropriate language so that their scope, proper application, and limitations are understandable. |
| VI. | Interpretability | AI developers must endow their systems with the functionality to provide plausible reasoning for decisions or advice in accessible language. |
| VII. | Fairness | AI systems must be free of bias and must be nondiscriminatory. |
| VIII. | Dependability | AI systems must be robust, safe, secure, and resilient. Failure must not leave any system in an unsafe or insecure state. |
| IX. | Auditability | AI systems must provide and preserve a performance “audit trail,” including internal changes, model state, input variables, and output, for any system decision or recommendation. |
| X. | Knowledge management | AI systems must be maintained, including retraining of algorithms. AI models must have clearly listed creation, revalidation, and expiration dates. |
| Organizations deploying or developing AI | | |
| XI. | Benevolence | Organizations deploying or developing AI must be committed to using AI systems for positive purposes. |
| XII. | Transparency | AI must be recognizable as such or must announce its nature. AI systems must not incorporate or conceal any special interests and must deal even-handedly and fairly with all good faith actors. |
| XIII. | Accountability | AI systems must be subject to active oversight by the organization, and any risk attributed to AI must be reported, assessed, monitored, measured, and mitigated as needed. Complaints and redress must be guaranteed. |
| Special considerations | | |
| XIV. | Vulnerable populations | AI applied to vulnerable populations requires increased scrutiny to avoid worsening the power differential among groups. |
| XV. | AI research | Academic and industrial research organizations must continue to research AI to address inherent dangers as well as benefits. |
| XVI. | User education | AI developers have a responsibility to educate healthcare providers and consumers on machine learning and AI systems. |
Adherence to the Belmont principles
The four Belmont principles that traditionally apply to the practice of medicine, namely autonomy, beneficence, nonmaleficence, and justice, must be extended to AI systems in medicine (AIM). These principles may conflict with each other, thereby creating challenges for the development, implementation, and use of AIM.
Autonomy in the context of AI usually refers to the capability of AI to operate without human oversight. In the context of ethical principles, however, “autonomy” is further qualified to mean “protecting the autonomy of all people and treating them with courtesy and respect and facilitating informed consent.”33 To promote autonomy, especially with regard to AI, we are required to develop systems that provide the most accurate, true, and unbiased representation possible, without any effort to conceal or alter data or findings.
Beneficence in the context of AI implies that AI is designed explicitly to be helpful to people who use it, or on whom it is used, and to reflect the ideals of compassionate, kind, and considerate human behavior.
Nonmaleficence is the injunction to “Do No Harm,” that is, that every reasonable effort shall be made to avoid, prevent, and minimize harm or damage to any stakeholder.
Justice refers to equity in representation in and access to AI, data, and the benefits of AI. Justice also requires that fair access to redress and remedy be available in the event of harm resulting from the use of AI, as well as the affirmative use of AI to support social justice.34
Trustworthiness: Organizational and technical principles
There are two aspects to trust in the case of AI: (1) the organization deploying and operating the AI must be transparent, responsible, and accountable, and (2) the AI system itself and its data and output must be verifiable. This implies a number of principles for the organization (Benevolence, Transparency, and Accountability) and for the AI system (Explainability, Interpretability, Fairness, Dependability, Auditability, and Knowledge Management).
Organizational principles
Benevolence: Organizations that develop or deploy AI systems must intend to develop and use them for positive purposes (eg, improved health outcomes) rather than for negative purposes (eg, to further bias, exploit individuals, advance financial interests).
Transparency: An AI system may not be unfairly biased to the benefit of its host organization. An organization’s AI systems must not incorporate or conceal any special interests; they must deal even-handedly and fairly with all good faith actors. Transparency also requires that stakeholders understand that they are dealing with AI in the first place. Using telephone AI bots that do not declare their nature would thus not be allowable in medicine.19,21
Accountability: AI requires active oversight and a clear “reporting line” to the organization developing, deploying, and maintaining the AI system. Any risk deemed attributable to AI must be reported, assessed, monitored, measured, and mitigated as needed. There must be ongoing oversight of AI systems, as well as a clear and transparent path to identify the group or person that is responsible for their development, validation, and maintenance. Users, providers, patients, caregivers, and other stakeholders need the ability to lodge a complaint and receive proper redress, and escalation of a complaint should be possible. As patients may have limited understanding of the source of harm, it is essential that organizations and providers work as surrogates in the interest of patients.
Technical principles
Explainability: AI may not function as a “black box” to users or patients. While not all details of operation must be stated and understood, developers must declare the scope, proper application, and limitations of their work and must provide enough information about how their output is generally derived for it to be understandable to the healthcare providers applying the technology. This requires that, if requested, stakeholders including patients be provided with a role-appropriate (eg, lay language for patients) explanation that is clear, understandable, comprehensive, and straightforward about the AI system’s strengths, limitations, and risks, as well as information supporting realistic expectations.
Interpretability: AI must present plausible reasoning for decisions or advice, which must be presented in appropriately accessible language based on the stakeholder for the context of use.
Fairness: AI must be free of bias and must be nondiscriminatory.
Dependability: AI must be robust, safe, secure, and resilient; at worst, it “fails gracefully,” meaning that it does not leave any system in an unsafe or insecure state.
Auditability: AI must provide an “audit trail” of its performance, including internal changes. Transparency further requires that an audit log be preserved for any AI system, allowing an understanding of the model state, the input variables, and the resulting output for any system decision or recommendation, as well as the ability to assess changes over time.
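What one entry in such an audit log might contain can be sketched concretely. The record layout below is illustrative only, not a standard; the field names and the simple file-based store are assumptions:

```python
# Illustrative sketch of one audit-trail record for a single AI
# recommendation, capturing model state, inputs, and output.
# Field names and the append-only file store are assumptions, not a standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    timestamp: str
    model_id: str    # which model and version produced the output
    model_hash: str  # fingerprint of the exact model state at decision time
    inputs: dict     # input variables as seen by the model (JSON-serializable)
    output: dict     # the decision or recommendation rendered

def log_decision(model_id: str, model_bytes: bytes, inputs: dict, output: dict) -> AuditRecord:
    rec = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_id=model_id,
        model_hash=hashlib.sha256(model_bytes).hexdigest(),
        inputs=inputs,
        output=output,
    )
    with open("audit.log", "a") as f:  # append-only log, one JSON record per line
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec
```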
Knowledge Management: Developers must maintain AI systems, including retraining of algorithms on new data or new populations. The models powering AI must have clearly listed creation, revalidation, and expiration dates.
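A minimal sketch of such lifecycle metadata, with a guard that refuses to serve a model past its expiration date, might look as follows; the model name, dates, and the guard’s behavior are hypothetical:

```python
# Sketch of model lifecycle metadata carrying the dates the principle calls
# for, plus a guard that refuses to serve an expired model. Illustrative only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelMetadata:
    name: str
    created: date
    last_revalidated: date
    expires: date

def check_usable(meta: ModelMetadata, today: Optional[date] = None) -> None:
    """Raise if the model is past its expiration date."""
    today = today or date.today()
    if today > meta.expires:
        raise RuntimeError(f"{meta.name} expired on {meta.expires}; revalidate before use")

meta = ModelMetadata("sepsis-risk-v2", date(2021, 1, 15), date(2021, 9, 1), date(2022, 3, 1))
check_usable(meta, today=date(2021, 10, 1))  # passes; raises once past the expiry date
```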
These trustworthiness principles recognize that many AI systems will contain highly sophisticated and proprietary code that may not be shared directly to protect developers’ intellectual property. However, to protect users and patients, sufficient detail must be provided at appropriate times (eg, at initiation of an application and following changes) for users to understand the general principles behind the systems and their output and the ways in which they are monitored, retrained, and maintained.
Application of AI in vulnerable populations
AI must be subject to increased scrutiny when applied to vulnerable groups, including but not limited to minorities, incarcerated people, military personnel, children, elders, and individuals with disabilities, particularly in cases where such groups were underrepresented in the data used to train the AI. Increased scrutiny of AI is required to avoid contributing to the “digital divide” when there exists a power differential among the groups to which the model is applied (such as businesses and consumers). This is particularly critical in healthcare, where patients are often already in a vulnerable position due to the illness or injury for which they are seeking care, potentially compounded by unfamiliarity with medical jargon.
Understanding the context of use is also critical. For example, the lower use of the Internet for healthcare among older adults and lower socioeconomic status populations should be taken into consideration to avoid similar pitfalls for AI applications geared toward these populations.35
AI research
Healthcare must continue to conduct research into every aspect of AI, both to better understand the technology as it evolves and to identify the social and organizational structures necessary to ensure its humane and ethical application in society and the economy. A recent study of a commercial electronic health record (EHR) vendor’s sepsis model found that its transfer to another institution failed,36 suggesting that research into the generalizability and portability of AI models from one institution, population, or condition to another, and into model degradation over time, is critical.
Need for user education
AI is a rapidly evolving discipline that involves highly technical constructs that will be entirely unfamiliar to many end-users of such systems. Healthcare providers, interested patients, caregivers, and consumers must be able to understand and interpret articles and reports on ML and AI-based work.37 Achieving this objective may require AI developers and implementers to publish plain-language summaries of technical reports at the time their tools are applied in healthcare. User documentation should be provided in language that is appropriate to the targeted user base and context of use. Online courses, YouTube instructional videos, and websites hosted by universities or government agencies (eg, the National Institutes of Health) would also support education of the public.
AI LIFECYCLE
These high-level principles may be interpreted and applied at different stages of the lifecycle of an AI system.
Inception
At inception, the purpose(s) and scope of an envisioned AI system must be made explicit, and stakeholders must be identified and consulted. Specialized expertise may be necessary to assess the potential justification for and likely impact of an AI system. An AI solution will largely deliver the results that it was designed, trained, and developed to deliver. Therefore, it is necessary to ensure that these goals are ethical, transparent, appropriate to the needs at hand, and based upon stakeholder input. In addition, clear documentation of how concepts of interest are operationalized is required, especially in relation to sources of data for training and testing.
Development
Each stage in development—from acquisition of data for training and testing, through choice of development methodology, to continuing engagement of stakeholders and exhaustive “beta-testing”—must be rigorously managed and challenged as necessary to ensure adherence to the principles and standards. An essential step is understanding and characterizing the data used in development and training and careful analysis of the generalizability of the algorithm to other datasets/sources.
Deployment
Deployment must involve all relevant parties, including, in the case of medicine, patients, caregivers, clinicians, other biomedical professionals, administrators, and any other relevant party (see section on Need for user education). Significant educational efforts may be necessary to ensure smooth deployment through fuller appreciation of the technology and through management of expectations and apprehensions. Stakeholders must receive clear, understandable, and honest insights into the AI’s strengths, limitations, and risks; and users must be given realistic expectations. Users must understand that they are dealing with AI.
Algorithmovigilance38 describes postdeployment monitoring of AIM for serious failure, performance drift, scope creep, “off-label” use, and other problematic developments, in much the same way that drugs are subject to postmarket pharmacovigilance. Therefore, an AI system must have an audit trail that can be reviewed such that its performance can be continuously monitored, with escalation to a responsible authority when deviation from expected performance occurs. Accountable parties at AI-sponsoring/deploying institutions should be identified to perform this monitoring function and to regularly report results to an oversight body. The precise nature and level of oversight required may vary based on the inherent level of risk and the context of use, for example, whether the AI is fully autonomous (such as AI embedded in an implanted device) or subject to user input, but must always be appropriate to the context in which the system is deployed.
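One simple realization of this kind of monitoring, sketched below, tracks a rolling performance metric and escalates when it falls below an agreed floor; the floor, the window size, and the alerting behavior are all assumptions chosen for illustration:

```python
# Sketch of a simple postdeployment performance monitor: track a rolling
# accuracy over recent cases and escalate when it drifts below a floor.
# The floor, window size, and alerting hook are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, floor: float = 0.80, window: int = 500):
        self.floor = floor
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        """Log one prediction/outcome pair; escalate if the window is full and degraded."""
        self.outcomes.append(int(prediction == actual))
        if len(self.outcomes) == self.outcomes.maxlen and self.rate() < self.floor:
            self.escalate()

    def rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def escalate(self) -> None:
        # In practice: notify the accountable party and open an incident review.
        print(f"ALERT: rolling accuracy {self.rate():.2f} below floor {self.floor}")
```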
Maintenance
Maintenance and knowledge management are critical but frequently neglected parts of the system lifecycle. As populations, diseases, and treatments change and new discoveries reshape medical knowledge, it becomes necessary to update the training models, which are at the very heart of ML.39 For example, an AI system employing a model of COVID-19 in 2021 would be very different from one trained in 2020. At the very least, the old 2020 model would require revalidation using 2021 data. The models powering AI need to have clearly listed creation, revalidation, and expiration dates.
Decommissioning
All systems are eventually retired, and the process of closure, curation, and maintenance of records at the AI system’s retirement must be attended to as rigorously as all the earlier stages. Oversight of this aspect of AI is important because the incentives that exist for expenditure of resources on the development and deployment of AI may not exist for its necessary oversight and decommissioning.
As AI systems may render recommendations that affect patient care and that require an audit trail for medical-legal reasons, decommissioning must incorporate means to preserve a historical record, including system inputs and outputs. For AI used in pediatric patients, this may require that the record be kept until the age of maturity (18 years) plus 3 years, that is, to age 21.
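That retention rule reduces to simple date arithmetic; the helper below is a hypothetical illustration (actual retention periods are set by jurisdiction and institutional policy):

```python
# Hypothetical sketch of the pediatric retention rule described above:
# keep the audit record until the patient turns 18, plus 3 more years.
# Retention periods in practice are set by jurisdiction and policy.
from datetime import date

def retention_end(date_of_birth: date, majority: int = 18, extra_years: int = 3) -> date:
    # Ignores the Feb 29 edge case for simplicity.
    return date_of_birth.replace(year=date_of_birth.year + majority + extra_years)

print(retention_end(date(2010, 5, 17)))  # -> 2031-05-17 (patient age 21)
```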
CONCLUSION
Medical knowledge, diagnosis, and treatment in the 21st century likely will outpace any earlier historical period, and AI will play an important role in this progress. AI has the potential to make healthcare safer, more effective, less costly, and even more equitable, but only if AI is introduced judiciously, in the appropriate environments, and in accordance with the principles outlined herein. Without adherence to these principles, including algorithmovigilance, we risk practicing bad or unjust medicine to the detriment of our patients, providers, institutions, and society. It is our responsibility to monitor AI and supervise its ethical, effective, and appropriate use in medicine.
AUTHOR CONTRIBUTIONS
CUL and AES prepared the initial draft. All authors contributed to the development of the paper. AES edited the final draft, and all authors reviewed and approved it.
ETHICAL APPROVAL
The AMIA Board of Directors formally approved this paper on December 9, 2021.
CONFLICT OF INTEREST STATEMENT
None declared.
REFERENCES
- 1. Topol EJ. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY: Basic Books; 2019.
- 2. Tingle J. The computer says no: AI, health law, ethics and patient safety. Br J Nurs 2021; 30 (14): 870–1.
- 3. Saheb T, Saheb T, Carpenter DO. Mapping research strands of ethics of artificial intelligence in healthcare: a bibliometric and content analysis. Comput Biol Med 2021; 135: 104660.
- 4. Merriam-Webster. Artificial intelligence. https://www.merriam-webster.com/dictionary/artificial%20intelligence. Accessed August 27, 2021.
- 5. Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020; 323 (6): 509–10.
- 6. Oszwald A, Wasinger G, Pradere B, Shariat SF, Compérat EM. Artificial intelligence in prostate histopathology: where are we in 2021? Curr Opin Urol 2021; 31 (4): 430–5.
- 7. Lee M, Wei S, Anaokar J, Uzzo R, Kutikov A. Kidney cancer management 3.0: can artificial intelligence make us better? Curr Opin Urol 2021; 31 (4): 409–15.
- 8. Kruszka P, Addissie YA, McGinn DE, et al. 22q11.2 deletion syndrome in diverse populations. Am J Med Genet A 2017; 173 (4): 879–88.
- 9. Bala S, Keniston A, Burden M. Patient perception of plain-language medical notes generated using artificial intelligence software: pilot mixed-methods study. JMIR Form Res 2020; 4 (6): e16670.
- 10. Guo Y, Zhang Y, Lyu T, et al. The application of artificial intelligence and data integration in COVID-19 studies: a scoping review. J Am Med Inform Assoc 2021; 28 (9): 2050–67.
- 11. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019; 366 (6464): 447–53.
- 12. Petersen C, Smith J, Freimuth RR, et al. Recommendations for the safe, effective use of adaptive CDS in the US healthcare system: an AMIA position paper. J Am Med Inform Assoc 2021; 28 (4): 677–84.
- 13. Wajcman J. Automation: is it really different this time? Br J Sociol 2017; 68 (1): 119–27.
- 14. Damiani J. A voice deepfake was used to scam a CEO out of $243,000. Forbes Magazine. https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/. Accessed September 9, 2021.
- 15. Robitzski D. This grad student used a neural network to write his papers. Futurism. https://futurism.com/grad-student-neural-network-write-papers. Accessed April 21, 2020.
- 16. Cook J. A powerful new deepfake tool has digitally undressed thousands of women. HuffPost. https://www.huffpost.com/entry/deepfake-tool-nudify-women_n_6112d765e4b005ed49053822. Accessed August 11, 2021.
- 17. Caldwell M, Andrews JTA, Tanay T, et al. AI-enabled future crime. Crime Sci 2020; 9 (1): 14. https://crimesciencejournal.biomedcentral.com/articles/10.1186/s40163-020-00123-8. Accessed August 27, 2021.
- 18. YouTube. Meet the robot telemarketer who denies she's a robot - part 1. https://www.youtube.com/watch?v=22ZaKbxmEMA. Accessed January 16, 2022.
- 19. Sezgin E, Huang Y, Ramtekkar U, Lin S. Readiness for voice assistants to support healthcare delivery during a health crisis and pandemic. NPJ Digit Med 2020; 3: 122.
- 20. McGreevey JD, Hanson CW, Koppel R. Clinical, legal, and ethical aspects of artificial intelligence-assisted conversational agents in health care. JAMA 2020; 324 (6): 552–3.
- 21. Duhigg C. How companies learn your secrets. The New York Times. https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html. Accessed August 27, 2021.
- 22. Skeem JL, Lowenkamp CT. Risk, race, and recidivism: predictive bias and disparate impact. Criminology 2016. https://onlinelibrary.wiley.com/doi/pdf/10.1111/1745-9125.12123. Accessed August 27, 2021.
- 23. Hogan NR, Davidge EQ, Corabian G. On the ethics and practicalities of artificial intelligence, risk assessment, and race. J Am Acad Psychiatry Law 2021; 49 (3): 326–34.
- 24. Dastin J. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G. Accessed August 27, 2021.
- 25. Vigdor N. Apple Card investigated after gender discrimination complaints. The New York Times. https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html. Accessed August 27, 2021.
- 26. Baker J. Bias in insurance underwriting: does AI help or hurt? https://silvervinesoftware.com/2020/02/18/use-of-ai-in-insurance-underwriting/. Accessed August 27, 2021.
- 27. Ali M, Sapiezynski P, Bogen M, Korolova A, Mislove A, Rieke A. Discrimination through optimization. Proc ACM Hum-Comput Interact 2019; 3 (CSCW): 1–30.
- 28. Shankar S, Halpern Y, Breck E, Atwood J, Wilson J, Sculley D. No classification without representation: assessing geodiversity issues in open data sets for the developing world. https://arxiv.org/pdf/1711.08536.pdf. Accessed August 27, 2021.
- 29. Buolamwini J, Gebru T. Gender shades: intersectional accuracy disparities in commercial gender classification. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf. Accessed August 27, 2021.
- 30. Nordling L. A fairer way forward for AI in health care. Nature 2019; 573 (7775): S103–5.
- 31. Köbis N, Bonnefon JF, Rahwan I. Bad machines corrupt good morals. Nat Hum Behav 2021; 5 (6): 679–85.
- 32. Wired.com. Could a heavy dose of AI-powered analytics ease medical billing pain? https://www.wired.com/wiredinsider/2019/01/heavy-dose-ai-powered-analytics-ease-medical-billing-pain/. Accessed August 27, 2021.
- 33. Wikipedia. Belmont Report. https://en.wikipedia.org/wiki/Belmont_Report. Accessed August 27, 2021.
- 34. Rogers WA, Draper H, Carter SM. Evaluation of artificial intelligence clinical applications: detailed case analyses show value of healthcare ethics approach in identifying patient care issues. Bioethics 2021; 35 (7): 623–33.
- 35. Calixte R, Rivera A, Oridota O, Beauchamp W, Camacho-Rivera M. Social and demographic patterns of health-related internet use among adults in the United States: a secondary data analysis of the health information national trends survey. Int J Environ Res Public Health 2020; 17 (18): 6856.
- 36. Wong A, Otles E, Donnelly JP, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med 2021; 181 (8): 1065.
- 37. Liu Y, Chen PC, Krause J, Peng L. How to read articles that use machine learning: users’ guides to the medical literature. JAMA 2019; 322 (18): 1806–16.
- 38. Embi PJ. Algorithmovigilance—advancing methods to analyze and monitor artificial intelligence-driven health care for effectiveness and equity. JAMA Netw Open 2021; 4 (4): e214622.
- 39. Doshi-Velez F, Perlis RH. Evaluating machine learning articles. JAMA 2019; 322 (18): 1777–9.
