2021 Jan 7;18(1):121–139. doi: 10.1007/s11673-020-10080-1

Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine

Mark Henderson Arnold
PMCID: PMC7790358  PMID: 33415596

Abstract

The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully-autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments of patient and physician autonomy. The legal relationship between AI and its users cannot presently be settled, and progress in AI and its implementation in patient care will necessitate an iterative discourse to preserve humanitarian concerns in future models of care. This paper proposes that physicians should neither uncritically accept nor unreasonably resist developments in AI but must actively engage and contribute to the discourse, since AI will affect their roles and the nature of their work. One’s moral imaginative capacity must be engaged in the questions of beneficence, autonomy, and justice of AI and whether its integration in healthcare has the potential to augment or interfere with the ends of medical practice.

Keywords: Artificial intelligence, Machine learning, Ontology, Epistemology, Ethics, Medical practice

Introduction

“Artificial intelligence (AI), in general while not well defined, is the capability of a machine to imitate intelligent human behavior” (Mintz and Brodie 2019, 73) or the use of specifically tasked computer software to undertake tasks usually necessitating the intelligence of the human brain (Bærøe et al. 2020). Such task-directed systems may lexically “think” and “act” in a “human” manner and, further, may even “think” and “act” rationally (Stanila 2018).

Broadly, AI integrates large volumes of data through which knowledge and experience in problem-solving is gained at a rate and volume impossible for humans and is employed in medicine to achieve high levels of accuracy in the predictive tasks of diagnosis, prognosis, and therapeutics and, hence, to improve healthcare. As distinct from being a simple data repository and administrative (appointment and billing) system, as in electronic medical records (EMRs), AI in medicine (AIM) has the capacity to improve its performance through “auto-learning” in real-world applications (Reddy et al. 2019).

AIM can be physical, such as in robotic surgery, or virtual, relating to digital image manipulation, neural networks, and machine and deep learning (Hamet and Tremblay 2017). Examples include the following:

  1. Digital imaging. AIM is well-established for tasks of image processing and interpretation, as may occur in analysis of radiological images (Zhou et al. 2000), skin lesions (Ercal et al. 1994), or retinal photography (Gardner et al. 1996).

  2. Creating artificial neural networks, analogous to human decision-making processes, employing mathematical and statistical data-modelling processes to deal algorithmically with unmanageably complex problems (Baxt 1995).

  3. Machine learning, whereby computers “learn” from a process of repetitive data examination using predetermined processes to answer a particular question. It is an iterative process that is critically dependent on the integrity of a training data set if it is to generate reliable results or predictions relevant to diagnosis and treatment (Schwarzer et al. 2000); a minimal sketch following this list illustrates this dependence. Machine learning can lead to deep learning.

  4. Deep learning, whereby machine learning contemporaneously merges multiple data sets which are iteratively evaluated in sequential “convolutional neural networks.” These operational steps may be invisible to both developers and users (Lakhani and Sundaram 2017; Schirrmeister et al. 2017).
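The following sketch is illustrative only and is not drawn from any of the cited studies; the synthetic data set, the logistic-regression learner, and the 30 per cent label-corruption rate are assumptions chosen for the example. It shows the dependence flagged in item 3: held-out accuracy falls once the integrity of the training labels is compromised.

```python
# Illustrative sketch only: a toy "diagnostic" classifier trained on clean
# versus label-corrupted data, showing how performance degrades when the
# integrity of the training set is compromised. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a diagnostic data set (features -> disease label).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def held_out_accuracy(train_labels):
    """Train on the supplied (possibly corrupted) labels; test on untouched data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

# Corrupt 30 per cent of training labels to mimic unreliable inputs
# (for example, miscoded or duplicated records).
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

print("clean training data:    ", round(held_out_accuracy(y_train), 3))
print("corrupted training data:", round(held_out_accuracy(noisy), 3))
```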

AIM is being implemented in a number of ways, most recognizably in:

  1. assessing the risk of disease onset;

  2. making estimates of treatment success / assessing efficacy;

  3. managing or alleviating complications of treatment;

  4. assisting ongoing patient care; and

  5. clinical research and drug development. (Becker 2019)

In all of these instances the “concept of using AI in medicine should be as a decision support system with the final action being from humans” (Mintz and Brodie 2019, 79), alternatively “speeding up or aiding human investigation” (Ching et al. 2018, 3). Despite the overtly positive valence evident in the extensive AIM literature, caveats and limitations exist, typically around the integrity of data inputs from the simplest to the most complex applications (Min et al. 2016). It is evident that human inputs to and control over decision support systems must be “meaningful” to deal with the ethical consequences of degrees of augmentation of human agency in patient interactions (Braun et al. 2020).

It has been proposed that these developments may result in an embodied form of AIM where “natural language conversational agents” may be capable of passing the “Turing test,” being indistinguishable from a human agent in health encounters, and that this would assist physicians, empower patients, and allow nudging towards positive health behaviours (Laranjo et al. 2018). Yet, the potential for unintended consequences relating to the safety (Ash et al., 2004; Fraser et al., 2018), efficacy (Becker 2019), rights (Stanila 2018), and procedural and distributional justice (Gill 2018; Reddy et al. 2019; Risse 2019) of this modality and other forms of AIM already incorporated into routine practice requires careful assessment (Schönberger 2019). Our trust in AIM must be justifiable and justified (Bærøe et al., 2020). It has been cautioned that AIM algorithms—if uncritically adopted—may “become the repository of the collective medical mind” (Char et al., 2018, 981).

Healthcare comprises the largest area of AI investment since 2016 (Buch et al., 2018) and is characterized as the pre-eminent means of progress in public and individual health (Fogel and Kvedar 2018), with the consequence that medical technology increasingly affects “not only the way doctors encounter and treat patients but also how they [patients] understand their ailments and complaints” (Hoffman et al. 2018, 246).

AIM may fundamentally change the roles of humans working in medical disciplines reliant on pattern recognition skills, which may be drastically reconfigured or rendered potentially obsolete (Fogel and Kvedar 2018; Coiera 2018, 2019), as in the following four examples (examples 2 to 4 also indicate aspects of AI where specific ethical caution is needed):

  1. It is posited that robotic surgery may replace much human surgery by the late 2050s (Fogel and Kvedar 2018).

  2. A patient-facing digital symptom checking programme is claimed to outperform “the average human doctor on a subset of the Royal College of General Practitioners exam” (Fraser et al., 2018). Yet this conclusion was based on a flawed validation process (Goldhahn et al., 2018) and is rejected by many patients and their advocates as hyperbole (Mittelman et al., 2018).

  3. Machine learning appears to outperform psychiatrists in suicide prediction (Passos et al. 2016; Walsh et al., 2017), raising the possibility of ethical justification for increased remote electronic surveillance of digitally connected “e-patients” at risk (Fonseka et al., 2019).

  4. It has been proposed that dermatologists may become obsolete in the diagnosis of skin malignancy, yet it has been established that errors in AI arise from misinterpretation of lesions in persons with darker skin, potentially perpetuating health inequities (Adamson and Smith 2018).

The technologization of medicine is a contemporary positivist metaphor (Salvador 2018) that demands scrutiny since it will affect patients, current practitioners, and students, and all these groups must come to a deep understanding of “the difference between what a machine says and what we must do” (Coiera 2019, 166).

The “AIM argument” has been insufficiently teased out in relation to the soundness of its premises, and these premises require further enquiry to objectively assess how physicians should respond. This paper highlights the identified and unidentified epistemic, ontologic, ethical, legal, and sociopolitical challenges that AIM poses for the contemporary physician and their patients.

Ontologic and Epistemic Issues of AIM

Ontological Differences

The meaning of ontology in AI differs from that in philosophy; rather than interrogating the nature of being, existence, categorization, and objective reality, ontology in AI pertains to the development of “machine-processable semantics of information sources that can be communicated between different agents (software and humans)” (Fensel 2001). AI “ontology” describes a machine-readable, precisely defined, and constrained model of concepts relating to a real-life phenomenon that permits domains of data to be constructed that “capture” knowledge, that are then manipulated algorithmically to permit “knowledge sharing and reuse” (Fensel 2001, 11).
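As a concrete illustration (a toy sketch, not drawn from Fensel, with invented concept names), such an “ontology” can be as small as a set of machine-readable subject–predicate–object triples over clinical concepts, which any software agent holding the same triples can traverse and reuse:

```python
# Toy "ontology": a constrained, machine-readable model of a clinical domain
# expressed as subject-predicate-object triples. Concept names are invented.
TRIPLES = {
    ("Melanoma", "is_a", "SkinMalignancy"),
    ("BasalCellCarcinoma", "is_a", "SkinMalignancy"),
    ("SkinMalignancy", "is_a", "Disease"),
    ("Melanoma", "diagnosed_by", "Dermatoscopy"),
    ("Dermatoscopy", "is_a", "ImagingProcedure"),
}

def ancestors(concept):
    """Follow 'is_a' links to recover every broader category of a concept."""
    found, frontier = set(), {concept}
    while frontier:
        parents = {o for (s, p, o) in TRIPLES if p == "is_a" and s in frontier}
        frontier = parents - found
        found |= parents
    return found

# Any agent holding these triples can answer the same structured queries.
print(ancestors("Melanoma"))   # {'SkinMalignancy', 'Disease'}
print([s for (s, p, o) in TRIPLES if p == "is_a" and o == "SkinMalignancy"])
```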

“E-patients” are “extended” individuals informed by and responsive to both physical and virtual communities of other “e-people”—relatives and friends who access information on their behalf (Kovachev et al., 2017)—whose decisions can be influenced by the “wisdom” or opinions of unrelated or previously unconnected persons with whom their opinions and beliefs are shared (Colineau and Paris 2010).

AIM can be instantiated as an “expert iDoctor,” being an artificial member of the healthcare team “theoretically capable of replacing the judgment of primary care physicians” (Karches 2018, 91), as exemplified in the following headlines from IEEE Spectrum, which personify technology as an active agent:

Laser Destroys Cancer Cells Circulating in the Blood. The first study of a new treatment in humans demonstrates a non-invasive, harmless cancer killer;

Smart Knife Detects Cancer in Seconds

By excluding mention of human agency, these statements imply autonomous machine function and potentially denigrate human capacities and skills (Karches 2018). The actors in a clinical encounter thus become the patient, their various influences, the physician, and an instantiated “machine entity” in a therapeutic triad (Swinglehurst et al. 2014).

By de-emphasizing human agency, instantiated AIM raises the question of a new ontological argument supporting the existence of AIM as a “higher being.” The metaphysical non-inferiority of AI was demonstrated in 2016 when an AI programme constructed a valid refutation of Gödel’s ontological argument, thereby demonstrating that “artificial intelligence systems—particularly higher-order automated theorem provers—are capable of assisting in the discovery and elucidation of new and philosophically relevant knowledge” (Benzmüller and Paleo 2016). If a machine can rationally refute a proof of the existence of God, perhaps “the singularity” is closer, since machine rationality is, at least in this task, non-inferior to human rationality.

Epistemological Differences

The epistemology of AIM revolves around the deployability of parallel “learner” and “classifier” algorithms, which probabilistically transform data into knowledge used to generate predictions. This raises epistemic concerns (illustrated in miniature by the sketch following the list below) relating to matters such as

  1. biased training data (for instance, relating to race and gender);

  2. inconclusive correlations (for instance, predicting defendant recidivism);

  3. intelligibility (“black box” inexplicable functions);

  4. predictive inaccuracy (for instance, discharging asthma patients with pneumonia from hospital); and

  5. discriminatory outcomes (predicting defendant recidivism, related to (1)) (Schönberger 2019).
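A minimal sketch can make the link between concerns (1) and (5) concrete. It is illustrative only and not drawn from Schönberger; the groups, the feature shift, and the logistic-regression learner are invented for the example. A “learner” fitted to a training set in which one group is barely represented yields a “classifier” whose error rate is markedly higher for that group.

```python
# Illustrative sketch only: skewed training data producing group-dependent
# error rates. Groups and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """One feature whose relationship to the label is shifted per group."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift > 0).astype(int)
    return x, y

# Group A dominates the training set; group B is barely represented.
xa, ya = make_group(1000, shift=0.0)
xb, yb = make_group(20, shift=1.5)
X_train = np.vstack([xa, xb])
y_train = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X_train, y_train)   # the "learner"

# The resulting "classifier" is evaluated on fresh samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    x, y = make_group(500, shift)
    print(name, "accuracy:", round(clf.score(x, y), 3))
```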

Hence there remains fundamental disquiet about the potential agency that AIM may be delegated to have over human autonomy since it is “not appropriate to manage and decide about humans in the same way we manage and decide about objects or data, even if this is technically conceivable” (European Group on Ethics in Science and New Technologies 2018, 9).

Epistemic challenges also arise for students and physicians related to the use of information by e-patients (Kaczmarczyk et al. 2013; Masters 2017; Osborne and Kayser 2018; Grote and Berens 2019); how physicians relate to such patients, with resultant challenges to historical conceptions of privacy and confidentiality; unanticipated effects on healthcare equity; whether there is a discernible “medical IT ethics”; and whether Big Data can be employed for overtly coercive behavioural modification via “hypernudging” (Yeung 2017) under the guise of qualified paternalism (Souto-Otero and Beneito-Montagut 2016; Grote and Berens 2019).

Techne and Phrenos in AIM

Collecting, correctly analysing, and deploying information (Aristotelian techne) is not equivalent to possessing knowledge and judgement based on experience and expertise to achieve a good purpose (phrenos). The democratization of information and challenges to the sociological role of experts in modern “knowledge societies” means that physicians are no longer the sole custodians and mediators of a “body of knowledge and its application” (Grundmann 2017, 27). Expert physicians deploy scholarly and generalizable propositional knowledge. Non-propositional knowledge is derived from experience and cognitive resources and may, with time, become “more” propositional (for instance, through the Delphi approach) and then dynamically inform practice (Rycroft-Malone et al. 2004).

In comparison to most physicians, lay persons typically use less granular or rigorous propositional knowledge and various sources of non-propositional knowledge (some intensely personal and value-laden), some derived from sources such as relatives or a distant network of web-based contacts. Such sources of information, including unverified opinions and advice, are afforded high degrees of salience simply through the individual’s efforts and engagement with information-seeking (Gray et al. 2005), which can be weighed against medical expert information; the result may be trust in or mistrust of medical opinion and advice.

Trust

The erosion of implicit trust in medicine and distrust per se predates the internet-driven expansion of information (Mechanic and Schlesinger 1996; Meyer et al. 2008). A lack of personalized medical care can foster patients’ trust in online information which appears to be personalized and salient when accessed through non-Bayesian search engines specifically designed and patented to resonate with one’s interests and beliefs (Merriman and O’Connor 1999; Krishan, Chang, and Lambert 2002; Mason et al. 2002; Kublickis 2007). Personalized, salient “misinformation” may enable harmful beliefs and behaviours, such as not interfering “with the natural process of inflammation” (Ritschl et al. 2018) and vaccination refusal (Davis 2019; Dyer 2019; Heywood 2019), at odds with evidence-based best practice.

Expertise

Expertise is ascribed to a person through the process of consultation; the status of social and political stakeholders may be misunderstood, since “not all stakeholders are per se experts” (Grundmann 2017, 45). Lay persons as “influencers” can “claim” expertise (Leach 2019), and patronage ascribes expertise to them, affirming the consequent. In contrast, licensure as a (medical) expert has an overtly public objective of independently certifying that licenced experts possess particular knowledge, deploy certain skills, and conform to certain behavioural standards (LaRosa and Danks 2018). However, licensure extends to professionals’ use of devices but not the devices themselves. The use of AI may affect and/or erode trust in the autonomy of doctors as the controllers of AI rather than simply being the professional group “licenced” for its use, devaluing the input of the physician (LaRosa and Danks 2018; Karches 2018).

Automation Bias and Complacency

With automation bias, humans preferentially accept automated/computerized recommendations as a “heuristic replacement of vigilant information seeking and processing” (Mosier and Skitka 1996, 203). Delegating to clinical decision support systems may enable false-positive errors of commission (inappropriately acting on incorrect advice) and false-negative errors of omission (inaction due to non-notification) (Goddard, Roudsari, and Wyatt 2011).

In contrast, automation complacency arises when humans ascribe higher accuracy and lower error rates to technology compared to humans, and insufficiently scrutinize technologies’ operations (Cohen and Smetzer 2017). In both situations, AIM is afforded an unwarranted expertise which has “no basis for generalisation to truly novel situations, since it is simply grounded in past experiences” when persons lack “understanding of the ‘mechanisms’ by which the behavior or actions are generated” (LaRosa and Danks 2018, 2). In medical encounters, the third element is the trust relationship between doctor (and related institutions) and patient, which is affected by their trust in the predictive veracity of AIM (LaRosa and Danks 2018). Time constraints, cognitive load, user cognitive style, accountability frameworks, and heavy workload—typical of many medical encounters—are established drivers of automation bias (Goddard, Roudsari, and Wyatt 2011). Automation bias is particularly problematic in instances where there is no true “cut-point” between normality and abnormality (Goldenberg, Moss, and Zareba 2006).

The Veracity or “Truthfulness” of AIM Prediction Models

The performance of any AIM is critically sensitive to the fidelity of its data inputs, as exemplified by false-negative misdiagnoses of skin lesions in persons with pigmented skin (Adamson and Smith 2018). This reflects inappropriate overfitting to the training data (Coiera 2019), also evident in other decision-support programmes (Kim, Coiera, and Magrabi 2017; Fraser, Coiera, and Wong 2018; Coiera 2018, 2019). It raises the question of whether, even for unidimensional tasks, the physical interaction between physician and patient could or should be undertaken by a robotic physician, which seems unanswerable until the overfitting problem is better characterized (Gichoya et al. 2018).
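Overfitting itself can be shown in miniature. The sketch below is illustrative only and unrelated to the cited dermatology or decision-support studies; the sine-curve data and polynomial degrees are chosen purely for the example. A high-degree polynomial fitted to a small noisy sample reproduces its training points almost exactly yet generalizes poorly to unseen data, which is the failure mode at issue.

```python
# Illustrative sketch only: a model that fits its training data too closely
# (low training error) predicts poorly on unseen data (high test error).
import numpy as np

rng = np.random.default_rng(0)
true_f = np.sin   # the "real" relationship the model should recover

x_train = np.sort(rng.uniform(0, 3, 12))
y_train = true_f(x_train) + rng.normal(scale=0.1, size=x_train.size)
x_test = np.linspace(0, 3, 200)
y_test = true_f(x_test)

for degree in (3, 11):
    coeffs = np.polyfit(x_train, y_train, degree)        # fit the model
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```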

Patients’ Views on What Counts for Knowledge

Persons in their teens in the early millennium (net health consumers after 2030) identified the Internet as their primary source of health information; this information gains salience through the act of personalized searching (Gray et al. 2005). In 2011, 80 per cent of U.S. adult internet users sought information about at least one of fifteen healthcare topics, 23 per cent of social network users followed friends’ health updates, and routine “memorialization” of persons with certain health conditions occurs through social media (Fox 2011). Furthermore, “searching for health information on the Internet has a positive, relatively large, and statistically significant effect on an individual’s demand for health care” (Suziedelyte 2012, 1828). This behaviour has the potential to lead to poor quality care driven by patient satisfaction metrics unrelated to quality outcomes (Arnold, Kerridge, and Lipworth 2020), presenting physicians with “new issues on how to manage the information, make good clinical decisions, and impart that information back to individuals with disease” (Deane 2019, xx).

Patients are also presented with challenges. Since individuals have increasingly free access to the data within their medical record and an ability through internet sources to interpret their results, there is a risk of misinterpretation (Fraccaro et al. 2018) and distress (Deane 2019). Patients’ self-directed testing raises problems when symptoms are ignored in the presence of a negative self-determined test (Ickenroth et al. 2010). It is unclear whether patients are prepared to (or should) assume responsibility for harms that arise. Finally, medical practitioners may also lack understanding of the implications of test results and particularly whether testing even advances patients’ interests (Arnold 2019).

Ethical Considerations of AIM

The Development of “Machine/AI ethics” and Health

“Machine ethics” and how intelligent systems interact with humans is not simply the “accidental dilemma” of accidents between autonomous vehicles and humans, which are problematic (Fleetwood 2017) in ways that the traditional “trolley dilemma” is not. Autonomous vehicle behaviour is informed and governed by several forms of ethical decision-making algorithms (Leben 2017). In healthcare, there are broader issues with automated resource allocation, prioritization, benefit/loss dilemmas, and consequent existential threats (Kose and Pavaloiu 2017) arising when risk assessment algorithms are employed in decision-making (Rasmussen 2012; Nagler, van den Hoven, and Helbing 2018). Human input is needed as an active veto (Verghese, Shah, and Harrington 2018) to avoid automated decisions resulting in unfair outcomes (Broome 1990).

The potential for Big Data to personalize preferences and direct consumers’ attention into or out of what has been described as a locus of self-resonance—the “echo chamber” or “filter bubble”—also permits the possibility of “Big Nudging” (Souto-Otero and Beneito-Montagut 2016) by employing personalized strategies to operationalize health and other governmental policies, affecting an individual’s autonomy through coercion, particularly when data from health devices linked to the “internet of things” are covertly reported to (for instance) health insurance decision algorithms (Bronsema et al. 2015; Helbing et al. 2019).

Moral Enhancement Through AIM, Distributive Justice, and Libertarian Paternalism

It is posited that ethically orientated and directed AI may be a partial solution to the contemporary “moral lag problem” (Klincewicz 2016).

Autonomy-enhancing agent-specific augmentation of moral judgement might overcome an agent’s inherent limitations, whereby “moral AI” may promote collective “moral distributive justice” (Savulescu and Maslen 2015) through the elimination of patients’ or physicians’ arbitrary decisions based on racial, gender, or other stereotypes and unconscious biases (Klincewicz 2016). Robotic “moral nudging” has also been proposed (Borenstein and Arkin 2016); however, moral nudging by any agent can be considered a form of libertarian paternalism (Hausman and Welch 2010) at best, or outright paternalism at worst. If, as a result of any form of nudging, the range of choices available to an agent is neither constrained, forbidden, nor inherently “troublesome,” then, rather than being coercive, nudging can steer agents away from “poor” choices affected by social/peer pressure and framing, heuristics, lack of due attention, inappropriate optimism, overconfidence, loss aversion, bias to the status quo, inherent resistance to change, and simple error (Sunstein and Thaler 2003). Crucially, nudging should not advance the interests of a third party, and in this sense, if algorithmic decisions are likely to improve the well-being of an agent, it must be considered whether this is the primary aim or a by-product of an AIM system implemented by a healthcare organization or government instrumentality. If AIM primarily serves these entities’ ends, AIM potentially constrains rather than augments an agent’s autonomy and/or acts as a coercive agent.

There is a significant burden of proof incumbent on machine ethicists to justify the development of artificial moral agents (AMAs) over and above the fact that their development is simply possible (van Wynsberghe and Robbins 2019). Van Wynsberghe and Robbins (amongst others) emphasize the complexity around:

  • the “inevitability” of AMAs, that AMAs can be relied upon to prevent harm occurring to humans and the related notion that harm is encompassed solely by “safety”;

  • the spurious conflation of the “black box” reasoning process of AIM as being both akin to and yet superior to the unpredictability of human decision-making;

  • the stipulation that AMAs must not be used for immoral purposes; and

  • a rejection of concerns related to “moral deskilling” as described.

Without specific reference to the term AMA, Biller-Andorno and Biller recently proposed that in certain situations of medical uncertainty the capacity for augmented moral imagination and ethical insight may be better provided by machine learning (Biller-Andorno and Biller 2019).

As discussed, data inputs are crucial for appropriate outcomes of machine learning, and the quality of data inputs arising from the electronic medical record (EMR) or other sources is likely to be insufficient for nuanced ethical guidance. It is well documented that information in the EMR is rarely if ever questioned after it is first obtained, as witnessed by the near-ubiquitous practice (Tsou et al. 2017) and related critique of “cut and paste” or “cloned” entries in EMRs (Hirschtick 2006; Hartzband and Groopman 2008; Thielke, Hammond, and Helbig 2007; O’Donnell et al. 2009; O’Malley et al. 2010; Schenarts and Schenarts 2012; Thornton et al. 2013; Weis and Levy 2014). If incorrect, mutable, or absent, data inputs will capture facets of, rather than a complete, “critical reality” and suffer from “distortions of data” (Smith and Koppel 2013). If narrative in the EMR is replaced by structured data codes (Wasserman 2011), this introduces bias and inaccuracies into any subsequent electronic determinations and recommendations based on this information.

The inherent risk of AIM prediction was evident with IBM’s oncology support software, particularly the fact that the “system was trained using synthetic data and was not refined enough to interpret ambiguous, nuanced, or otherwise ‘messy’ patient health records,” was reliant exclusively on U.S. medical protocols, and hence led to “missed diagnoses and erroneous treatment suggestions, breaching the trust of doctors and hospitals” (Cowls et al. 2019, xx).

Purportedly “stable” patterns in a person’s prior decision-making may be difficult to substantiate, and it is suggested that comparison and extrapolation from population data—the “wisdom of crowds”—will appropriately inform AMAs (Biller-Andorno and Biller 2019), implying that decisions may be validly based on populism. Unless ethical decision-making is to be replaced by automated argumentum ad populum, it is inappropriate to remove “the bias [constraints] of human knowledge” (Biller-Andorno and Biller 2019, 1482).

These authors further state that “future generations may find it quite unthinkable to do entirely without a GPS. Perhaps the role of AI-assisted ethical decision making will be similar” (Biller-Andorno and Biller 2019, 1483). However, this is a poor analogy since there is no coherent link between the skills relevant to using GPS, which relate to unambiguous, verifiable outcomes, and the moral imagination and ability to reflect critically that characterizes ethical decision-making. AMAs may only be able to interpret moral situations once conventional deliberations have arrived at normative views and approximations. They may have limited applicability in truly novel situations and must not perpetuate or entrench biases and inequities such as may occur when AI is used in employment decisions (Caplan and Friesen 2017; Steels 2018; Israni and Verghese 2019).

Delegating to AMAs also runs the risk of succumbing to previously described automation bias/complacency (see above), particularly where there is no true “cut-point” of normality/abnormality, such as occurs in ethical conundrums. As Wallach et al. observe, human moral judgement “is a complex activity … a skill that many either fail to learn adequately or perform with limited mastery” (Wallach, Allen, and Smit 2008, 565). Apart from broadly shared transcultural values, it is evident that many cultures and individuals diverge from prevailing Western ethical systems and mores. Hence it can be impossible to agree upon criteria for judging the adequacy of moral decisions in multicultural societies. In this connection, the irony of an “ethical GPS” based on biased datasets (see below) is disturbing, to say the least.

If ethical prediction algorithms “prove to be useful, reliable, and convenient, they might easily become standard tools with widespread use” (Biller-Andorno and Biller 2019, 1480); if so, concern clearly attends the questions of with whom decisions regarding utility, reliability, and convenience will rest and whether those decision-makers will have simple or complex reasons to adopt artificially intelligent predictive models of individuals’ “best interests” that ultimately constrain individuals’ autonomy.

Autonomy

Data mined from a personal health or third party EMR augmented by social media data may assist in medical decision-making for a person temporarily or permanently incapacitated (the so-called “triple-burden”) in the absence of an available human substitute decision-maker—through an “AI-assisted autonomy algorithm” (Lamanna and Byrne 2018). However, rational persons’ preferences are inherently fluid (Benhabib and Day 1981), and it is not clear a priori whether a person with capacity would agree with an algorithmically derived treatment recommendation based on inferred preferences from social media, let alone whether they would permit such a decision to be implemented. Though social media and internet activity do give details as to one’s interests (Lamanna and Byrne 2018), there is a discontinuity between human objectives relating to the definition of the good—which cannot always be inferred from constructed social media identities or the internet—as distinct from the predefined objectives of decision algorithms and benefit/loss analyses that may be engineered into systems to limit cost/expenditure to a third-party payer (potentially) at the expense of human objectives (Kose and Pavaloiu 2017). Furthermore, if a human substitute decision-maker is present, it is questionable whether that decision-maker’s decisions could be trumped by arguably more broadly informed AIM-derived decisions, ascribing hegemony (through automation bias) to the decision-making reliability of AI.

Understanding AIM’s benefits and limitations may be more problematic for persons with suboptimal health literacy who may be inappropriately influenced by non-expert opinion or, alternatively, default to automation bias and overconfidence (Cohen and Smetzer 2017). There has been little debate as to the question of whether, through automation bias or overconfidence, healthcare may take a regressive and paternalistic turn dictated by AIM, rather than a path negotiated with physicians. It is also possible that automation bias and complacency will affect both the patient and the physician; in other words: doctor knows best—but the computer knows more and makes fewer mistakes.

AIM and the Potential for Patient Discrimination and Marginalization

AIM data sets must be unbiased regarding matters of age, race/ethnicity, gender identification, abilities, geographic location, and socio-economic status (Caplan and Friesen 2017; Parikh, Teeple, and Navathe 2019; Hwang, Kesselheim, and Vokinger 2019; Israni and Verghese 2019) to avoid automatically entrenching inequalities, as above. However, data sets are often incomplete as a result of patchy implementation of access to the Internet, data uploading, and capture (Alam et al. 2018), and a lack of operator skillset, infrastructure, and suitable hardware (Hughes et al. 2018), regardless of internet speed and bandwidth. Inconsistent and non-uniform data sharing and access to EMR data (Wang and DeSalvo 2018) mean that inequities will be exacerbated if human services are withdrawn through neoliberal cost-containment imperatives (Graddy and Fingerhood 2018) and incorrect (even disingenuous) assumptions of equality of access. The current example in Australia of the transition of clinical care from physical encounters to virtual consultations over a matter of weeks during COVID-19 raises the possibility that “efficiency” in the context of pandemics may be extrapolated to the non-pandemic future (Arnold and Kerridge 2020).

Nonetheless, the potential for social media as a positive means of delivering simple primary healthcare interventions has been explored (Wu et al. 2018), since physical proximity and geography are not constraints—with the emergence of the e-patient—to social interactions (Collins and Wellman 2010) or agency (Rannenberg, Royer, and Deuker 2009). However, the same caveats regarding health literacy, biased datasets, and non-uniform access noted above also apply here.

Bias and Stigmatization

Though facial recognition software (including facial expression recognition and inference of mood) is well advanced, this software functions less well in non-Caucasian persons (Venditti, Fleming, and Kugelmeyer 2019)—as has been demonstrated in dermatological diagnosis (Adamson and Smith 2018)—potentially introducing new sources of bias through inaccurate data inputs, even if ethically sentient AI systems can be developed. At the most basic level, data inputs from facial recognition software may improve verification that a physician or healthcare professional is interacting with the correct patient and may show promise for genetic syndrome recognition (Mohapatra 2015). The fidelity of these data inputs cannot go unquestioned since the potential for significant misadventure from patient misidentification exists through overfitting, automation bias, automation complacency, and other human factors, particularly when practitioners are under heavy cognitive load.

Seemingly unaware of extensive biomedical literature relating to the importance of non-verbal communication, Sikora and Burleson note “mounting evidence that body expression is as significant to communication” as verbal communication (Sikora and Burleson 2017, 548). Communication through gesture is inherently nuanced and individualized, and at present it is unlikely that AI interpretation of non-verbal communication has an acceptable degree of reliability (Sikora and Burleson 2017). If AI systems are unable to reliably interpret patients’ body language, facial expression, voice tone, and inflection—data easily available to physicians not distracted by data entry in the EMR—then erroneous judgements are likely to be made by autonomous or semi-autonomous algorithms. These “data inputs” are foundational aspects of trust in person-centred care.

Potential New Forms of Harm to Patients

Near Misses

The benefits of IT in medicine are often lauded, but there is comparatively little investigation of errors and misadventures related to the use of IT. Faulty or absent data inputs, lack of facility with the technology per se, and changes in decisions consequent on IT technology/automation bias have resulted in trivial, near-miss, or consequential harms, including fatalities (Kim, Coiera, and Magrabi 2017). Missed diagnoses in dermatology have already been discussed (Adamson and Smith 2018).

Quantifiable Harms

A systematic review of health IT outcomes confirmed that 53 per cent of studies identified quantifiable harms including (rarely) death, with near misses in 29 per cent of studies (Kim, Coiera, and Magrabi 2017). Regulation pertaining to product updates, modifications, and retesting of performance is necessary to assess whether programmes or devices diverge with such modifications (Hwang, Kesselheim, and Vokinger 2019).

Denial of Service

With system adaptability may come a susceptibility to adversarial attacks which will potentially compromise data reliability (Huang et al. 2017), unforeseen privacy and confidentiality vulnerabilities, and the insertion of ransomware that paralyses rather than simply compromises patient care at the individual practice (Susło, Trnka, and Drobnik 2017) and healthcare system levels. This has potential effects on physician training (Zhao et al. 2018), with nationwide rather than local implications (Hughes 2017). Clinicians are (and will remain) the backstop for such problems. Recalling that “problems with IT are pervasive in health care,” these problems can affect care delivery and cause patient harm (Kim, Coiera, and Magrabi 2017, 258).

Drug Errors

Medication dispensing is particularly subject to automation bias and complacency, resulting in medical errors when autocomplete prescribing functions are either not checked or presumed to be correct. Under high cognitive load—such as multitasking and when being frequently interrupted or distracted (Papadakos and Bertman 2017)—humans default to heuristics, and failures in automation are less likely to be identified (Cohen and Smetzer 2017). Cognitive load is a common reason for inappropriate delegation to technology (Parasuraman and Manzey 2010), and hence system errors are less likely to be detected and corrected. Since physicians routinely employ workarounds in response to poor user interfaces or user experiences, they are known to deliberately defeat the inbuilt advantages of systems. For instance, “alert fatigue” results in deliberate deactivation of distracting medication interaction checkers and disabling of hard stop alerts (Martin and Wilson 2019).

Potential Behavioural Changes and New Forms of Harm to Physicians

Agency of the Physician

Medicine’s “most cherished and defining values including care for the individual and meaningful physician–patient interactions” may be compromised by adherence to neoliberal principles of “efficiency, calculability, predictability and control” enabled by AI (Dorsey and Ritzer 2016, 15). Managerial control of physician agency may be achieved by soft or hard-stop guidelines, decision tools, the specification of tasks to be completed, tests that are mandated or impermissible, and the implementation of treatment pathways by non-human automated means in EMRs.

“Disruption”

When applied to clinician behaviour, “disruptive” is pejorative (Rosenstein and O’Daniel 2005), yet when referring to technology, disruption has a distinctly positive and iconoclastic valence (Downes and Nunes 2013) and is held to be an unchallenged good, with unquestioned enthusiasm for the potential for IT to enhance student teaching and clinical care (Robertson, Miles, and Bloor 2010). However, the potential hazards of technology itself are downplayed. Negative effects on students’ learning range from annoyance and interruptions to a diminution in scholarship and study (Selwyn 2016) and may be clinically disruptive for physicians (Papadakos and Bertman 2017; Dhillon et al. 2018; Dhillon, Gewertz, and Ley 2019). Technology that is clinically disconnected and designed for documentation to mitigate medico-legal risk and facilitate billing “unnecessarily disrupts clinical work and frustrates clinicians, with less benefit than otherwise possible” (Coiera 2018, 2331).

Distracted Doctoring

The phenomenon of personal devices inappropriately used in the workplace and unnecessary technological interruptions clearly affecting patient safety has been well documented, with the result that limiting or quarantining the use of personal devices in healthcare settings has been advocated (Papadakos and Bertman 2017).

Interactions With the EMR

The incorporation of EMRs has often been noted to be largely due to the coding and billing requirements of healthcare organizations, resulting in a cost/quality trade-off implemented in the context of a “non-cooperative oligopoly with caregivers and administrators focusing on competing objectives” (Sharma et al. 2016, 26). Some health information technology is designed for conformity with billing and documentation rather than for the delivery of care (Dhir et al. 2015). Presently it is often unclear how “EHRs are used to capture and represent what clinicians are thinking about the patients and their problems” (Colicchio and Cimino 2018, 172). Clinicians are now required to be simultaneously care providers, scribes, and records managers. Many EMRs are primarily designed to facilitate coding and billing rather than patient care, and intelligent clinician input is needed to configure and optimize these records (Ashton 2018) for the purpose of delivering satisfactory patient care.

Scribes as a Workaround for the EMR

To counter this impost, the implementation of human or non-human scribing to accommodate the needs of the EMR has been suggested (Doval 2018; Bates and Landman 2018). Medical scribes demonstrably increase physician productivity (Walker et al. 2014) and increase hospital revenue over and above the costs of scribes (Slomski 2019). However, if EMR usage was straightforward and truly labour-saving, there would seem little need to employ or deploy scribes (Doval 2018; Bates and Landman 2018; Ashton 2018; Mosaly, Guo, and Mazur 2019; Slomski 2019).

Physician Well-Being, Burnout and Behavioural Changes

Burnout resulting in morbidity in medical professionals is well recognized (Shanafelt et al. 2015; Shanafelt, Dyrbye, and West 2017), and the EMR is cited as a frequent contributor amongst other organizational factors (West, Dyrbye, and Shanafelt 2018). Burnout has been repeatedly linked to EMR interactions (Mosaly, Guo, and Mazur 2019), yet some authors have proposed modifications to physicians’ work practices, effectively making the physician—not the EMR—the problem (Babbott et al. 2013). Junior doctors increasingly spend time remotely dealing with the EMR in supposed personal time (Canham et al. 2018). However, some groups (Rassolian et al. 2017) claim that the EMR per se is not responsible for burnout (citing lower levels than other studies) as distinct from more global workplace factors, though it seems impossible to disentangle the workplace’s requirement for EMR engagement from the purported “protective effect” of face-to-face interactions on burnout.

Malign Effects on Patient–Physician Interactions

The term “acquired autism” has been used to describe the potential for EMR compliance to have malign effects on physician–patient interactions (Loper 2018). Physicians are confounded by “the industry-driven expectation to simultaneously serve as curators of the EHR and physicians to patients” (Loper 2018, 1009), and the resultant behavioural change may be antithetical to the interactions needed to establish or maintain a trusting therapeutic relationship. This has been characterized as the “prioritisation of machine objectives over human objectives” (Kose and Pavaloiu 2017, 203).

Deskilling

AIM may contribute to skills loss (Lu 2016). If AIM becomes a new unattainable benchmark, the question of whether it is ethical for humans to undertake certain procedures will arise, particularly if medical error is prevalent and increasingly seen to be preventable (James 2013). However, preventability may largely be a function of patients’ and physicians’ access to technology, and unequal access may create a hierarchical health system that is demonstrably unjust. If, as Biller-Andorno and Biller (2019) have contended, ethical AI is akin to using GPS (an analogy disputed above), then the ethical “navigational skills” of physicians may atrophy and, if their analogy holds, physicians run the risk of becoming “ethically lost” should the machine fail (Biller-Andorno and Biller 2019).

Legal Issues of AIM

Previously, the privacy and confidentiality of physical medical data—whether for health-related usage or research—was more easily managed through informed patient consent or formal requests to access physical records. However, notions of the “ownership” of data acquired by professionals about “their” patients have long been quashed by legislation separating access rights from physical ownership (Parkinson 1995).

The breadth and depth of information held in EHRs, the ease of authorized and unauthorized access, and simplicity of transmission means that electronic records are fundamentally different from paper records, particularly since (non-physical) security breaches may paralyse whole health systems (Hassan 2018) rather than affecting one person’s confidentiality (Sade 2010). Problems encountered to date with access to shared medical information will necessitate new potentially cross-jurisdictional precedents (Polito 2012). Formal ethics certification for non-clinical health information professionals is non-standardized (Kluge, Lacroix, and Ruotsalainen 2018), creating cross-jurisdictional problems.

Students and healthcare professionals (Kuo et al. 2017) may inappropriately track unaware and non-consenting patients; though tracking is touted as a positive learning exercise, at least 50 per cent of students do not or cannot differentiate between tracking for educational purposes and curiosity-based inquiries (Brisson and Tyler 2016; Brisson et al. 2018). The latter actions are illegal and are actionable in most jurisdictions (De Simone 2019). Hence medical schools’ “informatics and EMR curricula need to teach students to engage meaningfully and judiciously with patients’ data” (Stern 2016, 1397), possibly with registries of consenting patients (Brisson et al. 2018).

In addition to these considerations encompassing conditions of use, system transparency, data content, and quality, it is important to articulate what “privacy protections exist for patients whose data are used,” and how this aligns with jurisdictional privacy legislation (Evans and Whicher 2018, 860). Data held in electronic health records may be de-identified and, through data linkage, generate beneficial research outcomes. There is a tension between beneficence (for the public) and private confidentiality, overriding contemporary notions of privacy and confidentiality according to the duty of “easy rescue,” particularly in circumstances of minimal risk as defined by research regulators (Mann, Savulescu, and Sahakian 2016). Further concepts such as altruism (McCann, Campbell, and Entwistle 2010), supererogation (Schaefer, Emanuel, and Wertheimer 2009), and the avoidance of “free-riding” (Allhoff 2005) are relevant to this argument.

Sociopolitical

Justified Innovation?

The question of justified innovation, for instance, implementing a predictive algorithm for the management of acute psychosis purporting to offer improved clinical outcomes in comparison to conventional physician-delivered care, has been posed (Martinez-Martin, Dunn, and Roberts 2018). Despite preliminary work (Koutsouleris et al. 2016), there is a clear difference between statistical and clinical validation, and hence achieving adequate informed consent is problematic when the algorithmic decision-making process is opaque to clinicians, patients, or courts (Martinez-Martin, Dunn, and Roberts 2018). Furthermore, the generalizability of a predictive model developed in one location/jurisdiction has considerable potential to reinforce or exacerbate biases with a compounding heuristic bias towards the implementation of such predictive models, and a resourcing bias since economic efficiencies are related to physician time-based costs. This will affect the fiduciary dimension of the relationship between the patient, clinicians, and healthcare organizations, public or private. Will non-insured patients be able to opt out of the use of such a predictive algorithm? Do patients with an acute psychosis have capacity to determine whether AI should be involved in their care, particularly if clinicians cannot explain the derivation of AI-derived recommendations? It is feasible that such patients’ autonomy may be constrained by a non-human team member—the AI algorithm.

Is There a “Moral Imperative” to Adopt AIM?

If patients employ AIM as a prelude to the medical consultation in a “flipped classroom” manner, it might seem necessary or even obligatory for person-centric physicians—at a minimum—to support or supplement their own individual human functioning through the “use of technology in order to help people become faster and more accurate at the tasks they are performing” (Luxton 2019). This autonomy-promoting rationale has been used in IBM’s Watson Health™ application, yet automation bias and complacency may “create [new] opportunities for error in diagnosis and treatment … more visible and potentially detrimental outcomes than what might have happened without the new technology” (Luxton 2019, 133) or the potential for patients and physicians to develop unrealistic or unfulfillable expectations (via automation bias and complacency) based on idealized extrapolations from a “superintelligent machine” (Luxton 2019).

Grote and Berens (2019) have recently argued for the incorporation of AIM on the basis of the implications of the “equal weight view,” whereby the presence of a differing opinion should cast doubt on one’s own position, lest one be anchored in the “steadfast view” that epistemically privileges one’s own position. Appeal to an algorithm through normative alignment with a supposed “epistemic authority” risks ceding authority to technology; again, this invokes automation bias, if not complacency (Grote and Berens 2019).

Discussion

Artificial intelligence, machine learning, information technology, and the Internet have arisen within the cultural context of contemporary society (González 2017) and reflexively influence the ongoing construction of society and our interpersonal relations. IT may not simply be an artefact or tool; the means with which humans employ IT permits IT to function as an actor in human interactions (Introna 2005). In that context, AIM and related autonomous systems appear to offer on one hand “utopian freedom” and on the other “existential dystopia” (Salla et al. 2018). The hyperbole surrounding past and present promises regarding AIM and the future of medicine provokes a range of opinions spanning fear, scepticism, disappointment, and ambivalence to qualified or unqualified enthusiasm and optimism. AIM innovations are “predicted to drive the greatest evolutionary progress in human history, accelerating the emergence of new technology innovations and affecting the way we humans live and act” (Salla et al. 2018, 1).

Yet, there is discomfort that the “advance of technology effaces something important” (Karches 2018, 92), insofar as technology appears to be a “background assumption about the world that shapes the way the world appears to us” (Karches 2018, 93). Contemporaneously, society is influenced and constructed by the medicalization narrative—the framing of normal processes such as ageing, pregnancy, and death as medical events—with medicine interpreted as a hegemonic, self-creating, and sustaining activity (Illich 1975; Scott-Samuel 2003; Moynihan et al. 2002; Moynihan 2003). Prevalent scientism (LeDrew 2018) asserts that “some of the essential non-academic areas of human life can be reduced to (or translated into) science,” with accompanying scientific expansionist claims that “all beliefs that can be known (or even rationally maintained) must and can be included within the boundaries of science” (Stenmark 1997, 19).

The related secular belief system of datafication underpins “precision medicine” (Van Dijck 2014), whereby technology permits “new phenomena and areas of ordinary life [to become] subject to measurement, attention, and medical interpretation” (Hofmann and Svenaeus 2018, 8). Datafication is reflected in prevalent genetic determinism (Dar-Nimrod and Heine 2011), supersedes post-modern relativism, and places an overreliance on screening, biomarkers, and the predictive role of -omics for patient care (Mandl and Manrai 2019). Regardless of the data that is incorporated in AIM, “these new algorithmic decision-making tools come with no guarantees of fairness, equitability, or even veracity” (Beam and Kohane 2018, 1318). This concern may compound when future persons with access to “democratized” information (Doval 2018) and personal versions of AI drive a values-poor and data-centric form of shared decision-making as a new norm (Coiera 2018). A personally-controlled electronic medical record (EMR) incorporating data from primary care, hospital interactions, consultative doctor–patient interactions, and network-based “collaborations to integrate genetic and genomic knowledge into clinical care” may embolden patients to accept, override, or even ignore information or physician recommendations based on algorithmic decision aids (Herr et al. 2018, 143) in the name of person-centric care.

The potential denigration of human capacities and skills (Karches 2018) is exemplified by robotic neurosurgery for compressive radiculopathy, which illustrates the confusion this may generate for patients (Schiff and Borenstein 2019). Consent is predicated on the patient’s understanding that the surgeon rather than the AIM system determines the need for surgery, a common misunderstanding provoking patient anxiety, as is the perceived extent and invasiveness of robotic surgery (Müller and Bostrom 2016). If the patient’s understanding of the role of AI in their care is unclear, it exposes all parties to potential liabilities which may be difficult to disentangle, with potential corporeal and legal adverse effects. The clinical encounter extends beyond the patient (and their various influences) and the physician, instantiating “AIM” as a team member in a therapeutic triad (Swinglehurst et al. 2014).

The traditional doctor–patient dyad has been affected by population diversity, globalized access to information, and increasing technologization of much of the urbanized populace (Swinglehurst et al. 2014), with the emergence of the e-patient, “equipped, enabled, empowered and engaged in their health and health care decisions” (Ferguson 2007, 6), who may either doubt or refute the notion of physician omniscience and actively engage in their own care (Hay et al. 2008). Reflecting income and education, access to and facility with technology has always been a determinant of health (Frank and Mustard 1994), and it is suggested that access to technology may potentially mitigate some social health disparities (Wangberg et al. 2007), at least amongst those with sufficient access. However, the potential for the amplification of health disparities with unequal access to (or dependence on) technology has largely been overlooked.

Future doctors must be prepared to interact with e-patients (Farnan et al. 2013), particularly those with an increasing level of familiarity with medical jargon and facility with biostatistics and research evaluation. Physicians should have an awareness of the role the Internet plays in patients’ lives and the ability to guide patients to reputable non-commercial sites. Physicians must now productively and safely use email, social media, electronic devices, and medical apps and manage their digital footprint which will be accessed by patients and, consequently, negotiate digital boundaries with patients (Masters 2017).

A critical examination of the potential for unintended consequences that AIM implementation may pose for humanistic patient care is needed so that AIM facilitates optimal patient care (Israni and Verghese 2019). However, AIM will fail to deliver more effective care until there is acknowledgment that what impedes the quality of care is not simply “a lack of data or analytics but changing the behavior of millions of patients and clinicians” (Emanuel and Wachter 2019, 2281).

Conclusion: What is to Be Done?

The rapid and uncritical assimilation of AIM into the physician–patient encounter is touted to bring in a new era of precision medicine and person-centricity through a greater ability to manipulate data both from the narrow perspective of the patient and from a wider perspective. It has the potential to bring the “power” of broad data linkage to enable truly preventative personalized medicine (Grote and Berens 2019). The e-patient and how the physician relates to them—whether devoid of technology, using technology, partially or fully delegating care to technology—creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients and ontological and epistemic concerns for physicians and patients relating to the instantiation of AIM as a dependent, semi- or fully-autonomous agent in the encounter. Libertarian paternalism potentially exercised by AIM (and those who control it) has created challenges to conventional assessments of patient and physician autonomy. The presently unclear legal relationship between AI and its users cannot be settled, and progress in AIM and its implementation in patient care will necessitate an iterative discourse.

Though AI purports to free physicians to become more humanistic, the implementation of AIM may also be characterized as a new normative force to be exploited by neoliberal governmentality (Hilgers 2010). Even when couched as person-centric care, neoliberal imperatives for efficiency in healthcare create the threat of the physician and patient being actively disengaged from one another, “rendering unnecessary the bodily expertise and caring attentiveness that characterize pre-technological practices” (Karches 2018, 95) and potentially affecting distributional justice and “fairness in healthcare” (Grote and Berens 2019, 205).

Physicians should neither uncritically accept nor unreasonably resist developments in AI but must actively engage in and contribute to the discourse, since AIM will affect their roles and the nature of their work. The premises of the “AIM argument” require further teasing out in an ongoing dialectic. It will not be sufficient for future physicians to possess mere techne in the use of AIM and IT; they will need to learn new conceptions of how physicians’ phronesis can be augmented by AIM to benefit patient care and will need to consider any consequences for the patient–physician relationship and the outcomes of care. The physician’s moral imaginative capacity must engage with the questions of the beneficence, autonomy, and justice of AIM and whether its integration in healthcare has the potential to interfere with patients’ and physicians’ ends, thereby violating the principle of non-maleficence.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatology. 2018;154(11):1247–1248. doi: 10.1001/jamadermatol.2018.2348. [DOI] [PubMed] [Google Scholar]
  2. Alam K, Erdiaw-Kwasie MO, Shahiduzzaman M, Ryan B. Assessing regional digital competence: Digital futures and strategic planning implications. Journal of Rural Studies. 2018;60:60–69. doi: 10.1016/j.jrurstud.2018.02.009. [DOI] [Google Scholar]
  3. Allhoff F. Free-riding and research ethics. The American Journal of Bioethics. 2005;5(1):50–51. doi: 10.1080/15265160590927688. [DOI] [PubMed] [Google Scholar]
  4. Arnold M. The autoimmune screen. Australian Journal for General Practitioners. 2019;48:732–734. doi: 10.31128/AJGP-03-19-4885. [DOI] [PubMed] [Google Scholar]
  5. Arnold MH, Kerridge I, Lipworth W. An ethical critique of person-centred care. European Journal for Person Centered Healthcare. 2020;8:34–44. [Google Scholar]
  6. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: The nature of patient care information system-related errors. Journal of the American Medical Informatics Association. 2004;11(2):104–112. doi: 10.1197/jamia.M1471. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Ashton M. Getting rid of stupid stuff. New England Journal of Medicine. 2018;379(19):1789–1791. doi: 10.1056/NEJMp1809698. [DOI] [PubMed] [Google Scholar]
  8. Babbott S, Manwell LB, Brown R, et al. Electronic medical records and physician stress in primary care: Results from the MEMO Study. Journal of the American Medical Informatics Association. 2013;21(e1):e100–e106. doi: 10.1136/amiajnl-2013-001875. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bærøe K, Miyata-Sturm A, Henden E. How to achieve trustworthy artificial intelligence for health. Bulletin of the World Health Organization. 2020;98:257–262. doi: 10.2471/BLT.19.237289. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Bates DW, Landman AB. Use of medical scribes to reduce documentation burden: Are they where we need to go with clinical documentation? JAMA Internal Medicine. 2018;178(11):1472–1473. doi: 10.1001/jamainternmed.2018.3945. [DOI] [PubMed] [Google Scholar]
  11. Baxt WG. Application of artificial neural networks to clinical medicine. The Lancet. 1995;346(8983):1135–1138. doi: 10.1016/S0140-6736(95)91804-3. [DOI] [PubMed] [Google Scholar]
  12. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018;319(13):1317–1318. doi: 10.1001/jama.2017.18391. [DOI] [PubMed] [Google Scholar]
  13. Becker A. Artificial intelligence in medicine: What is it doing for us today? Health Policy and Technology. 2019;8(2):198–205. doi: 10.1016/j.hlpt.2019.03.004. [DOI] [Google Scholar]
  14. Benhabib J, Day RH. Rational choice and erratic behaviour. The Review of Economic Studies. 1981;48(3):459–471. doi: 10.2307/2297158. [DOI] [Google Scholar]
  15. Benzmüller, C., and B. Woltzenlogel Paleo. 2016. The inconsistency in Gödel’s ontological argument: A success story for AI in metaphysics. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence.
  16. Biller-Andorno N, Biller A. Algorithm-aided prediction of patient preferences—An ethics sneak peek. New England Journal of Medicine. 2019;381(15):1480–1485. doi: 10.1056/NEJMms1904869. [DOI] [PubMed] [Google Scholar]
  17. Borenstein J, Arkin R. Robotic nudges: The ethics of engineering a more socially just human being. Science and Engineering Ethics. 2016;22(1):31–46. doi: 10.1007/s11948-015-9636-2. [DOI] [PubMed] [Google Scholar]
  18. Braun, M., P. Hummel, S. Beck, and P. Dabrock. 2020. Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics. doi:10.1136/medethics-2019-105860. [DOI] [PMC free article] [PubMed]
  19. Brisson GE, Barnard C, Tyler PD, Liebovitz DM, Johnson Neely K. A framework for tracking former patients in the electronic health record using an educational registry. Journal of General Internal Medicine. 2018;33(4):563–566. doi: 10.1007/s11606-017-4278-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Brisson GE, Tyler PD. Medical student use of electronic health records to track former patients. JAMA Internal Medicine. 2016;176(9):1395–1397. doi: 10.1001/jamainternmed.2016.3878. [DOI] [PubMed] [Google Scholar]
  21. Bronsema J, Brouwer S, de Boer MR, Groothoff JW. The added value of medical testing in underwriting life insurance. PLOS ONE. 2015;10(12):e0145891. doi: 10.1371/journal.pone.0145891. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Broome, J. 1990. Fairness. Proceedings of the Aristotelian Society.
  23. Buch VH, Ahmed I, Maruthappu M. Artificial intelligence in medicine: Current trends and future possibilities. The British Journal of General Practice. 2018;68(668):143–144. doi: 10.3399/bjgp18X695213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Canham LE, Silber MH, Millstein LS, et al. Resident use of the electronic medical record (EMR) outside of work. Academic Pediatrics. 2018;18(5):e38–e39. doi: 10.1016/j.acap.2018.04.105. [DOI] [Google Scholar]
  25. Caplan A, Friesen P. Health disparities and clinical trial recruitment: Is there a duty to tweet? PLOS Biology. 2017;15(3):e2002040. doi: 10.1371/journal.pbio.2002040. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Char DS, Shah NH, Magnus D. Implementing machine learning in health care—addressing ethical challenges. The New England Journal of Medicine. 2018;378(11):981–983. doi: 10.1056/NEJMp1714229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Ching T, Himmelstein DS, Beaulieu-Jones BK, et al. Opportunities and obstacles for deep learning in biology and medicine. Journal of The Royal Society Interface. 2018;15(141):20170387. doi: 10.1098/rsif.2017.0387. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Cohen MR, Smetzer JL. Understanding human over-reliance on technology; It’s Exelan, Not Exelon; crash cart drug mix-up; risk with entering a “test order”. Hospital Pharmacy. 2017;52(1):7–12. doi: 10.1310/hpj5201-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Coiera E. The fate of medicine in the time of AI. The Lancet. 2018;392(10162):2331–2332. doi: 10.1016/S0140-6736(18)31925-1. [DOI] [PubMed] [Google Scholar]
  30. Coiera E. On algorithms, machines, and medicine. The Lancet Oncology. 2019;20(2):166–167. doi: 10.1016/S1470-2045(18)30835-0. [DOI] [PubMed] [Google Scholar]
  31. Colicchio TK, Cimino JJ. Clinicians’ reasoning as reflected in electronic clinical note-entry and reading/retrieval: A systematic review and qualitative synthesis. Journal of the American Medical Informatics Association. 2018;26(2):172–184. doi: 10.1093/jamia/ocy155. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Colineau N, Paris C. Talking about your health to strangers: Understanding the use of online social networks by patients. New Review of Hypermedia and Multimedia. 2010;16(1-2):141–160. doi: 10.1080/13614568.2010.496131. [DOI] [Google Scholar]
  33. Collins JL, Wellman B. Small town in the internet society: Chapleau is no longer an island. American Behavioral Scientist. 2010;53(9):1344–1366. doi: 10.1177/0002764210361689. [DOI] [Google Scholar]
  34. Cowls, J., T.C. King, M. Taddeo, and L. Floridi. 2019. Designing AI for social good: Seven essential factors. SSRN, May 15. https://ssrn.com/abstract=3388669 [DOI] [PMC free article] [PubMed]
  35. Dar-Nimrod I, Heine SJ. Genetic essentialism: On the deceptive determinism of DNA. Psychological Bulletin. 2011;137(5):800. doi: 10.1037/a0021860. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Davis M. “Globalist war against humanity shifts into high gear”: Online anti-vaccination websites and “anti-public” discourse. Public Understanding of Science. 2019;28(3):357–371. doi: 10.1177/0963662518817187. [DOI] [PubMed] [Google Scholar]
  37. De Simone DM. When is accessing medical records a HIPAA breach? Journal of Nursing Regulation. 2019;10(3):34–36. doi: 10.1016/S2155-8256(19)30146-2. [DOI] [Google Scholar]
  38. Deane, K.D. 2019. Using new autoantibodies in rheumatic disease: An update. Medscape, May 23. https://www.medscape.com/viewarticle/913168.
  39. Dhillon NK, Francis SE, Tatum JM, et al. Adverse effects of computers during bedside rounds in a critical care unit. JAMA Surgery. 2018;153(11):1052–1053. doi: 10.1001/jamasurg.2018.1752. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Dhillon NK, Gewertz BL, Ley EJ. Leveraging the right technology in health care—Having a cow—Reply. JAMA Surgery. 2019;154(4):365–366. doi: 10.1001/jamasurg.2018.4700. [DOI] [PubMed] [Google Scholar]
  41. Dhir V, Sandhu A, Kaur J, et al. Comparison of two different folic acid doses with methotrexate—a randomized controlled trial (FOLVARI Study). Arthritis Research & Therapy. 2015;17(1):1. doi: 10.1186/s13075-015-0668-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Dorsey ER, Ritzer G. The McDonaldization of medicine. JAMA Neurology. 2016;73(1):15–16. doi: 10.1001/jamaneurol.2015.3449. [DOI] [PubMed] [Google Scholar]
  43. Doval H. In the era of artificial intelligence, what will the destiny of doctors be? Argentine Journal of Cardiology. 2018;86(6):453–455. [Google Scholar]
  44. Downes, L., and P. Nunes. 2013. Big bang disruption. Harvard Business Review, March, 44–56.
  45. Dyer, O. 2019. Measles: Samoa declares emergency as cases continue to spike worldwide. BMJ 367. doi:10.1136/bmj.l6767. [DOI] [PubMed]
  46. Emanuel EJ, Wachter RM. Artificial intelligence in health care: Will the value match the hype? JAMA. 2019;321(23):2281–2282. doi: 10.1001/jama.2019.4914. [DOI] [PubMed] [Google Scholar]
  47. Ercal F, Chawla A, Stoecker WV, Lee H-C, Moss RH. Neural network diagnosis of malignant melanoma from color images. IEEE Transactions on Biomedical Engineering. 1994;41(9):837–845. doi: 10.1109/10.312091. [DOI] [PubMed] [Google Scholar]
  48. European Group on Ethics in Science and New Technologies. 2018. Statement on artificial intelligence, robotics and “autonomous” systems. European Commission. http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.
  49. Evans EL, Whicher D. What should oversight of clinical decision support systems look like? AMA Journal of Ethics. 2018;20(9):857–863. doi: 10.1001/amajethics.2018.857. [DOI] [PubMed] [Google Scholar]
  50. Farnan JM, Snyder Sulmasy L, Worster BK, Chaudhry HJ, Rhyne JA, Arora VM. Online medical professionalism: Patient and public relationships: Policy statement from the American College of Physicians and the Federation of State Medical Boards. Annals of Internal Medicine. 2013;158(8):620–627. doi: 10.7326/0003-4819-158-8-201304160-00100. [DOI] [PubMed] [Google Scholar]
  51. Fensel, D. 2001. Ontologies. In Ontologies: A silver bullet for knowledge management and electronic commerce, 11–18. Berlin: Springer Berlin Heidelberg.
  52. Ferguson, T. 2007. E-patients: How they can help us heal healthcare. http://rawarrior.com/wp-content/uploads/2013/10/e-Patients_White_Paper.pdf.
  53. Fleetwood J. Public health, ethics, and autonomous vehicles. American Journal of Public Health. 2017;107(4):532–537. doi: 10.2105/AJPH.2016.303628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Fogel AL, Kvedar JC. Artificial intelligence powers digital medicine. NPJ Digital Medicine. 2018;1(1):5. doi: 10.1038/s41746-017-0012-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Australian & New Zealand Journal of Psychiatry. 2019;53(10):954–964. doi: 10.1177/0004867419864428. [DOI] [PubMed] [Google Scholar]
  56. Fox, S. 2011. The social life of health information, 2011. Pew Research Center’s Internet & American Life Project. http://pewinternet.org/Reports/2011/Social-Life-of-Health-Info.aspx.
  57. Fraccaro P, Vigo M, Balatsoukas P, et al. Presentation of laboratory test results in patient portals: Influence of interface design on risk interpretation and visual search behaviour. BMC Medical Informatics and Decision Making. 2018;18(1):11. doi: 10.1186/s12911-018-0589-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Frank JW, Mustard JF. The determinants of health from a historical perspective. Daedalus. 1994;123(4):1–19. [PubMed] [Google Scholar]
  59. Fraser H, Coiera E, Wong D. Safety of patient-facing digital symptom checkers. The Lancet. 2018;392(10161):2263–2264. doi: 10.1016/S0140-6736(18)32819-8. [DOI] [PubMed] [Google Scholar]
  60. Gardner GG, Keating D, Williamson TH, Elliott AT. Automatic detection of diabetic retinopathy using an artificial neural network: A screening tool. British Journal of Ophthalmology. 1996;80(11):940–944. doi: 10.1136/bjo.80.11.940. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Gichoya, J.W., S. Nuthakki, P.G. Maity, and S. Purkayastha. 2018. Phronesis of AI in radiology: Superhuman meets natural stupidity. Cornell University. https://arxiv.org/ftp/arxiv/papers/1803/1803.11244.pdf.
  62. Gill KS. Artificial intelligence: Looking though the Pygmalion lens. AI & Society. 2018;33(4):459–465. doi: 10.1007/s00146-018-0866-0. [DOI] [Google Scholar]
  63. Goddard K, Roudsari A, Wyatt JC. Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association. 2011;19(1):121–127. doi: 10.1136/amiajnl-2011-000089. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Goldenberg I, Moss AJ, Zareba W. QT interval: How to measure it and what is “normal”. Journal of Cardiovascular Electrophysiology. 2006;17(3):333–336. doi: 10.1111/j.1540-8167.2006.00408.x. [DOI] [PubMed] [Google Scholar]
  65. Goldhahn J, Rampton V, Spinas GA. Could artificial intelligence make doctors obsolete? BMJ. 2018;363:k4563. doi: 10.1136/bmj.k4563. [DOI] [PubMed] [Google Scholar]
  66. González WJ. From intelligence to rationality of minds and machines in contemporary society: The sciences of design and the role of information. Minds and Machines. 2017;27(3):397–424. doi: 10.1007/s11023-017-9439-0. [DOI] [Google Scholar]
  67. Graddy R, Fingerhood M. How community health workers can affect health care. JAMA Internal Medicine. 2018;178(12):1643–1644. doi: 10.1001/jamainternmed.2018.4626. [DOI] [PubMed] [Google Scholar]
  68. Gray NJ, Klein JD, Noyce PR, Sesselberg TS, Cantrill JA. Health information-seeking behaviour in adolescence: The place of the internet. Social Science & Medicine. 2005;60(7):1467–1478. doi: 10.1016/j.socscimed.2004.08.010. [DOI] [PubMed] [Google Scholar]
  69. Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics. 2019;46:205–211. doi: 10.1136/medethics-2019-105586. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Grundmann R. The problem of expertise in knowledge societies. Minerva. 2017;55(1):25–48. doi: 10.1007/s11024-016-9308-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Hamet, P., and J. Tremblay. 2017. Artificial intelligence in medicine. Metabolism 69: S36–S40. [DOI] [PubMed]
  72. Hartzband P, Groopman J. Off the record—avoiding the pitfalls of going electronic. New England Journal of Medicine. 2008;358(16):1656–1657. doi: 10.1056/NEJMp0802221. [DOI] [PubMed] [Google Scholar]
  73. Hassan NU. Ransomware attack on Medstar: Ethical position statement. SEISENSE Journal of Management. 2018;1(4):29–31. [Google Scholar]
  74. Hausman DM, Welch B. Debate: To nudge or not to nudge. Journal of Political Philosophy. 2010;18(1):123–136. doi: 10.1111/j.1467-9760.2009.00351.x. [DOI] [Google Scholar]
  75. Hay MC, Cadigan RJ, Khanna D, et al. Prepared patients: Internet information seeking by new rheumatology patients. Arthritis Care & Research. 2008;59(4):575–582. doi: 10.1002/art.23533. [DOI] [PubMed] [Google Scholar]
  76. Helbing, D., B.S. Frey, G. Gigerenzer, et al. 2019. Will democracy survive Big Data and artificial intelligence? In Towards digital enlightenment: Essays on the dark and light sides of the digital revolution, edited by Dirk Helbing, 73–98. Cham: Springer International Publishing.
  77. Herr TM, Peterson JF, Rasmussen LV, Caraballo PJ, Peissig PL, Starren JB. Pharmacogenomic clinical decision support design and multi-site process outcomes analysis in the eMERGE Network. Journal of the American Medical Informatics Association. 2018;26(2):143–148. doi: 10.1093/jamia/ocy156. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Heywood, A. 2019. Measles elimination–the end of an era? Global Biosecurity 1(3). doi:10.31646/gbio.48.
  79. Hilgers M. The three anthropological approaches to neoliberalism. International Social Science Journal. 2010;61(202):351–364. doi: 10.1111/j.1468-2451.2011.01776.x. [DOI] [Google Scholar]
  80. Hirschtick RE. Copy-and-paste. JAMA. 2006;295(20):2335–2336. doi: 10.1001/jama.295.20.2335. [DOI] [PubMed] [Google Scholar]
  81. Hoffman BL, Rosenthal EL, Colditz JB, McGarry R, Primack BA. Use of Twitter to assess viewer reactions to the medical drama, code black. Journal of Health Communication. 2018;23(3):244–253. doi: 10.1080/10810730.2018.1426660. [DOI] [PubMed] [Google Scholar]
  82. Hofmann B, Svenaeus F. How medical technologies shape the experience of illness. Life Sciences, Society and Policy. 2018;14(3):11. doi: 10.1186/s40504-018-0069-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Huang, S., N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel. 2017. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284: 1–10.
  84. Hughes H, Foth M, Dezuanni M, Mallan K, Allan C. Fostering digital participation and communication through social living labs: A qualitative case study from regional Australia. Communication Research and Practice. 2018;4(2):183–206. doi: 10.1080/22041451.2017.1287032. [DOI] [Google Scholar]
  85. Hughes, O. 2017. WannaCry impact on NHS considerably larger than previously suggested. Digital Health Intelligence Limited. https://www.digitalhealth.net/2017/10/wannacry-impact-on-nhs-considerably-larger-than-previously-suggested/.
  86. Hwang, T.J., A.S. Kesselheim, and K.N. Vokinger. 2019. Lifecycle regulation of artificial intelligence- and machine learning–based software devices in medicine. JAMA. doi:10.1001/jama.2019.16842. [DOI] [PubMed]
  87. Ickenroth MHP, Ronda G, Grispen JEJ, Dinant G-J, de Vries NK, van der Weijden T. How do people respond to self-test results? A cross-sectional survey. BMC Family Practice. 2010;11(1):77–84. doi: 10.1186/1471-2296-11-77. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Illich I. Medical nemesis. The expropriation of health. London: Calder & Boyars Ltd.; 1975. [Google Scholar]
  89. Introna, L. 2005. Phenomenological approaches to ethics and information technology. Stanford Encyclopedia of Philosophy.
  90. Israni ST, Verghese A. Humanizing artificial intelligence. JAMA. 2019;321(1):29–30. doi: 10.1001/jama.2018.19398. [DOI] [PubMed] [Google Scholar]
  91. James JT. A new, evidence-based estimate of patient harms associated with hospital care. Journal of Patient Safety. 2013;9(3):122–128. doi: 10.1097/PTS.0b013e3182948a69. [DOI] [PubMed] [Google Scholar]
  92. Kaczmarczyk JM, Chuang A, Dugoff L, et al. E-professionalism: A new frontier in medical education. Teaching and Learning in Medicine. 2013;25(2):165–170. doi: 10.1080/10401334.2013.770741. [DOI] [PubMed] [Google Scholar]
  93. Karches KE. Against the iDoctor: Why artificial intelligence should not replace physician judgment. Theoretical Medicine and Bioethics. 2018;39(2):91–110. doi: 10.1007/s11017-018-9442-3. [DOI] [PubMed] [Google Scholar]
  94. Kim MO, Coiera E, Magrabi F. Problems with health information technology and their effects on care delivery and patient outcomes: A systematic review. Journal of the American Medical Informatics Association. 2017;24(2):246–250. doi: 10.1093/jamia/ocw154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Klincewicz M. Artificial intelligence as a means to moral enhancement. Studies in Logic, Grammar and Rhetoric. 2016;48(1):171–187. doi: 10.1515/slgr-2016-0061. [DOI] [Google Scholar]
  96. Kluge EH, Lacroix P, Ruotsalainen P. Ethics certification of health information professionals. Yearbook of Medical Informatics. 2018;27(1):37–40. doi: 10.1055/s-0038-1641196. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Kose U, Pavaloiu A. Dealing with machine ethics in daily life: A view with examples. Intelligent Systems. 2017;30:200–205. [Google Scholar]
  98. Koutsouleris N, Kahn RS, Chekroud AM, et al. Multisite prediction of 4-week and 52-week treatment outcomes in patients with first-episode psychosis: A machine learning approach. The Lancet Psychiatry. 2016;3(10):935–946. doi: 10.1016/S2215-0366(16)30171-7. [DOI] [PubMed] [Google Scholar]
  99. Kovachev LS, Tonchev PT, Nedialkov KL. The e-patient. Journal of Biomedical and Clinical Research. 2017;10(2):79–86. doi: 10.1515/jbcr-2017-0013. [DOI] [Google Scholar]
  100. Krishan, B., L.C. Chang, and R.G. Lambert. 2002. Methods and apparatus for delivering targeted information and advertising over the internet. Google Patents.
  101. Kublickis, P. 2007. System and methods for a micropayment-enabled marketplace with permission-based, self-service, precision-targeted delivery of advertising, entertainment and informational content and relationship marketing to anonymous internet users. Google Patents.
  102. Kuo K-M, Talley PC, Hung M-C, Chen Y-L. A deterrence approach to regulate nurses’ compliance with electronic medical records privacy policy. Journal of Medical Systems. 2017;41(12):198. doi: 10.1007/s10916-017-0833-1. [DOI] [PubMed] [Google Scholar]
  103. Lakhani P, Sundaram B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574–582. doi: 10.1148/radiol.2017162326. [DOI] [PubMed] [Google Scholar]
  104. Lamanna C, Byrne L. Should artificial intelligence augment medical decision making? The case for an autonomy algorithm. AMA Journal of Ethics. 2018;20(9):902–910. doi: 10.1001/amajethics.2018.902. [DOI] [PubMed] [Google Scholar]
  105. Laranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: A systematic review. Journal of the American Medical Informatics Association. 2018;25(9):1248–1258. doi: 10.1093/jamia/ocy072. [DOI] [PMC free article] [PubMed] [Google Scholar]
  106. LaRosa, E., and D. Danks. 2018. Impacts on trust of healthcare AI. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society.
  107. Leach, M. 2019. Rugby wife slams “Nazi Samoa” for mandatory vaccination in face of measles crisis. nine.com.au. https://honey.nine.com.au/latest/anti-vaccine-wag-taylor-winterstein-nazi-samoa/b31c6b0d-e924-49cf-81b2-ea9781e109ff. Accessed April 2, 2020.
  108. Leben D. A Rawlsian algorithm for autonomous vehicles. Ethics and Information Technology. 2017;19(2):107–115. doi: 10.1007/s10676-017-9419-3. [DOI] [Google Scholar]
  109. LeDrew S. Scientism and utopia: New atheism as a fundamentalist reaction to relativism. In: Stenmark M, Fuller S, Zackariasson U, editors. Relativism and post-truth in contemporary society: Possibilities and challenges. Cham: Springer International Publishing; 2018. pp. 143–155. [Google Scholar]
  110. Loper PL. The electronic health record and acquired physician autism. JAMA Pediatrics. 2018;172(11):1009–1009. doi: 10.1001/jamapediatrics.2018.2080. [DOI] [PubMed] [Google Scholar]
  111. Lu J. Will medical technology deskill doctors? International Education Studies. 2016;9(7):130–134. doi: 10.5539/ies.v9n7p130. [DOI] [Google Scholar]
  112. Luxton DD. Should Watson be consulted for a second opinion? AMA Journal of Ethics. 2019;21(2):131–137. doi: 10.1001/amajethics.2019.131. [DOI] [PubMed] [Google Scholar]
  113. Mandl KD, Manrai AK. Potential excessive testing at scale: Biomarkers, genomics, and machine learning. JAMA. 2019;321(8):739–740. doi: 10.1001/jama.2019.0286. [DOI] [PMC free article] [PubMed] [Google Scholar]
  114. Mann, S.P., J. Savulescu, and B.J. Sahakian. 2016. Facilitating the ethical use of health data for the benefit of society: Electronic health records, consent and the duty of easy rescue. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374(2083): 20160130. [DOI] [PMC free article] [PubMed]
  115. Mark A, Kerridge I. Accelerating the de-personalization of medicine: The ethical toxicities of COVID-19. Journal of Bioethical Inquiry. 2020;17(4):815–821. doi: 10.1007/s11673-020-10026-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  116. Martin M, Wilson FP. Utility of electronic medical record alerts to prevent drug nephrotoxicity. Clinical Journal of the American Society of Nephrology. 2019;14(1):115–123. doi: 10.2215/CJN.13841217. [DOI] [PMC free article] [PubMed] [Google Scholar]
  117. Martinez-Martin N, Dunn LB, Roberts LW. Is it ethical to use prognostic estimates from machine learning to treat psychosis? AMA Journal of Ethics. 2018;20(9):804–811. doi: 10.1001/amajethics.2018.804. [DOI] [PMC free article] [PubMed] [Google Scholar]
  118. Mason, J.C., J. Grant, A. Behrman, and D. Stillwell. 2002. Methods of placing, purchasing and monitoring internet advertising. Google Patents.
  119. Masters K. Preparing medical students for the e-patient. Medical Teacher. 2017;39(7):681–685. doi: 10.1080/0142159X.2017.1324142. [DOI] [PubMed] [Google Scholar]
  120. McCann SK, Campbell MK, Entwistle VA. Reasons for participating in randomised controlled trials: Conditional altruism and considerations for self. Trials. 2010;11(1):31–41. doi: 10.1186/1745-6215-11-31. [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Mechanic D, Schlesinger M. The impact of managed care on patients’ trust in medical care and their physicians. JAMA. 1996;275(21):1693–1697. doi: 10.1001/jama.1996.03530450083048. [DOI] [PubMed] [Google Scholar]
  122. Merriman, D.A., and K. Joseph O’Connor. 1999. Method of delivery, targeting, and measuring advertising over networks. Google Patents.
  123. Meyer S, Ward P, Coveney J, Rogers W. Trust in the health system: An analysis and extension of the social theories of Giddens and Luhmann. Health Sociology Review. 2008;17(2):177–186. doi: 10.5172/hesr.451.17.2.177. [DOI] [Google Scholar]
  124. Min S, Lee B, Yoon S. Deep learning in bioinformatics. Briefings in Bioinformatics. 2016;18(5):851–869. doi: 10.1093/bib/bbw068. [DOI] [PubMed] [Google Scholar]
  125. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minimally Invasive Therapy & Allied Technologies. 2019;28(2):73–81. doi: 10.1080/13645706.2019.1575882. [DOI] [PubMed] [Google Scholar]
  126. Mittelman M, Markham S, Taylor M. Patient commentary: Stop hyping artificial intelligence—patients will always need human doctors. BMJ. 2018;363:k4669. doi: 10.1136/bmj.k4669. [DOI] [PubMed] [Google Scholar]
  127. Mohapatra S. Use of facial recognition technology for medical purposes: Balancing privacy with innovation. Pepperdine Law Review. 2015;43:1017–1064. [Google Scholar]
  128. Mosaly PR, Guo H, Mazur L. Toward better understanding of task difficulty during physicians’ interaction with electronic health record system (EHRs). International Journal of Human–Computer Interaction. 2019;35(20):1883–1891. doi: 10.1080/10447318.2019.1575081. [DOI] [Google Scholar]
  129. Mosier KL, Skitka LJ. Human decision makers and automated decision aids: Made for each other? In: Parasuraman R, Mouloua M, editors. Automation and human performance: Theory and applications. Hillsdale: Laurence Erlbaum Associates Inc.; 1996. pp. 201–220. [Google Scholar]
  130. Moynihan R. The making of a disease: Female sexual dysfunction. BMJ. 2003;326(7379):45–47. doi: 10.1136/bmj.326.7379.45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  131. Moynihan R, Heath I, Henry D. Selling sickness: The pharmaceutical industry and disease mongering: Medicalisation of risk factors. BMJ. 2002;324(7342):886–891. doi: 10.1136/bmj.324.7342.886. [DOI] [PMC free article] [PubMed] [Google Scholar]
  132. Müller, V.C., and N. Bostrom. 2016. Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence, 555–572. Springer.
  133. Nagler, J., J. van den Hoven, and D. Helbing. 2018. Ethics for times of crisis. SSRN. doi:10.2139/ssrn.3112742.
  134. O’Donnell HC, Kaushal R, Barrón Y, Callahan MA, Adelman RD, Siegler EL. Physicians’ attitudes towards copy and pasting in electronic note writing. Journal of General Internal Medicine. 2009;24(1):63–68. doi: 10.1007/s11606-008-0843-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  135. O’Malley AS, Grossman JM, Cohen GR, Kemper NM, Pham HH. Are electronic medical records helpful for care coordination? Experiences of physician practices. Journal of General Internal Medicine. 2010;25(3):177–185. doi: 10.1007/s11606-009-1195-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Osborne R, Kayser L. Skills and characteristics of the e-health literate patient. BMJ. 2018;361:k1656. doi: 10.1136/bmj.k1656. [DOI] [PubMed] [Google Scholar]
  137. Papadakos, P.J., and S. Bertman. 2017. Introduction: The problem of distracted doctoring. In Distracted Doctoring, edited by P.J. Papadakos and S. Bertman, 1–3. Springer.
  138. Parasuraman R, Manzey DH. Complacency and bias in human use of automation: An attentional integration. Human Factors. 2010;52(3):381–410. doi: 10.1177/0018720810376055. [DOI] [PubMed] [Google Scholar]
  139. Parikh, R.B., S. Teeple, and A.S. Navathe. 2019. Addressing bias in artificial intelligence in health care. JAMA. doi:10.1001/jama.2019.18058. [DOI] [PubMed]
  140. Parkinson P. Fiduciary law and access to medical records: Breen v. Williams. Sydney Law Review. 1995;17:433–445. [Google Scholar]
  141. Passos IC, Mwangi B, Cao B, et al. Identifying a clinical signature of suicidality among patients with mood disorders: A pilot study using a machine learning approach. Journal of Affective Disorders. 2016;193:109–116. doi: 10.1016/j.jad.2015.12.066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  142. Polito JM. Ethical considerations in internet use of electronic protected health information. The Neurodiagnostic Journal. 2012;52(1):34–41. [PubMed] [Google Scholar]
  143. Rannenberg, K., D. Royer, and A. Deuker. 2009. The future of identity in the information society: Challenges and opportunities. Springer Science & Business Media.
  144. Rasmussen KB. Should the probabilities count? Philosophical Studies. 2012;159(2):205–218. doi: 10.1007/s11098-011-9698-1. [DOI] [Google Scholar]
  145. Rassolian M, Peterson LE, Fang B, et al. Workplace factors associated with burnout of family physicians. JAMA Internal Medicine. 2017;177(7):1036–1038. doi: 10.1001/jamainternmed.2017.1391. [DOI] [PMC free article] [PubMed] [Google Scholar]
  146. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. Journal of the American Medical Informatics Association. 2019;27(3):491–497. doi: 10.1093/jamia/ocz192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  147. Risse M. Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly. 2019;41(1):1–16. doi: 10.1353/hrq.2019.0000. [DOI] [Google Scholar]
  148. Ritschl V, Lackner A, Boström C, et al. I do not want to suppress the natural process of inflammation: New insights on factors associated with non-adherence in rheumatoid arthritis. Arthritis Research & Therapy. 2018;20(1):234–244. doi: 10.1186/s13075-018-1732-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  149. Robertson I, Miles E, Bloor J. The iPad and medicine. BMJ. 2010;341:c5689. doi: 10.1136/bmj.c5689. [DOI] [Google Scholar]
  150. Rosenstein AH, O’Daniel M. Disruptive behavior and clinical outcomes: Perceptions of nurses and physicians. Nursing Management. 2005;36(1):18–28. doi: 10.1097/00006247-200501000-00008. [DOI] [PubMed] [Google Scholar]
  151. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? Journal of Advanced Nursing. 2004;47(1):81–90. doi: 10.1111/j.1365-2648.2004.03068.x. [DOI] [PubMed] [Google Scholar]
  152. Sade RM. Breaches of health information: Are electronic records different from paper records? Journal of Clinical Ethics. 2010;21(1):39–41. doi: 10.1086/JCE201021106. [DOI] [PubMed] [Google Scholar]
  153. Salla, E., M. Pikkarainen, J. Leväsluoto, H. Blackbright, and P.E. Johansson. 2018. AI innovations and their impact on healthcare and medical expertise. ISPIM Innovation Symposium.
  154. Salvador, V. 2018. On analogical knowledge: Metaphors in biotechnology discourse. Mètode Science Studies Journal: Annual Review. doi:10.7203/metode.9.10940
  155. Savulescu, J., and H. Maslen. 2015. Moral enhancement and artificial intelligence: Moral AI? In Beyond artificial intelligence: The disappearing human-machine divide, edited by J. Romportl, E. Zackova and J. Kelemen, 79–95. Springer.
  156. Schaefer GO, Emanuel EJ, Wertheimer A. The obligation to participate in biomedical research. JAMA. 2009;302(1):67–72. doi: 10.1001/jama.2009.931. [DOI] [PMC free article] [PubMed] [Google Scholar]
  157. Schenarts PJ, Schenarts KD. Educational impact of the electronic medical record. Journal of Surgical Education. 2012;69(1):105–112. doi: 10.1016/j.jsurg.2011.10.008. [DOI] [PubMed] [Google Scholar]
  158. Schiff D, Borenstein J. How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics. 2019;21(2):138–145. doi: 10.1001/amajethics.2019.138. [DOI] [PubMed] [Google Scholar]
  159. Schirrmeister RT, Springenberg JT, Fiederer LDJ, et al. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping. 2017;38(11):5391–5420. doi: 10.1002/hbm.23730. [DOI] [PMC free article] [PubMed] [Google Scholar]
  160. Schönberger D. Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. International Journal of Law and Information Technology. 2019;27(2):171–203. [Google Scholar]
  161. Schwarzer G, Vach W, Schumacher M. On the misuses of artificial neural networks for prognostic and diagnostic classification in oncology. Statistics in Medicine. 2000;19(4):541–561. doi: 10.1002/(SICI)1097-0258(20000229)19:4<541::AID-SIM355>3.0.CO;2-V. [DOI] [PubMed] [Google Scholar]
  162. Scott-Samuel A. Less medicine, more health: A memoir of Ivan Illich. Journal of Epidemiology and Community Health. 2003;57(12):935–935. doi: 10.1136/jech.57.12.935. [DOI] [PMC free article] [PubMed] [Google Scholar]
  163. Selwyn N. Digital downsides: Exploring university students’ negative engagements with digital technology. Teaching in Higher Education. 2016;21(8):1006–1021. doi: 10.1080/13562517.2016.1213229. [DOI] [Google Scholar]
  164. Shanafelt, T.D, O. Hasan, L.N Dyrbye, et al. 2015. Changes in burnout and satisfaction with work-life balance in physicians and the general US working population between 2011 and 2014. Mayo Clinic Proceedings. [DOI] [PubMed]
  165. Shanafelt TD, Dyrbye LN, West CP. Addressing physician burnout: The way forward. JAMA. 2017;317(9):901–902. doi: 10.1001/jama.2017.0076. [DOI] [PubMed] [Google Scholar]
  166. Sharma L, Chandrasekaran A, Boyer KK, McDermott CM. The impact of health information technology bundles on hospital performance: An econometric study. Journal of Operations Management. 2016;41(1):25–41. doi: 10.1016/j.jom.2015.10.001. [DOI] [Google Scholar]
  167. Sikora, C., and W. Burleson. 2017. The dance of emotion: Demonstrating ubiquitous understanding of human motion and emotion in support of human computer interaction. Seventh International Conference on Affective Computing and Intelligent Interaction (ACII).
  168. Slomski A. Scribes improved ED productivity: Clinical trials update. JAMA. 2019;321(12):1149–1149. doi: 10.1001/jama.2019.2492. [DOI] [PubMed] [Google Scholar]
  169. Smith SW, Koppel R. Healthcare information technology’s relativity problems: A typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ. Journal of the American Medical Informatics Association. 2013;21(1):117–131. doi: 10.1136/amiajnl-2012-001419. [DOI] [PMC free article] [PubMed] [Google Scholar]
  170. Souto-Otero M, Beneito-Montagut R. From governing through data to governmentality through data: Artefacts, strategies and the digital turn. European Educational Research Journal. 2016;15(1):14–33. doi: 10.1177/1474904115617768. [DOI] [Google Scholar]
  171. Stanila, L. 2018. Artificial Intelligence and Human Rights: A challenging approach on the issue of equality. Journal of Eastern European Criminal Law, no. 2: 19–30.
  172. Steels L. What needs to be done to ensure the ethical use of AI? In: Falomir Z, Gilbert K, Plaza E, editors. Artificial Intelligence Research and Development. Current Challenges, New Trends and Applications. Amsterdam: IOS Press; 2018. pp. 10–13. [Google Scholar]
  173. Stenmark M. What is scientism? Religious Studies. 1997;33(1):15–32. doi: 10.1017/S0034412596003666. [DOI] [Google Scholar]
  174. Stern RJ. Teaching medical students to engage meaningfully and judiciously with patient data. JAMA Internal Medicine. 2016;176(9):1397. doi: 10.1001/jamainternmed.2016.3886. [DOI] [PubMed] [Google Scholar]
  175. Sunstein CR, Thaler RH. Libertarian paternalism is not an oxymoron. The University of Chicago Law Review. 2003;70(4):1159–1202. doi: 10.2307/1600573. [DOI] [Google Scholar]
  176. Susło, R., J. Trnka, and J. Drobnik. 2017. Current threats to medical data security in family doctors’ practices. Family Medicine & Primary Care Review (3): 313–318.
  177. Suziedelyte A. How does searching for health information on the Internet affect individuals’ demand for health care services? Social Science & Medicine. 2012;75(10):1828–1835. doi: 10.1016/j.socscimed.2012.07.022. [DOI] [PubMed] [Google Scholar]
  178. Swinglehurst D, Roberts C, Li S, Weber O, Singy P. Beyond the “dyad”: A qualitative re-evaluation of the changing clinical consultation. BMJ Open. 2014;4(9):e006017. doi: 10.1136/bmjopen-2014-006017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  179. Thielke S, Hammond K, Helbig S. Copying and pasting of examinations within the electronic medical record. International Journal of Medical Informatics. 2007;76:S122–S128. doi: 10.1016/j.ijmedinf.2006.06.004. [DOI] [PubMed] [Google Scholar]
  180. Thornton JD, Schold JD, Venkateshaiah L, Lander B. Prevalence of copied information by attendings and residents in critical care progress notes. Critical Care Medicine. 2013;41(2):382–388. doi: 10.1097/CCM.0b013e3182711a1c. [DOI] [PMC free article] [PubMed] [Google Scholar]
  181. Tsou AY, Lehmann CU, Michel J, Solomon R, Possanza L, Gandhi T. Safe practices for copy and paste in the EHR. Applied Clinical Informatics. 2017;26(1):12–34. doi: 10.4338/ACI-2016-09-R-0150. [DOI] [PMC free article] [PubMed] [Google Scholar]
  182. Van Dijck J. Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society. 2014;12(2):197–208. doi: 10.24908/ss.v12i2.4776. [DOI] [Google Scholar]
  183. van Wynsberghe A, Robbins S. Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics. 2019;25(3):719–735. doi: 10.1007/s11948-018-0030-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  184. Venditti, L.F., J. Fleming, and K. Kugelmeyer. 2019. Algorithmic surveillance: A hidden danger in recognizing faces. Honours thesis, Colby College.
  185. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: Humanism and artificial intelligence. JAMA. 2018;319(1):19–20. doi: 10.1001/jama.2017.19198. [DOI] [PubMed] [Google Scholar]
  186. Walker K, Ben-Meir M, O’Mullane P, Phillips D, Staples M. Scribes in an Australian private emergency department: A description of physician productivity. Emergency Medicine Australasia. 2014;26(6):543–548. doi: 10.1111/1742-6723.12314. [DOI] [PubMed] [Google Scholar]
  187. Wallach W, Allen C, Smit I. Machine morality: Bottom-up and top-down approaches for modelling human moral faculties. AI & Society. 2008;22(4):565–582. doi: 10.1007/s00146-007-0099-0. [DOI] [Google Scholar]
  188. Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clinical Psychological Science. 2017;5(3):457–469. doi: 10.1177/2167702617691560. [DOI] [Google Scholar]
  189. Wang YC, DeSalvo K. Timely, granular, and actionable: Informatics in the public health 3.0 era. American Journal of Public Health. 2018;108(7):930–934. doi: 10.2105/AJPH.2018.304406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  190. Wangberg SC, Andreassen HK, Prokosch H-U, Santana SMV, Sørensen T, Chronaki CE. Relations between Internet use, socio-economic status (SES), social support and subjective health. Health Promotion International. 2007;23(1):70–77. doi: 10.1093/heapro/dam039. [DOI] [PubMed] [Google Scholar]
  191. Wasserman RC. Electronic medical records (EMRs), epidemiology, and epistemology: Reflections on EMRs and future pediatric clinical research. Academic Pediatrics. 2011;11(4):280–287. doi: 10.1016/j.acap.2011.02.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  192. Weis JM, Levy PC. Copy, paste, and cloned notes in electronic health records. Chest. 2014;145(3):632–638. doi: 10.1378/chest.13-0886. [DOI] [PubMed] [Google Scholar]
  193. West CP, Dyrbye LN, Shanafelt TD. Physician burnout: Contributors, consequences and solutions. Journal of Internal Medicine. 2018;283(6):516–529. doi: 10.1111/joim.12752. [DOI] [PubMed] [Google Scholar]
  194. Wu DCN, Corbett K, Horton S, Saleh N, Mosha TCE. Effectiveness of social marketing in improving knowledge, attitudes and practice of consumption of vitamin A-fortified oil in Tanzania. Public Health Nutrition. 2018;22(3):466–475. doi: 10.1017/S1368980018003373. [DOI] [PMC free article] [PubMed] [Google Scholar]
  195. Yeung K. “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication & Society. 2017;20(1):118–136. doi: 10.1080/1369118X.2016.1186713. [DOI] [Google Scholar]
  196. Zhao JY, Kessler EG, Yu J, et al. Impact of trauma hospital ransomware attack on surgical residency training. Journal of Surgical Research. 2018;232:389–397. doi: 10.1016/j.jss.2018.06.072. [DOI] [PubMed] [Google Scholar]
  197. Zhou SH, McCarthy ID, McGregor AH, Coombs RRH, Hughes SPF. Geometrical dimensions of the lower lumbar vertebrae—analysis of data from digitised CT images. European Spine Journal. 2000;9(3):242–248. doi: 10.1007/s005860000140. [DOI] [PMC free article] [PubMed] [Google Scholar]
