The Linacre Quarterly
2023 May 29;90(4):375–394. doi: 10.1177/00243639231162431

“Just the Facts Ma’am”: Moral and Ethical Considerations for Artificial Intelligence in Medicine and its Potential to Impact Patient Autonomy and Hope

Charles S. Love
PMCID: PMC10638968  PMID: 37974568

Abstract

Applying machine-based learning and synthetic cognition, commonly referred to as artificial intelligence (AI), to medicine intimates prescient knowledge. The ability of these algorithms to potentially unlock secrets held within vast data sets makes them invaluable to healthcare. Complex computer algorithms are routinely used to enhance diagnoses in fields like oncology, cardiology, and neurology. These algorithms have found utility in making healthcare decisions that are often complicated by seemingly endless relationships between exogenous and endogenous variables. They have also found utility in the allocation of limited healthcare resources and the management of end-of-life issues. With the increase in computing power and the ability to test a virtually unlimited number of relationships, scientists and engineers have the unprecedented ability to increase the prognostic confidence that comes from complex data analysis. While these systems present exciting opportunities for the democratization and precision of healthcare, their use raises important moral and ethical considerations around Christian concepts of autonomy and hope. The purpose of this essay is to explore some of the practical limitations associated with AI in medicine and discuss some of the potential theological implications that machine-generated diagnoses may present. Specifically, this article examines how these systems may disrupt the patient and healthcare provider relationship emblematic of Christ's healing mission. Finally, this article seeks to offer insights that might help in the development of a more robust ethical framework for the application of these systems in the future.

Keywords: Autonomy, Artificial intelligence, Catholic theology, Computer-based diagnoses, Communication between healthcare professional and patient, Hope, Healthcare, Hope and despair, Machine-based learning, Theology and bioethics

Introduction

The use of artificially intelligent systems to generate diagnostic data within healthcare is accelerating at a rate commensurate with the global technological revolution. While these systems present exciting opportunities for the democratization and precision of healthcare, their use raises important moral and ethical considerations around Christian ideas of autonomy and hope. This essay considers some of the basic concepts behind the use of artificial intelligence (AI) in medicine and how reliance on machines to make our most fundamental decisions about life and death may conflict with Christian virtues. Specifically, this essay will consider whether machine-based diagnoses have the potential to interfere with the virtue of hope that is both central to the Christian faith and is associated with positive clinical outcomes.

Christians understand that hope is more than a coping mechanism. It is a motive force, empowered by grace, which leads us to the physical presence of God. The humble medieval monk and principal architect of modern Catholic theology, St. Thomas Aquinas, understood the tangibility of hope by connecting it to a future good (Aquinas 1966, II-II Q17, a1). That is, hope becomes revelatory by leading us to new realities. It is the basis of prayer and central to the fulfillment of Christ's promise. It is in this regard that Christians understand that divine intercession is most profoundly recognized in the physical altering of our realities.

The confrontation with one's mortality in times of illness brings hope into sharp focus. Patients facing difficult diagnoses depend on hope to cope with uncertainty by replacing despair with the possibility of a future good. In this regard, hope uniquely attaches itself to creation by opening the door to the agency of the Holy Spirit. Healthcare must always be concerned with preserving the gift of hope and its power to alter spiritual and physical outcomes. Accordingly, the inexorability of modern medicine places a burden on the magisterium to ensure that technologies like AI do not abrogate the foundations of hope and limit God's participation in the outcome.

To illustrate this point, we can consider the hypothetical situation of an eighty-five-year-old grandfather who is rushed to the emergency department for chest pain. After a diagnostic workup that included only an electrocardiogram, the family meets with the attending physician. She tells them that their grandfather is resting comfortably and should be able to go home later in the day. She also tells the family that their grandfather is extremely sick, has a high probability of dying within the next two years, and that this prognosis limits further intervention. The doctor recommends palliative care and suggests that at some point, when the symptoms become worse, the family should consider hospice care. This news stuns the family because until this event their grandfather was the epitome of health. He exercised regularly, ate a healthy diet, and was free of bad behavioral habits. As the family struggles to process the news, they also wonder how the physician can be so sure of the prognosis.

When the grandfather presented to the emergency department, his age and health history, demographic profile, socioeconomic status, and other related data were accessed by the hospital's electronic health record (EHR) system. A computer algorithm combined these data with the recent diagnostic data related to the patient's complaint of chest pain to create a synthetic prognosis based on a statistical comparison to others who are similarly matched. Based on these data, the healthcare system made a decision that additional therapies would neither significantly enhance nor prolong the patient's life.

While this scenario may seem futuristic, complex computer algorithms are routinely used to enhance diagnoses in fields like oncology, cardiology, and neurology. These algorithms have found utility in making healthcare decisions that are often complicated by seemingly endless relationships between exogenous and endogenous variables. They have also found utility in the allocation of limited healthcare resources and the management of end-of-life issues. Like other healthcare technologies such as genetic sequencing, complex computer algorithms present difficult moral dilemmas including the potential for the removal of hope which has been associated with poor outcomes.

The purpose of this essay is to first explore the limitations associated with AI in medicine and some of the ethical and moral implications these systems can present. Second, the essay will consider how these technologies can negatively affect patient autonomy and hope and compromise Christian concepts of the missiological significance of caring for the sick. Finally, I will consider a few guiding principles that may help us to develop a more robust ethical framework for the application of these systems in the future.

A Brief Background on Machine-Based Learning

Machine-based learning belongs to a form of computer architecture termed artificial intelligence (AI) and encompasses everything from relational computing to neural networks (Ngiam and Khor 2019, 262–73). AI is often poorly defined and is frequently applied casually to any computer program that is capable of aggregating large amounts of data and is programmed to examine complex and often imperceptible relationships within disparate data sets to implicate an outcome (Wang 2019, 1–37). Terms such as “machine learning”, “synthetic cognition”, “robotics”, “augmented reality”, and “expert systems” are frequently amalgamated into the broader definition of AI and often used as synonyms (Monett and Lewis 2018, 212–14). The difference between conventional computing, such as adding rows of numbers in a spreadsheet, and AI algorithms is that the latter retain a “knowledge” of past relationships and apply that knowledge to subsequent data analyses. For example, an observational analysis might assume that a Tanzanian zebra population is generally equally distributed between males and females, yet year over year that ratio varies slightly. A deep learning algorithm (DLA) might reveal a relationship between zebra sex ratios and the total amount of snowfall on Mount Kilimanjaro, which is affected by atmospheric carbon dioxide: the yearly run-off causes slight alterations in the phytoestrogen levels in the local grasses, so heavier snowfall than normal on Kilimanjaro favors female foals. An analysis and predictive conclusion that might consume the entire career of a field biologist can be greatly accelerated using multitudinous simulations, with speed and accuracy that significantly exceed human capabilities.

Can machines really think? There are many types of machine-based learning systems that range from semi-autonomous systems that “learn” by applying identified relationships to new data—all stoplights are red, therefore a red light may be a stoplight—to completely autonomous systems that can seek out data independently (p. 214). This essay will focus on a form of AI called a deep learning algorithm (DLA). These systems, also sometimes called neural networks, are intended to, at one level, mimic the way the human brain looks for relationships within a given set of data (observations) by applying experience to the analysis of new and additional data sets (prediction). Complex abstractions follow a series of simpler abstractions, resulting in a hierarchical set of relationships that can be applied to new and expanding data sets (Najafabadi et al. 2015, 1–21). For simplicity, I will refer to this process as “synthetic cognition”: a system that attempts to replicate the way the human mind relies on relatedness within a body of learned experience to create an inference. As the machine stores a greater body of experience, it can more confidently predict an outcome. What separates a DLA from conventional computational algorithms is that these systems do not necessarily require the data to be structured like the rows and columns found in a spreadsheet. DLAs can process very large, seemingly unrelated, and nonlinear data sets to examine potential relationships (p. 6). Moreover, these systems are highly scalable and are limited in their ability to test relationships only by available computing power.

In their simplest form, computers are machines that automate complex computational tasks. They perform these tasks in much the same way that humans do. A complex set of numbers in a spreadsheet, for example, can be quickly reduced to means, medians, standard deviations, and other distributions far faster than doing the same calculations manually, although the logic is the same. Tasks that might take humans hours or even days to complete, depending on the number of entries, can be done in seconds by the computer. Even the most inexpensive laptop is a marvel of processing power, but the conventional computer is not “aware” when data may be suspicious. A table listing the ages of children in kindergarten shows 5, 5, 6, 5, 5, 6, 5, 15, and 5. The computer would not immediately recognize the entry “15” as potentially incorrect without additional programming, whereas the teacher, or any human reviewer of the data, would immediately see that the entry requires further analysis.

DLAs take computational tasks a step further by repeatedly testing the data for relationships. In the example above, a DLA could be programmed to examine the age of every kindergarten student in the United States. Based on this acquired knowledge, the computer would likely reject the entry of fifteen years because 15 lies too far from the mean. However, some human minds might still create an imaginary situation of a fifteen-year-old kindergarten student thus illustrating the central difference between synthetic cognition and the human mind: human cognition begins and ends with imagination.
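To make the contrast concrete, a minimal sketch follows. The “learned” population mean and standard deviation are assumed values invented for illustration, not drawn from any real data set: a conventional calculation summarizes the ages without comment, while a simple experience-based check flags the implausible entry.

```python
import statistics

# Toy data from the kindergarten example above.
ages = [5, 5, 6, 5, 5, 6, 5, 15, 5]

# A conventional program computes summary statistics without "noticing" anything odd.
print("mean =", round(statistics.mean(ages), 2),
      " stdev =", round(statistics.stdev(ages), 2))

# A system that has "learned" a national age distribution (values assumed here
# purely for illustration) can flag entries that lie implausibly far from that mean.
LEARNED_MEAN = 5.4   # hypothetical population mean
LEARNED_STDEV = 0.6  # hypothetical population standard deviation

for age in ages:
    z = (age - LEARNED_MEAN) / LEARNED_STDEV
    if abs(z) > 3:  # more than three standard deviations from the learned mean
        print(f"age {age} flagged for review (z = {z:.1f})")
```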

The strength of the relationships is measured (outcomes) and weighted (probability) and applied to subsequent analyses. Consequently, the more data that is available, the more thoroughly the strength of the relationship can be tested and assessed. Relationships are established based on an assigned probability that, under similar conditions, the relationship will be repeated. In a standard coin flip scenario, for example, it is understood that each flip has the same probability of landing heads or tails. However, when the coin can be virtually flipped a billion or more times under simulated conditions such as the metallurgical composition of the coin, minute differences in its weight distribution or density, aerodynamics, humidity, and other factors that influence the coin's behavior as it flies through the air and bounces to a landing, the model may be able to predict how the coin will land with greater precision. The greatly expanded set of data and analysis provides the opportunity to test for relationships that go far beyond what is assumed and well beyond what could be tested in a conventional laboratory setting. If enough data on the relationships that precondition an event can be captured, then it would be possible to predict the outcome of the coin flip with reasonable certainty. This is, of course, because a coin flip is not a random event but is governed by the laws of physics (Mukherjee 2020).
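The point can be illustrated with a short simulation. The sketch below is hypothetical: the pre-flip conditions (spin, launch angle, drag) and the weights that combine them are invented for illustration, but it shows how knowing more of the conditions that precede an event lifts predictive accuracy well above the fifty-fifty baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # one million simulated flips

# Hypothetical pre-flip measurements that partly determine the outcome
# (spin rate, launch angle, drag); distributions and weights are illustrative only.
spin = rng.normal(0.0, 1.0, n)
angle = rng.normal(0.0, 1.0, n)
drag = rng.normal(0.0, 1.0, n)
unmeasured = rng.normal(0.0, 1.0, n)  # everything the simulation does not capture

# Outcome: heads when a weighted combination of conditions crosses a threshold.
heads = (0.8 * spin + 0.5 * angle - 0.3 * drag + 0.4 * unmeasured) > 0

# With no condition data, guessing "heads" every time is right about half the time.
print("accuracy with no conditions:", round(heads.mean(), 3))

# Using the measured conditions (only the unmeasured term remains unknown) does much better.
predicted = (0.8 * spin + 0.5 * angle - 0.3 * drag) > 0
print("accuracy with conditions:   ", round((predicted == heads).mean(), 3))
```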

The veracity of machine-based analyses is determined by their statistical significance. There are many ways to measure the strength of the relationship between events. The “certainty” that comes from the relationship between the predictive factors and the outcome is a statistical probability. One main difference between classical statistical analysis and the statistical analyses used in DLAs is that the former is inferential and the latter implicative (Azzolina et al. 2019, 1–14). That is, classical statistical analysis is rule-based and descriptive of data, including distributions, means, standard deviations, and so on. Deep learning statistical models are not rule-based per se and can adjust the rules by applying prior relationships to new problems. For example, a DLA can help the dermatologist to identify the nature of a suspicious mole by aggregating the learned experience from a global database of melanomas and their associated outcomes (Rajula et al. 2020, 1–10).
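As a rough, hypothetical sketch of that distinction (the lesion features, labels, and model below are synthetic and chosen only for illustration, not a clinical tool), a rule-based description simply reports summary statistics, while a learned model fit to prior cases can be applied to a new, unseen case:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 5_000

# Synthetic "lesion" features (asymmetry, border irregularity, diameter) and a
# label standing in for biopsy-confirmed malignancy; entirely made-up data.
X = rng.normal(size=(n, 3))
y = (1.2 * X[:, 0] + 0.7 * X[:, 1] ** 2 - 0.5 * X[:, 2] + rng.normal(0, 0.5, n)) > 1.0

# Classical, descriptive view: report mean feature values for each class.
print("benign means:   ", X[~y].mean(axis=0).round(2))
print("malignant means:", X[y].mean(axis=0).round(2))

# Implicative, learned view: a small neural network fit to prior cases and then
# applied to new, unseen cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```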

This essay is not intended to be an exhaustive examination of the statistics used in medical research; there are too many statistical tools to discuss adequately in such a brief summary. However, a couple of concepts are used here to briefly illustrate how machines can arrive at conclusions and how healthcare providers may rely on that information to make certain medical decisions.

When approaching the challenge of determining the strength of a relationship between events, statistics can describe the probability of an outcome, which is the difference between the expectation and the occurrence. Going back to the coin flip example, we accept that the probability of a single coin flip is fifty-fifty. The expectation, conditioned upon experience, is that half the time the coin will land tails. Consequently, the distribution curve, or what is commonly referred to as a bell curve, for the outcomes of a coin flipped one hundred times is sharply delineated. In contrast, a dart thrown at the bullseye on a dartboard will have a wider distribution. The shape of the bell curve defines the probability of an outcome. A narrow bell curve suggests that most events will occur at or near the mean; action A will almost always produce result B. A wide bell curve indicates a greater spread and therefore less confidence in the mean; action A may result in B some percentage of the time.
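A brief simulation makes the contrast concrete; the numbers, including the assumed spread of the dart throws, are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)

def share_near_mean(samples, tolerance=0.10):
    """Fraction of outcomes that land within +/- tolerance of the sample mean."""
    m = samples.mean()
    return np.mean(np.abs(samples - m) <= tolerance * m)

# 10,000 experiments, each flipping a fair coin 100 times: the head counts
# cluster tightly around 50 (a narrow bell curve).
heads_per_run = rng.binomial(n=100, p=0.5, size=10_000)
print("coin flips near the mean:", round(share_near_mean(heads_per_run), 2))

# 10,000 simulated dart throws: distance from the bullseye (assumed spread, in cm)
# varies far more, so fewer outcomes fall near the mean and confidence in it drops.
dart_error = np.abs(rng.normal(loc=0.0, scale=8.0, size=10_000))
print("dart throws near the mean:", round(share_near_mean(dart_error), 2))
```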

Another frequently used statistical tool is the receiver operating characteristic (ROC) curve (Polo and Miot 2020, 1–4). A ROC curve is a form of statistical prediction that is based on a known result. ROC curves were originally developed by British radar operators during the Second World War as a means of increasing confidence in reflected radar signals by statistically eliminating false positives (Green and Swets 1966). If the radar operator knows generally where the plane is coming from, then they can compare the strength and duration of the predicted radar signal with the actual signal. For example, a plane over the English Channel would exhibit a certain relationship to radar intensity as it approached the coastline. Similarly, a weaker signal from the Hebrides could be disregarded as an artifact from a large albatross. The closer the “blip” was to its calculated values, the greater the confidence a radar operator could have in the signal. As more data were collected, for instance when an artifact blip turned out to be a real threat or vice versa, operators could adjust their assumptions to achieve greater confidence (sensitivity) in the relationship between predicted and actual values (specificity). When enough data (experience) has been compiled, the ROC curve can be used to establish a cut-off point that can discriminate between a false positive and false negative.

ROC curves are particularly useful in medicine in distinguishing between true and false outcomes. Medical science relies on the sensitivity and specificity of data to establish cutoff points for a variety of tests. Pregnancy test strips, for example, assess the presence and concentration of human chorionic gonadotropin (hCG) in the mother's urine by discriminating between the upper and lower ranges during pregnancy (Kariman et al. 2011, 415–19). Oncological radiology can benefit from ROC curves by evaluating the radiographic appearance of a tumor to assess its malignancy (Rajula et al. 2020, 1–10). Vaccines can be titrated for seropositive results in large and diverse populations to measure efficacy sooner than what would be possible in large longitudinal studies (Yu et al. 2018, 2692–700). As the cutoff criteria narrow, the sensitivity increases while the specificity decreases, and vice versa (Polo and Miot 2020, 2). Going back to the radar example above, the greater the confidence in the quality of the blip, the less needs to be known about the conditions that caused it.
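A short, hypothetical sketch shows how such a cutoff might be derived in practice. The marker values below are synthetic, and the threshold is chosen with Youden's index, one common rule among several; none of the numbers correspond to a real assay:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)

# Synthetic test values: a marker (for example, a hormone concentration) that tends
# to be higher in true positives than in true negatives; numbers are illustrative.
negatives = rng.normal(loc=10, scale=4, size=1_000)
positives = rng.normal(loc=20, scale=5, size=1_000)
values = np.concatenate([negatives, positives])
truth = np.concatenate([np.zeros(1_000), np.ones(1_000)])

# The ROC curve sweeps every possible cutoff and records the trade-off between
# sensitivity (true-positive rate) and the false-positive rate (1 - specificity).
fpr, tpr, thresholds = roc_curve(truth, values)
print("area under the curve:", round(roc_auc_score(truth, values), 3))

# One common way to pick a single cutoff: maximize Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print("suggested cutoff:", round(thresholds[best], 2),
      " sensitivity:", round(tpr[best], 2),
      " specificity:", round(1 - fpr[best], 2))
```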

Many statistical tools can be used to estimate the conditions (relationships) that lead to disease (events) while at the same time predicting which therapies are most likely to be effective (outcomes). DLAs exponentially expand the utility of predictive statistics by looking for relationships within massively diverse data sets that would be difficult to manage using traditional computational tools. DLAs can be used to efficiently test the strength of relationships in the data against outcomes and repeat the process ad infinitum in search of medical certainty.

For these reasons, DLAs have been met with wide enthusiasm because of their ability to unlock secrets held within complex data sets. Moreover, these systems are accepted by the general population because they can address the 4Ps of medicine: predictive, preventive, personalized, and participatory (Briganti and Le Moine 2020, 1–6).

With the increase in computing power and the ability to test a virtually unlimited number of relationships, scientists and engineers have the unprecedented ability to increase the prognostic confidence that comes from complex data analysis (Choi et al. 2020, 1–10). DLAs are transforming healthcare in multiple ways including accelerating the diagnosis, enhancing the precision of the diagnosis, and increasing the reliability of the diagnosis (p. 5). DLAs have also been used to ration healthcare resources and improve healthcare economics in end-of-life and palliative care situations (Avati et al. 2018, 56–64). Each of these applications can positively enhance the delivery of healthcare, but they also present certain moral and ethical considerations.

Three principal characteristics of DLA systems applied to healthcare decision making will be considered. These include the so-called “black-box” conundrum, programming bias, and “complacency syndrome.”

The term “black-box” refers to a computer system that provides output without revealing how the output was derived (Price 2018). Healthcare providers may understand the data supplied to and processed by the DLA but may not be able to assess the validity of the decision-making process because of a lack of understanding of how the algorithm operates. DLAs benefit from massive amounts of data that allow machines to make a prediction, but because of the amount of data that is typically processed, and the methods employed to test relationships within the data, physicians and healthcare providers are often unable to reasonably explain how certain relationships and dependencies were adjudicated by the DLA. These issues lead to a layered problem for healthcare delivery. Doctors or other healthcare providers may not understand why an algorithm made a certain decision, yet they still must explain it to their patients or colleagues (Woodcock et al. 2021, 1–20). The second, and more pernicious, conflict arises when certain DLA outcomes are at odds with a healthcare provider's intuition and experience, putting the DLA in conflict with the conscious mind (Vogel 2018, 998–99).

Many people rely comfortably on data outputs from systems they do not understand. An indicator on the dashboard informs the driver that the air pressure is low in one of the car's tires. A mobile phone indicates that rain is imminent, and so on. In one sense, an argument might be made that, like most of us, healthcare providers do not need to understand the inner workings of complex algorithms provided the output is reliable. However, unlike algorithms that automate daily tasks, the physician is often required to act as a proxy (Fridman et al. 2019, 1335–43). A DLA-derived decision suggesting that the biopsy of a solid tumor is not necessary, based on a comparison to thousands of similar images, may be at odds with the judgment of a physician who has seen far fewer images but intuitively has concerns about its appearance based on a single bad experience (Vogel 2018, 998). Moreover, if it were a lesion on the physician's own body, might he or she biopsy it just to be sure?

In 2015, researchers at Mount Sinai Hospital applied a highly complex algorithm to seventy thousand EHRs to establish a correlation between sleep disorders and schizophrenia. The authors found a significant correlation in a disease that is notoriously difficult to diagnose, yet one of the principal authors was unable to explain why the algorithm was successful (Miotto et al. 2016, 1–10). How can a doctor inform a patient about the onset of a devastating disease when they do not understand how the diagnosis was determined?

While society is beginning to encounter the opacity of synthetic cognition, such as the way credit scores are assessed (Rice and Swesnik 2013, 950) or how the IRS looks for tax cheats (Rubin 2020), it is clear that these systems are prone to error and misidentification based on incorrect data or assumptions. While errors in credit scores and taxes can be remediated—albeit often painfully—errors in medicine may not be so easily addressed.

Having considered some of the implications of “black-box” algorithms, attention must turn to their programming. The potential for statistical and social bias remains one of the significant limitations to the use of synthetic cognition in medicine (Norori et al. 2021, 1–9). Statistical bias occurs when a data set does not represent the true distribution of a population causing the algorithm to produce a result that differs from the true estimate. There have been numerous reports of algorithms that discriminate against vulnerable groups in the same fields in which DLAs have shown promising results in other populations (Norori et al. 2021, 1–9).

DLAs rely on existing data sets. They cannot make their own data per se. Data accessed by these systems may come from large clinical trials, epidemiological studies, results from morbidity and mortality committees, hospital and network outcomes, case reports, etc. Each source of data is weighted according to the method by which it was collected and analyzed. Data sets are frequently developed with a set of rules called a protocol. DLAs can neither assess the process by which the data were collected, nor can they independently assess the relative strength of the data that may influence its veracity (Wang, Casalino and Khullar 2018, 293–94).

Biases can be introduced in a variety of ways (Parikh, Teeple and Navathe 2019, 2377–378). Undersampling of data is a common form of bias and can skew distributions. Minority populations are often underrepresented in clinical trials. For example, the study of genomics is compromised when data from African American women and other minorities are underrepresented (McCarthy et al. 2016, 2610–8). The Framingham Heart Study collected data from a predominantly white, non-Hispanic population. When its results are applied to other ethnic groups, the outcomes are less reliable due to genomic and social differences not accounted for in the study (Gijsberts et al. 2015, 1–13). Sex and gender bias remains a problem in medical research. Women are often underrepresented in clinical studies, and their differing physiology can confound data developed in predominantly male populations (Tower 2017, 735–47). These differences can, for example, confound the understanding of differing drug interactions between male and female patients (Simon 2005, 1557). Similarly, harmful variants in the BReast CAncer (BRCA) 1 and 2 genes can be an indicator of early onset breast cancer in both women and men, yet the preponderance of data on the BRCA genes has come from large longitudinal studies conducted on women (Cancer Treatment Centers of America 2017). Consequently, the role of variants in BRCA genes in men is not as well understood (Liede, Karlan and Narod 2004, 735–42). This potential for bias undermines the utility of DLAs in predictive medicine and can often mislead physicians on establishing causative relationships (Maddox, Rumsfeld and Payne 2018, 31–32).
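The effect of undersampling can be sketched with a deliberately exaggerated toy model. The two groups, their sizes, and the opposite marker-outcome relationships below are invented to make the failure mode visible, not to describe any real population: a model trained mostly on the majority group performs well for that group and poorly for the undersampled one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def make_group(n, slope):
    """Synthetic patients whose outcome depends on a marker with a group-specific slope."""
    x = rng.normal(size=(n, 1))
    y = (slope * x[:, 0] + rng.normal(0, 0.5, n)) > 0
    return x, y

# Group A dominates the training data; group B is undersampled and, in this toy
# model, has an opposite relationship between marker and outcome.
xa, ya = make_group(9_500, slope=1.0)
xb, yb = make_group(500, slope=-1.0)

model = LogisticRegression()
model.fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# The pooled model performs well on the majority group and poorly on the minority group.
xa_test, ya_test = make_group(1_000, slope=1.0)
xb_test, yb_test = make_group(1_000, slope=-1.0)
print("accuracy, group A:", round(model.score(xa_test, ya_test), 2))
print("accuracy, group B:", round(model.score(xb_test, yb_test), 2))
```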

Retrospective inference is the process human beings go through to understand and apply existing data to a new problem. The collective experience of the past leads to fixed beliefs about the future. Smoking leads to lung disease and cancer; ergo, a patient who takes up smoking is likely to die from related causes. A high-fat diet elevates cholesterol and leads to coronary artery disease, and so on. The problem is that these rules do not always apply. There will be some smokers who escape lung disease and some fast-food aficionados who dodge heart disease. Consequently, the ongoing relevance of these relationships needs to be constantly updated with new data. In this instance, one might argue that machine-based algorithms are perfect for this task. They can look at exceptional data and find new relationships that might explain the unpredicted result, such as a genetic or environmental component, yet these machines remain at the mercy of available data and, despite some arguments to the contrary (Johnson-Laird 1993, 14), are unable to think prospectively in the way a human ideates (Awad et al. 2018, 59–64). Finally, a derivative challenge of retrospective inference is the potential for new data that may confound the starting assumption. To use the smoking example above, if the chemical composition of a cigarette today is different than it was twenty years ago, then the algorithm must have the ability to account for these types of changes for the output to be reliable. Consequently, for a DLA to behave adaptively, it must constantly be updated with new data (FitzGerald et al. 2020, 2).

Finally, it must be understood that DLAs and other machine-based systems are often employed not just as diagnostic tools but are used in the context of managing resource allocation in large healthcare networks. What cannot be known to the physician or patient is whether the system has been programmed for purposes other than diagnostic reasons. Certain algorithms may be developed as a risk-based decision model that includes a secondary analysis of the financial impact on the system. A healthcare system may want to understand the financial implications if a patient remains in-network when more aggressive therapy is necessary (Goodarzian et al. 2021, 761–825).

One of the promises of synthetic cognition in healthcare is the avoidance of biases. Yet, the algorithms that establish the backbone of these systems are only as good as the humans that programmed them and the intent for which they are built. Once the algorithm is running, it has few options to self-adjust for bias on its own (Gianfrancesco et al. 2018, 1544–47).

The increasing reliance on machines that utilize complex algorithmic calculations of varying opacity can lead to an even more perilous problem: complacency. This comes from habits and behaviors that are formed in situations that exhibit reliable patterns. We expect the shower water to be the same temperature each morning and are surprised when it is not hot. The sense of security that comes from complacency can delay responses to pandemics (Kotsimbos and McCormack 2007, 432–35) or delay definitive therapy (Baird 2014, 7–10). Reliance on machines and computers, whose logic is often opaque to the human mind, makes users particularly susceptible to complacency syndrome (Parasuraman and Manzey 2010, 381–410). Automation complacency syndrome occurs when physicians or other healthcare providers become dependent on predictive algorithms, and it has been linked to fatal errors (Merritt et al. 2019).

In a study that evaluated the ability of cardiologists and non-cardiologists to assess the validity of an electrocardiogram when the automated diagnosis was purposely in error, the interpretation accuracies achieved by the two groups of readers dropped by 43 percent and 59 percent, respectively (p < 0.001), meaning that roughly half the time the reader accepted the automated diagnosis without question (Bond et al. 2018, 6–11). Healthcare providers are bombarded with data in increasingly busy practices. Machine-based diagnoses can interfere with human intuition and experience and may lead to suboptimal outcomes (Challen et al. 2019, 231–37).

It is human nature to accept what the computer tells us. Humans become dependent on the data and may rely on a set of interpretations without questioning its merits. The light on the dashboard indicates a problem with the car, the mobile phone indicates the food is on the way, and so on. We do not question the car's sensor any more than we question the location of the scooter on which our food is carried. The reliance on complex, black-box-derived diagnostic data can lead to medical errors and raises important questions about patient safety as these systems become more ubiquitous in medicine (Challen et al. 2019, 231–37).

These limitations are not meant to suggest that machines and DLAs should have no role in medicine, nor that these technologies have not greatly contributed to the collective understanding of effectively diagnosing and treating disease. The use of complex computing algorithms to granularize large data sets to a point where disease can be understood at a molecular level represents an important development in healthcare. The ability to understand the genetic and environmental origins of disease and the potential to correct disease according to the same fundamentals represents a quantum leap in the precision and democratization of healthcare. The next breakthroughs in medicine are believed to be a direct result of complex data analysis facilitated by AI (Noorbakhsh-Sabet et al. 2019, 795–801). Machine-based learning impacts the practice of medicine in three ways. Clinicians have access to faster and more complete diagnoses, including increased accuracy of image interpretation and genomic data. Health systems benefit from improved workflow, fewer medical errors, and reduced costs. Patients are increasingly able to access and evaluate personal data to promote personal health. The societal benefits of these systems are less clear, but there is broad anticipation that these systems will ultimately reduce the overall cost of healthcare while improving access and quality (Accenture 2020).

Preserving Patient Autonomy and Hope with Machine-Based Diagnoses

With these exciting breakthroughs come important ethical considerations about the use of machine-based learning systems in the delivery of healthcare. Among the many concerns is the impact the use of synthetic cognition may have on the course of the disease through the mitigation of human inference. Specifically, machine-based diagnoses and data-driven prognoses, absent of human emotion, intuition, or compassion, may adversely affect a patient's physical and psychological health by mitigating patient autonomy and hope.

In their landmark book, Principles of Biomedical Ethics, Beauchamp and Childress identify four imperatives to guide ethical decision making in healthcare (Beauchamp and Childress 2013). Among these is the principle of autonomy (p. 101). Is the patient, or their duly authorized agent, able to exercise their free will in a medical directive? It is reasonable to ask if DLA-driven healthcare directives work for or against patient autonomy. Asked a different way, can DLAs act in the best interests of patient autonomy?

It can be argued that the principle of autonomy is the single most important consideration in healthcare, yet what constitutes autonomy is more difficult to define. The clarion call “my body, my choice” has been loudly chanted at both pro-choice and anti-vaccination rallies. In this sense, autonomy might be construed to mean the individual gets to decide what is the best course of action based on a particular moral framework or life view. A Jehovah's Witness forgoes a blood transfusion knowing that without donor blood there is an increased risk of death. A cancer patient may choose not to have chemotherapy and let the disease run its course. A patient may elect to defer definitive therapy until the disease or symptoms become worse. In western cultures like the United States, which put autonomy above all else, it is tempting to view autonomous decision making as sacrosanct, yet seemingly isolated decisions can reverberate through the framework of biomedical ethics. Abortion and euthanasia are two primary examples. 1

The risk to patient autonomy when diagnoses are made wholly or partially by a computer algorithm is not well understood. Take, for example, the application of DLAs to mental health screening (Xie et al. 2020, 1–10). At what point does the machine-based diagnosis override the patient's preference? Are the keys to the automobile taken from grandpa long before he loses the ability to safely operate his vehicle because of a predictive algorithm? A DLA-driven diagnosis suggests that a coronary intervention will be most efficacious at a predetermined point in the course of the disease, but the patient elects to delay treatment. Waiting will worsen the risk–benefit calculation. At what point is the patient denied intervention because they elected to wait? Decisions that have been traditionally left to the individual, their families, and healthcare providers are now being encroached on by machines that can presumably provide a more certain prognosis. In the future, DLAs may be used to assess whether a salvageable patient in an ICU can be sacrificed because of an influx of infectious disease patients anticipated from a recent pandemic (Arabi et al. 2021, 282–91). DLAs not only have the potential to diminish autonomy but may also dehumanize medicine altogether by excluding the healthcare provider from difficult moral decisions and placing the burden on the machine.

The potential of machines to dehumanize medicine not only threatens patient autonomy but can also disrupt hope when it is needed most. Medical science uniquely reveals the spiritual and physical realities of hope: it is not just a prayerful abstraction, and its presence or loss can profoundly affect physical and mental health (Morey et al. 2015, 13–17). Depressive disorders have been linked to elevated biomarkers for inflammatory disease (Gałecki and Talarowska 2018, 437–47). Increased mental stress levels have been implicated in the onset and progression of myocardial disease (Hammadah et al. 2018, 90–97; Vaccarino et al. 2021, 74–82). Indeed, the abject loss of hope in the acute hospital setting is an independent predictor of mortality (Gruber and Schwanda 2021, 53; Reichardt et al. 2019, 477–85). These studies and many others draw an important connection between the loss of hope, and the despair that follows, and increased inflammation, morbidity, and mortality.

What about the opposite: can health be positively influenced by the presence of hope, for example through the attenuation of inflammatory processes? Many studies suggest that the emotional holdfast of hope can improve a patient's quality of life even in terminal disease processes. A few studies even suggest that a positive mental attitude, which is derivative of hope, can reverse inflammation and attenuate biomarkers associated with stress (DuBois et al. 2012, 303–18). Researchers from the United Kingdom conducted a systematic review of the literature and concluded, with some reservations, that spirituality and religious identity are positively correlated with survival (Chida, Steptoe and Powell 2009, 81–90). More recent studies have shown a positive correlation between attenuation of biomarkers for inflammation and religiosity (Shattuck and Muehlenbein 2018, 1035–54).

A cautionary step must be taken at this juncture. There is no direct evidence that a positive mental framework, and the hopefulness that follows from prayer and other spiritual exercises, are necessarily curative, but it is interesting to ponder the idea that some diseases may be overcome with a certain mindset. Christians may view these as miracles. Nevertheless, the evidence seems compelling that reducing stress and providing hope, with or without spiritual interventions, can reduce certain inflammatory responses and improve overall health, suggesting that hopefulness is integral to health. In this regard, it might be possible to think of some miracles as self-actualizing.

Of equal importance is the recognition that despair may come from any diagnosis, synthetic or not, which raises the question of how such information should be communicated. Several studies have shown that the way physicians deliver news to their patients can affect their psychological profile and their degree of hopefulness for relief and recovery (Choe et al. 2019, 1–19; Newell and Jordan 2015, 76–87). In this, we understand that physicians and healthcare providers can play a crucial role in how the patient will confront disease and the associated therapy. Positive encouragement during treatment can enhance patient health and well-being. A powerful mediator of hope in the healthcare system comes from the patient-physician interface, which raises an important moral question: is it right for the physician to offer false hope if the intention is to improve the patient's outlook? Could false optimism have a beneficial placebo effect? While the answers to these questions may be beyond the scope of this essay, a more topical question is whether black-box diagnoses restrict the ability of the physician to offer hope. If it is believed that a machine-based diagnosis is less ambiguous, how does that change the obligations of the physician to the patient?

This question has less to do with what information is communicated, and more to do with the compassion that must accompany the diagnosis. A doctor rarely begins a conversation with an elderly patient who recently suffered a hip fracture by noting that they have a 10 percent chance of dying from a blood clot, or opens a conversation with a cancer patient by telling them they have only months to live. Hope becomes the figurative glue that holds the physician-patient relationship together. Nicholas Christakis offers that even in the light of certain prognoses, physicians often rely on the uncertainty of the future as a means of preserving the relationship (Christakis 1999, 130).

Offering a devastating diagnosis is an incredibly difficult task, but compassion must always be offered as a part of the healing mission. Catholic physicians and physicians of faith may find ways to offer hope that go beyond the physical situation. Christians and other faithful who find power and consolation in prayer are likely to have a positive impact on the course of their disease (Shattuck and Muehlenbein 2018, 1035–54). Unfortunately, it is becoming all too common for the physician to deliver the bad news and leave the hospital chaplain or others to pick up the pieces. Training often fails to equip physicians with the skills to deliver bad news to their patients (Monden, Gentry and Cox 2016, 101–102). Moreover, many of the protocols used to deliver bad news, such as SPIKES, 2 pay little attention to the gift of hope. “[P]hysicians should be prepared, find out what the patient already knows, convey some measure of hope, allow for emotional expression and questions” (p. 102). The source of the diagnostic information, even one believed to be unimpeachable, does not relieve the physician of the obligation to offer compassion in the form of hope.

The importance of maintaining hope in medicine is further evidenced in the nature of the patient-physician relationship, which is more than transactional. In his book, The Birth of the Clinic, Michel Foucault intimates a metaphysical component to medicine that moves the relationship beyond individual gain or loss.

For us, the human body defines, by natural right, the space of origin and of distribution of disease: a space whose lines, volumes, surfaces, and routes are laid down, in accordance with a now familiar geometry, by the anatomical atlas. But this order of the solid, visible body is only one way—in all likelihood neither the first, nor the most fundamental—in which one spatializes disease. (Foucault 1973, 3)

The human spirit brings a dimension to healthcare that machines cannot. Moreover, the metaphysical concepts of life and death, loss of hope, and eternal life do not belong exclusively to the religious. Secular accounts of the relationship between the sacred and the profane, articulated by Taylor (2007), Somerville (2006), and others, argue that part of the human condition is an innate desire to transcend the obvious and ordinary, something that may be thought of as hope in the abstract. Despite our increasingly mechanistic ways—or what Taylor calls the “loss of enchantment”—humanity possesses an eternally optimistic bond with forces unseen. For some, this kind of relationship with the unknown is a form of Pascal's Wager, 3 exercising a latent transcendent connection that often causes nonbelievers to turn to prayer in times of crisis as a part of a “no-lose” strategy (Siegler 1975, 853–57). In the end, conceptions of immortality become a powerful coping mechanism for many facing death, including atheists (Heflick and Goldenberg 2012, 385–92).

For Christians, the concept of hope and immortality is more tangible. The basis for Christian hope is found in God's promise of life (Benedict XVI 2007, 4). It is a personal promise that portends to a better future in companionship with the Creator. Along with the other virtues of faith and charity (love), hope offered in the promise of the risen Christ is the bedrock of faith (Jn 3:15). It offers succor in times of despair by connecting us with the future good through prayer and intercession.

Indeed, the core of Christ's earthly ministry was the demonstration and restoration of hope, not just hope in an afterlife but hope that Christ would bring relief and comfort to our immediate circumstances (Mk 5:34). One cannot read the Gospel without understanding the importance and centrality of Jesus' healing mission potentiated by hope. From the example of Christ comes the Church's understanding of medicine (Love 2008, 225–38). Although the inner compulsion to relieve suffering and bring healing predates Christ's actions, Christians understand these actions, modeled by Christ, as outward signs of Christian virtue. But can these actions, carried out by human hands, be viewed as a manifestation of a creative agency that comes from shared divinity? Asked differently, can human actions alter a medical outcome in ways similar to Christ?

Christian theologians understand hope as a contingent component of creation. Aquinas offers that hope is a tangible extension of our created endowment. “[It] is a movement of the appetitive faculty, since its object is a good” (Aquinas 1966, II-II Q18, a1). In this, Aquinas animates the concept of hope as a force that propels the soul toward God and away from despair. In contrast, despair, the opposite of hope, draws us away from the good by placing limits on our desires (Miller 2012, 387–396). Limits can be self-imposed as in “I am not worthy”, or they can be institutionally imposed as in “you are not welcome.” In the image of Christ, Christians can transcend these limitations by substituting substantive hope for despair.

Aquinas further sharpens the point by suggesting that the motive force behind hope is empowered by the incarnate Christ and the resurrection in a way that physically coopts one into God's salvific plan. He articulates that our will to contribute to creation is neither passive nor undirected; this will exists as a “natural appetite” that brings us to a likeness of God (Aquinas 1966, I, Q80, a1). In this sense, Aquinas gives greater weight to the Aristotelian view that humankind's creative endowment includes the ability to make decisions we innately recognize as oriented toward truth and light when, as he suggests, we exercise our will in the capacity of co-creators in the furtherance of God's kingdom (Schoot 2020, 33–46). Solely within the human consciousness lies the ability to offer hope. In effect, we act as Christ's proxy by replacing despair with hope, and with hope comes the possibility of divine intervention.

Confrontation with a devastating diagnosis brings many to the precipice of despair and quickens the desire for a miracle. It is in these trying times that the patient-physician relationship modeled in Christ uniquely opens the door to the possibility of divine intervention. One way a physician can offer authentic hope is through reliance on an intuition that comes from a vastly more capacious understanding of the human condition, an understanding that stems from imagination (Vogel 2018, 998). Through the human imagination, what some have called the “moral imagination” (Scott 1997, 45–50), physicians and healthcare providers can integrate the science of medicine with the hopes and fears of the patient. In Rebirth of the Clinic (Sulmasy 2006, 24), Sulmasy suggests that the shared vulnerability that develops between the patient and physician leads to an emotional connection, what he calls a “radical equality” between the two, that transcends the transactional practice of medicine and moves the relationship to a spiritual domain not unlike the way Jesus touches the leper (Mt. 8:3).

“I have a feeling you might beat this” is not just a gratuitous statement; such statements come from the active agency of hope that informs the imagination, leading to new possibilities. In this light, we understand that the Aquinian connection between hope and a tangible good is a present reality and an opportunity for divine intervention. Edward Schillebeeckx, a Belgian Catholic theologian and member of the Dominican Order, argued that the ability of human reason to interpret reality goes beyond texts and their literal meaning through the experience that comes from the “living relationship” with Christ (de Jong 2020, 216). Applying Schillebeeckx's concept of metaphysical knowledge, which includes the imaginative domain alluded to by Foucault and Sulmasy, to Aquinas' concept of a fraternal economy to which we are bound through the incarnate Christ, we find a greater opportunity for hope embedded in the mystery of the Trinity and enter into a far more capacious reality with God, one that opens the possibility for divine intervention arising from our individual agency.

These realities lead us to an understanding that hope is an extant manifestation of the Trinitarian mystery and bestows a divine approbation on the Aristotelian concept of the common good. It is through this extraordinary relationship that the human conscience gains insight into God's heavenly plans, through which we may understand that unanticipated clinical outcomes are more than accidental.

The ability to participate in these divine realities comes from being transformed in the Holy Spirit. In Peter's second letter we read: “Thus he has given us, through these things, his precious and very great promises, so that through them you may escape from the corruption that is in the world because of lust, and may become participants (emphasis added) of the divine nature” (2 Pt 1:4). The word participate used in this epistle and elsewhere in scripture is translated from the word “κοινωνός”, which portends a change in nature. Biblical commentators have interpreted κοινωνός to mean a transformation that allows us to share in God's immortal substance (Hafemann 2013, 95). The distinction is important. While contemporary translations use the word participate, which is typically defined in the English language as an opportunity to share with, the apostles' use of the word κοινωνός was intended to signify a change in identity or state through one's incorporation into the Holy Spirit. In this sense, the verb becomes the noun, as in “I can sing” versus “I am a singer.” Similarly, other scriptural passages use the word κοινωνός to convey the idea of a change in identity facilitated through the Holy Spirit. The first letter to the Corinthians is adamant that members of the Church participate (emphasis added) in the body of Christ and enter (emphasis added) the presence of heavenly realities (Barber and Kincaid 2015, 239). Once again, the word κοινωνός is used to describe a transformation that comes from being subsumed into the Holy Spirit (1 Cor 1:16).

Taken in sum, we may view the physical manifestations of hope as prima facie evidence that its virtues are tangible marks of creation and reinforce the belief that God not only enters the world but remains active in it through human intervention. Therefore, removing the human conscience from the care of others by transferring it to the auspices of a machine may place limits on God's creative agency.

Future Directions

Within the last ten years, there have been several initiatives undertaken to examine the ethical and moral framework underlying the use of AI in medicine (Baric-Parker and Anderson 2020, 471–81). Among these is a recent effort sponsored by the Pontifical Academy for Life to consider the utility and ethical boundaries associated with AI. In the symposium that produced the Rome Call for AI Ethics, chaired by Msgr. Vincenzo Paglia, President of the Pontifical Academy for Life and sponsor of the initiative, the committee, which included private-sector participants, proposed an initial framework for the ethical considerations underlying AI.

The six ethical principles are: Transparency: AI must be understandable to all; Inclusion: systems must not discriminate against anyone because every human being has equal dignity; Accountability: there must always be someone who takes responsibility for what a machine does; Impartiality: systems must not follow or create biases; Reliability: systems must be reliable; and Security and privacy: systems must be secure and respect the privacy of users.

We can compare these to the four ethical pillars proposed by Beauchamp and Childress, which are considered the standard theoretical framework for bioethics in medicine (Beauchamp and Childress 2013). These are: Autonomy: the right of an individual to make his or her own choice; Beneficence: the principle of acting with the best interest of the other in mind; Non-maleficence: “above all, do no harm,” as stated in the Hippocratic Oath; and Justice: fairness and equality among individuals.

It is interesting to consider that the six principles proposed by the Pontifical Academy for Life, Inclusion, Accountability, Transparency, Impartiality, Reliability, and Security and privacy, all speak to the concept of social justice but offer little guidance on Autonomy, Beneficence, and Non-maleficence as they relate to healthcare. While credit is to be given to the Catholic Church for tackling issues like AI, the lack of commentary on these other principles suggests that the Church has not waded into the issue of AI in medicine in a sufficiently material way.

Indeed, Francis' encyclical Laudato si' expresses critical concern for the unchecked use of technology in contemporary society but stops short of establishing an ethical framework for how these technologies should be considered in the context of healthcare (Francis 2015). The Pontifical Academy for Life's Global Bioethics Working Group, following the guidance of Laudato si', appears to be primarily concerned with social justice and the sanctity of life, including such issues as the poor and destitute, the abandoned and underprivileged, and the vulnerable classes, juxtaposing these concerns against the era of modern consumerism. While these are important and laudable efforts, neither directly addresses a Catholic theological framework for the use of machine-based learning and AI in medicine.

We can arrive at an important starting point for this discussion in Evangelium Vitae which emphasizes the Christian obligation to care for the poor and sick. In it, we are reminded that we must always be concerned with awakening hope in others as a principle of new life. “Where life is involved, the service of charity must be profoundly consistent. It cannot tolerate bias and discrimination, for human life is sacred and inviolable at every stage and in every situation” (John Paul II 1995, 87).

From the foregoing discussion, we can understand that AI and DLAs are far from perfect, and work is needed to improve their reliability by reducing bias and standardizing rubrics for transparency. On these issues, the Pontifical Academy for Life's efforts should bear fruit. Equally, society can expect that these systems will only grow more capable and will exacerbate issues like “black-box” diagnoses and complacency syndrome. How the Catholic Church will receive and integrate the theological and moral implications of AI-enabled medicine is less clear.

To summarize, synthetic cognition has the potential to enhance the delivery of medicine by offering greater accuracy in the diagnoses and prognoses of disease. Both patient and physician benefit from reliable and actionable information. At the same time, machine-derived diagnoses may have the unintended effect of altering or removing beneficial hope. As these systems gain greater traction in the delivery of healthcare, extreme caution must be taken to ensure that they do not interfere with the patient/physician relationship or infringe on the virtue of hope that is so foundational to the Christian faith and is clearly implicated in better outcomes. Of the numerous challenges to deploying artificially intelligent systems into the practice and delivery of healthcare, their lack of humanness is the most problematic. The human mind is captivated by the imagination, the power of intentional prayer, and the possibility of change.

For Christians, the power of hope comes from our innermost imaginations and longings promised by the Abrahamic covenant and renewed through the incarnate Christ. We participate in a fraternal economy that is enabled by the mystery of the Trinity. Our participation in this holy order opens us to the mysteries of God through his Son. The promise of creation and everlasting life is not just a powerful means of coping with uncertainty on a spiritual level but makes hope tangible in ways that contribute to physical and emotional well-being. If, as Aquinas held, we are called to be active participants in salvation, there may be no greater example of our participation in the divine nature than helping others to find the firmament of hope during times of despair. In this, medicine can be viewed as uniquely participative in creation: Christ's example of healing is more than the eternal hope offered in the desiderata found at the end of time; it is a vivid example of God dwelling among us. Care for others and the nurturing of hope are the manifestation of God's creative power in the active fulfillment of his earthly kingdom.

The Catholic Church must continue to seek ways to ensure that Christ's healing mission, facilitated by human hands and minds, remains central to the precepts of medical care, and to guard against the threat that machines will interfere with the patient/physician relationship and with the ability of hope, faith, and charity to alter clinical outcomes.

Biographical Note

Charles S. Love, MATL, is a biomedical engineer who has developed artificial organs for the past thirty-five years and holds over thirty patents and patents pending in the field. He has founded several medical device companies and has held senior executive roles at large multinational companies; he currently serves as an investor and advisor to medical technology start-ups. He earned a BA in biology from Westmont College and completed his master's degree in theology and leadership at Gonzaga University in 2022. He intends to pursue further research on the theological implications of modern healthcare technologies.

1. The author acknowledges that the Christian concept of autonomy is far more complex than what can be covered in this essay. The intention of this essay is to convey that, in the Christian tradition, the patient's will is of paramount consideration in the construction of a biomedical ethical framework. However, we also acknowledge that the concept of autonomy exists within a more capacious moral framework that implicates actions and consequences within a community; it is not construed as a choice to do with one's life as one pleases (Pellegrino 1999, 70–78).

2. S, setting up the interview; P, assessing the patient's perception; I, obtaining the patient's invitation; K, giving knowledge and information to the patient; E, addressing the patient's emotions with empathic responses; and S, strategy and summary (Baile et al. 2000, 305).

3. Blaise Pascal (1623–1662) posited that in the absence of irrefutable proof of God's existence, it is prudent to accept that God exists. A wager for God offers an infinite gain if God exists, while a wager against God brings infinite loss. If God does not exist, then there is neither gain nor loss.
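Expressed as a minimal expected-value sketch of the note's formulation (here p, a quantity the note does not specify, denotes the probability that God exists, and the payoffs follow the note's assumptions of infinite gain, infinite loss, and no gain or loss if God does not exist):

\[
\begin{aligned}
E[\text{wager for God}] &= p\,(+\infty) + (1-p)\,(0) = +\infty,\\
E[\text{wager against God}] &= p\,(-\infty) + (1-p)\,(0) = -\infty,
\end{aligned}
\]

so for any p > 0, wagering for God dominates.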

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD: Charles S. Love https://orcid.org/0000-0003-2709-8450

References

  1. Accenture. 2020. “Artificial Intelligence: Healthcare’s New Nervous System.” July 20. Accessed October 5, 2021. https://www.accenture.com/au-en/insights/health/artificial-intelligence-healthcare.
  2. Aquinas, Thomas. 1966. Summa Theologica (ST). Thomas Gilby (ed). Cambridge: Blackfriars, 60 vols. [Google Scholar]
  3. Arabi Yaseen M., Azoulay Elie, Al-Dorzi Hasan, Phua Jason M., Salluh Jorge, Binnie Alexandra, Hodgson Carol, et al. 2021. “How the COVID-19 Pandemic Will Change the Future of Critical Care.” Intensive Care Medicine 47 (3): 282–91. 10.1007/s00134-021-06352-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Avati Anand, Jung Kenneth, Harman Stephanie, Downing Lance, Ng Andrew, Shah Nigam H.. 2018. “Improving Palliative Care with Deep Learning.” BMC Medical Informatics and Decision Making 18 (Suppl 4): 122. 10.1186/s12911-018-0677-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Awad Edmond, Dsouza Sohan, Kim Richard, Schulz Jonathan, Henrich Joseph, Shariff Azim, Bonnefon Jean-François, Rahwan Iyad. 2018. “The Moral Machine Experiment.” Nature (London) 563 (7729): 59–64. 10.1038/s41586-018-0637-6. [DOI] [PubMed] [Google Scholar]
  6. Azzolina D., Baldi I., Barbati G., Berchialla P., Bottigliengo D., Bucci A., Calza S., et al. 2019. “Machine Learning in Clinical and Epidemiological Research: Isn't it Time for Biostatisticians to Work on it?” Epidemiology Biostatistics and Public Health 16 (4): e13245-1-3. 10.2427/13245. [DOI] [Google Scholar]
  7. Baile Walter F., Buckman Robert, Lenzi Renato, Glober Gary, Beale Estela A., Kudelka Andrzej P.. 2000. “SPIKES–A Six-Step Protocol for Delivering Bad News: Application to the Patient with Cancer.” The Oncologist 5 (4): 302–11. 10.1634/theoncologist.5-4-302. [DOI] [PubMed] [Google Scholar]
  8. Baird Macaran A. 2014. “Primary Care in the Age of Reform-Not a Time for Complacency.” Family Medicine 46 (1): 7–10. https://www.ncbi.nlm.nih.gov/pubmed/24415502. [PubMed] [Google Scholar]
  9. Barber Michael P., Kincaid John.. 2015. “Cultic Theosis in Paul and Second Temple Judaism.” Journal for the Study of Paul and His Letters 5 (2): 237–256. https://www.jstor.org/stable/26371768. [Google Scholar]
  10. Baric-Parker Jean, Anderson Emily E.. 2020. “Patient Data-Sharing for AI: Ethical Challenges, Catholic Solutions.” The Linacre Quarterly 87 (4): 471–81. 10.1177/0024363920922690. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Beauchamp Tom L., Childress James F.. 2013. Principles of Biomedical Ethics. 7th ed. New York: Oxford University Press. [Google Scholar]
  12. Benedict. 2007. Spe Salvi. Vatican: Rome Catholic Church. [Google Scholar]
  13. Bond Raymond R., Novotny Tomas, Andrsova Irena, Koc Lumir, Sisakova Martina, Finlay Dewar, Guldenring Daniel, et al. 2018. “Automation Bias in Medicine: The Influence of Automated Diagnoses on Interpreter Accuracy and Uncertainty When Reading Electrocardiograms.” Journal of Electrocardiology 51 (6): S6–S11. 10.1016/j.jelectrocard.2018.08.007. [DOI] [PubMed] [Google Scholar]
  14. Briganti Giovanni, Le Moine Olivier. 2020. “Artificial Intelligence in Medicine: Today and Tomorrow.” Frontiers in Medicine (Lausanne) 7: 27. 10.3389/fmed.2020.00027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Cancer Treatment Centers of America. 2017. “What does a BRCA Gene Mutation Mean for Men?” June 1, Accessed October 2, 2021. https://www.cancercenter.com/community/blog/2017/06/what-does-a-brca-gene-mutation-mean-for-men.
  16. Challen Robert, Denny Joshua, Pitt Martin, Gompels Luke, Edwards Tom, Tsaneva-Atanasova Krasimira. 2019. “Artificial Intelligence, Bias and Clinical Safety.” BMJ Quality & Safety 28 (3): 231–37. 10.1136/bmjqs-2018-008370. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Chida Yoichi, Steptoe Andrew, Powell Lynda H.. 2009. “Religiosity/Spirituality and Mortality.” Psychotherapy and Psychosomatics 78 (2): 81–90. 10.1159/000190791. [DOI] [PubMed] [Google Scholar]
  18. Choe Eun Kyoung, Duarte Marisa E., Suh Hyewon, Pratt Wanda, Kientz Julie A.. 2019. “Communicating Bad News: Insights for the Design of Consumer Health Technologies.” JMIR Human Factors 6 (2): e8885. 10.2196/humanfactors.8885. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Choi Rene Y., Coyner Aaron S., Kalpathy-Cramer Jayashree, Chiang Michael F., Campbell J. Peter. 2020. “Introduction to Machine Learning, Neural Networks, and Deep Learning.” Translational Vision Science & Technology 9 (2): 14. https://www.ncbi.nlm.nih.gov/pubmed/32704420. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Christakis Nicholas A. 1999. Death Foretold: Prophecy and Prognosis in Medical Care. 1st ed. Chicago: The University of Chicago Press. [Google Scholar]
  21. de Jong Marijn. 2020. Metaphysics of Mystery. In T&T Clark Studies in Edward Schillebeeckx, 1st ed. London: Bloomsbury Publishing Plc. [Google Scholar]
  22. DuBois Christina M., Beach Scott R., Kashdan Todd B., Nyer Maren B., Park Elyse R., Celano Christopher M., Huffman Jeff C.. 2012. “Positive Psychological Attributes and Cardiac Outcomes: Associations, Mechanisms, and Interventions.” Psychosomatics (Washington, D.C.) 53 (4): 303–18. 10.1016/j.psym.2012.04.004. [DOI] [PubMed] [Google Scholar]
  23. FitzGerald Thomas H. B., Penny Will D., Bonnici Heidi M., Adams Rick A.. 2020. “Retrospective Inference as a Form of Bounded Rationality, and its Beneficial Influence on Learning.” Frontiers in Artificial Intelligence 3 (2), 10.3389/frai.2020.00002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Foucault Michel. 1973. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Pantheon Books. [Google Scholar]
  25. Francis. 2015. Laudato si. Vatican. [Google Scholar]
  26. Fridman Lex, Ding Li, Jenik Benedikt, Reimer Bryan. 2019. “Arguing Machines: Human Supervision of Black Box AI Systems that make Life-Critical Decisions.” IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 10.1109/CVPRW.2019.00173. [DOI] [Google Scholar]
  27. Gałecki Piotr, Talarowska Monika.. 2018. “Inflammatory Theory of Depression.” Psychiatria Polska 52 (3): 437–47. 10.12740/PP/76863. [DOI] [PubMed] [Google Scholar]
  28. Gianfrancesco Milena A., Tamang Suzanne, Yazdany Jinoos, Schmajuk Gabriela.. 2018. “Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data.” JAMA Internal Medicine 178 (11): 1544. 10.1001/jamainternmed.2018.3763. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Gijsberts Crystal M., Groenewegen Karlijn A., Hoefer Imo E., Eijkemans Marinus J.C., Asselbergs Folkert W., Anderson Todd J., Britton Annie R., et al. 2015. “Race/Ethnic Differences in the Associations of the Framingham Risk Factors with Carotid IMT and Cardiovascular Events.” PLoS One 10 (7): e0132321. 10.1371/journal.pone.0132321. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Goodarzian Fariba, Ghasemi Peiman, Gunasekaren Angappa, Taleizadeh Ata Allah, Abraham Ajith. 2021. “A Sustainable-Resilience Healthcare Network for Handling COVID-19 Pandemic.” Annals of Operations Research 312 (2): 761–825. 10.1007/s10479-021-04238-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Green David Marvin, Swets John A.. 1966. Signal Detection Theory and Psychophysics. New York: Wiley. [Google Scholar]
  32. Gruber Rita, Schwanda Manuel.. 2021. “Hopelessness During Acute Hospitalisation is a Strong Predictor of Mortality.” Evidence-Based Nursing 24 (2): 53. 10.1136/ebnurs-2019-103154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Hafemann Scott. 2013. “‘Divine Nature’ in 2 Pet 1,4 within its Eschatological Context.” Biblica 94 (1): 80–89. https://www.jstor.org/stable/26371768. [Google Scholar]
  34. Hammadah Muhammad, Sullivan Samaah, Pearce Brad, Al Mheid Ibhar, Wilmot Kobina, Ramadan Ronnie, Tahhan Ayman Samman, et al. 2018. “Inflammatory Response to Mental Stress and Mental Stress Induced Myocardial Ischemia.” Brain, Behavior, and Immunity 68: 90–97. 10.1016/j.bbi.2017.10.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Heflick Nathan A., Goldenberg Jamie L.. 2012. “No Atheists in Foxholes: Arguments for (but Not Against) Afterlife Belief Buffers Mortality Salience Effects for Atheists.” British Journal of Social Psychology 51 (2): 385–92. 10.1111/j.2044-8309.2011.02058.x. [DOI] [PubMed] [Google Scholar]
  36. John Paul II. 1995. Evangelium Vitae. Vatican: Rome Catholic Church. [Google Scholar]
  37. Johnson-Laird P. 1993. Human and Machine Thinking. New York: Psychology Press. [Google Scholar]
  38. Kariman N., Hedayati M., Taheri Z., Fallahian M., Salehpoor S., Alavi Majd S. H.. 2011. “Comparison of ELISA and Three Rapid HCG Dipsticks in Diagnosis of Premature Rupture of Membranes.” Iranian Red Crescent Medical Journal 13 (6): 415–9. [PMC free article] [PubMed] [Google Scholar]
  39. Kotsimbos T., McCormack J.. 2007. “Respiratory Infectious Disease: Complacency with Empiricism in the Age of Molecular Science. We Can Do Better.” Internal Medicine Journal 37 (7): 432–35. 10.1111/j.1445-5994.2007.01424.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Liede Alexander, Karlan Beth Y., Narod Steven A.. 2004. “Cancer Risks for Male Carriers of Germline Mutations in BRCA1 or BRCA2: A Review of the Literature.” Journal of Clinical Oncology 22 (4): 735–42. 10.1200/JCO.2004.05.055. [DOI] [PubMed] [Google Scholar]
  41. Love John W. 2008. “The Concept of Medicine in the Early Church.” The Linacre Quarterly 75 (3): 225–38. 10.1179/002436308803889503. [DOI] [Google Scholar]
  42. Maddox Thomas M., Rumsfeld John S., Payne Philip R. O.. 2018. “Questions for Artificial Intelligence in Health Care.” Journal of the American Medical Association (JAMA) 321 (1): 31. 10.1001/jama.2018.18932. [DOI] [PubMed] [Google Scholar]
  43. Mahmood Syed S., Levy Daniel, Vasan Ramachandran S., Wang Thomas J.. 2014. “The Framingham Heart Study and the Epidemiology of Cardiovascular Disease: A Historical Perspective.” The Lancet 383 (9921): 999–1008. 10.1016/S0140-6736(13)61752-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. McCarthy Anne Marie, Bristol Mirar, Domchek Susan M., Groeneveld Peter W., Kim Younji, Motanya U. Nkiru, Shea Judy A., Armstrong Katrina. 2016. “Health Care Segregation, Physician Recommendation, and Racial Disparities in BRCA1/2 Testing among Women with Breast Cancer.” Journal of Clinical Oncology 34 (22): 2610–618. 10.1200/JCO.2015.66.0019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Merritt Stephanie M., Ako-Brew Alicia, Bryant William J., Staley Amy, McKenna Michael, Leone Austin, Shirase Lei. 2019. “Automation-Induced Complacency Potential: Development and Validation of a New Scale.” Frontiers in Psychology 10: 225. 10.3389/fpsyg.2019.00225. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Miller Michael R. 2012. “Aquinas on the Passion of Despair.” New Blackfriars 93 (1046): 387–96. 10.1111/j.1741-2005.2010.01395.x. [DOI] [Google Scholar]
  47. Miotto Riccardo, Li Li, Kidd Brian A., Dudley Joel T.. 2016. “Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records.” Scientific Reports 6 (1): 26094. 10.1038/srep26094. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Monden Kimberley R., Gentry Lonnie, Cox Thomas R.. 2016. “Delivering Bad News to Patients.” Proceedings - Baylor University. Medical Center 29 (1): 101–102. 10.1080/08998280.2016.11929380. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Monett Dagmar, Lewis Colin W. P.. 2018. “Getting Clarity by Defining Artificial Intelligence—A Survey.” In Philosophy and Theory of Artificial Intelligences. Cham: Springer International Publishing. ISBN 978-3-319-96447-8. [Google Scholar]
  50. Morey Jennifer N., Boggero Ian A., Scott April B., Segerstrom Suzanne C.. 2015. “Current Directions in Stress and Human Immune Function.” Current Opinion in Psychology 5: 13–17. 10.1016/j.copsyc.2015.03.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Mukherjee Ibrahim. 2020. “Is there a 50% Chance of Heads the Next Time You Toss a Coin? Understanding Probability is Key to Making Progress in AI in the Next Decade.” The AI Journal, last modified December 14, Accessed October 30, 2021, https://aijourn.com/is-there-a-50-chance-of-heads-the-next-time-you-toss-a-coin-understanding-probability-is-key-to-making-progress-in-ai-in-the-next-decade/. [Google Scholar]
  52. Najafabadi Maryam M., Villanustre Flavio, Khoshgoftaar Taghi M., Seliya Naeem, Wald Randall, Muharemagic Edin. 2015. “Deep Learning Applications and Challenges in Big Data Analytics.” Journal of Big Data 2 (1): 415–9. 10.1186/s40537-014-0007-7. [DOI] [Google Scholar]
  53. Newell Stephanie, Jordan Zoe. 2015. “The Patient Experience of Patient-Centered Communication with Nurses in the Hospital Setting: A Qualitative Systematic Review Protocol.” JBI Database of Systematic Reviews and Implementation Reports 13 (1): 76–87. 10.11124/jbisrir-2015-1072. [DOI] [PubMed] [Google Scholar]
  54. Ngiam Kee Yuan, Khor Ing Wei. 2019. “Big Data and Machine Learning Algorithms for Health-Care Delivery.” The Lancet Oncology 20 (5): e262–e273. 10.1016/S1470-2045(19)30149-4. [DOI] [PubMed] [Google Scholar]
  55. Noorbakhsh-Sabet Nariman, Zand Ramin, Zhang Yanfei, Abedi Vida. 2019. “Artificial Intelligence Transforms the Future of Healthcare.” The American Journal of Medicine 132 (7): 795–801. 10.1016/j.amjmed.2019.01.017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Norori Natalia, Hu Qiyang, Aellen Florence Marcelle, Faraci Francesca Dalia, Tzovara Athina. 2021. “Addressing Bias in Big Data and AI for Health Care: A Call for Open Science.” Patterns (New York) 2 (10): 100347. 10.1016/j.patter.2021.100347. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Parasuraman Raja, Manzey Dietrich H.. 2010. “Complacency and Bias in Human use of Automation: An Attentional Integration.” Human Factors: The Journal of the Human Factors and Ergonomics Society 52 (3): 381–410. 10.1177/0018720810376055. [DOI] [PubMed] [Google Scholar]
  58. Parikh Ravi B., Teeple Stephanie, Navathe Amol S.. 2019. “Addressing Bias in Artificial Intelligence in Health Care.” The Journal of the American Medical Association (JAMA) 322 (24): 2377. 10.1001/jama.2019.18058. [DOI] [PubMed] [Google Scholar]
  59. Pellegrino E. D. 1999. “Christ, Physician and Patient, the Model for Christian Healing.” The Linacre Quarterly 66 (3): 70–78. 10.1080/20508549.1999.11877550. [DOI] [PubMed] [Google Scholar]
  60. Polo Tatiana Cristina Figueira, Miot Hélio Amante. 2020. “Use of ROC Curves in Clinical and Experimental Studies.” Jornal Vascular Brasileiro 19: e20200186. 10.1590/1677-5449.200186. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Price W. Nicholson. 2018. “Big Data and Black-Box Medical Algorithms.” Science Translational Medicine 10 (471): 1–7. 10.1126/scitranslmed.aao5333. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Rajula Hema Sekhar Reddy, Verlato Giuseppe, Manchia Mirko, Antonucci Nadia, Fanos Vassilios. 2020. “Comparison of Conventional Statistical Methods with Machine Learning in Medicine: Diagnosis, Drug Development, and Treatment.” Medicina 56 (9): 455. 10.3390/medicina56090455. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Reichardt Lucienne A., Nederveen Floor E., van Seben Rosanne, Aarden Jesse J., van der Schaaf Marike, Engelbert Raoul H. H., van der Esch Martin, et al. 2019. “Hopelessness and Other Depressive Symptoms in Adults 70 Years and Older as Predictors of all-Cause Mortality within 3 Months After Acute Hospitalization: The Hospital-ADL Study.” Psychosomatic Medicine 81 (5): 477–85. 10.1097/PSY.0000000000000694. [DOI] [PubMed] [Google Scholar]
  64. Rice Lisa, Swesnik Deidre. 2013. “Discriminatory Effects of Credit Scoring on Communities of Color.” Suffolk University Law Review 46 (3): 936–65. [Google Scholar]
  65. Rubin Richard. 2020. “AI Comes to the Tax Code.” Dow Jones Institutional News , February 26, Accessed October 30, 2021. https://www.wsj.com/articles/ai-comes-to-the-tax-code-11582713000.
  66. Schoot Henk. 2020. “Thomas Aquinas on Human Beings as Image of God.” European Journal for the Study of Thomas Aquinas 38 (1): 33–46. 10.2478/ejsta-2020-0003. [DOI] [Google Scholar]
  67. Scott Anne P. 1997. “Imagination in Practice.” Journal of Medical Ethics 23 (1): 45–50. 10.1136/jme.23.1.45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Shattuck Eric C., Muehlenbein Michael P.. 2018. “Religiosity/Spirituality and Physiological Markers of Health.” Journal of Religion and Health 59 (2): 1035–1054. 10.1007/s10943-018-0663-6. [DOI] [PubMed] [Google Scholar]
  69. Siegler M. 1975. “Pascal's Wager and the Hanging of Crepe.” New England Journal of Medicine 293 (17): 853–57. 10.1056/NEJM197510232931705. [DOI] [PubMed] [Google Scholar]
  70. Simon Viviana. 2005. “Wanted: Women in Clinical Trials.” Science (American Association for the Advancement of Science) 308 (5728): 1517. 10.1126/science.1115616. [DOI] [PubMed] [Google Scholar]
  71. Somerville Margaret. 2006. The Ethical Imagination: Journeys of the Human Spirit. Toronto: House of Anansi Press. [Google Scholar]
  72. Sulmasy Daniel. 2006. The Rebirth of the Clinic: An Introduction to Spirituality in Healthcare. Washington, D.C.: Georgetown University Press. [Google Scholar]
  73. Taylor Charles. 2007. A Secular Age. Cambridge: The Belknap Press of Harvard University Press. [Google Scholar]
  74. Tower John. 2017. “Sex-Specific Gene Expression and Life Span Regulation.” Trends in Endocrinology and Metabolism 28 (10): 735–47. 10.1016/j.tem.2017.07.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Vaccarino Viola, Shah Amit J., Mehta Puja, Pearce Brad K., Raggi Paolo, Bremner J. Douglas, Quyyumi Arshed A.. 2021. “Brain-Heart Connections in Stress and Cardiovascular Disease: Implications for the Cardiac Patient.” Atherosclerosis 328: 74–82. 10.1016/j.atherosclerosis.2021.05.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Vogel Lauren. 2018. “Gut Feelings a Strong Influence on Physician Decisions.” Canadian Medical Association Journal (CMAJ) 190 (33): E998–E999. 10.1503/cmaj.109-5647. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Wang Pei. 2019. “On Defining Artificial Intelligence.” Journal of Artificial General Intelligence 10 (2): 1–37. 10.2478/jagi-2019-0002. [DOI] [Google Scholar]
  78. Wang Fei, Casalino Lawrence Peter, Khullar Dhruv. 2018. “Deep Learning in Medicine—Promise, Progress, and Challenges.” JAMA Internal Medicine 179 (3): 293. 10.1001/jamainternmed.2018.7117. [DOI] [PubMed] [Google Scholar]
  79. Woodcock Claire, Mittelstadt Brent, Busbridge Dan, Blank Grant. 2021. “The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study.” Journal of Medical Internet Research 23 (11): e29386. 10.2196/29386. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Xie Bo, Tao Cui, Li Juan, Hilsabeck Robin C., Aguirre Alyssa. 2020. “Artificial Intelligence for Caregivers of Persons with Alzheimer’s Disease and Related Dementias: Systematic Literature Review.” JMIR Medical Informatics 8 (8): e18189. 10.2196/18189. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Yu Li, Esser Mark T., Falloon Judith, Villafana Tonya, Yang Harry. 2018. “Generalized ROC Methods for Immunogenicity Data Analysis of Vaccine Phase I Studies in a Seropositive Population.” Human Vaccines & Immunotherapeutics 14 (11): 1–9. 10.1080/21645515.2018.1489191. [DOI] [PMC free article] [PubMed] [Google Scholar]
