September 29, 2022
Introduction
Clinical diagnosis is essentially a data curation and analysis activity through which clinicians seek to gather and synthesize enough pieces of information about a patient to determine their condition. The art and science of clinical diagnosis dates to ancient times, with the earliest diagnostic practices relying primarily on clinical observations of a patient’s state, coupled with methods of palpation and auscultation (Mandl and Bourgeois, 2017; Berger, 1999). Following a period of stagnation in clinical diagnostic practices, the 17th through 19th centuries marked a period of discovery that transformed modern clinical diagnostics, with the advent of the microscope, laboratory analytic techniques, and more precise physical examination and imaging tools (e.g., the stethoscope, ophthalmoscope, X-ray, and electrocardiogram) (Walker, 1990). These foundational achievements, among many others, laid the groundwork for modern clinical diagnostics. However, the volume and breadth of data for which clinicians are responsible have grown exponentially, challenging human cognitive capacity to assimilate it all.
Computerized diagnostic decision support (DDS) tools emerged to alleviate the burden of data overload, enhance clinicians’ decision-making capabilities, and standardize care delivery processes. DDS tools are a subcategory of clinical decision support (CDS) tools, with the distinction that DDS tools focus on diagnostic functions, whereas CDS tools more broadly can offer diagnostic, treatment, and/or prognostic recommendations. Debuting in the 1970s and 1980s, expert-based DDS tools such as MYCIN, Iliad, and Quick Medical Reference operated by encoding then-current knowledge about diseases through a series of codified rules, which rendered a diagnostic recommendation (Miller and Geissbuhler, 2007). While these early DDS tools initially achieved pockets of success, the promise of many of these tools diminished as several shortcomings became evident. Most prominently, the limits of data collection and the complexity of knowledge representation prevented accurate representation of the pathophysiological relationships between diseases and treatments. Programmed with a limited set of information and decision rules, several expert-based DDS tools could not generalize to all settings and cases. Some suffered from performance issues as well, often struggling to generate a result or yielding an errant diagnosis. Moreover, users were frustrated: because these tools existed outside of the main clinical information systems, clinicians had to reenter a long list of information to use them, which created significant friction in their workflows. Similarly, updating the knowledge base of a DDS system often required cumbersome manual entry. Finally, there was a lack of incentives to drive adoption. Thus, provider acceptance remained low, and expert-based DDS tools faded from use (Miller, 1994).
The revitalization of the artificial intelligence (AI) field—the ability of computer algorithms to perform tasks that typically require human intelligence—offers an opportunity to augment human diagnostic capabilities and address the limitations of expert-based DDS tools (Yu, Beam, and Kohane, 2018). Current AI techniques possess not only remarkable processing power, speed, and ability to link and organize large volumes of multimodal data, but also the ability to learn and adjust based on novel inputs, building upon previous knowledge to generate new insights. For this reason, AI approaches, specifically machine learning (ML), are especially well suited to the problems of clinical diagnosis, shortening the time to disease detection, improving diagnostic accuracy, and reducing medical errors. By doing so, AI diagnostic decision support (AI-DDS) tools could reduce the cognitive burden on providers, mitigate burnout, and further enhance care quality.
While contemporary AI-DDS tools are more sophisticated than their expert-based predecessors, concerns about their development, interoperability, workflow integration, maintenance, sustainability, and workforce requirements remain, hampering the adoption of AI-DDS tools. Additionally, the “black box” nature of some AI systems poses liability and reimbursement challenges that can affect provider trust and adoption. This paper examines the key factors related to the successful adoption of AI-DDS tools, organized into four domains: reason to use, means to use, method to use, and desire to use. Additionally, the paper discusses the crosscutting issues of bias and equity as they relate to provider trust and adoption of these tools. Addressing biases and inequities perpetuated by AI tools is paramount to preventing the widening of disparities experienced by certain populations and to engendering confidence and trust among clinicians who are responsible for providing care to these populations. To conclude, the authors discuss the policy implications around the adoption of AI-DDS systems and propose action priorities for providers, health systems leaders, legislators, and policy makers to consider as they engage in collaborative efforts to advance the longevity and success of these tools in supporting safe, effective, efficient, and equitable diagnosis.
A Primer on AI-Diagnostic Decision Support Tools
AI-DDS tools come in various forms, use myriad AI techniques (see Table 1), and can be applied to a growing number of conditions and clinical disciplines. In this paper, the authors focus on adoption factors as they relate to assistive AI-DDS tools. Unlike autonomous AI tools, which operate independently from a human, assistive AI tools involve a human to some degree in the analysis and decision-making process (see Figure 1) (Bitterman, Aerts, and Mak, 2020). The authors in this paper focus on AI-DDS tools designed to support health care professionals in decision-making processes, rather than consumer-facing tools in which a layperson interacts with an AI-DDS system.
TABLE 1. A Non-Exhaustive Glossary of Key Terms Related to Artificial Intelligence.
Artificial Intelligence (AI) | A collection of computer algorithms displaying aspects of human-like intelligence for solving specific tasks. |
Machine Learning (ML) | A subset of AI that harnesses a family of statistical modeling approaches to automatically learn trends from the input data and improve the prediction of a target state. |
Deep Learning (DL) | A subset of ML consisting of multiple computational layers between the input and output that form a “neural network” used for complex feature learning. |
Convolutional Neural Networks (CNN) | A subset of DL techniques that is particularly efficient in AI-based pattern recognition. It is the foundation of many image processing AI algorithms, for instance in radiology. |
Supervised Learning | A type of AI/ML algorithm that is trained to “learn” associations from labeled data (i.e., input and desired output data). |
Unsupervised Learning | A type of AI/ML algorithm that is trained on unlabeled data and intended to “independently” find underlying structures or patterns in input data. |
Random Forests Method | A type of ML/AI algorithm involving several decision trees, whose output is the statistical mode (in classification) or mean (in regression) of the individual trees’ predictions. |
Natural Language Processing (NLP) | A type of AI that refers to algorithms that employ computational linguistics to understand and organize human language. |
Computer Vision (CV) | Scientific field that deals with how computers process, evaluate, and interpret digital images or videos. |
AI Diagnostic Decision Support (AI-DDS) | A computer-based tool, driven by AI algorithms, that uses clinical knowledge and patient-specific health information to inform, aid, and augment health care providers’ diagnostic decision-making processes. |
SOURCE: Adapted from Abdulkareem, M. and S. E. Petersen. 2021. The Promise of AI in Detection, Diagnosis, and Epidemiology for Combating COVID-19: Beyond the Hype. Frontiers in Artificial Intelligence. https://doi.org/10.3389/frai.2021.652669 and Aggarwal, N., M. Ahmed, S. Basu, J. J. Curtin, B. J. Evans, M. E. Matheny, S. Nundy, M. P. Sendak, C. Shachar, R. U. Shah, and S. Thadaney-Israni. 2020. Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC. https://doi.org/10.31478/202011f.
Current AI-DDS tools reflect artificial narrow intelligence (ANI), i.e., the application of high-level processing capabilities on a single, predetermined task, as opposed to artificial general intelligence (AGI), which refers to human-level reasoning and problem-solving skills across a broad range of domains. AI-aided diagnostic tools are designed to address specific clinical issues related to a prescribed range of clinical data. They do not (and are not intended to) comprise omniscient, science-fiction-like algorithmic interfaces that can span all disease contexts. Ultimately, the purpose of AI-DDS tools is to augment provider expertise and patient care rather than dictate it.
Generally, assistive AI-DDS tools currently use a combination of computer vision and ML techniques such as deep learning, working to identify complex non-linear relationships between features of image, video, audio, in vitro, and/or other data types, and anatomical correlates or disease labels. The authors highlight a few representative examples below.
Most prominently, assistive AI-DDS tools can be found in the field of diagnostic imaging, given the highly digital and increasingly computational nature of the field. In fact, radiology boasts more Food and Drug Administration (FDA)-authorized (that is, cleared or approved) AI tools than any other medical specialty (Benjamens et al., 2020). A well-studied algorithm within the cardiac imaging space is HeartFlow FFRCT. Trained on large numbers of computed tomography (CT) scans, this algorithm employs deep learning to create a precise 3D visualization of a patient’s heart and major vessels to assist in the detection of arterial blockage (Heartflow, 2014). Deep learning methods can also be applied to gauge minute variations in cardiac features such as ventricle size and cardiac wall thickness to make distinctions between hypertrophic cardiomyopathy and cardiac amyloidosis—two conditions that have similar clinical manifestations and can often be misdiagnosed (Duffy et al., 2022). Within oncology, ML techniques in the form of computer-aided detection systems have been used since the 1990s to support early detection of breast cancer (Fenton et al., 2007; Nakahara et al., 1998). Since then, the FDA has approved several AI-based cancer detection tools to help detect anomalies in breast, lung, and skin images, among others (Shen et al., 2021; Ray and Gupta, 2020; Ardila et al., 2019). Many of these models have been shown to improve diagnostic accuracy and prediction of cancer development well before onset (Yala et al., 2019).
Beyond imaging, AI applications include the early recognition of sepsis, one of the leading causes of death worldwide. Electronic health record (EHR)-integrated decision tools such as Hospital Corporation of America (HCA) Healthcare’s Sepsis Prediction and Optimization Therapy (SPOT) and the Sepsis Early Risk Assessment (SERA) algorithm developed in Singapore draw on a vast repository of structured and unstructured clinical data to identify signs and symptoms of sepsis up to 12–48 hours sooner than traditional methods. In this regard, natural language processing (NLP) of unstructured clinical notes is particularly promising. NLP helps to discern information from a patient’s social history, admission notes, and pharmacy notes to supplement findings from blood results, creating a richer picture of a person’s risk for sepsis (Goh et al., 2021; HCA Healthcare Today, 2018). However, there are significant concerns about the clinical utility and generalizability of these tools across different geographic settings (Wong et al., 2021).
In the fields of mental health and neuropsychiatry, AI-DDS tools hold potential for combining multimodal data to uncover pathological patterns of psychosocial behavior that may facilitate early diagnosis and intervention. For instance, the FDA recently authorized marketing of an AI-based diagnostic aid for autism spectrum disorder (ASD) developed by Cognoa, Inc. As a departure from deep learning and CNNs, the Cognoa algorithm is based on random forests. It integrates information from three sources to provide a binary prediction of ASD diagnosis (a simplified illustrative sketch appears below):
1. a brief parent questionnaire regarding child behavior completed via mobile app,
2. key behaviors identified in videos of child behaviors, and
3. a brief clinician questionnaire.
The tool has demonstrated safety and efficacy for ASD diagnosis in children ages 18 months to five years, performing at least as well as conventional autism screening tools (Abbas et al., 2020). There have also been promising demonstrations of AI for diagnosing depression, anxiety, and posttraumatic stress disorder (Lin et al., 2022; Khan et al., 2021; Marmar et al., 2019).
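To make the general approach concrete, the following is a minimal, hypothetical sketch (in Python, using scikit-learn) of how a random forest classifier can combine questionnaire and video-derived features into a single binary prediction. The feature names, dimensions, and labels are illustrative placeholders and do not reflect Cognoa’s actual model or data.

```python
# Minimal sketch (not the actual Cognoa implementation) of a random-forest
# classifier that combines three hypothetical feature sets -- parent
# questionnaire scores, video-derived behavior ratings, and clinician
# questionnaire scores -- into a single binary prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical feature blocks; in practice each would come from a separate
# capture workflow (mobile app, video raters, clinician form).
parent_q = rng.integers(0, 4, size=(n, 10))      # 10 parent questionnaire items
video_feats = rng.random((n, 6))                 # 6 rater-scored video behaviors
clinician_q = rng.integers(0, 4, size=(n, 8))    # 8 clinician questionnaire items
X = np.hstack([parent_q, video_feats, clinician_q])
y = rng.integers(0, 2, size=n)                   # placeholder labels (condition yes/no)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Binary prediction plus the underlying probability for each held-out child.
print(model.predict(X_test[:5]), model.predict_proba(X_test[:5])[:, 1])
```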
AI-DDS systems are also becoming increasingly common in the field of pathology, particularly in vitro AI-DDS tools. Akin to the radiological examples, AI techniques can analyze blood and tissue samples for the presence of diagnostic biomarkers and characterize cell or tissue morphology. For example, a model developed by PreciseDx uses CNNs to calculate the density of Lewy-type synucleinopathy, a biomarker of early Parkinson’s disease, in the peripheral nerve tissue of salivary glands (Signaevsky et al., 2022).
Facilitating Provider Adoption of AI-Diagnostic Decision Support Tools
Despite the significant potential AI-DDS tools hold in augmenting medical diagnosis, these tools may fail to achieve wide clinical uptake if there is insufficient clinical acceptance. A particularly telling example is that of the early expert-based DDS tools (the forerunners of modern AI-DDS systems), which, as discussed in the Introduction, disappointed provider expectations because of a host of usability and performance issues.
However, the deficiencies of these early DDS tools are instructive for facilitating the adoption of contemporary AI-DDS tools. Additionally, lessons learned from implementing current non-AI-based DDS tools, or systems that generate recommendations by matching patient information to a digital clinical knowledge base, can offer insight. The authors of this paper present a model for understanding the key drivers of clinical adoption of AI-DDS tools by health systems and providers alike, drawing from these historical examples and the current discourse around AI, as well as notable frameworks of human behavior (Ajzen, 1985; Ajzen, 1991). This model focuses on eight major determinants across four interrelated core domains, and the issues covered within each domain are as follows (see Figure 2):
Domain 1: Reason to use explores the alignment of incentives, market forces, and reimbursement policies that drive health care investment in AI-DDS.
Domain 2: Means to use reviews the data and human infrastructure components as well as the requisite technical resources for deploying and maintaining these tools in a clinical environment.
Domain 3: Method to use discusses the workflow considerations and training requirements to support clinicians in using these tools.
Domain 4: Desire to use considers the psychological aspects of provider comfort with AI, such as the extent to which the tools alleviate clinician burnout, provide professional fulfillment, and engender overall trust. This section also examines medicolegal challenges, one of the biggest hurdles to fostering provider trust in and the adoption of AI-DDS.
Domain 1: Reason to Use
At the outset, the adoption and scalability of a given AI-DDS tool are driven by two simple but critical factors that dictate the fate of nearly any novel technology being introduced into a health setting. The first factor is the ability of a tool to address a pressing clinical need and improve patient care and outcomes (alignment with providers’ and health systems’ missions). Considering that these tools require sufficient financial investment for deployment and maintenance, the second factor is the tool’s affordability to both the patient and the health system, including the incentives for the provider, patient, and health system to justify the costs of acquiring the tool and the investments needed to implement it. The issues of Alignment and of Incentives and Reimbursement are, in practice, deeply intertwined and codependent. However, for the purposes of the discussion that follows, the authors have separated the two for clarity, emphasizing the logistical and technical steps relevant to Incentives and Reimbursement.
Alignment with Health Care Missions
AI-DDS tools must facilitate the goals and core objectives of the health care institutions and care providers they serve, although the specific impetus and pathway for AI-DDS tool adoption can vary by organization. For instance, risk prediction and early diagnosis AI-DDS tools being developed and implemented by the Veterans Health Administration (VHA)—the largest integrated health care system in the United States—were initiated by governmental mandates and congressional acts requiring VHA to improve specific patient outcomes in this population (e.g., the Comprehensive Addiction and Recovery Act) (114th Congress, 2016b). Such nationally mandated initiatives benefit immensely from the fact that the VHA is a nationalized health care service, capable of deploying resources in an organized fashion and on a large scale. Another pathway by which these tools can be introduced into clinical settings is through private AI developers collaborating with academic health centers or other independent health systems. These collaborations can result in the creation of novel AI-DDS tools or the customization of “off-the-shelf” commercial tools. A recent example of this type of partnership is Anumana, Inc., a newly founded health technology initiative between Nference (a biomedical start-up company) and Mayo Clinic focused on leveraging AI for early diagnosis of heart conditions based on ECG data (Anumana, 2022). In this context, the AI-DDS development process may be geared toward a given health system’s specific needs or strategic missions. However, this does not necessarily preclude its broader utility in other health systems.
A useful framework for evaluating the necessity and utility of AI-DDS tools relates to the Quintuple Aim of health care: better outcomes, better patient experiences, lower costs, better provider experiences, and more equitable care (Matheny et al., 2019). Given the link between patient outcomes and provider experience, it is also important to establish and validate the accuracy of new AI-DDS tools at the start of the adoption process and throughout their use. However, there are often discrepancies between AI-DDS developers’ scope and the realities of clinical practice, resulting in tools that can be either inefficient or only tangentially useful. To reassure providers that their tools are optimized for clinical effectiveness, health system leaders must be committed to regular evaluations of AI-DDS models and performance, as well as efficient communication with developers and companies to update algorithms based on changes in diagnosis prevalence and risk-factor profiles. As algorithms are deployed, and their output is presented to providers in EHR systems, special attention must be paid to the information design and end-user experience to optimize providers’ ability to extract key information and act on it efficiently (Tadavarthi et al., 2020). Another critical step in proving robust clinical utility of an AI-DDS tool will be to demonstrate a low burden of unintended harms and consequences with use of a given tool (i.e., high sensitivity and high specificity) (Unsworth et al., 2022). The degree to which provider reasoning impacts the AI-DDS will also play a role in this regard. Finally, in implementing care plans based in part on AI-DDS output, all care team members must be coordinated in their response and long-term follow-up roles (see Domain 2: Means to Use for discussion about requisite resources and roles to accomplish these tasks).
Incentives and Reimbursement
Many health care systems operate on razor-thin financial margins (Kaufman Hall & Associates, 2022). Moving forward, robust insurance reimbursement programs for the purchase and use of AI-DDS tools will be critical to promoting greater adoption by providers and health systems (Chen et al., 2021). However, incentive structures and payer reimbursement protocols for AI-DDS tools are in their nascent stages. Furthermore, insurance dynamics, including for AI-DDS systems, are particularly complex in the U.S., due in part to the heterogeneity of potential payers that range from governmental entities to private insurers to self-insured employers.
In the current fee-for-service environment, a general trend is for the Centers for Medicare and Medicaid Services (CMS), the federal agency that is the nation’s largest health care payer, to be the first to establish payment structures for new technologies and for private payers to then emulate the standards set by CMS (Clemens and Gottlieb, 2017). In determining whether to reimburse the use of a novel AI-DDS tool (and to what extent), a primary consideration for payers, regardless of type, is to assess whether the technology in question pertains to a condition or illness that falls under the coverage benefits of the organization. For instance, an AI-DDS system may be deemed a complementary or alternative health tool, which may fall outside the scope of many insurance plans and, therefore, be ineligible for reimbursement. If the AI-DDS tool is indeed related to a benefit covered by an insurer (for examples of AI-DDS tools currently reimbursed by U.S. Medicare, see Parikh and Helmchen, 2022), developers must provide payers with an adequate evidentiary basis for the utility and safety of the new tool. For this assessment, payers often require data similar to what the FDA would require for premarket approval of a device, such as clinical trial data showing effectiveness (clinical validity and utility) or other solid evidence that clinical use of the tool improves health care outcomes (Parikh and Helmchen, 2022). Developers bringing new DDS systems to market through FDA’s other market authorization pathways, such as 510(k) clearance or de novo classification, may lack such data and need to generate additional evidence of safety and effectiveness to satisfy payers’ data requirements (Deverka and Dreyfus, 2014). Ongoing post-marketing surveillance to verify the clinical safety and effectiveness of new AI-DDS tools thus is important not only to support the FDA’s continuing safety oversight but also as a source of data to support payers’ evaluation processes.
Experts in health care technology assessment highlight two components of AI-DDS evaluation that are of particular interest to payers: potential algorithm bias and product value. Payers must be convinced that a given AI-DDS will perform accurately and improve outcomes in the specific populations they serve. As described later in this paper, algorithm bias can arise with the use of non-representative clinical data in AI-DDS algorithm development and testing and may lead to suboptimal performance in disparate patient populations based on geographic or socioeconomic factors, as well as in historically marginalized populations (e.g., the elderly and disabled, homeless/displaced populations, and LGBTQ communities). To avoid such biases, monitoring and local validation need to be incorporated into reimbursement frameworks. With regard to product value, payers may weigh the potential clinical benefits of an AI-DDS tool relative to standard diagnostic approaches against the logistical and workflow disruptions that introducing and integrating a new tool into health systems may cause (Tadavarthi et al., 2020; Parikh and Helmchen, 2022). Furthermore, payers can also seek assurance of long-term technical support from algorithm developers.
Although there are no direct reimbursement channels for many types of AI-DDS tools, within the scope of CMS payment systems, there are currently two primary mechanisms through which AI-DDS services can be reimbursed. The first is that CMS reimburses physician office payments through the Medicare Physician Fee Schedule (MPFS). Within MPFS, payment details are specified via Current Procedural Terminology (CPT) codes, maintained by the American Medical Association (AMA). CPT codes denote different procedures and services provided in the clinic. New AI-CDS/DDS systems that receive approval for reimbursement by CMS may be assigned a CPT code, as was done in 2020 for IDx-DR, an autonomous AI tool for the diagnosis of diabetic retinopathy (Digital Diagnostics, 2022). The second CMS mechanism is the Inpatient Prospective Payment System (IPPS) for hospital inpatient services. Within IPPS, the Diagnosis Related Groups (DRG) coding system describes bundles of procedures and services provided to clusters of medically similar patients. Novel AI-DDS tools can be reimbursed in the context of a DRG via a mechanism known as the New Technology Add-on Payment (NTAP). NTAP, created to encourage the adoption of promising new health technologies, provides supplemental payment to a hospital for using a given new technology in the context of a broader care plan that may be covered in the original DRG (Chen et al., 2021).
As AI-DDS systems become more prevalent, sophisticated, and integrated into broader diagnostic workflows, distinguishing their specific role in the diagnostic process and ascribing specific reimbursement values to an algorithm may become difficult. AI-DDS tools may fare better and enjoy greater adoption under value-based payment frameworks, where efficiency and overall quality of care are incentivized rather than individual procedures (Chen et al., 2021).
Domain 2: Means to Use
Paramount to establishing the value proposition is ensuring that clinical environments are properly equipped to support and sustain the implementation of AI-DDS tools. This consists of two interrelated elements: (a) the data and computing infrastructure required to collect and clean health care data, develop and validate an AI algorithm at the point of care, and perform routine maintenance and troubleshooting of technical problems in a high-throughput environment; and (b) the human and operational resources needed to conduct these technical functions so clinicians can seamlessly interface with these tools.
Infrastructure
Building the necessary infrastructure to deploy AI-DDS relies on developing the hardware and software capabilities to support a range of functions, beginning with data processing and curation. Concurrent with developing and implementing a working AI-DDS pipeline, several health IT infrastructure and data flow steps are required to support the implementation and sustainment of an AI-DDS tool. The first point of entry into the pipeline is data ingestion. This step requires linking a data producer, such as an MRI machine, into a data collection and processing workflow to maintain and represent the data in a way that can be leveraged by an AI-DDS algorithm. Many AI-DDS systems currently in use are “locked,” which means that the algorithms are static. However, in the case of a continuous learning/adaptive AI system, in which the system continuously ingests new data to update the algorithm in “real time,” updating could be performed on a fixed schedule (e.g., every day or month) or in response to a trigger. The next consideration is determining where and how the raw data is stored (e.g., an enterprise data warehouse [EDW] versus a data lake). In practice, these considerations are constrained by, first, the specific clinical problem being addressed and, second, the extent to which the available resources can accommodate the complexity of the pipeline. An EDW, which contains structured, filtered data for specific uses, may be preferred for operational analysis, whereas a data lake, which is a large repository of raw data for purposes yet to be specified, may be selected by institutions seeking to perform deep research analysis. While model development is a distinct step in building an AI pipeline, it is nonetheless interdependent with deployment considerations. For example, an institution seeking to build analytic tools that are robust to future changes in imaging (e.g., adding a new MRI machine) may opt for the more flexible architecture of a data lake instead of a traditional EDW. This, in turn, creates dependency cascades, since the data storage choice changes the order and extent to which data cleaning and other pre-processing pipelines are implemented. Thus, AI-DDS development and implementation choices are both business operations and data science decisions, since their steps are codependent.
Some clinical problems may require more frequent data updates or “data meals” to ensure that adaptive AI systems can appropriately address rapidly evolving issues with a nascent foundation of data. For instance, a COVID-19 diagnostic model at the beginning of the pandemic might have been built around admission vital signs and complete blood count (CBC) results. However, as knowledge about the natural history of the illness progressed, the model may have evolved to include additional data types such as erythrocyte sedimentation rates (ESR), chest X-ray (CXR) images, and metabolic panel data. In many hospital systems, adding the ESR values is not particularly challenging from a data ingestion standpoint because this data originates from the same system that provides the CBC values. However, the addition of CXR images is challenging because it requires working with another department—radiology, in this instance—and interfacing with another information system (picture archiving and communication system [PACS]). Finally, extending predictions from a single outcome at a discrete point in time (i.e., cross-sectional analysis) to multiple predictions or ones relying on time series data can impact upstream choices for data ingestion pipelines.
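As a concrete illustration of this distinction, the following hypothetical sketch (using pandas) merges laboratory values that arrive through an existing feed with image-derived features produced by a separate PACS pipeline; all table and column names are invented for illustration.

```python
# Illustrative sketch (hypothetical tables and column names) of how adding a
# new data type changes the ingestion step. ESR values arrive through the same
# laboratory feed as the CBC, so they join on the existing keys; CXR-derived
# features come from a separate PACS pipeline and need their own extraction
# and linkage step before they can be merged.
import pandas as pd

labs = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "encounter_id": [101, 102, 103],
    "wbc": [11.2, 7.4, 15.8],
    "esr": [42, 18, 77],             # new analyte, but same source system
})

cxr_features = pd.DataFrame({         # produced by a separate imaging pipeline
    "patient_id": [1, 3],
    "encounter_id": [101, 103],
    "cxr_opacity_score": [0.81, 0.34],
})

# Left-join so encounters without an imaging study are retained with NaN.
features = labs.merge(cxr_features, on=["patient_id", "encounter_id"], how="left")
print(features)
```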
It is also important to consider that health care AI needs to be deployed in clinical workflows. In these settings, the demand for near real-time data can result in added hardware complexity, expense, and risk. Notably, for most AI-DDS systems, raw data is insufficient; high-quality data that has been curated and annotated is required for robust algorithm training. At a minimum, redundant storage and processing cores capable of model training and validation are essential. While the granular technical requirements are specific to the algorithm employed and the amount and type of data (e.g., images vs. audio vs. text), institutions seeking to implement AI-DDS tools may need to access storage on the terabyte and potentially petabyte scale. However, not all data need to be available for real-time access. Furthermore, while discussion of data privacy and security is beyond the scope of this section, there are numerous Health Insurance Portability and Accountability Act (HIPAA)-compliant cloud solutions that could address the issues of real-time data access and storage availability. These issues should be carefully considered in an institution’s data plan when seeking to develop and deploy AI-DDS tools.
Another major consideration beyond storage is processing power, particularly for model development and model updating. The types and number of specific chipsets that would be most beneficial should be determined by expert consultation once there is some understanding of the clinical use case and the amount and type of medical data involved. Due to the computational requirements, deep learning-based models might require use of graphics processing units (GPUs), which, in contrast to central processing units (CPUs), offer parallel processing across many cores, a capability that is particularly useful for training deep learning models. While such models could be run on conventional CPUs, efficiency may be reduced by several orders of magnitude depending on model complexity, resulting in models that take weeks to train rather than hours.
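The following minimal sketch (using PyTorch) illustrates the common pattern of training on a GPU when one is available and falling back to a CPU otherwise; the model and data are toy placeholders.

```python
# Minimal sketch of falling back from GPU to CPU when training a small
# deep learning model; on CPU the same code runs, only more slowly.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(256, 32, device=device)            # toy batch of 256 examples
y = torch.randint(0, 2, (256, 1), device=device).float()

for _ in range(10):                                # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"trained on {device}, final loss {loss.item():.4f}")
```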
Finally, with respect to deployment, it is essential that there is a local solution permitting any mission-critical AI-DDS tools to continue to function at times when internet connectivity is disrupted. Previously, these “downtime” events were often limited to a few hours or days. However, in an age when hospitals are becoming an increasing target for ransomware attacks, plans should be made for what to do if a downtime event lasts weeks or months.
With respect to software needs, the ability of models to run on mobile devices is becoming increasingly important. As such, the ability to either securely log on to a hospital’s server or perform the computations for an AI-DDS on a mobile device is becoming the industry standard, rather than a bespoke one-off requirement for providers enthusiastic about technology. The extent to which health systems should invest in such technology depends on the amount and type of data, the complexity and efficiency of AI/ML models, and the clinical scenario the AI-DDS is addressing. To illustrate, consider an AI-DDS that predicts the need for hospital admission based on data collected from a traveling wound care nurse checking capillary blood glucose and uploading a picture of a patient’s worsening extremity wound. All of this can now be done on a mobile device. A model could be implemented such that the traveling wound care nurse takes a picture and runs the model at the point of care using an application on a mobile device.
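One way such a workflow might be wired together is sketched below: the mobile application posts the wound photo and glucose reading to a hospital-hosted inference service over an authenticated connection. The endpoint, token handling, and response fields are hypothetical and shown only to illustrate the pattern.

```python
# Hypothetical sketch of the "secure server" option: a mobile app posts the
# wound photo and point-of-care glucose reading to a hospital-hosted inference
# endpoint over TLS and receives an admission-risk score. The URL, token, and
# response fields are illustrative, not a real product API.
import requests

API_URL = "https://ai-dds.example-hospital.org/v1/wound-admission-risk"  # hypothetical
TOKEN = "replace-with-oauth-token-from-hospital-identity-provider"

with open("wound_photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"patient_id": "12345", "glucose_mg_dl": 212},
        files={"image": ("wound_photo.jpg", f, "image/jpeg")},
        timeout=30,
    )

response.raise_for_status()
result = response.json()
print(result.get("admission_risk"), result.get("model_version"))
```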
Another key consideration for deployment of AI-DDS tools is system interoperability. This issue can be conceptualized from many different “pain points.” One occurs at the data ingestion stage, as discussed previously. This may be due to incompatible EHR systems (e.g., the hospital’s inpatient system uses Cerner, but the outpatient clinics use Epic), which cannot “speak” to one another. Alternatively, a health system may have hospitals that use the same EHR, but the EHRs do not share a common data storage repository. For example, although every hospital uses the same PACS system, pulling imaging data from hospitals A, B, and C requires accessing one server, while pulling data from hospitals X, Y, and Z across the state requires accessing a different server, an issue of interoperability related to information exchange. A second ingestion scenario would require harmonization of data from different sensors into the same repository. For example, the hospital may use multiple types of point-of-care glucose monitors. The workflow workaround is often that the bedside technician looks at the monitor reading and then types it into the EHR. However, if this data needed to be transitioned into an automatically collected format, there may need to be different integrations for each type of glucose monitor. A second “pain point” occurs in the data cleaning stage, also known as the data curation stage. Consider the ramifications of a hospital changing from reporting hemoglobins to hematocrits or from traditional troponins to high-sensitivity troponins. While this makes little difference at the bedside, it has the potential to significantly complicate AI/ML modeling if the change is not recognized and a standardized process for addressing the inconsistency is not developed. Although a hospital’s primary focus should be on selecting tools that enhance value for patients, some attention should be devoted to considering how these tools may impact AI-DDS pipelines. As the reliance on cyber-physical systems grows, health systems should plan to mitigate how physical equipment upgrades change AI/ML data ingestion and use pipelines. Usually, such changes have a trivial effect on overall model performance; however, they can significantly impact the time and effort required to pre-process data. The most efficient approach is to include members on the AI-DDS team with expertise in cyber-physical systems and extract, transform, and load (ETL) data pipelines.
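A simplified sketch of the kind of curation step described above is shown below: local laboratory codes are mapped to a canonical analyte and converted to a common unit before modeling. The codes, mapping table, and conversion factors are illustrative; in practice, conventional and high-sensitivity troponin assays also differ analytically and may need to be treated as separate features.

```python
# Simplified sketch of a data-curation step that maps local lab codes to a
# canonical analyte and converts units so a downstream model sees consistent
# inputs. The codes, mapping table, and conversion factors are illustrative.
import pandas as pd

# Hypothetical local-code-to-canonical mapping maintained by the ETL team.
LAB_MAP = {
    "TROP_I": {"canonical": "troponin", "to_ng_per_L": 1000.0},   # reported in ng/mL
    "HS_TROP_I": {"canonical": "troponin", "to_ng_per_L": 1.0},   # reported in ng/L
}

raw = pd.DataFrame({
    "patient_id": [1, 2],
    "lab_code": ["TROP_I", "HS_TROP_I"],
    "value": [0.04, 38.0],
})

raw["canonical_analyte"] = raw["lab_code"].map(lambda c: LAB_MAP[c]["canonical"])
raw["value_ng_per_L"] = raw.apply(
    lambda r: r["value"] * LAB_MAP[r["lab_code"]]["to_ng_per_L"], axis=1
)
raw["source_assay"] = raw["lab_code"]   # retain provenance so the change stays traceable
print(raw)
```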
In addition, ensuring providers can readily access AI-DDS tools is critical to adoption. Successfully deploying an AI-DDS tool requires optimizing the multitude of human and software factors involved in the patient care workflow. However, as a preliminary consideration, the essential task is building infrastructure that avoids clinicians devising workarounds. There is ample evidence that clinicians will avoid using, or develop workarounds for, poorly tailored solutions or requirements that are perceived as being foisted on them and otherwise constitute yet another inefficiency in an already inefficient system. Regarding software, developers must be prepared to ensure that the tool can be used and viewed on both desktop and mobile devices, and potentially in both provider-facing and patient-facing versions of the EHR software. Transitioning between these various contexts should be seamless and, more importantly, provide the same information.
Resources
Apart from the data and computational infrastructure necessary to develop, implement, and maintain a health care AI-DDS solution, there are also significant human capital requirements. Practices and health systems often lack the human resources required to run even a minimal data infrastructure that can support AI-powered applications. Key requirements include, but are not limited to, frontline IT staff, data architects, and AI/machine learning specialists to understand the context of use and tailor the solution to be fit for purpose. The infrastructure also requires information security and data privacy officers, legal and industrial contract officers for business and data use agreements, and IT educators to train and retrain providers and staff.
To ensure sustainable and safe integration of AI-DDS tools into clinical care, it is crucial that the tools meet the clinical needs of the institution while also maintaining alignment with best practice guidelines, which change over time (Sutton et al., 2020). This requires a governance process in the health care system, with time investments from executive leadership and sponsorship as well as committee and oversight mechanisms to provide regular review (Kawamoto et al., 2018). Direct clinical champions must also have dedicated time to interface between front-line clinicians and the leadership, informatics, and data science teams. These models and tools need to be assessed for accuracy in the local environment and modified and updated if they do not perform as expected. Lastly, they must be surveilled over time and checked regularly to ensure performance maintenance.
One of the major challenges in effectively deploying AI in health care is managing implementation and maintenance costs. Nationally, non-profit hospital systems report an average profit margin of around 6.5% (North Carolina State Health Plan and Johns Hopkins Bloomberg School of Public Health, 2021). These relatively slim margins encourage health care systems to be conservative in investing in unproven or novel technologies. Robust analysis of cost savings and cost estimates in the deployment of AI in health care is still lagging, with only a small number of articles found in recent systematic reviews, most of which focus on specific cost elements (Wolff et al., 2020). In general, industry estimates suggest the overall cost of development and implementation of such tools can range from $15,000 to $1 million, depending on the complexity of the system and its integration with workflow (Sanyal, 2021).
Another challenge is the tension between hiring a health care technology firm to develop or adapt the algorithms and tools for a health care environment versus hiring and supporting internal staff, which could cost between $600 and $1,550 a day (Luzniak, 2021). Even when much of the core data science expertise is hired into a system, data scientists spend about 45% of their time on data cleaning (GlobeNewswire, 2020). Because institutional familiarity and ongoing business intelligence and clinical operations needs require managing data in-house, many systems choose to hire internally for a portion of their infrastructure needs, which requires a continued injection of capital.
Domain 3: Method to Use
Operationalizing and scaling innovations within the health care delivery system is costly and challenging. This is partly due to the heterogeneity of clinical workflows across and within organizations, medical specialties, patient populations, and geographic areas. Thus, AI-DDS tools must contend with this heterogeneity by plugging into key process steps that are universally shared. However, options for reshaping physician workflows are limited by the still-nascent implementation science for deploying interventions that change provider behavior, as well as by the non-modularity and non-modifiability of extant, sometimes antiquated, point-of-care software, including EHRs (Mandl and Kohane, 2012).
Coupled with workflow challenges is the issue of developing and deploying these tools in a manner that improves efficiency of practice and frees up cognitive and emotional space for providers to interact with their patients. The risk of unsuccessful systems interfering with or detracting from the diagnostic process, through user interface distractions or data obfuscation, exists and must be guarded against. In addition, extensive user training, both at onboarding and on an ongoing basis, supported by an equally nimble educational infrastructure, is necessary to ensure technical proficiency.
Workflow
AI-DDS tools must be effectively integrated into clinical workflows to impact patient care. Unfortunately, many integrations of AI solutions into clinical care fail to improve outcomes because context-specific factors limit efficacy when tools are diffused across sites. Although numerous details are crucial, three key insights have emerged from experiences integrating AI/ML tools into practice at various locations and from literature reviews of the AI clinical care translation process (Kellogg et al., 2022; Sendak et al., 2020a; Yang et al., 2020; He et al., 2019; Wiens et al., 2019; Kawamoto, 2005).
First, health systems looking to use AI-DDS tools must recognize the factors that shape adoption and be willing to restructure roles and responsibilities to allow these tools to function optimally. The current state of health information technology centers workflows around the EHR, and AI tools often automate tasks that historically required manual data entry or review. Similarly, AI tools often codify clinical expertise and can prompt concern from clinicians who value autonomy (Sandhu et al., 2020). To navigate these complexities, health systems may need to develop new workflows that change clinical roles and responsibilities, including new ways for interdisciplinary teams to respond to AI alerts. For example, an increasing number of AI tools require staff in a remote, centralized setting to support bedside clinical teams (Escobar et al., 2020; Sendak et al., 2020b). Many hospitals already benefit from more manual remote, interdisciplinary support through services such as cardiac telemetry, eICU, and overnight teleradiology. Similarly, AI can decentralize the location of specialized services. For example, instead of diabetic retinopathy screening requiring a visit to a retina specialist, Digital Diagnostics now hosts automated AI machines at grocery stores (Digital Diagnostics, 2019).
Second, health systems must closely examine the unique impacts of AI integration on different stakeholders along the care continuum and balance stakeholder interests. This is a key facet in establishing the value proposition for the introduction of a new AI-DDS tool. Experience in AI integration reveals that “predictive AI tools often deliver the lion’s share of benefits to the organization, not to the end user” (Kellogg et al., 2022). Predictive AI tools often identify events before they happen, meaning the optimal setting for AI use is upstream of the setting typically affected by the event. For example, patients with sepsis die in the hospital and often in intensive care units, but timely intervention to prevent complications must occur within the emergency department (ED). Similarly, patients with end-stage renal disease often present to the ED to initiate dialysis, but preventive interventions must occur in primary care. Project leaders looking to integrate AI into workflows must map out value streams, and if value is captured by downstream stakeholders in a different setting, project leaders must identify other opportunities to create value for end users. One approach is to identify “how a tool can help the intended end users fix problems they face in their day-to-day work” (Kellogg et al., 2022). For example, when a team of cardiologists and vascular surgeons aimed to reduce unnecessary hospital admissions for patients with low-risk pulmonary embolisms (PEs), ED clinicians initially pushed back. Scheduling outpatient follow-up for a low-risk PE had historically been challenging, so the specialists offered to coordinate care for patients identified by the AI/ML tool and block off outpatient appointments to ensure timely follow-up, allowing both the tool and the clinicians to operate as efficiently as possible (Vinson et al., 2022).
Third, workflows should be continuously monitored and adapted to optimize the labor effort required to effectively use AI tools. For example, when a chronic kidney disease algorithm was implemented on a Duke Health Medicare population of over 50,000 patients, many patients identified by the algorithm as high risk for dialysis were already on dialysis or seeing a nephrologist outside of Duke (Sendak et al., 2017). Early intervention was no longer as relevant for these patients, so the team agreed to establish a new pre-rounding process by which a nurse filtered out patients already impacted by the outcome of interest. In another example, after months of manually reviewing alerts for patients identified by an AI tool as high risk of inpatient mortality, the lead nurse felt confident that the algorithm identified appropriate patients (Braier et al., 2020). The team agreed to remove the manual review step and directly automate emails to hospitalist attendings to consider goals-of-care conversations. Lastly, there must also be feedback loops with end users to ensure that the AI tool continues to be appropriately used. For example, hospitalists using the inpatient mortality tool inquired about using the tool to triage patients to intensive care units. Similarly, nurses responding to sepsis alerts began asynchronously messaging clinicians in the ED through the EHR rather than calling and talking directly with providers. These changes in communication approach and intended use may seem subtle but can undermine the validity of the tool and potentially harm patients. To avoid drift in workflow or use of AI tools, project leaders should clearly document algorithms and regularly train staff on appropriate use (Sendak et al., 2020c).
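The pre-filtering step described above can be as simple as the following hypothetical sketch, which suppresses patients already on dialysis or already followed by an outside nephrologist from the outreach list; the fields and values are invented for illustration.

```python
# Illustrative sketch (hypothetical fields) of the pre-filtering step described
# above: patients flagged as high risk for dialysis are suppressed from the
# outreach list if they are already on dialysis or already followed by a
# nephrologist, since early intervention no longer applies to them.
import pandas as pd

flagged = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "risk_score": [0.91, 0.84, 0.79, 0.88],
    "on_dialysis": [True, False, False, False],
    "external_nephrology_followup": [False, True, False, False],
})

actionable = flagged[
    ~flagged["on_dialysis"] & ~flagged["external_nephrology_followup"]
]
print(actionable[["patient_id", "risk_score"]])
```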
Efficiency of Practice
The impact of AI-DDS tools and systems on the cognitive and clerical burdens of health care providers remains unclear. Successful tools would ideally reduce both burdens by delivering just-in-time diagnostic assistance in the most unobtrusive manner to providers while minimizing clerical tasks that might be generated by their use (e.g., extra clicks, menu navigation, more documentation). Experience with traditional CDS systems has shown that these tools are significantly more likely to be used if they are integrated into EHRs instead of existing as stand-alone systems. However, integration alone is insufficient. How that integration is executed—from the design of the user interfaces to the way alerts and notifications are displayed (e.g., triggers, cadence) or handled (e.g., non-interruptive versus interruptive alert)—is critical to practice efficiency and, ultimately, provider acceptance and adoption.
One major impediment is the high degree of difficulty integrating new software with vendor EHR products. Most integrations are “one-offs,” and, therefore, the technology fails to diffuse broadly. The 21st Century Cures Act (“Cures Act”) specifies a new form of health IT interoperability underpinning the redesign of provider-facing applications as modular components that can be launched within the context of the EHR, and which may be instrumental in delivering AI capabilities to the point of care (114th Congress, 2016a). The Cures Act and the federal rule that implements its interoperability provisions require that EHRs have an application programming interface (API) granting access to patient records “with no special effort” (Wu et al., 2021; HHS, 2020). “APIs are how modern computer systems talk to each other in standardized, predictable ways. The Substitutable Medical Applications, Reusable Technologies (SMART) on Fast Healthcare Interoperability Resource (FHIR) API, required under the rule, enables researchers, clinicians, and patients to connect applications to the health system across EHR platforms” (Wu et al., 2021). Top EHR vendors have all incorporated common API standards (“SMART on FHIR”) into their products, creating a substantial opportunity for innovation in software and data-assisted health care delivery. Illustrative of the transformative potential of integrating AI-DDS with EHRs is Apple’s decision to use the SMART API to connect its Health App to EHRs at over 800 health systems, giving 200 million Americans the option to acquire standardized and computable copies of their medical record data on their phones. The implementation science underpinning translation of machine learning to practice is nascent, however. Cultivating support for standards is driving an emerging ecosystem of substitutable apps, which can be added to or deleted from EHRs (like apps on a smartphone can). Such apps offer an opportunity to deliver the output of diagnostic algorithms within the provider workflow during an EHR session within a patient context (Barket and Johnson, 2021; Kensaku et al., 2021; Khalifa et al., 2021).
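As a rough illustration of what such standards-based access looks like in practice, the following sketch queries a FHIR R4 server for a patient's recent laboratory observations over the REST API. The base URL and patient ID are placeholders, and a real SMART on FHIR application would first complete the SMART/OAuth2 launch sequence to obtain an access token and patient context from the EHR.

```python
# Minimal sketch of pulling data through a FHIR REST API of the kind required
# by the Cures Act rule. The base URL, token, and patient ID are placeholders.
import requests

FHIR_BASE = "https://fhir.example-ehr.org/r4"      # hypothetical endpoint
ACCESS_TOKEN = "token-obtained-via-smart-launch"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Accept": "application/fhir+json"}

patient = requests.get(f"{FHIR_BASE}/Patient/12345", headers=headers, timeout=30).json()

# Query recent laboratory Observations for the same patient.
obs_bundle = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "12345", "category": "laboratory", "_sort": "-date", "_count": 10},
    headers=headers,
    timeout=30,
).json()

for entry in obs_bundle.get("entry", []):
    resource = entry["resource"]
    code = resource["code"]["coding"][0].get("display", "unknown")
    value = resource.get("valueQuantity", {}).get("value")
    print(code, value)
```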
EHR alert fatigue is a widespread and well-studied phenomenon among providers that has been linked to avoidable medical errors and burnout (Ommaya et al., 2018). How the introduction of AI-DDS systems into next-generation EHRs might affect alert fatigue and the provider experience is unclear. Successful deployment of these AI-DDS tools likely requires use of both human factors engineering and informatics principles, as the problem arises from the technology and how busy humans interact with it. Diagnostic outputs provided by the DDS should be specific, and clinically inconsequential information should be reduced or eliminated. Outputs should be tiered according to severity, with any alternative diagnoses presented in a way that directs providers’ attention to clinically important data. Alerts must be designed with human factors principles in mind (e.g., format, content, legibility, placement, colors). Only the most important, high-level, or severe alerts should be made interruptive.
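One simple way to operationalize such tiering is sketched below: each AI-DDS output is routed to an interruptive alert, a non-interruptive message, or an audit log based on its severity category and predicted probability. The categories and thresholds are placeholders, not validated cutoffs.

```python
# Illustrative sketch of tiering AI-DDS output by severity so that only the
# most clinically consequential findings generate interruptive alerts; the
# thresholds and categories are placeholders, not validated cutoffs.
from dataclasses import dataclass

@dataclass
class Alert:
    finding: str
    probability: float
    severity: str        # "critical", "significant", or "incidental"

def route_alert(alert: Alert) -> str:
    """Return how the alert should surface in the EHR."""
    if alert.severity == "critical" and alert.probability >= 0.9:
        return "interruptive pop-up"            # reserved for the highest tier
    if alert.severity in ("critical", "significant") and alert.probability >= 0.7:
        return "non-interruptive inbox message"
    return "suppressed (logged for audit only)"

for a in [
    Alert("suspected sepsis", 0.95, "critical"),
    Alert("possible low-risk PE", 0.75, "significant"),
    Alert("incidental granuloma", 0.60, "incidental"),
]:
    print(a.finding, "->", route_alert(a))
```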
While thoughtful human-centered design can facilitate adoption to an extent, some degree of health care provider training will be required to ensure the necessary competencies to use AI-based DDS tools. The rapid pace of technological change requires such educational infrastructure to be equally nimble. Training opportunities must be integrated across undergraduate medical education, graduate medical education, and continuing medical education. To the extent that some AI-DDS tools are designed to support collaborative team workflows, interprofessional and multidisciplinary training is also necessary. While competencies surrounding the use of AI-DDS systems are still evolving and yet to be established, the authors of this paper have identified the following core areas as essential:
1. Foundational knowledge (“What is this tool?”);
2. Critical appraisal (“Should I use this tool?”);
3. Clinical decision-making (“When should I use this tool?”);
4. Technical use (“How should I use this tool?”);
5. Addressing unintended consequences (“What are the side effects of this tool and how should I manage them?”).
For foundational knowledge, health care providers need to understand the fundamentals of AI, how AI-DDS are created and evaluated, their critical regulatory and medicolegal issues, and the current and emerging roles of AI in health care. For critical appraisal, providers need to be able to evaluate the evidence behind AI-DDS systems and assess their benefits, harms, limitations, and appropriate uses via validated evaluation frameworks for health care AI. For clinical decision-making, providers need to identify the appropriate indications for and incorporate the outputs of AI-DDS into decision-making such that effectiveness, value, and fairness are enhanced. For technical use, providers need to perform the tasks critical to operating AI-based DDS in a way that supports efficiency, builds mastery, and preserves or augments patient-provider relationships. To address unintended consequences, providers need to anticipate and recognize the potential adverse effects of AI-DDS systems and take appropriate actions to mitigate or address them. Determining how to integrate this education into an already crowded training space, whether extra certification or credentialing is required for providers to use AI-DDS, and how institutions can adapt to rapidly changing training needs on the frontlines remain open questions.
Domain 4: Desire to Use
Ultimately, the success of AI-DDS tools in optimizing health system performance is dependent on the desire of clinicians to incorporate these tools into routine practice. Indeed, the factors discussed in the previous three core domain sections are crucial variables in the “desire to use” calculus. Additionally, it is important to attend to psychological factors, such as addressing how these tools can facilitate professional fulfillment among providers, including mitigating burnout. The other indispensable element within the desire to use core domain is trust. Clinicians must be able to trust that these tools can deliver quality care outcomes for their patients without creating harm or error and align with both patients’ and clinicians’ ethics and values.
Professional Fulfillment
Continued alignment of AI technology with the Quintuple Aim element of improving the work-life balance of health care professionals remains an indispensable aspect of the potential success and adoption of AI tools. Health care providers report high levels of professional burnout, partially attributable to EHRs and related technologies (Melnick et al., 2020). Generally, for every hour spent with patients, providers spend another two hours in front of their computers (Colligan et al., 2016). The exponential rise in digital work since the COVID-19 pandemic began has exacerbated burnout and amplified some providers’ deeply rooted reluctance to adopt new technologies (Lee et al., 2022). Successful AI-DDS tools will need to overcome this hesitancy and tap into positive sources of fulfillment for providers, including facilitating professional pride, autonomy, and security; reassessing or expanding their scope of practice; and augmenting their sense of proficiency and mastery.
A major contributing source of professional fulfillment is the strength of the patient-provider relationship. As discussed, AI-DDS tools hold the potential to greatly improve diagnostic accuracy and reduce medical errors. If seamlessly integrated, they could also unburden providers of rote tasks, enabling them to allocate more attention to engaging and establishing meaningful bonds with patients. However, by deferring certain higher-order data analysis and synthesis tasks—functions traditionally within the scope of providers—to an AI-based system, providers may experience a sense of detachment from their work. There also is concern that AI systems could erode the patient-provider relationship if patients begin to preferentially value the diagnostic recommendation of an AI system. While the personal qualities of interacting with a human might be preferred, some believe that AI’s ability to emulate human conversation (via chatbots or conversational agents) could eventually supplant providers (Goldhahn et al., 2018). However, it should be noted that this concern only applies to autonomous systems, and the assistive systems this paper focuses on involve, by definition, a health care professional in the workflow.
As observed in previous cycles of AI diffusion, potential threats to job security have negatively impacted provider receptivity to AI. Anxiety has been particularly acute in certain specialties, such as radiology, where in 2016, speculation arose that radiologists would be irrelevant within five years (Hinton, 2016). However, instead of replacing providers, AI in radiology has assumed an assistive role, supporting providers by sorting, highlighting, and prioritizing key findings that might otherwise be missed (Parakh, 2019). Therefore, to foster the adoption of AI-DDS, it is important to uphold the paradigm of augmented intelligence—in which these tools enhance human cognition, and the human is ultimately the arbiter of the action recommended. A key element of this is to empower providers to co-exist in an increasingly digital world through skill-building and by instilling trust and transparency in AI systems. It is also important to reconsider expectations about provider roles and responsibilities. With the potential of increased practice efficiency, AI-DDS tools may expand provider bandwidth and purview. In this regard, providers could see patients in greater numbers, through multiple media, and in geographically distant areas.
Despite increasingly sophisticated AI algorithms, it is imperative to value the human qualities that can correct or counteract the shortcomings of AI systems. For instance, biased algorithms struggle with diagnosing melanoma in darker-skinned patients (Krueger, 2022). Having a provider carefully review and assess results produced or interpreted by an AI tool is essential to avoiding a missed or erroneous diagnosis in this case. Above all, provider involvement is critical in shared decision-making. Even in circumstances when an AI-DDS tool is highly accurate, providers are indispensable in helping patients select the right course of treatment based on their health goals and preferences.
Trust
Trust within human-AI diagnostic partnerships requires a human willingness to be vulnerable to an AI system. Trust overall is a complex concept, and trust in technology is equally complex (Lankton et al., 2015). A human user may distrust an AI-DDS tool whose recommendations go against their intuitive conclusions, especially if that person has professional training and significant experience. A user may also distrust AI-DDS recommendations if the user finds something faulty with the development process of the tool, such as inadequate testing or a lack of process transparency. Another potential impediment is concern that the tool’s development and use are motivated by profits over people, or that the tool is not aligned with professional values (Rodin and Madsbierg, 2021). Governance and standards setting for various AI tools, at both the individual clinician and health care organization levels, also remain unclear, which may inhibit trust. Drivers of trust, on the other hand, can include positive past experiences with a particular manufacturer or service provider, seamless interoperability of a new application with an existing suite of tools from a familiar and currently trusted company or product, or company reputation among the professional health care community (Adjekum et al., 2018; Benjamins, 2021; European Commission, 2019).
In this section of the paper, the authors focus on two significant sources of distrust of AI-DDS products as especially relevant to the adoption of AI-DDS by clinicians:
1. bias (real or perceived), and
2. liability.
Providers may be concerned that AI-DDS tools underperform in care for certain patients, especially marginalized populations, as AI trained on biased data can produce algorithms that reproduce these biases. However, it is critical to recognize that bias has multiple sources. It could arise, for example, if the data used to train the AI did not adequately represent all population subgroups that eventually will rely on the AI-DDS tool. It is crucial to ensure that training data are as inclusive and diverse as the intended patient populations, and that deficiencies in the training data are frankly disclosed. Using all-male training data for a tool intended for use only in males to detect a male health condition would not result in bias, but using all-male data would cause bias in tools intended for more general use. Other bias types could exist, for example, if AI tools are trained using real-world data incorporating systemic deficiencies in past health care. For example, if doctors in the past systematically underdiagnosed kidney disease in Black patients, the AI can “learn” that bias and then underdiagnose kidney disease in future Black patients. Thus, it is crucial to design and monitor AI tools with a lens toward preventing, detecting, and correcting bias and disclosing limitations of the resulting AI-DDS tools.
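To make the monitoring step concrete, the following is a minimal illustrative sketch (not drawn from any specific AI-DDS product) of how a deployment team might check an AI-DDS tool’s performance across patient subgroups. The column names and the five-point performance gap threshold are assumptions chosen only for illustration.

```python
# Illustrative sketch: monitoring an AI-DDS tool for subgroup performance gaps.
# Assumes a predictions table with hypothetical columns "subgroup", "y_true", "y_pred".
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str = "subgroup") -> pd.DataFrame:
    """Compute sensitivity and specificity of binary predictions per subgroup."""
    rows = []
    for group, g in df.groupby(group_col):
        tp = ((g.y_true == 1) & (g.y_pred == 1)).sum()
        fn = ((g.y_true == 1) & (g.y_pred == 0)).sum()
        tn = ((g.y_true == 0) & (g.y_pred == 0)).sum()
        fp = ((g.y_true == 0) & (g.y_pred == 1)).sum()
        rows.append({
            "subgroup": group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Example use: flag subgroups whose sensitivity trails the best-performing group
# by more than 5 percentage points (an arbitrary illustrative threshold).
# report = subgroup_report(predictions_df)
# gaps = report[report.sensitivity < report.sensitivity.max() - 0.05]
```

Routine reports of this kind, shared with clinician users, are one practical way to act on the disclosure and monitoring considerations described above.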
Complicating this issue is the fact that it can be very difficult to understand the inner workings of many AI-DDS algorithms. The terms “transparency” and “explainability” can have various technical meanings in different contexts, but this paper conceives them broadly to denote that the user of an AI tool, such as a health care professional, would be able to understand the underlying basis for its recommendations and how it arrived at them. It can be challenging, and at times impossible, to understand how an AI arrives at its output and to determine whether the tool in question problematically replicates social biases in its predictions. Furthermore, developers rarely reveal the underlying data sets used to train AI-DDS algorithms, making it difficult for providers to ascertain if a particular product is trained to reflect their patient populations. There may also be tension between the AI-DDS purchasing decisions made by hospital leadership and the providers affiliated with the institutions, with the perception that hospital leadership is “imposing” use of specific AI-DDS algorithms on the providers.
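As a simple illustration of what an independently reviewable basis can look like, the sketch below (with invented feature names, weights, and values) shows the kind of per-feature breakdown that is trivial to produce for a linear risk model but generally unavailable for “black box” models. It is offered only to make the contrast concrete, not to describe any particular product.

```python
# Illustrative sketch (hypothetical model): surfacing the basis for a recommendation.
# For a linear model, each feature's contribution is simply weight * value.
feature_weights = {"age": 0.8, "creatinine": 1.6, "systolic_bp": -0.4}   # assumed weights
patient = {"age": 1.2, "creatinine": 2.1, "systolic_bp": -0.3}           # assumed standardized values

contributions = {f: feature_weights[f] * patient[f] for f in feature_weights}
for feature, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>12}: {contribution:+.2f}")

# A deep neural network offers no comparably simple decomposition, which is the
# practical core of the transparency concern described above.
```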
To foster trust among clinician users, a regulatory framework that prospectively aims to prevent injuries (see discussion in Tools to Promote Trust), coupled with mechanisms to assign accountability and compensate patients if problematic outcomes occur, must exist. Because AI-DDS tools sit at the intersection of technology and clinical practice, there are two potential avenues for compensating patient injuries through the American tort system. The first is medical malpractice, which implies that the ultimate responsibility for problematic clinical decisions rests with the provider. The second is product liability, which implies that the responsibility for problematic clinical decisions rests instead with the developer and manufacturer of the AI-DDS tool.
Currently, the dividing line appears to be whether an independent professional, such as an end-user provider, could review the recommendations from an AI tool and understand how it arrived at them. As commentators note:
The Cures Act parses the product/practice regulatory distinction as follows: Congress sees it as a medical practice issue (instead of a product regulatory issue) to make sure health care professionals safely apply CDS [clinical decision support] software recommendations that are amenable to independent professional review. In that situation, safe and effective use of CDS software is best left to clinicians and to their state practice regulators, institutional policies, and the medical profession. When CDS software is not intended to be independently reviewable by the health care provider at the point of care, there is no way for these bodies to police appropriate clinical use of the software. In that situation, the Cures Act tasks the FDA with overseeing its safety and effectiveness. Doing so has the side effect of exposing CDS software developers to a risk of product liability suits (Evans and Pasquale, 2022).
This distinction is a workable and sensible one, reflecting the limitations of the average provider’s abilities to evaluate new AI-DDS tools. It would be helpful to educate providers and hospital administrators on the dividing line between explainable CDS tools, which allow health care providers to understand and challenge the basis for algorithmic decision-making, and “black box” algorithms, for which the basis of algorithmic decision-making is obscure. This distinction carries implications for liability insofar as courts may hesitate to hold providers accountable for “black box” tools that precluded the possibility of provider control. Providers who hesitate to adopt AI-DDS out of fear of medical malpractice liability may find that distinction comforting and trust-building. For patient injuries arising when AI-DDS systems are in use, policymakers and courts may wish to consider shifting the balance of liability from the current norm (which focuses almost entirely on medical malpractice) to one that also includes product liability in situations where the AI tool, rather than the provider, appears primarily at fault. This shift could further encourage trust and desire to use these tools among providers and would incentivize developers to design algorithms and select training data with a view to minimizing poor outcomes.
Product liability generally arises when a product inflicts “injuries that result from poor design, failure to warn about risks, or manufacturing defects” (Maliha et al., 2021). Product liability, to date, has only been applied in limited and inconsistent fashion to software in general and to health care software in particular (Brown and Miller, 2014). For example, in Singh v. Edwards Lifesciences Corp, the court permitted a jury to award damages against a developer because its software resulted in a catheter malfunctioning (CaseText, 2009b). On the other hand, in Mracek v. Bryn Mawr Hospital, a court rejected via summary judgment the plaintiff’s argument that product liability should be imposed when the da Vinci surgical robot malfunctioned in the course of a radical prostatectomy (CaseText, 2009a). Further complicating the product liability landscape, the Supreme Court concluded in Riegel v. Medtronic that devices going through the FDA premarket approval process, as opposed to other market authorization pathways such as 510(k) clearance, can enjoy certain protection against state product liability cases (CaseText, 2008). Thus, available redress for patients can vary depending on the market authorization pathway for the specific AI tool. The conflicting and limited case law in this area suggests that there is room to explore an expanded product liability landscape for AI-DDS software. One clear point from prior case law is that clinicians will bear the brunt of liability for injuries that occur when using AI-DDS tools “off-label” (e.g., using a tool that warns it is only intended for use on one patient population on a different population). This fact may help incentivize AI tool developers to disclose limitations of their training data since doing so can shift liability to providers who venture beyond the tool’s intended use.
It is also important to note that opening the door to product liability suits does not foreclose the potential for medical malpractice suits against providers who use AI-DDS tools. A provider who relies on AI-DDS tools in good faith could still face medical malpractice liability if their actions fall below the generally accepted standard of care for use of such tools or if the AI-DDS tool is used “off label” (e.g., using an AI-DDS tool developed for one type of MRI interpretation on another type of MRI image) (Price et al., 2019). Overall, courts are reluctant to excuse physician liability, allowing malpractice claims to proceed against physicians even in cases where:
1. there was a mistake in the medical literature or an intake form;
2. a pharmaceutical company failed to warn of a therapy’s adverse effect; or
3. there were errors by system technicians or manufacturers (Maliha et al., 2021).
These cases, taken together, suggest that providers cannot simply point to an AI-DDS error as a shield from medical malpractice liability.
Eventually, widespread adoption of AI-DDS could open the door for medical malpractice liability for providers who do not incorporate these tools into their practice, i.e., “failure to use”. Physicians, specifically, open themselves to medical malpractice liability when they fail to deliver care at the level of a competent physician of their specialty (Price et al., 2019). Currently, the standard of care does not include relying on AI-DDS tools. But as more and more providers incorporate AI-DDS tools into their practice, that standard may shift. Once the use of AI-DDS is considered part of the standard of care, medical malpractice liability will create a strong incentive for all providers to rely on these tools, regardless of their personal views on appropriateness.
Tools to Promote Trust
Two of the most impactful mechanisms to promote trust in AI-DDS among clinicians (and, thus, improve desire to use) would be to further refine the existing regulatory landscape for AI-DDS tools and to promote collaborations among stakeholders. This section of the paper explores avenues to promote trust.
To minimize concerns about liability, nuanced, thoughtful regulation and governance from all levels of the U.S. government—federal, state, and local—can reassure providers that they can trust available AI-DDS tools and move forward with implementation. A key factor affecting clinicians’ willingness to adopt AI-DDS tools is likely whether the tools will receive a rigorous, data-driven review of safety and effectiveness by the FDA before moving into clinical use. A potential concern is that some, but not necessarily all, AI-DDS software is subject to FDA medical device regulation under the Cures Act. It remains difficult for providers to intuit whether a given type of AI-DDS tool is or is not likely to have received oversight under FDA’s medical device regulations. Uncertainty about which tools will receive FDA oversight–and which marketing authorization process the FDA may require (e.g., premarket approval, 510(k), or de novo classification)–likely fuels provider discomfort with using AI-DDS tools.
A key source of this uncertainty, at present, is that the Cures Act addresses the scope of the FDA’s power to regulate various types of medical software but does not itself define or use the terms DDS or CDS software (114th Congress, 2016a; 21 U.S. Code § 360j, 2017). As used in this paper, AI-DDS tools broadly refer to computer-based tools, driven by AI algorithms, that use clinical knowledge and patient-specific health information to inform health care providers’ diagnostic decision-making processes (see Table 1), with DDS tools being a subset of CDS tools more generally. This paper thus follows the definition provided by the Office of the National Coordinator for Health Information Technology (ONC), which stresses that CDS tools “provide … knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care” (ONC, 2018). The FDA has used this ONC definition when discussing how CDS software is broadly understood (FDA, 2019b). Central to the ONC definition, and this paper, is the notion that DDS and CDS tools combine general medical “knowledge” with patient-specific information to produce recommended diagnoses. With AI-DDS systems, that knowledge can include inferences generated internally by an AI/ML algorithm.
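The following minimal sketch (with a deliberately simplified, hand-written rule and hypothetical values) illustrates this framing of a DDS tool as general medical knowledge applied to patient-specific information to yield a recommendation. In an AI-DDS tool, the hard-coded rule below would be replaced by inferences an AI/ML algorithm has learned from data.

```python
# Illustrative sketch only: "knowledge" + patient-specific information -> recommendation.
def recommend_diagnosis(patient):
    """Apply an encoded, simplified screening rule to patient-specific data."""
    # Encoded knowledge: HbA1c of 6.5% or higher is a standard diabetes criterion.
    if patient["hba1c_percent"] >= 6.5:
        return "Consider diabetes mellitus; confirm with repeat testing."
    return None

# Patient-specific information, e.g., pulled from the EHR (hypothetical value).
print(recommend_diagnosis({"hba1c_percent": 7.1}))
```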
The Cures Act authorizes the FDA to regulate only some of the software that might fit into the broader, more common conception of AI-DDS systems just described. Thus, the FDA does not have authority over every tool that clinicians might think of as a DDS/CDS tool. The Cures Act expressly excludes five categories of medical software from the definition of a “device” that the FDA can regulate (114th Congress, 2016a [21 U.S.C. § 360j(o)(1), 2017]). One of these exclusions places restrictions on the FDA’s power to regulate CDS and DDS software (114th Congress, 2016a [21 U.S.C. § 360j(o)(1)(E)]). Box 1 shows the specific wording of the relevant Cures Act exclusion.
BOX 1. Provisions of the Cures Act that Exclude Some AI-DDS Tools from FDA Oversight.
Section 3060 of the Cures Act, codified at Title 21 of the U.S. Code, Section 360j(o)(1)(E), excludes certain medical software from being treated as a “device” that the FDA can regulate.
Basic exclusion from the medical device definition. Subject to the two specific exceptions noted below, software is not an FDA-regulable medical device if it is intended:
“for the purpose of –
(i) displaying, analyzing, or printing medical information about a patient or other medical information (such as peer-reviewed clinical studies and clinical practice guidelines);
(ii) supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
(iii) enabling such health care professional to independently review the basis for such recommendations that such software presents so that it is not the intent that such health care professional rely primarily on any of such recommendations to make a clinical diagnosis or treatment decision regarding an individual patient.” (21 U.S.C. § 360j(o)(1)(E)(i)-(iii)).
Exceptions. Two exceptions allow software that meets the above description to nevertheless be regulated by the FDA as a medical device. These exceptions are:
1. Jurisdictional saving clause. The opening passage of Section 360j(o)(1)(E) contains a “saving” clause preserving the FDA’s authority to regulate certain software that meets the above three conditions. This clause states that the basic exclusion just quoted applies to a software tool “unless the function is intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system” [emphasis added]. Put more simply, a tool is not excluded from being an FDA-regulable device if its function is to acquire, process, or analyze images or signals of such types.
2. Procedure for overriding the basic exclusion. The Secretary of HHS can restore the FDA’s power to regulate a CDS or DDS tool that otherwise would fit into the basic exclusion, by making a finding that use of the tool “would be reasonably likely to have serious adverse health consequences” and issuing a final order after notice and public comment (21 U.S.C. § 360j(o)(3)). Through this procedure, the Secretary has the power to determine that the tool is a medical device and therefore subject to FDA oversight.
SOURCE: 114th Congress, 2016a.
Looking at the basic exclusion in Box 1, the first two conditions, (i) and (ii), describe CDS and DDS software without using those names. The third condition, shown at (iii), bears on the concept this paper refers to as explainability, again without using that term. When all three conditions are met, this passage of the Cures Act creates a potential exclusion from FDA regulation for CDS/DDS software that meets the criterion for explainability set out in condition (iii) of Box 1. This exclusion, however, is subject to the two exceptions shown at the bottom of Box 1.
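To make the statutory logic easier to follow, the sketch below restates the Box 1 conditions and exceptions as a simple decision function. It is an illustrative simplification of the reading described here, not a statement of how the FDA will actually apply these criteria, and it is not legal advice.

```python
# Illustrative sketch only: a plain-language reading of the Box 1 logic.
def fda_device_oversight_likely(
    displays_or_analyzes_medical_info: bool,         # condition (i)
    supports_recommendations_to_professional: bool,   # condition (ii)
    basis_independently_reviewable: bool,             # condition (iii), "explainability"
    processes_ivd_image_or_signal: bool,               # saving clause exception
    hhs_secretary_override: bool = False,              # Section 360j(o)(3) exception
) -> bool:
    """Return True if the software likely remains an FDA-regulable device under this reading."""
    excluded = (
        displays_or_analyzes_medical_info
        and supports_recommendations_to_professional
        and basis_independently_reviewable
        and not processes_ivd_image_or_signal
        and not hhs_secretary_override
    )
    return not excluded

# Example: an explainable CDS tool that does not process IVD or imaging signals
# would likely fall outside FDA device regulation under this reading.
print(fda_device_oversight_likely(True, True, True, False))  # -> False
```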
The first exception–the saving clause–confirms the FDA’s power to regulate many types of software whose function supports diagnostic testing, such as software used in the bioinformatics pipeline for genomic testing. Before the Cures Act, FDA’s medical device authority included oversight covering both in vitro diagnostic devices (which support clinical laboratory testing of biospecimens) and in vivo devices (such as X-rays and MRI machines that produce images of tissues within a patient’s body). FDA has long regulated software embedded in diagnostic hardware devices, for example, software internal to sequencing analyzers and MRI machines. The saving clause confirms FDA’s power to regulate “stand-alone” diagnostic software that is not necessarily part of a hardware device but processes signals from in vitro and in vivo testing devices.
This power is crucial in light of the modern trend for many clinical laboratories to use third-party software service providers and vendors for data analysis supporting complex diagnostic tests, such as genomic tests (Curnutte et al., 2014). In vitro diagnostic testing by clinical laboratories is subject to the Clinical Laboratory Improvement Amendments of 1988 (CLIA) regulations (100th Congress, 1988). The CLIA framework focuses on the quality of clinical laboratory services but does not provide an external, data-driven regulatory review of the safety and effectiveness of tests used in providing those services, nor does it evaluate the software laboratories use when analyzing and interpreting test results. FDA’s authority to regulate stand-alone diagnostic software positions FDA to oversee clinical laboratory software, even in situations where FDA exercises discretion and declines to regulate an underlying laboratory-developed test (Evans et al., 2020). In a 2019 draft guidance document, circulated for comment purposes only, the FDA noted that “bioinformatics products used to process high volume ‘omics’ data (e.g., genomics, proteomics, metabolomics) process a signal from an in vitro diagnostic (IVD) and are generally not considered to be CDS” tools (FDA, 2019b). The saving clause clarifies that FDA can regulate such software, even in situations where it might technically be considered CDS software falling within the basic exclusion in Box 1 (114th Congress, 2016a [21 U.S.C. § 360j(o)(1)(E)]).
Much of the AI-DDS software providers use in clinical health care settings would not fall under the saving clause (see Box 1), which seems directed at software processing signals from diagnostic devices as part of the workflow for producing finished diagnostic test reports and medical images. However, there is some ambiguity. An example would be an AI-DDS tool that analyzes several of a patient’s gene variants along with the patient’s reported symptoms, clinical observations, treatment history, and environmental exposures to recommend a diagnosis to a clinician. It is unclear if the fact that the tool processes gene variant data means that it is “processing a signal from an IVD device” and thus FDA-regulated, or if the saving clause only applies when the signal is directly fed to the software as part of the clinical laboratory workflow. Without knowing how the FDA interprets the breadth of the saving clause, it is hard for clinicians to understand what is and is not regulated.
Assuming the saving clause does not apply, AI/DDS tools are generally excluded from FDA regulation if they meet all three of the conditions listed at (i)-(iii) in Box 1. The first two conditions are fairly straightforward, but it is still not clear how the FDA plans to assess whether the third condition, bearing on the concept of explainability, has been met. How, precisely, the FDA will decide whether an AI/DDS tool is “intended” to be “for the purpose of” “enabling [a] health care professional to independently review the basis for [its] recommendations” (see Box 1) is unknown. The FDA’s regulation on the “Meaning of intended uses” offers insight into the range of direct and circumstantial evidence the agency can consider when assessing objective intent (FDA, 2017b [21 C.F.R. § 801.4]). Yet how the agency will apply those principles in the specific context of AI/ML software tools is not clear.
Without greater clarity on these matters, clinicians lack a sense of whether a given type of AI-DDS tool usually is, or usually is not, subject to FDA oversight or what FDA’s oversight process entails. Almost six years after the Cures Act, FDA’s approach for regulating AI/ML CDS/DDS software remains a work in progress, leaving uncertainties that can erode clinicians’ confidence when using these tools. Through two rounds of draft guidance (in 2017 and 2019), the FDA solicited public comments to clarify its approach to regulating CDS/DDS tools. A final guidance on Clinical Decision Support Software appears on the list of “prioritized device guidance documents the FDA intends to publish during FY2022” (October 1, 2021 – September 30, 2022) (FDA, 2021c). As this paper went to press in September 2022, the final guidance was not yet available, but the authors hope it may clarify these and other unresolved questions around the regulation of CDS/DDS tools.
Unfortunately, guidance documents–whether draft or final–have no binding legal effect and do not establish clear, enforceable legal rights and duties on which software developers, clinicians, state regulators, and members of the public can rely. There is fairly wide scholarly agreement that the use of guidance as a regulatory tool can be appropriate for emerging technologies where knowledge is rapidly evolving and flexibility is warranted, but there can be long-term costs when agencies choose to rely on guidance and voluntary compliance instead of promulgating enforceable regulations (Wu, 2011; Cortez, 2014). The FDA’s Digital Health Innovation Action Plan (FDA, 2017a; Gottlieb, 2017) and its Digital Health Software Precertification (Pre-Cert) Program (FDA, 2021b) both acknowledge that the agency’s traditional premarket review process for moderate and higher-risk devices is not well suited for “the faster iterative design, development, and type of validation used for software-based medical technologies” (FDA, 2017a). The FDA’s 2021 AI/ML Action Plan envisions ongoing post-marketing monitoring and updating of software tools after they enter clinical use (FDA, 2021a). This may leave health care providers in the uncomfortable position of using tools that may be modified even after the FDA clears them for clinical use, and of potentially facing liability if patient injuries occur. It also implies that vendors and developers of AI/ML tools will need access to real-world clinical health care data to support ongoing monitoring of how the tools perform in actual clinical use.
Future reliance on post-marketing monitoring offers an example of why regulating via non-binding guidance documents can create long-term problems. The HIPAA Privacy Rule contains an exception that lets HIPAA-covered health care providers, such as hospitals, share data with device manufacturers to help them meet their FDA regulatory compliance obligations (for example, to help manufacturers comply with the FDA’s adverse-event reporting requirements) (HHS, 2003). Unfortunately, when FDA regulates manufacturers by means of guidance documents and other non-mandatory programs, this important HIPAA pathway for accessing data may be unavailable, because guidance documents create no enforceable legal obligations. To maximize software developers’ access to real-world evidence for post-marketing monitoring and updating of AI/DDS tools, the FDA will ultimately need to set binding regulatory requirements (for example, for developers to monitor for racial, gender, or other biases in the post market period). Related concerns surround the future development of state law, including both state regulations and tort law. Safe clinical use of AI/DDS tools will ultimately require state-level medical practice regulations and common law addressing issues such as appropriate staffing for, and use of, AI/DDS tools in clinical settings. To foster optimal development of state law, it is helpful to have federal regulations providing a stable demarcation between the FDA’s role versus that of the states. Federal guidance documents, due to their nonbinding nature and ease of revision, may not meet this need. The FDA’s current heavy reliance on guidance documents and voluntary measures may be appropriate in the early years as AI/DDS tools emerge as a new technology, but the agency should stay mindful of the need to promulgate regulations whenever appropriate and feasible.
Apart from the regulatory framework, another mechanism to instill trust is increased and consistent collaboration among developers, ethicists, and clinical diagnosticians during various phases of the AI lifecycle. Early innovation in AI pre-market design, testing, clinical application, and post-market oversight resulted in fragmented and siloed professional stakeholder groups with different goals, expertise, ethical frameworks, and paradigms of professionalism and professional accountability. While a great deal of health care professional ethical attention, input, and engagement has been integrated into AI use and application in the post-market phase, an important gap remains in fully integrating professional end users as partners in the AI tool development process, a partnership needed to build trustworthy AI tools.
Numerous AI and digital health ethical frameworks have been published as part of the concerted effort to build trustworthy human-AI partnerships. For example, the European Commission’s Ethics Guidelines for Trustworthy AI is a foundational work on the topic, with seven key requirements:
1. Human agency and oversight,
2. Technical robustness and safety,
3. Privacy and data governance,
4. Transparency,
5. Diversity, non-discrimination and fairness,
6. Environmental and societal well-being, and
7. Accountability (European Commission, 2019).
Additionally, over 40 different U.S. technology companies and venture capital firms have signed on to a Responsible Innovations Charter, with similar key principles:
1. Innovating intentionally,
2. Operating with accountability and transparency,
3. Advancing inclusive prosperity,
4. Building sustainably,
5. Respecting people,
6. Championing diversity, and
7. Promoting healthy societies (Responsible Innovation Labs, 2022).
The American Medical Association has developed policies and frameworks for practicing diagnosticians to govern and assess AI integration into clinical practice (Crigger et al., 2022). Essentially, the structured assessment aids the clinician in ascertaining: whether a tool is beneficial to patient outcomes; whether a tool appears to work; and whether a tool appears to work for their patients. These guidelines, along with several global government-produced assessments for organizational leaders, provide a systematic and structured assessment for providers to select and utilize trustworthy and beneficial AI for their practice.
Ensuring and Promoting Health Equity in the Deployment of AI-Assisted Diagnostic Tools
In addition to facilitating uptake and overcoming the barriers to the adoption of AI-DDS tools elucidated in this review, being cognizant of the implications for equity throughout the life cycle of these tools and making a consistent effort to address past, current, and potential equity issues are critical to preventing widening disparities in health care delivery. While there is excitement about, and demonstrated benefit from, bringing AI-DDS tools into clinical practice, poor data quality, prevalent biases in health care, and a lack of structural supports available to end users jeopardize progress toward achieving health equity and fuel ongoing uncertainties and hesitancies about adopting these tools.
AI/ML algorithms are often developed using limited data samples that may not represent the people they are meant to impact (Zou and Schiebinger, 2021). Furthermore, social determinants of health data are generally not well captured in the data sets used to train these algorithms. Data elements derived from diverse sources that could help provide a more holistic view of the patient may not be available in certain care settings due to the limitations of EHR systems, data privacy concerns, a lack of data standardization, and financial constraints on the part of health systems to obtain large data sets (Zusterzeel et al., 2022; Alami et al., 2020). Inaccurate representation in training, testing, and validation data sets also results in the development of flawed models. Models not trained on data representative of the context in which they are intended to be used may also have difficulty performing when population demographics shift (Singh et al., 2020).
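As one concrete illustration of the representativeness problem, the sketch below (with entirely hypothetical percentages) compares the demographic mix of a training cohort against the population a health system intends to serve. Even this coarse check can surface the kinds of gaps described above, though real audits would use far richer data and methods.

```python
# Illustrative sketch (hypothetical numbers): a coarse training-data representativeness check.
training_mix = {"White": 0.78, "Black": 0.08, "Hispanic": 0.06, "Asian": 0.05, "Other": 0.03}
service_population = {"White": 0.52, "Black": 0.22, "Hispanic": 0.18, "Asian": 0.05, "Other": 0.03}

for group in service_population:
    gap = training_mix.get(group, 0.0) - service_population[group]
    flag = "  <-- under-represented in training data" if gap < -0.05 else ""
    print(f"{group:>8}: train {training_mix.get(group, 0.0):.0%} vs served {service_population[group]:.0%}{flag}")
```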
AI tools rely on human interaction from inception to deployment, and AI algorithms can replicate explicit and implicit biases in human decision-making in health care settings (Char et al., 2018). Inherent discrimination occurring within care delivery can be challenging to predict and uncover, and biases can easily transfer into the design and use of AI algorithms (Leslie et al., 2021; Char et al., 2018). For example, the biases of developers, researchers, and designers can manifest early in the development phase if they choose target variables and proxies for those variables without considering upstream social determinants of health and related confounders (Leslie et al., 2021). Along with the data collection issues summarized above, data extraction and measurement errors due to biases built into physical devices can negatively influence care decisions and perpetuate inequities (Leslie et al., 2021; Zou and Schiebinger, 2021). The pulse oximeter, for example, uses red and infrared light signaling to read the oxygen saturation in a patient’s blood; because these signals interact with skin pigmentation, the device shows varying results based on skin color (Zou and Schiebinger, 2021). Previous studies have shown that patients with darker skin received inaccurate oxygen readings compared to White patients (Leslie et al., 2021; Zou and Schiebinger, 2021). These data are fed into algorithms to assist with decision-making, and clinicians may unintentionally accept results and act on flawed recommendations, affecting the ability of patients to acquire needed care, such as supplementary oxygen (Zou and Schiebinger, 2021; Rajkomar et al., 2018).
In addition to the adverse effects of incorrect data usage and biases, the absence of infrastructure to support equitable AI in developing and deploying AI-DDS tools will ultimately widen disparities. The digital divide perpetuates inequities through many intertwined social factors, including a lack of broadband internet access across regions and an inability to purchase up-to-date and well-equipped devices (Ramsetty and Adams, 2020). For example, AI tools extracting data from EHR systems may be more prevalent in larger health care organizations in well-resourced cities than in small rural hospitals or physician practices, which have fewer resources and less expertise readily available (Goldfarb and Teodoridis, 2022; Reisman, 2017). The financial costs associated with EHR implementation continue to be a primary barrier to the adoption of AI-DDS tools (Goldfarb and Teodoridis, 2022). AI algorithms applied to clinical settings that disproportionately serve populations that experience a form of privilege (i.e., wealthy populations) marginalize groups that do not actively seek care in the same settings (DeCamp and Lindvall, 2020; Rajkomar et al., 2018). Moreover, data collection issues persist even in settings with EHR systems because of the lack of interoperability among the systems used by providers serving different hospitals and health care facilities, further contributing to data silos and insufficiently informed AI tools (Goldfarb and Teodoridis, 2022).
Path Forward - Policy Implications and Action Priorities
Fostering provider adoption of novel AI-DDS systems will require broad infrastructural support, beginning with robust tool evaluations by health systems and payers, clear commitments from health systems and developers to regular monitoring and updating of algorithms, and training care teams to effectively interpret and implement changes based on AI-DDS outputs. Developers, payers, health systems, and providers are becoming increasingly aware of potential biases in AI algorithms and their deployment. Data representativeness and robust model training must be a top priority in algorithm development to increase trust and adoption among all relevant stakeholders. Data integrity and reliability are at the very core of sound algorithm development, yielding better prospects for provider adoption of those algorithms. Therefore, collaborative efforts aimed at curating rich and multimodal patient data—including crucial social determinants information—will be paramount. Such efforts need to be coupled with robust and consistent standards for data access, sharing, harmonization, and interoperability, while simultaneously prioritizing data privacy and security to ultimately drive excellent model development. In a similar vein, boosting provider comfort and adoption may also depend on model transparency. Providing health care teams with key parameters driving an AI-DDS output that can serve as modifiable targets for patient outcome improvement may facilitate greater adoption. To conclude, this paper presents key action priorities in each of the four domains related to provider adoption of AI-DDS tools outlined in this paper:
Domain 1: Reason to Use
Establishing clear impetus to incorporate novel AI-DDS tools into health systems is contingent on a given tool’s clinical efficacy, specifically as it relates to a health system’s target population, and affordability, both to the health system and patient. Developers, payers, health systems, and providers are becoming increasingly aware of potential biases in AI algorithms and their deployment. Data representativeness and robust model training and testing must be the top priority in algorithm development in efforts to increase trust and adoption among all relevant stakeholders.
Collaborative efforts among multiple health care systems aimed at curating rich and multimodal patient data—including essential social determinants information—will be paramount. Such efforts need to be coupled with robust and consistent standards for data access, sharing, and interoperability, while simultaneously prioritizing data privacy and security, to ultimately drive excellent model development.
In addition to ensuring robust clinical utility, algorithm developers must design AI-DDS tools to integrate seamlessly into existing care team infrastructures, ensuring that their product value is not diminished by logistical inefficiency and cognitive burden.
Domain 2: Means to Use
Policy makers and payers should consider using reimbursement to create a sustainable environment for the adoption and continued use of AI-DDS tools and to further promote the capital infrastructure investments by health systems needed to facilitate this goal.
If consensus-based standards do not emerge, ensuring interoperability could require a “top-down” regulatory approach. For instance, the United States Office of the National Coordinator for Health Information Technology (ONC) could develop health IT certification criteria that assess the ability of EHR systems to support data lifecycles. However, given the nascent understanding of ideal workflows and life cycles, standardization at this time is likely premature.
Policy makers and payers should consider using incentives to encourage the use of evidence-based AI-DDS in clinical practice. As per prior payment models, if adoption is sufficient and the evidence of improved processes and outcomes becomes established, AI-DDS tools may become the standard of care in specific clinical scenarios.
Domain 3: Method to Use
Public and private research funders should increase focus and funding opportunities to advance the still nascent implementation science of AI-DDS, for example, through RFPs that focus on integrating AI-DDS into clinical workflows and health IT systems and its impact on the behaviors of clinical teams.
Institutions of medical education and accreditation organizations should review emerging competencies for the use of AI-DDS and consider how to integrate these into the current training and certification ecosystem to adapt to the rapidly changing needs of the clinical front line.
Professional societies, trade associations, and health care quality organizations should identify diagnostic centers of excellence that specialize in AI-DDS to facilitate the surfacing and effective diffusion of best practices through interdisciplinary learning networks and capacity-building programs.
Software and algorithm designers of point-of-care AI-DDS for providers and patients at home should leverage the public SMART on FHIR and SMART/HL7 Bulk FHIR APIs regulated under the ONC 21st Century Cures Act Rule, so that algorithms can be widely and uniformly integrated into care across EHR vendor products and other IT tools (an illustrative sketch of this integration pattern follows these priorities).
Regulators should monitor, for example through the 21st Century Cures Act EHR Reporting Program, EHR vendor implementation of public FHIR APIs to ensure their turnkey use by apps made accessible at the point of care.
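As referenced in the SMART on FHIR priority above, the sketch below shows how an AI-DDS application might retrieve algorithm inputs through a standards-based FHIR API call. The endpoint URL, patient identifier, and access token are placeholders, and the SMART App Launch authorization flow that would normally precede this request is omitted for brevity.

```python
# Illustrative sketch (hypothetical endpoint and patient ID): retrieving recent
# laboratory observations for algorithm input via a standard FHIR REST search.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # assumed certified EHR FHIR endpoint
PATIENT_ID = "12345"                          # hypothetical patient identifier

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "laboratory", "_sort": "-date", "_count": 50},
    headers={"Accept": "application/fhir+json", "Authorization": "Bearer <token>"},
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR Bundle of Observation resources
labs = [entry["resource"] for entry in bundle.get("entry", [])]
print(f"Retrieved {len(labs)} laboratory observations for algorithm input")
```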
Domain 4: Desire to Use
Professional societies, trade associations, and health care quality organizations should center AI-related efforts to promote clinician well-being through human-centered design in AI technology, aligned with the Quintuple Aim’s goal of improving the work-life balance of health care professionals. The FDA should offer guidance and/or other communications, specifically tailored to health care providers tasked with using AI-DDS tools, to aid their understanding of which types of software are – and are not – likely to receive FDA oversight under 21 U.S.C. § 360j(o)(1)(E). Specifically, it will be imperative to clarify how broadly the agency construes the saving clause for “software that processes signals…”, and the agency’s approach for assessing whether software is “intended … for the purpose … of enabling” a health care professional to independently review the basis of its recommendations. Encouraging clinicians to trust these tools may require helping them develop an intuitive grasp of the FDA’s role and its jurisdictional limits.
The FDA should continue to explore the special considerations affecting design, validation review, market authorization, and post-marketing oversight for AI-DDS tools, offering timely guidance while recognizing that, over the long term, notice-and-comment rulemaking may offer advantages over the continued use of guidance documents – for example, by enhancing developers’ access to HIPAA-protected real-world data for use in regulatory compliance activities, and by providing the clarity and stability needed to foster development of state regulations and common law addressing clinical use of AI-DDS systems.
Professional medical, nursing, and other health care societies should develop clinical practice guidelines for AI system applications.
The FDA, CDC, and ONC should ensure transparency and publicly accessible reporting of flaws, malfunctions, safety incidents, and patient harm related to AI-DDS tools.
Software developers should involve human clinical diagnosticians in all phases of software development, design, validation, implementation, and iterative improvement.
AI-DDS systems are becoming increasingly prevalent, sophisticated, and reliable. Across medical specialties, these tools demonstrate potential to make the clinical diagnostic process more efficient and accurate, ultimately improving patient outcomes. Focused efforts to create equitable and robust AI-DDS algorithms, streamline integration of new AI-DDS tools into clinical workflows, and train health care providers to effectively use such tools—coupled with strong regulatory oversight and financial incentives—will optimize the likelihood that innovative, clinically impactful AI-DDS systems are adopted and used responsibly by health care providers to the ultimate benefit of their patients.
Acknowledgments
This paper benefitted from the insights of Matthew Diamond, U.S. Food and Drug Administration; Maryellen Giger, University of Chicago; Brian Gurbaxani, Centers for Disease Control and Prevention; and Christina Silcox, Duke University.
Sections of the paper were developed based on the thoughtful input of Clifford Goodman, PhD, Lewin Group; Vivian Lee, MD, PhD, MBA, Verily; and Suzanne Tamang, PhD, Stanford University and Veterans Affairs.
Jessica Castner acknowledges the support of the American Academy of Nursing, the American Nurses Association, and the American Nurses Foundation.
Funding Statement
The views expressed in this paper are those of the authors and not necessarily of the authors’ organizations, the National Academy of Medicine (NAM), or the National Academies of Sciences, Engineering, and Medicine (the National Academies). The paper is intended to help inform and stimulate discussion. It is not a report of the NAM or the National Academies. Copyright by the National Academy of Sciences.
Footnotes
Conflict-of-Interest Disclosures: Jessica Castner discloses receiving grants and fees from the National Institutes of Health, fees from the Emergency Nurses Association, and serving as co-chair of the American Thoracic Society’s Health Policy Committee on Terrorism and Inhalation Disasters section. Barbara Evans discloses receiving grants from the National Institutes of Health. Steven Lin discloses serving as VP of Health Sciences for Codex Health, where he is a paid consultant; and receiving grants administered through Stanford University from Amazon, American Academy of Family Physicians, American Board of Family Medicine, Center for Professionalism and Value in Health Care, DeepScribe, Google Health, Omada Health, Predicta Med, Quadrant Technologies, Soap Health, Society of Teachers of Family Medicine, UCSF, and Verily. Kenneth Mandl discloses that his laboratory receives sponsored research funding from Quest Diagnostics; and that Boston Children’s Hospital receives corporate philanthropic support for his laboratory from SMART Advisory Committee members, which include the American Medical Association, BMJ Group, Eli Lilly and Company, Google Cloud, Hospital Corporation of America, Microsoft, Optum, Cambia Health Solutions, Quest Diagnostics, and Humana. Mark Sendak discloses that he is co-inventor of technology licensed from Duke University to Cohere Med, Inc and Clinetic, Inc.; and that he holds equity in Clinetic, Inc. Carmel Shachar discloses that she is a member of Advarra’s Institutional Research Board.
Contributor Information
Julia Adler-Milstein, University of California-San Francisco.
Nakul Aggarwal, University of Wisconsin-Madison.
Mahnoor Ahmed, National Academy of Medicine.
Jessica Castner, Castner Incorporated.
Barbara J. Evans, University of Florida.
Andrew A. Gonzalez, Regenstrief Institute.
Cornelius A. James, University of Michigan.
Steven Lin, Stanford University.
Kenneth D. Mandl, Boston Children’s Hospital.
Michael E. Matheny, Vanderbilt University Medical Center and Veterans Affairs.
Mark P. Sendak, Duke University.
Carmel Shachar, Harvard University.
Asia Williams, National Academy of Medicine.
References
- 1. 21 U.S. Code § 360j. General provisions respecting control of devices intended for human use. 2017. [July 26, 2022]. https://www.law.cornell.edu/uscode/text/21/360j
- 2. 100th Congress. Public Law 100-578, 102 STAT. 2903. 1988. [July 27, 2022]. https://www.govinfo.gov/content/pkg/STATUTE-102/pdf/STATUTE-102-Pg2903.pdf
- 3. 114th Congress. H.R. 34 – 21st Century Cures Act. 2016a. [July 26, 2022]. https://www.congress.gov/bill/114th-congress/house-bill/34/text
- 4. 114th Congress. S.524 – Comprehensive Addiction and Recovery Act of 2016. 2016b. [July 27, 2022]. https://www.congress.gov/bill/114th-congress/senate-bill/524/text
- 5. Abbas H, Garberson F, Liu-Mayo S, Glover E, Wall DP. Multi-modular AI Approach to Streamline Autism Diagnosis in Young Children. Scientific Reports. 2020;10(5014). doi: 10.1038/s41598-020-61213-w.
- 6. Abdulkareem M, Petersen SE. The Promise of AI in Detection, Diagnosis, and Epidemiology for Combating COVID-19: Beyond the Hype. Frontiers in Artificial Intelligence. 2021.
- 7. Adjekum A, Blasimme A, Vayena E. Elements of Trust in Digital Health Systems: Scoping Review. Journal of Medical Internet Research. 2018;20(12):e11254. doi: 10.2196/11254.
- 8. Aggarwal N, Ahmed M, Basu S, Curtin JJ, Evans BJ, Matheny ME, Nundy S, Sendak MP, Shachar C, Shah RU, Thadaney-Israni S. Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. NAM Perspectives. Discussion Paper. Washington, DC: National Academy of Medicine; 2020.
- 9. Ajzen I. The theory of planned behavior. Organizational Behavior and Human Decision Processes. 1991;50(2):179–211. doi: 10.1016/0749-5978(91)90020-T.
- 10. Ajzen I. From Intentions to Actions: A Theory of Planned Behavior. In: Kuhl J, Beckmann J, editors. Action Control. Berlin: Springer; 1985. pp. 11–39.
- 11. Alami H, Lehoux P, Auclair Y, de Guise M, Gagnon MP, Shaw J, Roy D, Fleet R, Ag Ahmed MA, Fortin JP. Artificial Intelligence and Health Technology Assessment: Anticipating a New Level of Complexity. Journal of Medical Internet Research. 2020;22(7). doi: 10.2196/17707.
- 12. Anumana. About Us. 2022. [May 12, 2022]. https://www.anumana.ai/aboutus
- 13. Ardila D, Kiraly AP, Bharadwaj S, Choi B, Reicher JJ, Peng L, Tse D, Etemadi M, Ye W, Corrado G, Naidich DP, Shetty S. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine. 2019;(6):954–961. doi: 10.1038/s41591-019-0447-x.
- 14. Barker W, Johnson C. The Ecosystem of Apps and Software Integrated with Certified Health Information Technology. Journal of the American Medical Informatics Association. 2021;28(11):2379–2384. doi: 10.1093/jamia/ocab171.
- 15. Benjamins R. A choices framework for the responsible use of AI. AI and Ethics. 2021;1(1):49–53. doi: 10.1007/s43681-020-00012-5.
- 16. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digital Medicine. 2020;3(118). doi: 10.1038/s41746-020-00324-0.
- 17. Berger D. A brief history of medical diagnosis and the birth of the clinical laboratory. Part 1—Ancient times through the 19th century. MLO: Medical Laboratory Observer. 1999;31(7):28–30, 32, 34–40. [July 26, 2022]. https://pubmed.ncbi.nlm.nih.gov/10539661/
- 18. Bitterman DS, Aerts HJWL, Mak RH. Approaching autonomy in medical artificial intelligence. The Lancet Digital Health. 2020;2(9):e447–e449. doi: 10.1016/S2589-7500(20)30187-4.
- 19. Brajer N, Cozzi B, Gao M, Nichols M, Revoir M, Balu S, Futoma J, Bae J, Setji N, Hernandez A, Sendak M. Prospective and External Evaluation of a Machine Learning Model to Predict In-Hospital Mortality of Adults at Time of Admission. JAMA Network Open. 2020;3(2):e1920733. doi: 10.1001/jamanetworkopen.2019.20733.
- 20. Brown SH, Miller RA. Legal and regulatory issues related to the use of clinical software in health care delivery. In: Greenes RA, editor. Clinical Decision Support. 2nd ed. New York: Elsevier; 2014.
- 21. CaseText. Mracek v. Bryn Mawr Hospital. 2009a. 610 F Supp 2d 401 (ED Pa 2009). [July 26, 2022]. https://casetext.com/case/mracek-v-bryn-mawr-hosp-2
- 22. CaseText. Singh v. Edwards Lifesciences. 2009b. [July 27, 2022]. https://casetext.com/case/singh-v-edwards-lifesciences
- 23. CaseText. Riegel v. Medtronic, Inc. 2008. [September 16, 2022]. https://casetext.com/case/riegel-v-medtronic-inc-3
- 24. Char D, Shah N, Magnus D. Implementing machine learning in health care – addressing ethical challenges. New England Journal of Medicine. 2018;378(11):981–983. doi: 10.1056/NEJMp1714229.
- 25. Chen MM, Golding LP, Nicola GN. Who Will Pay for AI? Radiology: Artificial Intelligence. 2021;3(3). doi: 10.1148/ryai.2021210030.
- 26. Clemens J, Gottlieb JD. In the Shadow of a Giant: Medicare’s Influence on Private Physician Payments. Journal of Political Economy. 2017;125(1):1–39. doi: 10.1086/689772. https://www.journals.uchicago.edu/doi/10.1086/689772
- 27. Cortez N. Regulating Disruptive Innovation. Berkeley Technology Law Journal. 2014;29:175–218. doi: 10.2139/ssrn.2436065.
- 28. Crigger E, Reinbold K, Hanson C, Kao A, Blake K, Irons M. Trustworthy Augmented Intelligence in Health Care. Journal of Medical Systems. 2022;46(12). doi: 10.1007/s10916-021-01790-z.
- 29. Curnutte MA, Frumovitz KL, Bollinger JM, McGuire AL, Kaufman DJ. Development of the clinical next-generation sequencing industry in a shifting policy climate. Nature Biotechnology. 2014;32(10):980–982. doi: 10.1038/nbt.3030.
- 30. DeCamp M, Lindvall C. Latent bias and the implementation of artificial intelligence in medicine. Journal of the American Medical Informatics Association. 2020;27(12):2020–2023. doi: 10.1093/jamia/ocaa094.
- 31. Deverka PA, Dreyfus JC. Clinical Integration of Next Generation Sequencing: Coverage and Reimbursement Challenges. Journal of Law, Medicine & Ethics. 2014;42:22–41. doi: 10.1111/jlme.12160.
- 32. Digital Diagnostics. IDx-DR. 2022. [July 26, 2022]. https://www.digitaldiagnostics.com/products/eyedisease/idx-dr/
- 33. Digital Diagnostics. Autonomous AI diagnostics launch in retail health clinics. November 19, 2019. [May 11, 2022]. https://www.digitaldiagnostics.com/newsroom/autonomous-ai-diagnostics-launch-in-retail-healthclinics/
- 34. Duffy G, Cheng PP, Yuan N, He B, Kwan AC, Shun-Shin MJ, Alexander KM, Ebinger J, Lundgren MP, Rader F, Liang DH, Schnittger I, Ashley EA, You JY, Patel J, Witteles R, Cheng S, Ouyang D. High-Throughput Precision Phenotyping of Left Ventricular Hypertrophy with Cardiovascular Deep Learning. Journal of the American Medical Association, Cardiology. 2022;7(4):386–395. doi: 10.1001/jamacardio.2021.6059.
- 35. Escobar GJ, Liu VX, Schuler A, Lawson B, Greene JD, Kipnis P. Automated Identification of Adults at Risk for In-Hospital Clinical Deterioration. New England Journal of Medicine. 2020;383(20):1951–1960. doi: 10.1056/NEJMsa2001090.
- 36. European Commission, High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. 2019. [March 10, 2022]. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
- 37. Evans B, Ossorio P. The Challenge of Regulating Clinical Decision Support Software After 21st Century Cures. American Journal of Law & Medicine. 2018;44(2-3):237–251. doi: 10.1177/0098858818789418.
- 38. Evans B, Pasquale F. Product Liability Suits for AI/ML Software. In: Cohen IG, Minssen T, Price WN II, Robinson C, Shachar C, editors. The Future of Medical Device Regulation: Innovation and Protection. London: Cambridge University Press; 2022. pp. 22–46.
- 39.Evans BJ, Javitt G, Hall R, Robertson M, Ossorio P, Wolf SM, Morgan T, Clayton EW. How Can Law and Policy Advance Genomic Analysis and Interpretation for Clinical Care? Journal of Law, Medicine, and Ethics. 2020;48(Supp 1):44–68. doi: 10.1177/1073110520916995. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Fenton JJ, Taplin SH, Carney PA, Abraham L, Sickles EA, D’Orsi C, Berns EA, Cutter G, Hendrick E, Barlow WE, Elmore JG. Influence of Computer-Aided Detection on Performance of Screening Mammography. New England Journal of Medicine. 2007;365:1399–1409. doi: 10.1056/NEJMoa066099. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41. U.S. Food and Drug Administration (FDA). Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. 2021a. [May 11, 2022]. https://www.fda.gov/media/145022/download
- 42. FDA. Digital Health Software Precertification (Pre-Cert) Program. 2021b. [May 11, 2022]. https://www.fda.gov/medical-devices/digital-health-center-excellence/digital-health-software-precertification-precert-program
- 43. FDA. CDRH Proposed Guidances for Fiscal Year 2022 (FY2022). 2021c. [September 14, 2022]. https://www.fda.gov/medical-devices/guidance-documents-medical-devices-and-radiation-emitting-products/cdrh-proposed-guidances-fiscal-year-2022-fy2022
- 44. FDA. Clinical Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff. 2019a. [May 11, 2022]. p. 28. https://www.fda.gov/media/109618/download
- 45. FDA. Clinical Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff. 2019b. [May 11, 2022]. https://www.fda.gov/media/109618/download
- 46. FDA. Digital Health Innovation Action Plan. 2017a. [May 11, 2022]. https://www.fda.gov/downloads/MedicalDevices/DigitalHealth/UCM568735.pdf
- 47. FDA. CFR - Code of Federal Regulations Title 21. 2017b. [September 16, 2022]. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=801.4
- 48. FDA. Step 3: Pathway to Approval. 2016. [May 15, 2022]. https://www.fda.gov/patients/device-development-process/step-3-pathway-approval
- 49. GlobeNewswire. Anaconda Releases 2020 State of Data Science Survey Results. 2020. [May 11, 2022]. https://www.globenewswire.com/news-release/2020/06/30/2055578/0/en/Anaconda-Releases-2020-State-of-Data-Science-Survey-Results.html
- 50. Goh KH, Wang L, Yeow AYK, Poh H, Li K, Yeow JJL, Tan GYH. Artificial intelligence in sepsis early prediction and diagnosis using unstructured data in healthcare. Nature Communications. 2021;12. doi: 10.1038/s41467-021-20910-4.
- 51. Goldfarb A, Teodoridis F. Why is AI adoption in health care lagging? Brookings. 2022. Mar 9. [May 17, 2022]. https://www.brookings.edu/research/why-is-ai-adoption-in-health-care-lagging/
- 52. Goldhahn J, Rampton V, Spinas GAA. Could artificial intelligence make doctors obsolete? BMJ. 2018;363:k4563. doi: 10.1136/bmj.k4563.
- 53. Gottlieb S. FDA Announces New Steps to Empower Consumers and Advance Digital Healthcare. 2019. [May 11, 2022]. https://www.fda.gov/news-events/fda-voices/fda-announces-new-steps-empower-consumers-and-advance-digital-healthcare
- 54. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nature Medicine. 2019;25:30–36. doi: 10.1038/s41591-018-0307-0.
- 55. U.S. Department of Health and Human Services (HHS). 21st Century Cures Act: Interoperability, Information Blocking, and the ONC Health IT Certification Program. 2020. [July 26, 2022]. https://www.federalregister.gov/d/2020-07419
- 56. HHS. Disclosures for Public Health Activities. 2003. [September 16, 2022]. https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/disclosures-public-health-activities/index.html
- 57. Heartflow. Heartflow Secures De Novo Clearance from the U.S. Food and Drug Administration for Breakthrough FFRCT Technology. 2014. [March 15, 2022]. https://www.heartflow.com/newsroom/heartflow-secures-de-novo-clearance/
- 58. Hinton G. On Radiology. 2016. [May 15, 2022]. https://www.youtube.com/watch?v=2HMPRXstSvQ
- 59. Kaufman Hall & Associates. National Hospital Flash Report. 2022. [May 25, 2022]. https://www.kaufmanhall.com/sites/default/files/2022-03/National-Hospital-Flash-Report-March-2022.pdf
- 60. Kawamoto K, Kukhareva PV, Weir C, Flynn MC, Nanjo CJ, Martin DK, Warner PB, Shields DE, Rodriguez-Loya S, Bradshaw RL, Cornia RC, Reese TJ, Kramer HS, Taft T, Curran RL, Morgan KL, Borbolla D, Hightower M, Turnbull WJ, Strong MB, Chapman WW, Gregory T, Stipelman CH, Shakib JH, Hess R, Boltax JP, Habboushe JP, Sakaguchi F, Turner KM, Narus SP, Tarumi S, Takeuchi W, Ban H, Wetter DW, Lam C, Caverly TJ, Fagerlin A, Norlin C, Malone DC, Kaphingst KA, Kohlmann WK, Brooke BS, Del Fiol G. Establishing a multidisciplinary initiative for interoperable electronic health record innovations at an academic medical center. JAMIA Open. 2021;4(3). doi: 10.1093/jamiaopen/ooab041.
- 61. Kawamoto K. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330(7494):765. doi: 10.1136/bmj.38398.500764.8F.
- 62. Kellogg KC, Sendak M, Balu S. AI on the Frontlines. MIT Sloan Management Review. 2022. [May 11, 2022]. https://sloanreview.mit.edu/article/ai-on-the-front-lines/
- 63. Kensaku K, Kukhareva P, Weir C, Flynn M, Nanjo C, Martin D, Warner PB, Shields DE, Rodriguez-Loya S, Bradshaw RL, Cornia RC, Reese TJ, Kramer HS, Taft T, Curran RL, Morgan KL, Borbolla D, Hightower M, Turnbull WJ, Strong MB, Chapman WW, Gregory T, Stipelman CH, Shakib JH, Hess R, Boltax JP, Habboushe JP, Sakaguchi F, Turner KM, Narus SP, Tarumi S, Takeuchi W, Ban H, Wetter DW, Lam C, Caverly TJ, Fagerlin A, Norlin C, Malone DC, Kaphingst KA, Kohlmann WK, Brooke BS, Del Fiol G. Establishing a Multidisciplinary Initiative for Interoperable Electronic Health Record Innovations at an Academic Medical Center. JAMIA Open. 2021;4(3):ooab041. doi: 10.1093/jamiaopen/ooab041.
- 64. Khalifa A, Mason C, Garvin H, Williams M, Del Fiol G, Jackson B, Bleyl S, Alterovitz G, Huff S. Interoperable Genetic Lab Test Reports: Mapping Key Data Elements to HL7 FHIR Specifications and Professional Reporting Guidelines. Journal of the American Medical Informatics Association. 2021;28(12):2617–2625. doi: 10.1093/jamia/ocab201.
- 65. Khan NS, Ghani MS, Anjum G. ADAM-sense: Anxiety-displaying activities recognition by motion sensors. Pervasive and Mobile Computing. 2021;78(21). doi: 10.1016/j.pmcj.2021.101485.
- 66. Krueger L. Clinical decision-making bias in darker skin types: a prospective survey study identifying diagnostic bias in decision to biopsy. Abstract presented at the 18th Skin of Color Society Scientific Symposium. Boston, MA: 2022. [March 24, 2022].
- 67. Lankton NK, McKnight DH, Tripp J. Technology, Humanness, and Trust: Rethinking Trust in Technology. Journal of the Association for Information Systems. 2015;16(10):880–918. doi: 10.17705/1jais.00411.
- 68. Lee P, Abernethy A, Shaywitz D, Gundlapalli AV, Weinstein J, Doraiswamy PM, Schulman K, Madhavan S. Digital Health COVID-19 Impact Assessment: Lessons Learned and Compelling Needs. NAM Perspectives. Discussion Paper. National Academy of Medicine; Washington, DC: 2022.
- 69. Lee Y, Kim YS, Lee D-I, Jeong S, Kang G-H, Jang YS, Kim W, Choi HY, Kim JG, Choi S-H. The application of a deep learning system developed to reduce the time for RT-PCR in COVID-19 detection. Scientific Reports. 2022;12(1234). doi: 10.1038/s41598-022-05069-2.
- 70. Leslie D, Mazumder A, Peppin A, Wolters MK, Hagerty A. Does “AI” stand for augmenting inequality in the era of COVID-19 healthcare? BMJ. 2021;372:1–5. doi: 10.1136/bmj.n304.
- 71. Lin D, Nazreen T, Rutowski T, Lu Y, Harati A, Shriberg E, Chlebek P, Aratow M. Feasibility of a Machine Learning-Based Smartphone Application in Detecting Depression and Anxiety in a Generally Senior Population. Frontiers in Psychology. 2022;13. doi: 10.3389/fpsyg.2022.811517.
- 72. Luzniak K. What’s the cost of artificial intelligence in healthcare? Neoteric. 2021. Dec 16. [May 9, 2022]. https://neoteric.eu/blog/whats-the-cost-of-artificial-intelligence-in-healthcare/
- 73. Mäkelä K, Mäyränpää MI, Sihvo HK, Bergman P, Sutinen E, Ollila H, Kaarteenaho R, Myllärniemi M. Artificial intelligence identifies inflammation and confirms fibroblast foci as prognostic tissue biomarkers in idiopathic pulmonary fibrosis. Human Pathology. 2021;107:58–68. doi: 10.1016/j.humpath.2020.10.008.
- 74. Maliha G, Gerke S, Cohen IG, Parikh RB. Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation. Milbank Quarterly. 2021;99(3):629–647. doi: 10.1111/1468-0009.12504.
- 75. Mandel JC, Kreda DA, Mandl KD, Kohane IS, Ramoni RB. SMART on FHIR: A Standards-Based, Interoperable Apps Platform for Electronic Health Records. Journal of the American Medical Informatics Association. 2016;23(5):899–908. doi: 10.1093/jamia/ocv189.
- 76. Mandl KD, Bourgeois FT. The Evolution of Patient Diagnosis: From Art to Digital Data-Driven Science. Journal of the American Medical Association. 2017;318(19):1859–1860. doi: 10.1001/jama.2017.15028.
- 77. Mandl KD, Kohane IS. A 21st-Century Health IT System - Creating a Real-World Information Economy. New England Journal of Medicine. 2017;376(20):1905–1907. doi: 10.1056/NEJMp1700235.
- 78. Mandl KD, Kohane IS. Escaping the EHR Trap - The Future of Health IT. New England Journal of Medicine. 2012;366(24):2240–2242. doi: 10.1056/NEJMp1203102.
- 79. Mandl KD, Mandel JC, Murphy SN, Bernstam EV, Ramoni RL, Kreda DA, McCoy JM, Adida B, Kohane IS. The SMART Platform: Early Experience Enabling Substitutable Applications for Electronic Health Records. Journal of the American Medical Informatics Association. 2012;19(4):597–603. doi: 10.1136/amiajnl-2011-000622.
- 80. Marmar CR, Brown AD, Qian M, Laska E, Siegel C, Li M, Abu-Amara D, Tsiartas A, Richey C, Smith J, Knoth B, Vergyri D. Speech-based markers for posttraumatic stress disorder in US veterans. Depression and Anxiety. 2019;36:607–616. doi: 10.1002/da.22890.
- 81. Matheny M, Thadaney Israni S, Ahmed M, Whicher D, editors. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication. National Academy of Medicine; Washington, DC: 2019.
- 82. Melnick ER, Dyrbye LN, Sinsky CA, Trockel M, West CP, Nedelec L, Tutty MA, Shanafelt T. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clinic Proceedings. 2020;95(3):476–487. doi: 10.1016/j.mayocp.2019.09.024.
- 83. Miller RA. Medical Diagnostic Decision Support Systems – Past, Present, and Future: A Threaded Bibliography and Brief Commentary. Journal of the American Medical Informatics Association. 1994;1(1):8–27. doi: 10.1136/jamia.1994.95236141.
- 84. Miller RA, Geissbuhler A. Diagnostic Decision Support Systems. In: Hannah KJ, Ball MJ, editors. Clinical Decision Support Systems: Theory and Practice. 2nd ed. New York, NY: Springer Science; 2007.
- 85. Nakahara H, Namba K, Fukami A, Watanabe R, Mizutani M, Matsu T, Nishimura S, Jinnouchi S, Nagamachi S, Ohnishi T, Futami S, Flores LG, Nakahara M, Tamura S. Computer-Aided Diagnosis (CAD) for Mammography: Preliminary Results. Breast Cancer. 1998;5:401–405. doi: 10.1007/BF02967438.
- 86. North Carolina State Health Plan and Johns Hopkins Bloomberg School of Public Health. North Carolina Hospitals: Charity Care Case Report. 2021. [May 11, 2022]. https://s3.documentcloud.org/documents/21094171/download-1.pdf
- 87. Office of the National Coordinator for Health Information Technology (ONC). Clinical Decision Support. 2018. [September 14, 2022]. https://www.healthit.gov/topic/safety/clinical-decision-support
- 88. Ommaya AK, Cipriano PF, Hoyt DB, Horvath KA, Tang P, Paz HL, DeFrancesco MS, Hingle ST, Butler S, Sinsky CA. Care-Centered Clinical Documentation in the Digital Environment: Solutions to Alleviate Burnout. NAM Perspectives. Discussion Paper. National Academy of Medicine; Washington, DC: 2018.
- 89. Parakh A, Lee H, Lee JH, Eisner BH, Sahani DV, Do S. Urinary stone detection on CT images using deep convolutional neural networks: Evaluation of model performance and generalization. Radiology: Artificial Intelligence. 2019;1(4). doi: 10.1148/ryai.2019180066.
- 90. Parikh RB, Helmchen LA. Paying for artificial intelligence in medicine. npj Digital Medicine. 2022;5(63):1–5. doi: 10.1038/s41746-022-00609-6.
- 91. Price WN, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019;322(18):1765. doi: 10.1001/jama.2019.15064.
- 92. Rajkomar A, Hardt M, Howell M, Corrado G, Chin M. Ensuring Fairness in Machine Learning to Advance Health Equity. Annals of Internal Medicine. 2018;169:866–872. doi: 10.7326/M18-1990.
- 93. Ramsetty A, Adams C. Impact of the digital divide in the age of COVID-19. Journal of the American Medical Informatics Association. 2020;27(7):1147–1148. doi: 10.1093/jamia/ocaa078.
- 94. Ray A, Gupta A, Al A. Skin Lesion Classification with Deep Convolutional Neural Network: Process Development and Validation. Journal of Medical Internet Research Dermatology. 2020;(1):e18438. doi: 10.2196/18438.
- 95. Reisman M. EHRs: The Challenge of Making Electronic Data Usable and Interoperable. P&T. 2017;42(9):572–575. [May 23, 2022]. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5565131/
- 96. Responsible Innovation Labs. Charter. 2022. [June 30, 2022]. https://www.rilabs.org/charter
- 97. Ridgely MS, Greenberg MD. Too many alerts, too much liability: sorting through the malpractice implications of drug-drug interaction clinical support. Saint Louis University Journal of Health Law & Policy. 2012;5(2):257–295. [July 27, 2022]. https://scholarship.law.slu.edu/jhlp/vol5/iss2/4
- 98. Rodin J, Madsbjerg S. Making Money Moral: How a New Wave of Visionaries Is Linking Purpose and Profit. Philadelphia: Wharton School Press; 2021.
- 99. Sandhu S, Lin AL, Brajer N, Sperling J, Ratliff W, Bedoya AD, Balu S, O’Brien C, Sendak MP. Integrating a Machine Learning System into Clinical Workflows: Qualitative Study. JMIR. 2020;22(11). doi: 10.2196/22421.
- 100. Sanyal S. How much does artificial intelligence cost in 2021? Analytics Insight. 2021. [May 11, 2022]. https://www.analyticsinsight.net/how-much-does-artificial-intelligence-cost-in-2021/
- 101. Sendak MP, D’Arcy J, Kashyap S, Gao M, Nichols M, Corey K, Ratliff W, Balu S. A Path for Translation of Machine Learning Products into Healthcare Delivery. European Medical Journal Innovations. 2020a.
- 102. Sendak MP, Ratliff W, Sarro D, Alderton E, Futoma J, Gao M, Nichols M, Revoir M, Yashar F, Miller C, Kester K, Sandhu S, Corey K, Brajer N, Tan C, Lin A, Brown T, Engelbosch S, Anstrom K, Elish MC, Heller K, Donohoe R, Theiling J, Poon E, Balu S, Bedoya A, O’Brien C. Real-World Integration of a Sepsis Deep Learning Technology into Routine Clinical Care: Implementation Study. JMIR Medical Informatics. 2020b;8(7):e15182. doi: 10.2196/15182.
- 103. Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. npj Digital Medicine. 2020c;3(4). doi: 10.1038/s41746-020-0253-3.
- 104. Sendak MP, Balu S, Schulman KA. Barriers to Achieving Economies of Scale in Analysis of EHR Data: A Cautionary Tale. Applied Clinical Informatics. 2017;8(3):826–831. doi: 10.4338/ACI-2017-03-CR-0046.
- 105. Shen Y, Shamout FE, Oliver JR, Witowski J, Kannan K, Park J, Wu N, Huddleston C, Wolfson S, Millet A, Ehrenpreis R, Awal D, Tyma C, Samreen N, Gao Y, Chhor C, Gandhi S, Lee C, Kumari-Subaiya S, Leonard C, Mohammed R, Moczuski C, Altabet J, Babb J, Lewin A, Reig B, Moy L, Heacock L, Geras KJ. Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams. Nature Communications. 2021;12(5645). doi: 10.1038/s41467-021-26023-2.
- 106. Signaevsky M, Marami B, Prastawa M, Tabish N, Iida MA, Zhang XF, Sawyer M, Duran I, Koenigsberg DG, Bryce CH, Chahine LM, Mollenhauer B, Mosovsky S, Riley L, Dave KD, Eberling J, Coffey CS, Adler CH, Serrano GE, White CL III, Koll J, Fernandez G, Zeineh J, Cordon-Cardo C, Beach TG, Crary JF. Antemortem detection of Parkinson’s disease pathology in peripheral biopsies using artificial intelligence. Acta Neuropathologica Communications. 2022;10(21). doi: 10.1186/s40478-022-01318-7.
- 107. Singh RP, Hom GL, Abramoff MD, Campbell JP, Chiang MF. Current Challenges and Barriers to Real-World Artificial Intelligence Adoption for the Healthcare System, Provider, and the Patient. Translational Vision Science & Technology. 2020;9(2):1–6. doi: 10.1167/tvst.9.2.45.
- 108. HCA Healthcare Today. SPOT: How HCA Healthcare is “sniffing out” sepsis early. 2018. [May 1, 2022]. https://hcahealthcaretoday.com/2018/09/10/spot-how-hca-is-sniffing-out-sepsis-early/
- 109. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. npj Digital Medicine. 2020;3(17). doi: 10.1038/s41746-020-0221-y.
- 110. Syrowatka A, Kuznetsova M, Alsubai A, Beckman AL, Bain PA, Thomas Craig KJ, Hu J, Jackson GP, Rhee K, Bates DW. Leveraging artificial intelligence for pandemic preparedness and response: a scoping review to identify key use cases. npj Digital Medicine. 2021;4(96). doi: 10.1038/s41746-021-00459-8.
- 111. Tadavarthi YBV, Krupinski E, Prater A, Gichoya J, Safdar N, Trivedi H. The State of Radiology AI: Considerations for Purchase Decisions and Current Market Offerings. Radiology: Artificial Intelligence. 2020;2(6). doi: 10.1148/ryai.2020200004.
- 112. Unsworth H, Wolfram V, Dillon B, Salmon M, Greaves F, Liu X, MacDonald T, Denniston AK, Sounderajah V, Ashrafian H, Darzi A, Ashurst C, Holmes C, Weller A. Building an evidence standards framework for artificial intelligence-enabled digital health technologies. The Lancet Digital Health. 2022;4(4):e216–e217. doi: 10.1016/S2589-7500(22)00030-9.
- 113. Vinson DR, Casey SD, Vuong PL, Huang J, Ballard DW, Reed ME. Sustainability of a Clinical Decision Support Intervention for Outpatient Care for Emergency Department Patients With Acute Pulmonary Embolism. JAMA Network Open. 2022;5(5):e2212340. doi: 10.1001/jamanetworkopen.2022.12340.
- 114. Walker HK. The Origins of the History and Physical Examination. In: Hall WD, Hurst JW, editors. Clinical Methods: The History, Physical, and Laboratory Examinations. 3rd ed. Boston, MA; 1990.
- 115. Wiens J, Saria S, Sendak M, Ghassemi M, Liu VX, Doshi-Velez F, Jung K, Heller K, Kale D, Saeed M, Ossorio PN, Thadaney Israni S, Goldenberg A. Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine. 2019;25(9):1337–1340. doi: 10.1038/s41591-019-0548-6.
- 116. Wolff J, Pauling J, Keck A, Baumbach J. The Economic Impact of Artificial Intelligence in Health Care: Systematic Review. Journal of Medical Internet Research. 2020;22(2):e16866. doi: 10.2196/16866.
- 117. Wong A, Otles E, Donnelly JP, Krumm A, McCullough J, DeTroyer-Cooley O, Pestrue J, Phillips M, Konye J, Penoza C, Ghous M, Singh K. External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients. JAMA Internal Medicine. 2021;181(8):1065–1070. doi: 10.1001/jamainternmed.2021.2626.
- 118. Wu AC, Graif C, Mitchell SG, Meurer J, Mandl KD. Creative Approaches for Assessing Long-term Outcomes in Children. Pediatrics. 2021;148(Suppl 1):s25–s32. doi: 10.1542/peds.2021-050693F.
- 119. Wu T. Agency Threats. Duke Law Journal. 2011;60(8):1841–1857. [September 26, 2022]. https://scholarship.law.duke.edu/dlj/vol60/iss8/4
- 120. Wynants L, Van Calster B, Collins GS, Riley RD, Heinze G, Schuit E, Bonten MMJ, Dahly DL, Damen JA, Debray TPA, de Jong VMT, De Vos M, Dhiman P, Haller MC, Harhay MO, Henckaerts L, Heus P, Kammer M, Kreuzberger N, Lohmann A, Luijken K, Ma J, Martin GP, McLernon DJ, Andaur Navarro CL, Reitsma JB, Sergeant JC, Shi C, Skoetz N, Smits LJM, Snell KIE, Sperrin M, Spijker R, Steyerberg EW, Takada T, Tzoulaki I, van Kuijk SMJ, van Bussel BCT, van der Horst ICC, van Royen FS, Verbakel JY, Wallisch C, Wilkinson J, Wolff R, Hooft L, Moons KGM, van Smeden M. Prediction models for diagnosis and prognosis of COVID-19: Systematic review and critical appraisal. BMJ. 2020;369:m1328. doi: 10.1136/bmj.m1328.
- 121. Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A Deep Learning Mammography-based Model for Improved Breast Cancer Risk Prediction. Radiology. 2019;292(1):60–66. doi: 10.1148/radiol.2019182716.
- 122. Yang Z, Silcox C, Sendak M, Rose S, Rehkopf D, Phillips R, Peterson L, Marino M, Maier J, Lin S, Liaw W, Kakadiaris IA, Heintzman J, Chu I, Bazemore A. Advancing primary care with Artificial Intelligence and Machine Learning. Healthcare (Amsterdam, Netherlands). 2022;10(1). doi: 10.1016/j.hjdsi.2021.100594.
- 123. Yu K, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nature Biomedical Engineering. 2018;2:719–731. doi: 10.1038/s41551-018-0305-z.
- 124. Zou J, Schiebinger L. Ensuring that biomedical AI benefits diverse populations. eBioMedicine. 2021;67:1–6. doi: 10.1016/j.ebiom.2021.103358.
- 125. Zusterzeel R, Goldstein BA, Evans BJ, Roades T, Mercon K, Silcox C. Evaluating AI-Enabled Clinical Decision and Diagnostic Support Tools Using Real-World Data. Duke Margolis Center for Health Policy. 2022. [October 13, 2022]. https://healthpolicy.duke.edu/publications/evaluating-ai-enabled-clinical-decision-and-diagnostic-support-tools-using-real-world