Abstract
One of the greatest strengths of artificial intelligence (AI) and machine learning (ML) approaches in health care is that their performance can be continually improved based on updates from automated learning from data. However, health care ML models are currently essentially regulated under provisions that were developed for an earlier age of slowly updated medical devices, requiring extensive documentation revision and revalidation with every major update of the model generated by the ML algorithm. This creates minor problems for models that will be retrained and updated only occasionally, but major problems for models that will learn from data in real time or near real time. Regulators have announced action plans for fundamental changes in regulatory approaches. In this Viewpoint, we examine the current regulatory frameworks and developments in this domain. We review the status quo and recent developments, and argue that these innovative approaches to health care need matching innovative approaches to regulation, and that such approaches will bring benefits for patients. International perspectives from the World Health Organization, and the Food and Drug Administration’s proposed approach based around oversight of tool developers’ quality management systems and defined algorithm change protocols, offer a much-needed paradigm shift and strive for a balanced approach to enabling rapid improvements in health care through AI innovation while simultaneously ensuring patient safety. The draft European Union (EU) regulatory framework indicates similar approaches, but no detail has yet been provided on how algorithm change protocols will be implemented in the EU. We argue that this detail must be provided, and we describe how this could be done in a manner that would allow the full benefits of AI/ML-based innovation to be realized for EU patients and health care systems.
Keywords: artificial intelligence, machine learning, regulation, algorithm change protocol, health care, regulatory framework
Introduction
Automated image analysis and segmentation [1], autonomous soft tissue suturing [2], and brain-machine interfaces [3]: these are technologies that until recently were only science fiction imaginings. They now all represent the state of the art in ML-based health care tools and all share one characteristic: all these systems are trained on patient data and can be quickly and automatically improved through retraining on new patient data. ML is a subset of AI approaches, and this Viewpoint deals with the subset of ML applications that are classified as medical devices. The concept that an ML model will remain static over time is anathema to the concept of learning. Technologies are rapidly advancing to allow true real-time machine learning, and when the regulatory regimes allow it, ML-based health care tools have the potential to “learn” from new observations and continuous use and to retrain their models fully “on the job” [4,5].
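For readers less familiar with the mechanics of learning "on the job," the following is a minimal sketch, on synthetic data, of a classifier that is updated incrementally batch by batch rather than through a full retrain-and-release cycle. It is our own illustration (using scikit-learn's `partial_fit` interface), not code from any of the cited tools, and the "biomarker" features are invented.

```python
# Minimal sketch of a model that "learns on the job" via incremental updates.
# Illustration only: synthetic data, not any cited ML-based SaMD.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial training set: two synthetic "biomarker" features, binary outcome.
X0 = rng.normal(size=(200, 2))
y0 = (X0[:, 0] + X0[:, 1] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X0, y0, classes=np.array([0, 1]))  # first call must declare all classes

# Each batch of new "post-market" observations refines the same deployed model
# in place, which is exactly the behavior that static change-control rules
# struggle to accommodate.
for _ in range(5):
    Xb = rng.normal(size=(50, 2))
    yb = (Xb[:, 0] + Xb[:, 1] > 0).astype(int)
    model.partial_fit(Xb, yb)

accuracy = model.score(X0, y0)
print(f"accuracy after incremental updates: {accuracy:.2f}")
```

Each `partial_fit` call changes the deployed model's weights without any intervening revalidation step; the regulatory question explored in this Viewpoint is how such continuous change can be overseen safely.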
Many ML-based health care tools are classified in the European Union (EU) and United States (and most other jurisdictions) as Software as a Medical Device (referred to in this Viewpoint as ML-based SaMD, a term used here for technologies that have learned from patient data sets and that will be further trained after being placed on the market). Models that learn and update pose a new regulatory challenge, all the more so for ML-based SaMD that will learn from data in real time or near real time: the changes will often affect the fundamental clinical safety, clinical performance, and clinical benefit of the algorithm. Should they require full regulatory reassessment, a process that generally takes many months? Alternatively, can novel, faster, robust methods of quality oversight and approval be established? This Viewpoint compares the differing proposals put forward by the US and EU regulatory bodies for adapting the existing medical device frameworks to include consideration of learning ML-based SaMD. The crux of our argument is that highly proactive responses from regulators are required. The US Food and Drug Administration (FDA) and the EU have proposed strategies. The FDA approach is structured and comprehensive, while the EU approach overlaps the US approach but lacks detail on requirements and has not involved detailed stakeholder consultation.
There is evidence that more nuanced regulation of ML-based SaMD is being developed. A 2021 position paper of the American Medical Informatics Association recommended proactive regulatory approaches to improve clinical decision support (CDS) regulation, including transparency standards, real-world performance (RWP) monitoring requirements, and improved postmarket surveillance (PMS) strategies [6]. A recent external validation of a widely implemented proprietary sepsis prediction model, the Epic Sepsis Model (ESM), used in hundreds of hospitals throughout the United States, found that it identified the onset of sepsis poorly [7]. The authors concluded that the widespread adoption of this CDS, despite its poor performance, raised fundamental concerns about sepsis management. In our view, it also raises regulatory oversight and PMS concerns. The ESM is a penalized logistic regression model, included as part of a widely used electronic health record system, and it was developed and validated based on data from 405,000 patient encounters in three health systems between 2013 and 2015 [7]. Although only limited information is publicly available about the ESM, we recognize that it is not an example of an adaptive CDS (as defined in [6]); however, the ML approach used has the potential to be used in future adaptive CDS systems. Even as an example of a static CDS system, periodic updates based on PMS/RWP monitoring would form part of the lifecycle of this CDS, greatly increasing patient safety. Many of the considerations in this Viewpoint are applicable to this example.
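The phenomenon underlying the ESM finding can be illustrated with a purely hypothetical example. The sketch below (synthetic data; it does not use the ESM, its features, or its coefficients) shows how a penalized logistic regression can look strong on its development cohort yet degrade sharply on an external population where the feature-outcome relationship differs:

```python
# Hypothetical illustration of why external validation matters.
# Synthetic data only; this is not the Epic Sepsis Model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Development cohort: outcome strongly driven by feature 0.
X_dev = rng.normal(size=(1000, 4))
y_dev = (X_dev[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# External cohort: the same feature carries a much weaker signal (drift).
X_ext = rng.normal(size=(1000, 4))
y_ext = (0.2 * X_ext[:, 0] + rng.normal(size=1000) > 0).astype(int)

# L2-penalized logistic regression, the model class reportedly used by the ESM.
model = LogisticRegression(penalty="l2", C=1.0).fit(X_dev, y_dev)

auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_ext = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"development AUROC {auc_dev:.2f}, external AUROC {auc_ext:.2f}")
```

This is exactly the gap that systematic PMS/RWP monitoring, as argued for below, is designed to surface before patients are harmed.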
The regulation of ML-based SaMD has been identified as one of the more substantial barriers to their clinical adoption [8]. Explorations of ML-based medical software in the United States found that many tools are not regulated by the FDA, that there is no FDA-maintained public record/database of approved ML-based SaMD, that many devices are approved through the 510(k)-clearance route (claim of substantial equivalence to an already-approved device), and where specific clinical evidence was provided for approval, this was exclusively from retrospective rather than prospective data [7,8]. Some of the medical applications of ML discussed in [7,8] were classified by the FDA as low risk or not classified as ML-based SaMD. For low-risk applications, existing regulatory frameworks may be sufficient. However, for the higher risk class devices discussed by [8,9], and also for adaptive ML-based SaMD and autonomous applications, there is a requirement for smarter regulation in the EU, United States, and worldwide, both to ensure patient safety and to remove a hurdle to adoption and advancement of the technologies [10-13].
The Current Regulatory Framework for Learning ML-Based SaMD
ML-based health care tools with a role in individual patient diagnosis or therapy are currently regulated in the United States and EU as Software as a Medical Device. The regulation of medical devices has been included in US legislation since the 1938 Federal Food, Drug, and Cosmetic Act (FD&C Act), and more comprehensively in the 1976 Medical Device Amendments to the FD&C Act and subsequent updates [14]. In the EU, a legislative framework has been in place since the Medical Devices Directive 93/42/EEC and the Active Implantable Medical Devices Directive 90/385/EEC and was recently updated to the Medical Device Regulation (MDR) [15,16]. Both US and EU frameworks heavily rely on guidelines and harmonized norms, which define a set of standardized best practices for the development and deployment of medical devices. Neither of the current regulatory frameworks in their present form adequately considers the special properties of ML-based systems.
Historically, as hardware medical devices preceded software medical devices, the principles of a relatively static product (ie, following linear steps from initial concept to early and later stages of development, verification, validation, clinical testing, approval, and market release) were logical. As software in/as medical devices became established, the fundamental principles of medical device regulation were applied largely unchanged to software. Allowance was made for the special properties of software; however, the level of detail in the legislation itself was low [15] and was instead provided through a set of international standards, including standards for the software life cycle [17], software usability testing, validation, and release and update. Initially, these processes were required to proceed in a linear “waterfall” cascade that had to be followed for software updates, release, and approval [17]. Later international guidance (see [18]) provided approaches for the development of SaMD within agile frameworks, the generally recognized optimal approach to software design [19], which is an iterative and incremental model of software development. Whether using waterfall or agile approaches, the international guidelines provide methodologies to adequately verify and validate SaMD software, a prerequisite for its safety and effectiveness.
Limitations of Current Approaches
This raises the question, “does the current legislation provide a regulatory approval framework for ML-based SaMD?” Although burdensome, the current EU approach can be (and has been) applied to ML-based SaMD (see [20]). SaMD manufacturers can optimize their software development processes to maximize their efficiency in this linear process, particularly for documenting the effects of model change on SaMD performance between updates (these are aspects of “change control,” a fundamental principle of medical device quality management systems). When applied to ML-based SaMD, software verification and validation are not in themselves sufficient, as they do not ensure that the ML model is safe; it could have safety problems related either to low-quality input data or to a poorly designed ML algorithm.
Manufacturers tend to relegate information about ML model updates to software development life cycle (SDLC) activities and postmarket clinical follow-up (PMCF). Although conventional PMCF and SDLC are highly valuable activities, provided they are executed in the correct environment and phase, they are generally inadequate to address ML problems. SDLC activities focus on software design controls that do not ensure clinically acceptable ML performance (ie, software can be perfectly written and documented and yet the ML model that it hosts can still fail because of ML model problems). One reason for the inadequacy of current PMCF practices for ML model updates is that they typically generate data on a sufficient number of patients only months to years after changes are made to the SaMD. In addition, historically, the data quantity typically explored in PMCF approaches has been insufficient for the requirements of modern data-driven learning algorithms. As discussed later in this Viewpoint, PMCF can be adapted to allow for the rapid gathering of detailed data related to ML model updates. The adaptation of PMCF to this purpose and the definition of a systematic “protocol” for the implications of this data stream on device regulatory status are the foundation of proposed novel regulatory approaches.
Proposed Solutions: The US FDA Action Plan
The issues of the appropriateness of the ML algorithm and input data and the safety of the derived ML model are tackled to a degree in recent standards, some of which are still under development (see for example ISO [International Organization for Standardization]/IEC [International Electrotechnical Commission] TR 24028 on trustworthiness in AI [21] and ISO/IEC DTR 24027 on bias in AI systems and AI-aided decision-making [22]) but have not yet been addressed in a joined-up fashion in legislation. However, these issues are addressed by novel and comprehensive proposals in the US FDA’s 2021 action plan [23], which effectively provides a roadmap for ML model validation. The FDA has conducted a structured consultation and has published a comprehensive action plan on regulatory approval strategies for adaptive ML-based SaMD [23,24], which has been accompanied by a high degree of engagement with the themes in the literature [25-27]. The action plan does not yet fully resolve the problems described in this Viewpoint, as the action plan is not complete or implemented. Nevertheless, the proactive and open approach of the FDA is commendable.
The action plan clearly recognizes that adaptive ML-based SaMD presents a challenge to traditional approaches: “The FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies” [23]. The FDA started the formal online consultation process in April 2019 [28]—with contributions guided by a well-conceived detailed consultation document—and conducted a public workshop in February 2020, followed by the publication of the action plan in January 2021 [23]. The consultation document and action plan are based on the FDA’s premarket programs, and took into consideration the International Medical Device Regulators Forum’s (IMDRF) medical device risk categorization principles [29], the benefit-risk framework, software modifications guidance, and the organization-based total product life cycle approach. Good machine learning practice (GMLP) principles will be used to ensure rigorous ML-based SaMD development. Algorithm changes will be transparently labeled for users, while methodologies for ensuring robustness and identification and elimination of bias will be incorporated. For each device, a two-component predetermined change control plan (PCCP) is envisioned. This will include a SaMD prespecification (SPS)—a specification setting out the scope of the permissible modifications—and an algorithm change protocol (ACP; note that it is the prediction model that changes, see Textbox 1), which sets out the methodology used in the ML-based SaMD to implement the defined changes within the scope of the SPS. The ACP is a step-by-step delineation of procedures to be followed so that the modification achieves its goals and the ML-based SaMD remains safe and effective. The action plan is notable for its strengths in harnessing the iterative improvement power of ML-based SaMD, while at the same time ensuring patient safety through continuous RWP monitoring.
As a next step, the FDA will publish a complete draft guidance on the PCCP in 2021 [23].
Textbox 1. Definitions of machine learning algorithms, software, and models. These definitions are set out explicitly here because some of the regulatory discussion documents use machine learning terminology imprecisely.
Artificial intelligence/machine learning algorithm
Machine learning algorithms are mathematical procedures that are implemented in code and are run on data to create an output machine learning model. Machine learning algorithms perform pattern recognition tasks to learn from data. For example, a machine learning algorithm could be trained on physician-labeled radiographs used to develop a machine learning model for tumor detection.
Artificial intelligence/machine learning software
Machine learning software is the code (ie, programming language) implementation of the machine learning algorithm. It is possible to implement a machine learning algorithm in many alternative ways or different programming languages.
Artificial intelligence/machine learning model
The machine learning model is created by running a machine learning algorithm on data. The machine learning model represents what was learned by a machine learning algorithm. The machine learning model consists of model data and a prediction algorithm, which can be regarded as an automatically created computer program. Once created, the machine learning model can be used for a specific task (eg, the machine learning model can be applied to unlabeled radiographs to locate possible tumors).
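The distinction drawn in these definitions can be made concrete with a short sketch. This is an illustration on synthetic data using scikit-learn as an example framework (which, as a design choice, bundles the algorithm and the resulting model into one estimator object); the feature and label construction is invented:

```python
# Illustrative only: the ML *algorithm* (a training procedure) versus the
# ML *model* (the artifact produced by running the algorithm on data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 3))                          # synthetic image-derived features
y = (X @ np.array([1.0, -0.5, 2.0]) > 0).astype(int)   # synthetic "tumor" labels

algorithm = LogisticRegression()   # the algorithm: a procedure, no learned content yet
model = algorithm.fit(X, y)        # the model: learned parameters plus a predictor
# (in scikit-learn the fitted estimator object plays both roles)

print(model.coef_)                 # the "model data" learned from the training set
print(model.predict(X[:5]))        # the prediction algorithm applied to new inputs
```

Retraining on new data changes `model.coef_` (the model), not the code of `LogisticRegression` (the algorithm); it is this distinction that the ACP discussion below depends on.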
Do the Innovative ACP-Based Approaches Adequately Ensure Safety?
The status quo that the FDA action plan will alter has the foundational principle that a medical device should be clearly defined, definitively tested, and meticulously documented before approval and then should effectively have unchanged clinical safety, performance, and benefit on the market, and this should be ensured through tight change control processes and postmarket surveillance. Any substantial change in clinical behavior would require reapproval. This framework has advantages in the simplicity of traceability and maintenance of safety oversight. A disadvantage is that this framework prohibits the rapid change of ML-based SaMD. The FDA action plan effectively proposes the same system, except that a boundary of change of the clinical behavior of the adaptive ML-based SaMD can be predefined, along with methods to oversee the degree of change and the resulting effects while on the market. If this is a paradigm shift, it is a small one: it effectively shifts the approval of each new product revision, and the postmarket evaluation of change, to a comprehensive premarket consideration of the changes that would be acceptable for the device.
By definition, changes that do not fall within the predefined risk-assessed thresholds are not allowed and require the normal processes of examination by the regulator before approval for the market. The FDA noted in the action plan that “stakeholders provided specific feedback about the elements that might be included in the SPS/ACP to support safety and effectiveness as the SaMD and its associated algorithm(s) change over time” and has reacted to this by promising detailed guidance on what should be included in an SPS and ACP to support the safety and effectiveness of ML-based SaMD algorithms [23]. It is the view of the authors that the overall principles set out in the FDA action plan (ie, predefining acceptable clinical safety, performance, and benefit on the market, and conducting RWP monitoring of these) represent an approach that is both rational and proportionate, and one that would ensure patient safety, provided the regulator is sufficiently involved in the oversight of RWP monitoring data and evaluation of this data in the context of the PCCP.
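To make the idea of predefined, risk-assessed thresholds concrete, the following toy sketch shows what an ACP-style release gate might look like. The function name, thresholds, and structure are our own construction for illustration, not an FDA specification:

```python
# Toy sketch of an ACP-style gate: a retrained model is released automatically
# only if its measured real-world performance stays within a prespecified
# envelope; otherwise it is held for full regulatory review.
# Thresholds below are invented for illustration.

def acp_gate(new_auroc, baseline_auroc, max_drop=0.02, min_auroc=0.85):
    """Return 'auto-release' if the change is within the prespecified envelope."""
    within_floor = new_auroc >= min_auroc                  # absolute performance floor
    within_drop = (baseline_auroc - new_auroc) <= max_drop # bounded degradation vs baseline
    return "auto-release" if (within_floor and within_drop) else "hold-for-review"

print(acp_gate(0.91, 0.90))  # improvement within envelope -> auto-release
print(acp_gate(0.80, 0.90))  # breach of both bounds -> hold-for-review
```

In a real PCCP, the monitored quantities, acceptance bounds, and escalation path would all be agreed with the regulator up front; the point of the sketch is only that the decision rule is fixed before the model changes, not after.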
The EU Artificial Intelligence Act
Following a European Commission white paper on AI in February 2020 and a subsequent public consultation [30,31], the European Commission published a draft Artificial Intelligence Act in April 2021 [32]. This draft legislation lays down harmonized rules for AI applications and it extends classical EU product conformity and CE marking concepts to all “high-risk” AI applications. The draft legislation is very similar to MDR in its core approaches, which are based around product intended use and postmarket monitoring systems. All use of AI in medical devices is defined as “high-risk,” and the draft legislation is designed to be compatible with MDR [16], to be overseen by the same Notified Bodies as MDR (although the detail on how oversight will operate remains to be established), and devices are to be covered by a single CE-mark representing conformity to both MDR and the Artificial Intelligence Act [16,32]. It is striking that there is not a single mention of ML in MDR, its annexes, or its associated guidance (MDR [16] and Medical Device Coordination Group guidelines [33,34]), despite the fact that these documents were released in 2017 or later. Essentially, the new draft Artificial Intelligence Act extends MDR [16], bringing it into the AI era.
The FDA approach had a clear and focused published proposal on ML-based SaMD regulation to frame the discussion for the public consultation. In contrast, the European Commission consultation on the 2020 white paper that preceded the draft legislation did not have an associated published proposal and was broad, bringing in all high-risk AI applications, not just health care. We studied the contributions to the consultation, and although there are some well-considered submissions, overall, there was little focused discussion on precisely how ML-based SaMD should be overseen in the EU. This lack of detail in proposals is also reflected in the draft legislation. For the first time in EU medical device legislation, the draft describes the concept of ACPs, but—unlike in the FDA action plan [23]—these are implied, rather than being specifically named or their requirements being set out in detail. Likewise, the draft legislation does not set out an analogue to the FDA’s PCCP approach, although again, the need for this is implied.
The critical clause in the draft legislation is as follows:
In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e., they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
The importance of postmarket performance monitoring is described, but no details are provided on special considerations for this in the context of ML-based SaMD: “all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed.”
EU Regulatory Oversight: What Is Still Needed?
In September 2020, a thorough analysis of the EU legal requirements for ML-based SaMD [35] was carried out by the European medical devices trade association (European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry [COCIR]), which concluded that deployment is possible in a way that is consistent with MDR, but recommended that practical guidance should be made available, supported by the development of international standards. More specifically, the group recommends that the international standard describing software life cycle processes (IEC 62304) [17] should be updated, requiring manufacturers to define an ACP for adaptive ML-based SaMD. COCIR’s recommendation has been included in the text of the new “under review” edition of the standard [36]. We agree that the updating of standards is an important stepping-stone toward a clear framework, but such changes can have long incubation periods. Simply updating standards documents may not bring the clarity required by EU Notified Bodies to allow them to make the judgment calls required to approve learning ML-based SaMD. The modifications to the IEC standard are also piecemeal and do not guarantee cohesiveness of the EU regulatory framework. Moreover, updates to standards are generally conducted by a narrow group of domain experts; in the example above, the expert group will largely consist of medical device software life cycle experts. As fully acknowledged by the FDA, consultation on the design of new regulatory frameworks for adaptive ML-based SaMD should also bring together experts on postmarket surveillance, RWP measurement, clinical evaluation, and labeling, as well as patient representatives. Although not specifically stated by the FDA, we argue that experts in real-time adaptive ML approaches, which are likely to be increasingly proposed for ML-based SaMD, should also be a key part of these discussions.
The awaited EU guidance was not published with the draft Artificial Intelligence Act; as a result, there will be a clear legal requirement for ACPs for adaptive ML-based SaMD but no detail on how ACPs should be implemented.
There are several major implications for the EU if standardized procedures are not provided for premarket review of adaptive ML-based SaMD, for ACPs, or for manufacturer oversight of systems on the market. Unclear or unspecified regulatory requirements could lead to frameworks so burdensome that manufacturers do not consider it worthwhile to deploy their technologies in a particular region. This may put patients there at considerable disadvantage, as they may not be able to access new diagnostic, therapeutic, or preventive modalities, or may only be able to access them after a significant delay. Unclear regulatory strategies could also significantly disadvantage the growth and prosperity of EU AI businesses. Lastly, as discussed in the general context of EU and US medical device harmonization and regulation in [6,37], unclear regulatory requirements are unlikely to function to ensure safety, as they will likely lead to highly uneven regulatory oversight and enforcement.
It is unclear to what degree the detailed EU approaches to adaptive ML-based SaMD will piggyback on the results of the already well-progressed consultative process undertaken by the US FDA. Other international approaches, such as those of the IMDRF and the joint WHO/International Telecommunication Union strategy for an independent standard evaluation framework for ML model benchmarking [38], could also provide input to an EU approach; however, concerted and prompt action is required on the part of the European Commission, to consult on and define EU-specific guidelines for providers to enable them to “establish[...] that changes to the algorithm and its performance that have been pre-determined.” The benefits of the US FDA approach have been discussed at length in this Viewpoint, but, as described in detail in a July 2020 viewpoint by Cohen et al [19], there are aspects of the US approach that cannot easily be translated to the EU. EU-specific solutions are required in three domains: (1) EU data protection considerations relating to the update problem, (2) the relatively less established system in the EU for RWP monitoring, and (3) EU differences in public perceptions and stated community values regarding the role of AI. Point 2 may be partly addressed through complaint and incident registration in the proposed EU database for stand-alone high-risk AI systems, but this is not yet sufficiently defined in the draft legislation to determine this with certainty.
It is our view that the EU needs to provide specific guidelines for adaptive ML-based SaMD ACPs and RWP monitoring. Waiting for coordination to be achieved through alignment of international standards is an approach without a proven track record of success, and it is unclear whether international approaches alone are sufficient for the EU’s special circumstances. This should not be the basis for the development of the EU’s health care ML-based SaMD ecosystem, on which we depend to bring the benefits of health care AI to European society and its economy. What is needed is a clear standardized approach, similar to the now 2-year-old approach of the FDA, which sets clear procedures required for ML-based SaMD approval and postmarket provider and regulatory oversight. This could be achieved, with or without a focused public consultation, through published guidance from the EU Medical Devices Coordination Group, which should bring together the aspects addressed in the FDA action plan, the COCIR report, and the developing harmonized standards [23,35,36].
The EU approach to regulation of medical devices has faced criticism for lacking both harmonization and approaches for ensuring patient safety [37]. Although both of these aspects have been improved by the MDR [16] through the introduction of greater transparency for patients and a central vigilance and PMS database (EUDAMED), the key underlying problems of market fragmentation and lack of clarity and harmonization still exist [37]. The main issue that this Viewpoint addresses is the potential for continued lack of standardized adoption of proactive oversight of health AI, and therefore the potential for the EU Artificial Intelligence Act [32] to fail in its objectives of harmonizing AI regulation and ensuring future-ready oversight and the safety of EU ML-based SaMD. We call for a detailed action plan and public consultation, echoing the FDA’s approach, that will allow manufacturers and other stakeholders in the medical community to inform the discussion on requirements and to definitively understand requirements as they relate to oversight, market surveillance, and checkpoints of safety and performance responsive to ML-based SaMD adoption (ie, the details of ACPs and associated RWP monitoring mechanisms).
Abbreviations
- ACP
algorithm change protocol
- AI
artificial intelligence
- CDS
clinical decision support
- COCIR
European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry
- FDA
Food and Drug Administration (US)
- GMLP
good machine learning practice
- IEC
International Electrotechnical Commission
- IMDRF
International Medical Device Regulators Forum
- ISO
International Organization for Standardization
- MDR
Medical Device Regulation
- ML
machine learning
- PCCP
predetermined change control plan
- PMCF
postmarket clinical follow-up
- PMS
postmarket surveillance
- RWP
real-world performance
- SaMD
Software as a Medical Device
- SDLC
software development life cycle
- SPS
SaMD prespecification
Footnotes
Authors' Contributions: SG wrote the initial version of the manuscript and MF, MH, SU, AB, and JS made extensive revisions and contributed to the editing process.
Conflicts of Interest: SG, MF, MH and SU are or were employees, contractors, or equity holders in Ada Health GmbH. AB is a shareholder of companies active in neurotechnology and regulatory consulting and is a paid employee of confinis ag. All authors are considered to have an interest in Ada Health GmbH.
References
- 1. Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H, Topol EJ, Ioannidis JPA, Collins GS, Maruthappu M. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020 Mar 25;368:m689. doi: 10.1136/bmj.m689. http://www.bmj.com/lookup/pmidlookup?view=long&pmid=32213531
- 2. Shademan A, Decker RS, Opfermann JD, Leonard S, Krieger A, Kim PCW. Supervised autonomous robotic soft tissue surgery. Sci Transl Med. 2016 May 04;8(337):337ra64. doi: 10.1126/scitranslmed.aad9398.
- 3. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, McMorland AJ, Velliste M, Boninger ML, Schwartz AB. High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet. 2013 Feb;381(9866):557–564. doi: 10.1016/s0140-6736(12)61816-9.
- 4. Nishihara R, Moritz P, Wang S, Tumanov A, Paul W, Schleier-Smith J, Liaw R, Niknami M, Jordan MI, Stoica I. Real-Time Machine Learning: The Missing Pieces. Proceedings of the 16th Workshop on Hot Topics in Operating Systems (HotOS '17); May 7-10, 2017; Whistler, BC.
- 5. Vokinger KN, Feuerriegel S, Kesselheim AS. Continual learning in medical devices: FDA's action plan and beyond. The Lancet Digital Health. 2021 Jun;3(6):e337–e338. doi: 10.1016/S2589-7500(21)00076-5.
- 6. Petersen C, Smith J, Freimuth RR, Goodman KW, Jackson GP, Kannry J, Liu H, Madhavan S, Sittig DF, Wright A. Recommendations for the safe, effective use of adaptive CDS in the US healthcare system: an AMIA position paper. J Am Med Inform Assoc. 2021 Mar 18;28(4):677–684. doi: 10.1093/jamia/ocaa319.
- 7. Wong A, Otles E, Donnelly JP, Krumm A, McCullough J, DeTroyer-Cooley O, Pestrue J, Phillips M, Konye J, Penoza C, Ghous M, Singh K. External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients. JAMA Intern Med. 2021 Aug 01;181(8):1065–1070. doi: 10.1001/jamainternmed.2021.2626.
- 8. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020 Sep 11;3(1):118. doi: 10.1038/s41746-020-00324-0.
- 9. Wu E, Wu K, Daneshjou R, Ouyang D, Ho DE, Zou J. How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nat Med. 2021 Apr;27(4):582–584. doi: 10.1038/s41591-021-01312-x.
- 10. Higgins D, Madai VI. From Bit to Bedside: A Practical Framework for Artificial Intelligence Product Development in Healthcare. Advanced Intelligent Systems. 2020 Jul 02;2(10):2000052. doi: 10.1002/aisy.202000052.
- 11. Madai V, Higgins DC. Artificial Intelligence in Healthcare: Lost In Translation? arXiv. Preprint posted online on July 28, 2021. https://arxiv.org/pdf/2107.13454.pdf
- 12. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. The Lancet Digital Health. 2021 Mar;3(3):e195–e203. doi: 10.1016/s2589-7500(20)30292-2.
- 13. Lyell D, Coiera E, Chen J, Shah P, Magrabi F. How machine learning is embedded to support clinician decision making: an analysis of FDA-approved medical devices. BMJ Health Care Inform. 2021 Apr 14;28(1):e100301. doi: 10.1136/bmjhci-2020-100301. https://informatics.bmj.com/lookup/pmidlookup?view=long&pmid=33853863
- 14. US Public Law 94-295 (May 28, 1976), 94th Congress: An Act to amend the Federal Food, Drug, and Cosmetic Act to provide for the safety and effectiveness of medical devices intended for human use, and for other purposes. 1976. [2021-02-24]. https://www.govinfo.gov/content/pkg/STATUTE-90/pdf/STATUTE-90-Pg539.pdf
- 15. Council Directive 90/385/EEC of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices. Official Journal of the European Communities. 1990 Jun 20. [2021-02-24]. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31990L0385&from=EN
- 16. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices. Official Journal of the European Union. 2017 Apr 05. [2021-02-24]. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0745
- 17. IEC 62304:2006/AMD 1:2015 Medical device software — Software life cycle processes — Amendment 1. ISO. 2006. [2021-02-24]. https://www.iso.org/standard/64686.html
- 18. Guidance on the use of AGILE practices in the development of medical device software. AAMI TIR45:2012 (R2018). AAMI. 2012. [2021-02-24]. https://webstore.ansi.org/standards/aami/aamitir452012r2018
- 19. Dikert K, Paasivaara M, Lassenius C. Challenges and success factors for large-scale agile transformations: A systematic literature review. Journal of Systems and Software. 2016 Sep;119:87–108. doi: 10.1016/j.jss.2016.06.013.
- 20. Gilbert S, Fenech M, Idris A, Türk E. Periodic Manual Algorithm Updates and Generalizability: A Developer's Response. Comment on "Evaluation of Four Artificial Intelligence-Assisted Self-Diagnosis Apps on Three Diagnoses: Two-Year Follow-Up Study". J Med Internet Res. 2021 Jun 16;23(6):e26514. doi: 10.2196/26514. https://www.jmir.org/2021/6/e26514/
- 21. ISO/IEC TR 24028:2020 Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence. ISO. 2020 May. [2021-10-11]. https://www.iso.org/standard/77608.html
- 22. ISO/IEC TR 24027 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making. ISO. 2020 Nov. [2021-10-11]. https://www.iso.org/standard/77607.html
- 23. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. Food and Drug Administration. [2021-02-24]. https://www.fda.gov/media/145022/download
- 24. Artificial Intelligence and Machine Learning in Software as a Medical Device. Food and Drug Administration. [2021-02-24]. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
- 25. Minssen T, Gerke S, Aboy M, Price N, Cohen G. Regulatory responses to medical machine learning. J Law Biosci. 2020 May 01;7(1):lsaa002. doi: 10.1093/jlb/lsaa002. http://europepmc.org/abstract/MED/34221415
- 26. Babic B, Gerke S, Evgeniou T, Cohen IG. Algorithms on regulatory lockdown in medicine. Science. 2019 Dec 06;366(6470):1202–1204. doi: 10.1126/science.aay9547.
- 27. Cohen IG, Evgeniou T, Gerke S, Minssen T. The European artificial intelligence strategy: implications and challenges for digital health. The Lancet Digital Health. 2020 Jul;2(7):e376–e379. doi: 10.1016/s2589-7500(20)30112-6.
- 28. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback (2019). Food and Drug Administration. 2019. [2021-02-24]. https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf
- 29. "Software as a Medical Device": Possible Framework for Risk Categorization and Corresponding Considerations. IMDRF Software as a Medical Device (SaMD) Working Group. 2014 Dec 18. [2021-02-24]. http://www.imdrf.org/docs/imdrf/final/technical/imdrf-tech-140918-samd-framework-risk-categorization-141013.pdf
- 30. Legal and regulatory implications of Artificial Intelligence (AI): the case of autonomous vehicles, e-health and data mining, Brussels (2018). European Commission. 2019. [2021-02-24]. https://publications.jrc.ec.europa.eu/repository/bitstream/JRC116235/jrc116235_report_on_ai_%281%29.pdf
- 31. White Paper On Artificial Intelligence - A European approach to excellence and trust. European Commission. 2020 Feb. [2021-02-24]. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
- 32. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. EUR-Lex. 2021 Apr 21. [2021-10-11]. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
- 33. MDCG 2019-11 - Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 MDR and Regulation (EU) 2017/746 IVDR (2019). Medical Device Coordination Group. 2019 Oct. [2021-02-24]. https://ec.europa.eu/health/sites/health/files/md_sector/docs/md_mdcg_2019_11_guidance_qualification_classification_software_en.pdf
- 34. MDCG 2020-1 - Guidance on Clinical Evaluation (MDR) / Performance Evaluation (IVDR) of Medical Device Software. Medical Device Coordination Group. 2020 Mar. [2021-02-24]. https://ec.europa.eu/health/sites/health/files/md_sector/docs/md_mdcg_2020_1_guidance_clinic_eva_md_software_en.pdf
- 35. Artificial Intelligence in Healthcare. COCIR, the European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry. 2019 Apr. [2021-02-24]. https://www.cocir.org/uploads/media/COCIR_White_Paper_on_AI_in_Healthcare.pdf
- 36. IEC/DIS 62304 Health software — Software life cycle processes. ISO. 2021. [2021-02-24]. https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/07/16/71604.html
- 37. Jarman H, Rozenblum S, Huang TJ. Neither protective nor harmonized: the cross-border regulation of medical devices in the EU. Health Econ Policy Law. 2020 Jul 07;16(1):51–63. doi: 10.1017/s1744133120000158.
- 38. Wiegand T, Krishnamurthy R, Kuglitsch M, Lee N, Pujari S, Salathé M, Wenzel M, Xu S. WHO and ITU establish benchmarking process for artificial intelligence in health. The Lancet. 2019 Jul;394(10192):9–11. doi: 10.1016/s0140-6736(19)30762-7.