Table 1.
Title | Proposed regulatory framework for modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion paper and request for feedback (29) | Ethics guidelines for trustworthy AI (30) |
---|---|---|
Published by/Date of publication | U.S. Food and Drug Administration/April 2, 2019 | Independent high-level expert group on artificial intelligence set up by the European Commission/April 8, 2019 |
Key content (excerpts) | - Establishment of quality systems and Good Machine Learning Practices (GMLP), including the use of only relevant data, the separation of training, tuning, and test datasets, and transparency of the output - Conduct of an initial pre-market review to assure safety and effectiveness - Monitoring of AI devices based on the development, validation, and execution of algorithm changes in accordance with an "Algorithm Change Protocol" - Post-market real-world performance reporting to maximize safety and effectiveness | - Ethical principles as foundations of trustworthy AI (respect for human autonomy, prevention of harm, fairness, and explicability) - Seven key requirements for realizing trustworthy AI [(1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination, and fairness, (6) environmental and societal well-being, and (7) accountability] - Assessment of trustworthy AI (an assessment list for developing, deploying, or using AI systems) |