Table 2. Security and privacy threats by AI application category and data source.

| AI^a categories | Data source | Unintentional (integrity and privacy threats) | Intentional (availability and integrity attacks) |
| --- | --- | --- | --- |
| Medical diagnostics | | 1-4: Incorrect, missing, or incomplete patient data or images owing to hardware or software errors, measurement and label errors, and human errors (eg, distorted images, partial images, and mismatched data, laboratory results, or images); 1-4: Data integration errors when integrating data from various sources (eg, mislabeled data attributes and patient information mismatched with their images and laboratory results); 1-4: Organic biases owing to the nature of the disease and the demographics of patients, and selection biases arising from human biases; 5: Annotation errors and biases in all sources of data owing to expert mistakes and human biases; 1-4: Errors and bias in synthetic data or images; 1-7: Privacy breaches (eg, reidentification of patients; see the record-linkage sketch after the table) | 1-3: Software tampering, medical sensor spoofing, medical equipment tampering or poisoning (eg, CT^b and MRI^c scanning equipment tampering), medical image tampering (eg, image scaling, copy-move tampering, sharpening, blurring, and resampling), generative fake data and images (eg, generated fake CT and MRI images that are undetectable by both human experts and generative AI), and medical data tampering or poisoning (eg, noise injection and maliciously synthesized data; see the poisoning sketch after the table); 5: Intentional annotation errors |
| Drug discovery | | 1-2: Duplication issues (eg, sequence redundancies or sequence duplications with minor variations; see the deduplication sketch after the table), structural errors, and assembly or carry-over errors owing to poor quality of source data; 1-6: Data integration errors when integrating data from various sources; 4-5: Wrong findings and errors in trials; 1-6: Missing and incomplete data, missing or incorrect annotations, and human errors; 1-6: Errors and bias in synthetic data; 6: Incorrect or inaccurate models; 1-7: Privacy breaches (eg, reidentification of patients) | 1-5: Genomic data tampering or poisoning (eg, maliciously forged and injected structures or sequences, analyses, and findings); 1-5: Intentional annotation errors; 6: Model tampering |
| Virtual health assistants | | 1-5: Incorrect, missing, or incomplete patient data; 1-7: Data integration errors when integrating data from various sources; 1-7: Organic biases owing to the nature of the disease and the demographics of patients, and selection biases arising from human biases; 2: Errors owing to unknown fraudulent claims; 6: Incorrect or inaccurate models; 5-7: Errors and bias in synthetic data and AI hallucination; 1-7: Privacy breaches (eg, reidentification of patients) | 1-7: Data or record tampering or poisoning (eg, noise injection using maliciously synthesized data, analyses, and findings); 1-7: Intentional annotation errors; 1-7: AI hallucination |
| Medical research | | 1-7: All the errors and biases mentioned in the cells above could be applicable | 1-7: All the attacks mentioned in the cells above could be applicable |
| Clinical decision support | | 1-7: All the errors and biases mentioned in the cells above could be applicable | 1-7: All the attacks mentioned in the cells above could be applicable |

^a AI: artificial intelligence.
^b CT: computed tomography.
^c MRI: magnetic resonance imaging.
^d EHR: electronic health record.
^e NIH: National Institutes of Health.
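
The intentional attacks above repeatedly involve data tampering or poisoning via noise injection and maliciously synthesized records. As a minimal, purely illustrative sketch (not taken from the article), the Python snippet below builds a synthetic two-class cohort and shows how injecting fabricated, mislabeled records can degrade a deliberately simple nearest-centroid classifier; the data, feature counts, injection size, and model choice are all assumptions made for the example.

```python
# Illustrative sketch only (not from the article): maliciously synthesized,
# mislabeled records injected into training data (a poisoning attack) shift
# what a simple diagnostic model learns. All values below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic cohort: 5 numeric features per patient, two classes (0 = benign, 1 = disease).
X_train = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(1.5, 1.0, (n, 5))])
y_train = np.array([0] * n + [1] * n)
X_test = np.vstack([rng.normal(0.0, 1.0, (n, 5)), rng.normal(1.5, 1.0, (n, 5))])
y_test = np.array([0] * n + [1] * n)

def fit_and_score(X, y):
    """Fit a nearest-centroid classifier on (X, y) and score it on the clean test set."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    preds = (np.linalg.norm(X_test - c1, axis=1) < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return (preds == y_test).mean()

# Poisoning: inject 200 fabricated records with exaggerated disease-like
# feature values but labeled benign, dragging the benign centroid toward
# the disease class and blurring the decision boundary.
X_fake = rng.normal(4.0, 0.5, (200, 5))
X_poisoned = np.vstack([X_train, X_fake])
y_poisoned = np.concatenate([y_train, np.zeros(200, dtype=int)])

print("test accuracy, clean training data   :", fit_and_score(X_train, y_train))
print("test accuracy, poisoned training data:", fit_and_score(X_poisoned, y_poisoned))
```

Note that the attack touches neither the model nor the test data, only the records the model is trained on, which is why the table files it under data tampering or poisoning rather than model tampering.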
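
The drug discovery row lists duplication issues (sequence redundancies and near-duplicate sequences) as an unintentional integrity threat. The deduplication sketch below is again only illustrative: the sequences are made up, the 90% identity threshold is arbitrary, and a naive position-wise comparison stands in for the alignment a real pipeline would perform.

```python
# Illustrative sketch only (not from the article): flag exact and near-duplicate
# sequences (redundancies with minor variations) before they bias a corpus.
# Sequences and the 90% identity threshold are arbitrary example choices.
from itertools import combinations

def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    if len(a) != len(b):
        return 0.0  # a real pipeline would align sequences first; this sketch skips that
    return sum(x == y for x, y in zip(a, b)) / len(a)

sequences = {
    "seq1": "ATGCCGTAACGTTAGC",
    "seq2": "ATGCCGTAACGTTAGC",  # exact duplicate of seq1
    "seq3": "ATGCCGTTACGTTAGC",  # single-base variant of seq1
    "seq4": "TTACGGCATTGCAACG",  # unrelated
}

THRESHOLD = 0.90
for (id_a, a), (id_b, b) in combinations(sequences.items(), 2):
    score = identity(a, b)
    if score >= THRESHOLD:
        print(f"{id_a} vs {id_b}: {score:.1%} identity -> possible redundancy")
```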
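
Several cells cite privacy breaches through patient reidentification. The last sketch, built on entirely fictional records, illustrates the linkage mechanism behind such breaches: quasi-identifiers (ZIP code, birth year, and sex) in a supposedly de-identified extract are matched against a public directory, and a unique match reveals the patient behind a diagnosis.

```python
# Illustrative sketch only (not from the article): reidentification by linking
# quasi-identifiers in a "de-identified" extract to a public directory.
# Every record below is fictional.

deidentified_records = [
    {"zip": "02139", "birth_year": 1956, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

public_directory = [  # eg, a voter roll or scraped social-media profiles
    {"name": "Jane Roe", "zip": "02139", "birth_year": 1956, "sex": "F"},
    {"name": "John Doe", "zip": "02139", "birth_year": 1990, "sex": "M"},
    {"name": "Ann Poe", "zip": "02141", "birth_year": 1956, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

for record in deidentified_records:
    key = tuple(record[q] for q in QUASI_IDENTIFIERS)
    matches = [p["name"] for p in public_directory
               if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
    if len(matches) == 1:  # a unique match reidentifies the patient
        print(f"{matches[0]} is linked to diagnosis: {record['diagnosis']}")
```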