| Application area | Adversarial manipulation | Data poisoning | Hallucination and misinformation | Data extraction and privacy |
| --- | --- | --- | --- | --- |
| Medical diagnostics | Adversarial training and classification manipulation (eg, image classification manipulation) | Model performance degradation from poisoned training data | AI hallucination (eg, made-up diagnoses), misinformation or disinformation, and adversarial use exploitation | Data extraction through carefully crafted prompts and privacy attacks |
| Drug discovery | Adversarial training and classification manipulation | Model performance degradation from poisoned training data | AI hallucination (eg, made-up chemical compounds or protein structures), misinformation or disinformation, and adversarial use exploitation | Data extraction through carefully crafted prompts and privacy attacks |
| Virtual health assistants | Adversarial training and classification manipulation | Model performance degradation from poisoned training data | AI hallucination (eg, made-up medical advice), misinformation or disinformation, and adversarial use exploitation | Data extraction through carefully crafted prompts and privacy attacks |
| Medical research | Adversarial training and classification manipulation | Model performance degradation from poisoned training data | AI hallucination (eg, made-up findings, hypotheses, and citations), misinformation or disinformation, and adversarial use exploitation | Data extraction through carefully crafted prompts and privacy attacks |
| Clinical decision support | Adversarial training and classification manipulation | Model performance degradation from poisoned training data | AI hallucination (eg, made-up conclusions, findings, and recommendations), misinformation or disinformation, and adversarial use exploitation | Data extraction through carefully crafted prompts and privacy attacks |
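The adversarial manipulation column can be made concrete with a small illustrative sketch. The snippet below applies an FGSM-style perturbation to a diagnostic image so that a classifier's prediction may flip; the model, image tensor, labels, and epsilon value are placeholders for illustration and are not drawn from the table above.

```python
# Minimal FGSM-style sketch of adversarial classification manipulation.
# `model` is any differentiable image classifier returning logits; the
# chest X-ray names in the usage note are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` with a small perturbation that increases
    the classification loss; `epsilon` bounds the per-pixel change."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage sketch (hypothetical classifier and scan):
# adversarial_scan = fgsm_perturb(xray_model, scan, label, epsilon=0.02)
# xray_model(adversarial_scan).argmax(dim=1)  # may no longer match `label`
```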
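The data poisoning column refers to degrading model performance by corrupting the training set. One minimal, hypothetical illustration is label flipping on a small fraction of training pairs; the dataset format, class count, and poisoning fraction below are assumptions, not details from the source.

```python
# Sketch of training-data poisoning via label flipping on a small fraction
# of a (features, label) dataset; all parameters are illustrative.
import random

def flip_labels(dataset, fraction=0.05, num_classes=2, seed=0):
    """Return a copy of `dataset` with `fraction` of its labels
    reassigned to a different class."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_poison = int(len(poisoned) * fraction)
    for idx in rng.sample(range(len(poisoned)), n_poison):
        features, label = poisoned[idx]
        wrong = rng.choice([c for c in range(num_classes) if c != label])
        poisoned[idx] = (features, wrong)
    return poisoned

# Usage sketch: retrain on the poisoned pairs and compare held-out accuracy
# against a model trained on the clean data.
# poisoned_pairs = flip_labels(training_pairs, fraction=0.05)
```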