Table 6.
Central themes in GAI research and ethical considerations in PHE.
| Theme | Pointer | Details | Source |
|---|---|---|---|
| Central Concepts in AI Research | Keyword centrality | AI, intelligent tutoring systems, machine learning, deep learning, and higher education are the central keywords | Talan et al[19] |
| | Popular topics | Data science, learning analytics, computer-based learning, educational data mining | |
| AI in Health Professions Education | Current benefits | PHE currently benefits from AI advancements | Ken et al[20] |
| | Future potential | AI is set to offer more benefits to PHE in the future | |
| Ethical Considerations in AI | Ethical focus | Focus on ethics in the use of AI systems within PHE | |
| | Ethical issues | Data gathering, anonymity, privacy, consent, data ownership, security, bias, transparency, responsibility, autonomy, beneficence | |
| | Ethical guidance | The guide presents key concepts, explains their importance, offers coping strategies, and suggests further reading | |
| | Proactive measures | PHE teachers and administrators are encouraged to be proactive in the ethical use of AI | |
| Educational strategies | LLMs as teaching assistants | LLMs can provide summaries, presentations, translations, explanations, and guides in customizable formats, enhancing personalized education. They are also suitable for creating interactive simulations, such as patient history-taking practice for medical students | Clusmann et al[21] |
| Critical thinking concerns | Impact on student skills | LLMs might negatively affect students' critical thinking and creativity by providing easy answers, risking the externalization of factual knowledge and medical reasoning. Transparent regulation and differentiation between LLM-generated content and student work are necessary | |
| Responsible use of LLMs | Education and interaction guidelines | Guidelines for LLM use are essential to prevent misuse. Students in medical education should receive an introduction to LLMs, learning about biases, limitations, and proper prompt engineering to avoid misinformation | |
| AI performance on exams | Comparison with AMBOSS users | ChatGPT scored in the 30th percentile on Step 1 without the Attending Tip and in the 66th percentile with it; on Step 2, in the 20th and 48th percentiles, respectively. Accuracy decreased as question difficulty increased | Gilson et al[22] |
| | NBME vs AMBOSS performance | ChatGPT performed better on Step 1 than on Step 2, and better on NBME questions than on AMBOSS questions at both levels. It outperformed GPT-3 and InstructGPT on all data sets | |
| | Logical use of information | ChatGPT provided logical explanations, drawing on internal information for most responses and on external information significantly more often in correct responses | |