Table 4. Summary of existing studies that evaluated the medical knowledge of ChatGPT on licensing and board examinations.
| Specialty | Author (year) | Exam | Language | Model | Result |
|---|---|---|---|---|---|
| General Medicine | Kung et al. (2023) [11] | United States Medical Licensing Examination | English | GPT-3.5 | Pass |
| General Medicine | Takagi et al. (2023) [16] | Japanese Medical Licensing Examination | Japanese | GPT-4 | Pass |
| Orthopedics | Kung et al. (2023) [12] | Orthopaedic In-Training Examination | English | GPT-4 | Pass |
| Orthopedics | Kung et al. (2023) [12] | Orthopaedic In-Training Examination | English | GPT-3.5 | Fail |
| Orthopedics | Saad et al. (2023) [17] | Orthopaedic Fellowship of the Royal College of Surgeons examination | English | GPT-4 | Fail |
| Orthopedics | Massey et al. (2023) [18] | ResStudy Orthopaedic Examination Question Bank | English | GPT-4 | Fail |
| Neurosurgery | Ali et al. (2023) [19] | Neurosurgery Written Board Examinations | English | GPT-4 | Pass |
| Neurosurgery | Ali et al. (2023) [19] | Neurosurgery Written Board Examinations | English | GPT-3.5 | Pass |
| Gastroenterology | Suchman et al. (2023) [20] | American College of Gastroenterology Self-Assessment Test | English | GPT-4 | Fail |
| Radiology | Bhayana et al. (2023) [21] | Canadian Royal College and American Board of Radiology examinations | English | GPT-3.5 | Pass |