Abstract
Introduction
A novel chatbot mobile app for training undergraduate medical students’ clinical history taking skills was developed in 2021. Students were able to take a clinical history from the virtual patient for bedside teaching. A case-control study was conducted to evaluate the effectiveness of learning with the chatbot mobile app versus conventional bedside teaching with real patients.
Methods
132 final year medical students were randomized into two groups: conventional bedside teaching with a clinical history taken from a real patient, and bedside teaching with a clinical history taken from the chatbot. Independent blinded assessment of students’ history taking skills was conducted, and students’ performance was assessed with a standardized marking scheme.
Results
The median age was 23 years (range 21–30). There were 62 female and 70 male students.
64 students were randomized into the conventional group and 68 students into the chatbot group. Baseline demographic data were comparable between the two groups.
Blinded assessment showed that students’ performance in clinical history taking was comparable between the conventional group and the chatbot group (p > 0.05).
Conclusion
Given the promising results demonstrated in this study, we believe that training of history taking skills with a chatbot is a feasible alternative to conventional bedside teaching.
Keywords: Clinical education, Computers, Simulation, New technology
1. Introduction
Undergraduate medical education has been severely affected by the COVID-19 outbreak. Many clinical teaching activities were replaced by virtual or online formats, and these experiences were subsequently published in peer-reviewed journals [1, 2]. While lectures can easily be replaced by pre-recorded videos or delivered live through telecommunication software (e.g. Zoom), clinical bedside teaching, including the training of history taking skills with patients, cannot easily be replaced by online teaching.
Conventionally, clinical history taking skills are acquired through student-patient interaction: medical students visit hospital wards and talk to patients, then report and summarize the clinical history to the tutor, who gives comments (i.e. conventional bedside clinical teaching). During the COVID-19 pandemic in Hong Kong, medical students were not allowed to visit clinical areas because of the infection risk and were therefore unable to see patients in the wards.
Chatbot apps have been widely used by commercial companies such as banks, hotels and airlines, but their application to training medical students’ history taking skills has not been described in the medical education community. We therefore developed a novel clinical history taking chatbot app (Bennie and the Chats) for bedside clinical teaching, which has been rolled out to final year undergraduate medical students in our institution since March 2021. Medical students were asked to download the chatbot app to their own mobile device (e.g. mobile phone or tablet). They could then collect a clinical history by typing questions directly to the chatbot. At present, voice recognition is not available in this chatbot mobile app, so students need to type their questions manually.
Nevertheless, medical students were able to interact with the virtual patient Bennie (the chatbot) and take a clinical history by asking questions. Bennie gives relevant answers according to a pre-defined keyword database designed by the tutors (MC & BC). Students were then able to discuss the case scenario and report the clinical history to the tutor (MC & BC), as in conventional bedside clinical teaching in the pre-COVID-19 era.
This novel teaching method was introduced in March 2021, and here we present our initial experience of using a chatbot mobile app to train clinical history taking skills in final year undergraduate medical students.
2. Materials and methods
2.1. Development of the “chatbot app” (also named Bennie and the Chats)
The chatbot app (Bennie and the Chats) was co-developed by authors M Co and TH Yuen in 2020, using several software components from Google. At the front end, student input is handled by “Actions on Google” through a conversational user interface. The input message from the student is processed by Dialogflow to understand the meaning of the question. Dialogflow is a natural language processing platform that uses machine learning to understand how people talk and to extract the intent of a conversation. Dialogflow then finds a suitable answer from a database.
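To make this pipeline concrete, the sketch below shows how a client could forward a student’s typed question to a Dialogflow agent and read back the matched answer. This is a minimal illustration using the public @google-cloud/dialogflow Node.js client rather than the app’s actual source code; the project and session identifiers are placeholders.

```typescript
import { SessionsClient } from '@google-cloud/dialogflow';

// Placeholder identifiers; the real agent's project ID is not published.
const projectId = 'bennie-chatbot-demo';
const sessionId = 'student-session-001'; // one session per student conversation

// Send a student's free-text question to the agent and return its reply.
async function askVirtualPatient(question: string): Promise<string> {
  const sessionClient = new SessionsClient();
  const sessionPath = sessionClient.projectAgentSessionPath(projectId, sessionId);

  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: { text: { text: question, languageCode: 'en' } },
  });

  // Dialogflow matches the question to an intent and returns the
  // pre-defined answer configured for that intent.
  return response.queryResult?.fulfillmentText ?? '';
}

// Example: askVirtualPatient('What is your main complaint?')
//   could resolve to 'I have a right breast lump'.
```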
In “Actions on Google”, the user interface is designed as a simple flow diagram. User input on the interface can be detected, and the transitions between different stages can easily be designed in the Actions Console.
The app is equipped with artificial intelligence and natural language processing, so the chatbot gives consistent answers regardless of the words or phrases used by the student. For example, when a student asks “What is your main complaint?”, the app answers with the pre-defined phrase, for example “I have a right breast lump” (Figure 1). Even when another student enters a different question such as “What is your problem?”, the chatbot will still answer “I have a right breast lump”. Dialogflow enables the developer to define synonyms for all keywords used in this app; an illustrative sketch follows Figure 1.
Figure 1.
Two separate screen captures of the conversation between a medical student and Chatbot Bennie, illustrating the student-chatbot interaction (right) and the chatbot's response to a synonym (left).
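The synonym behaviour shown in Figure 1 can be sketched as a Dialogflow intent whose training phrases cover the alternative wordings and whose response is the pre-defined answer. The snippet below is an illustrative assumption of how such an intent could be created through the Dialogflow API; the intent name and the extra phrasing are hypothetical, and the authors’ actual agent configuration is not published.

```typescript
import { IntentsClient } from '@google-cloud/dialogflow';

// Create an illustrative "chief complaint" intent; any of the training
// phrases below would elicit the same pre-defined answer.
async function createChiefComplaintIntent(projectId: string): Promise<void> {
  const intentsClient = new IntentsClient();
  const agentPath = intentsClient.projectAgentPath(projectId);

  await intentsClient.createIntent({
    parent: agentPath,
    intent: {
      displayName: 'ask.chief.complaint', // hypothetical name
      trainingPhrases: [
        { type: 'EXAMPLE', parts: [{ text: 'What is your main complaint?' }] },
        { type: 'EXAMPLE', parts: [{ text: 'What is your problem?' }] },
        { type: 'EXAMPLE', parts: [{ text: 'What brings you in today?' }] },
      ],
      // The consistent answer returned regardless of phrasing.
      messages: [{ text: { text: ['I have a right breast lump'] } }],
    },
  });
}
```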
At the back end, we implemented the fulfilment, which contains the logic to construct a dynamic conversational response. The Actions Console connects the app to a cloud function through the webhook interface.
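A minimal fulfilment sketch is shown below, assuming the common Actions on Google pattern of a Firebase Cloud Function built with the @assistant/conversation library; the handler name and reply are placeholders rather than the app’s actual code.

```typescript
import { conversation } from '@assistant/conversation';
import * as functions from 'firebase-functions';

const app = conversation();

// Handler referenced from the Actions Console; invoked when the matched
// intent needs a dynamically constructed reply rather than a static one.
app.handle('reportInvestigations', (conv) => {
  // In a fuller implementation the reply could depend on conversation
  // state, e.g. which investigations the student has already asked about.
  conv.add('I had an ultrasound scan of my right breast last month.');
});

// Expose the fulfilment as an HTTPS cloud function; the Actions Console
// points its webhook at this function's URL.
export const fulfillment = functions.https.onRequest(app);
```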
This chatbot app is pre-installed with case scenarios from different surgical subspecialties, including breast surgery, colorectal surgery, endocrine surgery, esophageal and upper gastrointestinal surgery, head and neck surgery, hepatobiliary surgery, urology, and vascular surgery.
2.2. Introduction of chatbot app to medical curriculum
The undergraduate medical curriculum in Hong Kong comprises three years of pre-clinical teaching followed by three years of clinical teaching. From the fourth year onwards, medical students are required to acquire clinical history taking and physical examination skills.
Conventional teaching of clinical history taking involves face-to-face patient clerking: medical students take a clinical history from the patient and then present the medical history to the assigned clinical tutor in the ward. The chatbot app was rolled out to final year medical students in Hong Kong in 2021. Students were advised to take a clinical history from the virtual patient in the chatbot, followed by an interactive tutorial with the assigned tutor.
2.3. Research question and methodology
This study aims to evaluate the feasibility and efficacy of bedside clinical teaching using the chatbot app, based on students’ performance in clinical history presentation.
Approval for the curriculum adjustments was sought from the Center for Education and Training, Department of Surgery, University of Hong Kong. 132 final year medical students were recruited and randomized into two groups: (A) conventional bedside teaching with a clinical history taken from a real patient, and (B) bedside teaching with a clinical history taken from the chatbot (virtual patient Bennie). Independent blinded assessment of students’ history taking skills was conducted.
All students were given 45 min to gather a history from either a genuine patient or the chatbot app (virtual patient), followed by a 45 min discussion of the clinical scenario with the assigned tutor.
Clinical histories taken by students were assessed for comprehensiveness in 10 aspects of history taking: chief complaint, duration of symptoms, history of present illness, associated symptoms, past medical history, past surgical history, family history, risk factors, investigations performed, and social history. The number of students who were unable to obtain information in each individual aspect was recorded.
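For concreteness, the 10-aspect checklist can be represented as one record per student, as in the hypothetical sketch below; the marking form itself is not reproduced here, so the field names are illustrative.

```typescript
// Hypothetical record of one student's assessment under the 10-aspect
// marking scheme; true means the aspect was obtained accurately.
interface HistoryChecklist {
  chiefComplaint: boolean;
  durationOfSymptoms: boolean;
  historyOfPresentIllness: boolean;
  associatedSymptoms: boolean;
  pastMedicalHistory: boolean;
  pastSurgicalHistory: boolean;
  familyHistory: boolean;
  riskFactors: boolean;
  investigations: boolean;
  socialHistory: boolean;
}

// Count the students who missed a given aspect, as tabulated in Table 2.
function countMissed(
  records: HistoryChecklist[],
  aspect: keyof HistoryChecklist,
): number {
  return records.filter((record) => !record[aspect]).length;
}
```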
Students randomized to the chatbot group were invited to rate the chatbot system on user friendliness, accuracy of keyword identification, student-chatbot interaction, efficiency of learning and overall learning experience, each on a Likert scale of 1–10.
3. Results
A total of 132 medical students were recruited into the study; the median age was 23 years (range 21–30). There were 62 female and 70 male students. All students were in the final year of the undergraduate medical curriculum.
64 students were randomized into the conventional group and 68 students into the chatbot group. The median age was 22 (range 21–30) in the conventional group and 23 (range 21–26) in the chatbot group (p = 0.31). 5 (7.8%) students from the conventional group and 6 (8.8%) students from the chatbot group were post-graduate students (i.e. had already obtained a bachelor's or higher degree) (p = 0.83). 4 (6.3%) students from the conventional group and 4 (5.9%) students from the chatbot group had a history of a distinction (merit) award in previous examinations (p = 0.93). 5 (7.8%) students from the conventional group and 5 (7.4%) students from the chatbot group had a history of failure in previous examinations (p = 0.92). Baseline demographic data are summarized in Table 1.
Table 1.
Baseline demographic data of each group of students.
| | Conventional bedside (N = 64) | Chatbot (N = 68) | P-value |
|---|---|---|---|
| Median age (Range) | 22 (21–30) | 23 (21–26) | 0.31 |
| Male gender | 32 (50%) | 38 (55.9%) | 0.50 |
| Post-graduate | 5 (7.8%) | 6 (8.8%) | 0.83 |
| Distinction (Merit) in previous exams | 4 (6.3%) | 4 (5.9%) | 0.93 |
| Failure in previous exams | 5 (7.8%) | 5 (7.4%) | 0.92 |
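As a worked illustration of these between-group comparisons (assuming Pearson's chi-square test on the underlying 2 × 2 table, which reproduces the reported p-value), take the postgraduate row of Table 1, with a = 5 and b = 59 in the conventional group, c = 6 and d = 62 in the chatbot group, and N = 132:

$$
\chi^2 = \frac{N\,(ad - bc)^2}{(a+b)(c+d)(a+c)(b+d)}
       = \frac{132\,(5 \cdot 62 - 59 \cdot 6)^2}{64 \cdot 68 \cdot 11 \cdot 121}
       \approx 0.044,
$$

which, with one degree of freedom, gives p ≈ 0.83, matching the value in Table 1.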
Students’ performance in clinical history taking was comparable between the conventional group and the chatbot group (Table 2). All students from both groups were able to obtain an accurate clinical history for chief complaint, duration of symptoms, history of present illness, associated symptoms, past medical history and family history. 8 (12.5%) students from the conventional group and 12 (17.6%) students from the chatbot group were unable to obtain an accurate past surgical history (p = 0.41). 2 (3.1%) students from the conventional group and 2 (2.9%) students from the chatbot group were unable to obtain accurate risk factors (p = 0.95). 6 (9.4%) students from the conventional group and 6 (8.8%) students from the chatbot group were unable to obtain a history of investigations (p = 0.91), and 2 (3.2%) students from the conventional group and 3 (4.7%) students from the chatbot group were unable to obtain an accurate social history (p = 0.70).
Table 2.
Number of students who were unable to take the correct history in each group.
| | Conventional bedside (N = 64) | Chatbot (N = 68) | p-value |
|---|---|---|---|
| Chief complaint | 0 (0%) | 0 (0%) | 1.00 |
| Duration of symptoms | 0 (0%) | 0 (0%) | 1.00 |
| History of present illness | 0 (0%) | 0 (0%) | 1.00 |
| Associated symptoms | 0 (0%) | 0 (0%) | 1.00 |
| Past medical history | 0 (0%) | 0 (0%) | 1.00 |
| Past surgical history | 8 (12.5%) | 12 (17.6%) | 0.41 |
| Family history | 0 (0%) | 0 (0%) | 1.00 |
| Risk factors | 2 (3.1%) | 2 (2.9%) | 0.95 |
| Investigations | 6 (9.4%) | 6 (8.8%) | 0.91 |
| Social history | 2 (3.2%) | 3 (4.7%) | 0.70 |
| Overall | 14 (21.9%) | 22 (32.4%) | 0.18 |
Students’ feedback on clinical history taking from the chatbot system was generally positive. Median Likert scores for user friendliness, keyword identification, student-chatbot interaction, efficiency of learning and overall experience were 8 (range 6–10), 7 (range 5–9), 7 (range 5–9), 8 (range 6–9) and 8 (range 6–9), respectively (Table 3).
Table 3.
Students’ evaluation of the Bennie chatbot system (Likert scale of 1–10).
| Area of evaluation | Median student rating out of 10 (range) |
|---|---|
| User friendliness | 8 (6–10) |
| Keyword identification | 7 (5–9) |
| Interaction | 7 (6–9) |
| Efficiency of learning | 8 (6–9) |
| Overall learning experience | 8 (6–9) |
4. Discussion
Distance learning has been widely adopted for teaching medical students during the COVID-19 pandemic. Medical students were not allowed to enter hospital premises during episodes of the outbreak. Lectures were replaced by pre-recorded videos, and clinical skills teaching was replaced by live demonstrations [1, 2]. However, the learning of history taking skills, which involves student-patient interaction, cannot easily be replaced by online lectures. This has caused significant stress to undergraduate medical students [3].
Prior to the pandemic, medical students learnt and practiced clinical history taking by talking to patients in the wards or clinics. The COVID-19 pandemic provided an opportunity to explore alternative ways for medical students to practice clinical history clerking when face-to-face clinical teaching was not allowed [4, 5]. For example, simulated patients have been used to train communication skills in health professional education [6, 7].
Computer chatbots have been widely used by commercial companies, and they have also been described as online tutors for medical students [8, 9]. However, the use of a chatbot system to facilitate clinical history clerking among medical students has not previously been described. This new training method allows efficient training of medical students’ history taking skills without geographical or time constraints, and the contents of the mobile app can be updated regularly. The aim of this mobile app is to let medical students train their history taking skills through conversations with a virtual patient.
Distance learning will remain an important teaching modality for medical students even when the pandemic is over [10, 11, 12]. Medical students will be able to take a clinical history from the virtual patient Bennie (the chatbot mobile app) without time or geographical restriction. In addition, students will be able to interact with chatbots loaded with pre-defined rare clinical scenarios that are not commonly seen in daily clinical practice.
This study was conducted on final year medical students, most of whom had already acquired the necessary clinical history taking skills from year 4 of the medical curriculum onwards. However, the aim of the study was to evaluate the feasibility of using the chatbot app for clinical bedside teaching. All students were able to gather the necessary clinical history from the chatbot app, suggesting that clinical teaching with the chatbot app is a feasible alternative to conventional bedside teaching with real patients.
A pilot questionnaire on students' satisfaction with the chatbot app was adopted in this study; we recognize this as a potential limitation. The System Usability Scale (SUS) would have been a validated alternative to the pilot questionnaire used in this study [13]. However, the SUS is a generic questionnaire and does not evaluate specific areas of concern such as students’ satisfaction with keyword identification and chatbot-user interaction. Nevertheless, feedback from medical students has been positive, although some students commented on the limited keyword identification and student-chatbot interaction of the chatbot system. These limitations can be addressed by updating the keyword database of the chatbot system.
5. Conclusion
Given the promising results demonstrated in this study, we believe that training of history taking skills with a chatbot is a feasible alternative to conventional bedside teaching.
Declarations
Author contribution statement
Michael Co: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Tsz Hon John Yuen: Conceived and designed the experiments; Performed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Ho Hung Cheung: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data will be made available on request.
Declaration of interests statement
The authors declare no conflict of interest. Dr. Michael Co serves as an unpaid Guest Editor of the Journal. He is not involved in the peer-review nor editorial process of this article.
Additional information
No additional information is available for this paper.
Acknowledgements
The authors would like to thank colleagues from the Department of Computer Science who have helped in the development of the novel chatbot mobile app “Bennie and the Chats”. They would also like to thank all medical students who have participated in the study. This study is dedicated to authors Co and Yuen's alma mater Salesian English School, Hong Kong.
Footnotes
This article is a part of the "Medical Education during the COVID-19 pandemic" Special issue.
References
1. Co M., Chu K.M. Distant surgical teaching during COVID-19 – a pilot study on final year medical students. Surg. Pract. 2020. doi: 10.1111/1744-1633.12436.
2. Co M., Chung P.H.Y., Chu K.M. Online teaching of basic surgical skills to medical students during the COVID-19 pandemic: a case-control study. Surg. Today. 2021. doi: 10.1007/s00595-021-02229-1.
3. Co M., Ho M.K., Bharwani A.A., Yan Chan V.H., Yi Chan E.H., Poon K.S. Cross-sectional case-control study on medical students' psychosocial stress during COVID-19 pandemic in Hong Kong. Heliyon. 2021;7(11). doi: 10.1016/j.heliyon.2021.e08486.
4. Alsoufi A., Alsuyihili A., Msherghi A., Elhadi A., Atiyah H., Ashini A., Ashwieb A., Ghula M., Ben Hasan H., Abudabuos S., et al. Impact of the COVID-19 pandemic on medical education: medical students' knowledge, attitudes, and practices regarding electronic learning. PLoS One. 2020;15(11). doi: 10.1371/journal.pone.0242905.
5. Liu C.H., You-Hsien Lin H. The impact of COVID-19 on medical education: experiences from one medical university in Taiwan. J. Formos. Med. Assoc. 2021. doi: 10.1016/j.jfma.2021.02.016.
6. ElGeed H., El Hajj M.S., Ali R., Awaisu A. The utilization of simulated patients for teaching and learning in the pharmacy curriculum: exploring pharmacy students' and recent alumni's perceptions using mixed-methods approach. BMC Med. Educ. 2021;21(1):562. doi: 10.1186/s12909-021-02977-1.
7. Berman N.B., Durning S.J., Fischer M.R., Huwendiek S., Triola M.M. The role for virtual patients in the future of medical education. Acad. Med. 2016;91(9):1217–1222. doi: 10.1097/ACM.0000000000001146.
8. Kazi H., Chowdhry B.S., Memon Z. MedChatBot: an UMLS based chatbot for medical students. Int. J. Comput. Appl. 2012;55(17):1–5.
9. Kaur A., Singh S., Chandan J.S., Robbins T., Patel V. Qualitative exploration of digital chatbot use in medical education: a pilot study. Digit Health. 2021;7. doi: 10.1177/20552076211038151.
10. Kim J.W., Myung S.J., Yoon H.B., Moon S.H., Ryu H., Yim J.J. How medical education survives and evolves during COVID-19: our experience and future direction. PLoS One. 2020;15(12). doi: 10.1371/journal.pone.0243958.
11. Papapanou M., Routsi E., Tsamakis K., Fotis L., Marinos G., Lidoriki I., Karamanou M., Papaioannou T.G., Tsiptsios D., Smyrnis N., et al. Medical education challenges and innovations during COVID-19 pandemic. Postgrad. Med. J. 2021. doi: 10.1136/postgradmedj-2021-140032.
12. Co M., Cheung K.Y.C., Cheung W.S. Distance education for anatomy and surgical training – a systematic review. Surgeon. 2021. doi: 10.1016/j.surge.2021.08.001. (In press).
13. Grier R.A., Bangor A., Kortum P., Peres S.C. The system usability scale: beyond standard usability testing. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2013;57(1):187–191.

