International Journal of Environmental Research and Public Health. 2022 Jul 18;19(14):8735. doi: 10.3390/ijerph19148735

Using a Virtual Patient via an Artificial Intelligence Chatbot to Develop Dental Students’ Diagnostic Skills

Ana Suárez 1, Alberto Adanero 2,*, Víctor Díaz-Flores García 1, Yolanda Freire 1, Juan Algar 2
Editor: Eitan Mijiritsky
PMCID: PMC9319956  PMID: 35886584

Abstract

Knowing how to diagnose effectively and efficiently is a fundamental skill that a good dental professional must acquire. The more clinical cases students work through, the better their performance with patients will be. In this sense, virtual patients driven by artificial intelligence offer a controlled, stimulating, and safe environment for students. To assess student satisfaction after interacting with an artificial intelligence chatbot that recreates a virtual patient, a descriptive cross-sectional study was carried out in which a virtual patient was created in the form of an AI chatbot and presented to fourth- and fifth-year dental students. After several weeks of interacting with the AI, the students were given a survey to record their assessment. A total of 193 students participated. A large majority of the students were satisfied with the interaction (mean 4.36); fifth-year students rated the interaction better and showed higher satisfaction values, and students who reached a correct diagnosis rated this technology more positively. Our research suggests that incorporating this technology into dental curricula would be positively valued by students and would also help keep their training aligned with new technological developments.

Keywords: artificial intelligence, chatbot, virtual patient, diagnosis, dental students

1. Introduction

Diagnosis is the foundation on which all medical treatments are based. Making a correct, effective, and efficient diagnosis is a fundamental skill that dental students must acquire to be good practitioners. Diagnostic learning in the undergraduate curriculum can be effectively reinforced through repeated practice of clinical cases with subsequent feedback from faculty, as well as by encouraging self-evaluation so that students take responsibility for their deficiencies [1,2].

During undergraduate training it is common to focus on elaborate clinical cases in which trainees must rely on several diagnostic tests before they can make their diagnostic judgment. However, it has been questioned whether an extremely detailed anamnesis may be counterproductive if trainees get lost in irrelevant details [3]. In fact, authors such as Bordage [4] urge practice with more focused cases based on important discriminative symptoms, so that students can practice with a larger number of clinical cases, a fundamental requirement for acquiring diagnostic competence [2].

In dental education, as in other medical disciplines, much of students’ professional development occurs when they begin to interact with patients [5], i.e., when they begin to develop interpersonal communication. However, patients who make good teaching cases are not always available for all students, which limits the number of cases each student can work with [5]. For this reason, in recent years the use of simulation to develop students’ psychomotor skills has become standard in dental education, because it allows them to follow an appropriate learning curve in a controlled environment that is less stressful than the clinic [6].

Simulations that recreate interactions with patients, such as role-plays with teachers, patient-instructors, or standardized patients, are already commonly used in dental schools [7]. They are perceived very positively by students because of their similarity to professional practice [8] and also improve students’ realistic self-assessment [7]. However, simulating personal interaction with a standardized patient requires a high level of planning and training from the organizers [9], which can make it difficult to run such simulations regularly, and unforeseen variables outside the original script can cause the simulation to fail.

In this sense, virtual patients (VPs) are part of the integration of new technologies into patient simulation. They could allow students to practice more clinical cases, imprinting knowledge more effectively [10], while making case planning easier for teachers and requiring less budget and infrastructure [11]. With VPs, students can learn with greater autonomy [12], in a strategic and self-reflective way, with the added advantage of the ubiquity provided by technology [13].

VPs are, therefore, an excellent complement to interaction with real patients [14] when direct contact with the patient is not yet possible [10], whether because the student is not yet sufficiently prepared or because of situations such as the COVID-19 pandemic, and they also allow the recreation of clinical cases that are unusual in daily practice [15].

In general, VPs are usually well received by students because of the advantages pointed out above [16], but they are not free of limitations, such as a disconnect between the available VP programs and the needs of educators [17], or the fact that VPs usually focus on a single pathology while, in reality, different pathologies can coexist [18]. Moreover, it should be taken into account that, according to different studies [19,20], students prefer certain features in VP design such as relevance, an adequate level of difficulty, feedback, high interactivity, and, above all, realism [16]. In this sense, artificial intelligence (AI), defined as technology that uses machines to mimic intelligent human behavior [21], offers a range of possibilities for the development of VPs, because AI allows a computer system to perform perceptual processes typical of a human being [22,23,24], bringing more realism to the interaction with the VP; it is also one of the most promising areas of medicine [25].

In recent years, it has been observed that young people spend less time learning and more time on their cell phones [26]. In this context, chatbots, or conversational agents accessed through an instant messaging service, are presented in the literature as an application of the emerging field of AI [27] that could attract students’ attention and, therefore, be an interesting alternative for developing VPs [6,28,29].

In relation to education, although advances in clinical dentistry have been adapting to digital technological developments spanning diagnosis and treatment [30,31], more academic research is needed on the impact of these digital technologies in clinical practice, with special attention to the ethical issues that may arise and to the need for dental educators to integrate them into the curriculum [31]. The integration of technology into dental education also makes it possible to improve patient safety, as it allows practice in scenarios in which the health of a real patient is not compromised [32].

In the specific field of dentistry, some works [6,32,33,34] investigate the use of VPs, but no studies were found that integrated VPs with AI. For all these reasons, the objective of the present study was the creation and assessment of a VP, delivered through an AI chatbot, for developing dental students’ diagnostic skills in pulp pathology.

2. Materials and Methods

The present descriptive cross-sectional study was approved by the research committee of the Universidad Europea de Madrid (CIPI/22.142).

2.1. Participants

Students in the 4th and 5th year of the degree in dentistry at the Universidad Europea de Madrid who were taking practical courses with patients participated in the study. All the students who wished to take part in the study had to sign an informed consent form in which they were informed about the study and were assured that their data would be treated anonymously.

2.2. Sample Size

With a total of 457 fourth- and fifth-year dentistry students enrolled in the subjects with clinical practice at the Universidad Europea de Madrid, the formula shown in Figure 1 was applied to calculate the sample size. With a confidence level of 95% and a margin of error of 6%, a minimum of 169 students was needed for the sample to be representative.

Figure 1. Sample size formula. N: population size; z: z-score; e: margin of error (percentage in decimal form).
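The formula in Figure 1 appears as an image in the original article; a standard finite-population sample size formula consistent with the reported values (N = 457, z = 1.96 for 95% confidence, e = 0.06, and the usual maximum-variance assumption p = 0.5, which is not stated explicitly in the text) reproduces the reported minimum:

n = \frac{N z^{2} p(1-p)}{e^{2}(N-1) + z^{2} p(1-p)} = \frac{457 \times 1.96^{2} \times 0.25}{0.06^{2} \times 456 + 1.96^{2} \times 0.25} \approx 168.7 \rightarrow 169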

2.3. Conceptualization

To create the virtual patient, whom we called Julia, we chose to build a conversational chatbot with AI. To this end, a working group was created with two professors of dentistry from the Universidad Europea de Madrid to begin the conceptualization work and define everything that Julia needed to present as a patient. For this study, it was decided that she would suffer from reversible pulpitis.

After an analysis of the literature [33,34], five main categories were defined for her to answer: anamnesis, description of the pain, relationship of the pain with stimuli, previous dental treatments, and intraoral exploration. To add a dose of realism and to create more interest among the students, it was decided to build the chatbot using informal language and to allow it to answer some questions that were unrelated to the clinical case (Figure 2).

Figure 2. Chatbot conceptualization diagram.

Subsequently, sub-categories were created containing the most frequently used expressions, along with more informal linguistic variations, and each was associated with a response in order to establish a flow of dialogue (Table 1).

Table 1.

Example of the expressions for a question about Julia’s response when heat is applied, with the corresponding answer.

Sub-Categories Expressions Answer
Heat Do you have discomfort in the heat?
Do you have pain in the heat?
Are you sensitive to heat?
Are you bothered by hot things?
Does it bother you with high temperature?
Does it hurt with high temperatures?
Does it hurt with high thermal stimulation?
Does it hurt with high temperature?
Does it hurt if you eat something hot?
Does it hurt if you drink something hot?
No, with the heat I don’t feel any pain.
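For illustration only, the category/expression/answer mapping produced in this conceptualization phase can be thought of as a simple data structure before it is loaded into a chatbot platform. The following Python sketch is hypothetical (the names and helper function are ours, not part of the study) and only mirrors the example in Table 1:

```python
# Hypothetical sketch of the conceptualization data: each sub-category groups
# the expressions (future training phrases) that should trigger one scripted
# answer from the virtual patient "Julia".
julia_script = {
    "heat": {
        "expressions": [
            "Do you have discomfort in the heat?",
            "Are you sensitive to heat?",
            "Does it hurt if you drink something hot?",
        ],
        "answer": "No, with the heat I don't feel any pain.",
    },
    "cold": {
        "expressions": [
            "Does it hurt to drink something cold?",
            "Does it hurt if you drink something with ice?",
        ],
        "answer": "Yes, when I drink something cold I feel pain",
    },
}

def answer_for(sub_category: str) -> str:
    """Return Julia's scripted answer for a given sub-category."""
    return julia_script[sub_category]["answer"]

print(answer_for("heat"))  # -> "No, with the heat I don't feel any pain."
```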

2.4. Chatbot Design

The Dialogflow® application (Palo Alto, Santa Clara County, CA, USA) was used to create the chatbot’s conversational flows through intuitive artificial intelligence [35] capable of understanding the nuances of human language by learning through action and feedback.

Since the people who created the chatbot were not experts in the field, it was decided to design the chatbot in a simple way. To do this, we defined the “intents” (what the user wants to say), added all the expressions that a user could use to express each “intent”, as defined by the group of experts in the previous phase, into the “training phrases” space, and then associated a specific response with that intent. Through natural language processing algorithms, the AI is able, with a few training phrases, to learn the different ways of asking the same question (Table 2 and Figure 3).

Table 2.

Question-answer sequence of the chatbot.

Intents Training Phrases Answer
Cold (pulp response to cold application) Does it hurt to drink something refrigerated?
Do you have pain with something cold?
Do you feel more sensitive when you drink something cold?
Do you feel more sensitivity when eating cold things?
Does it hurt to drink something cool?
Does it hurt if you drink something with ice?
Does it hurt more with cold?
If you drink something cold, do you feel it?
Yes, when I drink something cold I feel pain

Figure 3. Data flow diagram.
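The intents described above were built in the Dialogflow console, which requires no code. Purely as an illustration, and assuming the Dialogflow ES V2 Python client (the google-cloud-dialogflow package), a placeholder project ID, and pre-configured Google Cloud credentials, the “Cold” intent of Table 2 could be created programmatically roughly as follows:

```python
# Hedged sketch: creating the "Cold" intent of Table 2 with the Dialogflow ES
# Python client. PROJECT_ID is a placeholder for the Google Cloud project that
# backs the agent; authentication via GOOGLE_APPLICATION_CREDENTIALS is assumed.
from google.cloud import dialogflow_v2 as dialogflow

PROJECT_ID = "julia-virtual-patient"  # hypothetical project ID

def create_cold_intent() -> None:
    client = dialogflow.IntentsClient()
    parent = f"projects/{PROJECT_ID}/agent"

    # Training phrases: different ways a student might ask about cold stimuli.
    phrases = [
        "Does it hurt to drink something cold?",
        "Do you feel more sensitivity when eating cold things?",
        "Does it hurt if you drink something with ice?",
    ]
    training_phrases = [
        dialogflow.Intent.TrainingPhrase(
            parts=[dialogflow.Intent.TrainingPhrase.Part(text=p)]
        )
        for p in phrases
    ]

    # Julia's scripted answer associated with this intent.
    message = dialogflow.Intent.Message(
        text=dialogflow.Intent.Message.Text(
            text=["Yes, when I drink something cold I feel pain"]
        )
    )

    intent = dialogflow.Intent(
        display_name="Cold (pulp response to cold application)",
        training_phrases=training_phrases,
        messages=[message],
    )
    client.create_intent(request={"parent": parent, "intent": intent})

if __name__ == "__main__":
    create_cold_intent()
```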

Once the chatbot was created, it was integrated with an instant messaging application (Telegram), because the aim was to offer this experience easily and quickly through an application that students already used frequently, also giving them the possibility of interacting with Julia at any time.

To integrate Julia with Telegram, the application was accessed and the following steps were followed:

  1. Go to https://telegram.me/botfather (accessed on 19 April 2022)

  2. Type /start

  3. Type /newbot

  4. Create a name ending in “bot”.

  5. Telegram then generates a token to access the HTTP API (this token can optionally be checked as sketched after the list).

  6. In Dialogflow, go to “Integrations” and then click on the Telegram icon.

  7. Paste the token in the corresponding field and click on “start”.
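None of the steps above requires programming. As an optional, hypothetical check, the token issued by BotFather in step 5 can be verified against the Telegram Bot API (its documented getMe method) before pasting it into Dialogflow; a minimal Python sketch:

```python
# Hedged sketch: verify a BotFather token with the Telegram Bot API "getMe"
# method before pasting it into the Dialogflow Telegram integration.
# BOT_TOKEN is a placeholder; the real token is issued by @BotFather (step 5).
import requests

BOT_TOKEN = "123456:ABC-DEF..."  # placeholder, never publish a real token

def token_is_valid(token: str) -> bool:
    resp = requests.get(f"https://api.telegram.org/bot{token}/getMe", timeout=10)
    data = resp.json()
    # Telegram answers {"ok": true, "result": {...bot info...}} for a valid token.
    return bool(data.get("ok"))

if __name__ == "__main__":
    print("Token valid:", token_is_valid(BOT_TOKEN))
```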

In order for Julia to generate curiosity among the students, and given the possibility that some questions would not be focused on the clinical case, “intents” were created for various questions such as “Do you want to go out with me?”, generating natural answers that would lead the student back to the main objective of the chatbot, the pulp diagnosis: “I’m a computer virus that right now is deleting all the papers you had to submit…it’s a joke! I’m an artificial intelligence named Julia and I’ve been created for you to learn pulp diagnosis well. You will thank me when you are in the clinic. So focus well and ask me about pulpal diagnosis”. When students gave an incorrect diagnosis, Julia encouraged them to keep asking: “I’m not an expert… but that diagnosis sounds weird to me”. When the diagnosis was correct, Julia replied and closed the chat: “Thank you! I will make an appointment to see you” (Figure 4).

Figure 4. Example of a conversation flow. (A) At the beginning of the interaction, Julia introduces herself and gives directions about what the student should do. (B) Julia is able to answer different questions about the current condition. (C) Colloquial responses to intimate questions unrelated to the case were established in order to arouse students’ curiosity and redirect them. (D) If an incorrect diagnosis is reached, Julia redirects the student.

2.5. Start-Up

The operationalization was carried out in two phases. In the first phase, a panel of experts consisting of five professors and doctors of dentistry interacted with Julia. All failed interactions or evident errors were reported and adjusted to improve the chatbot’s conversation flow. For this purpose, the Dialogflow training function was used to review those interactions with users that the AI itself flags for revision. In this way, the AI learns from the actions it performs and the feedback it is given (Figure 5).

Figure 5. Example in which the user misspelled a word and the AI was able to identify the error and associate it with the correct intent.

When the validation by expert judgment was positive, we proceeded to a second phase in which Julia was sent to the 4th and 5th year dental students, together with all the information and instructions on how to interact with her via Telegram.

2.6. Survey

After four weeks of operation, the students who were interested in participating in the study were asked to fill out an eleven-question questionnaire: nine closed questions about their experience interacting with Julia and two open-ended questions (Table 3 and Table 4).

Table 3.

Questions of the questionnaire with possible answers.

Questions Possible Answers (Only One)
1-Were you satisfied when interacting with the artificial intelligence?
2-Did the artificial intelligence answer all your questions about the pulp pathology it presented?
3-Did the language used by the artificial intelligence seem natural and realistic to you?
4-Do you feel that this type of teaching methodology can help you improve your communication skills?
5-Do you think this type of teaching methodology can help you feel more confident and secure when treating patients?
6-Do you think that this type of teaching methodology could help you grow as a future professional?
7-Did you manage to ask all the necessary questions to reach a pulp diagnosis?
8-Would you recommend this artificial intelligence-based technology to other students?
9-Do you think that interaction with artificial intelligences should be part of the dental degree curriculum?
1-Strongly Disagree
2-Disagree
3-Neutral
4-Agree
5-Strongly Agree

Table 4.

Free text response questions.

Open Questions
-What pulp pathology do you think the patient had?
-What would you modify or add after interacting with this artificial intelligence?

2.7. Statistical Analysis

The questionnaire responses were collected and the data were entered into a Microsoft Excel spreadsheet. They were then analyzed using SPSS software (IBM SPSS Statistics, Version 20.0; IBM Corp., Armonk, NY, USA).

The Kolmogorov–Smirnov test was performed to evaluate whether the samples met the normality criterion. For comparisons between courses and between sexes, the Student’s t-test was used for samples with a normal distribution and the Mann–Whitney U test for those without; for the association between qualitative variables, the chi-square test was used, considering a p-value ≤ 0.05 as statistically significant.
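As an illustration only, the workflow described above (normality check, group comparison, and association between qualitative variables) can be reproduced with standard statistical software. The following Python/SciPy sketch uses randomly generated, hypothetical Likert responses and a hypothetical contingency table, not the study data:

```python
# Hedged sketch of the statistical workflow described above, run on
# hypothetical data (not the study dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical Likert (1-5) responses for one questionnaire item, by course.
fourth_year = rng.integers(1, 6, size=58)
fifth_year = rng.integers(1, 6, size=135)

# Normality check (Kolmogorov-Smirnov against a fitted normal distribution).
_, ks_p = stats.kstest(fourth_year, "norm",
                       args=(fourth_year.mean(), fourth_year.std(ddof=1)))

if ks_p > 0.05:
    # Normal distribution: compare the two courses with Student's t-test.
    _, p = stats.ttest_ind(fourth_year, fifth_year)
else:
    # Non-normal distribution: compare the courses with the Mann-Whitney U test.
    _, p = stats.mannwhitneyu(fourth_year, fifth_year)

# Association between qualitative variables (e.g., correct diagnosis vs. course)
# with the chi-square test on a hypothetical 2x2 contingency table.
contingency = np.array([[40, 18],    # 4th year: correct / incorrect diagnosis
                        [110, 25]])  # 5th year: correct / incorrect diagnosis
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

print(f"course comparison p = {p:.3f}; chi-square p = {chi_p:.3f}")
```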

3. Results

The sample size of the study was 193 subjects, of whom 58 belonged to the fourth year and 135 to the fifth year; 109 were female and 84 male. In the fourth year, women accounted for 55.2% and men for 44.8% of the sample, while in the fifth year, women accounted for 57.04% and men for 42.96%.

3.1. Global Data

The results of the response to the questionnaire, which were measured with a Likert scale (1–5), are shown in Table 5 and in Figure 6 and Figure 7.

Table 5.

Descriptive statistics by item.

4th 5th Total
Mean S. d. Mean S. d. Mean S. d.
1 Satisfaction with the interaction 4.02 0.083 4.51 0.068 4.36 0.056
2 AI answers the questions 3.22 0.130 4.07 0.081 3.82 0.074
3 Realistic natural language 4.03 0.095 4.04 0.063 4.04 0.053
4 Helps improve communication skills 3.22 0.089 3.45 0.074 3.38 0.059
5 Helps improve confidence and security 4.02 0.094 3.84 0.059 3.89 0.050
6 Improvement professionalism 3.22 0.107 4.07 0.068 3.81 0.064
7 Complete all the questions 3.14 0.093 3.93 0.084 3.69 0.070
8 AI Recommendation 3.86 0.087 4.09 0.082 4.02 0.063
9 Implement in the curriculum 4.24 0.093 4.18 0.071 4.20 0.057

S. d.: standard deviation.

Figure 6. Distribution of responses per questionnaire item by fourth-year dental students. 1-Strongly Disagree, 2-Disagree, 3-Neutral, 4-Agree, 5-Strongly Agree.

Figure 7. Distribution of responses per questionnaire item by fifth-year dental students. 1-Strongly Disagree, 2-Disagree, 3-Neutral, 4-Agree, 5-Strongly Agree.

When comparing the responses to the questionnaire by course, statistically significant differences were found, with fifth-year students showing the highest satisfaction values (Table 6 and Table 7).

Table 6.

Mann–Whitney U-test results.

Sig. (Bilateral) Higher-Scoring Year
1 Satisfaction with the interaction 0.000 ** 5th
2 AI answers the questions 0.000 ** 5th
6 Improvement professionalism 0.000 ** 5th
7 Complete all the questions 0.000 ** 5th
8 AI Recommendation 0.016 * 5th

* p-value < 0.05—statistically significant. ** p-value < 0.001—highly statistically significant. 5th: fifth-year students scored higher than fourth-year students (5th > 4th).

Table 7.

Student’s t-test results.

d.f. Sig. (Bilateral)
3 Realistic natural language 191 0.982
4 Helps improve communication skills 136.002 0.051
5 Helps improve confidence and security 191 0.099
9 Implement in the curriculum 191 0.610

d.f.: degrees of freedom.

In relation to gender, the t-test showed that women rated the realistic natural language of the AI higher (p-value = 0.008).

When the Chi-square test (χ2) was performed, the results showed that the fifth year students got the diagnosis right more frequently than the fourth year students (p-value = 0.005). When comparing the sexes, females failed to reach the correct diagnosis more often than males (p-value = 0.000).

We also looked for whether there was a correlation between establishing a correct diagnosis and a higher score on the questionnaire. When the Chi-square (χ2) test was performed, it was observed that a correct diagnosis implied a higher score on the questionnaire items (Table 8).

Table 8.

χ2 test. Diagnosis vs. questionnaire.

Value d.f. Asymptotic Significance (Bilateral)
1 Satisfaction with the interaction 9.496 4 0.050 *
2 AI answers the questions 23.992 4 0.000 **
3 Realistic natural language 11.647 3 0.009 *
4 Helps improve communication skills 22.166 4 0.000 **
5 Helps improve confidence and security 8.899 3 0.031 *
6 Improvement professionalism 24.636 4 0.000 **
7 Complete all the questions 97.764 4 0.000 **
8 AI Recommendation 14.320 4 0.006 *
9 Implement in the curriculum 13.362 3 0.004 *

d.f.: degrees of freedom. * p-value < 0.05—statistically significant ** p-value < 0.001—highly statistically significant.

In the second free field of the questionnaire, the students were asked about what could be modified or added to the AI. The responses are shown in Table 9.

Table 9.

Responses to the free text field in which students could add their impressions after the interaction.

The colloquial language should be expanded.
It should answer several questions at the same time.
The language is very complete but does not always respond to colloquial phrases.
Lack of feedback, although being like a real patient it is logical that you do not get it.
Very curious.
Very interesting. It would have been nice to see it in pre-clinical courses.
Should not replace a patient.
Cannot establish the diagnosis because the patient did not define the duration of pain in the cold sensitivity test.
Does not resemble a patient.
Should have the possibility to add images.
We should be able to make an appointment.
I would like to get the right answer.
I would want an option to know the correct diagnosis after mine.
X-rays.
You could have many to practice.
Super interesting to practice.
A simpler patient.

3.2. Fourth Year Student Data

When the Mann–Whitney U test was used to compare the values of each of the responses to the questionnaire items with sex, no significant differences were obtained. When the χ2 test was performed to compare the correct diagnosis with sex, no significant differences were obtained.

3.3. Fifth Year Student Data

When comparing the values of the items with sex through the Mann–Whitney U-test, statistically significant values were obtained in the item “realistic natural language” (p = 0.022), with women scoring higher, and in the item “complete all the questions” (p = 0.042), with men scoring higher. When the χ2 test was performed to compare correct diagnosis vs. sex, women failed more often (19.26%) than men (5.19%) (p = 0.004).

4. Discussion

The university must respond to the dynamic needs of current technological updating. In this sense, AI presents itself as a novel and unfamiliar resource for many trainers, but it has the potential to achieve effective learning [36,37]. In fact, it is claimed that students can improve their skills and knowledge if, in addition to interacting with human teachers, they interact with technological trainers who have reasoning and decision-making capabilities that are similar to human ones [6,36,38,39].

AI has experienced great advances in recent years, with a major impact on science, economics, and education [36]. In the field of education, previous studies with health sciences students [40,41] reported, as in the present study, very positive assessments of the interaction with artificial intelligences; moreover, as in this study, they affirmed the need to implement this technology in the curricula. However, AI also presents certain limitations. One possible limitation relates to students’ knowledge of artificial intelligence and machine learning [42]. In addition, it has been observed that some students may be reluctant to accept these technological developments, as they consider that they learn more when a teacher interacts with them face-to-face rather than online, with interaction and error correction being basic points of learning for them [43,44]. Moreover, students who train with patients highly value observational or vicarious learning alongside their fellow trainees [45]. All of these reasons may explain the slow adoption of these developments in dental schools [40].

Any simulation-based learning should build on sound prior knowledge [46], so this study was conducted with students in the final years of the dental degree who were already treating real patients, since theory is then integrated with practice. In addition, students often have difficulties in diagnostic competence, and VPs offer more practical opportunities to improve their future performance with patients [6]. This may explain the discrepancies observed in diagnostic success, with final-year students scoring clearly higher than fourth-year students.

With real patients, situations are highly variable and of varying degrees of difficulty, which can be counterproductive for students because of the frustration and distress they may experience [8]; in this sense, VPs can recreate the doctor-patient relationship in a controlled, stimulating, and safe environment [47] and encourage reflective learning [6,41].

In the dental students’ interaction with the virtual patient Julia, we focused on the ability to obtain a preliminary diagnosis with the data provided in a direct conversation because the collection of information during the patient interview significantly influences the quality of the diagnosis [48]. As the preliminary diagnosis must be confirmed with complementary tests [49], Julia requested a subsequent appointment at a clinic when the diagnosis was correct.

In relation to the development budget, the economic dimension of this technological resource cannot be overlooked, since virtual simulation has been shown to minimize the cost of the activity compared with simulation based on traditional simulators (mannequins), high-fidelity simulators, haptic simulators, or standardized patients (actors) [6,11]. In the present study, the high economic investment traditionally associated with innovative developments was avoided, since it was possible to recreate a VP using the free version of a very intuitive software package. The step-by-step creation of Julia and her integration into the instant messaging program followed the indications of the numerous free tutorials available online.

During the testing phases and in the first days of operation, it was observed that not identifying users increased the risk of controversial questions, off-topic questions intended to make Julia feel bad, or simply questions asked to see how the artificial intelligence might react. For this reason, a collection of insults, rude phrases, out-of-place comments, etc., was also compiled in order to redirect users. During the implementation, a small group of users could be seen trying to “troll” Julia, and she redirected them back to the activity using a sarcastic text.

The fifth year students showed greater satisfaction in all the items of the questionnaire, perhaps because of their almost two years of practice with patients and the global view of curricular development that can be perceived when graduation is near. In addition, in the free-text field, they were the ones who expressed greater satisfaction with the interaction and proposed implementing this technology in pre-clinical courses. In contrast, the fourth year students rated the interaction with Julia lower, being more critical of the difficulty of the case and the language used, and they also requested that the patient be able to answer several questions at the same time, among other suggestions.

All the data collected in the study lead us to think that VPs delivered through AI chatbots should be adapted to each course and type of student. For fourth year students, who are beginning to have contact with real patients, the chatbot should perhaps be oriented more towards practising anamnesis skills during history taking, so that they can practise more often and feel more confident with their first patients. For fifth year students, in contrast, more complex and challenging scenarios should be developed, providing complementary material such as radiographs, laboratory tests, photographs, etc. Authors such as Joda et al. [50] also propose increasing the realism of VPs with avatars in which skin and tissues are replicated by superimposing and merging 3D images. These lines of research continue to be developed, and it is hoped that, in the near future, they will become part of the dental curriculum as a complement to face-to-face interaction with patients. In relation to this last point, we should emphasize the importance in dental practice of the dentist’s empathy, the ability to recognize nonverbal communication, to establish bonds of trust with patients, and to know their expectations and fears [21], capacities that no machine can currently replicate as they remain exclusive to human beings [51].

5. Conclusions

Our results highlight the usefulness of simulating a VP with AI: it gives students the possibility of practising multiple clinical cases and offers an engaging and personal experience because of the interface and the natural language used, without underestimating the economic and space savings for universities. Therefore, our research suggests the need to incorporate AI into dental curricula while also ensuring that students remain at the forefront of current technological developments.

Acknowledgments

The authors would like to thank the students who wanted to be part of this study.

Author Contributions

Conceptualization, A.S.; methodology, A.S.; software, A.S.; validation, J.A., A.A., V.D.-F.G. and Y.F.; formal analysis, J.A.; investigation, A.S.; resources, A.S. and A.A.; data curation, J.A.; writing—original draft preparation, A.S.; writing—review and editing, J.A., A.A., V.D.-F.G. and Y.F.; visualization, A.S.; supervision, A.S. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Universidad Europea de Madrid (CIPI/22.142).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research received no external funding.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Littlefield J.H., Demps E.L., Keiser K., Chatterjee L., Cheng H., Yuan K.M., Hargreaves D.D.S. A Multimedia Patient Simulation for Teaching and Assessing Endodontic Diagnosis. J. Dent. Educ. 2003;67:669–677. doi: 10.1002/j.0022-0337.2003.67.6.tb03667.x. [DOI] [PubMed] [Google Scholar]
  • 2.Schubach F., Goos M., Fabry G., Vach W., Boeker M. Virtual Patients in the Acquisition of Clinical Reasoning Skills: Does Presentation Mode Matter? A Quasi-Randomized Controlled Trial. BMC Med. Educ. 2017;17:165. doi: 10.1186/s12909-017-1004-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Nendaz M.R., Gut A.M., Perrier A., Louis-Simonet M., Blondon-Choa K., Herrmann F.R., Junod A.F., Vu N.V. Brief Report: Beyond Clinical Experience: Features of Data Collection and Interpretation That Contribute to Diagnostic Accuracy. J. Gen. Intern. Med. 2006;21:1302–1305. doi: 10.1111/j.1525-1497.2006.00587.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Bordage G. Why Did I Miss the Diagnosis? Some Cognitive Explanations and Educational Implications. Acad. Med. 1999;74:S138–S143. doi: 10.1097/00001888-199910000-00065. [DOI] [PubMed] [Google Scholar]
  • 5.Zary N., Johnson G., Boberg J., Fors U.G.H. Development, Implementation and Pilot Evaluation of a Web-Based Virtual Patient Case Simulation Environment—Web-SP. BMC Med. Educ. 2006;6:10. doi: 10.1186/1472-6920-6-10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Isaza-Restrepo A., Gómez M.T., Cifuentes G., Argüello A. The Virtual Patient as a Learning Tool: A Mixed Quantitative Qualitative Study. BMC Med. Educ. 2018;18:297. doi: 10.1186/s12909-018-1395-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Näpänkangas R., Karaharju-Suvanto T., Pyörälä E., Harila V., Ollila P., Lähdesmäki R., Lahti S. Can the Results of the OSCE Predict the Results of Clinical Assessment in Dental Education? Eur. J. Dent. Educ. 2014;20:3–8. doi: 10.1111/eje.12126. [DOI] [PubMed] [Google Scholar]
  • 8.Heitzmann N., Seidel T., Hetmanek A., Wecker C., Fischer M.R., Ufer S., Schmidmaier R., Neuhaus B.J., Siebeck M., Stürmer K., et al. Facilitating Diagnostic Competences in Simulations in Higher Education A Framework and a Research Agenda. Front. Learn. Res. 2019;7:1–24. doi: 10.14786/flr.v7i4.384. [DOI] [Google Scholar]
  • 9.Shorbagi S., Sulaiman N., Hasswan A., Kaouas M., Al-Dijani M.M., El-hussein R.A., Daghistani M.T., Nugud S., Guraya S.Y. Assessing the Utility and Efficacy of E-OSCE among Undergraduate Medical Students during the COVID-19 Pandemic. BMC Med. Educ. 2022;22:156. doi: 10.1186/s12909-022-03218-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Kononowicz A.A., Woodham L.A., Edelbring S., Stathakarou N., Davies D., Saxena N., Car L.T., Carlstedt-Duke J., Car J., Zary N. Virtual Patient Simulations in Health Professions Education: Systematic Review and Meta-Analysis by the Digital Health Education Collaboration. J. Med. Internet Res. 2019;21:e14676. doi: 10.2196/14676. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Haerling K.A. Cost-Utility Analysis of Virtual and Mannequin-Based Simulation. Simul. Health J. Soc. Simul. Health. 2018;13:33–40. doi: 10.1097/SIH.0000000000000280. [DOI] [PubMed] [Google Scholar]
  • 12.Phillips J., Berge Z.L. Second Life for Dental Education. J. Dent. Educ. 2009;73:1260–1264. doi: 10.1002/j.0022-0337.2009.73.11.tb04816.x. [DOI] [PubMed] [Google Scholar]
  • 13.Mardani M., Cheraghian S., Naeeni S.K., ZarifSanaiey N. Effectiveness of Virtual Patients in Teaching Clinical Decision-Making Skills to Dental Students. J. Dent. Educ. 2020;84:615–623. doi: 10.1002/jdd.12045. [DOI] [PubMed] [Google Scholar]
  • 14.Edelbring S., Dastmalchi M., Hult H., Lundberg I., Dahlgren L.O. Experiencing Virtual Patients in Clinical Learning: A Phenomenological Study. Adv. Health Sci. Educ. 2011;16:331–345. doi: 10.1007/s10459-010-9265-0. [DOI] [PubMed] [Google Scholar]
  • 15.Co M., Yuen T.H.J., Cheung H.H. Using Clinical History Taking Chatbot Mobile App for Clinical Bedside Teachings—A prospective case control study. Heliyon. 2022;8:e09751. doi: 10.1016/j.heliyon.2022.e09751. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Marei H.F., Al-Eraky M.M., Almasoud N.N., Donkers J., Van Merrienboer J.J.G. The Use of Virtual Patient Scenarios as a Vehicle for Teaching Professionalism. Eur. J. Dent. Educ. 2017;22:e253–e260. doi: 10.1111/eje.12283. [DOI] [PubMed] [Google Scholar]
  • 17.Berman N.B., Durning S.J., Fischer M.R., Huwendiek S., Triola M.M. The Role for Virtual Patients in the Future of Medical Education. Acad. Med. 2016;91:1217–1222. doi: 10.1097/ACM.0000000000001146. [DOI] [PubMed] [Google Scholar]
  • 18.Doloca A., Tanculescu O. Dental Materials and Their Selection-Virtual Patient (VP) Software from a Student Perspective. Mater. Plast. 2016;53:370–374. [Google Scholar]
  • 19.Botezatu M., Hult H., Fors U.G. Virtual Patient Simulation: What do Students Make of It? A Focus Group Study. BMC Med. Educ. 2010;10:91. doi: 10.1186/1472-6920-10-91. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Huwendiek S., Reichert F., Bosse H.-M., de Leng B.A., van der Vleuten C.P.M., Haag M., Hoffmann G.F., Tönshoff B. Design Principles for Virtual Patients: A Focus Group Study among Students. Med. Educ. 2009;43:580–588. doi: 10.1111/j.1365-2923.2009.03369.x. [DOI] [PubMed] [Google Scholar]
  • 21.Shan T., Tay F., Gu L. Application of Artificial Intelligence in Dentistry. J. Dent. Res. 2020;100:232–244. doi: 10.1177/0022034520969115. [DOI] [PubMed] [Google Scholar]
  • 22.Ahmed N., Abbasi M.S., Zuberi F., Qamar W., Bin Halim M.S., Maqsood A., Alam M.K. Artificial Intelligence Techniques: Analysis, Application, and Outcome in Dentistry—A Systematic Review. BioMed Res. Int. 2021;2021:9751564. doi: 10.1155/2021/9751564. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Tran B.X., McIntyre R.S., Latkin C.A., Phan H.T., Vu G.T., Nguyen H.L.T., Gwee K.K., Ho C.S.H., Ho R.C.M. The Current Research Landscape on the Artificial Intelligence Application in the Management of Depressive Disorders: A Bibliometric Analysis. Int. J. Environ. Res. Public Health. 2019;16:2150. doi: 10.3390/ijerph16122150. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Achacoso T.B., Yamamoto W.S. Artificial Ethology and Computational Neuroethology: A Scientific Discipline and Its Subset by Sharpening and Extending the Definition of Artificial Intelligence. Perspect. Biol. Med. 1990;33:379–390. doi: 10.1353/pbm.1990.0020. [DOI] [PubMed] [Google Scholar]
  • 25.Rigamonti L., Estel K., Gehlen T., Wolfarth B., Lawrence J.B., Back D.A. Use of Artificial Intelligence in Sports Medicine: A Report of 5 Fictional Cases. BMC Sports Sci. Med. Rehabil. 2021;13:13. doi: 10.1186/s13102-021-00243-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Shen L., Wu X., Zhen R., Zhou X. Post-Traumatic Stress Disorder, Mobile Phone Dependence, and Academic Boredom in Adolescents During the COVID-19 Pandemic. Front. Psychol. 2021;12:724732. doi: 10.3389/fpsyg.2021.724732. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Kaur A., Singh S., Chandan J.S., Robbins T., Patel V. Qualitative Exploration of Digital Chatbot Use in Medical Education: A Pilot Study. Digit. Health. 2021;7:20552076211038151. doi: 10.1177/20552076211038151. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Reiswich A., Haag M. Evaluation of Chatbot Prototypes for Taking the Virtual Patient’s History. Proc. Stud. Health Technol. Inform. 2019;260:73–80. [PubMed] [Google Scholar]
  • 29.Stuij S.M., Drossaert C.H.C., Labrie N.H.M., Hulsman R.L., Kersten M.J., van Dulmen S., Smets E.M.A., de Haes H., Pieterse A., van Weert J., et al. Developing a Digital Training Tool to Support Oncologists in the Skill of Information-Provision: A User Centred Approach. BMC Med. Educ. 2020;20:135. doi: 10.1186/s12909-020-1985-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Humagain M., Rokaya D. Integrating Digital Technologies in Dentistry to Enhance the Clinical Success. Kathmandu Univ. Med. J. 2019;17:256–257. [PubMed] [Google Scholar]
  • 31.Neville P., van der Zande M.M. Dentistry, e-Health and Digitalisation: A Critical Narrative Review of the Dental Literature on Digital Technologies with Insights from Health and Technology Studies. Community Dent. Health J. 2020;37:51–58. doi: 10.1922/CDH_4664NEVILLE08. [DOI] [PubMed] [Google Scholar]
  • 32.Yansane A., Lee J., Hebballi N., Obadan-Udoh E., White J., Walji M., Easterday C., Rindal B., Worley D., Kalenderian E. Assessing the Patient Safety Culture in Dentistry. JDR Clin. Transl. Res. 2020;5:399–408. doi: 10.1177/2380084419897614. [DOI] [PubMed] [Google Scholar]
  • 33.Abbott P.V., Yu C. A Clinical Classification of the Status of the Pulp and the Root Canal System. Aust. Dent. J. 2007;52:S17–S31. doi: 10.1111/j.1834-7819.2007.tb00522.x. [DOI] [PubMed] [Google Scholar]
  • 34.Levin L.G., Law A.S., Holland G., Abbott P., Roda R.S. Identify and Define All Diagnostic Terms for Pulpal Health and Disease States. J. Endod. 2009;35:1645–1657. doi: 10.1016/j.joen.2009.09.032. [DOI] [PubMed] [Google Scholar]
  • 35.Topal A.D., Eren C.D., Geçer A.K. Chatbot Application in a 5th Grade Science Course. Educ. Inf. Technol. 2021;26:6241–6265. doi: 10.1007/s10639-021-10627-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Luan H., Geczy P., Lai H., Gobert J., Yang S.J.H., Ogata H., Baltes J., Guerra R., Li P., Tsai C.-C. Challenges and Future Directions of Big Data and Artificial Intelligence in Education. Front. Psychol. 2020;11:580820. doi: 10.3389/fpsyg.2020.580820. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Daniel B.K. Big Data and Data Science: A Critical Review of Issues for Educational Research. Br. J. Educ. Technol. 2017;50:101–113. doi: 10.1111/bjet.12595. [DOI] [Google Scholar]
  • 38.Liu N., Shapira P., Yue X. Tracking Developments in Artificial Intelligence Research: Constructing and Applying a New Search Strategy. Scientometrics. 2021;126:3153–3192. doi: 10.1007/s11192-021-03868-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Aggarwal R., Mytton O., Derbrew M., Hananel D., Heydenburg M., Issenberg B., Macaulay C., Mancini M.E., Morimoto T., Soper N., et al. Training and Simulation for Patient Safety. Qual. Saf. Health Care. 2010;19:i34–i43. doi: 10.1136/qshc.2009.038562. [DOI] [PubMed] [Google Scholar]
  • 40.Bisdas S., Topriceanu C.-C., Zakrzewska Z., Irimia A.-V., Shakallis L., Subhash J., Casapu M.-M., Leon-Rojas J., dos Santos D.P., Andrews D.M., et al. Artificial Intelligence in Medicine: A Multinational Multi-Center Survey on the Medical and Dental Students’ Perception. Front. Public Health. 2021;9:795284. doi: 10.3389/fpubh.2021.795284. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Carrard V., Bourquin C., Orsini S., Mast M.S., Berney A. Virtual Patient Simulation in Breaking Bad News Training for Medical Students. Patient Educ. Couns. 2020;103:1435–1438. doi: 10.1016/j.pec.2020.01.019. [DOI] [PubMed] [Google Scholar]
  • 42.Blease C., Kharko A., Bernstein M., Bradley C., Houston M., Walsh I., Hägglund M., DesRoches C., Mandl K.D. Machine Learning in Medical Education: A Survey of the Experiences and Opinions of Medical Students in Ireland. BMJ Health Care Inform. 2022;29:e100480. doi: 10.1136/bmjhci-2021-100480. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Tichavsky L.P., Hunt A., Driscoll A., Jicha K. “It’s Just Nice Having a Real Teacher”: Student Perceptions of Online versus Face-to-Face Instruction. Int. J. Scholarsh. Teach. Learn. 2015;9:2. doi: 10.20429/ijsotl.2015.090202. [DOI] [Google Scholar]
  • 44.Moazami F., Bahrampour E., Azar M.R., Jahedi F., Moattari M. Comparing Two Methods of Education (Virtual versus Traditional) on Learning of Iranian Dental Students: A Post-Test Only Design Study. BMC Med. Educ. 2014;14:45. doi: 10.1186/1472-6920-14-45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Modha B. Experiential Learning without Prior Vicarious Learning: An Insight from the Primary Dental Care Setting. Educ. Prim. Care. 2020;32:49–55. doi: 10.1080/14739879.2020.1813055. [DOI] [PubMed] [Google Scholar]
  • 46.Fink M.C., Reitmeier V., Stadler M., Siebeck M., Fischer F., Fischer M.R. Assessment of Diagnostic Competences With Standardized Patients Versus Virtual Patients: Experimental Study in the Context of History Taking. J. Med. Internet Res. 2021;23:e21196. doi: 10.2196/21196. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Greene A., Greene C.C., Greene C. Artificial Intelligence, Chatbots, and the Future of Medicine. Lancet Oncol. 2019;20:481–482. doi: 10.1016/S1470-2045(19)30142-1. [DOI] [PubMed] [Google Scholar]
  • 48.Gashi F., Regli S.F., May R., Tschopp P., Denecke K. Developing Intelligent Interviewers to Collect the Medical History: Lessons Learned and Guidelines. Stud. Health Technol. Inform. 2021;279:18–25. doi: 10.3233/shti210083. [DOI] [PubMed] [Google Scholar]
  • 49.Al-Madi E.M., Al-Bahrani L., Al-Shenaiber R., Al-Saleh S.A., Al-Obaida M.I. Creation and Evaluation of an Endodontic Diagnosis Training Software. Int. J. Dent. 2020;2020:8123248. doi: 10.1155/2020/8123248. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Joda T., Wolfart S., Reich S., Zitzmann N.U. Virtual Dental Patient: How Long Until It’s Here? Curr. Oral Health Rep. 2018;5:116–120. doi: 10.1007/s40496-018-0178-y. [DOI] [Google Scholar]
  • 51.Terblanche N., Molyn J., de Haan E., Nilsson V.O. Comparing Artificial Intelligence and Human Coaching Goal Attainment Efficacy. PLoS ONE. 2022;17:e0270255. doi: 10.1371/journal.pone.0270255. [DOI] [PMC free article] [PubMed] [Google Scholar]
