BMC Medical Education. 2024 Dec 18;24:1478. doi: 10.1186/s12909-024-06484-x

Concept mapping to promote clinical reasoning in multimorbidity: a mixed methods study in undergraduate family medicine

Marta Fonseca 1, Paula Broeiro-Gonçalves 5, Mariana Barosa 2, Pedro Marvão 2, Marta Carreira 3, Sofia Azeredo-Lopes 1, Joana Pires 4, António Rendas 2, Patrícia Rosado-Pinto 2, Bruno Heleno 1
PMCID: PMC11653553  PMID: 39695556

Abstract

Background

Clinical reasoning significantly impacts physicians’ performance and patient care quality. Research into learning transfer within clinical reasoning education, especially in managing multimorbidity in Family Medicine, is crucial. This study evaluates the impact of concept maps (CMs) on promoting clinical reasoning skills among undergraduate students, compared to traditional teaching methods (TM).

Methods

A mixed methods approach was used in a controlled, non-randomized study with fifth-year Family Medicine undergraduates allocated to sessions using either CMs or TM. Quantitative data included a feedback questionnaire and evaluation of an individual task. Qualitative data comprised responses to an open-ended question and analysis of problem representation in the individual task.

Results

Among 313 eligible students, 112 participated (CM: 60, TM: 52). Both groups reported high satisfaction with their teaching methods. The CM group valued the holistic view and organization for managing multimorbidity cases, showing higher odds of positive scores on individual tasks (differences not statistically significant). Additionally, the CM group had a more homogeneous code matrix for problem representation in two clinical vignettes.

Conclusions

While no definitive evidence supports the superiority of CMs over traditional methods, promising trends were noted. The CM group showed improved performance in individual tasks and better organization in managing multimorbidity cases. Further investigation is recommended to explore varying levels of CM usage and modifications to pre-class workloads.

Clinical trial number

Not applicable.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12909-024-06484-x.

Keywords: Concept map, Clinical reasoning, Multimorbidity, Family Medicine, Undergraduate medical education

Background

Even in the age of artificial intelligence, clinical reasoning remains the cornerstone of effective medical practice [1]. This core skill critically influences physicians’ performance and, consequently, the quality of patient care [2]. Paradoxically, despite its centrality to medical practice, a universally agreed-upon definition of clinical reasoning remains elusive, and it is often assumed to be universally understood. The terminology and boundaries between concepts like clinical reasoning, critical thinking, decision-making, problem-solving, clinical judgment, and diagnostic reasoning are often used interchangeably [3].

For the purposes of this manuscript, we define clinical reasoning as a comprehensive concept that includes reasoning skills, reasoning processes, and outcomes of reasoning [4–6]. Clinical reasoning encompasses several components: information gathering, hypothesis generation, problem representation, differential diagnosis, management and treatment, and diagnostic justification [4]. Acquiring these skills is essential for medical students, yet teaching them effectively remains challenging. Research into learning transfer within clinical reasoning education is necessary [7], as the current literature on how to evaluate clinical reasoning remains fragmented, despite numerous available methods targeting different measurement components [3, 4, 8].

In Family Medicine, patients with multimorbidity — that is, with multiple chronic conditions — are present in daily clinical practice [9, 10]. Family physicians often do not feel sufficiently trained or confident in managing such patients. It is not feasible to apply the sum of guidelines for single diseases to patients with multiple conditions, and there are few specific guidelines for patients with multimorbidity. Additionally, managing polypharmacy, which often involves navigating drug-drug and drug-disease interactions, complicates care. There is also a need to integrate the participation of several healthcare professionals [11]. For these patients, a patient-centered, holistic approach aimed at improving quality of life is essential [11, 12]. Instead of focusing solely on diagnostic precision, clinical reasoning processes should embrace this comprehensive approach. Effective teaching should equip students with these clinical reasoning strategies to enable future physicians to better manage multimorbidity [11]. The critical components of clinical reasoning in the context of multimorbidity include information gathering, problem representation, and the management and treatment plan. Information gathering is the active process of acquiring patient data and, in the case of multimorbidity, involves synthesizing multiple inputs from various healthcare professionals. Problem representation is the dynamic mental representation of the patient’s relevant aspects and, in multimorbidity, should incorporate a patient-centered approach. Finally, the management and treatment plan enables tailoring care to each patient’s unique circumstances and illness experience [13].

Concept mapping emerges as a powerful pedagogical tool that fosters knowledge integration and critical thinking [14, 15]. In the context of clinical reasoning, concept maps (CMs) offer a structured approach to organizing complex information that can be adapted to different contexts, allowing educators to tailor them to specific learning objectives. CMs, defined as a "schematic device for representing a set of concept meanings imbedded in a framework of propositions" [16], can simplify complexity and foster higher-order thinking skills (metacognition) in medical students. This facilitates the integration of information from various topics and promotes deeper understanding, which is essential in the management of multiple chronic conditions, drugs, and healthcare providers. Despite their potential, the value of CMs in enhancing clinical reasoning among medical students remains unclear [15, 17], particularly their effectiveness in promoting clinical reasoning in the care of patients with multimorbidity.

This manuscript is part of an action research project [10, 18] focused on a CM-based pedagogical intervention to facilitate clinical reasoning about patients with multimorbidity during the clinical clerkship [19]. We focused on the development and implementation of this educational intervention, as well as its evaluation, to guide future planning of this strategy [20]. An overview of the larger project is provided in Supplementary Material 1. In this study, we aimed to evaluate the impact of CMs in promoting clinical reasoning skills, particularly information gathering, problem representation, and management and treatment, among undergraduate students in the management of multimorbidity, compared with traditional teaching methods.

Materials and methods

Study design

This manuscript reports a mixed methods approach using a convergent design within a controlled, non-randomized study. Quantitative and qualitative data were collected in parallel, first analyzed separately and then merged during the interpretation phase to provide a more comprehensive understanding of the results [21, 22]. We sought qualitative data to illuminate and validate the quantitative findings. Figure 1 presents a diagram illustrating the relationship and sequence of the quantitative and qualitative research components. We were guided by the checklist for mixed methods research manuscript preparation and review described by Lee et al. [23].

Fig. 1.

Fig. 1

Diagram illustrating the relationship and sequence of the quantitative and qualitative research components (* information gathering, problem representation, and management and treatment plan are components of clinical reasoning described by Daniel et al. [4])

Setting

The curriculum at NOVA Medical School (NMS) is a six-year degree comprising three distinct stages: the first two years are pre-clerkship, followed by three years of clerkship, and concluding with a transitional year into practice. Within this framework, this study aimed to extend the use of CMs into the clerkship stage, specifically within the Family Medicine course for fifth-year students. In the Family Medicine course, students attend one week of seminars and simulation training, three weeks of clinical experience, and a final week of seminars. These seminars, which are held in groups of 40 students, focus on discussing clinical cases related to the management of patients with chronic conditions. The educational sessions described in this study were held in the final session of the seminar series, which is repeated annually with six student cohorts.

Participants

All fifth-year students enrolled in the Family Medicine course across eight consecutive student cohorts over three semesters, spanning from the fall semester of 2022–2023 through the fall semester of 2023–2024, were eligible to participate. At the beginning of each session, students were informed that the data collected would be anonymized, used for research purposes, and would not affect their final assessment. Written informed consent was obtained from each student.

Educational sessions

A brief description of how educational sessions were developed during the larger action research project is provided in the Supplementary Material 1.

The educational sessions were planned as a pre-class assignment followed by a two-hour face-to-face session for classes of approximately 40 students. Each session focused on a clinical vignette based on a real patient with multimorbidity. Three clinical vignettes with equivalent health problems were created and aligned with the course syllabus to avoid contamination between classes (the vignettes can be found in Supplementary Material 2). For each clinical vignette, two versions of the educational session were developed: one used CMs (CM group) and the other used traditional teaching methods (TM group). Due to logistical constraints and the small number of randomization units, we chose alternation over randomization to allocate student cohorts. Simple randomization was unlikely to ensure a balanced distribution of baseline characteristics. Alternation, however, guaranteed that each vignette was used at least once in each of the two groups (the sequence is shown in Supplementary Material 3). This method not only reduced administrative burden but also enabled immediate implementation following an addendum to the ethical approval.

The plan of the educational sessions, including the differences between the groups, is shown in Fig. 2. Eighteen days before the face-to-face session, on average, students received individual emails requesting them to complete the pre-class assignment. This task involved summarizing a specific section of the clinical vignette, focusing on only one disease. The email provided instructions for completing and submitting the pre-class assignment. The CM group was requested to create a small CM, while the TM group was asked to write a short text document. The CM group also received a video tutorial with instructions on CM construction. The face-to-face session had the same structure in both groups. Each session began with a presentation of the complete clinical vignette, followed by a brief discussion in small groups. Subsequently, a comprehensive whole-group discussion was facilitated by the tutors. In the CM group, the tutor constructed a CM on the whiteboard that incorporated the students’ comments and contributions. In contrast, the TM group engaged in a discussion, with key points recorded topically on the whiteboard. All students were then instructed to complete an individual task: write a management and treatment plan for the patient on a sheet of paper. Finally, after a brief final whole group discussion, students were asked to complete an individual feedback questionnaire. The tutors were part of the research team.

Fig. 2.

Fig. 2

Plan of the educational sessions

Data collection and analysis

Quantitative and qualitative data were collected concurrently and were extracted from:

  • baseline characteristics of the students, collected from a brief demographic questionnaire completed by the students and from the NMS academic division, which provided the students’ pre-intervention academic mean scores (on a scale of 0 to 20).

  • an anonymous, individual feedback questionnaire, which students completed at the end of the face-to-face session. It consisted of four closed-ended questions using a five-point Likert scale (strongly disagree = 1 to strongly agree = 5) and one open-ended question (Supplementary Material 4). We pre-tested the questionnaire with a group of five second-year medical students. These students had experience constructing CMs during their pre-clerkship period and were ineligible for study inclusion. No changes were suggested based on the respondents’ feedback during pre-testing.

  • the individual task performed in the face-to-face sessions. Three members of the research team (MF, MB, PB), trained to evaluate the individual tasks, independently and blindly assessed them after all data collection was complete. They focused on three components of clinical reasoning: information gathering, problem representation, and management and treatment. These components were evaluated using a scale developed by the research team for this purpose (Table 1), based on existing instruments [11, 24–28] but adapted to our specific context. Since the scale was new, not validated, and not very discriminative (scores of 1 to 3), the research team decided to conduct a parallel qualitative assessment, performed by two members (MF, JP). The plans were transcribed and coded deductively according to the problems represented in each of them (item II). This item was selected because it was more objective and suitable for an unconventional and innovative evaluation.

Table 1.

Scale for quantitative assessment of the individual task (individual management and treatment plan written by each student)

Item I. Information gathering: the process of acquiring the data needed to generate or refine hypotheses. The selection of information is driven by knowledge representations of disease. Score: 1 Unsatisfactory; 2 Satisfactory; 3 Good.

Item II. Problem representation: the dynamic mental representation of all the relevant aspects of the case (including the patient’s clinical findings and biopsychosocial dimensions). Score: 1 Unsatisfactory; 2 Satisfactory; 3 Good.

Item III. Management and treatment: the actions that follow the clinical reasoning process, including prognostication, management, treatment, prevention strategies, and palliation of symptoms, and justification for such actions. Score: 1 Unsatisfactory; 2 Satisfactory; 3 Good.

Quantitative and qualitative data were initially analyzed separately.

Quantitative data were summarized using descriptive statistics. The main analysis involved the Mann-Whitney test, appropriate for the ordinal outcome variables in our individual feedback questionnaire data and individual task assessments. Additionally, sensitivity analyses were conducted by converting ordinal variables to dichotomous outcomes (Satisfactory/Good plan vs. Unsatisfactory plan) and by creating a composite variable (Satisfactory/Good plan on all items vs. any item rated Unsatisfactory). Logistic regression models were applied to these dichotomous outcomes, both unadjusted and adjusted for age and gender. Descriptive statistics and Mann-Whitney tests were performed using IBM SPSS Statistics (version 29.0.0.0), whereas logistic regression analyses were conducted using R (R Core Team, 2024; R Foundation for Statistical Computing, Vienna, Austria).

Regarding qualitative data, we conducted an inductive thematic analysis using the results of the open-ended question from the individual feedback questionnaire. Data were compiled, and then coding was applied by MF. This process involved reading the transcripts, highlighting all text relevant to the research questions, categorizing the marked texts, and creating new codes. Each transcript was coded at least twice. Codes were assigned and organized into themes. Qualitative analysis of the problem representation item of individual tasks was performed using a deductive approach. A list of problems previously identified in each clinical vignette was used to ensure that participants’ responses appropriately addressed the problem representation item. The coded data were represented in a tabular format using a code matrix browser, allowing visual comparison between groups for each clinical vignette. Qualitative data were analyzed using MAXQDA Plus 2022 (Release 22.8.0, 2022 VERBI GmbH Berlin).
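A code matrix of the kind produced by the MAXQDA code matrix browser is simply a frequency table of codes by group. A minimal sketch, using a hypothetical coded dataset and pandas rather than MAXQDA, illustrates the structure being compared:

```python
# Minimal sketch of a code matrix: rows = deductive problem codes,
# columns = groups, cells = how often each code was assigned.
# The dataset and code labels here are invented for illustration.
import pandas as pd

coded = pd.DataFrame({
    "group": ["CM", "CM", "CM", "TM", "TM"],
    "problem_code": ["diabetes", "polypharmacy", "diabetes",
                     "diabetes", "social isolation"],
})

matrix = pd.crosstab(coded["problem_code"], coded["group"])
print(matrix)
```

Visual comparison of such matrices across groups and vignettes is what allowed the authors to judge how homogeneous the problem representation was in each group.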

The quantitative and qualitative information were then merged in a second phase of analysis and interpretation of the results through narrative with a weaving approach [29]. The qualitative data enabled a deeper understanding of the students’ perceptions and illustrated the problem representation of individual task (Fig. 1).

Results

Participants

Among the 313 eligible students, 112 (35.8%) participated in the sessions. Participants had a slightly higher pre-intervention mean score than non-participants (16.00 ± 1.03 vs. 15.69 ± 0.96, P = 0.004).

The CM group included 60 students, 37 (61.7%) females, with a mean age of 22.63 ± 2.17 years. The TM group included 52 students, 40 (76.9%) females, with a mean age of 24.14 ± 3.64 years. The two groups had similar pre-intervention academic mean scores (the baseline characteristics of the participants are shown in Table 2).

Table 2.

Baseline characteristics of the students of the concept mapping group and the traditional method group

Results are presented as concept mapping group vs traditional method group.

Students: 156 vs 157 eligible; 60 (38.5%) vs 52 (33.1%) participants; P = 0.386 a

Gender: male 23 (38.3%) vs 12 (23.1%); female 37 (61.7%) vs 40 (76.9%); P = 0.082 a

Age: mean (SD) 22.63 (2.17) vs 24.14 (3.64) years; range 21–34 vs 21–41; P < 0.001 b

Students with a previous degree: 3 (5.0%) vs 10 (19.2%); P = 0.035 c

Pre-intervention mean score [0–20]: mean (SD) 16.35 (1.01) vs 15.83 (1.04); P = 0.126 d

Abbreviations: SD, standard deviation

a Chi-squared test. b Mann-Whitney test. c Fisher’s exact test. d Student’s t-test

Students’ perceptions

In the individual feedback questionnaire, 109 students (CM: n = 59, TM: n = 50) answered the closed-ended questions, and both groups expressed overall satisfaction with their sessions. A higher proportion of students in the CM group found the pre-class assignment highly beneficial for integrating knowledge (P = 0.020). However, there were no statistically significant differences between the two groups regarding perceptions of the whole-group discussion, facilitation of information integration in multimorbidity, or promotion of clinical reasoning. A total of 49 students provided comments on the open-ended question (CM group: n = 28, TM group: n = 21). Comments from both groups highlighted the valuable learning experience and the positive and negative aspects of both methodologies. In the CM group, students specifically mentioned that “CMs facilitate clinical reasoning”, “CMs allow for a holistic view”, and “CMs help to organize information in multimorbidity cases”. Interestingly, the TM group suggested adopting CMs for the individual task. The results of the individual feedback questionnaire are presented side-by-side in Table 3.

Table 3.

Results of the individual feedback questionnaire

Results are presented as concept mapping group vs traditional method group; P values a.

Five-point Likert closed-ended questions (CM n = 59; TM n = 50), mean score (SD):

Q1: The pre-class assignment facilitates knowledge integration. 4.46 (0.65) vs 4.08 (0.88); P = 0.020

Q2: The whole group discussion facilitates knowledge integration. 4.51 (0.57) vs 4.50 (0.71); P = 0.743

Q3: The methodology facilitates clinical information integration in multimorbidity. 4.63 (0.58) vs 4.58 (0.61); P = 0.505

Q4: The methodology promotes clinical reasoning in multimorbidity. 4.71 (0.49) vs 4.70 (0.46); P = 0.566

Students’ feedback from the open-ended question (CM n = 28; TM n = 21):

Valuable learning experience

- The session was educational and enriching.

- It was a very good way to approach multimorbidity, both for us as students and for future doctors.

- Very useful session.

Structured and organized session

- It was an enlightening session, and it is what happens in clinical practice.

- This approach helped me to consolidate my knowledge more.

Effective methodology

- CMs facilitate clinical reasoning; more than a physical map, it is the thinking tool that this methodology gives us.

- CMs allow for a holistic view.

- CMs help to organize multimorbidity.

- A complex clinical case helps to approach what multimorbidity really is.

Not effective methodology

- It would work better in team-based learning methodology.

- In practice, constructing the CM can be time-consuming.

- It can become graphically confusing.

- It would be better to do the treatment plan as a group task.

- It was necessary to better define the objectives of each task.

- The request for the pre-class assignment leads to lower session attendance.

- The construction of a CM would improve the process of writing the individual plan.

Abbreviations: CM, concept map; SD, standard deviation

a Mann-Whitney test

Students’ performance

The agreement between the raters was fair for all items (38.4% for information gathering, 34.8% for problem representation, and 32.2% for management and treatment). No statistically significant differences were found between the two groups for any of the individual items or the composite variable combining all three items (Table 4). In the sensitivity analyses, the CM group showed a trend toward higher odds of achieving satisfactory or good scores although these were not statistically significant. This pattern was observed across all three individual items and in the composite variable (Fig. 3).

Table 4.

Assessment of the individual task

Results are presented as concept mapping group (n = 60) vs traditional method group (n = 52), with unadjusted P values.

Main analysis (ordinal variable; Mann-Whitney test), mean (SD):

I: Information gathering 2.05 (0.44) vs 1.98 (0.48); P = 0.301

II: Problem representation 2.23 (0.52) vs 2.33 (0.55); P = 0.378

III: Management/treatment 2.09 (0.48) vs 2.10 (0.55); P = 0.896

Sensitivity analysis I (dichotomous variable; Pearson’s chi-squared test), % Satisfactory/Good:

I: Information gathering 68.3% vs 63.5%; P = 0.587

II: Problem representation 73.3% vs 67.3%; P = 0.485

III: Management/treatment 66.7% vs 63.5%; P = 0.723

Sensitivity analysis II (composite variable; Pearson’s chi-squared test), % Satisfactory/Good on all items:

53.3% vs 46.2%; P = 0.449

Abbreviations: SD, standard deviation

Fig. 3.

Fig. 3

Adjusted logistic regression model results from the sensitivity analysis, adjusting for age and gender. Each point represents the Odds Ratio for the corresponding variable, with error bars indicating the 95% confidence intervals

Qualitative analysis used a code matrix to facilitate a comprehensive visual comparison of the frequency distributions of patient problems across both groups. The CM group appeared to have a more homogeneous matrix for problem representation in clinical vignettes 1 and 3. A graphical representation of this code matrix is provided in the Supplementary Material 5.

Discussion

In this study, we found no definitive evidence supporting the superiority of CMs over traditional teaching methods in promoting clinical reasoning skills among undergraduate students in the management of multimorbidity. This observation was applied to both students’ perceptions (Kirkpatrick level 1) and performance (Kirkpatrick level 2b) [30]. However, encouraging trends emerged. Regarding perceptions, students valued the pre-class assignment in the form of CMs as a tool to facilitate knowledge integration. Students also considered face-to-face CM session to be an effective approach to multimorbidity and perceived CMs as facilitators of clinical reasoning, enabling a comprehensive and organized view of multimorbidity. Although there was no statistically significant difference between groups in the individual task assessment, the CM group consistently had higher odds of receiving positive scores compared to the TM group. This trend may be reflected by the greater robustness of the problem representation code matrix in the CM group, at least for two of the three clinical vignettes. Regarding the coherence of the quantitative and qualitative findings [29], the qualitative data played a confirmatory role for the individual task and confirmed and expanded upon the quantitative findings from the individual feedback questionnaire.

Several factors may explain the lack of significant differences in students’ perceptions of teaching methods. Historically high satisfaction scores at our institution may produce a ceiling effect that limits detectable differences. Although the students were aware of the research context, they were unaware of the experimental method, potentially elevating satisfaction in both groups. Student ratings are influenced by student characteristics and by satisfaction with exam scores and evaluation processes [31]. Although women and high achievers typically give higher ratings, no significant differences in baseline characteristics were observed between the groups, and satisfaction was not influenced by exam outcomes because data were collected before finals. However, responses came from a self-selected third of eligible students and were elicited with positively phrased items, both of which likely increased ratings.

Regarding students’ performance, medical students’ tendency to excel regardless of teaching method may obscure methodological effects. The intervention involved tutors constructing CMs on a whiteboard, similar to the methods described by Richard et al. [32] and Langer et al. [33]; this is less engaging than active, student-led CM construction, which fosters deeper cognitive processing [34, 35]. This passive approach could have limited the intervention’s effectiveness.

Additionally, the challenge of reliably measuring clinical reasoning [4, 6, 7] and the potential unreliability of our assessment tools may have prevented the detection of true differences. Agreement among raters assessing performance was only fair. The qualitative analysis comparing CM and TM across the three clinical vignettes did not reveal a clear pattern in the problem representation component of clinical reasoning, probably owing to the limited number of students per vignette.

Our study focused on short-term impacts, but the findings suggest that CMs may support knowledge integration, clinical reasoning, a holistic perspective, and information organization in multimorbidity. We believe that CMs support analytical and critical thinking, which are essential for developing clinical reasoning competency over time [36]. Clinical reasoning requires repeated opportunities to integrate and apply knowledge in increasingly complex scenarios, grounded in a strong understanding of pathophysiological mechanisms [6]. CMs encourage these processes and may help bridge the gap between pattern recognition and analytical reasoning, emphasizing problem-solving skills that current artificial intelligence models cannot replicate [37]. The literature supports the long-term educational value of CMs. Our earlier systematic review [15] highlighted their role in promoting knowledge integration and clinical reasoning in a long-term context. Penner et al. [38] proposed a ‘collect, cluster, and co-ordinate’ approach in Family Medicine, which aligns with the principles of CM construction: brainstorming concepts and organizing them into circles or boxes (collect), arranging them hierarchically (cluster), and establishing links between hierarchical and cross-linked concepts for essential elements (co-ordinate). Similarly, one of us demonstrated how problem maps, which are similar to CMs, help family physicians handle multimorbidity [39]. While our data did not show definitive long-term outcomes, observed trends and existing evidence suggest that CMs could prepare students for the cognitive demands of clinical practice. Longitudinal studies are needed to confirm these potential benefits, particularly in managing multimorbidity.

Our study had both strengths and limitations. We assessed students’ perceptions and performance, employing a control group with similar student characteristics and pre-intervention mean scores across both groups to enhance validity. The integration of quantitative and qualitative data proved beneficial, providing deeper insights into the quantitative results through qualitative evaluation. However, the low attendance rate may limit the generalizability of our findings. This study was not a trial with allocation concealment or blinding, but measures were taken to prevent contamination between groups and attribution bias despite the non-randomized design. Additionally, there was only fair inter-rater agreement on the assessment of students’ performance, which limited the potential to detect differences in clinical reasoning components. While the research team’s prior experience with CMs was acknowledged through reflexivity, it could have influenced interpretations.

While our results do not conclusively demonstrate the efficacy of CMs in supporting clinical reasoning, they are encouraging enough to warrant further research. A key priority is the development of more robust instruments for clinical reasoning assessment.

Regarding the educational intervention, we recommend enhancing the active engagement of students with CMs as they recognize the benefits of using these tools to organize complex information and integrate knowledge. This finding aligns with the idea that CMs promote the management of complex clinical information, as described by Wu et al. [40] and Wang et al. [41]. However, this must be balanced with the students’ perception of the process as time-consuming. In addition, promoting longitudinal evaluation by tracking student participants across subsequent semesters would be valuable for assessing clinical reasoning. Considering the low student participation rate—only one-third of those eligible—we should assess whether decreasing the workload in the pre-class assignments improves student participation.

Conclusions

No definitive evidence was found to support the superiority of CMs over traditional teaching methods in promoting clinical reasoning skills among undergraduate students in the management of multimorbidity. However, promising trends were noted, with the CM group showing improved performance on individual tasks. CMs also appeared beneficial in preparing students for educational sessions, promoting a holistic view, and organizing information in multimorbidity. There is a need for better instruments to assess clinical reasoning components. Future research should explore varying levels of active CM use in class and adjust the pre-class workload for students.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

12909_2024_6484_MOESM1_ESM.pdf (223.2KB, pdf)

Supplementary Material 1: The action research project.

12909_2024_6484_MOESM2_ESM.pdf (201.9KB, pdf)

Supplementary Material 2: Clinical vignettes.

12909_2024_6484_MOESM3_ESM.pdf (98.3KB, pdf)

Supplementary Material 3: Sequence of educational sessions.

12909_2024_6484_MOESM4_ESM.pdf (121.2KB, pdf)

Supplementary Material 4: The individual feedback questionnaire.

12909_2024_6484_MOESM5_ESM.pdf (289KB, pdf)

Supplementary Material 5: Qualitative code matrix.

Acknowledgements

The authors would like to thank the students who participated in this study, Manuel Gonçalves Pereira, PhD, MD, and Ana Rita Goes, PhD, for their help in planning the study methodology, and Paula Fernandes for her administrative support during the educational sessions.

Biographies

Marta Fonseca

is a Pathophysiology and Family Medicine Lecturer, Departments of Pathophysiology and Family Medicine, and a practicing Family Physician; Comprehensive Health Research Centre (CHRC), NOVA Medical School, NOVA University, Lisbon, Portugal.

Paula Broeiro-Gonçalves

is a Family Medicine Professor and a practicing Family Physician, Department of Family Medicine; Comprehensive Health Research Centre (CHRC), University of Évora, Évora, Portugal.

Mariana Barosa

is a Pathophysiology Lecturer and Internal Medicine Resident; NOVA Medical School, Lisbon, Portugal.

Pedro Marvão

is an Assistant Professor of Physiology and Head of Education Department; NOVA Medical School, Lisbon, Portugal.

Marta Carreira

is a mathematician; NOVA National School of Public Health, Lisbon, Portugal.

Sofia Azeredo-Lopes

is an Assistant Guest Professor, Department of Statistics; Comprehensive Health Research Centre (CHRC), NOVA Medical School, NOVA University, Lisbon, Portugal.

Joana Pires

is a researcher at the Comprehensive Health Research Centre (CHRC), NOVA National School of Public Health, NOVA University, Lisbon, Portugal.

António Rendas

is an Emeritus Professor of Pathophysiology, Department of Pathophysiology; NOVA Medical School, Lisbon, Portugal.

Patrícia Rosado-Pinto

is a retired Professor of the Education Department; NOVA Medical School, Lisbon, Portugal.

Bruno Heleno

is an Assistant Professor of Family Medicine, Department of Family Medicine, and a practicing Family Physician; Comprehensive Health Research Centre (CHRC), NOVA Medical School, NOVA University, Lisbon, Portugal.

Author contributions

MF, PBG, PM, AR, PRP, and BH contributed to the conception of the study. MF, PBG, PM, PRP, and BH made substantial contributions to the study design. MF, PBG, and BH were directly involved in data acquisition, and all authors contributed significantly to data interpretation. MF and BH drafted the original manuscript, and all authors critically revised it for important intellectual content. All authors provided final approval of the submitted version of the manuscript.

Funding

This study did not receive any funding. Article processing charges were covered by Fundação Ciência e Tecnologia, IP, through national support to CHRC (UIDP/04923/2020).

Data availability

All relevant data generated or analyzed during this study are included in this published article and in its Additional Supporting Information files. The primary data were written in Portuguese, the official language of NOVA Medical School. Any additional data are available from the corresponding author upon reasonable request.

Declarations

Ethics approval and consent to participate

This study adhered to the ethical principles outlined in the Declaration of Helsinki. The study protocol was approved by the NOVA Medical School Ethics Committee on January 19, 2022 (No. 214/2021/CEFCM). An addendum to the project was subsequently approved on October 4, 2022 (No. 214/ADENDA/2021/CEFCM). Written informed consent was obtained from all participants, who were informed of their right to withdraw from the study at any time without penalty.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Connor DM, Durning SJ, Rencic JJ. Clinical reasoning as a core competency. Acad Med. 2020;95:1166–71.
2. Torre D, Chamberland M, Mamede S. Implementation of three knowledge-oriented instructional strategies to teach clinical reasoning: Self-explanation, a concept mapping exercise, and deliberate reflection: AMEE Guide 150. Med Teach. 2022;45:1–9.
3. Brentnall J, Thackray D, Judd B. Evaluating the Clinical Reasoning of Student Health Professionals in Placement and Simulation Settings: A Systematic Review. Int J Environ Res Public Health. 2022;19.
4. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, et al. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Acad Med. 2019;94:902–12.
5. Young ME, Thomas A, Lubarsky S, Gordon D, Gruppen LD, Rencic J, et al. Mapping clinical reasoning literature across the health professions: A scoping review. BMC Med Educ. 2020;20:1–11.
6. Durning SJ, Jung E, Kim DH, Lee YM. Teaching clinical reasoning: principles from the literature to help improve instruction from the classroom to the bedside. Korean J Med Educ. 2024;36:145–55.
7. Rencic J, Trowbridge RL, Fagan M, Szauter K, Durning S. Clinical Reasoning Education at US Medical Schools: Results from a National Survey of Internal Medicine Clerkship Directors. J Gen Intern Med. 2017;32:1242–6.
8. Covin Y, Longo P, Wick N, Gavinski K, Wagner J. Empirical comparison of three assessment instruments of clinical reasoning capability in 230 medical students. BMC Med Educ. 2020;20:1–7.
9. National Institute for Health and Care Excellence. Multimorbidity: clinical assessment and management. In: NICE guideline. 2016. https://www.nice.org.uk/guidance/ng56. Accessed 25 September 2023.
10. Swanwick T, Forrest K, O’Brian BC. Understanding Medical Education. 3rd ed. Wiley Blackwell; 2019.
11. Cairo Notari S, Sader J, Caire Fon N, Sommer JM, Pereira Miozzari AC, Janjic D, et al. Understanding GPs’ clinical reasoning processes involved in managing patients suffering from multimorbidity: A systematic review of qualitative and quantitative research. Int J Clin Pract. 2021;75:1–13.
12. Bogerd MJL, Exmann CJC, Slottje P, Bont J, Van Hout HPJ. Predicting anticipated benefit from an extended consultation to personalise care in multimorbidity: a development and internal validation study of a prioritisation algorithm in general practice. Br J Gen Pract. 2024;74:e307–14.
13. Damarell RA, Morgan DD, Tieman JJ. General practitioner strategies for managing patients with multimorbidity: A systematic review and thematic synthesis of qualitative research. BMC Fam Pract. 2020;21:1–23.
14. Torre DM, Daley B, Stark-Schweitzer T, Siddartha S, Petkova J, Ziebert M. A qualitative evaluation of medical student learning with concept maps. Med Teach. 2007;29:949–55.
15. Fonseca M, Marvão P, Oliveira B, Heleno B, Carreiro-Martins P, Neuparth N, et al. The effectiveness of concept mapping as a tool for developing critical thinking in undergraduate medical education – a BEME systematic review: BEME Guide 81. Med Teach. 2023:1–14.
16. Novak JD. The Promise of New Ideas and New Technology for Improving Teaching and Learning. Cell Biol Educ. 2003;2:122–32.
17. Pierce C, Corral J, Aagaard E, Harnke B, Irby DM, Stickrath C. A BEME realist synthesis review of the effectiveness of teaching strategies used in the clinical setting on the development of clinical skills among health professionals: BEME Guide 61. Med Teach. 2020:1–12.
18. Tekin AK, Kotaman H. The Epistemological Perspectives on Action Research. Educational Social Res. 2013;3:81–91.
19. Fonseca M, Marvão P, Rosado P, Rendas A, Heleno B. Promoting clinical reasoning in undergraduate Family Medicine curricula through concept mapping: a qualitative approach. Adv Health Sci Educ. 2024.
20. Ferris HA, Collins ME. Research and Evaluation in Medical Education. Int J High Educ. 2015;4.
21. Creswell JW. A Concise Introduction to Mixed Methods Research. SAGE; 2015.
22. Creswell JW, Clark VLP. Designing and conducting mixed methods. 3rd ed. SAGE; 2018.
23. Lee SYD, Iott B, Banaszak-Holl J, Shih SF, Raj M, Johnson KE, et al. Application of Mixed Methods in Health Services Management Research: A Systematic Review. Med Care Res Rev. 2022;79:331–44.
24. Park YS, Lineberry M, Hyderi A, Bordage G, Riddle J, Yudkowsky R. Validity evidence for a patient note scoring rubric based on the new patient note format of the United States medical licensing examination. Acad Med. 2013;88:1552–7.
25. Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The development and preliminary validation of a rubric to assess medical students’ written summary statements in virtual patient cases. Acad Med. 2016;91:94–100.
26. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer. Front Public Health. 2018;6:1–18.
27. McBee E, Ratcliffe T, Schuwirth L, O’Neill D, Meyer H, Madden SJ, et al. Context and clinical reasoning: Understanding the medical student perspective. Perspect Med Educ. 2018;7:256–63.
28. Gordon D, Rencic JJ, Lang VJ, Thomas A, Young M, Durning SJ. Advancing the assessment of clinical reasoning across the health professions: Definitional and methodologic recommendations. Perspect Med Educ. 2022;11:108–14.
29. Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs - Principles and practices. Health Serv Res. 2013;48:2134–56.
30. Boet S, Sharma S, Goldman J, Reeves S. Review article: Medical education research: An overview of methods. Can J Anaesth. 2012;59:159–70.
31. Schiekirka S, Raupach T. A systematic review of factors influencing student ratings in undergraduate medical education course evaluations. BMC Med Educ. 2015;15:1–9.
32. Richards J, Schwartzstein R, Irish J, Almeida J, Roberts D. Clinical physiology grand rounds. Clin Teach. 2013;10:88–93.
33. Langer AL, Block BL, Schwartzstein RM, Richards JB. Building upon the foundational science curriculum with physiology-based grand rounds: a multi-institutional program evaluation. Med Educ Online. 2021;26.
34. Kumar S, Dee F, Kumar R, Velan G. Benefits of testable concept maps for learning about pathogenesis of disease. Teach Learn Med. 2011;23:137–43.
35. Brondfield S, Seol A, Hyland K, Teherani A, Hsu G. Integrating Concept Maps into a Medical Student Oncology Curriculum. J Cancer Educ. 2021;36:85–91.
36. Demeester A, Vanpee D, Marchand C, Eymard C. Formation au raisonnement clinique: perspectives d’utilisation des cartes conceptuelles [Clinical reasoning learning: concept mapping as a relevant strategy]. Pédagogie Médicale. 2010;11:81–95.
37. Schwartzstein RM. Clinical reasoning and artificial intelligence: can AI really think? Trans Am Clin Climatol Assoc. 2024;134:133–45.
38. Penner K, Wicklum S, Johnston A, Kelly MA. Teaching multimorbidity to medical students. Clin Teach. 2024:1–4.
39. Broeiro P, Ramos V, Barroso R. O mapa de problemas - um instrumento para lidar com a morbilidade múltipla [The problem map - a tool for dealing with multimorbidity]. Rev Port Clin Geral. 2007;23:209–15.
40. Wu B, Wang M, Grotzer TA, Liu J, Johnson JM. Visualizing complex processes using a cognitive-mapping tool to support the learning of clinical reasoning. BMC Med Educ. 2016;16:1–8.
41. Wang M, Wu B, Kirschner PA, Michael Spector J. Using cognitive mapping to foster deeper learning with complex problems in a computer-based environment. Comput Hum Behav. 2018;87:450–8.
