Advances in Medical Education and Practice
2021 Mar 19;12:285–286. doi: 10.2147/AMEP.S311170

Suggestions for Improving the Assessment of a Learning Management System Used for Clinical Curriculum Development [Response to Letter]

Severin Pinilla 1,2, Andrea Cantisani 3, Stefan Klöppel 1, Werner Strik 3, Christoph Nissen 3, Sören Huwendiek 2
PMCID: PMC7989053 PMID: 33776505

Dear editor

Thank you for the opportunity to respond to the observations of Oliveira et al. We fully agree with their recommendations for further study of higher-level educational outcomes when implementing and developing the use of a learning management system (LMS) in clinical learning environments. As the authors correctly noted, our study focused on an early phase of an educational design research cycle,1 and we primarily used student evaluations as outcome parameters. Importantly, our conclusion that using an LMS appeared to support student learning was not based on a single Likert item, but also on the improvement of our teaching hospital’s overall ranking. This ranking was calculated from a 30-item evaluation that included questions on perceived achievement of learning goals, educational support and learning climate, workplace-based assessments, and global evaluation items, as well as narrative student feedback. Because the evaluation was administered by an external evaluation department at our university, these details were not included in our data.

Furthermore, we would like to emphasize that the development and implementation of an LMS prototype at our teaching hospital was not intended to replace clinical training. Rather, it was developed to support and scaffold self-regulated learning by novice clinical students while they participated in clinical work for the first time. We agree with Oliveira et al that future studies should carefully examine which clinical learning contents can be taught effectively online or in a blended format, and which cannot. Another important aspect is the role of clinical supervisors and how they facilitate the educational experience by enabling a community of inquiry.2

We commend Oliveira et al for their critical comments regarding the use of Likert-item data and its implications for statistical analysis. While we agree that a numerical value of 3.9 or 4.4 does not convey much meaning in and of itself, it can be interpreted as a trend. As researchers in other fields have shown, treating Likert-item data as numerical is acceptable.3 However, we agree that the consequences of choosing one statistical test over another need to be considered carefully, depending on the context and the potential implications of the research findings. Moreover, even if acquiescence bias were relevant, it would apply to both cohorts equally. As our study was designed as an exploratory pilot project, we considered the implications of using Likert-item data in this way acceptable.
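To illustrate this point about test choice, the following is a minimal sketch rather than the analysis from our study: it uses hypothetical 5-point Likert responses for two cohorts and compares them with both a Welch t test (treating the item as numerical) and the Mann-Whitney-Wilcoxon test (treating it as ordinal), in the spirit of the comparison by De Winter and Dodou.3 The data, sample sizes, and variable names are illustrative assumptions only.

```python
# Minimal sketch (hypothetical data, not the study's dataset): compare two
# cohorts' 5-point Likert ratings with a parametric and a non-parametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cohort_a = rng.integers(1, 6, size=40)  # hypothetical Likert ratings, cohort A
cohort_b = rng.integers(2, 6, size=40)  # hypothetical Likert ratings, cohort B

# Treating Likert items as numerical: independent-samples Welch t test.
t_stat, t_p = stats.ttest_ind(cohort_a, cohort_b, equal_var=False)

# Treating Likert items as ordinal: Mann-Whitney-Wilcoxon rank test.
u_stat, u_p = stats.mannwhitneyu(cohort_a, cohort_b, alternative="two-sided")

print(f"Welch t test:        t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney U test: U = {u_stat:.1f}, p = {u_p:.3f}")
```

In most exploratory settings the two approaches lead to similar conclusions, which is why the consequences of the choice matter chiefly when findings are close to a decision threshold.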

Finally, we would like to thank the authors for their insightful feedback and wish them all the best in their upcoming graduation and professional career, and we encourage their scholarly interest in medical education.

Disclosure

All authors declare that they have no conflicts of interest for this communication.

References

1. Chen W, Reeves TC. Twelve tips for conducting educational design research in medical education. Med Teach. 2020;42(9):980–986.
2. Garrison DR, Anderson T, Archer W. Critical inquiry in a text-based environment: computer conferencing in higher education. Internet Higher Educ. 1999;2(2–3):87–105. doi:10.1016/S1096-7516(00)00016-6
3. De Winter JFC, Dodou D. Five-point Likert items: t test versus Mann-Whitney-Wilcoxon (addendum added October 2012). Pract Assess Res Evaluation. 2010;15(1):11.
