PLoS One. 2021 Jun 24;16(6):e0253471. doi: 10.1371/journal.pone.0253471

Prioritising topics for developing e-learning resources in healthcare curricula: A comparison between students and educators using a modified Delphi survey

Hooi Min Lim 1, Chirk Jenn Ng 1,*, Chin Hai Teo 1, Ping Yein Lee 2, Puteri Shanaz Jahn Kassim 2, Nurul Amelina Nasharuddin 3, Phelim Voon Chen Yong 4, Renukha Sellappans 5, Wei Hsum Yap 4, Yew Kong Lee 1, Zahiruddin Fitri Abu Hassan 6, Kuhan Krishnan 7, Sazlina Shariff Ghazali 2, Faridah Idris 8, Nurhanim Hassan 9, Enna Ayub 9, Stathis Konstantinidis 10, Michael Taylor 10, Cherry Poussa 10, Klas Karlgren 11, Natalia Stathakarou 11, Petter Mordt 12, Arne Thomas Nilsen 12, Heather Wharrad 10
Editor: Jenny Wilkinson
PMCID: PMC8224897  PMID: 34166432

Abstract

Background

Engaging students in the e-learning development process enhances the effective implementation of e-learning; however, students' priorities for e-learning topics may differ from those of their educators. This study aims to compare the differences between students and their educators in prioritising topics in three healthcare curricula for reusable learning object (RLO) development.

Methods

A modified Delphi study was conducted among students and educators from the University of Malaya (UM), Universiti Putra Malaysia (UPM) and Taylor's University (TU) across three undergraduate programmes. In Round 1, participants were asked to select the topics from the respective syllabi to be developed into RLOs. Priority ranking was determined using frequencies and proportions. The first quartile of the prioritised topics was included in the Round 2 survey, in which participants were asked to rate the priority of each topic on a 5-point Likert scale. The mean score of each topic was compared between students and educators.

Results

A total of 43 educators and 377 students participated in this study. For UM and TU Pharmacy, there was a mismatch in the prioritised topics between students and educators. For UPM, the educators and students prioritised the same topics in both rounds. To harmonise topic prioritisation between students and educators for UM and TU Pharmacy, the topics given a higher mean score by both students and educators were prioritised.

Conclusion

The mismatch in prioritised topics between students and educators uncovered factors that might influence the prioritisation process. This study highlights the importance of conducting a needs assessment at the beginning of e-learning resource development.

Introduction

Conventionally, various learning methods have been used in healthcare education, including lecture-based learning, the Socratic method of questioning and cross-examination, case-based learning and problem-based learning [1]. In recent years, different modalities of e-learning have increasingly been used in healthcare education. The main advantages of e-learning are its flexibility and accessibility: learners can learn at their own pace, wherever they are. E-learning in healthcare education has been shown to effectively enhance learners' understanding of difficult topics or concepts through the use of technology [2].

Traditionally, practical skills in the healthcare curriculum were deemed unsuitable for e-learning [3]. Certain types of training, such as interpersonal and communication skills, might be considered less appropriate for e-learning delivery [4]. With the introduction of simulation technology, virtual patients and synchronous delivery, more e-learning materials on practical and soft skills have been developed in the healthcare curriculum [5]. Blended learning has overcome some of the disadvantages of e-learning by integrating face-to-face teaching with online e-learning materials [6,7]. However, there is a lack of literature and guidelines for determining which topics in the healthcare curriculum can be effectively delivered through e-learning.

Student engagement in medical education, especially participation in curriculum development, has received growing attention in recent years [8–10]. Students as co-creators of the medical curriculum provide input from the learners' perspective, continuous feedback and innovations to improve the curriculum [8,9,11]. In e-learning development, a learning needs assessment among students is the initial step to identify the needs and preferences of the end-users [12]. It is important to assess students' needs, views and preferences, as these are key to the effective implementation and integration of e-learning into the curriculum. Students can give valuable input on the topics they struggle to understand, for which e-learning might be helpful, or on the topics that could be supplemented with e-learning to enhance their learning experience. However, students and educators might have different opinions on which parts of the curriculum can be effectively supplemented with e-learning. To date, there has been no study on how to identify topics for e-learning object development in the healthcare curriculum.

Identifying needs and topics for RLO development in the existing curriculum is the first step in the development process [13,14]. This study aimed to compare the differences between students and their educators in prioritising topics for RLO development.

Materials and methods

This study was part of the Advancing Co-creation of RLOs to Digitise Healthcare Curricula (ACoRD) project, a 3-year Erasmus+ funded project involving six collaborating institutions from Europe and Malaysia. The project aimed to introduce innovative digital pedagogy methods to benefit healthcare and biomedical science students in the partner countries. Reusable learning objects (RLOs), small interactive e-learning objects of 10–15 minutes that each focus on a single learning goal, were to be developed in this project.

The modified Delphi survey

A two-round modified Delphi survey was conducted from January to March 2019 in three universities in Malaysia, i.e. the University of Malaya (UM), Universiti Putra Malaysia (UPM) and Taylor's University (TU), to identify the prioritised topics for RLO development. Table 1 shows the healthcare curricula selected for the development of RLOs. Each institution selected the specific curriculum into which RLO development would be integrated, based on the institution's priorities and the researchers' interests. For TU, two courses (Pharmacy and Biomedical) participated in this modified Delphi survey. As the response rate from TU Biomedical was low, with only one educator responding in Round 2 (Round 1: 3 educators, 48 students; Round 2: 1 educator, 40 students), we excluded TU Biomedical from the results and report only the findings from TU Pharmacy (S1 Table).

Table 1. Healthcare curricula selected for development of reusable learning objects (RLOs).

Institution | Programme | Curriculum | Number of topics for selection
University of Malaya (UM) | Undergraduate Medical Programme (Years 4–5) | Primary Care Medicine | 84
Universiti Putra Malaysia (UPM) | Undergraduate Medical Programme (Years 1–5) | Personal and Professional Development | 34
Taylor's University (TU Pharmacy) | Undergraduate Pharmacy Programme (Years 1–2) | Microbiology, Biochemistry, Anatomy & Physiology | 87

The Delphi technique uses systematic, repeated rounds of iterative questionnaires with controlled feedback to achieve expert consensus [15,16]. The Delphi method is commonly used in healthcare education to reach consensus on curricular needs or to set priorities [15,17–19]. It has also been used in different medical specialties to identify and prioritise topics and procedures for training and teaching in medical education [20–22]. In the classical Delphi process, Round 1 is usually an open-ended questionnaire asking the panellists for their opinions on an issue, for idea generation. These responses are analysed by the researchers and fed back to the panellists in Round 2 in the form of statements or questions for rating or ranking. In subsequent rounds, the panellists are shown the responses of the other participants and asked to reconsider their own. The rounds continue until consensus is reached [23].
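This iterative structure can be summarised in a schematic Python sketch; it is an illustration of the classical Delphi loop described above, not code from this study, and all names (panel, answer, consensus, aggregate) are hypothetical placeholders.

```python
# Schematic sketch of a classical Delphi loop; all names are illustrative
# placeholders, not part of this study's procedure.
def run_delphi(panel, questionnaire, consensus, aggregate, max_rounds=5):
    feedback = None
    responses = []
    for _ in range(max_rounds):
        # Each panellist answers anonymously, seeing only the controlled
        # feedback (aggregated group responses) from the previous round.
        responses = [expert.answer(questionnaire, feedback) for expert in panel]
        if consensus(responses):         # e.g. a pre-agreed level of agreement per item
            break
        feedback = aggregate(responses)  # summarise and feed back to panellists
    return responses
```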

In this study, the conventional first round of a Delphi study was omitted because the list of topics for the questionnaire could be identified from the existing curricula of the respective institutions. Fig 1 shows a flowchart of how the topics were prioritised using the modified Delphi survey. An expert panel consisting of the curriculum developers and experienced educators reviewed the learning objectives in the existing curriculum. The panel identified a specific subject area or course, reviewed the learning outcomes and topics within it, and listed them in a questionnaire for Round 1.

Fig 1. Flowchart of the modified Delphi survey in prioritising topics for reusable learning object (RLO) development, a funnel decision-making model.


Participants

In Round 1, the participants in this study were (1) students from the respective programmes who had completed the specific course and (2) educators who were currently teaching the course. In Round 2, the participants were those who had responded to the Round 1 questionnaire.

Data collection and analysis

The research team identified the participants from the respective departments and invited them individually. A universal sampling method was used: all students who had completed the specific course and all educators involved in teaching it were invited. An online survey was conducted using REDCap (UM) and Google Forms (UPM and TU). The participants read the participant information sheet and signed the online consent form if they agreed to participate. They then completed the demographic questions and the main questionnaire. The study was approved by the University of Malaya Medical Centre Medical Research Ethics Committee (MECID No 2019225–7166) and the Ethics Committee for Research Involving Human Subjects, Universiti Putra Malaysia (JKEUPM-2019-103).

Round 1

In Round 1, all eligible participants received an invitation via email or WhatsApp message, and reminders were sent two weeks later. First, the concept and characteristics of RLOs were explained to the participants. They were then asked to select the topics that should be supplemented with RLOs: either topics where RLOs would underline their importance, or more complex topics where RLOs were needed to enhance students' understanding. The participants were given a comprehensive list of topics and selected topics using checkboxes. Within each institution, the data were analysed separately for the student and educator categories, and a percentage was calculated for each topic (the number of participants who selected the topic divided by the total number of participants). The topics were sorted in descending order (most to least selected) separately for the student and educator groups. The first quartile of the most selected topics from the students and the first quartile from the educators were included in the Round 2 survey.
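As an illustration, a minimal sketch of this ranking step, assuming the checkbox answers are available as a 0/1 matrix (rows are participants, columns are topics); the study's own analysis was performed in SPSS, so the names and data layout below are assumptions.

```python
# Illustrative sketch of the Round 1 ranking (the study used SPSS, not Python).
# Assumes checkbox answers form a 0/1 matrix: rows = participants, columns = topics.
import pandas as pd

def first_quartile(responses: pd.DataFrame) -> pd.Series:
    """Percentage of participants selecting each topic, keeping the top quarter."""
    pct = responses.mean().mul(100).sort_values(ascending=False)
    return pct.head(max(1, len(pct) // 4))

# Students and educators are ranked separately; the union of the two first
# quartiles forms the Round 2 questionnaire:
# round2 = set(first_quartile(students).index) | set(first_quartile(educators).index)
```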

Round 2

Participation in Round 1 was a prerequisite for the Round 2 survey. In Round 2, to avoid bias, the participants were not told whether each topic had been selected by students or by educators. The participants were asked to rate the priority of each listed topic for development into an e-learning object on a 5-point Likert scale (1, not a priority; 2, low priority; 3, medium priority; 4, high priority; 5, essential). The mean and standard deviation of each topic were calculated and sorted in descending order. Scatter plots were used to compare the students' topic prioritisation between Round 1 and Round 2, and to compare topic prioritisation between students and educators. All data were analysed using SPSS version 21.
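A similarly hedged sketch of the Round 2 summary and scatter-plot comparison (again, the original analysis used SPSS version 21; the column layout and function names are assumptions):

```python
# Illustrative Round 2 summary. Assumes each column holds one topic's
# 1-5 Likert ratings, one row per participant.
import pandas as pd
import matplotlib.pyplot as plt

def summarise(ratings: pd.DataFrame) -> pd.DataFrame:
    """Mean and SD per topic, sorted by mean in descending order."""
    return ratings.agg(['mean', 'std']).T.sort_values('mean', ascending=False)

def plot_comparison(students: pd.DataFrame, educators: pd.DataFrame) -> None:
    """Scatter plot of educator vs student mean scores for the shared topics."""
    s, e = students.mean(), educators.mean()
    topics = s.index.intersection(e.index)
    plt.scatter(e[topics], s[topics])
    plt.axvline(e[topics].mean(), linestyle='--')  # educators' overall mean
    plt.axhline(s[topics].mean(), linestyle='--')  # students' overall mean
    plt.xlabel('Educator mean score')
    plt.ylabel('Student mean score')
    plt.show()
```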

Results

Table 2 shows the demographics of the participants. A total of 43 educators and 377 students from the three institutions participated in this study. The response rate was 69.7% for Round 1 (UM 71.7%, UPM 56.1%, TU Pharmacy 81.2%) and 72.3% for Round 2 (UM 68.7%, UPM 64.3%, TU Pharmacy 83.9%) (S1 Table).

Table 2. Demographics of participants (n = 420).

Educators | UM (n = 15) | UPM (n = 25) | TU Pharmacy (n = 3)
Gender, n (%): Male | 4 (26.7) | 6 (24) | 1 (33.3)
Gender, n (%): Female | 11 (73.3) | 19 (76) | 2 (66.7)
Academic qualification, n (%): Master | 9 (60) | 12 (48) | 0
Academic qualification, n (%): PhD | 6 (40) | 13 (52) | 3 (100)
Years of teaching, median (IQR) | 9 (13) | 8 (5) | 2.5 (3.5)
Hours per week teaching the specified curriculum, mean ± SD | 4.4 ± 3.6 | 7.5 ± 10.9 | 8.7 ± 3.4

Students | UM (n = 119) | UPM (n = 205) | TU Pharmacy (n = 53)
Gender, n (%): Male | 56 (47.1) | 56 (27) | 14 (26.4)
Gender, n (%): Female | 63 (52.9) | 149 (73) | 39 (73.6)
Current year of study, n (%): Year 1 | - | - | 29 (54.7)
Current year of study, n (%): Year 2 | - | 14 (7) | 24 (45.3)
Current year of study, n (%): Year 3 | - | 67 (33) | -
Current year of study, n (%): Year 4 | 34 (28.6) | 76 (37) | -
Current year of study, n (%): Year 5 | 85 (71.4) | 48 (23) | -

UM, University of Malaya; UPM, Universiti Putra Malaysia; TU, Taylor's University; IQR, interquartile range; SD, standard deviation.

Fig 2 compares topic prioritisation between Round 1 (percentage of selection) and Round 2 (mean score of each topic) by the students and educators. For UM, the topics selected by the students and educators differed in Round 1. In Round 2, the students gave higher mean scores to the topics they had selected themselves than to those selected only by the educators (Fig 2A). For UPM, on the other hand, both the students and educators selected the same topics in Round 1 (Fig 2B); hence, there was no discrepancy in topic selection between students and educators in the Round 2 survey. For TU Pharmacy, none of the topics reached a consensus of ≥ 50% among the students in Round 1 (Fig 2C). In Round 2, the students gave higher scores to six topics that had been selected only by the educators in Round 1, while eleven topics selected by the students in Round 1 received lower mean scores in Round 2. The students' topic selections were thus inconsistent between Round 1 and Round 2.

Fig 2. Comparison of topic prioritisation in Round 1 (percentage of selection) and Round 2 (mean score) by the students and educators.


(A) University of Malaya (B) Universiti Putra Malaysia (C) Taylor's University. Only the first quartile of the most selected topics from each party (students or educators) in Round 1 is plotted. The line represents the mean score of the topics in Round 2.

Fig 3 compares the mean scores of each topic between the students and educators in Round 2. To harmonise topic prioritisation between students and educators, the topics scored highly by both the students and educators were selected for development into RLOs. For UM, topics with a mean score ≥ 3.36 from the educators and ≥ 4.20 from the students were prioritised for RLO development (Fig 3A). For UPM, the prioritised topics were consistent between the students and educators (Fig 3B). For TU Pharmacy, topics with a mean score ≥ 3.35 from the educators and ≥ 3.26 from the students were prioritised for e-learning development (Fig 3C). For UM and TU Pharmacy, topics with a lower mean score from both the educators and students were not prioritised for e-learning development.

Fig 3. Comparison of the mean score of each topic given by the students and educators in Round 2.


(A) University of Malaya (B) Universiti Putra Malaysia (C) Taylor's University. A scatter plot was used to harmonise topic prioritisation between students and educators by comparing the mean score of each topic in Round 2 (X-axis: mean score of each topic from educators; Y-axis: mean score of each topic from students). The lines represent the mean scores of the topics.
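The harmonisation rule described above reduces to a conjunction of two thresholds. A minimal sketch, assuming the per-topic Round 2 means are available as two index-aligned series; the UM cut-offs from Fig 3A appear purely as an example.

```python
# Sketch of the harmonisation rule: keep topics rated at or above the cut-off
# by BOTH students and educators. Cut-offs are each group's overall Round 2 mean.
import pandas as pd

def harmonise(student_means: pd.Series, educator_means: pd.Series,
              student_cut: float, educator_cut: float) -> list:
    """Topics prioritised for RLO development by both parties.

    Assumes both series share the same topic index.
    """
    keep = (student_means >= student_cut) & (educator_means >= educator_cut)
    return sorted(student_means[keep].index)

# e.g. for UM (Fig 3A):
# harmonise(s_means, e_means, student_cut=4.20, educator_cut=3.36)
```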

Discussion

This study demonstrated considerable variation in how students prioritised topics for RLO development at each institution. For UPM, students and educators prioritised the same topics for RLO development. For UM and TU Pharmacy, there was a mismatch in the prioritised topics between the students and educators. Students from UM were consistent in their choice of topics across Round 1 and Round 2, whereas students from TU Pharmacy were not. To harmonise prioritisation between the students and educators at UM and TU Pharmacy, the topics with higher mean scores from both the students and educators were selected for RLO development.

The students and educators at UPM were congruent in their selection of topics for RLO development. This could be because the Personal and Professional Development curriculum at UPM has fewer topics (n = 34) than UM (n = 84) and TU Pharmacy (n = 87). The scope of this curriculum was also more focused, covering professionalism, ethics and law, cultural competence, and evidence-based medicine [24,25].

For UM, there was a clear discrepancy in the topics selected by the students and educators for RLO development. This could be because the scope of the primary care medicine curriculum at UM was wide, ranging from knowledge and comprehension of principles to specific clinical skills such as communication and procedural skills. Misconceptions among students about the practice and principles of primary care medicine might have contributed to the mismatch in topics between students and educators [26–28]. One study reported that students focused on the psychosocial and human aspects of primary care medicine but placed less emphasis on technical aspects of primary care practice such as managing uncertainty and clinical reasoning [27]. Some students appeared to focus on organ- or disease-based medical knowledge instead of the holistic, patient-centred approach of primary care medicine [26]. More exploration is needed to determine the reasons for the discrepancy: whether it was caused by a mismatch in teaching and learning focus between students and educators, or by differing opinions on the types of topics that should be supplemented with RLOs in the primary care medicine curriculum.

For TU Pharmacy, whose curriculum covered basic science topics (microbiology, biochemistry, anatomy and physiology), students prioritised the educators' choice of topics over their own. Because the Delphi survey keeps choices anonymous, students had the chance to reconsider topic priorities in the subsequent round and may have unconsciously favoured the topics prioritised by the educators in the previous round. It is possible that the students' views changed because some students were less confident and shifted to the majority viewpoint [29]. However, this pattern was not observed among the UM students, who remained consistent with their chosen topics in Round 2 and were not influenced by the educators' topics. Another possible explanation for the differences in prioritisation patterns across universities is the nature of the topics: TU's topics were mainly basic science (knowledge-based), while those of UM and UPM focused on clinical competencies (e.g. prescription and communication). In addition, TU had a large number of topics for selection but a small number of participants, especially educators. This, too, might have contributed to the difference in the prioritisation process for TU.

In this present study of prioritising topics for RLO development, different results are to be expected when the same methodology is applied in different institutions because of variations in the existing curriculum, regional needs, teaching methods, learning activities and pace of e-learning development at each institution. More research, especially using a qualitative approach, is needed to explore why students select topics differently from their educators, how students and educators select topics for RLOs, and what factors they consider when prioritising. Understanding these factors will help formulate a better methodology for prioritising topics for RLO development.

In current higher education teaching, a student-centred approach has been adopted because students in higher education have the maturity to understand the required standards and learning objectives [30]. Students often have high expectations of their learning processes, causing a mismatch between educators' perceptions and students' expectations [31–33]. A lack of knowledge and understanding among faculty and educators could contribute to unmet expectations among students. Hence, it is important to take students' preferences for topics into consideration when developing RLOs. Students know from their own experience which topics are difficult to understand and would benefit from an RLO, and which topics require more multimedia interaction to enhance their learning. While educators' opinions on the selection of topics for RLO development should be considered, as they are the content and education experts, their preferences and prioritisation may be influenced by their personal interests and their perceptions of the students. Afshar et al. [34] highlighted how an educator's preferred teaching approach resulted in loss of interest and reluctance to learn biochemistry among medical students, and how this can be addressed by taking students' learning needs into consideration. We would therefore argue that teaching approaches should prioritise students' needs and preferences over those of the educators. In addition, students' needs assessment should be the cornerstone of curriculum planning and development, allowing educators to identify topics, skills and knowledge that address learning needs [35]. With a needs assessment, learning becomes more relevant to learners' clinical practice, which is more likely to lead to a change in their practice [36].

For the final topic selection for RLO development, topics given a higher priority by both students and educators would be chosen, because these topics meet the learning and teaching needs of both parties. We would expect RLOs to improve learning outcomes more effectively and to have a higher rate of successful implementation if the topics were chosen by both groups of end-users. Likewise, topics given a low priority by both the students and educators would not be prioritised for RLO development. For topics prioritised by only one party, either students or educators, more stakeholder input is needed to decide whether they should be developed into RLOs. Harmonising topic prioritisation between students and educators is important to balance the opinions of both parties and to choose topics for RLO development carefully, especially when resources for e-learning development are limited and high-quality multimedia content is expensive to produce.

This study demonstrated a practical approach using a modified Delphi survey to prioritise topics systematically for RLO development. It preserves the anonymity and confidentiality of the participants, so students can prioritise topics independently and objectively, free from peer pressure and educator dominance. The data in this study were analysed separately for students and educators so that the opinions of both parties would be considered; analysing both parties together would diminish the educators' opinions, as students usually outnumber educators in an institution. The modified Delphi survey is one method of engaging students in RLO development within the curriculum. This study did not aim to achieve consensus but rather described a practical approach to prioritising topics that is applicable across disciplines in healthcare curricula. It offers a basis for embedding topic prioritisation in the development of highly relevant and useful e-learning objects. Engaging the end-users in topic selection is important for the effective implementation of RLOs.

The strength of this study was that it was conducted in three higher education institutions in Malaysia, with input from both students and educators as stakeholders. It showed the feasibility of conducting a modified Delphi survey across different institutions and healthcare curricula, and it was the first study to examine variation in the selection of topics for RLO development across institutions. However, there are some limitations. First, the majority of topics selected by the students from TU Pharmacy did not achieve >50% consensus in Round 1, indicating low agreement on the topics. Second, the number of educators in this study was small, especially at TU, and may not reflect the 'true' needs of the educators involved in teaching the curricula. Third, the nature of the topics differed between institutions: UM's and UPM's topics were related to clinical competencies, while TU Pharmacy's topics were basic science knowledge. This might contribute to the discrepancy in results between institutions. Finally, the students in this study were at different stages of their undergraduate programmes, which might have affected their learning needs and experience; this factor was not further explored in this study.

Conclusions

This study showed the variation in opinions on topic selection between students and educators across institutions and health topics. Further research is needed to explore the factors underlying the discrepancy in students' learning needs from the perspectives of both students and educators. This study also highlighted the importance of conducting a students' learning needs assessment before developing e-learning resources, to support effective implementation. A learning needs assessment should be the starting point when designing e-learning resources for healthcare curricula.

Supporting information

S1 Table. Response rate of the modified Delphi survey according to institutions.

(PDF)

S2 Table. Percentage of selection in Round 1 and mean score in Round 2 for each topic.

(PDF)

Acknowledgments

The authors would like to thank all the students and educators from the Department of Primary Care Medicine (UM), Faculty of Medicine and Health Sciences (UPM), Department of Pharmacy (TU), and Department of Biomedical (TU) for their participation and contribution. The authors would like to thank Dr Sin Ling Hue for her assistance in data collection for UM.

Data Availability

All relevant data are within the manuscript and its Supporting information files.

Funding Statement

This project was funded by the European Union ERASMUS+ Programme under the ACoRD project (reference number: 598935-EPP-1-2018-1-UK-EPPKA2-CBHE-JP). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Mourad A, Jurjus A, Hajj Hussein I. The What or the How: a Review of Teaching Tools and Methods in Medical Education. Medical Science Educator. 2016;26.
2. Bath-Hextall F, Wharrad H, Leonardi-Bee J. Teaching tools in Evidence Based Practice: evaluation of reusable learning objects (RLOs) for learning about Meta-analysis. BMC Medical Education. 2011;11(1):18. doi: 10.1186/1472-6920-11-18
3. Arkorful V, Abaidoo N. The role of e-learning, the advantages and disadvantages of its adoption in higher education. International Journal of Education and Research. 2014;2(12):397–410.
4. Hameed S, Badii A, Cullen AJ. Effective e-learning integration with traditional learning in a blended learning environment. European and Mediterranean Conference on Information Systems 2008 (EMCIS 2008); Al Bostan Rotana, Dubai, UAE: Brunel University; 2008.
5. Kim S. The Future of E-Learning in Medical Education: Current Trend and Future Opportunity. Journal of Educational Evaluation for Health Professions. 2006;3:3. doi: 10.3352/jeehp.2006.3.3
6. Garrison DR, Kanuka H. Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education. 2004;7(2):95–105.
7. Rowe M, Frantz J, Bozalek V. The role of blended learning in the clinical education of healthcare students: A systematic review. Medical Teacher. 2012;34(4):e216–e221. doi: 10.3109/0142159X.2012.642831
8. Dhaese S, Van de Caveye I, Bussche P, Bogaert S, De Maeseneer J. Student participation: To the benefit of both the student and the faculty. Education for Health. 2015;28(1):79–82.
9. Milles LS, Hitzblech T, Drees S, Wurl W, Arends P, Peters H. Student engagement in medical education: A mixed-method study on medical students as module co-directors in curriculum development. Medical Teacher. 2019;41(10):1143–1150. doi: 10.1080/0142159X.2019.1623385
10. Karakitsiou DE, Markou A, Kyriakou P, Pieri M, Abuaita M, Bourousis E, et al. The good student is more than a listener - The 12+1 roles of the medical student. Medical Teacher. 2012;34(1):e1–e8. doi: 10.3109/0142159X.2012.638006
11. Karlgren K, Paavola S, Ligorio MB. Introduction: what are knowledge work practices in education? How can we study and promote them? Research Papers in Education. 2020;35(1):1–7.
12. Branch RM. Instructional Design: The ADDIE Approach. Springer Publishing Company; 2009. 203 p.
13. Wharrad H, Windle R. Case studies of creating reusable inter professional e-learning objects. In: Bromage A, Clouder L, Gordon F, Thistlethwaite J, editors. Interprofessional E-Learning and Collaborative Work: Practices and Technologies. IGI Global; 2010. p. 260–274.
14. Windle RJ, McCormick D, Dandrea J, Wharrad H. The characteristics of reusable learning objects that enhance learning: A case-study in health-science education. British Journal of Educational Technology. 2011;42(5):811–823.
15. Thangaratinam S, Redman CW. The Delphi technique. The Obstetrician & Gynaecologist. 2005;7(2):120–125.
16. Eubank BH, Mohtadi NG, Lafave MR, Wiley JP, Bois AJ, Boorman RS, et al. Using the modified Delphi method to establish clinical consensus for the diagnosis and treatment of patients with rotator cuff pathology. BMC Medical Research Methodology. 2016;16:56. doi: 10.1186/s12874-016-0165-8
17. Mullen PM. Delphi: myths and reality. Journal of Health Organization and Management. 2003;17(1):37–52. doi: 10.1108/14777260310469319
18. Sun C, Dohrn J, Oweis A, Huijer HA, Abu-Moghli F, Dawani H, et al. Delphi Survey of Clinical Nursing and Midwifery Research Priorities in the Eastern Mediterranean Region. Journal of Nursing Scholarship. 2017;49(2):223–235. doi: 10.1111/jnu.12280
19. Roney L, McKenna C. Determining the Education and Research Priorities in Pediatric Trauma Nursing: A Delphi Study. Journal of Trauma Nursing. 2018;25(5):290–297. doi: 10.1097/JTN.0000000000000390
20. Gustavsen PH, Nielsen DG, Paltved C, Konge L, Nayahangan LJ. A national needs assessment study to determine procedures for simulation-based training in cardiology in Denmark. Scandinavian Cardiovascular Journal. 2019;53(1):35–41. doi: 10.1080/14017431.2019.1569716
21. Nayahangan LJ, Stefanidis D, Kern DE, Konge L. How to identify and prioritize procedures suitable for simulation-based training: Experiences from general needs assessments using a modified Delphi method and a needs assessment formula. Medical Teacher. 2018;40(7):676–683. doi: 10.1080/0142159X.2018.1472756
22. Hall S, Stephens J, Parton W, Myers M, Harrison C, Elmansouri A, et al. Identifying Medical Student Perceptions on the Difficulty of Learning Different Topics of the Undergraduate Anatomy Curriculum. Medical Science Educator. 2018;28(3):469–472.
23. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. International Journal of Nursing Studies. 2001;38(2):195–200. doi: 10.1016/s0020-7489(00)00044-4
24. Komattil R, Hande SH, Mohammed CA, Subramaniam B. Evaluation of a personal and professional development module in an undergraduate medical curriculum in India. Korean J Med Educ. 2016;28(1):117–121. doi: 10.3946/kjme.2016.17
25. Yielder J, Moir F. Assessing the Development of Medical Students' Personal and Professional Skills by Portfolio. J Med Educ Curric Dev. 2016;3:JMECD.S30110. doi: 10.4137/JMECD.S30110
26. López-Roig S, Pastor M, Rodríguez C. The reputation and professional identity of family medicine practice according to medical students: a Spanish case study. Aten Primaria. 2010;42(12):591–601. doi: 10.1016/j.aprim.2010.05.005
27. Chung C, Maisonneuve H, Pfarrwaller E, Audétat M-C, Birchmeier A, Herzig L, et al. Impact of the primary care curriculum and its teaching formats on medical students' perception of primary care: a cross-sectional study. BMC Family Practice. 2016;17(1):135. doi: 10.1186/s12875-016-0532-x
28. Erikson CE, Danish S, Jones KC, Sandberg SF, Carle AC. The role of medical school culture in primary care career choice. Acad Med. 2013;88(12):1919–1926. doi: 10.1097/ACM.0000000000000038
29. Becker GE, Roberts T. Do we agree? Using a Delphi technique to develop consensus on skills of hand expression. J Hum Lact. 2009;25(2):220–225. doi: 10.1177/0890334409333679
30. Lea SJ, Stephenson D, Troy J. Higher Education Students' Attitudes to Student-centred Learning: Beyond 'educational bulimia'? Studies in Higher Education. 2003;28(3):321–334.
31. Perera J, Lee N, Win K, Perera J, Wijesuriya L. Formative feedback to students: the mismatch between faculty perceptions and student expectations. Medical Teacher. 2008;30(4):395–399. doi: 10.1080/01421590801949966
32. Möller R, Shoshan M. Does reality meet expectations? An analysis of medical students' expectations and perceived learning during mandatory research projects. BMC Medical Education. 2019;19(1):93. doi: 10.1186/s12909-019-1526-x
33. Wenrich M, Jackson MB, Scherpbier AJ, Wolfhagen IH, Ramsey PG, Goldstein EA. Ready or not? Expectations of faculty and medical students for clinical skills preparation for clerkships. Med Educ Online. 2010;15. doi: 10.3402/meo.v15i0.5295
34. Afshar M, Han Z. Teaching and Learning Medical Biochemistry: Perspectives from a Student and an Educator. Medical Science Educator. 2014;24(3):339–341. doi: 10.1007/s40670-014-0004-7
35. Lockyer J. Needs assessment: Lessons learned. Journal of Continuing Education in the Health Professions. 1998;18(3):190–192.
36. Grant J. Learning needs assessment: assessing the need. BMJ. 2002;324(7330):156–159. doi: 10.1136/bmj.324.7330.156

Decision Letter 0

Jenny Wilkinson

20 Apr 2021

PONE-D-20-36536

Prioritizing topics for developing e-learning resources in healthcare curricula: a comparison between students and educators

PLOS ONE

Dear Dr. Ng,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 04 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Jenny Wilkinson, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

  1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.  Thank you for including your ethics statement: 'The participants read through the participant information sheet and signed the online consent form if they agreed to participate. Subsequently, the participants filled up the demographic data and the main questionnaire. The study was approved by the respective medical research ethics committees (MECID No 2019225-7166, JKEUPM-2019-103).'

a. Please amend your current ethics statement to include the full names of the ethics committees/institutional review board(s) that approved your specific study.

b. Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research

3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments:

Thank you for your submission; attached are reviewer comments for your information and I now invite you to provide a response to these comments.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The work aim is valid and can be easily replicated in other settings. Nevertheless, there are many confusing points of concern in the manuscript that need to be explained and commented on and, if found necessary to improve the presentation, to make appropriate changes to improve the presented work.

[1] Why in the case of TU were 2 different courses (Pharmacy and Biomed) separated in methods and tables but combined only in results? Was there any difference in results figures? Was this a reason to have educators choose completely different choices than students when combined results were presented?

[2] In table S2 (B) and figure 3, it is scientifically unacceptable that 25 educators and 205 students agree (independently) to value 16 topics by only 2 values out of the 5-point Likert scale (4 by 13/16 and 3 by 3/16)!! This has produced an odd "scatter" plot as shown in figure 3. What went wrong here?

[3] Results, line 152 reads: "topics were sorted in descending order (the most selected to least selected topics) separately for each of student and educator groups. The first quartile of most selected topics from the students and the first quartile of most selected topics from educators were included in Round 2 survey." OK, why was item #2 (scoring 52/educator and 55.1/student) included in the quartile for R2 and not item #23 (scoring 60/educator and 61/student); see table S2 (B), Supplementary documents. How can the values of 52 and 55.1 be included in a quartile of descending values while, in the same table, values of 60 and 61 (respectively) are not included in the same quartile?? No clue can be discovered.

[4] Need to comment on the difference in results from TU in comparison with UM and UPM by referring to the difference in the nature of the lists of "topics". The lists used in UM and UPM cover a number of "competencies" while those for TU merely listed subjects relating to purely factual knowledge, as detailed in the supplemental materials (tables S2 A-D). Accordingly, one major result should emphasize that the nature of topics can affect the results. That was also superimposed by the fact that in the case of TU there was a large number of topics with a small number of participants, especially educators.

[5] There is a need to emphasize that students' role as participants in identifying their "learning needs", which can improve their outcomes and those of forthcoming batches, is more relevant than the role of their educators in identifying their "teaching needs", which can easily be subjective depending on each educator's interest and subspecialty.

[6] You need to check values as in table 2: row 1 indicates that (n=) for UPM is 25 and for TU-pharmacy is 3, but the following rows indicate that either the figure in row 1 is reversed or the details in the following rows are reversed between the two universities.

[7] In figure 2, I suggest using different shapes for the plotted values instead of colours, as printouts are usually made in BW, rendering the figure of less value.

[8] Check text carefully; there are several sentences containing doubled words and misspellings, e.g. lines 214, 249.

[9] Results, lines #177 to 178: the sentence states that fig 2 is for students and fell short of mentioning the educators also.

I suggest rewriting and resubmitting as a study of student learning needs assessment of previous students for future classes' RLOs. The needs of the students can give a more valid list than a list of the educators' choices of importance of the competencies. Meanwhile, the study should cover competencies in UM and UPM only (exclude factual knowledge of TU).

Reviewer #2: Very interesting idea - Does the difference in student level at the institutions play a role here? I am a bit unclear on your final conclusion. Please consider clarifying the take-away for the reader and perhaps increasing discussion a bit. Overall, very nicely done.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Ghanim Alsheikh

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jun 24;16(6):e0253471. doi: 10.1371/journal.pone.0253471.r002

Author response to Decision Letter 0


31 May 2021

Response to reviewers

Reviewer #1:

The work aim is valid and can be easily replicated in other settings. Nevertheless, there are many confusing points of concern in the manuscript that need to be explained and commented on and, if found necessary to improve the presentation, to make appropriate changes to improve the presented work.

[1] Why in case of TU, 2 different courses (pharmacy and Biomed) were separated in methods and tables but combined only in results? Was there any difference in results figures? Was this a reason to have educator choose completely different choices than students when combined results were presented?

Response:

Thank you for highlighting this discrepancy to us. Although it was our original intention to analyse the two courses separately, we did not include TU Biomed data in our final results because only one educator responded in Round 2, which made the comparison of responses between students and educators unattainable. Therefore, only TU Pharmacy results were reported for TU in Figure 2C and Figure 3C.

We have previously stated it under Methods, line 175-176 "The comparison between students and educators was not done for TU (Biomed) because there was only one educator who responded in Round 2.". We realised that this was not clear and we have revised the manuscript accordingly as follows, under Methods, 2nd paragraph, line 110-114 (Please refer to the manuscript with tracked changes for the line numbering):

"For TU, two courses (Pharmacy and Biomedical) participated in this modified Delphi survey. As the response rate from TU Biomedical was low, where only one educator responded in Round 2 (Round 1: 3 educators, 48 students; Round 2: 1 educator, 40 students), we decided to exclude TU Biomedical from the results and only reported the findings from TU Pharmacy (Supplementary Table S1)."

[2] In table S2 (B) and figure 3, it is scientifically unacceptable that 25/educator and 205/student agree (independently) to value 16 topics by only 2 values out of the 5-point Likert scale (4 by13/16 and 3 by 3/16)!! This has produced an odd "scatter" plot as shown in figure 3. What went wrong here?

Response:

Thank you for pointing out the error in Supplementary Table S2(B) and Figure 3, and we would like to sincerely apologise for the mistakes. The values were such because they had been rounded to whole numbers instead of being presented to two decimal places like those of the other institutions.

We have revised the values to the original two decimal points, and have amended Table S2(B) and figure 3 accordingly. The revised scatterplot shows the spread of the agreement. The revised results did not affect the interpretation and conclusion of this study.

[3] Results, line 152 reads: "topics were sorted in descending order (the most selected to least selected topics) separately for each of student and educator groups. The first quartile of most selected topics from the students and the first quartile of most selected topics from educators were included in Round 2 survey." OK, why item #2 (scoring 52/educator and 55.1/student) was included as a quartile in R2 and not item #23 (scoring 60/educator and 61/student); see (table S2 (B): Supplementary documents). How can the value of 52 and 55.1 be included in a quartile from descending values and in same table values of 60 and 61 (respectively) are not included in the same quartile?? No clue can be discovered.

Response:

Thank you for pointing out the discrepancy; we sincerely apologise for the mistake. For item #23, the score from the educators (n = 12) was 48% (not 60%, n = 15). The score for #23 from the students was correct, i.e. 61% (n = 125). Therefore, #23 was not in the first quartile of the educators' scores and hence was not included in Round 2. We have made the amendment in Supplementary material Table S2(B).

[4] Need to comment on the difference in results from TU in comparison with UM and UPM by referring to difference in nature of the lists of "topics". The lists used in UM and UPM cover number of "competencies' while those for TU covered merely listed subjects relating to purely factual knowledge as detailed in the supplemental materials (tables S2 A-D). Accordingly, one major result should emphasise that nature of topics can affect the results. That was also superimposed by the fact that in case of TU there was a large number of topics with a small number of participants especially educators.

Response:

Thank you for the suggestions. We have added the following points under Discussion, 4th paragraph, line 260-265 as shown below:

“Another possible explanation for differences in prioritisation pattern across different universities is the nature of the topics. For instance, TU’s topics were mainly on basic science (knowledge-based) while those of UM and UPM focused on clinical competencies (e.g. prescription and communication). In addition, TU had a large number of topics for selection but only a smaller number of participants, especially educators. This again might contribute to the difference in the prioritization process for TU.”

[5] There is a need to emphasise that students' role as a participant in identifying their "learning needs" which can improve their outcomes and those of forthcoming batches, are more relevant than role of their educators in identifying their "teaching needs" which can easily be subjective depending on each educator's interest and subspecialty.

Response:

We totally agree with the Reviewer and have added the points with supporting references under Discussion, 6th paragraph, line 278-398 to emphasise the relevance of students' role in identifying their own learning needs.

"Lack of knowledge and understanding among the faculty and educators could be a factor that contributes to an unmet expectation among the students. Hence, it is important to take the students' preference of topics into consideration when developing RLOs. Students have their own experiences what topics that are difficult to understand where RLO would be helpful; or what topics require more multimedia interaction to enhance their learning processes. While educators' opinions on the selection of topics for RLO development should be considered as they are the content and education experts, their preference and prioritisation may be influenced by their personal interest and perceptions of the students. Afshar et al. have highlighted how educator’s preferred teaching approach resulted in the loss of interest and reluctance in learning biochemistry among medical students, and how this can be addressed by taking into considerations of the students' learning needs (34). We would, therefore, like to argue that teaching approaches should prioritise students’ needs and preferences over those of the educators. In addition, students' need assessment should be the cornerstone in curriculum planning and development to allow educators in identifying topics, skills and knowledge that address learning needs (35). With needs assessment, learning becomes more relevant to their clinical practice, which is more likely to lead to a change in their practice (36)."

[6] You need to check values as in table 2, row 1 indicates that (n=) for UPM is 25 and for TU-pharmacy is 3 but the following rows indicate that either the figure in row 1 is reversed or details in following rows are reversed between the two universities.

Response:

We apologise for the mistake and have amended the values accordingly in Table 2.

[7] In figure 2, I suggest using different shapes for the plotted values instead of colours as usually printouts are made in BW rendering the figure to be of less value.

Response:

Thank you for your suggestion. We have changed the colours to different shapes and shading in Figure 2.

[8] Check text carefully, there are several sentences containing doubled words and misspellings e.g. lines 214, 249.

Response:

We have combed through the manuscript and edited all the doubled words accordingly.

[9] Results, line #177 to 178, the sentence states that fig 2 is for students and fell short to mention for educators also.

Response:

We have now added the word "educators" in the sentence.

I suggest rewriting and resubmitting as a study of student learning needs assessment of previous students for future classes' RLOs. The needs of the students can give a more valid list than a list of the educators' choices of importance of the competencies. Meanwhile, the study should cover competencies in UM and UPM only (exclude factual knowledge of TU).

Response:

Thank you for the suggestion to improve the manuscript.

Although we acknowledge that the nature of the topics (clinical competencies vs basic science knowledge) might affect the students' topic selections, this study was intended to be exploratory, and we hoped to compare the learning needs of the students with those perceived by the educators across different health disciplines. This, we hope, will identify gaps and findings for future research, particularly whether and how the nature of topics (and other factors) affects students' learning needs.

We address the limitation under Discussion section, last paragraph, line 331-333 as below:

"The nature of topics was different between institutions where UM's and UPM's topics were related to clinical competencies while TU Pham's topics were basic science knowledge. This might contribute to the discrepancy of results between institutions."

Reviewer #2:

Very interesting idea - Does the difference in student level at the institutions play a role here?

Response:

We agree with the reviewer that the difference in student level at the institutions might influence the results of this present study. We acknowledge this limitation under Discussion, last paragraph, lines 334-335, as below:

"The students in this study were at different stages of their undergraduate programmes; this might have affected their learning needs and experience. This factor was not further explored in this study."

I am a bit unclear on your final conclusion. Please consider clarifying the take-away for the reader and perhaps increasing discussion a bit. Overall, very nicely done.

Response:

We have refined the conclusion to clarify the take-away message for the readers, under conclusion, line 338-344 as below:

"This study showed the variations of opinions in topic selection between students and educators across institutions and health topics. Further research is needed to explore the factors influencing the discrepancy in students’ learning needs from the perspective of the students and educators. This study also highlighted the importance of conducting students' learning needs assessment before developing eLearning resources for effective implementation. Learning needs assessment should be the starting point when designing eLearning resources for healthcare curricula."

Attachment

Submitted filename: Response to reviewer_PlosOne_31.5.2021.docx

Decision Letter 1

Jenny Wilkinson

7 Jun 2021

Prioritising topics for developing e-learning resources in healthcare curricula: a comparison between students and educators using a modified Delphi survey.

PONE-D-20-36536R1

Dear Dr. Ng,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Jenny Wilkinson, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for your responses and manuscript revisions, these have satisfactorily addressed reviewer comments.

Reviewers' comments:

Acceptance letter

Jenny Wilkinson

16 Jun 2021

PONE-D-20-36536R1

Prioritising topics for developing e-learning resources in healthcare curricula: a comparison between students and educators using a modified Delphi survey.

Dear Dr. Ng:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr Jenny Wilkinson

Academic Editor

PLOS ONE
