Abstract
Medical ethics and law (MEL) have a well-established place in medical curricula within the UK, but appropriately assessing MEL in a medical school context can be extremely challenging. The Institute of Medical Ethics convened a working group focused on assessment in 2021, and in this article, we present a summary of the work undertaken by this group. We start by explaining the challenges presented by the assessment of MEL, highlighting the potentially demanding requirements set out by the General Medical Council in the UK. We then explore how MEL is currently assessed in UK medical schools. We go on to consider a number of different forms of assessment and their suitability for assessing ethics and law. Finally, we report the key recommendations from the working group and conclude that we are unconvinced that current approaches to assessing MEL are sufficient to robustly assess the General Medical Council’s learning outcomes.
Keywords: Education; Ethics, Medical
Introduction
It has long been stated by experts in medical ethics and law (MEL) that the subject should be an integral part of medical curricula,1 and its place is now well-established and accepted throughout UK medical schools. Within its Outcomes for Graduates,2 the General Medical Council (GMC) lists numerous learning outcomes directly related to MEL. The Medical Licensing Assessment, which will debut in 2025 and which graduates must pass in order to register to practise, also features MEL content. Despite the recognised importance and now longstanding inclusion of MEL in medical curricula, questions remain about how best to assess whether students are attaining MEL-related learning outcomes.
The Institute of Medical Ethics (IME) established a working group in 2021 to explore the assessment of MEL and to update its assessment guidance. This paper reports a summary of the work undertaken by this group and an overview of its main recommendations. We start by outlining the challenges of MEL assessment before discussing how UK medical schools currently assess MEL. Finally, we present a number of options for assessing MEL, briefly highlighting their merits and limitations before reporting the working group’s key recommendations.
Centrality of MEL
The importance of MEL education is cemented by the GMC, the organisation with statutory responsibility in the UK for, among other things, setting standards for medical education and training. Outcomes for Graduates2 specifies the learning outcomes that medical students must achieve by the end of their course and is set out in three sections, one of which is ‘Professional values and behaviours’. This section’s subsections include ‘Professional and ethical responsibilities’, ‘Legal responsibilities’ and ‘Dealing with complexity and uncertainty’. Learning outcomes are included under each subsection.
Alongside fundamental learning outcomes relating to, for example, confidentiality and consent (topics for which the GMC provides specific guidance), there are also arguably more complex and demanding outcomes. For instance, newly qualified doctors must be able to ‘summarise the current ethical dilemmas in medical science and healthcare practice; the ethical issues that can arise in everyday clinical decision-making; and apply ethical reasoning to situations which may be encountered in the first years after graduation’.2 To achieve this requires knowledge about ongoing ethical debates, the ability to recognise ethical issues in practice and possession of the knowledge and skills to undertake ethical reasoning. This sets high expectations for MEL, so it is crucial that teaching is appropriate to help students achieve these learning outcomes and that assessment is appropriate to determine whether students have achieved them. The variety of learning outcomes relevant to MEL increases the complexity here: some outcomes are knowledge-based, some are skills-based and others may be considered attitudinal. Different teaching and assessment approaches may be needed for each.
Outcomes for Graduates provides the overarching learning outcomes, but it is beyond that document’s scope to provide specific guidance regarding the content of MEL curricula or how that content should be taught and assessed. This is left to individual medical schools to determine, although the IME provides its own indicative Core Curriculum. There is a debate in the academic literature regarding what the overarching aims of MEL education should be,3 but the IME core curriculum’s aim is clearly articulated: ‘to equip students to identify ethical and legal issues in medical practice, have a critically reflective approach to those issues, and be able to give a reasoned justification of the actions they would take in line with the knowledge, attitudes and skills covered within the curriculum’.4 We used this aim to underpin our working group’s activities, and we think of it in terms of ‘helping students to become doctors who do the right things for the right reasons’. The question when it comes to assessment, then, is how medical schools can be sure that their students are thus equipped upon graduation. Assessments must be fair, reliable and valid, and must be carried out by someone with appropriate expertise in the area being assessed. The challenge is clear: the GMC sets a diverse range of MEL learning outcomes, and medical schools have to be sure that students are meeting them.
Learning, teaching and assessment
Although this paper focuses on assessment, this should be considered within the broader context of teaching MEL. Previous IME assessment guidance drew on Mattick and Bligh's 2006 publication5 to obtain a picture of MEL teaching and assessment, but this is now outdated. More recent research conducted by Brooks and Bell6 sought to evaluate undergraduate ethics curricula against the IME’s core curriculum recommendations. The number of participating medical schools was relatively low (11 out of 33 invited institutions), so the results offer only a partial snapshot, but they provide important insights. The average quantity of ethics teaching provided by each course was around 38 hours. This tended to be concentrated in the first 3 years of programmes, with an average of only 1.65 hours of formal ethics teaching in the final year. There were different approaches to structuring MEL curricula: 6/11 responding medical schools had a discrete ethics curriculum with specific MEL learning objectives, while 5/11 taught MEL alongside other topics rather than as a discrete module. 10/11 responding schools undertook a summative assessment of MEL, mostly in the format of an integrated written exam (9/11) and less commonly as a discrete written ethics examination (5/11). Objective structured clinical exams (OSCEs) (6/11) and work-based clinical assessments (5/11) were also often used. An issue highlighted by Mattick and Bligh previously, which still held true in Brooks and Bell’s study, was that at several medical schools (5/11 in Brooks and Bell), it was possible to fail MEL aspects of assessments yet still obtain a medical degree and become a doctor.
Developing guidance
As part of our working group’s activities, we undertook a consultation across UK medical schools, which included a survey, a workshop and a deliberation process. A questionnaire was sent to educators at 44 UK medical schools in the summer of 2021 to update our understanding of MEL education. This covered how MEL is assessed, the factors that lead to it being assessed in these ways, and what an ideal form of MEL assessment might look like. This provided an up-to-date snapshot of current MEL assessment practices and their justifications. Responses were received from 15 medical schools. The IME also hosted a national workshop in March 2022 attended by 43 participants, including representatives from 18 medical schools. IME members were invited via email to attend, and participants included ethics educators and assessors (including clinicians), as well as medical students. Representatives from the GMC and Medical Schools Council also attended as observers. The workshop’s aim was to explore reasoning behind MEL assessment decisions, to inform best practice guidance. A draft report was shared with attendees, which included the survey results. Small-group discussions were used to consider different approaches to assessment. Written notes were collated from groups after the workshop. The Assessment Working Group met multiple times prior to and after the workshop to plan the group’s activities, discuss the implications of the Medical Licensing Assessment and deliberate best practices for MEL assessment. The group finalised the internal Interim Report in June 2022. A narrative summary of the survey aspect of the consultation is provided in the next section. The recommendations presented at the end of this paper are based on the working group’s reflections on the expert views obtained via the consultation process, alongside available literature on MEL assessment.
MEL assessment consultation results in UK medical schools
How and when MEL is assessed
There was no consistency among respondents about how and when they assess MEL, but all respondents noted that they assess multiple times throughout the curriculum, rather than via a one-off assessment. Many respondents (12/15) reported using formative assessment to promote the learning of MEL. Every responding medical school uses single-best answer (SBA) multiple-choice questions (MCQs), and 13 of them also embed MEL assessment in OSCEs. Less frequently used assessment modalities included assessed presentations, essays, reflections and open-book assessments.
Why MEL is assessed in this way
Several respondents acknowledged that summative assessments emphasise the importance of the subject being assessed. The respondents also recognised that different subjects within MEL are suited to different assessment methods (eg, testing ethical reasoning and testing knowledge of law). Several respondents (6) highlighted how using a range of assessment methods is important to adequately address different types of learning outcomes. Six respondents noted that decisions regarding the format of MEL assessments were determined by the schools’ overarching assessment strategy and direction, suggesting that MEL educators may sometimes not have completely ‘free rein’ when deciding how to assess their subject.
Important aspects of MEL
While all areas of MEL were deemed important by our respondents, consent (7) and confidentiality (6) were mentioned most frequently. Four participants highlighted the importance of assessing ethical reasoning skills, and others alluded to similar ideas (eg, dealing with issues of consent and confidentiality in difficult situations). These areas all correspond to various GMC learning outcomes. In general, SBA MCQs were seen as a better choice for the legal aspects (where there are clearer correct/incorrect answers) than for ethical questions (which often require more provision for nuance, in both questions and answers). SBAs were noted as being good for assessing knowledge, as were OSCEs, but behaviour and sound reasoning were seen as context-dependent and often hard to assess. Around half of the respondents (7) reported that professionalism is taught and assessed alongside MEL. For the other half, professionalism is a separate module outside of the MEL programme.
Areas of challenge in developing MEL assessment
Marking assessments was highlighted as an area of difficulty, with three respondents noting that it can be burdensome for staff and that it can be difficult to assess behaviours and ethical reasoning. Two respondents also noted the difficulty in setting assessments, particularly when trying to apply questions to practice. One respondent noted that short answer questions (SAQs), even though they permit more depth than SBA MCQs, still lack sufficient depth to fully capture complex ethical reasoning. Different assessment formats were highlighted as being appropriate for assessing different outcomes but, perhaps unsurprisingly, all had their own challenges and downsides.
The remaining challenges for our working group
Respondents highlighted the importance of a consistent and robust approach to assessment and suggested that a programme of assessment should provide adequate coverage for the various types of MEL learning outcomes. It is clear that there are multiple constraints that may stand in the way of this. ‘Assessment burden’ on both staff and students is a significant factor: having multiple and varied assessments of MEL throughout a medical degree may be robust, but it may also place unreasonable demands on staff and students alike. Our recommendations must ultimately be a reasonable proposition, and what is reasonable will be dependent on individual institutions and their overarching assessment models. If, however, constraints prevent adequate assessment of MEL learning outcomes, then something clearly has to change.
There are inconsistencies in how MEL is taught: its coverage, the teaching methods and the timing of teaching. Further, different medical schools have different overarching approaches to assessment, meaning that there will be various additional constraints to contend with at local levels. The availability of staff to mark assessments, different balances of formative and summative assessment and varying levels of integration of MEL within the broader medical curriculum mean that a simple ‘one-size-fits-all’ set of recommendations is unlikely to be useful to many MEL educators. Our recommendations and guidance are therefore necessarily ‘higher level’. In the next section, we give a brief overview of some of the key assessment modalities, before summarising our overarching recommendations.
The key assessment options available
Multiple Choice Questions
MCQs are popular in medical education and are used to assess many curriculum areas. They allow standardisation and can effectively highlight, for students, areas where their knowledge is lacking. They are resource-light to mark, as marking is frequently computerised. Their utility in assessing complex issues has, however, been questioned. Although well-constructed MCQs can be a satisfactory approach for testing higher-order cognitive skills, it is extremely challenging to produce such high-quality items.7–9 A common problem with MCQs is the need for an unequivocal single best answer, which is not suited to ethical scenarios where there may arguably be multiple acceptable ways forward. Devising a range of plausible distractor answers that cannot themselves defensibly be construed as a ‘correct answer’ is challenging, particularly for questions focused on ethics. Fenwick contrasts the statistical reliability of MCQs with a potential loss of validity in the more subtle matters of ethical judgement,10 and Wong et al11 highlight the difficulty in presenting brief scenarios in a sufficiently realistic and multi-faceted way. Overall, MCQs can be well suited to assessing knowledge-based learning outcomes, but are less suitable for more nuanced matters of judgement or explanation.
Modified essay questions (MEQs) and Short Answer Questions
MEQs or SAQs linked to brief clinical scenarios may provide a means of assessing more complex issues and greater depth of understanding. Mitchell et al describe the use of MEQs in Newcastle, Australia.12 Their format was the step-by-step unfolding of a clinical case with no option to revise earlier decisions. They found their approach, like MCQs, was suited to assessing knowledge. Favia et al13 also used questions embedded within clinical scenarios and found that their structured approach was able to assess knowledge, application of knowledge and ethical reasoning. A potential shortcoming of SAQs and MEQs is that students may be tempted to use formulaic ‘safe’ answers rather than answers that demonstrate deeper understanding or capability. Answers taking this approach may demonstrate limited knowledge and understanding, yet nonetheless not be wrong and may therefore be awarded marks. SAQs in particular will often be allocated a relatively low number of marks, so there is limited granularity for distinguishing between good, mediocre and wrong answers. Marking of MEQs and SAQs takes a significant amount of time and needs to be undertaken by subject experts due to the need to exercise judgement on answers and potentially modify mark schemes to accommodate unanticipated answers that are nonetheless plausibly correct.
OSCEs
In addition to being expected to acquire, retain and apply knowledge of ethics, students are expected to acquire behavioural skills and professional attitudes.14 OSCE-type examinations were developed at the University of Dundee Medical School in 1977, and their use has become widespread. They are felt to be a reliable means of assessment at the ‘shows how’ level of Miller’s pyramid.15 Singer and colleagues proposed the use of ethics-specific OSCEs.16 On further evaluation, they found that while the initial results showed that the assessments had adequate construct validity and interrater reliability, internal consistency reliability was low. They concluded that the OSCE ‘is not a feasible stand-alone method for summative evaluation of clinical ethics.’17 As part of a suite of MEL assessments, OSCEs may still have a role to play: integrating ethical components into primarily clinical OSCE stations would provide an additional modality beyond written answers, and Campbell et al suggest that OSCEs may be one of the better ways of assessing competence,18 also highlighting their formative value.
Situational judgement tests (SJT)
Goss et al discuss the use of situational judgement tests in the assessment of professionalism among medical students. They developed three 40-item tests with the aim of testing both the students’ knowledge of principles and their ability to apply them appropriately. Each test consisted of 25 scenarios, each followed by five possible responses which students must rank from most to least appropriate, plus a further 15 scenarios each with eight possible responses from which they must choose the three most appropriate. They concluded that the validity and reliability were acceptable and that students felt the tests positively affected their learning. However, there were significant time burdens involved in developing the questions and in standard setting/marking, although they anticipated that this burden would be reduced in subsequent years.19
Foucault and colleagues have sought to develop the concept of the SJT in ethics further and have devised a concordance-of-judgement learning tool. They developed a number of validated clinical vignettes to illustrate professionalism issues. These were shown to an expert panel of clinicians who had been selected by the students as role models of professionalism. The panel’s answers were used as the basis for the model answer for the test. The test itself was administered via a web-based platform. This programme allowed the students to obtain immediate feedback, comparing their answers with the experts’ explanations. Participating students said that the assessment was easy to use and led to new learning.20 SJTs have been critiqued recently and have fallen out of favour in some contexts,21 but may have the potential to assess whether students are able to do ‘the right things’, although perhaps not for ‘the right reasons’.
Portfolios and peer evaluation
Other means of assessing students’ understanding of ethics include portfolios and peer review. O’Sullivan et al set out to evaluate whether portfolios help develop skills such as reflective practice and ethical judgement. They surveyed the students’ own perceptions of their performance of tasks included in the portfolio. Perhaps unsurprisingly, they found that the demand to produce reflective writing was associated with most students (63%) feeling they had experienced an improvement in reflective practice. The effect with regard to ethics was less marked (48%). There was, however, no direct assessment of the students’ understanding of ethical principles or their application.22
Emke and colleagues sought to evaluate the utility of paired self-assessment and peer evaluation of students’ professional behaviours. They identified two subgroups who may be at risk of future professionalism concerns: those who gave themselves a significantly higher score than they received from their peers and those who did not engage with the process. While this may help identify outliers who are a cause for concern, it does not provide a method of assessment for the majority.23
Simulation and serious games
New technologies allow novel methods of education and assessment. Computers can offer teaching via serious games (ie, games developed for a pedagogical purpose).24 Serious games have been used to teach and assess clinical reasoning.25 A number of groups have investigated the potential of using serious games to teach ethics,26–28 but so far they have not been used as an assessment tool. Gisondi and colleagues have used high-fidelity simulation with video monitoring, where ethical issues are incorporated into clinical scenarios. This may be enhanced further with the increasing availability of virtual reality technology. This general approach has again proven to be a resource-intensive means of assessment, but further technological developments, along with the increased power of artificial intelligence (noting, of course, the myriad ethical/practical issues that this brings), may make novel technology-assisted modes of assessment an avenue worth more detailed exploration.
Summary and recommendations
A one-size-fits-all approach to MEL assessment is unlikely to be appropriate. The discussion above highlights how different assessment methods suit different aspects of MEL. Moreover, differences in how the same underlying content is taught across curricula in different medical schools may necessitate different assessment methods. The forthcoming introduction of the Medical Licensing Assessment (MLA) will not provide a silver bullet here.29 The Applied Knowledge Test aspect of the MLA will be MCQ only, and this assessment’s blueprint states that there could be as few as two MEL questions included (out of 200 questions in total).30 MEL assessment outside of the MLA therefore remains a necessity. We suggest that a robust programme of MEL assessment should include the following:
A range of assessment modalities to address the diversity of learning outcomes. Methods of assessment should be appropriate for the specific learning outcomes being assessed, as well as the point in the course at which they are assessed. We would anticipate that this should include MCQs/SAQs and something more practically oriented such as OSCEs, but that this alone is unlikely to be sufficient. A single assessment modality will certainly not be sufficient to assess the full range of learning outcomes.
Robust assessment should be able to demonstrate when students do not meet GMC learning outcomes. Medical schools are expected to ensure graduates meet the GMC’s stated MEL outcomes, and therefore compensation across different areas of the curriculum should be limited; a student should not be able to make up for low achievement in MEL with high achievement in anatomical knowledge. This does not necessarily require separate MEL assessments and could be achieved by more granular consideration of performance in specific curriculum areas within written assessments (somewhat similar to domain marking in OSCEs). Appropriate measures should be in place to ensure that students cannot progress/graduate unless able to demonstrate that they have met the assessed learning outcomes.
If it is not possible to adequately address all MEL learning outcomes via formal summative assessment, staff supervising students on clinical placements should be encouraged to highlight issues with students who may be falling short of those criteria that require ‘demonstration’ or ‘acting’ in particular ways. Although clearly unethical or unprofessional behaviour may already be reported, having a lower threshold for raising concerns may better highlight students who are struggling to meet these learning outcomes and allow for timely remediation. Framing this as a supportive—rather than punitive—process, with clearly defined thresholds for raising concerns (linked to MEL learning outcomes) may encourage clinical staff to more readily flag students falling short of demonstrating achievement of MEL learning outcomes.
Based on our consultation with medical schools and published literature, we are unconvinced that the widely used approaches to MEL assessment are generally sufficient to robustly address the GMC’s learning outcomes. It was beyond the scope of our working group to develop novel methods of assessment, and further work needs to be undertaken to develop assessments that can robustly assess students’ ethical reasoning and behaviour. Recent and rapid developments in virtual reality and artificial intelligence provide a glimmer of what may be possible here but plainly require careful development and implementation. The key recommendation from our working group’s activities is, therefore, that further research is urgently needed.
Footnotes
Patient consent for publication: Not applicable.
Ethics approval: Not applicable.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data availability statement
Data are available upon reasonable request.
References
- 1. Consensus Statement by Teachers of Medical Ethics and Law in UK Medical Schools. Teaching medical ethics and law within medical education: a model for the UK core curriculum. J Med Ethics. 1998;24:188–92. doi: 10.1136/jme.24.3.188.
- 2. General Medical Council. Outcomes for graduates. 2018. Available: https://www.gmc-uk.org/-/media/documents/dc11326-outcomes-for-graduates-2018_pdf-75040796.pdf [accessed 6 Jun 2024].
- 3. Carrese JA, Malek J, Watson K, et al. The essential role of medical ethics education in achieving professionalism: the Romanell Report. Acad Med. 2015;90:744–52. doi: 10.1097/ACM.0000000000000715.
- 4. Institute of Medical Ethics. Core curriculum for undergraduate medical ethics and law. 2019. Available: https://ime-uk.org/wp-content/uploads/2020/10/IME_revised_ethics_and_law__curriculum_Learning_outcomes_2019.pdf [accessed 6 Jun 2024].
- 5. Mattick K, Bligh J. Teaching and assessing medical ethics: where are we now? J Med Ethics. 2006;32:181–5. doi: 10.1136/jme.2005.014597.
- 6. Brooks L, Bell D. Teaching, learning and assessment of medical ethics at the UK medical schools. J Med Ethics. 2017;43:606–12. doi: 10.1136/medethics-2015-103189.
- 7. Campbell DE. How to write good multiple-choice questions. J Paediatr Child Health. 2011;47:322–5. doi: 10.1111/j.1440-1754.2011.02115.x.
- 8. McCoubrie P. Improving the fairness of multiple-choice questions: a literature review. Med Teach. 2004;26:709–12. doi: 10.1080/01421590400013495.
- 9. Coughlin PA, Featherstone CR. How to write a high quality multiple choice question (MCQ): a guide for clinicians. Eur J Vasc Endovasc Surg. 2017;54:654–8. doi: 10.1016/j.ejvs.2017.07.012.
- 10. Fenwick A. Medical ethics and law: assessing the core curriculum. J Med Ethics. 2014;40:719–20. doi: 10.1136/medethics-2013-101329.
- 11. Wong MK, Hong DZH, Wu J, et al. A systematic scoping review of undergraduate medical ethics education programs from 1990 to 2020. Med Teach. 2022;44:167–86. doi: 10.1080/0142159X.2021.1970729.
- 12. Mitchell KR, Myser C, Kerridge IH. Assessing the clinical ethical competence of undergraduate medical students. J Med Ethics. 1993;19:230–6. doi: 10.1136/jme.19.4.230.
- 13. Favia A, Frank L, Gligorov N, et al. A model for the assessment of medical students’ competency in medical ethics. AJOB Prim Res. 2013;4:68–83. doi: 10.1080/21507716.2013.768308.
- 14. Wong J, Cheung E. Ethics assessment in medical students. Med Teach. 2003;25:5–8. doi: 10.1080/0142159021000061341.
- 15. Davis MH. OSCE: the Dundee experience. Med Teach. 2003;25:255–61. doi: 10.1080/0142159031000100292.
- 16. Singer PA, Cohen R, Robb A, et al. The ethics objective structured clinical examination. J Gen Intern Med. 1993;8:23–8. doi: 10.1007/BF02600289.
- 17. Singer PA, Robb A, Cohen R, et al. Evaluation of a multicenter ethics objective structured clinical examination. J Gen Intern Med. 1994;9:690–2. doi: 10.1007/BF02599011.
- 18. Campbell AV, Chin J, Voo TC. How can we know that ethics education produces ethical doctors? Med Teach. 2007;29:431–6. doi: 10.1080/01421590701504077.
- 19. Goss BD, Ryan AT, Waring J, et al. Beyond selection: the use of situational judgement tests in the teaching and assessment of professionalism. Acad Med. 2017;92:780–4. doi: 10.1097/ACM.0000000000001591.
- 20. Foucault A, Dubé S, Fernandez N, et al. Learning medical professionalism with the online concordance-of-judgment learning tool (CJLT): a pilot study. Med Teach. 2015;37:955–60. doi: 10.3109/0142159X.2014.970986.
- 21. Nabavi N. How appropriate is the situational judgment test in assessing future foundation doctors? BMJ. 2023;380:101. doi: 10.1136/bmj.p101.
- 22. O’Sullivan AJ, Howe AC, Miles S, et al. Does a summative portfolio foster the development of capabilities such as reflective practice and understanding ethics? An evaluation from two medical schools. Med Teach. 2012;34:e21–8. doi: 10.3109/0142159X.2012.638009.
- 23. Emke AR, Cheng S, Chen L, et al. A novel approach to assessing professionalism in preclinical medical students using multisource feedback through paired self- and peer evaluations. Teach Learn Med. 2017;29:402–10. doi: 10.1080/10401334.2017.1306446.
- 24. Gorbanev I, Agudelo-Londoño S, González RA, et al. A systematic review of serious games in medical education: quality of evidence and pedagogical strategy. Med Educ Online. 2018;23:1438718. doi: 10.1080/10872981.2018.1438718.
- 25. Dankbaar MEW, Alsma J, Jansen EEH, et al. An experimental study on the effects of a simulation game on students’ clinical cognitive skills and motivation. Adv Health Sci Educ Theory Pract. 2016;21:505–21. doi: 10.1007/s10459-015-9641-x.
- 26. Lorenzini C, Faita C, Carrozzino M, et al. VR-based serious game designed for medical ethics training. Augmented and Virtual Reality: Second International Conference, Proceedings 2; 2015.
- 27. Taylor N. Adaptive and emergent behaviour and complex systems. 23rd Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour; 2009.
- 28. Guo J, Singer N, Bastide R. Design of a serious game in training non-clinical skills for professionals in health care area. 2014 IEEE 3rd International Conference on Serious Games and Applications for Health (SeGAH); Rio de Janeiro, Brazil.
- 29. Deans Z, Moorlock G, Trimble M. The medical licensing assessment will fall short of determining whether a UK medical graduate behaves ethically. Br J Hosp Med. 2024;85:1–7. doi: 10.12968/hmed.2023.0370.
- 30. Medical Schools Council. Medical schools applied knowledge test (MS AKT) sampling grid. 2023. Available: https://www.medschools.ac.uk/media/3103/ms-akt-sampling-grid-updated-2023.pdf [accessed 15 Nov 2024].
