Abstract
Background
Oral examinations are a popular method of assessment within medicine, as they can measure candidates' ability to carry out tasks or develop skills (operational knowledge). One example is the oral examination for membership of the Royal College of General Practitioners (MRCGP), which is designed to assess candidates' decision-making skills and the professional values that underpin these decisions. While the reliability of oral examinations has been investigated, little is known, to date, about their ability to measure what they set out to measure (validity).
Aim
To investigate the content validity of the MRCGP oral examination, with particular focus on its ability to assess the process of decision-making.
Design of study
An evaluation of oral examination video recordings, using qualitative methods.
Method
The MRCGP oral examinations are video recorded as part of an ongoing quality assurance programme. Fifty of the recordings carried out in 2002 were selected randomly and analysed for content and dialogue patterns reflecting the assessment of the decision-making process.
Results
All examiners used the specified contexts outlined in the examination objectives to present candidates with dilemmas. The assessment of decision-making skills, however, was limited by a tendency among examiners to present the candidate with new, more complex dilemmas rather than giving them the opportunity to discuss the implications, make choices and, ultimately, justify their decision. Moreover, while examiners frequently asked candidates questions relating to professional values, they rarely asked them to demonstrate how those values support their decisions.
Conclusion
In order that the benefits of oral examination can be fully realised, questions need to be structured in a way that encourages candidates to discuss all stages of the decision-making process.
Keywords: decision making, MRCGP oral examination, reliability and validity
INTRODUCTION
Oral examinations have been used as a method of assessment for centuries,1 and they remain popular in medical education because they can closely resemble the dialogue between doctors and patients. Indeed, in a UK national survey of the assessment of undergraduate medical education, oral examinations were found to be the most popular form of assessment of clinical skills in year five.2 Oral assessments are also used within postgraduate medical certification, often forming a component of the Royal Colleges' Membership examinations. Moreover, the Postgraduate Medical Education Training Board (PMETB) plans to use oral examinations for ‘case-based discussions’ during the second foundation (F2) year.3 Case-based oral assessments are also used by the General Medical Council to evaluate doctors' performance.4
There has, however, been some controversy surrounding the reliability of oral examinations: some studies have found good inter-rater reliability,5,6 while others have demonstrated wide variations in the marks awarded,7 the key problem being their potential for subjectivity.8,9 Furthermore, it has been suggested that oral examinations are often inappropriately used to assess candidates' knowledge, which could be tested more effectively using written examination methods.10
Attempts to improve the reliability of oral examinations include increasing both the number of oral assessments and the number of examiners,11 and using structured question grids.12
Despite these potential weaknesses surrounding their reliability, oral examinations are considered to be the most suitable form of assessment for specific skills, such as clinical reasoning, that are inherently complex to examine.13 Moreover, it has been suggested that they contribute to the learning experience, providing students with an incentive to explore topics as well as an opportunity to interact with examiners.14 To date, however, there has been a scarcity of published work on the validity of the oral assessments used within medical education. In this paper, we examine the content validity of the MRCGP oral examination and consider our findings in relation to oral examinations in general.
The MRCGP oral examination
The MRCGP comprises four modules, each designed to test a different area of a candidate's performance.15 The oral module aspires to assess decision-making skills and the professional values underpinning these decisions. As stated in the RCGP oral examination handbook:
‘The examiners will be looking for evidence that your approach to decision making is coherent, rational, ethical and sensitive.’15
While the RCGP does not specify exactly what constitutes coherent, rational, ethical and sensitive decision making, there is a vast literature, largely from management disciplines, identifying how decisions are made.16-18 Decision making is generally defined as a process involving several stages, the starting point being the presentation of a dilemma.19 Following this, the decision maker should be able to identify various options, and by considering the implications of each option, make an appropriate choice. Having made a choice, the decision maker evaluates their choice, making adjustments where necessary, and ultimately, reflects on what has been learnt by the experience.
In order to assess these decision-making skills, examiners present candidates with pre-defined scenarios and ask them a series of questions relating to the scenario. A planning grid is used in the preparation of questions as this has been found to improve reliability.20 All examiners undergo initial training when joining the examination panel, as well as ongoing development based on the peer review of video-recorded examinations.21
METHOD
Seven per cent of the MRCGP oral examinations are video recorded as part of an ongoing quality assurance programme. An additional 3% of examinations were recorded for the purpose of this study, ensuring adequate data for analysis. In December 2002, a total of 131 examiners assessed 742 candidates, and 80 recordings were made. Fifty of these recordings were selected randomly from a box containing all the tapes. As can be seen in Table 1, the selected group of examiners was similar to the whole panel in sex, age, and length of time as an MRCGP oral examiner.
Table 1.
Comparison of sample with rest of panel of examiners.
| | Panel of oral examiners | Sample of examiners |
| --- | --- | --- |
| n | 131 | 50 |
| Sex (%) | | |
| Male | 99 (76%) | 39 (78%) |
| Female | 32 (24%) | 11 (22%) |
| Average age (years) | 47.3 | 47.2 |
| Average length of time on examination panel (years) | 6.7 | 7.2 |
How this fits in
Concerns have been raised about the reliability of oral examinations in general, and specific reliability issues have been shown to exist in the assessment of candidates taking the MRCGP oral examination. To date, little is known about the content validity of the MRCGP oral examination. Examiners assess candidates in the appropriate subject areas as defined in the examination regulations. In the assessment of decision-making skills, however, there is a tendency among examiners to present candidates with new, increasingly complex dilemmas rather than allowing them to discuss each of the stages of decision making. This forces the candidate to repeatedly describe the early stages of the decision-making process, leaving the later stages untested. For many candidates, the MRCGP oral examination can therefore only be said to partially test decision making.
Since examiners worked in pairs and were recorded for the whole day, the first examination on each tape was used, providing four questions (two from each examiner). As examiners tend to use the same questions throughout the day, we felt it unlikely that questions from the beginning of the tape would differ greatly from those later on. The selected portions of the videotape were then transcribed verbatim and provided the data for the study.
The data were coded and analysed for content and dialogue patterns, looking in particular at the sequential dialogue between the examiner and candidate, which reflected the decision-making process being assessed. Initially, codes were assigned to the transcripts, identifying the nature of the subject under question and the different aspects of decision making. Following this, the data relating to decision making were examined in more detail, looking for patterns in the dialogue. In particular, we wanted to see how decision-making skills were elicited by examiners and demonstrated by candidates. For example, many sections of the examiner dialogue were coded for ‘new dilemma posed’ and ‘increased complexity of original dilemma’. The candidate's response to this was then also coded to determine whether their response displayed decision-making skills. Following this, further analysis was carried out with reference to the literature on decision making to help clarify which aspects of the decision-making process were being assessed.
The coding frame was developed by a GP and examiner of the MRCGP oral examinations, and subsequently checked and modified by a social scientist. Initial analysis of the data was carried out to assess the explanatory value of the categories. Following this, both authors discussed and agreed the analytic framework.
Responder validation was used to appraise the accuracy of the analysis of individual transcripts. Ten examiners agreed to receive copies of their transcripts along with the coding and interpretation of their dialogue.
RESULTS
Topics being examined
Analysis of the data revealed that over 50 different topics were selected for assessment, each reflecting an area stipulated in the examination regulations (Table 2).
Table 2.
Examples of the topics covered in examiners' questions.
| Area of general practice | Topic in question | Example (opening question from examiner) |
| --- | --- | --- |
| Communication | Breaking bad news | ‘You receive a chest X-ray report showing a mass highly suspicious of a primary carcinoma. How are you going to communicate that to the patient?’ (Examiner 4.A) |
| Rationing | Viagra on the NHS | ‘A patient came to see me having lived in Spain for 18 months and asked if I could continue prescribing Viagra. What sort of issues does that raise?’ (Examiner 43.A) |
| Quality of care | Poorly performing doctor | ‘How should society identify poorly performing doctors?’ (Examiner 1.A) |
| Sick doctors | GP's role in helping a colleague | ‘Imagine that you are at reception and you overhear the receptionist saying, “I wonder why nobody ever asks to see Dr Smith?” What would you do?’ (Examiner 29.A) |
| Ethical dilemma | End of life decisions | ‘A patient with a terminal illness requests not to be resuscitated. What factors would you consider when deciding how to respond?’ (Examiner 21.A) |
Assessment of decision making
Our analysis showed that almost all examiners started their questions by presenting candidates with a dilemma, asking them to discuss what they would do if faced with the situation. A few examiners explicitly asked the candidate to identify the dilemma and, in doing so, encouraged them to discuss the possible options specifically. For example:
‘A consultant asks you to prescribe some cimetidine for the unlicensed use of wart treatment. What is the dilemma here?’ (Examiner 36.A.)
We found that, having asked the candidate about one dilemma, examiners tended to make the original case increasingly complex rather than moving their line of questioning through the process of decision making. Thus, having responded to the dilemma by outlining the possible options, the candidate was not then asked to explain why one option might be preferable to another.
In making the original case more complex, we found that examiners often presented the candidate with a further dilemma: one that rendered the previously stated options inappropriate. This is illustrated in Box 1, where the candidate largely responded to the increasingly complex dilemma by highlighting the options that they considered appropriate, providing new options each time the examiner added a further ‘layer’ to the case in question. At times, the examiner asked the candidate to make a choice, asking what they would do in the given situation. While the candidate stated their preferred option, they did not offer any justification for this selection or state any implications of the choice, nor were they asked to provide this more detailed information. Consequently, by the end of the series of questions, the candidate had not been able to demonstrate the full range of skills required in making decisions, but rather had been encouraged to reveal their knowledge of the possible options that a GP faces when presented with a variety of dilemmas and, at times, to make choices on the action that they would take. In other words, the candidate displayed only the early stages of the decision-making process.
Box 1. An example of an examiner (47.A) asking a series of questions that make a dilemma increasingly complex for the candidate.
Examiner: ‘I would like to ask you a question about doctors treating themselves. Is it reasonable for a GP to take treatment for indigestion?’
The candidate suggests that there are many different views about this issue and that there are no clear-cut answers. They suggest that there may be a good reason to self-treat for something like indigestion — for example, some doctors may be too busy at work to see their own GP for treatment or they may feel they would be wasting the GP's time by consulting for something like this.
Examiner: ‘Let's be more precise. A partner who is a 50-year-old man. You go into his room just as he swallows a Zoton tablet from a free sample he's got. What are your views on that?’
The candidate suggests that they would ask the GP about his general health.
Examiner: ‘He says “well I just went out for a curry last night and it's a bit of indigestion”.’
The candidate suggests that this kind of self-treatment would be reasonable. However, they also state that GPs should be cautious about self-treating as they may miss something important and this could have a knock-on effect for patient care.
Examiner: ‘How can it impact on our patients?’
The candidate suggests that there is a lack of objective assessment as to whether the GP is fit to continue to work.
Examiner: ‘Well suppose a few days later you find a result in the post, it's actually one of your partners who has got the practice nurse to send blood for Helicobacter serology and you see that result. What would you do about that?’
The candidate suggests that the GP has a right to confidentiality, but also probably expects the result to be seen as he has requested that a member of staff take the blood. This, they say, might indicate that the GP is worried about his health.
Examiner: ‘Would you take any action on that?’
The candidate suggests that they would act on the result, but that the way this was done would depend on their relationship and whether they knew the GP well enough to ask about his health and need for treatment.
Examiner: ‘So what if he says to you, “I am not registered with any other doctor”?’
The candidate states that they would advise the GP to register with someone.
Examiner: ‘What if he says, “well in the practice in the town and I fell out with them a few years ago”?’
The candidate says that they would enquire about why the GP has not registered with his own GP and would encourage him to do so.
Examiner: ‘OK would you take any further action other than doing that?’
The candidate suggests that they would not be in a position to take the matter any further.
Examiner: ‘What if you discover he is then taking antidepressants as well, that he was supplying himself?’
The candidate simply states that this is ‘a bit difficult’.
Examiner: ‘What would you do then?’
The candidate suggests that the GP's competency needs to be considered.
Examiner: ‘So what action do you think you'd take?’
The candidate states that they would speak to the GP about taking antidepressants.
Examiner: ‘And he refuses to cooperate?’
The candidate says that they would look for further help and support to deal with this situation.
Examiner: ‘What would you do?’
The candidate lists a number of possible options: places where they could find help, for example, another partner in the practice, the local medical committee (LMC), or the primary care trust (PCT).
In order to demonstrate the full range of decision-making skills, the candidate needs to be asked to discuss the implications of each of the options they outline and justify the choices that they make. This would indicate that they do not simply recognise that there are many different choices, but that they have the ability and knowledge required to weigh up one option against another and to use this information to make the most appropriate choice.
Although we found that the majority of examiners focused on the early stages of decision making, some were clearly very skilled at eliciting candidates' ability to make decisions, encouraging them to discuss how they weigh up the options and then make choices. In the following oral examination, the examiner (43.A) asks questions that encourage the candidate to demonstrate decision-making skills:
Examiner: ‘I would like to ask you a question about racialism. A patient makes a racist remark about one of your partners. What are your options?’
The candidate responds by confirming that action is required.
Examiner: ‘Tell me [what] the options are rather than what you would do.’
The candidate lists a number of options, from ignoring the comment to taking action by discussing the comment with the patient. The candidate also proposes several ways of approaching the patient.
Examiner: ‘What are the strengths and weaknesses of each of those options?’
The candidate goes through some of the strengths and weaknesses of the suggested options, but decides on the need to take action and address the patient.
Examiner: ‘Why would it be the best thing to do?’
The candidate defends their stance by suggesting that it is important to try and stamp out this behaviour before it gets any worse and perhaps becomes violent.
Examiner: ‘What are the wider implications of this? I agree with what you have said so far.’
The candidate discusses the importance of the doctor–patient relationship and the need to respect this relationship. Furthermore, they suggest it is important that doctors are respected by society.
Examiner: ‘Any other implications? What are the implications to the practice itself?’
The candidate suggests that it is important that action is taken otherwise the practice could be labelled as being racist.
As can be seen, the examiner focused on one dilemma — a racist remark made by a patient — and asked the candidate to outline the options. They then asked the candidate to explain the strengths and weaknesses of the options, encouraging them to consider the implications of each of the choices. The examiner then continued by asking the candidate to justify their choice and to consider the implications in a wider context. By taking the candidate through these stages of decision making, the examiner was able to adequately assess their skills in this area.
Professional values underpinning decision making
The tendency to remain at the level of knowledge assessment was also a feature of the questions addressing the professional values that underpin candidates' decisions in general practice. The data show that while examiners frequently asked questions relating to professional values, they rarely encouraged the candidate to demonstrate how these values support their decisions. Moreover, these questions were generally asked right at the end of the examination. For example, in a question about a patient repeatedly using the ‘out of hours service’, the examiner's final question to the candidate was:
‘Would you strike her off?’ (Examiner 46.A)
The candidate responds by suggesting that they would want to meet the patient to express their concerns about the misuse of the services.
Rather than the candidate being encouraged to discuss why they might want to express their concerns about misuse of the service, and what they would want to achieve from this, the examination came to an abrupt end. A few examiners did encourage the candidate to discuss how their professional values supported their decisions. This can be seen in the following series of questions, in which the examiner (18.A) encourages a candidate to demonstrate how their professional values support decision making.
Examiner: ‘In some countries it is not the GP, for example, who signs the sickness certification. What do you see as the pros and cons of GPs doing sickness certification for patients?’
The candidate suggests that the benefits of a GP signing sick certificates are that they will know the patients fairly well and will have an understanding of whether their current health problem is consistent with their medical history. The candidate emphasises the importance of having knowledge about the patient and how this allows the GP to assess whether a sick certificate is warranted. On the negative side, the candidate suggests that this very same situation of knowing the patient may make it difficult for the GP to deny the patient a sick certificate.
Examiner: ‘Because?’
The candidate suggests that the relationship that the GP has with the patient is important and that the GP may want to give the patient the benefit of the doubt and offer a sick certificate as a means of preserving the doctor–patient relationship.
Examiner: ‘Is there a conflict of interest there sometimes?’
The candidate agrees that there probably is.
Examiner: ‘Why is that?’
The candidate suggests that GPs may feel under pressure to sign a sick certificate even for fairly trivial reasons, which do not really justify one or two weeks off work.
In this oral examination the candidate was asked to explain why it might be difficult for the GP to deny the patient a sick certificate and whether there might be a conflict of interest. When the candidate did not provide an ‘in-depth’ answer, the examiner attempted to get them to justify their response. Had the candidate not been given this opportunity to justify their view, their final response might have inappropriately influenced the examiner's overall subjective impression.
Responder validation
Ten examiners were asked to comment on our analysis of their individual examination transcript, and all appeared to be in broad agreement with the findings. Many examiners also commented on the usefulness of seeing their dialogue transcribed, often reflecting on how they could improve their examination technique now that they had seen it in this way:
‘It was very interesting to see it played out in writing.’ (Examiner 22.)
‘What your transcript has taught me is not to increase the complexity of the scenario to make it more difficult, but to concentrate more on trying to elucidate the process of decision making.’ (Examiner 21.)
‘There is little decision making here except that no decision can be sensibly made without the ability to consider all the possible issues.’ (Examiner 34.)
The added (and unexpected) benefit of carrying out the responder validation, therefore, was that it appeared to provide a useful learning tool, encouraging examiners to reflect on the effectiveness of their examining technique and allowing them to identify areas for potential improvement.
DISCUSSION
Summary of main findings
From our qualitative evaluation of the MRCGP oral examination, we have been able to show that candidates are being assessed on a broad range of relevant clinical and professional topics, as stipulated in the examination regulations. The extent to which decision-making skills are assessed, however, tends to be limited by examiners increasing the complexity of the dilemma rather than exploring the full range of skills required to make appropriate decisions. Professional values were largely examined at the level of knowledge and comprehension, with few examiners encouraging candidates to justify their expressed viewpoint or allowing them to demonstrate how they might use these values to support their decision making.
These findings suggest that while the MRCGP oral examination is a valid measure of clinical knowledge, it is not a fully valid measure of decision-making skills or of the ability to use professional values when making decisions. It would appear, therefore, that this expensive form of assessment is not being used to its full potential.
Strengths and limitations of the study
One of the authors is an MRCGP examiner and, therefore, known to all of the examiners participating in the study. However, the use of video-recorded data has allowed us to observe what actually occurs within an oral examination rather than what examiners might report occurs. Moreover, the examiners are regularly video recorded during the examination process as part of the quality-assurance exercise. This familiarisation makes it less likely that examiners changed their performance while being monitored.
A further potential problem with the study is that only the first oral examination on each videotape, and therefore the first examination of the day, was analysed. Although examiners tend to use the same set of questions throughout the day, their behaviour may have changed as they ‘warmed up’, and their performance might look different if examined over a longer period. The use of responder validation, however, provided some assurance that the analysis was a fair reflection of the events occurring during the whole examination period. This validation exercise could have been improved had all of the examiners been involved.
Relationship to other work
Although there is a vast body of literature reporting on the reliability of assessments used in medical education, there is a scarcity of published work about the content validity of different assessment methods. Indeed, a systematic review of reliability and validity studies within postgraduate medical certification, carried out between 1985 and 2000, included 55 published papers in its analysis, of which just four investigated content validity and only one reported on the validity of oral examinations.22 One striking finding of this review was that the large majority of studies were from general practice or family medicine; the MRCGP was the only UK Royal College membership or fellowship examination for which reliability or validity measures were reported. It is important to recognise, therefore, that while this evaluation of the content validity of the MRCGP oral examination indicates areas for improvement, this assessment is likely to have already undergone greater quality assurance, and subsequent improvement, than postgraduate examinations within hospital specialties.
One possible way of improving the reliability and validity of oral examinations is through the use of more structured questions. Indeed, Objective Structured Clinical Examinations (OSCEs) are frequently used to assess UK undergraduate medical education.2 Although studies generally show OSCEs to be valid and reliable methods of assessment,23,24 a study of undergraduate medical student assessment found the use of a structured question grid during medical oral examinations to be of little value in improving reliability.12
While OSCEs have become very popular within medical education, being viewed as more reliable than traditional oral examinations, questions have been raised about their validity and, in particular, their tendency to measure factual clinical knowledge rather than the organisation of knowledge.25 Indeed, Mavis and Henry warn of the danger of being lured into a false sense of security surrounding the validity of OSCEs.26
Implications for educational practice
Our findings suggest that while examiners for the MRCGP oral examination are testing candidates' knowledge in appropriate areas of general practice, there is a need to provide further training in the assessment of decision making. We found some examples of good examination technique and suggest that these could be used for training purposes. The transcription and discussion of individual examination dialogue may also be a useful tool for improving the quality of this assessment method; although this is time-consuming, it may prove to be an excellent way of giving examiners feedback on their performance and an opportunity to reflect upon it. It is important to acknowledge that the MRCGP oral examination has already been, and continues to be, evaluated for its validity and reliability. Therefore, although we have identified further areas for improving validity, its ongoing quality assurance programme should be an example of good practice for others involved in oral assessments to follow.
Acknowledgments
This project would not have been feasible without the support of the MRCGP Examination Board, the MRCGP Examination Department, members of the Oral Development Group, and all the examiners on the panel of MRCGP examiners.
Ethics committee and consent
The Examination Board of the Royal College of General Practitioners approved this project on 14 October 2002. The RCGP Ethics Committee classified this study as an audit/quality assurance exercise. Although this did not require ethical approval, we wanted to ensure that the ethical principles of carrying out research were adhered to. We therefore obtained informed consent from all of the examiners and candidates. All candidates included in the study gave consent for their oral examination to be videotaped, and on the day of the examination all candidates were briefed verbally on the details of the project. In order to preserve anonymity surrounding a potentially sensitive area, we have summarised the candidates' dialogue when presenting our findings.
Competing interests
Robin Simpson is a member of the Oral Development Group of the MRCGP panel of examiners.
REFERENCES
1. Siker ES. A measure of competence. The first Mushin lecture. Anaesthesia. 1976;31:732–742. doi: 10.1111/j.1365-2044.1976.tb11863.x.
2. Fowell SL, Maudsley G, Maguire P, et al. Student assessment in undergraduate medical education in the United Kingdom. Med Educ. 2000;34(suppl 1):1–49. doi: 10.1046/j.1365-2923.2000.0340s1001.x.
3. NHS Website. Modernising Medical Careers. http://www.mmc.nhs.uk/assessment.asp?m=4 (accessed 20 Feb 2005).
4. Southgate L, Cox J, David T, Hatch D, et al. The General Medical Council's Performance Procedures: peer review of performance in the workplace. Med Educ. 2001;35(suppl 1):9–19. doi: 10.1046/j.1365-2923.2001.0350s1009.x.
5. Kearney RA, Puchalski SA, Yang HY, Skakun EN. The inter-rater and intra-rater reliability of the new Canadian oral examination format in anesthesia is fair to good. Can J Anaesth. 2002;49(3):232–236. doi: 10.1007/BF03020520.
6. Schubert A, Tetzlaff JE, Tan M, Rychman JV, Mascha E. Consistency, inter-rater reliability, and validity of 441 consecutive mock oral examinations in anaesthesiology. Anaesthesiology. 1999;91:288–298. doi: 10.1097/00000542-199907000-00037.
7. Weingarten MA, Polliack MR, Tabenkin H, Kahan E. Variations among examiners in family medicine residency board oral examinations. Med Educ. 2000;34(1):13–17. doi: 10.1046/j.1365-2923.2000.00408.x.
8. Evans LR, Ingersol RW, Smith EJ. The reliability, validity and taxonomic structure of the oral examination. J Med Educ. 1966;41:651–657. doi: 10.1097/00001888-196607000-00002.
9. Rowland-Morin PA, Burchard KW, Garb JL, Coe NPW. Influence of effective communication by surgery students on their oral examination scores. Acad Med. 1991;66:169–171. doi: 10.1097/00001888-199103000-00011.
10. Jayawickramarajah PT. Oral examinations in medical education. Med Educ. 1985;19:290–293. doi: 10.1111/j.1365-2923.1985.tb01323.x.
11. Daelmans HE, Scherpbier AJ, van der Vleuten CP, Donker AJ. Reliability of clinical oral examinations re-examined. Med Teach. 2001;23(4):422–424. doi: 10.1080/01421590120042973.
12. Olson LG, Coughlan J, Rolfe I, Hensley MJ. The effect of a structured question grid on the validity and perceived fairness of a medical long case assessment. Med Educ. 2000;34(1):46–52. doi: 10.1046/j.1365-2923.2000.00465.x.
13. Ryding HA, Murphy HJ. Employing oral examinations (viva voce) in assessing dental students' clinical reasoning skills. J Dent Educ. 1999;63(9):682–687.
14. Rangachari PK. The targeted oral. Adv Physiol Educ. 2004;28:213–214. doi: 10.1152/advan.00030.2004.
15. Royal College of General Practitioners. MRCGP Examination Regulations for 2005. http://www.rcgp.org.uk/exam/regulations/2005/regu11.asp (accessed 20 Feb 2005).
16. Garvin DA, Roberto MA. What you don't know about making decisions. Harv Bus Rev. 2001;79(8):108–116, 161.
17. Charan R. Conquering a culture of indecision. Harv Bus Rev. 2001;79(4):75–82, 168.
18. Hammond JS, Keeney RL, Raiffa H. The hidden traps in decision making. Clin Lab Manage Rev. 1999;13(1):39–47.
19. Collins Concise Dictionary and Thesaurus. 2nd edn. Glasgow: Collins; 1995. p. 231.
20. Wass V, Wakeford R, Neighbour R, van der Vleuten C. Achieving acceptable reliability in oral examinations: an analysis of the Royal College of General Practitioners membership examination's oral component. Med Educ. 2003;37(2):126–131. doi: 10.1046/j.1365-2923.2003.01417.x.
21. Wakeford R, Southgate L, Wass V. Improving oral examinations: selecting, training and monitoring examiners for the MRCGP. BMJ. 1995;311:931–935. doi: 10.1136/bmj.311.7010.931.
22. Hutchinson L, Aitken P, Hayes T. Are medical postgraduate certification processes valid? A systematic review of published evidence. Med Educ. 2002;36(1):73–91. doi: 10.1046/j.1365-2923.2002.01120.x.
23. Martin IG, Jolly B. Predictive validity and estimated cut score of an objective structured clinical examination (OSCE) used as an assessment of clinical skills at the end of the first clinical year. Med Educ. 2002;36(5):418–425. doi: 10.1046/j.1365-2923.2002.01207.x.
24. Townsend AH, McIlvenny S, Miller CJ, Dunn EV. The use of an objective structured clinical examination (OSCE) for formative and summative assessment in a general practice clinical attachment and its relationship to final medical school examination performance. Med Educ. 2001;35(9):841–846. doi: 10.1046/j.1365-2923.2001.00957.x.
25. Hodges B, McNaughton N, Regehr G, et al. The challenge of creating new OSCE measures to capture the characteristics of expertise. Med Educ. 2002;36:742–748. doi: 10.1046/j.1365-2923.2002.01203.x.
26. Mavis BE, Henry RC. Between a rock and a hard place: finding a place for the OSCE in medical education. Med Educ. 2002;36(5):408–409. doi: 10.1046/j.1365-2923.2002.01241.x.