The Ulster Medical Journal. 2011 May;80(2):62–67.

Selecting Tomorrow’s Doctors

Keith Steele 1
PMCID: PMC3229847  PMID: 22347744

Application to medical school is a competitive process. In 2009 there were 27,429 applications for approximately 8000 places in the UK, up 13.7 per cent on the previous year1. The cost of training a medical student is circa £200,000 but the cost of selecting the wrong applicant can be even greater2.

Traditionally medical schools have relied on performance in knowledge-based examinations for selection and, although these distinguish academically able applicants, it is often the failure to develop non-cognitive competencies such as motivation, empathy and the ability to communicate that leads to problems for doctors in their professional lives. If we accept that we want our doctors to have the non-cognitive skills to relay information to us as patients, as well as the cognitive skills to consider management and prognosis with us, then it follows that we should select applicants to medical school on both their cognitive and non-cognitive abilities. There is now worldwide support for this approach, both in the UK from the Medical Schools Council and the GMC and in the US from the Accreditation Council for Graduate Medical Education3.

Widening participation so that medical students are representative of the population they serve has assumed importance politically. In 2008 the four lower socioeconomic classes accounted for only 15 per cent of medical students in the UK4. Some schools seek to actively redress this by selecting on aptitude rather than achievement, the latter being related to socioeconomic factors and type of school attended.

Given the highly competitive nature of selection, which can be secretive and varies between universities, the Schwartz report into fair admission to higher education recommended the following five principles and two guidelines: 1) the selection process should be transparent, published and available online; 2) selection should consider both achievement and potential; 3) selection methods should be reliable, valid and informed by best practice; 4) the predictive validity of selection methods should be monitored; 5) staff should receive training in selection processes; 6) there should be feedback to unsuccessful candidates; 7) barriers to selection should be minimised, e.g. disability considered post selection5.

Given that attrition rates in medical school are low, it can be argued that the selection exercise assumes even greater importance. Instruments used for selection include personal statements, academic references, tests of previous academic performance, aptitude tests, personality tests, random selection and interviews. Not surprisingly, given the variety of instruments, there is considerable controversy surrounding their most effective use.

Methods used to select Medical students

Personal Statements and Academic References

Ferguson et al followed up the 1995 entry cohort at Nottingham Medical School over a five year period, using manifest coding to categorise applicants' personal statements and academic references6. They found that information in the academic reference did not predict academic performance, whereas there was a correlation between content in the personal statement and aspects of clinical performance. On the other hand, a recent paper describing selector practice at Barts and The London, where personal statements are used to screen for interview, concluded that the study confirmed the subjective nature and low reliability of this process7. Personal statements are also subject to plagiarism: UCAS has claimed that up to five per cent of personal statements amongst eight hundred applicants to Medicine contained material borrowed from three online example statements, and has recently introduced 'copycat' detection software to address this8.

In my experience of examining academic references over the last three years I have yet to find an unfavourable one.

Tests of previous academic performance

A-level results have been found to be a consistent predictor of academic performance in medical school in the UK6. At McMaster, grade point average has been found to predict both academic and clinical performance in their graduate entry programme. Unfortunately A-levels and GCSEs have become less useful because of grade inflation, as the vast majority of applicants achieve three A grades; over the past 20 years the proportion of A grades has risen from 9 to 27 per cent9. For this reason the A* grade, which is based on performance at A2 and is awarded to approximately the top ten per cent across all A-level subjects, has been introduced (70 per cent of the 2010 cohort accepted to the QUB Medical school on the basis of A-level performance had at least one A* and one student had five). Females perform better than males in both GCSEs and A-levels10. It has also been argued that A-levels are biased and that grades can be affected by the type of school attended. Widening participation was a priority of the Department of Health under its 2004 proposal to increase the number of training places for medicine in the UK11. For graduate entry the use of grade point average is fraught with difficulty because some degrees are seen to be easier than others and there is variability in the grades awarded by different universities12. Our own findings relating to the 2008 entry cohort to Medicine at Queen's University Belfast (QUB) show that the best predictor of performance in both first and second year examinations is GCSE performance (Cronbach's alpha 0.8).

McManus et al, in a longitudinal study of students entering Westminster Medical School, have shown that A-level grades have long term predictive value for undergraduate and postgraduate careers, including time taken to obtain membership. The authors argue that A-levels assess achievement and that past achievement affects future achievement13. James has shown that, for 1986-90 entrants to the undergraduate course at Nottingham Medical School, a high grade in A-level Chemistry predicted success in the BMedSci and a high grade in A-level Biology predicted success in the BMBS14.

Aptitude Tests

These tests are designed to assess the applicant's aptitude for medicine. They include the Graduate Medical School Admissions Test (GAMSAT), used both in Australia and for graduate entry in the Republic of Ireland, the Medical College Admission Test (MCAT) used in the US, the BioMedical Admissions Test (BMAT) and the UK Clinical Aptitude Test (UKCAT) used in the UK. The rationale behind these tests is that they should be free from bias, assess ability rather than achievement and help distinguish between candidates scoring at the GCSE/A-level ceiling. BMAT, GAMSAT and MSAT take various formats but essentially consider written communication, critical reasoning and problem solving, and have a scientific component. The UKCAT consists of four cognitive subtests, namely verbal reasoning, quantitative ability, abstract reasoning and decision analysis (all attributes felt to be important in medicine). It was first administered as an online test in 2006 by the UKCAT Consortium, comprising 26 of the 31 UK medical schools, through its agent Pearson VUE in testing centres throughout the world. Each cognitive subtest has a scale score ranging from 300 to 900, with the mean set to 600 using the 2006 reference cohort. Universities and candidates receive both the subtest scores and the total test score, which ranges from 1200 to 3600. A fifth test, currently administered for research purposes only, is a behavioural test intended to measure a number of non-cognitive competencies, including empathy, robustness and integrity: features associated with good doctors. Reliability scores of the cognitive subtests range from moderate to high. Scores showed a negative correlation with age; males performed better than females and the ethnic grouping black/British was the lowest performing group. The UKCAT Consortium claims it is a fair and reliable test15.

There have been several small studies on the predictive validity of the UKCAT. One from Dundee/Aberdeen and one from Nottingham show no predictive validity, whereas one from Newcastle does16-18. Our own findings at QUB for the 2008 entry cohort show no relationship between UKCAT score (both total and subtest scores) and performance in years 1 and 2 of the course. This may change in the later part of the course, where there is more emphasis on clinical reasoning and problem solving. All of these studies suffer from small numbers and problems with generalisability. The strength of the UKCAT project is that it now has four cohorts of applicants who have taken the test, accompanied by progression data from member medical schools. This is an enormous achievement with considerable potential, and it is hoped that as a result the predictive validity of the test will be determined.

There is mixed evidence internationally for the predictive validity of aptitude tests. McManus followed a Westminster cohort of medical students for 20 years: whilst A-levels were predictive of outcome, a standard IQ test was not13. Tests such as the MCAT in the US have shown predictive validity. Emery has shown predictive validity of the BMAT for preclinical examinations, but the study includes only one medical school and numbers are small19. Concerns have been raised about the UKCAT. A recent study from Scotland of first year medical students raised concerns about the cost and fairness of the test, the use of the data in selection processes by medical schools and the influence of test preparation20. UKCAT state that 6.5 per cent of their applicants receive bursaries and that their website offers two timed practice tests, which address concerns about candidates needing to pay commercial organisations for test preparation15. A recent study of A-level and UKCAT performance in applicants applying to UK medical and dental schools in 2006 found that UKCAT correlated moderately with A-level and that the total score did provide a useful proxy for A-levels in the selection process, although it showed a bias toward males and social class 1 applicants21. The definitive verdict on how much the UKCAT adds to the selection process should be available in the next few years and is keenly anticipated.

Personality testing

Ideally doctors should be safe practitioners who derive considerable satisfaction from their work. We know that introverted, neurotic doctors can burn out, although conversely the same traits can be associated with safe practice22. There is no consensus on the personality best suited to the practising physician. Doherty and Nugent have reviewed seven longitudinal studies which examined students' scores on valid personality tests and compared these with outcome measures around performance and stress. The studies come from the UK, Belgium, the US and Norway. Four of the studies looked at personality factors and academic success, one considered personality and clinical competence and two looked at the relationship between personality and stress. The authors concluded that conscientiousness predicts long term success in medical training, but also vulnerability to stress when it is accompanied by high levels of neuroticism and low levels of extraversion23. These findings are in keeping with job performance findings in other professions24.

Powis claims that the inclusion of personality tests in selection to Australian medical schools significantly adds to the ability to predict candidates who will perform well in the course25. The UKCAT includes psychological tests within its battery of non-cognitive tests, but these are currently administered for research purposes only and are not used in the selection process.

Random Selection

Dutch medical schools select sixty per cent of their intake using predetermined criteria and the remainder by national ballot. Applicants can apply three times. Dutch students are in favour of this approach as they argue that it redresses the power of the medical schools26. In 2003 Queen Mary's, London (previously referred to as Barts and The London) introduced a ballot for entry into its graduate entry programme. The negative publicity created by the ballot led to its withdrawal.

Interviews

Traditional interviews are used as part of the selection process in most medical schools in the UK, US and Australia. The process usually comprises two or three interviewers using either unstructured or semi-structured questions. Medical students, NHS staff, the public and academics have all been used as interviewers. Traditional interviews are subject to bias in terms of gender, age, race, halo effects, hawk/dove interviewer effects, a tendency to place greater weight on unfavourable information, and the effect of similarity or dissimilarity of interviewer to interviewee27. In addition the interview process can be affected by the impact of the previous candidate and by pre-interview information about the applicant. While the face validity of interviews is strong, Kreiter has shown that they have low to moderate reliability and has called into question the fairness of the interview in the selection process28. Harasym et al, in their paper on the reliability and validity of interviewers' judgements using simulated candidates, have shown that interviewer accuracy was only moderate (56 per cent) and questioned the validity and reliability of two person interviews29. The context of the interview can also affect outcome, i.e. an individual's ability to problem solve can vary with the scenario presented. Even though examiner reliability can be improved by training and the use of semi-structured questions, a single interview in a specific context may not provide a true assessment of the applicant's ability.

Similar arguments led to the development of the Objective Structured Clinical Examination (OSCE), which tests clinical skills using a multiple sample approach. Eva and his colleagues from McMaster coined the term Admissions OSCE or Multiple Mini Interview (MMI) in response to concerns about the reliability of interviews30. This consists of multiple stations, each with a different assessor, set in different (non-clinical) contexts and designed to test predetermined non-cognitive competencies held to be important for a medical career. The fresh start at each station dilutes the effect of hawk/dove assessors and allows independent assessment in multiple situations. The original paper tested the domains of critical thinking, ethical decision making, communication skills and knowledge of the healthcare system, and reported a reliability coefficient (Cronbach's alpha) of 0.65, much higher than that associated with traditional interviews. Axelson and Kreiter have shown that increasing the number of interview stations adds more to reliability than increasing the number of interviewers per station31. Reiter et al have looked at violations of test security: they describe several studies where MMI station stems were randomly allocated to some groups of applicants, and this did not influence applicants' performance ranking32. Rosenfeld, in another study from McMaster, concluded that although MMIs require greater preparatory effort and an assessment centre, they require fewer person hours and have cost advantages over traditional interviews33. Humphrey et al used MMIs to recruit paediatric SHOs in Warwickshire; both candidates and interviewers agreed that the process was a fair and acceptable tool for selection into UK specialist training34. Eva has shown that, for Canadian students, the correlation between the admissions MMI and the number of stations passed in the OSCE component of the Canadian qualifying examination required for licensure was r = 0.43 (p < 0.05). The MMI was also a better predictor than the other selection instruments used at McMaster35.
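For readers unfamiliar with the reliability coefficient quoted above, the short sketch below shows how Cronbach's alpha can be computed from a matrix of candidate-by-station MMI scores. The station scores are invented purely for illustration and are not drawn from any of the studies cited.

import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x stations) score matrix.

    alpha = k/(k-1) * (1 - sum of station variances / variance of candidate totals)
    """
    k = scores.shape[1]                         # number of stations
    station_var = scores.var(axis=0, ddof=1)    # variance of each station's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of candidates' total scores
    return (k / (k - 1)) * (1 - station_var.sum() / total_var)

# Illustrative only: six candidates scored (0-10) at each of four MMI stations.
mmi_scores = np.array([
    [7, 6, 8, 7],
    [5, 6, 5, 6],
    [9, 8, 9, 8],
    [4, 5, 4, 6],
    [8, 7, 7, 8],
    [6, 6, 7, 5],
])
print(round(cronbachs_alpha(mmi_scores), 2))

A higher alpha indicates that the stations rank candidates consistently; adding stations (rather than assessors per station) is the main way to raise it, as Axelson and Kreiter report.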

Graduate v Undergraduate Entry to Medical School

In North America medical school programmes are almost exclusively four year graduate entry programmes (GEPs), whereas in Australia and the UK there is a mixture of GEPs and undergraduate programmes to which graduates can be admitted. James, in his retrospective study of predictors of success in the undergraduate course at Nottingham Medical School 1970-95, showed that mature or graduate entrants were more successful in obtaining a first class BMedSci degree but were less successful in passing the BMBS14. A study comparing the academic performance of graduate and undergraduate entry medical students completing the same preclinical curriculum at the University of Melbourne showed that graduate students performed better in all four bioscience assessments and also on early clinical skill assessments36. This study reflects our own findings at QUB for the 2008 entry cohort: although our number of graduates over the first two years of the course is small, the graduates performed better in all tests, and significantly so in the Year 1 bioscience examinations and across all Year 1 and 2 Objective Structured Clinical Examinations (Table 1).

Table 1.

Graduate v undergraduate performance in examinations for Years 1 and 2 of the medical course at QUB (2008 entry)

Examination                              Graduate   N     Mean    Std. deviation   Std. error mean   P value
Bioscience exams Year 1                  No         220   68.51   7.12             0.48              0.03
                                         Yes        15    72.58   5.33             1.38
Bioscience exams Year 2                  No         219   70.89   8.16             0.55              0.18
                                         Yes        15    73.79   7.24             1.87
Three clinical exams (OSCE) Years 1-2    No         219   74.5    7.42             0.50              0.03
                                         Yes        15    78.82   6.95             1.80

Arguments for graduate entry include higher motivation, widening diversity, faster production of doctors and proven ability in an academic tertiary environment. School leavers, on the other hand, are close to the academic ceiling and have good study skills. Although there is evidence that graduate entry widens diversity, there is no evidence that graduates make better doctors37.

In March 2007 the QUB/DHSSPS Strategic Group met to discuss graduate entry to Medicine at QUB38. A separate graduate programme was rejected because of cost and the unsustainability of two courses competing side by side for placements and resources. It was felt that instead there should be multiple entry points into the medical course, and a strategy was developed to double the percentage of graduates from 6 per cent in 2007 to 12 per cent. It was felt that this mix of students, some of whom had already completed a degree programme, would bring diversity to the student experience, given the mature approach of graduates and their different life experiences when compared with school leavers.

Selection for Medicine (2012 entry) at QUB

There is considerable variation in the selection tools used by medical schools in the UK. Most schools interview, but criteria for shortlisting for interview vary and include evidence of previous academic ability, performance in aptitude tests, predicted performance at A-level, scoring of personal statements or a combination of the above39.

Relaying bad news to a cancer patient requires both communication skills and empathy, along with cognitive knowledge of the management options and prognosis. The desired endpoint is not a bookworm or a butterfly but a well rounded doctor who exhibits both cognitive and non-cognitive competencies35. This concept, along with the best evidence on selection from the literature and our own research findings at QUB on the predictive validity of our selection tools, has shaped a change in our processes for 2012 entry.

Alongside this has been our strategy to internationalise the school and to encourage graduate entry to medicine. Our 2010 entry cohort includes 17.6 per cent graduate entrants, 16 per cent from GB and the ROI and 5 per cent international students. Fifty-four per cent of the 2010 intake were female, and 20.8 per cent of our 2009 entry cohort were from socio-economic groups 4 and 5, higher than most other schools4. The Department of Health (NI) has recently increased our international numbers from 12 to 26 for 2011 entry, in keeping with the proportion of international students in other UK medical schools. We are currently actively recruiting both students and staff from South East Asia and North America.

For 2011 entry we had more than 850 applicants competing for 236 EU and 26 international places. There are a number of entry pathways into the medical school, including Y14, post-A-level and graduate entry.

It has been agreed by both the School and the University that for 2012 entry there will be a two stage admissions process. In keeping with best practice this process is transparent, has been published and is available online40, and the predictive validity and reliability of our selection instruments are monitored in keeping with the Schwartz guidance5. The first stage recognises past academic achievement, in keeping with evidence from the literature, along with the importance of aptitude tests (UKCAT); we do not exclude applicants on the basis of aptitude tests alone. In stage one, which is cognitive, applicants will be scored and ranked as follows:

For Y14 applicants the best nine GCSEs will be considered, with 4 marks for an A* and 3 for an A, to a maximum of 36 points.

For graduates holding a 2:1 Honours degree or better (or predicted to achieve the same) and who hold three B grades at first attempt (ABB from 2013 entry) in the specific A-level subject requirements, 36 points will be allocated.

For post-A-level applicants who already have three A grades at A-level and an A at AS-level, 36 points will be allocated.

For ROI applicants the best nine Junior Certificate (Intermediate) results are considered, with 4 marks for an A and 2 marks for a B, to a maximum of 36 points.

All applicants will take the UKCAT in the year of entry and their overall score will attract up to six points; the distribution of total UKCAT scores for our 2011 entry cohort is shown in Table 2. This score will be added to their knowledge based score and all applicants ranked (a worked sketch of this scoring and ranking is given after Table 2). Approximately the top 500 applicants will then be considered under stage 2 of the selection process, which consists of a nine station multiple mini interview to determine non-cognitive performance. Multiple Mini Interviews are used to test non-cognitive competence in keeping with the best evidence available from the literature, and the applicant's personal statement is considered within this process. The non-cognitive competencies tested have been determined by both the public and Faculty using a Delphi technique and have been published40.

Table 2.

UKCAT banding scores of EU applicants applying for Medicine at QUB in 2011

Band score   Scoring range   Medicine banding total   2011 applications (%)
0            1200 - 1899     6                        1
1            1900 - 2099     22                       3
2            2100 - 2299     87                       11
3            2300 - 2499     198                      24
4            2500 - 2699     259                      33
5            2700 - 2899     160                      20
6            2900 - 3600     63                       8
Total                        795                      100
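The stage one ranking described above reduces to simple arithmetic: a knowledge based score out of 36 plus up to 6 UKCAT points. The short Python sketch below illustrates it for Y14 applicants. The function and field names, the example applicants and the mapping of UKCAT band to points are invented for illustration only (the text states only that the overall UKCAT score attracts up to six points); this is not the School's actual implementation.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    gcse_a_star: int   # count of A* grades among the best nine GCSEs
    gcse_a: int        # count of A grades among the best nine GCSEs
    ukcat_points: int  # 0-6 points derived from the total UKCAT score (mapping assumed)

def stage_one_score(a: Applicant) -> int:
    """Cognitive stage one score: best nine GCSEs (max 36 points) plus up to 6 UKCAT points."""
    gcse_points = min(4 * a.gcse_a_star + 3 * a.gcse_a, 36)
    return gcse_points + a.ukcat_points

# Rank all applicants and take roughly the top 500 forward to the MMI stage.
applicants = [
    Applicant("A", gcse_a_star=9, gcse_a=0, ukcat_points=5),
    Applicant("B", gcse_a_star=6, gcse_a=3, ukcat_points=6),
    Applicant("C", gcse_a_star=4, gcse_a=5, ukcat_points=3),
]
ranked = sorted(applicants, key=stage_one_score, reverse=True)
shortlist = ranked[:500]
for a in shortlist:
    print(a.name, stage_one_score(a))

In practice the knowledge based component differs by entry pathway (Y14, graduate, post-A-level, ROI) as set out above, but the additive structure and the subsequent ranking are the same.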

Final decisions about whether or not to make an offer will be made on the basis of the interview ranking alone (i.e. stage 2 results) and not in combination with other factors. During the 2011 entry process approximately 200 applicants took our MMI. Each station lasted five minutes and the examination was blueprinted to test for motivation, communication, empathy, problem solving, integrity and ethical reasoning. One third of the stations involved role-players and the others were semi-structured interviews. Prior to the interviews all assessors were trained, and all participated in a standardisation process on the day of the assessment. For our 2011 MMIs the Cronbach's alpha was 0.56 and there was a Gaussian distribution of candidates' marks from 30 to 85 per cent. The MMIs were standard set using the borderline regression method and offers were made to applicants who reached the cut score determined by the panel of assessors. The School is currently actively recruiting assessors; these positions are open to clinical academic, non-clinical academic and NHS staff. The MMI process is a considerable challenge for us and will require 120 days of assessor time per annum.
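The borderline regression method referred to above sets a station's cut score by regressing candidates' station marks on the assessors' global ratings and reading off the mark predicted for a "borderline" rating. The sketch below uses invented marks and ratings to illustrate the idea; it is not the School's actual standard setting implementation.

import numpy as np

# Invented data: for one MMI station, each candidate has a checklist mark (%)
# and an assessor's global rating (1 = fail, 2 = borderline, 3 = pass, 4 = good).
marks = np.array([42.0, 55.0, 48.0, 63.0, 71.0, 58.0, 80.0, 66.0])
global_ratings = np.array([1, 2, 2, 3, 3, 2, 4, 3])

# Fit a straight line mark = slope * rating + intercept by least squares.
slope, intercept = np.polyfit(global_ratings, marks, deg=1)

# The cut score is the mark the regression predicts for a "borderline" rating (2).
borderline_rating = 2
cut_score = slope * borderline_rating + intercept
print(f"Station cut score: {cut_score:.1f}%")

Station cut scores derived in this way are typically summed across stations to give an overall pass mark, so every assessor's global judgement contributes to the standard.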

While some of our admissions instruments will favour certain groups, we try to achieve an overall balance and are currently monitoring all of our selection tools to ensure equality.

Further details regarding the admissions process for Medicine at QUB, along with video clips of MMIs, are available at www.qub.ac.uk/schools/mdbs/medical/Prospectivestudents.

We feel we now have a selection process which meets GMC recommendations: it is transparent and objective, uses a variety of selection tools which are constantly monitored in keeping with best practice, and considers both cognitive and non-cognitive factors. We hope this approach will widen participation compared with the previous approach, which was largely cognitive and relied on selection using very narrow parameters. Our approach also has the advantage that NHS colleagues, the wider public and academic staff will all have an input into selecting tomorrow's doctors.

Conflict of Interest

The author is a Director on the UKCAT Board and a member of the UKCAT Development and Research groups.

Acknowledgments

I would like to acknowledge Mr Mike Stevenson, Senior Lecturer in Statistics, QUB, for his statistical input into this article.

References

1. BBC News. Available online from: http://news.bbc.co.uk/1/hi/education/833824.stm.
2. Brown C, Lilford RJ. Selecting medical students. Brit Med J. 2008;336(7648):786. doi: 10.1136/bmj.39517.679977.80.
3. General Medical Council. Tomorrow's doctors: outcomes and standards for undergraduate medical education. 2nd ed. London: General Medical Council; 2009.
4. BMA Equal Opportunities Committee. Equality and diversity in UK medical schools. London: British Medical Association; October 2009. Available online from: http://www.bma.org.uk/equality_diversity/age/equalityanddiversityinukmedschools.jsp. Last accessed April 2011.
5. Schwartz S. Fair admissions to higher education: recommendations for good practice. London: Admissions to Higher Education; 2004. Available online from: http://www.admissions-review.org.uk/downloads/finalreport.pdf. Last accessed April 2011.
6. Ferguson E, James D, O'Hehir F, Sanders A, McManus IC. Pilot study of the roles of personality, references and personal statements in relation to performance over the five years of a medical degree. Brit Med J. 2003;326(7386):429-32. doi: 10.1136/bmj.326.7386.429.
7. Turner R, Nicholson S. Reasons selectors give for accepting and rejecting medical applications before interview. Med Educ. 2011;45(3):298-307. doi: 10.1111/j.1365-2923.2010.03874.x.
8. UCAS. UCAS research shows few applicants pay to plagiarise. Cheltenham, UK: UCAS; 2007. Available online from: http://www.ucas.ac.uk/about_us/media_enquiries/media_releases/2007/2007-03-07. Last accessed April 2011.
9. A-level results: grade inflation is just a cruel confidence trick. The Telegraph. Comment. 2009 Aug 20. Available online from: http://www.telegraph.co.uk/comment/6063012/A-level-results-grade-inflation-is-just-a-cruel-confidence-trick.html. Last accessed April 2011.
10. http://news.bbc.co.uk/1/shared/bsp/hi/education/10/examresults/gcse/fc/html/allsubjects.stm and http://news.bbc.co.uk/1/hi/education/892803.stm.
11. Department of Health. Medical schools: delivering the doctors of the future. London: Department of Health; 2004.
12. Didier T, Kreiter CD, Buri R, Solow C. Investigating the utility of a GPA institutional adjustment index. Adv Health Sci Educ. 2006;11(2):145-53. doi: 10.1007/s10459-005-0390-0.
13. McManus IC, Smithers E, Partridge P, Keeling A, Fleming PR. A levels and intelligence as predictors of medical careers in UK doctors: 20 year prospective study. Brit Med J. 2003;327(7407):139-42. doi: 10.1136/bmj.327.7407.139.
14. James D, Chilvers D. Academic and non-academic predictors of success on the Nottingham undergraduate medical course 1970-1995. Med Educ. 2001;35(11):1056-64. doi: 10.1046/j.1365-2923.2001.01042.x.
15. UK Clinical Aptitude Test. UKCAT 2009/10 annual report. Nottingham: UKCAT; 2010. Available online from: http://www.ukcat.ac.uk/pdf/Annual%20report%202009-10.pdf. Last accessed April 2011.
16. Lynch B, MacKenzie R, Dowell J, Cleland J, Prescott G. Does the UKCAT predict year 1 performance in medical school? Med Educ. 2009;43(12):1203-9. doi: 10.1111/j.1365-2923.2009.03535.x.
17. Yates J, James D. The value of the UK Clinical Aptitude Test in predicting pre-clinical performance: a prospective cohort at Nottingham Medical School. BMC Med Educ. 2010;10:55. doi: 10.1186/1472-6920-10-55.
18. Wright SR, Bradley PM. Has the UK Clinical Aptitude Test improved medical student selection? Med Educ. 2010;44(11):1069-76. doi: 10.1111/j.1365-2923.2010.03792.x.
19. Emery JL, Bell JF. The predictive validity of the BioMedical Admissions Test for pre-clinical examination performance. Med Educ. 2009;43(6):557-64. doi: 10.1111/j.1365-2923.2009.03367.x.
20. Cleland JA, French FH, Johnston PW, on behalf of the Scottish Medical Careers Cohort Study Group. A mixed-methods study identifying and exploring medical students' views of the UKCAT. Med Teach. 2011;33(3):244-9. doi: 10.3109/0142159X.2011.557753.
21. James D, Yates J, Nicholson S. Comparison of A-level and UKCAT performance in students applying to UK medical and dental schools in 2006: cohort study. Brit Med J. 2010;340:c478. doi: 10.1136/bmj.c478.
22. McManus IC, Keeling A, Paice E. Stress, burnout and doctors' attitudes to work are determined by personality and learning style: a twelve year longitudinal study of UK medical graduates. BMC Med. 2004;2:29. doi: 10.1186/1741-7015-2-29.
23. Doherty EM, Nugent E. Personality factors and medical training: a review of the literature. Med Educ. 2011;45(2):132-40. doi: 10.1111/j.1365-2923.2010.03760.x.
24. Demerouti E. Job characteristics, flow and performance: the moderating role of conscientiousness. J Occup Health Psychol. 2006;11(3):266-80. doi: 10.1037/1076-8998.11.3.266.
25. Powis D. Selecting medical students. Med J Aust. 2008;188(6):323-4. doi: 10.5694/j.1326-5377.2008.tb01644.x.
26. Coebergh J. Dutch medical schools abandon selection for lottery system for places. Student Brit Med J. 2003;11(136):74. Available online from: http://archive.student.bmj.com/issues/03/05/news/138a.php. Last accessed April 2011.
27. Edwards JC, Johnston EK, Molidor JB. The interview in the admissions process. Acad Med. 1990;65(3):167-77. doi: 10.1097/00001888-199003000-00008.
28. Kreiter CD, Yin P, Solow CM, Brennan RL. Investigating the reliability of the medical school admissions interview. Adv Health Sci Educ Theory Pract. 2004;9(2):147-59. doi: 10.1023/B:AHSE.0000027464.22411.0f.
29. Harasym PH, Woloschuk W, Mandin H, Brundin-Mather R. Reliability and validity of interviewers' judgements of medical school candidates. Acad Med. 1996;71(1 Suppl):S40-2. doi: 10.1097/00001888-199601000-00038.
30. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ. 2004;38(3):314-26. doi: 10.1046/j.1365-2923.2004.01776.x.
31. Axelson RD, Kreiter CD. Rater and occasion impacts on the reliability of pre-admission assessments. Med Educ. 2009;43(12):1198-202. doi: 10.1111/j.1365-2923.2009.03537.x.
32. Reiter HI, Salvatori P, Rosenfeld J, Trinh K, Eva KW. The effect of defined violations of test security on admissions outcomes using multiple mini-interviews. Med Educ. 2006;40(1):36-42. doi: 10.1111/j.1365-2929.2005.02348.x.
33. Rosenfeld JM, Reiter HI, Trinh K, Eva KW. The cost efficiency comparison between the multiple mini-interviews and traditional admissions interviews. Adv Health Sci Educ. 2008;13(1):43-58. doi: 10.1007/s10459-006-9029-z.
34. Humphrey S, Dowson S, Wall D, Diwakar V, Goodyear H. Multiple mini-interviews: opinions of candidates and interviewers. Med Educ. 2008;42(2):207-13. doi: 10.1111/j.1365-2923.2007.02972.x.
35. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ. 2007;41(4):378-84. doi: 10.1111/j.1365-2929.2007.02709.x.
36. Dodds A, Reid KJ, Conn JJ, Elliott SL, McColl GJ. Comparing the academic performance of graduate and undergraduate entry medical students. Med Educ. 2010;44(2):197-204. doi: 10.1111/j.1365-2923.2009.03559.x.
37. James D, Ferguson E, Powis D, Symonds I, Yates J. Graduate entry to medicine: widening academic and socio-economic access. Med Educ. 2008;42(3):294-300. doi: 10.1111/j.1365-2923.2008.03006.x.
38. Minutes of the QUB/DHSSPS Strategic Group, 5 March 2007.
39. Parry J, Mathers J, Stevens A, Parsons A, Lilford R, Spurgeon P, Thomas H. Admissions processes for five year medical courses at English schools: review. Brit Med J. 2006;332(7548):1005-9. doi: 10.1136/bmj.38768.590174.55.
40. www.qub.ac.uk/schools/mdbs/medical/Prospectivestudents.

