BMJ. 2005 Sep 10;331(7516):555–559. doi: 10.1136/bmj.331.7516.555

Intellectual aptitude tests and A levels for selecting UK school leaver entrants for medical school

I C McManus 1, David A Powis 2, Richard Wakeford 3, Eamonn Ferguson 4, David James 5, Peter Richards 6
PMCID: PMC1200591  PMID: 16150766

Short abstract

An extension of A level grades is the most promising alternative to intellectual aptitude tests for selecting students for medical school


How to make the selection of medical students effective, fair, and open has been contentious for many years.w1 A levels are a major component of selection for entry of school leavers into UK universities and medical schools,w2 but intellectual aptitude tests for the selection of medical students are burgeoning—they include the Oxford medicine admissions test1 and the Australian graduate medical school admissions test2 (table). Tests such as the thinking skills assessmentw3 are being promoted for student selection generally. The reasons include a political climate in which government ministers are advocating alternatives to A levels, some support for them in the Schwartz report on admissions to higher education,3 lobbying from organisations such as the Sutton Trust,w4 and the difficulty of distinguishing between the growing numbers of students achieving three A grades at A level. We examine the problems that intellectual aptitude tests are addressing, their drawbacks, any evidence that they are helpful, and alternatives.

Table 1.

Aptitude tests currently used in the United Kingdom by medical schools and other university courses

Test | Name | Further information | Comments
BMAT | Biomedical admissions test | www.bmat.org.uk | Used by Cambridge, Imperial College, Oxford, and University College London, as well as three veterinary schools
GAMSAT | Graduate medical school admissions test | www.acer.edu.au/gamsat | Used for selection by Australian graduate medical schools; at present used by four graduate entry schools in the United Kingdom (UK version: GAMSAT UK)
MSAT | Medical school admissions test | www.acer.edu.au/msat | Used by three UK medical schools
MVAT | Medical and veterinary admissions test | No published details | Developed in Cambridge; a precursor to the biomedical admissions test
OMAT | Oxford medicine admissions test | See reference 1 | Developed in Oxford; a precursor to the biomedical admissions test
PQA | Personal qualities assessment | www.pqa.net.au | Subtests of mental agility, interpersonal values, and interpersonal traits; administered in several UK medical schools on a research basis only
TSA | Thinking skills assessment | www.cam.ac.uk/cambuniv/undergrad/tests/tsa.html | Used by several Cambridge colleges for selection in a range of disciplines, of which computer science is presently the predominant one

Medical schools need selection procedures that are evidence based and legally defensible. We therefore explored a series of questions around these developments.

Many UK school leavers apply to university with other educational qualifications, including the International Baccalaureate and Scottish Highers. Medical schools are also increasingly admitting entrants other than directly from school. These different mechanisms of entry require separate study. We discuss the main school-leaver route through A levels.

Are A levels predictive of outcome?

Many beliefs are strongly held about undergraduate student selection but without any "visible means of support"4: one is that A levels are not predictive of outcome at university. The opposite is true. A study of 79 005 18-year-old students entering university in 1997-8 and followed through until 2000-1 (www.hefce.ac.uk) shows a clear relation between A level grades and university outcome (fig 1). The result is compatible with many other studies of students in general,5 w5 w6 and of medical students in particular.5-8 w7-w10 Small studies of individual students in individual years at individual institutions are unlikely to find such correlations—the reasons are statistical and include lack of power, restriction of range, and attenuation of correlations caused by unreliability of outcome measures (see bmj.com).

Fig 1. Outcome of students in relation to A level grades in all subjects. Grades are based on the best three (A=10, B=8, C=6, D=4, and E=2 points)
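The statistical points above (power, restriction of range, and attenuation through unreliable outcome measures) can be made concrete with a small simulation. The sketch below is illustrative only: the population correlation of 0.4, the 10% admission cut-off, the outcome reliability of about 0.6, and the cohort size of 150 are assumptions chosen for demonstration, not figures from the studies cited.

```python
# Illustrative simulation: why a small, range-restricted cohort with a noisy
# outcome measure can fail to show a correlation that is clear in the full
# applicant population. All numbers are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_r = 0.4  # assumed population correlation between A level points and outcome

a_level = rng.normal(size=n)
outcome = true_r * a_level + np.sqrt(1 - true_r**2) * rng.normal(size=n)

# Restriction of range: a school admits only the top 10% on A levels
admitted = np.flatnonzero(a_level > np.quantile(a_level, 0.90))

# Unreliable outcome measure: exam marks with reliability of about 0.6
noisy_outcome = np.sqrt(0.6) * outcome + np.sqrt(0.4) * rng.normal(size=n)

# A single small cohort of 150 admitted students
cohort = rng.choice(admitted, size=150, replace=False)

print("whole applicant pool:", round(np.corrcoef(a_level, outcome)[0, 1], 2))
print("one small admitted cohort, noisy marks:",
      round(np.corrcoef(a_level[cohort], noisy_outcome[cohort])[0, 1], 2))
```

Run repeatedly with different seeds, the small-cohort correlation wanders widely around a much reduced value, which is the pattern the single-institution studies describe.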

Summary points

So many applicants are achieving top grades at A level that it is increasingly impractical to select for medical schools primarily on such achievement

Schools are introducing tests of intellectual aptitude without evidence of appropriateness, accuracy, or added value, making them open to legal challenge

Since the 1970s, university achievement has been shown to be predicted by A levels but not by intelligence tests

The discriminative ability of A levels might be restored by introducing A+ and A++ grades, as recommended in the Tomlinson Report

If additional grades at A level cannot be introduced, medical schools could collectively commission and validate a new test of high grade scientific knowledge and understanding

An argument exists for also developing and validating tests of non-cognitive variables in selection, including interpersonal communication skills, motivation, and probity

Why are A levels predictive?

The three broad reasons why A levels may predict outcome in medicine are: cognitive ability—A levels are indirect measures of intelligence, and intelligence correlates with many biological and social outcomes9; substantive content—A levels provide students with a broad array of facts, ideas, and theories about disciplines such as biology and chemistry, which provide a necessary conceptual underpinning for medicine; and motivation and personality—achieving high grades at A level requires appropriate motivation, commitment, personality, and attitudes, traits that are also beneficial at medical school and for lifelong learning.

Cognitive ability alone cannot be the main basis of the predictive ability of A levels, because measures of intelligence and intellectual aptitude alone are poor predictors of performance at university. This is not surprising, as an old axiom of psychology says that "the best predictor of future behaviour is past behaviour," here meaning that future progress in passing medical school examinations will best be predicted by performance in past examinations. Of course, success in medicine and being a good doctor are not identical, nor is either of these the same as passing medical school examinations; but those who fail medical school examinations and have to leave medical school never become doctors of any sort. The predictive value of A levels most likely results from their substantive content, their surrogate assessment of motivation to succeed, or both.

Separating the substantive and motivational components of A levels is straightforward in principle. If the substantive content of A levels is important for prediction, then outcome will be better predicted by disciplines underpinning medical science, such as biology and chemistry, than by other subjects—for example, music, French, or economics. Alternatively, if motivational factors are the main basis for the predictive power of A levels, indicating pertinent personality traits and attitudes such as commitment, the particular subject taken will be less relevant, and an A grade in music, French, or economics will predict performance at medical school as well as an A grade in biology or chemistry. Few analyses, however, differentiate between these factors. Evidence is increasing that A level chemistry is a particularly good predictor of performance in basic medical science examinations6,10 (although not all studies find an effectw7 w11), and A level biology also seems to be important.6,11 As almost all medical school entrants take at least two science A levels, however, one of which is chemistry, this leaves little variance to partition. It should also be remembered that there are probably true differences in the difficulty of A levels, with A grades easier to achieve in subjects such as photography, art, Italian, and business studies than in chemistry, physics, Latin, French, mathematics, biology, and German.12 A clear test of the need for substantive content will occur should medical schools choose to admit students without A level chemistry, as has been suggested,w12 or perhaps without any science subjects. If students with only arts A levels perform as well as those with science A levels, then the substantive content of A levels is unimportant—and there are suggestions that arts and humanities subjects independently predict outcome.13 w13
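One way to picture the comparison described above is with synthetic data in which examination marks draw partly on subject knowledge (here, chemistry) and partly on a general motivation factor shared across all subjects. This is a hedged sketch, not an analysis of real admissions data; the variable names and effect sizes are invented for illustration.

```python
# Hedged sketch (synthetic data, hypothetical variable names): the comparison
# the text proposes -- do science A level grades predict basic medical science
# marks better than non-science grades of the same standard?
import numpy as np

rng = np.random.default_rng(1)
n = 2000
motivation = rng.normal(size=n)                       # shared "motivation" component
chemistry = 0.6 * motivation + 0.8 * rng.normal(size=n)
music = 0.6 * motivation + 0.8 * rng.normal(size=n)   # a non-science subject

# Under the "substantive content" account, exam marks draw on chemistry directly
exam = 0.4 * chemistry + 0.3 * motivation + rng.normal(size=n)

print("r(chemistry grade, exam):  ", round(np.corrcoef(chemistry, exam)[0, 1], 2))
print("r(music grade, exam):      ", round(np.corrcoef(music, exam)[0, 1], 2))
# Near-equal correlations would instead support the "motivation/personality"
# account, in which the particular subject taken matters little.
```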

What do intellectual aptitude tests do?

Aptitude has many meanings,14 but the glossary to the Schwartz report says that aptitude tests are "designed to measure intellectual capabilities for thinking and reasoning, particularly logical and analytical reasoning abilities."3 Aptitude, however, also refers to non-cognitive abilities, such as personality. We therefore talk here of "intellectual aptitude," both in the sense described by Schwartz and in the meaning used in the United States for what used to be known as scholastic aptitude tests (SATs; the "A" now stands for assessment), which are largely assessments of intellectual ability.w14

Most intellectual aptitude tests assess a mixture of what psychologists call fluid intelligence (logic and critical reasoning, or intelligence as process15) and crystallised intelligence (or intelligence as knowledge, consisting of general culturally acquired knowledge of vocabulary, geography, and so on). We thus consider intelligence tests and intellectual aptitude tests as broadly equivalent, but distinguish them from achievement tests, such as A levels, which assess knowledge of the content and ideas of academic subjects such as chemistry and mathematics.

Although purveyors of tests such as the biomedical admissions test (table) argue that they are not measures of intelligence but of “critical thinking,” there is little agreement on what critical thinking means.w15-w17 Critical thinking is related more to aspects of normal personality than it is to IQ and reflects a mixture of cognitive abilities (for example, deductive reasoning) and dispositional attitudes to learning (for example, open mindedness).16 Evidence also shows that critical thinking skills, not dispositions, predict success in examinations17 and that critical thinking may lack temporal stability.18 The content and the timed nature of the biomedical admissions test suggest it will correlate highly with conventional IQ tests.

What do intellectual aptitude tests predict?

We know of three studies that have compared intellectual aptitude tests with A levels (see bmj.com).

The investigation into supplementary predictive information for university admissions (ISPIUA) project5 studied 27 315 sixth formers in 1967, who were given a three hour test of academic aptitude (TAA); 7080 entered university in 1968 and were followed up until 1971. The results were clear: “TAA appears to add little predictive information to that already provided by GCE results [A levels and O levels] and school assessment in general.”

The Westminster study14 followed up 511 entrants to the Westminster Medical School between 1975 and 1982 who had been given a timed IQ test, the AH5.w18 Intellectual aptitude did not predict outcomes measured in 2002, whereas A level grades were predictive of both academic and career outcomes.

The 1991 cohort study looked at 6901 applicants to UK medical schools in 1990, of whom 3333 were admittedw19 w20 and followed up.w21 w22 An abbreviated version of the timed IQ test was given to 786 interviewees.w18 A levels were predictive of performance in basic medical science examinations, in final clinical examinations, and in part 1 of a postgraduate examination, whereas the intellectual aptitude test was not predictive (see bmj.com).

In the United States (presently outside the hegemony of the A level system) a recent study of dental school admissions19 evaluated an aptitude selection test, carefully founded in the psychology of cognitive abilities and skill acquisition. The scores were of no predictive value for clinical achievement at the end of the course.

Do aptitude tests add anything to what A levels already tell us?

In interpreting the validity of aptitude tests it should be acknowledged that some aptitude tests are not content free and do assess substantive components. For instance, the biomedical admissions test seeks to test aptitude and skills (problem solving, understanding argument, data analysis, and inference) in section 1 and scientific knowledge and applications in section 2. Section 2 contains questions on biology, chemistry, physics, and mathematics (www.bmat.org.uk) at the level of key stage 4 of the UK national curriculum (www.nc.uk.net). Because most applicants for medical school are studying many of those subjects for A level, section 2 may be a better predictor of university outcome than section 1, as it indirectly assesses breadth of scientific background (and in all likelihood it will be found to correlate with A level grades, but not necessarily to add any predictive value).
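The added-value question is, in principle, an incremental validity analysis: predict the outcome from A level points alone, then add the aptitude test score and see how much the explained variance rises. The sketch below uses synthetic data in which the test correlates with A levels but contributes nothing independently; it illustrates the method, not a result about any actual test.

```python
# Hedged sketch (synthetic data): an incremental validity (delta R^2) check --
# does an aptitude test score add predictive value over A level points alone?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1500
a_level_points = rng.normal(size=n)
# Assumption for illustration: the test correlates with A levels but adds nothing
aptitude = 0.5 * a_level_points + rng.normal(scale=0.9, size=n)
outcome = 0.4 * a_level_points + rng.normal(size=n)

base = sm.OLS(outcome, sm.add_constant(a_level_points)).fit()
full = sm.OLS(outcome,
              sm.add_constant(np.column_stack([a_level_points, aptitude]))).fit()

print(f"R^2, A levels only:       {base.rsquared:.3f}")
print(f"R^2, A levels + aptitude: {full.rsquared:.3f}")
print(f"Delta R^2 (added value):  {full.rsquared - base.rsquared:.3f}")
```

A test can therefore correlate appreciably with outcome and still add essentially nothing once A level attainment is taken into account.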

On balance, A levels predict university achievement mainly because they measure the knowledge and ideas that provide the conceptual scaffolding necessary for building the more advanced study of medicine.20 w23 As with building a house, the scaffolding will later be taken down and its existence forgotten, but it will nevertheless have played a key part in construction. Motivation and particular personality traits may also be necessary, just as a house is built better and faster by an efficient, conscientious builder. However, intellectual aptitude tests assess neither the fundamental scientific knowledge needed to study medicine nor the motivation and personality traits. Pure intellectual aptitude tests only assess fluid intelligence, and empirically that is a weak predictor of university performance. Intelligence in a builder, although highly desirable, is no substitute for a professional knowledge of the craft of house building.

What are the problems of using A levels for selection?

Reported problems in using A levels in selection are threefold: the increasing numbers of candidates with three A grades at A level; social exclusion; and type of schooling.

A continuing concern of the UK government is that entry to medical school is socially exclusive.21 The class distribution of entrants has been unchanged for over half a century, with a preponderance of applicants and entrants from social class I.w24 It is not clear whether the entrant profile reflects bias by selectors,w25 an active choice not to apply for medicine by those from social classes IV and V,w26 or an underlying distribution of ability by social class.w27

Although most children in the United Kingdom attend state schools, the minority attending private (independent) schools are over-represented among university entrants, possibly because as a group they achieve higher A level scores.

Can these problems be fixed?

The increasing numbers of candidates with three grade As at A level

Intellectual aptitude tests are seen as a way to stretch the range, continuing to differentiate when A level grades are at their ceiling. The problem is that although these tests undoubtedly provide variance (as indeed would random numbers), it is not useful variance since it does not seem to predict performance at university.

The simplest solution to the ceiling problem is that suggested in the Tomlinson Report22 of introducing A+ and A++ grades at A level, so that A levels continue to do what they do well, rather than being abandoned and replaced by tests with unproved validity. An appropriate alternative strategy would be to commission a new test of high level scientific knowledge and understanding that measures above the top end of A levels, but it may be better to stay with what we already know.
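A small simulation illustrates why finer grading at the top could restore discrimination without any new test. The grade boundaries, the size of the top band, and the ability-outcome correlation below are all illustrative assumptions, not empirical values.

```python
# Illustrative simulation: a grade ceiling hides differences among top
# applicants; finer bands at the top (A+ / A++-style) recover them.
# Thresholds and correlations are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
ability = rng.normal(size=n)
outcome = 0.5 * ability + rng.normal(size=n)

# Coarse grading: everything above the 80th percentile collapses to a single "A"
coarse = np.digitize(ability, np.quantile(ability, [0.4, 0.6, 0.8]))
# Finer grading at the top: extra cut points create A, A+, A++ bands
fine = np.digitize(ability, np.quantile(ability, [0.4, 0.6, 0.8, 0.9, 0.97]))

top = ability > np.quantile(ability, 0.8)   # the pool of straight-A applicants
print("distinct coarse grades among top applicants:", np.unique(coarse[top]).size)
print("distinct fine grades among top applicants:  ", np.unique(fine[top]).size)
print("r(fine grade, outcome) within the top pool: ",
      round(np.corrcoef(fine[top], outcome[top])[0, 1], 3))
```

With coarse grading every top applicant looks identical, so no variance is available for selection; the finer bands restore useful, outcome-related variance within the same applicant pool.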

Social exclusion

It has been suggested that a pool of talented individuals capable of becoming good doctors is excluded by current admission methods. Even if that were so (and we know of no evidence to support it) there is no basis for believing that intellectual aptitude tests are capable of identifying them.

Type of schooling

To tackle possibly unfair over-representation of entrants from independent schools, a case has been made for university selectors taking into account type of school and the relative performance of the school: achieving three grade Cs at A level in a school where most pupils gain three grade Es may predict university achievement better than gaining three grade Cs in a school where most pupils gain three grade Bs. Detailed analyses by the Higher Education Funding Council for England, however, show that after taking into account the grades achieved by an individual student, the aggregate achievement of the school from which the student has come provides no additional prediction of university outcome (see bmj.com). The funding council has good evidence to show that on aggregate, pupils from independent schools under-perform at university compared with those with the same grades from state schools (fig 2).

Fig 2. Predictive value of A levels obtained at different types of school for gaining a first class or upper second class degree (see bmj.com for definition of school types)

Intellectual aptitude tests are not a solution to this problem. A solution might be to upgrade the A level grades of applicants from state schools so that, say, one A grade and two B grades are treated as equivalent to two A grades and one B grade from an independent school applicant (that is, increasing by 20 points on the new tariff of the Universities and Colleges Admissions Service (www.ucas), or by 2 points on the older scheme shown in figure 2). Any system should also take into account that many pupils are at independent schools until age 16 and then transfer to (state) sixth form colleges for their A levels. A proper, holistic assessment of each student will, however, require more information than is readily available on the present admissions form.
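As a purely illustrative sketch of the adjustment just described (and not a recommendation of particular numbers beyond those given in the text), the function below adds a flat 20 tariff points for state school applicants, so that ABB from a state school is weighed like AAB from an independent school. The per-grade tariff values reflect the UCAS tariff of the time and are included here as assumptions; the function and its names are hypothetical.

```python
# Minimal sketch of the adjustment described in the text: add 20 points on the
# (then new) UCAS tariff for state school applicants. School categories, the
# flat 20-point uplift, and the ABB/AAB equivalence come from the text; the
# tariff values and function are assumptions for illustration.

TARIFF = {"A": 120, "B": 100, "C": 80, "D": 60, "E": 40}  # assumed per-A-level tariff
STATE_SCHOOL_UPLIFT = 20

def adjusted_points(grades: list[str], school_type: str) -> int:
    """Sum tariff points for the best three A levels, applying the uplift
    for state school applicants."""
    points = sum(sorted((TARIFF[g] for g in grades), reverse=True)[:3])
    if school_type == "state":
        points += STATE_SCHOOL_UPLIFT
    return points

# ABB from a state school now matches AAB from an independent school
print(adjusted_points(["A", "B", "B"], "state"))        # 320 + 20 = 340
print(adjusted_points(["A", "A", "B"], "independent"))  # 340
```

Any real scheme would, as the text notes, also need to handle pupils who move from independent schools to state sixth form colleges, which a single flat uplift cannot capture.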

What is the potential value of non-cognitive aptitude tests?

The aptitude tests we have considered are those that assess cognitive skills. Doctors, however, need other skills, such as the ability to communicate and to empathise, appropriate motor and sensory skills, and the attitudes and ethical standards necessary for professionalism (see, for example, www.pqa.net.au). None of these is disputed, and there are strong arguments that selectors can and should take such measures into account at the same time as they are assessing the intellectual skills necessary for coping with the course.w28 w29 We have not considered such aptitudes in detail here because few UK medical schools are as yet using them in selection (as opposed to research and validation), although initiatives are in progress, both in the United Kingdom and elsewhere.w30 w31 Situational selection tests have been used in five Flemish medical and dental schools and were found to predict performance in end-of-first-year examinations better than tests of cognitive ability.23

It is also the case, however, that if selection is to be made on the basis of several independent characteristics, then the extent of selection on each is inevitably lower than if there is selection only on any one of them,24 until eventually the selective power of each is so reduced that "if you select on everything you are actually selecting on nothing."w32 An attractive argument is that if most students with three grade As at A level (or indeed even lower grades) can cope with a medical course, then instead of looking for selection using A+ and A++ grades, medical schools should be selecting from the existing pool more systematically on non-cognitive measures. That will require validation of the measures, but it might result in a cohort of doctors who are not only academically able but also well suited to medicine because of temperament and attitude. Whether the slight potential decrease in the academic qualifications of entrants will be offset by their increased suitability on non-cognitive measures will depend on precise knowledge of the relation between academic ability, suitability, and examination performance, and in particular on whether these relations are linear or non-linear and whether they show thresholds. It is an important question that must be answered by empirical study.
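The dilution argument can be quantified under the simple assumptions of independent criteria and a fixed overall acceptance rate: the cut-off applied to each criterion weakens rapidly as criteria are added. The 10% acceptance rate in the sketch below is an assumption for illustration.

```python
# Illustrative calculation of the dilution argument: with a fixed overall
# acceptance rate, the more independent criteria you select on, the milder the
# cut-off on each one becomes. The 10% acceptance rate is an assumption.
from scipy.stats import norm

acceptance_rate = 0.10  # assume 10% of applicants can be admitted

for k in (1, 2, 4, 8):                            # number of independent criteria
    per_criterion_pass = acceptance_rate ** (1 / k)
    cutoff_z = norm.ppf(1 - per_criterion_pass)   # z-score cut-off on each criterion
    print(f"{k} criteria: keep top {per_criterion_pass:.0%} on each "
          f"(cut-off z = {cutoff_z:.2f})")
```

With eight independent criteria, each cut-off falls to roughly the 25th percentile, which is the sense in which selecting on everything approaches selecting on nothing.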

Conclusions

Schwartz urged universities to use "tests and approaches that have already been shown to predict undergraduate success" and to assess applicants holistically. We conclude that A levels, with a more finely graded marking system at the top end (A+ and A++ grades, for example), have the greatest potential to improve selection by medical schools' admissions staff: such grades will be maximally robust, in view of the testing time (and coursework) involved.

We understand why the new intellectual aptitude tests are being introduced, but are concerned that they are being introduced uncritically and without published evidence on their reliability and validity. Typically, they involve only an hour or two of testing time and are thus unlikely to have high reliability or generalisability (particularly owing to content specificity), although no data have been published. Their validity can be doubted for good reason, as published studies have found that intellectual aptitude compares poorly with A levels in predicting outcomes at university and medical school, and it has not been shown to add value to the selection process.
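The link between testing time and reliability can be illustrated with the classical Spearman-Brown prophecy formula, which projects how reliability changes when a test is lengthened. The starting reliability of 0.70 for a roughly two-hour test is an assumption for illustration, not a published figure for any of the tests discussed.

```python
# Hedged illustration using the Spearman-Brown prophecy formula: projected
# reliability of a short test versus a much longer programme of assessment.
# The baseline reliability of 0.70 is an assumption, not a published value.

def spearman_brown(reliability: float, length_factor: float) -> float:
    """Projected reliability when a test is lengthened by `length_factor`."""
    return length_factor * reliability / (1 + (length_factor - 1) * reliability)

base = 0.70  # assumed reliability of a roughly two-hour aptitude test
for factor in (1, 3, 10):  # e.g. ~2 h versus ~6 h versus ~20 h of assessment
    print(f"{factor}x testing time -> projected reliability "
          f"{spearman_brown(base, factor):.2f}")
```

The contrast with the many hours of examination and coursework behind an A level grade is the point being made about robustness.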

The appropriate alternative to refining A level grades would be for the medical schools to commission a new test, reliably assessing high grade scientific knowledge and understanding. At the same time, more research into the value of non-cognitive tests is clearly important and required.

We accept that our criticism of intellectual aptitude tests could be shown to be misplaced when the medical schools using them publish their evidence on predictive validity and reliability. Currently the tests are being justified, not by means of any reported data but by general assertions of organisational quality, unspecified relations between scores and university examinations, and by the observation that admissions staff are using them.25 Without evidence, medical schools using these tests are vulnerable to legal challenge.

Supplementary Material


Supplementary information and figures are on bmj.com

We thank the referees and William James and Chris Whetton for help with preparing this paper.

Contributors: DAP and ICM conceived the article. ICM wrote the first draft. DAP, RW, EF, DJ, and PR provided input. RW revised the draft, following referees' comments. ICM is guarantor.

Funding: None.

Competing interests: DAP is a member of the team developing the personal qualities assessment.

References

1. James W, Hawkins C. Assessing potential: the development of selection procedures for the Oxford medical course. Oxford Rev Educ 2004;30: 241-55.
2. Brown P. Admissions testing for entry to medicine. 01: the newsletter of LTSN-01 2004;No 6: 27. www.ltsn-01.ac.uk (accessed 6 Jan 2005).
3. Admissions to Higher Education Steering Group. Fair admissions to higher education: recommendations for good practice. Nottingham: Department for Education and Skills Publications, 2004. www.admissions-review.org.uk (accessed 6 Jan 2005).
4. Bekhradnia B, Thompson J. Who does best at university? London: Higher Education Funding Council for England, 2002. www.hefce.ac.uk/Learning/whodoes (accessed 6 Jan 2005).
5. Choppin BHL, Orr L, Kurle SDM, Fara P, James G. The prediction of academic success. Slough: NFER Publishing, 1973.
6. James D, Chilvers C. Academic and non-academic predictors of success on the Nottingham undergraduate medical course 1970-1995. Med Educ 2001;35: 1056-64.
7. Lumb AB, Vail A. Comparison of academic, application form and social factors in predicting early performance on the medical course. Med Educ 2004;38: 1002-5.
8. Ferguson E, James D, Madeley L. Factors associated with success in medical school and in a medical career: systematic review of the literature. BMJ 2002;324: 952-7.
9. Deary IJ. Differences in mental abilities. BMJ 1998;317: 1701-3.
10. Montague W, Odds FC. Academic selection criteria and subsequent performance. Med Educ 1990;24: 151-7.
11. McManus IC, Richards P. Prospective survey of performance of medical students during preclinical years. BMJ 1986;293: 124-7.
12. Curriculum, Evaluation and Management Centre. A-level update. www.cemcentre.org/recenttopics/alevelupdate/default.asp (accessed 6 Jan 2005).
13. Grey MR, Pearson SA, Rolfe IE, Kay FJ, Powis DA. How do Australian doctors with different pre-medical school backgrounds perform as interns? Educ Health 2001;14: 87-96.
14. McManus IC, Smithers E, Partridge P, Keeling A, Fleming PR. A levels and intelligence as predictors of medical careers in UK doctors: 20 year prospective study. BMJ 2003;327: 139-42.
15. Ackerman PL. A theory of adult intellectual development: process, personality, interests and knowledge. Intelligence 1996;22: 227-57.
16. Perkins D, Tishman S, Ritchart R, Donis K, Andrade A. Intelligence in the wild: a dispositional view of intellectual traits. Educ Psychol Rev 2000;12: 269-93.
17. Giddens J, Gloackner GW. The relationship of critical thinking to performance on the NCLEX-RN. J Nurs Educ 2005;44: 85-9.
18. Pithers RT, Soden R. Critical thinking in education: a review. Educ Res 2000;42: 237-49.
19. Gray SA, Deem LP, Straja SR. Are traditional cognitive tests useful in predicting clinical success? J Dent Educ 2002;66: 1241-5.
20. Ferguson E, James D, O'Hehir F, Sanders A. A pilot study of the roles of personality, references and personal statements in relation to performance over the 5 years of a medical degree. BMJ 2003;326: 429-31.
21. Department of Health. Medical schools: delivering the doctors of the future. London: DoH Publications, 2004.
22. Working Group on 14-19 Reform. 14-19 curriculum and qualifications reform: final report of the working group on 14-19 reform [the Tomlinson report]. Nottingham: Department for Education and Skills Publications, 2004.
23. Lievens F, Coetsier P. Situational tests in student selection: an examination of predictive validity, adverse impact and construct validity. Int J Sel Assess 2002;10: 245-57.
24. McManus IC, Vincent CA. Selecting and educating safer doctors. In: Vincent CA, Ennis M, Audley RJ, eds. Medical accidents. Oxford: Oxford University Press, 1993: 80-105.
25. University of Cambridge Local Examinations Syndicate. FAQs on BMAT/TSA: "How do you know the tests are any good?" http://bmat.ucles-red.cam.ac.uk/background.html (accessed 25 Mar 2005).
