BMJ. 2002 Apr 20;324(7343):950–951. doi: 10.1136/bmj.324.7343.950

General practitioners' self ratings of skills in evidence based medicine: validation study

Jane M Young a, Paul Glasziou b, Jeanette E Ward c

To practise evidence based medicine, clinicians need to understand and use terms such as “relative risk reduction,” “absolute risk reduction,” and “number needed to treat.”1 Self ratings represent one method of assessing competence in these skills. About a third of clinicians claim to understand such terms.2 We evaluated the validity of self ratings and conducted a blinded validation in general practice.

Methods and results

Fifty general practitioners in Sydney, Australia, completed self administered questionnaires,2 in which they rated their understanding of each of seven terms used in evidence based medicine as “Would not be helpful for me to understand,” “I don't understand but would like to,” “I already have some understanding,” and “I understand this and could explain to others.” We considered the last response to represent full understanding (self rating of competence). Participants sealed their responses in an envelope before participating in a structured interview with JY (who was unaware of their self rating), in which they were asked to explain each term as if to a medical student. Unprompted comments were recorded (see box on bmj.com). The study was approved by the Central Sydney Area Health Service Ethics Review Committee.

Three independent experts in evidence based medicine had been asked to identify the criteria essential for showing that a participant knew the correct meaning of each term (criterion based assessment; see table on bmj.com). During the interviews with general practitioners, JY ticked any criterion met by a participant's verbal explanation. To demonstrate competence in understanding number needed to treat, for example, participants had to convey that it represents the number of patients who need to be treated to achieve one good outcome or prevent one bad outcome and that it is the reciprocal of the absolute risk reduction.
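
As a concrete illustration of the relation the experts required participants to state, here is a minimal sketch (in Python, using invented event rates that do not come from the study) showing how absolute risk reduction, relative risk reduction, and number needed to treat are derived from a pair of event rates, with number needed to treat as the reciprocal of the absolute risk reduction.

```python
# Minimal sketch, not study data: the event rates below are invented purely
# to illustrate how the three terms relate to one another.

def risk_summary(control_event_rate: float, treatment_event_rate: float) -> dict:
    """Return absolute risk reduction (ARR), relative risk reduction (RRR)
    and number needed to treat (NNT) for a pair of adverse event rates."""
    arr = control_event_rate - treatment_event_rate  # absolute risk reduction
    rrr = arr / control_event_rate                   # relative risk reduction
    nnt = 1 / arr                                    # NNT is the reciprocal of the ARR
    return {"ARR": arr, "RRR": rrr, "NNT": nnt}

# Hypothetical example: 20% of control patients and 15% of treated patients
# suffer the bad outcome.
summary = risk_summary(0.20, 0.15)
print({k: round(v, 2) for k, v in summary.items()})
# -> {'ARR': 0.05, 'RRR': 0.25, 'NNT': 20.0}
```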

Participants' verbal explanations almost never met the essential criteria (table). Although self ratings of competence were themselves modest, only one participant's explanation met all the essential criteria, and then for only one term, positive predictive value.

We could not calculate the sensitivity and specificity of self rated competence for any term other than positive predictive value, as only one respondent met the objective criteria for competence. We therefore calculated positive and negative predictive values for each term to assess the probability of competence given a positive or negative self rating. The predictive value of a positive self rating was 8% for positive predictive value but zero for the other six terms (table). As no participant demonstrated competence exceeding their self rating, the predictive value of a negative self rating was 100% for all terms.
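
As a restatement of the published figures rather than any new analysis, the sketch below retraces this arithmetic for the one term where competence was observed, using the counts in the table: 13 participants rated themselves fully competent in "positive predictive value", of whom one met all essential criteria, while none of the 37 participants giving other responses did.

```python
# Retracing the predictive values of self rated competence for the term
# "positive predictive value", using the counts reported in the table.

rated_competent = 13        # self rating: "I understand and could explain to others"
competent_of_rated = 1      # of those, number meeting all essential criteria
other_responses = 37        # all other self ratings
competent_of_other = 0      # of those, number meeting all essential criteria

# Probability of objectively demonstrated competence given a positive self rating
ppv_of_self_rating = competent_of_rated / rated_competent                      # 1/13

# Probability of lacking demonstrated competence given a negative self rating
npv_of_self_rating = (other_responses - competent_of_other) / other_responses  # 37/37

print(f"Predictive value of a positive self rating: {ppv_of_self_rating:.0%}")  # 8%
print(f"Predictive value of a negative self rating: {npv_of_self_rating:.0%}")  # 100%
```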

Comment

Participants' self ratings of their understanding of terms used in evidence based medicine differed from an objective, criterion based assessment. Moreover, participants' comments showed considerable misunderstanding about terms.

Medical education in Australia has largely not prepared general practitioners for evidence based medicine. Remediation is crucial if they are to understand the research findings on which clinical practice ought to be based and to avoid pitfalls such as the "framing effect."3 Little rigorous research has been conducted to identify effective educational strategies for clinicians.4

It is unclear whether the findings from our modest sample also apply to medical practitioners in other settings. Australia's general practitioners are likely to be at least as familiar with evidence based medicine as their counterparts in other countries, given its recent prominence in Australian health policy.5 Our method may have led participants to underperform: they might have been better able to explain these terms to medical students when not under the scrutiny of an academic interviewer. Furthermore, general practitioners may find these terms difficult to explain in the abstract yet understand them when they are used in context by a conference speaker or in a research article.


Table.

Comparison of participants' self rating of understanding of terms used in evidence based medicine with objective criteria developed by experts

Self rating of competence | No of responses | All criteria met | Some criteria met | No criteria met | Could not or refused to answer or participate | Positive predictive value (%) | Negative predictive value (%)
Levels of evidence:
 I understand and could explain to others | 7 | 0 | 0 | 6 | 1 | 0 | 100
 Other responses | 43 | 0 | 2 | 3 | 38 | |
Relative risk:
 I understand and could explain to others | 9 | 0 | 0 | 8 | 1 | 0 | 100
 Other responses | 41 | 0 | 0 | 5 | 36 | |
Absolute risk:
 I understand and could explain to others | 15 | 0 | 0 | 8 | 7 | 0 | 100
 Other responses | 35 | 0 | 0 | 1 | 34 | |
Number needed to treat:
 I understand and could explain to others | 8 | 0 | 2 | 2 | 4 | 0 | 100
 Other responses | 42 | 0 | 0 | 4 | 38 | |
Test sensitivity:
 I understand and could explain to others | 13 | 0 | 1 | 7 | 5 | 0 | 100
 Other responses | 37 | 0 | 1 | 4 | 32 | |
Test specificity:
 I understand and could explain to others | 13 | 0 | 0 | 6 | 7 | 0 | 100
 Other responses | 37 | 0 | 0 | 3 | 34 | |
Positive predictive value:
 I understand and could explain to others | 13 | 1 | 0 | 1 | 11 | 8 | 100
 Other responses | 37 | 0 | 0 | 0 | 37 | |

Acknowledgments

We thank the general practitioners who participated in this study and Jeremy Anderson, Chris Del Mar, and Chris Silagy, who responded to our request to rate criteria.

Footnotes

Editorial by Woodcock et al

Funding: At the time of the fieldwork, JY was employed by Central Sydney Area Health Service. JY is currently supported by National Health and Medical Research Council Public Health (Australia) fellowship No 007024.

Competing interests: None declared.

See box and additional table on bmj.com

References

1. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practise and teach EBM. Edinburgh: Churchill Livingstone; 1998.
2. McColl A, Smith H, White P, Field J. General practitioners' perceptions of the route to evidence based medicine: a questionnaire survey. BMJ. 1998;316:361–365. doi: 10.1136/bmj.316.7128.361.
3. Cranney M, Walley T. Same information, different decisions: the influence of evidence on the management of hypertension in the elderly. Br J Gen Pract. 1996;46:661–663.
4. Hyde C, Parkes J, Deeks J, Milne R. Systematic review of effectiveness of teaching critical appraisal. Oxford: ICRF/NHS Centre for Statistics in Medicine; 2000. www.bham.ac.uk/arif/SysRevs/TeachCritApp.PDF (accessed 4 Jan 2002).
5. Ahmed T, Silagy C. The move towards evidence-based medicine. Med J Aust. 1995;163:60–61.
