The term “evidence-based medicine” (EBM) was introduced in 1992 in a seminal paper by Gordon Guyatt and the Evidence-Based Medicine Working Group as a solution to “an exploding volume of literature . . . deepening concern about burgeoning medical costs, and increasing attention to quality and outcomes.”1 Over the ensuing decades, EBM has been integrated into medical culture and incorporated almost universally into medical school and residency curricula.2,3 In addition, Guyatt's recognition of the need to reduce health care costs and improve quality has entered the mainstream consciousness, framed increasingly around the notion of “value.”
Value can be conceptualized as the ratio of health outcomes to costs,4 as sketched below. Skills in EBM are critical to optimizing value because a deep understanding of evidence is required to predict health outcomes in individual patients. In particular, clinicians must recognize the clinical impact of interventions, grapple with uncertainty in the evidence, and uncover bias in published studies in order to fully balance the benefits and harms of potential approaches. More than 20 years of EBM immersion should have prepared us thoroughly for these tasks. But has it?
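Porter's definition4 can be rendered as a simple ratio; the formulation below is a schematic summary of that definition, not a formal measurement model:

```latex
% Value in health care (after Porter, ref 4): health outcomes achieved
% per dollar spent, measured over the full cycle of care.
\[
\text{Value} = \frac{\text{health outcomes achieved}}{\text{cost of achieving those outcomes}}
\]
```

On this view, value rises when outcomes improve at constant cost, or when equivalent outcomes are achieved at lower cost.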
The study by Caverly et al5 in this issue of the Journal of Graduate Medical Education suggests that EBM education has failed to prepare physicians for high-value practice. The authors presented medical residents and attending internal medicine physicians with 4 vignettes describing drug studies with different types of endpoints: total mortality, disease-specific mortality, a surrogate outcome (simply called a “risk factor” in the vignette), and a composite outcome with a surrogate component. Participants were asked to rate the extent to which each study proved that the new drug “might help people.” Both residents and attending physicians rated improvement in the composite outcome most highly as proof of drug benefit. Although participants were not asked to compare endpoints directly, fewer than half rated all-cause mortality as better proof of benefit than improvement in a surrogate endpoint, and fewer than a quarter rated all-cause mortality as better proof than a composite endpoint. Despite the limitations of this survey approach, the findings suggest that physicians lack the skill to accurately weigh the relative importance of different types of endpoints in clinical trials and that they tend to overvalue surrogate and composite endpoints.
The overvaluing of surrogate and composite endpoints threatens health care value because improvements in surrogate endpoints may occur without improvement (or even with worsening) of clinical outcomes. For example, class IC antiarrhythmic agents were routinely prescribed for arrhythmia suppression to patients with asymptomatic ventricular arrhythmias after myocardial infarction, until the Cardiac Arrhythmia Suppression Trial found that these drugs actually increased mortality compared with placebo.6 Use of dual angiotensin-converting enzyme inhibitor and angiotensin receptor blocker therapy for a variety of indications grew rapidly based on possible benefit in surrogate outcomes (eg, proteinuria in nephropathy) until harms such as hypotension and hyperkalemia became clear.7 In both cases, prescribing based on surrogate outcomes likely harmed large numbers of patients. Further, because pharmaceutical industry marketing is often based on surrogate outcomes,8 physicians who fail to recognize the limitations of these outcomes may facilitate successful industry marketing of expensive new (and possibly minimally effective) drugs, reducing value for patients.9
Why, despite EBM education, are physicians unable to appreciate the greater value of a reduction in mortality compared with an improvement in a surrogate outcome? First, evaluating the appropriateness of endpoints is not adequately emphasized in EBM education. Despite the ubiquitous “PICO” structure for clinical questions, with “O” representing the outcome of interest, there is little instruction in the relative weight of different outcomes, and the complexity of composite outcomes defies simple explanation. Instruction in the applicability of evidence to patient care includes consideration of whether all clinically relevant outcomes were reported.10 However, applicability issues tend to be deemphasized in EBM teaching in favor of internal validity, so discussions of outcomes may be cursory. Reflecting this lack of emphasis, standard tools for evaluating physicians' EBM skills do not test understanding of the relative value of outcome measures.11,12 Second, the evidence hierarchy13 is routinely taught as a central EBM concept. This hierarchy emphasizes study design: randomized trials are highly valued without consideration of specific study characteristics (such as the chosen primary outcome), so a relatively inexperienced EBM practitioner would likely consider a randomized trial with a surrogate primary outcome to be high-level evidence.
Clearly, inclusion of specific study characteristics in the evidence hierarchy would render it overly complex and unusable, but perhaps that's the point. Understanding evidence is legitimately complex, and attempts to oversimplify the process may perversely lead to misinterpretation of evidence and the incorporation of low-level evidence into clinical practice. The findings of Caverly and colleagues5 may represent the tip of the iceberg of evidence misinterpretation. While few studies have assessed physician skills in identifying appropriate evidence for clinical adoption, physicians have poor numeracy,14 fail to discount for conflicts of interest when weighing evidence,15 and appear to be influenced by industry marketing9 that tends to present evidence poorly.
How can educators better train physicians to use evidence to improve value for patients? First, we can emphasize basic concepts in EBM education rather than the details of critical appraisal or instruction in calculating quantifiers such as the number needed to treat and the likelihood ratio (defined below for reference). This teaching should include the importance of clinically relevant outcome measures, appropriate comparators, and adequate follow-up time in clinical trials, as well as the possible influence of conflicts of interest. These basic concepts need to be reinforced repeatedly throughout training. Second, after more than 2 decades of EBM education, we need to recognize that evidence interpretation is complex and that many (perhaps most) physicians may never master it. For these learners, the ability to identify and retrieve reliable, high-quality evidence is critical,16 while the ability to perform critical appraisal is less important. All trainees must become skilled at accessing high-quality evidence-based guidelines (from a variety of national organizations) and topic summaries (from sources such as BMJ Clinical Evidence17). Trainees should also know how to access summaries and interpretations of individual high-impact clinical trials (from sources such as ACP Journal Club18 and McMaster PLUS19) and high-quality systematic reviews (from sources such as Cochrane20). Accomplishing this mastery may take time away from traditional EBM education and will require humility and the acknowledgment of the complexity of EBM. At the same time, it will produce a physician workforce able to use the best evidence and make high-value clinical decisions for patients.
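For reference, the quantifiers named above have simple, standard definitions; the numbers in the worked example are hypothetical, chosen only to illustrate the arithmetic:

```latex
% Number needed to treat (NNT): the reciprocal of the absolute risk
% reduction (ARR), where ARR is the difference between the control event
% rate (CER) and the experimental event rate (EER).
\[
\mathrm{ARR} = \mathrm{CER} - \mathrm{EER}, \qquad
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}
\]
% Hypothetical example: if CER = 0.10 and EER = 0.07, then ARR = 0.03 and
% NNT = 1/0.03, or roughly 34 patients treated to prevent 1 event.

% Likelihood ratios for a diagnostic test:
\[
\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
\mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}
\]
```

The argument above is not that these formulas are wrong, but that for most clinicians, knowing where to find reliably computed values may matter more than deriving them at the bedside.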
The study by Caverly et al5 shows us that current EBM education may not provide physicians with the skills required to make the best decisions for patients. Refocusing on EBM basics will remove the blinders, help physicians recognize good and bad evidence, and improve the value of care provided to all patients.
Footnotes
Deborah Korenstein, MD, FACP, is a Clinical Member, Memorial Sloan Kettering Cancer Center.
References
1. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420–2425. doi: 10.1001/jama.1992.03490170092032.
2. Blanco MA, Capello CF, Dorsch JL, Perry G, Zanetti ML. A survey study of evidence-based medicine training in US and Canadian medical schools. J Med Libr Assoc. 2014;102(3):160–168. doi: 10.3163/1536-5050.102.3.005.
3. Bednarczyk J, Pauls M, Fridfinnson J, Weldon E. Characteristics of evidence-based medicine training in Royal College of Physicians and Surgeons of Canada emergency medicine residencies—a national survey of program directors. BMC Med Educ. 2014;14:57. doi: 10.1186/1472-6920-14-57.
4. Porter ME. What is value in health care? N Engl J Med. 2010;363(26):2477–2481. doi: 10.1056/NEJMp1011024.
5. Caverly TJ, Matlock DD, Prochazka AV, Lucas BP, Hayward RA. Interpreting clinical trial outcomes for optimal patient care: a survey of clinicians and trainees. J Grad Med Educ. 2016;8(1):xx–xx. doi: 10.4300/JGME-D-15-00137.1.
6. Echt DS, Liebson PR, Mitchell LB, Peters RW, Obias-Manno D, Barker AH, et al. Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The Cardiac Arrhythmia Suppression Trial. N Engl J Med. 1991;324(12):781–788. doi: 10.1056/NEJM199103213241201.
7. Messerli FH, Bangalore S. ALTITUDE trial and dual RAS blockade: the alluring but soft science of the surrogate end point. Am J Med. 2013;126(3):e1–e3. doi: 10.1016/j.amjmed.2012.07.006.
8. Korenstein D, Keyhani S, Mendelson A, Ross JS. Adherence of pharmaceutical advertisements in medical journals to FDA guidelines and content for safe prescribing. PLoS One. 2011;6(8):e23336. doi: 10.1371/journal.pone.0023336.
9. Spurling GK, Mansfield PR, Montgomery BD, Lexchin J, Doust J, Othman N, et al. Information from pharmaceutical companies and the quality, quantity, and cost of physicians' prescribing: a systematic review. PLoS Med. 2010;7(10):e1000352. doi: 10.1371/journal.pmed.1000352.
10. Guyatt G, Rennie D, Meade MO, Cook DJ. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 3rd ed. New York, NY: McGraw-Hill Education; 2014.
11. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of competence in evidence based medicine. BMJ. 2003;326(7384):319–321. doi: 10.1136/bmj.326.7384.319.
12. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116–1127. doi: 10.1001/jama.296.9.1116.
13. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users' guides to the medical literature: XXV. Evidence-based medicine: principles for applying the users' guides to patient care. Evidence-Based Medicine Working Group. JAMA. 2000;284(10):1290–1296. doi: 10.1001/jama.284.10.1290.
14. Johnson TV, Abbasi A, Schoenberg ED, Kellum R, Speake LD, Spiker C, et al. Numeracy among trainees: are we preparing physicians for evidence-based medicine? J Surg Educ. 2014;71(2):211–215. doi: 10.1016/j.jsurg.2013.07.013.
15. Silverman GK, Loewenstein GF, Anderson BL, Ubel PA, Zinberg S, Schulkin J. Failure to discount for conflict of interest when evaluating medical literature: a randomised trial of physicians. J Med Ethics. 2010;36(5):265–270. doi: 10.1136/jme.2009.034496.
16. Slawson DC, Shaughnessy AF. Teaching evidence-based medicine: should we be teaching information management instead? Acad Med. 2005;80(7):685–689. doi: 10.1097/00001888-200507000-00014.
17. BMJ Publishing Group Ltd. BMJ Clinical Evidence. http://clinicalevidence.bmj.com. Accessed October 23, 2015.
18. American College of Physicians. ACP Journal Club Archives. http://www.acpjc.org. Accessed October 23, 2015.
19. BMJ Group, McMaster University's Health Information Research Unit. McMaster PLUS. https://plus.mcmaster.ca/evidenceupdates. Accessed October 23, 2015.
20. The Cochrane Collaboration. Our evidence. http://www.cochrane.org/evidence. Accessed October 23, 2015.