The practice of medicine is an art, based on science.
—William Osler1
Our profession has always pursued better patient outcomes, and for more than a century, science has been the great lever in that pursuit. But new knowledge now arrives at a rate akin to a journalistic blastoma, far beyond the consumption and comprehension of the most conscientious reader. It is clear that evidence-based medicine works, but at least two rubs exist: evidence-based guidelines have also metastasized to the point that we can't see the forest for the decision trees; and despite guidelines, there is a gap of nearly two decades between the discovery of new knowledge and its translation into practice. As one wag put it, “The opposite of good is not evil, but good intentions.” How far are we falling short of optimal practice? How may we remedy this shortcoming?
In 1976, Clem McDonald found that internal medicine residents were complying with only 22 percent of 390 protocols for outpatient practice.2 These were not obscure subspecialty rules, but guidelines all should know and observe, such as performing Pap smears for women under 60 and decreasing potassium supplements or potassium-sparing diuretics when the last potassium level was higher than normal. In a crossover design, Clem introduced an innovation that improved clinician compliance with these guidelines to 55 percent. When the innovation was removed, compliance returned to its previous low level. Twenty-five years later, on the inpatient medicine services of the same county hospital where Clem had done his outpatient studies, only one percent of inpatients over age 64 received influenza immunizations,3 until an evolution of Clem's innovation increased immunization rates to 55 percent. What kind of innovation produces such dramatic effects? Computer reminders to clinicians using electronic medical records (EMRs). But why wasn't compliance even higher? Not every patient should receive every protocol procedure: some decline immunization or have already been immunized, and others are allergic to eggs or have a relative who experienced breathing trouble after flu immunization. Clem subtitled his original article “The Nonperfectability of Man,” and another study confirmed that even with computer reminders, clinicians remain nonperfectible.4 For influenza immunization on admission, computer reminders to physicians were less effective than computer-generated standing orders administered by nurses.
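To make the mechanism concrete, the sketch below shows, in Python, the kind of rule-based chart review such reminder systems perform. It is illustrative only and is not McDonald's or Dexter's actual logic; the record fields, thresholds, and messages are assumptions.

```python
# Hypothetical, simplified patient record; real EMR schemas differ.
patient = {
    "age": 72,
    "last_potassium_mmol_l": 5.6,          # most recent serum potassium
    "on_potassium_supplement": True,
    "flu_vaccine_this_season": False,
    "declined_flu_vaccine": False,
    "egg_allergy": False,
}

def protocol_reminders(p):
    """Return reminder messages triggered by simple chart-review rules."""
    reminders = []
    # Rule: reconsider the supplement if the last potassium was above normal.
    if p["on_potassium_supplement"] and p["last_potassium_mmol_l"] > 5.0:
        reminders.append("Last potassium high: consider stopping potassium supplement.")
    # Rule: offer influenza immunization to older inpatients unless declined or contraindicated.
    if (p["age"] >= 65 and not p["flu_vaccine_this_season"]
            and not p["declined_flu_vaccine"] and not p["egg_allergy"]):
        reminders.append("Eligible for influenza immunization: offer vaccine or generate standing order.")
    return reminders

print(protocol_reminders(patient))
```

The point of the exercise is that each rule is simple chart review a conscientious clinician could do by hand; the computer's advantage is that it never forgets to run the rules.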
Why care about practice protocol lapses? Because poor practice produces poor outcomes. Across two successive flu seasons, influenza immunization of community-dwelling individuals over 64 in three large managed-care organizations reduced hospitalizations for pneumonia and influenza by an average of 30 percent, hospitalizations for cardiac and cerebrovascular disease by 19 percent, and deaths from all causes by 49 percent.5
Depression suffers the same dangerous clinician underperformance, at least in Helsinki. Seventy percent of depressed patients received no antidepressant medication, 84 percent received no psychotherapy, and none received electroconvulsive therapy (ECT) before making a suicide attempt. Of the 30 percent who received antidepressants, only 41 percent had adequate dose and duration (12% of all patients). After their suicide attempts, 61 percent still received no antidepressant, 83 percent still received no psychotherapy, and still none had ECT. Of the 39 percent who now received antidepressants after a suicide attempt, still only 44 percent received an adequate dose and duration (17% of all patients).6
Accurate clinical measurements are needed to guide evidence-based decisions, but many assessments used in clinical research trials (e.g., the Hamilton Depression Rating Scale) are impractical in clinical practice. Except during florid psychosis, patients' self-reported assessments of depression severity are valid and reliable. For some disorders (e.g., pain), self-reports are the only available data; in others, they can be as accurate and reliable as clinician assessments.7,8 Patients willingly report symptoms, side effects, and functioning relevant to their illnesses. Computer interviews can facilitate these communications, both in the clinic and at times when clinicians are not available.9 Many computer interviews employ interactive voice response (IVR), which lets patients answer fully structured interviews, administered with perfect consistency, from any touch-tone telephone.10 IVR largely overcomes the functional illiteracy with written prose that affects 12 percent of the US adult population and 23 percent of those over age 64.11 Many older people have trouble using computers, but almost all use telephones well enough. STAR*D, the 4,040-patient, four-year NIMH study of sequenced treatments for resistant depression,12 used IVR assessments for 10 dimensions of patient status.
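As a minimal sketch of what such a structured self-report looks like once the answers are captured, the Python below scores a PHQ-9-style interview (ref. 8): nine items, each answered 0 to 3 on a telephone keypad. The severity bands follow the published PHQ-9 scoring; the dialogue flow and function names are assumptions, not any particular IVR product.

```python
# PHQ-9 severity bands: total score cutoffs and their conventional labels.
SEVERITY_BANDS = [(0, "minimal"), (5, "mild"), (10, "moderate"),
                  (15, "moderately severe"), (20, "severe")]

def score_phq9(responses):
    """Sum nine keypad responses (0-3 each) and label overall severity."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("Expected nine responses, each 0-3.")
    total = sum(responses)
    label = next(lbl for cutoff, lbl in reversed(SEVERITY_BANDS) if total >= cutoff)
    return total, label

# Example: answers keyed in over the phone during one structured interview.
print(score_phq9([1, 2, 1, 0, 3, 1, 0, 2, 0]))   # -> (10, 'moderate')
```

Because every patient hears the same prompts and the arithmetic is fixed, the measurement itself cannot drift the way an unstructured clinical impression can.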
What can be done to improve the application of clinical knowledge, leveraging science and technology? EMRs are already used by many of us, and their use is expanding rapidly. While the transition from familiar, idiosyncratic paper records to standardized electronic records presents challenges, there are clear advantages to standardized records that incorporate practice guidelines and are amenable to regular updates as new knowledge emerges. Patient-reported assessments can also be provided to clinicians via EMRs, along with evidence-informed recommendations for best-practice decisions. Legitimate concerns include inappropriate guideline decisions that are unduly difficult to override, and the awareness that “The weakness in any guideline is where to draw the line between statements that are based on consensus derived from overwhelming evidence and those that are really statements of opinion, however widely the opinion may be shared.”13 As Paul Keck remarked, “One thing that the history of medicine teaches us is that expert opinion at any given time can be very wrong.”14
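One way to honor both the guideline and the clinician's judgment is to make the recommendation a suggestion that can be rejected with a documented reason. The sketch below is hypothetical: the PHQ-9 thresholds, field names, and recommendation wording are assumptions for illustration, not any published guideline or EMR product.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Recommendation:
    text: str
    accepted: bool = True
    override_reason: str = ""
    timestamp: datetime = field(default_factory=datetime.now)

def depression_followup(phq9_total: int) -> Recommendation:
    """Map a self-reported PHQ-9 total to a suggested next step (illustrative cutoffs)."""
    if phq9_total >= 20:
        return Recommendation("Severe symptoms: reassess treatment; consider urgent follow-up.")
    if phq9_total >= 10:
        return Recommendation("Moderate symptoms: evaluate dose and duration of current antidepressant.")
    return Recommendation("Continue current plan; repeat self-report at next contact.")

def override(rec: Recommendation, reason: str) -> Recommendation:
    """Clinician rejects the suggestion; the reason stays in the record."""
    rec.accepted = False
    rec.override_reason = reason
    return rec

rec = depression_followup(17)
rec = override(rec, "Dose increased last week; too early to judge response.")
print(rec)
```

Keeping the override one step away, with its reason recorded, preserves clinical judgment while leaving an audit trail of when and why guidance was set aside.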
Pogo informs us, “We have met the enemy and he is us.” Max Planck narrowed the realm of problems to science: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”15 In our pursuit of better patient outcomes, our profession should not delay another generation before adopting new tools that prove helpful. Science remains our largest lever, and restoring patient functioning while reducing suffering remain our unchanging goals. EMRs with integral patient-reported assessments and evidence-based guidelines are bringing helpful science forward through practitioners to patients, closing the gap between what is known and practiced.
References
1. Bean WB. Osler Aphorisms. Vol. 259. New York, NY: Henry Schuman, Inc.; 1950. p. 123.
2. McDonald CJ. Protocol-based computer reminders, the quality of care and the non-perfectability of man. N Engl J Med. 1976;295:1351–5. doi: 10.1056/NEJM197612092952405.
3. Dexter PR, Perkins S, Overhage JM, et al. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med. 2001;345:965–70. doi: 10.1056/NEJMsa010181.
4. Dexter PR, Perkins S, Maharry KS, et al. Inpatient computer-based standing orders vs physician reminders to increase influenza and pneumococcal vaccination rates: A randomized trial. JAMA. 2004;292:2366–71. doi: 10.1001/jama.292.19.2366.
5. Nichol KL, Margolis KL, Wuorenma J, et al. The efficacy and cost effectiveness of vaccination against influenza among elderly persons living in the community. N Engl J Med. 2003;348:1322–32. doi: 10.1056/NEJM199409223311206.
6. Suominen KH, Isometsa ET, Henriksson MM, et al. Inadequate treatment for major depression both before and after attempted suicide. Am J Psychiatry. 1998;155(12):1778–80. doi: 10.1176/ajp.155.12.1778.
7. Kobak KA, Taylor LH, Dottl SL, et al. A computer administered telephone interview to identify mental disorders. JAMA. 1997;278:905–10.
8. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: Validity of a brief depression severity measure. J Gen Intern Med. 2001;16:606–13. doi: 10.1046/j.1525-1497.2001.016009606.x.
9. Kobak KA, Reynolds WM, Rosenfeld R, et al. Development and validation of a computer-administered version of the Hamilton Depression Rating Scale. Psychol Assess. 1990;2:56–63.
10. Corkrey R, Parkinson L. Interactive voice response: Review of studies 1989–2000. Behav Res Methods Instrum Comput. 2002;34(3):342–53. doi: 10.3758/bf03195462.
11. Kutner M, Greenberg E, Baer J. A First Look at the Literacy of America's Adults in the 21st Century. Washington, DC: National Center for Education Statistics, Department of Education; December 2005. Available at: http://nces.ed.gov/naal.
12. Trivedi MH, Rush AJ, Wisniewski SR, et al. Evaluation of outcomes with citalopram for depression using measurement-based care in STAR*D: Implications for clinical practice. Am J Psychiatry. 2006;163(1):28–40. doi: 10.1176/appi.ajp.163.1.28.
13. Vieta E, Nolen WA, Grunze H, et al. A European perspective on the Canadian guidelines for bipolar disorder. Bipolar Disord. 2005;7(Suppl 3):73–6. doi: 10.1111/j.1399-5618.2005.00221.x.
14. Keck PE, Perlis RH, Otto MW, et al. The expert consensus guideline series: Treatment of bipolar disorder. Postgraduate Medicine Special Report. 2004 Dec:1–120.
15. Planck M. Scientific Autobiography and Other Papers. New York, NY: Philosophical Library; 1949. pp. 33–4.
