Author manuscript; available in PMC 2015 Jan 5.
Published in final edited form as: Congenit Heart Dis. 2010 Jul–Aug;5(4):339–342. doi:10.1111/j.1747-0803.2010.00433.x

Deciding without Data

Jeffrey R Darst 1,*, Jane W Newburger 1, Stephen Resch 2, Rahul H Rathod 1, James E Lock 1
PMCID: PMC4283550  NIHMSID: NIHMS651697  PMID: 20653700

Abstract

Introduction

Physician decisions drive most of the increases in health care expenditures, yet virtually no published literature has sought to understand the types of evidence used by physicians as they make decisions in real time.

Methods

Ten pediatric cardiologists recorded every clinically significant decision made during procedures, test interpretation, or delivery of inpatient and outpatient care during 5 full days and 5 half days of care delivery. The basis for each decision was assigned to one of 10 predetermined categories, ranging from arbitrary and anecdotal, to various qualities of published studies, to parental preference and avoiding a lawsuit.

Results

During the 7.5 days, 1188 decisions (158/day) were made. Almost 80% of decisions were deemed by the physicians to have no basis in any prior published data and fewer than 3% of decisions were based on a study specific to the question at hand.

Conclusions

In this pilot study, physicians were unable to cite a formal evidence source for most of their real-time clinical decision making, including those that consumed medical resources. Novel approaches to building an evidence base produced from real-time clinical decisions may be essential for health care reform based on data.

Introduction

Every day, physicians make real-time decisions to order tests, medications, procedures, hospital admissions, and clinic visits. These decisions fuel the large majority of US health care expenditures, contributing to utilization of unnecessary tests and ineffective treatments. They can also be integral to learning and medical innovation, especially in the care of unusual diseases with limited treatment options. Yet virtually no published literature has sought to understand the types of evidence used by physicians in their real-time clinical decision making. In this pilot study, we explored two questions: how many clinically significant decisions does a single busy subspecialist make in a given day, and what forms the basis for those decisions? Our study subjects were pediatric cardiologists: subspecialists caring for patients with uncommon diseases using expensive diagnostic testing and therapeutic procedures1 in a field that has witnessed extraordinary advances in effectiveness.2-4

Methods

Data Collection

Ten faculty pediatric cardiologists across several clinical settings either self-recorded or reported to a trainee functioning as scribe (JD) every clinically important decision made during a half- or full-day session. Decisions were defined as clinically important when they (1) had direct clinical relevance to the patient and (2) could be the subject of a research study. Standard actions that were a necessary part of care, such as measuring a blood pressure, were not considered decisions. However, we included decisions to select one of multiple options, e.g., choice of sheath size, and divergence from formulaic care, e.g., measuring 4-extremity blood pressures for suspected aortic coarctation. Decisions “not” to order a test or a medication, if consciously considered, were also included. The basis for each decision was classified by the self-reporting physician into one of 10 categories, predefined by the study authors (Table 1). When published studies were cited as the basis for a decision, the self-reporting physician categorized the study into definitions 6, 7, or 8 according to its content and quality. The physicians could cite a published study only if it was specifically recalled. Decisions based on guidelines were classified according to the evidence upon which the guideline recommendations were based. The basis for a single decision could involve more than one category; in these cases, the weight of the decision was divided equally among the cited categories. For example, a decision attributed equally to two categories contributed a weight of 0.5 to each (see the sketch following Table 1). Physicians were instructed on the category definitions with use of a written algorithm and one-on-one instruction (JD). Responses were evaluated by a single reviewer (JD) to promote consistency. All reported decisions met the study definition of “clinically important,” i.e., none were excluded post hoc. Inconsistencies noted by the reviewer, such as two decisions that seemed similar but were classified differently by the same physician, were sent back to the physician for adjudication; the reviewer could not recode the basis of a decision independently. Fewer than 3% of all decisions were sent for adjudication, and 1.5% of the total decisions were reclassified.

Table 1.

Decision Definitions

1. Arbitrary/instinct: Multiple options are present, but one is chosen without a clear-cut reason in mind; decision not attributable to the 9 categories below.
2. Avoid a lawsuit: Done without definable value to the patient; for documentation only.
3. Experience/anecdote: Based on a memory of one or more cases; if specific cases cannot be recalled, the decision may be arbitrary.
4. Trained to do it: Taught by a more senior or experienced colleague.
5. First principles: Things we know to be true, physiology-based.
6. Limited study: Case reports, small series.
7. General studies: Can be related to the question at hand.
8. Specific studies: Expressly addresses the question at hand.
9. For research: Anything done primarily out of curiosity or to learn something about the patient or the disease.
10. Parental preference: An otherwise arbitrary decision that is swayed by parent input.
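
To make the weighting rule concrete, the following minimal sketch (in Python, over a hypothetical decision log; the study itself used no such script, and only the category names are drawn from Table 1) tallies fractional weights when a decision is attributed to more than one category.

```python
from collections import Counter

# Hypothetical decision log: each entry lists the category (or categories)
# the self-reporting physician cited as the basis for one decision.
decisions = [
    ["Experience/anecdote"],
    ["Arbitrary/instinct", "Parental preference"],  # split equally: 0.5 each
    ["Trained to do it"],
]

weights = Counter()
for bases in decisions:
    share = 1.0 / len(bases)  # one decision divided equally among its cited bases
    for category in bases:
        weights[category] += share

total = sum(weights.values())  # equals the number of decisions (here, 3)
for category, weight in weights.most_common():
    print(f"{category}: {weight:.1f} ({100 * weight / total:.1f}%)")
```

Summing the weights recovers the decision count, which is how fractional attributions can still total 1188 in Table 2 after rounding to whole integers.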

Data Analysis

Summary statistics described every decision from all 10 cardiologists. We described clinically significant decisions made during three types of activities: (1) intraprocedural “technical” decisions made during cardiac catheterizations or invasive electrophysiology studies, e.g., the choice of a particular catheter or balloon size; (2) nonprocedural direct care decisions, including ordering a test, arranging for follow-up, or starting or changing a medication; and (3) diagnostic test interpretation, usually involving image interpretation rather than assessment of a numerical value. We also described real-time decisions as a function of physician academic rank and years of experience. We did not perform tests of statistical significance.
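
As a schematic of this stratified tabulation (again a hedged sketch in Python; the activity labels and records are hypothetical illustrations, not study data), decisions can be grouped by activity type and the weighted bases expressed as percentages within each group:

```python
from collections import defaultdict, Counter

# Hypothetical records: (activity, {category: weight}) per decision,
# mirroring the three activity types described above.
records = [
    ("procedural", {"Experience/anecdote": 1.0}),
    ("test interpretation", {"General study": 0.5, "Specific study": 0.5}),
    ("direct care", {"First principles": 1.0}),
    ("procedural", {"Trained to do it": 1.0}),
]

by_activity = defaultdict(Counter)
for activity, bases in records:
    by_activity[activity].update(bases)  # accumulate fractional weights per group

for activity, counts in sorted(by_activity.items()):
    total = sum(counts.values())
    print(activity)
    for category, weight in counts.most_common():
        print(f"  {category}: {100 * weight / total:.1f}%")
```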

Results

Ten physicians contributed a total of 1188 decisions, recorded over 7.5 days (a full day of clinical work for five clinicians, and a half day of clinical work for five clinicians), for a mean of 158 decisions per day. The physicians included 1 instructor, 3 assistant professors, 2 associate professors, and 4 full professors. The median duration between graduation from fellowship and completion of this exercise was 22.5 years (range 4–30 years). Altogether, the 10 physicians had accrued 185 years of faculty experience and 1117 peer-reviewed publications. Of all decisions recorded (Table 2), more than one-third were attributed to experience or personal anecdote, the most common basis for a decision. The next four most common categories (arbitrary/instinct, trained to do it, first principles, and general study) ranged from 12.3% to 14.7%. The final five categories (limited study, specific study, parental preference, research, and lawsuit avoidance) together accounted for only 9% of the total decisions made. Few decisions were made to avoid a lawsuit, for parental preference, or for research purposes (0.2%, 0.5%, and 0.3%, respectively).

Table 2.

Basis of Decisions

n = 1188

Basis                  Number of decisions*   % of total
Experience/anecdote           441                37.1%
Arbitrary/instinct            175                14.7%
Trained to do it              173                14.6%
General study                 146                12.3%
First principles              146                12.3%
Limited study                  61                 5.1%
Specific study                 34                 2.9%
Parental preference             6                 0.5%
For research                    4                 0.3%
Avoid a lawsuit                 2                 0.2%

* Rounded to the nearest whole integer.

Physicians interpreting diagnostic tests or performing procedures made more decisions than those evaluating patients: more than 200 decisions were made in one half day of echocardiogram review, and 124 during a single 4-hour catheterization procedure, whereas fewer than 150 decisions were made during a full-day clinic. Physicians performing catheterizations or electrophysiological procedures based 46% of their decisions on experience/anecdote, 20% on arbitrary/instinct, and 17% on their previous training. Notably, published research studies of any kind (limited, general, or specific) formed the basis for only 5.5% of decisions during invasive procedures. Physicians providing nonprocedural, direct patient care had a similar distribution of sources for their decisions, except that published studies played a larger role (19%). The basis for decisions in interpretation of diagnostic tests deviated substantially from that of the previous two subgroups of physicians. Although experience/anecdote remained the most commonly cited single reason for a decision, published research studies of various types combined were cited as the basis for the majority (52.5%) of cases. Even in this subgroup, a study specific to the decision at hand was cited in only 2.7% of cases.

Finally, we explored the relationship of academic rank and experience to decision making. Senior physicians were more likely to attribute a decision to specific experience or anecdote (41% vs. 26%) and less likely to attribute a decision to “trained to do it” (9% vs. 29%). In this small series, faculty seniority did not appear to influence the frequency of the other categories on which decisions could be based.

Discussion

Despite the importance of physicians in determining medical care expenditures, we are unaware of previous literature that approaches real-time decision making about resource utilization from their vantage point. In this pilot study, pediatric cardiologists classified the foundation for decisions that they made in the course of their daily practices. We found that, in the real-world practice of academic pediatric cardiology, only a small fraction of the 1188 decisions were attributed to high-quality relevant published studies.5 Indeed, only one in five decisions was believed by the deciding physician to have some basis in a research publication, and only 3% were based on a study specific to the question at hand. The lack of a specific research basis for most clinical decisions in this field must be viewed in the context of the progress that has nonetheless been made.

Despite the paucity of single- or multicenter prospective studies in pediatric cardiovascular disease, mortality for critical congenital heart disease fell by almost 40% between 1979 and 1997,2 and adults with congenital heart disease now outnumber children.6 This dramatic progress supports the hypothesis that much, perhaps most, of medical innovation and learning occurs in the absence of a formal evidence base built on randomized clinical trials or well-designed prospective cohort studies.

Early estimates of the proportion of medical practice based upon scientific evidence were as low as 10%7 to 21%.8 In a landmark study, Ellis and colleagues scrutinized the primary diagnosis and treatments administered to 109 inpatients on a general medical team and retrospectively reviewed the evidence base for primary interventions.5 They found that 53% of interventions could be defended by data from randomized clinical trials and 29% by convincing nonexperimental evidence, defined as having face validity so great that randomized trials were felt to be unnecessary and/or unethical; the remainder were based upon no substantial evidence. In other medical fields, estimates of the percentage of interventions based on randomized controlled trials have ranged from 11% for pediatric surgical interventions9 to 31% each for pediatric inpatients10 and general medical consultations,11 to 65% for psychiatric inpatients.12 All of these studies were performed retrospectively, actively sought corroborative rather than conflicting data, and included only decisions surrounding primary interventions. In contrast, we studied real-time decision making prospectively, included all decisions, among them those that consumed resources, and focused on the decision-making physicians' perceptions of the evidence base for their choices.

This pilot study should be viewed in light of its clear limitations. The small number of physicians who participated practiced at a single center and in a single specialty, limiting generalizability. Because physicians were required to recall a specific study to classify a decision as being based upon research, this study may have underestimated the role that previous research played in medical decisions. We did not search the literature to ascertain how often an evidence base for decisions existed when the self-reporting physicians could not recall it. Conversely, we did not assess whether a recalled research study actually pertained to or supported the decision at hand. Furthermore, the classification of a study (i.e., limited, general, or specific) that prompted a decision was determined solely by the self-reporting physician. Participation in this study may, itself, have altered decision-making patterns. The use of a scribe to record procedural decisions, but not nonprocedural care or test interpretation, may have introduced bias. The distinctions between “arbitrary/instinct,” “experience/anecdote,” and “trained to do it” are imprecise. Decisions “not” to do something were almost certainly underrecognized by self-reporting physicians. Our study did not explore mechanisms for influencing physician decision-making, such as pay-for-performance. Finally, the study design did not permit us to assess whether decisions were, in fact, optimal.13

Conclusion

We found that despite considerable progress in the field of pediatric cardiology, physicians were unable to cite a formal evidence source for most of their clinical decision making. Our pilot study highlights several important unanswered questions: Which types of problems in clinical medicine require a “study,” and which can be left to “emerge” from experience? As policy makers focus on comparative effectiveness as a way to generate more value from health care spending, what “counts” as evidence is critically important. Whether ideal or not, many, if not most, decisions in clinical medicine do not depend on the results of formally gathered data. Novel approaches to building an evidence base that harnesses the rich information produced in real-time clinical experience may turn out to be an essential complement to clinical trials and classic cohort studies.

Acknowledgement

This study was supported in part by the National Institutes of Health under award number T32HL007572. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Conflict of Interest

None of the authors or affiliated institutions have any conflicts of interest or financial involvement with the subject matter discussed in this manuscript.

References

1. Connor JA, Jenkins K. Factors associated with increased resource utilization for congenital heart disease. Pediatrics. 2005;116:689–695. doi:10.1542/peds.2004-2071.
2. Boneva RS, Botto LD, Moore CA, Yang Q, Correa A, Erickson JD. Mortality associated with congenital heart defects in the United States: trends and racial disparities, 1979–1997. Circulation. 2001;103:2376–2381. doi:10.1161/01.cir.103.19.2376.
3. Karamlou T, Diggs BS, Person T, Ungerleider RM, Welke KF. National practice patterns for management of adult congenital heart disease: operation by pediatric heart surgeons decreases in-hospital death. Circulation. 2008;118:2345–2352. doi:10.1161/CIRCULATIONAHA.108.776963.
4. Marelli AJ, Mackie AS, Ionescu-Ittu R, Rahme E, Pilote L. Congenital heart disease in the general population: changing prevalence and age distribution. Circulation. 2007;115:163–172. doi:10.1161/CIRCULATIONAHA.106.627224.
5. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is evidence based. A-Team, Nuffield Department of Clinical Medicine. Lancet. 1995;346:407–410.
6. Williams RG, Pearson GD, Barst RJ, et al. Report of the National Heart, Lung, and Blood Institute Working Group on research in adult congenital heart disease. J Am Coll Cardiol. 2006;47:701–707. doi:10.1016/j.jacc.2005.08.074.
7. Williamson JW, Goldschmidt PG, Jillson IA. Medical Practice Information Demonstration Project: Final Report. Baltimore, MD: Policy Research; 1979.
8. Dubinsky M, Ferguson JH. Analysis of the National Institutes of Health Medicare coverage assessment. Int J Technol Assess Health Care. 1990;6:480–488. doi:10.1017/s0266462300001069.
9. Kenny SE, Shankar KR, Rintala R, Lamont GL, Lloyd DA. Evidence-based surgery: interventions in a regional paediatric surgical unit. Arch Dis Child. 1997;76:50–53. doi:10.1136/adc.76.1.50.
10. Moyer VA, Gist AK, Elliott EJ. Is the practice of paediatric inpatient medicine evidence-based? J Paediatr Child Health. 2002;38:347–351. doi:10.1046/j.1440-1754.2002.00006.x.
11. Gill P, Dowell AC, Neal RD, Smith N, Heywood P, Wilson AE. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ. 1996;312:819–821. doi:10.1136/bmj.312.7034.819.
12. Geddes JR, Game D, Jenkins NE, Peterson LA, Pottinger GR, Sackett DL. What proportion of primary psychiatric interventions are based on evidence from randomised controlled trials? Qual Health Care. 1996;5:215–217. doi:10.1136/qshc.5.4.215.
13. Lucas BP, Evans AT, Reilly BM, et al. The impact of evidence on physicians' inpatient treatment decisions. J Gen Intern Med. 2004;19(5 Pt 1):402–409. doi:10.1111/j.1525-1497.2004.30306.x.
