Abstract
Objectives: To characterize the information needs of family physicians by collecting the questions they asked about patient care during consultations and to classify these in ways that would be useful to developers of knowledge bases.

Design: An observational study in which investigators visited physicians for two half-days and collected their questions. Taxonomies were developed to characterize the clinical topic and generic type of information sought for each question.

Setting: Eastern Iowa.

Participants: Random sample of 103 family physicians.

Main outcome measures: Number of questions posed, pursued, and answered; topic and generic type of information sought for each question; time spent pursuing answers; and information resources used.

Results: Participants asked a total of 1,101 questions. Questions about drug prescribing, obstetrics and gynecology, and adult infectious disease were most common, comprising 36% of the total. The taxonomy of generic questions included 69 categories; the three most common types, comprising 24% of all questions, were "What is the cause of symptom X?" "What is the dose of drug X?" and "How should I manage disease or finding X?" Answers to most questions (n = 702 [64%]) were not immediately pursued, but of those pursued, most (n = 318 [80%]) were answered. Physicians spent an average of less than 2 minutes pursuing an answer, and they used readily available print and human resources. Only two questions led to a formal literature search.

Conclusions: Family physicians in this study did not pursue answers to most of their questions. Questions about patient care can be organized into a limited number of generic types, which could help guide the efforts of knowledge-base developers.
INTRODUCTION
Physicians often have questions about the care of their patients. Should a 6-week-old girl exposed to chickenpox be given varicella-zoster immunoglobulin? What could cause urinary retention in an elderly woman? Is it safe to prescribe nicotine patches for a pregnant woman? Most questions occur at the point of care in busy clinics and hospitals.1,2,3 Answers may or may not be pursued, and if pursued, they may or may not be found.
When faced with questions about patient care, physicians are advised to seek the “best available evidence” to guide their decisions.4 However, this advice is often ignored in practice.1,5,6,7 Instead, practicing physicians seek “bottom-line” answers from highly digested, immediately available resources.1,5,6,8,9,10,11
Our objective was to characterize the information needs of family physicians by collecting their questions and classifying them in ways that would be useful to developers of knowledge bases.12 We collected questions about medical knowledge that could potentially be answered by general sources such as textbooks and journals, not questions about patient data that would be answered by the medical record.13 Previous studies have analyzed a relatively small number of questions, making it difficult to develop comprehensive descriptions and classification schemes.1,2,14,15 Questions that arise in practice could help guide the content of textbooks, review articles, continuing education courses, and medical school curricula. Questions without answers could help guide research.
PARTICIPANTS AND METHODS
Participants
All 386 family physicians working in the eastern third of Iowa (area code 319) were eligible for our study, and using a physician database maintained by the University of Iowa College of Medicine, Iowa City, we invited them in random order to participate. To achieve our goal of at least 100 participants and 1,000 questions, we invited 129 physicians. This goal was based on a subjective sense of adequacy and on the frequency of questions occurring in previous studies.1,2,14 We excluded retired physicians, house officers, and full-time emergency physicians.
Procedures
One week after receiving an introductory letter, each physician was invited by telephone to participate. Two half-day visits were scheduled, usually separated by a week. One of us (J W E) made the first visit, and a research nurse made the second. All visits occurred between April 1996 and December 1997. Before the first visit, the participants received a letter informing them about the study:
We are interested in everything from a clear-cut question (“What's the dose of metformin?”) to the vague, fleeting uncertainties that you and I would normally keep to ourselves (“I'm not totally sure what this rash is, even though I'm going to call it a contact dermatitis for now”). Normally, we spend the day trying to convince our patients and our nurses that we know what we're doing. I'm asking you to reveal your ignorance (to me—not to your patients or nurse), which is not a natural thing to do.
We included all clinical questions related to the care of specific patients. We excluded requests for facts that could be obtained from the medical record (“What was her blood potassium concentration?”) or from the patient (“How long have you been coughing?”).
The visiting researcher stood in a clinic hallway or a physician's office and recorded questions between patient visits by writing them on a standard form. When a physician pursued an answer, we recorded the resources used and the time spent with each resource. When an answer was not pursued, we asked why. Most questions referred to patients seen during the observation period, but we also recorded questions recalled about patients seen earlier.
Categorizing questions
The questions were categorized as either topics or generic questions (details are given on the BMJ web site: www.bmj.com). The topic taxonomy, which included 63 categories based on specialties, was modified from a system used to file journal articles.16 We added categories to accommodate questions that did not fall into a medical specialty (such as anatomy, legal issues, or medical ethics). We developed 43 arbitrary rules to improve consistency among coders, such as “Osteoporosis is endocrinology, but hormone replacement therapy is obstetrics and gynecology.” Because almost all questions encompassed more than one topic, we assigned both a primary and a secondary topic. For prescribing questions—“What is the dose of amoxicillin for a 1 year old?”—we assigned “prescribing information” as the primary topic and the relevant specialty (pediatric infectious disease) as the secondary topic.
In developing the taxonomy of generic questions, we used a procedure similar to one described in a study of MEDLINE searches.17 Questions with essentially identical structures (“What is the dose of atorvastatin?” or “What is the dose of metformin?”) were placed into a single generic type (“What is the dose of drug X?”). First, six of us (J W E, J A O, M H E, G R B, B T L, and M L C) used a random sample of 100 questions to independently develop preliminary generic questions. One of us (J W E) combined these six schemes into a consensus taxonomy and modified it further as he coded all the questions. This taxonomy was then distributed to all of us to make further changes. After the final revision was approved by all of us, five of us (J W E, J A O, G R B, B T L, M L C) used it to code a different random sample of 100 questions. The purpose of this step was to measure the interrater reliability of the final taxonomy of generic questions, which contained 69 categories. Reported frequencies for all taxonomies are based on assignments by one of us (J W E).
Statistical analysis
Most analyses were descriptive. The κ statistic was used to determine the interrater reliability of the question taxonomies. We used liberal reliability criteria for the topic taxonomy: a match was recorded if either the primary or secondary topic assigned by one coder matched either the primary or secondary topic assigned by the other. We used the Kruskal-Wallis one-way analysis of variance and linear regression for continuous outcomes, such as the frequency of questions and the time spent answering them, and the χ2 statistic and logistic regression for dichotomous outcomes, such as whether an answer was found. A two-tailed significance level of 0.05 was chosen, and all analyses were performed with a commercial software program (Stata; Stata Corporation, College Station, TX).
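To make the reliability calculation concrete, the sketch below shows how the κ statistic and the liberal topic-matching rule could be computed. It is a minimal illustration in Python (the study itself used Stata), and the coder assignments in the example are hypothetical.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def topics_match(coder_a, coder_b):
    """Liberal matching rule for the topic taxonomy: a match is recorded if
    either topic assigned by one coder matches either topic assigned by the
    other. Each argument is a (primary, secondary) pair."""
    return bool(set(coder_a) & set(coder_b))

# Hypothetical generic-question assignments by two coders.
coder1 = ["dose of drug X", "cause of symptom X", "manage disease X"]
coder2 = ["dose of drug X", "cause of symptom X", "treat disease X"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")
```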
RESULTS
Demographic data
Of the 129 physicians invited, 103 (80%) agreed to participate. The average age of participants was 48 years (range, 31-87 years), and 23 were women. Twenty-one physicians were in solo practices, and 54 practiced in a rural area (town population, <30,000). Among the 83 physicians in group practices, the number of partners ranged from 1 to 10 (median, 3). Eighty physicians practiced in free-standing clinic buildings where family practice was the only specialty. Seven physicians were full-time faculty members, two at the University of Iowa College of Medicine and five in community residency programs. Typically, each physician had a private office and saw patients by rotating among two or three adjacent examination rooms. Each physician generally worked with a nurse, who measured and recorded vital signs, answered patient telephone calls, cleaned examination rooms, and assisted with procedures. The primary funding source for 90 of the physicians was fee for service. Patients were typically scheduled every 10 to 15 minutes.
The 103 participants saw 2,467 patients and asked 1,101 questions during 732 observation hours. After the exclusion of 323 questions recalled about patients seen before the observation period, each physician asked an average of 7.6 questions during the two half-days (3.2 questions per 10 patients seen). The average age of all patients was 39 years (range, 0-98 years), and 1,474 (60%) were female. Patients prompting physicians' questions were older than those not prompting questions (mean age, 43 vs 37 years; P < 0.001), and they were more likely to be female (64% vs 58%; P < 0.05). These age and sex differences were independent of each other in a multiple logistic regression. Older physicians saw more patients and asked fewer questions than younger physicians. For each 10-year increase in age, physicians saw 1.9 more patients (P = 0.06) and had 1.7 fewer questions (P < 0.01) per 10 observation hours.
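The quoted rates follow directly from the raw counts (1,101 questions minus the 323 recalled questions leaves 778 questions asked during observation):

$$
\frac{1101 - 323}{103} \approx 7.6 \text{ questions per physician}, \qquad
\frac{778}{2467} \times 10 \approx 3.2 \text{ questions per 10 patients}.
$$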
Taxonomies
Topics
The most common question topics were drug prescribing (209 questions [19%]), obstetrics and gynecology (96 questions [9%]), and adult infectious disease (89 questions [8%]). The distribution of question topics tended to mirror the distribution of clinical problems seen, although disproportionately more questions were about prescribing and disproportionately fewer were about health maintenance visits (annual adult and well-child examinations). Except for questions about drug prescribing, answers were not pursued for questions on most topics. The 14 most common topics accounted for 899 (82%) of all questions. The κ statistic for the topic taxonomy was 0.91 (indicating "almost perfect" interrater agreement18).
Generic questions
The most frequently assigned generic questions were “What is the cause of symptom X?” “What is the dose of drug X?” and “How should I manage disease or finding X?” (Table 1). Only queries about drug dose were routinely pursued. The 25 most common generic questions accounted for 887 (81%) of all questions. The κ statistic for the generic question taxonomy was 0.66 (indicating “substantial” interrater agreement18). Although most questions were unique, we found several questions repeated with essentially identical wording: “What is this rash?” (n = 22), “Is this a viral or a bacterial infection?” (n = 13), “What is causing the patient's abdominal pain?” (n = 11), “What is causing the patient's chest pain?” (n = 10), “What is causing the patient's fatigue?” (n = 8), “What is causing the patient's dysuria (urinalysis normal)?” (n = 5), “What is causing the patient's hives?” (n = 4), and “Should the patient be given prophylaxis for subacute bacterial endocarditis?” (n = 4).
Table 1.
Ten most common generic questions asked by 103 family physicians*
Generic question | Questions asked† | Questions pursued‡ | Questions answered§ |
---|---|---|---|
What is the cause of symptom X? | 94 (9) | 8 (9) | 4 (50) |
What is the dose of drug X? | 88 (8) | 75 (85) | 73 (97) |
How should I manage disease or finding X?∥ | 78 (7) | 23 (29) | 19 (83) |
How should I treat finding or disease X? | 75 (7) | 25 (33) | 18 (72) |
What is the cause of physical finding X? | 72 (7) | 13 (18) | 6 (46) |
What is the cause of test finding X? | 45 (4) | 18 (40) | 13 (72) |
Could this patient have disease or condition X? | 42 (4) | 6 (14) | 4 (67) |
Is test X indicated in situation Y? | 41 (4) | 12 (29) | 10 (83) |
What is the drug of choice for condition X? | 36 (3) | 17 (47) | 13 (76) |
Is drug X indicated in situation Y? | 36 (3) | 9 (25) | 7 (78) |
*Values are given as number (percentage).
†The percentage is the proportion of total questions asked (N = 1,101).
‡The percentage is the proportion of questions asked.
§The percentage is the proportion of questions pursued.
∥Not specifying diagnostic management versus treatment.
Answers
During the observation period, answers to 702 questions (64%) were not pursued. Physicians said that they might pursue answers to 123 of these questions after the observation period. The commonest reason for not immediately pursuing an answer was that, after voicing some uncertainty, the physician thought that a reasonable decision could be based on his or her current knowledge (n = 148 [21%]). Physicians found answers to 318 (80%) of the 399 questions they pursued. As judged by the observer, most answers (n = 291 [92%]) directly answered the question posed, whereas 27 (8%) provided information related to the question without directly answering it.
Answers were obtained from 156 unique resources. The mean (SD) time spent pursuing an answer was 118 (169) seconds, and the median (interquartile range) time was 60 (115) seconds. Less time was spent pursuing answers to questions about prescribing than to questions about other topics (74 vs 153 seconds; P < 0.001). Prescribing texts and human sources were most likely to provide an answer (Table 2). Formal literature searches were initiated for only two questions.
Table 2.
Information sources used by family physicians to find answers to 399 questions
Information source | No. of times used (% of total) | Time spent seeking answers, mean (SD), seconds* | Time spent seeking answers, median (IQR), seconds* | No. (%) of searches that were successful† |
---|---|---|---|---|
Human (such as physician, pharmacist) | 161 (36) | 109 (104) | 68 (150) | 127 (79) |
Nonprescribing printed information (such as textbooks, journal articles) | 143 (32) | 100 (89) | 70 (75) | 75 (52) |
Prescribing text | 113 (25) | 70 (66) | 50 (60) | 96 (85) |
Printed information posted on walls | 17 (4) | 42 (34) | 35 (45) | 14 (82) |
Computer application (such as CD-ROM, Internet) | 10 (2) | 395 (552) | 180 (210) | 2 (20) |
Total | 444 (100) | 102 (137) | 60 (90) | 314 (71) |
IQR = interquartile range.
*Kruskal-Wallis test, with Bonferroni-corrected P values, showed that the average time spent with nonprescribing print sources and with computers exceeded the time spent with prescribing texts (P < 0.01) or with posted information (P < 0.01). No other paired comparisons showed significant differences.
†A χ2 test using a 5 × 2 contingency table (with 4 df) was significant (P < 0.001).
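As an illustration, the significance test reported in the footnote can be reproduced from the counts in Table 2; the sketch below assumes Python with SciPy rather than the Stata package used in the study.

```python
from scipy.stats import chi2_contingency

# Rows are the five information sources in Table 2;
# columns are [successful, unsuccessful] searches.
counts = [
    [127, 161 - 127],  # human sources
    [75, 143 - 75],    # nonprescribing printed information
    [96, 113 - 96],    # prescribing texts
    [14, 17 - 14],     # printed information posted on walls
    [2, 10 - 2],       # computer applications
]
chi2, p, dof, _expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, df = {dof}, P = {p:.2g}")  # df = 4, P < 0.001
```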
DISCUSSION
Physicians in this study did not pursue answers to most of their questions, except those about drug prescribing. This result is consistent with a study of Oregon physicians in which an answer was pursued when the problem was perceived as urgent and when a definitive answer was thought to exist.3 In that study, and in ours, physicians pursued only a few of their questions but found answers to about 80% of those pursued.3,14
In previous studies, the frequency of questions has varied widely and seems to depend on the setting, the definition of a “question,” and the methods used to collect them.8 We recorded 3.2 questions for every 10 patients seen. In other studies, this number has ranged from 0.7 questions per 10 patients in a private office setting to 57.7 questions per 10 patients on an inpatient teaching service.2,8,19
Our participants spent an average of less than 2 minutes pursuing an answer. In a study of questions asked by Missouri family physicians, MEDLINE searches by medical librarians took an average of 27 minutes per question.15 Sackett and Straus found that printed summaries of evidence could be provided at the point of care within 30 seconds but that computer applications were too slow and too bulky to be feasible in their hospital setting.20 Although computers fared poorly in this and other studies,6,9 improvements in their speed, portability, and user-friendliness are making them more useful to physicians.21
Study limitations
Our taxonomies require validation in other settings because we studied a homogeneous group of physicians in a small geographic area. The presence of a researcher may have influenced the questioning behavior of the participants: some physicians may have been reluctant to reveal gaps in their knowledge, whereas others may have generated questions to please the observer. We tried to minimize these effects by assuring participants that there was no right number of questions and by developing the trust needed to reveal knowledge gaps. We did not ask participants to rate the importance or urgency of the questions.
CONCLUSIONS
Busy family physicians need bottom-line answers to their questions, and they need them quickly.10,14,22 Evidence can be provided at the point of care, but it is most useful when it has been digested into quickly accessible summaries.8,20,23 These summaries tend to reflect the perspective of research, emphasizing the performance characteristics of tests and results of clinical trials. We found that this perspective often did not mesh with the needs of family physicians. For example, when faced with a clinical problem, the physicians often asked what steps to take, without distinguishing between diagnostic and therapeutic steps (“How should I manage disease or finding X?”). We agree with those who say that physicians should frame their questions better,4 but we also think that authors should frame their answers better. By learning what questions occur in practice, authors could provide more useful information, which could ultimately lead to better patient care.
Acknowledgments
We thank Sharon Kaschmitter, who helped collect the data; Jeff Dawson, who helped analyze the data; Dedra Diehl, who helped verify the references; and the 103 physicians who generously gave their time as participants.
Funding: This study was supported by grant G9518 from the American Academy of Family Physicians Foundation.
Competing interests: None declared
This article was originally published in BMJ 1999;319:358-361
References
1. Covell DG, Uman GC, Manning PR. Information needs in office practice: are they being met? Ann Intern Med 1985;103:596-599.
2. Ely JW, Burch RJ, Vinson DC. The information needs of family physicians: case-specific clinical questions. J Fam Pract 1992;35:265-269.
3. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue and which to leave unanswered. Med Decis Making 1995;15:113-119.
4. Sackett DL, Richardson WS, Rosenberg W, et al. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone; 1997.
5. McColl A, Smith H, White P, et al. General practitioners' perceptions of the route to evidence based medicine: a questionnaire survey. BMJ 1998;316:361-365.
6. Haug JD. Physicians' preferences for information sources: a meta-analytic study. Bull Med Libr Assoc 1997;85:223-232.
7. Feinstein AR, Horwitz RI. Problems in the "evidence" of "evidence-based medicine." Am J Med 1997;103:529-535.
8. Smith R. What clinical information do doctors need? BMJ 1996;313:1062-1068.
9. Verhoeven AA, Boerma EJ, Meyboom-de Jong B. Use of information sources by family physicians: a literature survey. Bull Med Libr Assoc 1995;83:85-90.
10. Huth EJ. "In the balance": weighing the evidence [editorial]. Ann Intern Med 1994;120:889.
11. Timpka T, Ekstrom M, Bjurulf P. Information needs and information seeking behaviour in primary health care. Scand J Prim Health Care 1989;7:105-109.
12. Cimino J. Generic queries for meeting clinical information needs. Bull Med Libr Assoc 1993;81:195-206.
13. Wyatt J. Use and sources of medical knowledge. Lancet 1991;338:1368-1373.
14. Gorman PN, Ash J, Wykoff L. Can primary care physicians' questions be answered using the medical journal literature? Bull Med Libr Assoc 1994;82:140-146.
15. Chambliss ML, Conley J. Answering clinical questions. J Fam Pract 1996;43:140-144.
16. Reynolds RD. A family practice article filing system. J Fam Pract 1995;41:583-590.
17. Wilson SR, Starr-Schneidkraut N, Cooper MD. Use of the critical incident technique to evaluate the impact of MEDLINE. Final report submitted to the National Library of Medicine. Palo Alto, CA: American Institute for Research; 1989. NTIS order PB90-142522.
18. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159-174.
19. Osheroff JA, Forsythe DE, Buchanan BG, et al. Physicians' information needs: analysis of questions posed during clinical teaching. Ann Intern Med 1991;114:576-581.
20. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the "evidence cart." JAMA 1998;280:1336-1338.
21. Ebell MH, Barry HC. InfoRetriever: rapid access to evidence-based information on a handheld computer. MD Comput 1998;15:289-297.
22. Greer AL. The two cultures of biomedicine: can there be consensus? [editorial] JAMA 1987;258:2739-2740.
23. Shaughnessy AF, Slawson DC, Becker L. Clinical jazz: harmonizing clinical experience and evidence-based medicine. J Fam Pract 1998;47:425-428.