BMJ. 1999 Aug 7;319(7206):358–361. doi:10.1136/bmj.319.7206.358

Analysis of questions asked by family doctors regarding patient care

John W Ely a, Jerome A Osheroff b, Mark H Ebell c, George R Bergus a, Barcey T Levy a, M Lee Chambliss d, Eric R Evans e
PMCID: PMC28191  PMID: 10435959

Abstract

Objectives

To characterise the information needs of family doctors by collecting the questions they asked about patient care during consultations and to classify these in ways that would be useful to developers of knowledge bases.

Design

Observational study in which investigators visited doctors for two half days and collected their questions. Taxonomies were developed to characterise the clinical topic and generic type of information sought for each question.

Setting

Eastern Iowa.

Participants

Random sample of 103 family doctors.

Main outcome measures

Number of questions posed, pursued, and answered; topic and generic type of information sought for each question; time spent pursuing answers; information resources used.

Results

Participants asked a total of 1101 questions. Questions about drug prescribing, obstetrics and gynaecology, and adult infectious disease were most common and comprised 36% of all questions. The taxonomy of generic questions included 69 categories; the three most common types, comprising 24% of all questions, were “What is the cause of symptom X?” “What is the dose of drug X?” and “How should I manage disease or finding X?” Answers to most questions (702, 64%) were not immediately pursued, but, of those pursued, most (318, 80%) were answered. Doctors spent an average of less than 2 minutes pursuing an answer, and they used readily available print and human resources. Only two questions led to a formal literature search.

Conclusions

Family doctors in this study did not pursue answers to most of their questions. Questions about patient care can be organised into a limited number of generic types, which could help guide the efforts of knowledge base developers.

Key messages

  • Questions that doctors have about the care of their patients could help guide the content of medical information sources and medical training

  • In this study of US family doctors, participants frequently had questions about patient care but did not pursue answers to most questions (64%)

  • On average, participants spent less than 2 minutes seeking an answer to a question

  • The most common resources used to answer questions included textbooks and colleagues; formal literature searches were rarely performed

  • The most common generic questions were “What is the cause of symptom X?” “What is the dose of drug X?” and “How should I manage disease or finding X?”

Introduction

Doctors often have questions about the care of their patients. “Should a 6 week old girl exposed to chicken pox be given varicella-zoster immunoglobulin?” “What could cause urinary retention in an elderly woman?” “Is it safe to use nicotine patches during pregnancy?” Most questions occur at the point of care in busy clinics and hospitals.1-3 Answers may or may not be pursued, and, if pursued, they may or may not be found.

When faced with questions about patient care, doctors are advised to seek the “best available evidence” to guide their decisions.4 However, this advice is often ignored in practice.1,5-7 Instead, practising doctors seek “bottom line” answers from highly digested, immediately available resources.1,5,6,8-11

Our objective was to characterise the information needs of family doctors by collecting their questions and classifying them in ways that would be useful to developers of knowledge bases.12 We collected questions about medical knowledge that could potentially be answered by general sources such as textbooks and journals, not questions about patient data that would be answered by the medical record.13 Previous studies have analysed relatively small numbers of questions, making it difficult to develop comprehensive descriptions and classification schemes.1,2,14,15 Questions that arise in practice could help guide the content of textbooks, review articles, continuing education courses, and medical school curricula. Questions without answers could help guide research.

Participants and methods

Participants

All 386 family doctors working in the eastern third of Iowa (area code 319) were eligible for our study, and, using a doctor database maintained by the University of Iowa, we invited them in random order to participate. To achieve our goal of at least 100 participants and 1000 questions, we invited 129 doctors. This goal was based on a subjective sense of adequacy and on the frequency of questions occurring in previous studies.1,2,14 We excluded retired doctors, house officers, and full time emergency doctors.

Procedures

One week after receiving an introductory letter, each doctor was invited by telephone to participate. Two half day visits were scheduled, usually separated by a week. JWE made the first visit, and a research nurse made the second. All visits occurred between April 1996 and December 1997. Before the first visit, the participants received a letter informing them about the study:

“We are interested in everything from a clear cut question (‘What’s the dose of metformin?’) to the vague, fleeting uncertainties that you and I would normally keep to ourselves (‘I’m not totally sure what this rash is, even though I’m going to call it a contact dermatitis for now’). Normally, we spend the day trying to convince our patients and our nurses that we know what we’re doing. I’m asking you to reveal your ignorance (to me—not to your patients or nurse), which is not a natural thing to do.”

We included all clinical questions related to the care of specific patients. We excluded requests for facts that could be obtained from the medical record (“What was her blood potassium concentration?”) or from the patient (“How long have you been coughing?”).

The visiting researcher stood in a clinic hallway or a doctor’s office and recorded questions between patient visits by writing them on a standard form. When a doctor pursued an answer we recorded the resources used and the time spent with each resource. When an answer was not pursued we asked why. Most questions referred to patients seen during the observation period, but we also recorded questions recalled about patients seen earlier.

Taxonomies

The questions were categorised using two taxonomies: topics and generic questions (details given on the BMJ website). The topic taxonomy, which included 63 categories based on specialties, was modified from a system used to file journal articles.16 We added categories to accommodate questions that did not fall into a medical specialty (such as anatomy, legal issues, medical ethics). We developed 43 arbitrary rules to improve consistency among coders—such as “Osteoporosis is endocrinology, but hormone replacement therapy is obstetrics and gynaecology.” Because almost all questions encompassed more than one topic, we assigned both a primary and a secondary topic. For prescribing questions (“What is the dose of amoxicillin for a 1 year old?”), we assigned “prescribing information” as the primary topic and the relevant specialty (“pediatric infectious disease”) as the secondary topic.

In developing the taxonomy of generic questions, we used a procedure similar to one described in a study of Medline searches.17 Questions with essentially identical structures (“What is the dose of atorvastatin?” “What is the dose of metformin?”) were placed into a single generic type (“What is the dose of drug X?”). First, six of the authors used a random sample of 100 questions to independently develop preliminary generic questions. JWE combined these six schemes into a consensus taxonomy and modified it further as he coded all the questions. This taxonomy was then distributed to all authors to make further changes. After the final revision was approved by all the authors, five of them used it to code a different random sample of 100 questions. The purpose of this step was to measure the interrater reliability of the final taxonomy of generic questions, which contained 69 categories. Reported frequencies for all taxonomies are based on JWE’s assignments.
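Each coded question thus carries three assignments: a generic type, a primary topic, and a secondary topic. The following minimal sketch (in Python) illustrates the structure of one coded record, using the amoxicillin example above; it is offered only as an illustration of the coding scheme, not as software used in the study.

    from dataclasses import dataclass

    @dataclass
    class CodedQuestion:
        text: str             # the question as asked by the doctor
        generic_type: str     # one of the 69 generic question categories
        primary_topic: str    # one of the 63 specialty based topic categories
        secondary_topic: str  # most questions encompassed a second topic

    # The prescribing rule described above: prescribing questions take
    # "prescribing information" as the primary topic and the relevant
    # specialty as the secondary topic.
    example = CodedQuestion(
        text="What is the dose of amoxicillin for a 1 year old?",
        generic_type="What is the dose of drug X?",
        primary_topic="prescribing information",
        secondary_topic="pediatric infectious disease",
    )
    print(example.generic_type)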

Statistical analysis

Most analyses were descriptive. The κ statistic was used to determine the interrater reliability of the question taxonomies. We used liberal reliability criteria for the topic taxonomy: a match was recorded if either the primary or secondary topic assigned by one coder matched either the primary or secondary topic assigned by the other. We used the Kruskal-Wallis one way analysis of variance and linear regression for continuous outcomes, such as the frequency of questions and the time spent answering them, and used the χ2 statistic and logistic regression for dichotomous outcomes, such as whether an answer was found. A two tailed significance level of 0.05 was chosen, and all analyses were performed with Stata (Stata Corporation, College Station, TX, USA).
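To make the reliability analysis concrete, the sketch below (in Python, with invented rater data) implements the liberal topic-matching criterion and an unweighted Cohen’s κ from first principles. It mirrors the analysis in spirit only; the study’s calculations were performed in Stata.

    from collections import Counter

    def topics_match(coder1, coder2):
        """Liberal criterion: a match is recorded if either topic assigned
        by one coder equals either topic assigned by the other.
        Each argument is a (primary_topic, secondary_topic) pair."""
        return bool(set(coder1) & set(coder2))

    def cohens_kappa(labels1, labels2):
        """Unweighted Cohen's kappa for two raters over the same items."""
        n = len(labels1)
        observed = sum(a == b for a, b in zip(labels1, labels2)) / n
        c1, c2 = Counter(labels1), Counter(labels2)
        expected = sum(c1[k] * c2[k] for k in c1) / (n * n)  # chance agreement
        return (observed - expected) / (1 - expected)

    # Invented example: two raters assign generic types to five questions.
    rater1 = ["dose", "cause of symptom", "manage", "dose", "treat"]
    rater2 = ["dose", "cause of symptom", "treat", "dose", "treat"]
    print(round(cohens_kappa(rater1, rater2), 2))  # 0.72 on this toy data

    # Liberal matching of primary/secondary topic pairs:
    print(topics_match(("prescribing information", "pediatric infectious disease"),
                       ("pediatric infectious disease", "adult infectious disease")))  # True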

Results

Demographic data

Of the 129 doctors invited, 103 (80%) agreed to participate. The mean age of participants was 48 years (range 31-87), and 23 were female. Twenty one were in single handed practices, and 54 practised in a rural area (town population <30 000). Among the 83 doctors in group practices, the number of partners ranged from one to 10 (median 3). Eighty doctors practised in freestanding clinic buildings where family practice was the only specialty. Seven doctors were full time faculty members, two at the University of Iowa and five in community residency programmes. Typically, each doctor had a private office and saw patients by rotating among two or three adjacent examination rooms. Each doctor generally worked with a nurse, who took vital signs, answered patient telephone calls, cleaned examination rooms, and assisted with procedures. The primary funding source for 90 of the doctors was fee for service. Patients were typically scheduled every 10 to 15 minutes.

The 103 participants saw 2467 patients and asked 1101 questions during 732 observation hours. After exclusion of 323 questions recalled about patients seen before the observation period, each doctor asked an average of 7.6 questions during the two half days (3.2 questions per 10 patients seen). The mean age of all patients was 39 years (range 0-98), and 1474 (60%) were female. Patients prompting doctors’ questions were older than those not prompting questions (mean age 43 v 37, P<0.001), and they were more likely to be female (64% v 58%, P<0.05). These age and sex differences were independent of each other in a multiple logistic regression. Older doctors saw more patients and asked fewer questions than younger doctors. For each 10 year increase in age, doctors saw 1.9 more patients (P=0.06) and had 1.7 fewer questions (P<0.01) per 10 observation hours.
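These rates follow directly from the counts above: (1101 − 323)/103 ≈ 7.6 questions per doctor over the two half days, and 10 × (1101 − 323)/2467 ≈ 3.2 questions per 10 patients seen.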

Taxonomies

Topics

The most common question topics were drug prescribing (209 questions, 19%), obstetrics and gynaecology (96 questions, 9%), and adult infectious disease (89 questions, 8%). The distribution of question topics tended to mirror the distribution of clinical problems seen. However, there were disproportionately more questions about prescribing and disproportionately fewer questions about health maintenance visits (annual adult examinations and “well child” examinations). Except for questions about drug prescribing, answers were not pursued for questions on most topics. The 14 most common topics accounted for 899 (82%) of all questions. The κ statistic for the topic taxonomy was 0.91 (indicating “almost perfect” interrater agreement18).

Generic questions

The most frequently assigned generic questions were “What is the cause of symptom X?” “What is the dose of drug X?” and “How should I manage disease or finding X?” (table 1). Only queries about drug dose were routinely pursued. The 25 most common generic questions accounted for 887 (81%) of all questions. The κ statistic for the generic question taxonomy was 0.66 (indicating “substantial” interrater agreement18). Although most questions were unique, we found several repeated questions with essentially identical wording: “What is this rash?” (n=22), “Is this a viral or a bacterial infection?” (n=13), “What is causing the patient’s abdominal pain?” (n=11), “What is causing the patient’s chest pain?” (n=10), “What is causing the patient’s fatigue?” (n=8), “What is causing the patient’s dysuria (urine analysis normal)?” (n=5), “What is causing the patient’s hives?” (n=4), and “Should the patient be given prophylaxis for subacute bacterial endocarditis?” (n=4).

Table 1.

Ten most common generic questions asked by 103 family doctors. Values are numbers (percentages)

Generic question                                   Questions asked*   Questions pursued†   Questions answered‡
What is the cause of symptom X?                    94 (9)              8 (9)                4 (50)
What is the dose of drug X?                        88 (8)             75 (85)              73 (97)
How should I manage disease or finding X?§         78 (7)             23 (29)              19 (83)
How should I treat finding or disease X?           75 (7)             25 (33)              18 (72)
What is the cause of physical finding X?           72 (7)             13 (18)               6 (46)
What is the cause of test finding X?               45 (4)             18 (40)              13 (72)
Could this patient have disease or condition X?    42 (4)              6 (14)               4 (67)
Is test X indicated in situation Y?                41 (4)             12 (29)              10 (83)
What is the drug of choice for condition X?        36 (3)             17 (47)              13 (76)
Is drug X indicated in situation Y?                36 (3)              9 (25)               7 (78)

*Percentage is proportion of total questions asked (n=1101).
†Percentage is proportion of questions asked.
‡Percentage is proportion of questions pursued.
§Not specifying diagnostic management versus treatment.

Answers

During the observation period, answers to 702 (64%) of the questions were not pursued. Doctors said that they might pursue answers to 123 of these questions after the observation period. The commonest reason for not immediately pursuing an answer was that, after voicing some uncertainty, the doctor felt that a reasonable decision could be based on his or her current knowledge (n=148, 21%). Doctors found answers to 318 (80%) of the 399 questions they pursued. As judged by the observer, most answers (n=291, 92%) directly answered the question posed, whereas 27 (8%) provided information related to the question without directly answering it.

Answers were obtained from 156 unique resources. The mean time spent pursuing an answer was 118 seconds (SD 169 seconds), and the median time was 60 seconds (interquartile range 115 seconds). Less time was spent pursuing answers to questions about prescribing than to questions about other topics (74 v 153 seconds, P<0.001). Prescribing texts and human sources were most likely to provide an answer (table 2). Formal literature searches were initiated for only two questions.

Table 2.

Information sources used by family doctors to find answers to 399 questions

Information source                                  No of times used (% of total)   Time spent (seconds), mean (SD)*   Time spent (seconds), median (IQR)*   No (%) of successful searches†
Human (such as doctor, pharmacist)                  161 (36)                        109 (104)                           68 (150)                             127 (79)
Non-prescribing printed information
  (such as textbooks, journal articles)             143 (32)                        100 (89)                            70 (75)                               75 (52)
Prescribing text                                    113 (25)                         70 (66)                            50 (60)                               96 (85)
Printed information posted on walls                  17 (4)                          42 (34)                            35 (45)                               14 (82)
Computer application (such as CD Rom, internet)      10 (2)                         395 (552)                          180 (210)                               2 (20)
Total                                               444 (100)                       102 (137)                           60 (90)                              314 (71)

SD=standard deviation, IQR=interquartile range.
*Time spent seeking answers. Kruskal-Wallis test, with Bonferroni corrected P values, showed that average time spent with non-prescribing print sources and with computers was higher than with prescribing texts (P<0.01) or posted information (P<0.05). There were no other significant differences in paired comparisons.
†5×2 χ2 test with 4 degrees of freedom was significant (P<0.001).

Discussion

With the exception of questions about drug prescribing, doctors in this study did not pursue answers to most of their questions. This result is consistent with a study of Oregon doctors in which an answer was pursued when the problem was perceived as urgent and when a definitive answer was thought to exist.3 In that study, and in ours, doctors pursued only a minority of their questions but found answers to about 80% of those pursued.3,14

In previous studies the frequency of questions has varied widely and seems to depend on the setting, the definition of a “question,” and the methods used to collect them.8 We recorded 3.2 questions for every 10 patients seen. In other studies this number has ranged from 0.7 questions per 10 patients in a private office setting to 57.7 questions per 10 patients on an inpatient teaching service.2,8,19

Our participants spent an average of less than 2 minutes pursuing an answer. In a study of questions asked by Missouri family doctors, Medline searches by medical librarians took an average of 27 minutes per question.15 Sackett and Straus found that printed summaries of evidence could be provided at the point of care within 30 seconds but that computer applications were too slow and too bulky to be feasible in their hospital setting.20 Although computers fared poorly in this and other studies,6,9 improvements in their speed, portability, and user friendliness are making them more useful to doctors.21

Study limitations

Our taxonomies require validation in other settings because we studied a homogeneous group of doctors in a small geographic area. The presence of a researcher may have influenced the questioning behaviour of the participants: some doctors may have been reluctant to reveal gaps in their knowledge, whereas others may have generated questions to please the observer. We tried to minimise these effects by assuring participants that there was no right number of questions and by developing the trust needed to reveal knowledge gaps. We did not ask participants to rate the importance or urgency of the questions.

Conclusions

Busy family doctors need “bottom line” answers to their questions, and they need them quickly.10,14,22 Evidence can be provided at the point of care, but it is most useful when it has been digested into quickly accessible summaries.8,20,23 These summaries tend to reflect the perspective of research, emphasising the performance characteristics of tests and results of clinical trials. We found that this perspective often did not mesh with the needs of family doctors. For example, when faced with a clinical problem the doctors often asked what steps to take, without distinguishing between diagnostic and therapeutic steps (“How should I manage disease or finding X?”). We agree with those who say that doctors should frame their questions better,4 but we also think that authors should frame their answers better. By learning what questions occur in practice, authors could provide more useful information, which could ultimately lead to better patient care.

Supplementary Material

[extra: Table w1]
[extra: Table w2]

Acknowledgments

We thank Sharon Kaschmitter, who helped collect the data; Jeff Dawson, who helped analyse the data; Dedra Diehl, who helped verify the references; and the 103 doctors who generously gave their time as participants.

Footnotes

Funding: This study was supported by a grant (G9518) from the American Academy of Family Physicians Foundation.

Competing interests: None declared.

References

  1. Covell DG, Uman GC, Manning PR. Information needs in office practice: are they being met? Ann Intern Med 1985;103:596-599. doi:10.7326/0003-4819-103-4-596.
  2. Ely JW, Burch RJ, Vinson DC. The information needs of family physicians: case-specific clinical questions. J Fam Pract 1992;35:265-269.
  3. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue and which to leave unanswered. Med Decis Making 1995;15:113-119. doi:10.1177/0272989X9501500203.
  4. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
  5. McColl A, Smith H, White P, Field J. General practitioners’ perceptions of the route to evidence based medicine: a questionnaire survey. BMJ 1998;316:361-365. doi:10.1136/bmj.316.7128.361.
  6. Haug JD. Physicians’ preferences for information sources: a meta-analytic study. Bull Med Libr Assoc 1997;85:223-232.
  7. Feinstein AR, Horwitz RI. Problems in the “evidence” of “evidence-based medicine.” Am J Med 1997;103:529-535. doi:10.1016/s0002-9343(97)00244-1.
  8. Smith R. What clinical information do doctors need? BMJ 1996;313:1062-1068. doi:10.1136/bmj.313.7064.1062.
  9. Verhoeven AA, Boerma EJ, Meyboom-de Jong B. Use of information sources by family physicians: a literature survey. Bull Med Libr Assoc 1995;83:85-90.
  10. Huth EJ. “In the balance”: weighing the evidence. Ann Intern Med 1994;120:889. doi:10.7326/0003-4819-120-10-199405150-00012.
  11. Timpka T, Ekstrom M, Bjurulf P. Information needs and information seeking behaviour in primary health care. Scand J Prim Health Care 1989;7:105-109. doi:10.3109/02813438909088656.
  12. Cimino J. Generic queries for meeting clinical information needs. Bull Med Libr Assoc 1993;81:195-206.
  13. Wyatt J. Use and sources of medical knowledge. Lancet 1991;338:1368-1373. doi:10.1016/0140-6736(91)92245-w.
  14. Gorman PN, Ash J, Wykoff L. Can primary care physicians’ questions be answered using the medical journal literature? Bull Med Libr Assoc 1994;82:140-146.
  15. Chambliss ML, Conley J. Answering clinical questions. J Fam Pract 1996;43:140-144.
  16. Reynolds RD. A family practice article filing system. J Fam Pract 1995;41:583-590.
  17. Wilson SR, Starr-Schneidkraut N, Cooper MD. Use of the critical incident technique to evaluate the impact of MEDLINE: final report submitted to the National Library of Medicine. Palo Alto, CA: American Institute for Research, 1989. (NTIS order No PB90-142522.)
  18. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159-174.
  19. Osheroff JA, Forsythe DE, Buchanan BG, Bankowitz RA, Blumenfeld BH, Miller RA. Physicians’ information needs: analysis of questions posed during clinical teaching. Ann Intern Med 1991;114:576-581. doi:10.7326/0003-4819-114-7-576.
  20. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the “evidence cart.” JAMA 1998;280:1336-1338. doi:10.1001/jama.280.15.1336.
  21. Ebell MH, Barry HC. InfoRetriever: rapid access to evidence-based information on a handheld computer. MD Computing 1998;15:289-297.
  22. Greer AL. The two cultures of biomedicine: can there be a consensus? JAMA 1987;258:2739-2740.
  23. Shaughnessy AF, Slawson DC, Becker L. Clinical jazz: harmonizing clinical experience and evidence-based medicine. J Fam Pract 1998;47:425-428.
