Journal of the Medical Library Association : JMLA. 2003 Jul;91(3):364–366.

What is the best way to gather clinical questions from physicians?

Mark H Ebell 1, Linda White 1
PMCID: PMC164401  PMID: 12883556

INTRODUCTION

Increasing attention is being paid to the clinical questions of physicians and other health care providers. Asking and answering questions is important to continuing education, the development of information sources, and the delivery of quality patient care [1–3]. Attempts are underway to systematically identify and answer the questions of clinicians with the best available evidence [4].

The number of questions per patient seen varies considerably depending on how the questions are collected. Studies using direct observation or exit interviews after each patient encounter have generally identified more questions per patient than studies using physician self-report [5–7]. On the other hand, direct observation and exit interviews are almost certainly more expensive and time-consuming than self-report, because they require a full-time observer with each physician for the study period. They therefore may not be practical for ongoing surveillance of physician questions. This kind of ongoing surveillance is important because questions change as new tests and treatments are developed. This study is the first to directly compare the content of clinical questions gathered by exit interview after each patient encounter and by physician self-report.

METHODS

Questions were gathered from the physicians in two ways: exit interviews following each patient encounter in the physicians' offices and physicians' self-reports as recorded on cards carried in their pockets. In the exit interview arm of the study, physicians were observed for two days in their outpatient practices by a registered nurse with a master's degree in public health. She was a trained research assistant but did not have special training to perform these observations. Observation did not necessarily occur on successive half days. The observer stationed herself outside the treatment room and asked the physicians the following question after every patient encounter: “During the patient encounter, please describe any questions that could not be answered by asking the patient, looking at the chart, or calling the lab.” She then recorded each question and asked the physicians whether they pursued the question, answered it, and implemented the answer. Other characteristics of the question and answer were also directly recorded in a database on a laptop in the physicians' offices. The questions were those of the physicians and not those of the observer.

In the self-report arm of the study, physicians from the Ambulatory Sentinel Practice Network (ASPN) carried a four-inch-by-eleven-inch card during patient care for one week. ASPN was a national family practice research network (it disbanded in late 1999). The card was two-sided and could be folded to fit in the pocket of a shirt or lab coat. The physicians were instructed to record any clinical questions that occurred during the care of patients; the decision to pursue, answer, and implement the answer; the importance of the question; and the source of the answer, if answered. Physicians also reported the number of patients seen during the observation period. The card is available in electronic form from the author.

Data were entered in a Microsoft Access database. Analysis used a chi-square test for categorical variables, Fisher's exact test for categorical variables with a small cell size, and Student's t-test for normally distributed continuous variables. A significance cutoff of P < 0.01 was chosen because of multiple comparisons.
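The chi-square test named above can be sketched for a 2 × 2 table. The counts below are illustrative only, not the study's data; the critical value 6.635 corresponds to the article's P < 0.01 cutoff at one degree of freedom.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: questions pursued vs. not pursued in two groups
# of 100 questions each (NOT the study's actual figures).
stat = chi2_2x2(30, 70, 16, 84)
CRITICAL_0_01 = 6.635  # chi-square critical value, df = 1, alpha = 0.01
print(round(stat, 2), stat > CRITICAL_0_01)
```

With these made-up counts the statistic falls below the critical value, so the difference would not be declared significant at the study's stringent P < 0.01 threshold.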

RESULTS

Characteristics of the clinicians are shown in Table 1. Gender, age, and practice setting were similar between groups. Both groups consisted entirely of primary care physicians, although more pediatricians and internists participated in the exit interview arm than in the self-report arm. More osteopathic physicians also participated in the exit interview group than in the self-report group, although their training and scope of practice are very similar to those of the allopathic physicians.

Table 1 Physician characteristics

[Table 1 image not reproduced in this version.]

Exit interviews identified more questions per patient than self-report (0.42 versus 0.16). Table 2 shows the characteristics of questions in the two groups. Questions identified by exit interview were less likely to be pursued but equally likely to be answered; they were also more likely to be rated “very important.” The proportion of questions rated “not important” or “somewhat important” (1 or 2 on the 5-point Likert scale) was similar between groups (17.5% versus 14.3%).

Table 2 Question characteristics

[Table 2 image not reproduced in this version.]

Questions were classified using two validated taxonomies [8], one based on the type of question and one on the clinical topic. While the questions were generally similar between groups, questions identified by exit interview were less likely to be about drug therapy than questions from self-report. Questions in the exit interview group were also more likely to be about adult medicine topics, despite the number of pediatricians in this group, and less likely to be about surgical topics.

DISCUSSION

Clinical questions and answers based on the best available evidence are an important way that physicians learn. The Journal of Family Practice, Journal of the American Board of Family Practice, and other medical journals have recently launched features that regularly answer clinical questions. The Clinical Evidence* and InfoPOEMs† references are also organized around physicians' questions, and Ely has advocated that medical references should be organized around the real questions of physicians from practice, rather than around traditional concepts of disease and pathophysiology [9]. These undertakings require a sustained effort to collect the questions of physicians in practice, because questions change as new tests and interventions are developed.

While exit interviews identify more questions per patient, they are likely to be more expensive and resource intensive than self-reports. Although the authors did not do a formal cost analysis, it seems self-evident that exit interviews are more expensive because an observer must be stationed with each physician for an interview after each patient encounter. Self-report requires only that the physician mail or fax the data collection card to a central location where the questions can be added to a central repository. It is reassuring that the questions identified by self-report are generally similar to those identified by exit interview and as likely to be rated “neutral,” “important,” or “very important” by the physicians. It is unclear why questions gathered via exit interview are more likely to be rated “very important”; this seems paradoxical, because one would expect that if physicians record fewer questions during self-report, they would tend to record the more important ones. This area is certainly appropriate for further inquiry, and we hope that others will attempt to reproduce our findings.

A sentinel group of 100 physicians seeing 100 patients per week and doing one week of self-report a year would be expected to generate 1,600 new questions per year. If duplicate questions and questions about drug dosage were eliminated, the remaining 1,000 or so questions could be answered if each primary care training program in the United States took responsibility for one new question per year. Of course, such an effort would require support for a central editorial infrastructure and informaticians to aid in dissemination. The Family Practice Inquiries Network has begun this process in the United States [10] but will require more support to answer all of the new questions generated each year by primary care physicians.
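The projection above follows directly from the self-report rate of 0.16 questions per patient reported in the Results, as this quick arithmetic check shows:

```python
# Sanity check of the Discussion's projection, using the self-report rate
# of 0.16 questions per patient from the Results section.
physicians = 100           # sentinel group size
patients_per_week = 100    # patients seen per physician per week
weeks_of_self_report = 1   # one week of self-report per physician per year
rate = 0.16                # questions per patient, self-report arm

new_questions = physicians * patients_per_week * weeks_of_self_report * rate
print(round(new_questions))  # 1600, matching the text's estimate
```

Removing duplicates and drug-dosage questions to leave roughly 1,000 per year is the authors' estimate, not a computed figure.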

Footnotes

* The Clinical Evidence Website may be viewed at http://www.evidence.org.

† The InfoPOEMs Website may be viewed at http://www.infopoems.com.

Contributor Information

Mark H. Ebell, Email: ebell@msu.edu.

Linda White, Email: whiteli@msu.edu.

REFERENCES

  1. Ebell MH. Information at the point-of-care: answering clinical questions. J Am Board Fam Pract. 1999 May–Jun;12(3):225–35.
  2. Ely JW, Osheroff JA, Ebell MH, Bergus GR, Levy BT, Chambliss ML, Evans ER. Analysis of questions asked by family doctors regarding patient care. BMJ. 1999 Aug 7;319(7206):358–61.
  3. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue and which to leave unanswered. Med Decis Making. 1995 Apr–Jun;15(2):113–9.
  4. Family Practice Inquiries Network. [Web document]. [cited 5 Nov 01]. <http://www.fpin.org>.
  5. Barrie AR, Ward AM. Questioning behaviour in general practice: a pragmatic study. BMJ. 1997 Dec 6;315(7121):1512–5.
  6. Covell DG, Uman C, Manning PR. Information needs in office practice: are they being met? Ann Intern Med. 1985 Oct;103(4):596–9.
  7. Ely JW, et al. op. cit., 358–61.
  8. Ely JW, et al. op. cit., 358–61.
  9. Ely JW, et al. op. cit., 358–61.
  10. Family Practice Inquiries Network. op. cit.

