Abstract
We assessed the reading habits of internists with and without epidemiological training because such information may help guide medical journals as they make changes in how articles are edited and formatted. In a 1998 national self-administered mailed survey of 143 internists with fellowship training in epidemiology and study design and a random sample of 121 internists from the American Medical Association physician master file, we asked about the number of hours spent reading medical journals per week and the percentage of articles for which only the abstract is read. Respondents also were asked which of nine medical journals they subscribe to and read regularly. Of the 399 eligible participants, 264 returned surveys (response rate 66%). Respondents reported spending an average of 4.4 hours per week reading medical journal articles and reported reading only the abstract for 63% of the articles; these findings were similar for internists with and without epidemiology training. Respondents admitted to a reliance on journal editors to provide rigorous and useful information, given the limited time available for critical reading. We conclude that internists, regardless of training in epidemiology, rely heavily on abstracts and on prescreening of articles by editors.
Changes in medical journal publishing, especially the move toward electronic publication of journals, may dramatically change how articles are edited, formatted, and distributed. In this changing environment, current information is needed about how clinicians read journal articles.
As part of a randomized trial assessing whether journal attribution affects an internist's perception of the validity of a study, we asked a national sample of internists the following: (1) How much time do you currently spend reading journals? (2) Which journals do you read? and (3) What is the proportion of articles for which you read only the abstract? We also sought to determine whether the answers to these questions differed based on an internist's training in clinical epidemiology and study design.
METHODS
Subjects
As part of a larger study,1 we mailed questionnaires to 416 physicians practicing internal medicine in the United States. We recruited half our sample from the American Medical Association's (AMA) master list of licensed physicians in the United States (this database is not limited to AMA members). At our request, the vendor of the AMA master list randomly selected 208 physicians who listed internal medicine as their primary specialty.
The other survey participants were randomly selected from a list of the 398 internists who had completed the Robert Wood Johnson (RWJ) Clinical Scholars Program using a random number generator in STATA (Version 5.0, STATA Corp., College Station, Tex). The RWJ group was selected as an enriched source of internists who were likely to have had formal training in epidemiology, study design, and biostatistics, as such training is an integral part of the 2-year fellowship program.
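The random-selection step described above can be sketched as follows. This is an illustrative reconstruction only (the study used the random number generator in STATA 5.0, not Python), and the list of names is hypothetical:

```python
# Illustrative sketch of drawing a random sample from a roster, as in the
# study's selection of 208 of the 398 RWJ Clinical Scholars Program alumni.
# The study itself used STATA 5.0; the roster and seed here are hypothetical.
import random

rng = random.Random(1998)  # fixed seed for reproducibility (illustrative)
rwj_alumni = [f"internist_{i:03d}" for i in range(1, 399)]  # 398 alumni

# Draw 208 internists without replacement.
sample = rng.sample(rwj_alumni, 208)
print(len(sample))  # 208
```

Sampling without replacement ensures that no internist could be selected (and surveyed) twice.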
American Board of Internal Medicine (ABIM) and subspecialty board certification status were obtained from the ABIM Web site for both groups (www.abim.org).
Participants were sent a questionnaire in May 1998 along with a prepaid return envelope. The survey was re-sent to those who had not responded within 3 weeks. The specifics of the sampling design and power calculations are detailed elsewhere.1
Survey Instrument
We asked for demographic data, including year of birth, gender, and year of graduation from medical school. In addition, we asked whether the respondent had received at least 2 years of postdoctoral training in study design, epidemiology, and biostatistics. The answer to this question served as the basis for characterizing the respondent as trained or untrained in clinical epidemiology.
Participants were asked to estimate the number of hours they spent reading journal articles per week and the percentage of articles for which they limited their reading to only the abstract. We did not ask respondents to state their specific purpose for reading journals (e.g., “keeping up” with the literature, tracking down patient-specific information, or for research purposes)2–4 because our primary interest was in the amount of time spent reading rather than the reasons for it.
Respondents were asked which of the following journals they subscribe to or read regularly: American Journal of Medicine, Annals of Internal Medicine, Archives of Internal Medicine, Hospital Practice, Journal of the American Medical Association, Journal of General Internal Medicine, Lancet, New England Journal of Medicine, and Southern Medical Journal. We arbitrarily chose journals that we believed spanned a broad range of perceived quality and would be easily recognizable to most internists in the United States. At the survey's conclusion, there was an opportunity for open-ended comments.
Statistical Analysis
In bivariate analyses, χ2 tests were used for comparing categorical variables, while the Wilcoxon rank sum test was used for comparing ordinal variables. In multivariate analyses, logistic regression was used to model dichotomous dependent variables while ordinary least squares regression was used to model continuous dependent variables. Data were analyzed using SAS (Version 6.12, SAS Institute, Cary, NC).
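The bivariate tests named above can be sketched as follows. This is an illustrative reconstruction, not the study's analysis (which was performed in SAS); the counts and hours below are hypothetical:

```python
# Illustrative sketch of the bivariate tests described above.
# The study used SAS 6.12; the data here are hypothetical, not the study's.
from scipy.stats import chi2_contingency, ranksums

# Hypothetical 2x2 table: rows = ABIM certified (yes/no),
# columns = epidemiology training (trained/untrained).
table = [[137, 91],
         [6, 30]]
chi2, p_cat, dof, _ = chi2_contingency(table)  # chi-square test, categorical

# Hypothetical hours per week spent reading, by training group.
trained_hours = [3, 4, 4, 5, 6, 2, 4]
untrained_hours = [5, 4, 6, 5, 3, 7, 4]
stat, p_ord = ranksums(trained_hours, untrained_hours)  # Wilcoxon rank sum

print(f"chi-square p = {p_cat:.4f}, rank-sum p = {p_ord:.4f}")
```

The rank sum test is used for the ordinal outcomes because it makes no assumption of normality, which hours-of-reading data rarely satisfy.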
RESULTS
Sample
Of 416 surveys distributed in the initial mailing, 13 were returned with no forwarding address and 4 were returned by internists reporting that they no longer practiced medicine. Of the 399 eligible participants, 264 surveys were returned (response rate 66%). For the internists taken from the AMA master file, the response rate was 58%. The response rate among alumni of the RWJ Clinical Scholars Program was 74%. Respondent characteristics are summarized in Table 1.
Table 1. Respondent Characteristics

| Characteristic | Total (n = 264) | Epidemiology Training: Yes (n = 143) | Epidemiology Training: No (n = 121) | P Value |
|---|---|---|---|---|
| Mean age, y (SD) | 47 (10) | 45 (7) | 50 (12) | <.001 |
| Average year of graduation from medical school (SD) | 1978 (10) | 1980 (7) | 1975 (11) | <.001 |
| Male, % | 84 | 82 | 87 | .4 |
| Certified by the American Board of Internal Medicine, % | 86 | 96 | 75 | <.001 |
| Certified in a medical subspecialty, % | 23 | 26 | 20 | .3 |
Reading Habits
Overall, respondents reported spending an average of 4.4 hours per week reading medical journal articles and reported reading only the abstract for 63% of articles. Internists with and without epidemiology training reported spending similar amounts of time reading the medical literature per week (4.1 vs 4.7 h; P = .5) and reading only the abstract (67% vs 59%; P = .06). These comparisons were adjusted for age, gender, ABIM certification status, and subspecialty board certification status.
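The covariate adjustment described above can be sketched with an ordinary least squares model of hours read per week. This is an illustrative reconstruction under stated assumptions: the study's analysis was performed in SAS, and the data frame below is synthetic, seeded only to be reproducible:

```python
# Illustrative sketch of adjusting the hours-read comparison for age,
# gender, and certification status via ordinary least squares.
# The study used SAS; all data here are synthetic, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 264
df = pd.DataFrame({
    "hours": rng.normal(4.4, 2.0, n).clip(0),  # hours/week reading journals
    "trained": rng.integers(0, 2, n),          # epidemiology training (0/1)
    "age": rng.normal(47, 10, n),
    "male": rng.integers(0, 2, n),
    "abim": rng.integers(0, 2, n),             # ABIM certification (0/1)
    "subspec": rng.integers(0, 2, n),          # subspecialty board (0/1)
})

# The coefficient on `trained` is the adjusted between-group difference.
fit = smf.ols("hours ~ trained + age + male + abim + subspec", data=df).fit()
print(fit.params["trained"], fit.pvalues["trained"])
```

With the dichotomous outcomes (e.g., subscribing to a given journal), the same formula would be passed to `smf.logit` instead, matching the paper's use of logistic regression for dichotomous dependent variables.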
Journal Subscribing and Reading Behavior
Respondents' unadjusted reading and subscribing behaviors are described in Table 2. After adjustment for gender, medical school graduation year, and ABIM and subspecialty board certification, internists trained in clinical epidemiology were more likely than internists without advanced epidemiology training to report subscribing to and/or reading certain journals and less likely to report reading others.
Table 2. Journal Subscribing and Reading Behavior

| Journal | Subscribe, All Respondents (n = 264), % | Subscribe, Trained in Epidemiology (n = 143), % | Subscribe, Not Trained in Epidemiology (n = 121), % | Read, All Respondents (n = 264), % | Read, Trained in Epidemiology (n = 143), % | Read, Not Trained in Epidemiology (n = 121), % |
|---|---|---|---|---|---|---|
| American Journal of Medicine | 23 | 18 | 30 | 20 | 15 | 26 |
| Annals of Internal Medicine | 77 | 82 | 68 | 72 | 73 | 68 |
| Archives of Internal Medicine | 40 | 34 | 47 | 36 | 32 | 40 |
| Hospital Practice | 31 | 19 | 44* | 23 | 13 | 33* |
| Journal of the American Medical Association | 71 | 72 | 68 | 70 | 78 | 59* |
| Journal of General Internal Medicine | 34 | 52 | 12* | 32 | 50 | 11* |
| Lancet | 7 | 10 | 4* | 14 | 19 | 7* |
| New England Journal of Medicine | 74 | 78 | 68* | 75 | 81 | 67 |
| Southern Medical Journal | 5 | 1 | 9* | 5 | 2 | 7* |

* P ≤ .05 for the comparison between internists trained and not trained in epidemiology, after adjusting for gender, American Board of Internal Medicine and subspecialty board certification, and medical school graduation year.
Open-ended Comments
Two major themes emerged from the open-ended comment section. Readers expressed the need for accessible clinical summaries and the expectation that journal editors will assure rigor and quality. One respondent noted, “More of an effort should be made to have critical review of the literature done by experts and screened for the rest of us. It is unrealistic to expect that, even if you have the skills, you will have time to critically review all the literature that is out there.” Another respondent said, “With the exploding pace of medical knowledge, an internist necessarily relies (and should rely) on journal editors to ensure scientific rigor in statistics, conclusions, etc. It is not important (or even desirable) that internists spend their limited neurons on these technical issues; it's hard enough to retain the results and employ them in clinical care.”
DISCUSSION
In a national survey, we found that internists report spending just over 4 hours a week reading medical journals and report reading only abstracts for about two thirds of articles. In addition, it appeared that internists rely on journal editors to ensure that what they are reading is methodologically sound. This desire for a quick and reliable summary of an article may explain the heavy reliance internists place on abstracts.
Evidence-based medicine is increasingly being promoted as the ideal method of practicing clinical medicine.5–7 In the series, “Users' Guides to the Medical Literature,” clinicians facing discrete clinical problems are encouraged to search the literature, retrieve and critique articles, and ultimately apply their conclusions to the care of patients.8–11 Medical journals are expected to serve a key role in the dissemination of information and in the practice of evidence-based medicine. Unfortunately, little information is available describing how physicians currently use the medical literature.
Two decades ago, Stinson and Mueller reported that the medical literature was the most common source of information for health professionals in Alabama.12 During the same period, Stross and Harlan found that physicians attending continuing medical education conferences reported devoting about 3 hours per week to journal reading.13,14 A decade ago, Winkler et al. found in a national physician survey that internists reported spending an average of 6.2 hours per week reading medical journals and newsletters and that 68% of them preferred that medical information be distributed in summary rather than complete form.15
Our findings update these important studies and have implications for the way in which accumulating medical evidence is disseminated. The medical literature continues to burgeon while internists face increasing pressure to meet growing clinical demands. It is unlikely, therefore, that the amount of time physicians devote to continuing medical education will keep pace with the rate at which medical knowledge is growing. Given the finite amount of time internists devote to journal reading, practicing clinicians require clinical relevance in the articles they read.16
Our finding that internists read only abstracts twice as frequently as full articles suggests either that internists rely on abstracts for extracting information from the majority of pertinent articles or that they use abstracts as a screening tool for determining which articles are worth reading. In either case, abstracts appear to be a more important source of information than the articles they accompany. It is thus of some concern that data in an abstract are commonly inconsistent with or absent from the article's body.17 We believe that much greater emphasis should be placed on standardizing abstract structure and increasing abstract quality.18–20 Additionally, greater emphasis should be placed on teaching physicians how to read abstracts more effectively and how to make better use of on-line databases that provide only synopses of studies.
Our findings need to be interpreted in the context of several limitations. First, the results are based on respondents' self-reported reading behavior and their interpretation of “reading.” We do not know whether physicians accurately estimated the time they spent reading journal articles and abstracts; if anything, they likely overestimated the time spent reading journals and underestimated their reliance on abstracts. Also, we did not ask respondents to specify whether their journal reading time was spent “keeping up” with the medical literature,4 “tracking down” specific information to meet needs generated by specific patient encounters,2 or serving other purposes such as research or administrative decisions.
Second, most of the internists trained in clinical epidemiology in our sample were graduates of a single, national fellowship program. Robert Wood Johnson Clinical Scholars are selected on the basis of their interest in research and health care policy, and very often pursue careers in academia and public policy upon completion of the fellowship. Therefore, these internists may not be representative of the larger population of internists trained in epidemiology and study design.
Third, although the response rate for our survey (66%) was higher than the average response rate reported in other published physician surveys (54%),21 it is possible that internists who chose not to participate differed in their reading habits from participants. It is encouraging, however, that nonrespondents were similar to the respondents on several characteristics (data not shown). Finally, subscriptions to some of the journals on our questionnaire are included as a benefit of membership in the journal's sponsoring professional society. It is likely, however, that an important reason for joining a professional society is the desire to receive the society's journal.
Journals serve a critical role in the dissemination of new medical evidence. Our study suggests that peer review and prescreening of articles by journal editors are highly valued by clinical readers and thus should remain an essential component of clinical journals. Our findings also demonstrate that internists rely heavily on an article's abstract perhaps due to their desire for concise summaries of research findings. Teachers of evidence-based medicine should focus on providing physicians with methods for adequately assessing abstract quality in addition to teaching learners how to critically appraise the entire article.
Acknowledgments
This study was supported by the University of Washington Robert Wood Johnson Clinical Scholars Program and the Department of Veterans Affairs. Drs. Christakis and Elmore are supported by Robert Wood Johnson Generalist Faculty Awards. Drs. Saint, Christakis, and Saha were Robert Wood Johnson Clinical Scholars at the time this work was conducted.
The authors thank Drs. Jeoffrey K. Stross and Rajesh Mangrulkar for their critical review of an earlier draft of the manuscript, and also thank the survey respondents for their participation.
REFERENCES
1. Christakis DA, Saint S, Saha S, et al. Do physicians judge a study by its cover? An investigation of journal attribution bias. J Clin Epidemiol. 2000;53:773–8. doi: 10.1016/s0895-4356(99)00236-x.
2. Haynes RB, McKibbon KA, Fitzgerald D, Guyatt GH, Walker CJ, Sackett DL. How to keep up with the medical literature: IV. Using the literature to solve clinical problems. Ann Intern Med. 1986;105:636–40. doi: 10.7326/0003-4819-105-4-636.
3. Haynes RB, McKibbon KA, Fitzgerald D, Guyatt GH, Walker CJ, Sackett DL. How to keep up with the medical literature: II. Deciding which journals to read regularly. Ann Intern Med. 1986;105:309–12. doi: 10.7326/0003-4819-105-2-309.
4. Haynes RB, McKibbon KA, Fitzgerald D, Guyatt GH, Walker CJ, Sackett DL. How to keep up with the medical literature: I. Why try to keep up and how to get started. Ann Intern Med. 1986;105:149–53. doi: 10.7326/0003-4819-105-1-149.
5. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71–2. doi: 10.1136/bmj.312.7023.71.
6. Sackett DL, Rosenberg WM. On the need for evidence-based medicine. J Public Health Med. 1995;17:330–4.
7. Bordley DR, Fagan M, Theige D. Evidence-based medicine: a powerful educational tool for clerkship education. Am J Med. 1997;102:427–32. doi: 10.1016/S0002-9343(97)00158-7.
8. Jaeschke R, Guyatt GH, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA. 1994;271:703–7. doi: 10.1001/jama.271.9.703.
9. Oxman AD, Sackett DL, Guyatt GH. Users' guides to the medical literature. I. How to get started. Evidence-Based Medicine Working Group. JAMA. 1993;270:2093–5.
10. Guyatt GH, Rennie D. Users' guides to the medical literature. JAMA. 1993;270:2096–7. Editorial.
11. Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II. How to use an article about therapy or prevention. B. What were the results and will they help me in caring for my patients? Evidence-Based Medicine Working Group. JAMA. 1994;271:59–63. doi: 10.1001/jama.271.1.59.
12. Stinson ER, Mueller DA. Survey of health professionals' information habits and needs conducted through personal interviews. JAMA. 1980;243:140–3.
13. Stross JK, Harlan WR. The impact of mandatory continuing medical education. JAMA. 1978;239:2663–6. doi: 10.1001/jama.239.25.2663.
14. Stross JK, Harlan WR. The dissemination of new medical information. JAMA. 1979;241:2622–4.
15. Winkler JD, Berry SH, Brook RH, Kanouse DE, et al. Physicians' attitudes and information habits. In: Kanouse DE, Winkler JD, Kosecoff J, et al., editors. Changing Medical Practice Through Technology Assessment: An Evaluation of the NIH Consensus Development Program. Ann Arbor, Mich: Health Administration Press; 1989. pp. 87–101.
16. Justice AC, Berlin JA, Fletcher SW, Fletcher RH, Goodman SN. Do readers and peer reviewers agree on manuscript quality? JAMA. 1994;272:117–9.
17. Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA. 1999;281:1110–1. doi: 10.1001/jama.281.12.1110.
18. Taddio A, Pain T, Fassos FF, Boon H, Ilersich AL, Einarson TR. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ. 1994;150:1611–5.
19. Pitkin RM, Branagan MA. Can the accuracy of abstracts be improved by providing specific instructions? A randomized controlled trial. JAMA. 1998;280:267–9. doi: 10.1001/jama.280.3.267.
20. Winker MA. The need for concrete improvement in abstract quality. JAMA. 1999;281:1129–30. doi: 10.1001/jama.281.12.1129. Editorial.
21. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50:1129–36. doi: 10.1016/s0895-4356(97)00126-1.