CMAJ : Canadian Medical Association Journal. 2001 May 29;164(11):1580–1581.

Reports of reports: How good are secondary publications in medicine?

Frank Davidoff 1
PMCID: PMC81114  PMID: 11402798

Throughout its history, medicine has abounded with dogma, that is, “‘knowledge’ based on mere authority, tradition, or pathophysiological theory … which one is expected uncritically to take for granted.”1 The replacement of dogma with empirical observation has been slow, although it began as far back as 1536, when Ambroise Paré tested a variety of hypotheses through planned clinical observations. Francis Bacon would have called Paré's approach “ordered experience,” since it is founded on methodical investigation and aspires to be objective, in contrast with “ordinary experience,” which is based on chance observations, hence more likely to be subjective. The British naval surgeon James Lind continued and extended these early efforts “to improve the evidence of medicine” with his epoch-making controlled (but not randomized) studies on the treatment of scurvy, which were begun in 1747. Indeed, medical empiricism had become well established in the United Kingdom,2 largely in the military, well before Pierre Louis in Paris put together his well-known 1836 treatise that challenged the efficacy of bleeding as treatment for pneumonia.3

Publishing the results of empirical clinical studies was soon recognized as crucial in moving medicine away from dogma, since, as the biologist Edward Wilson put it, “One of the strictures of the scientific ethos is that a discovery does not exist until it is safely reviewed and in print.”4 But medicine is an applied practice, not just an intellectual discipline, and publishing reports of empirical clinical studies is therefore just the first step. A medicine driven by empiricism requires that practitioners actually read published reports, believe at least the best of them and change their practices accordingly. Unfortunately, for as long as primary reports of empirical clinical studies have been published, practising physicians have found it difficult to read them and absorb their findings — the demands of practice leave little time and energy for “keeping up,” and the work of selecting and interpreting relevant primary reports is far from trivial.

One result of the long-standing gap between publication of empirical knowledge and its application in clinical practice has been the emergence of a so-called “secondary” or “synoptic” literature, that is, books and journals that select and summarize the most important and strongest studies. These summaries are published in the hope that, because someone else has done the work of selecting and summarizing, clinicians will read, absorb and use the information in their practices. Secondary publications are not new; one of the earliest was the Medical and Philosophical Commentaries, a quarterly review of relevant articles that was launched in Edinburgh in 1773.5 But the exponential growth in volume and complexity of the medical literature in recent years has increased the potential importance of secondary publications, and many are now produced. In this issue (page 1573), P.J. Devereaux and colleagues6 hold 3 of them up to critical scrutiny, a welcome contribution to the process of self-improvement that characterizes (or should characterize) all aspects of a profession.

Their principal finding was that, although the quality of secondary reporting was generally good, all 3 publications “often omitted important information.” Because the publications the authors studied are almost certainly more rigorously edited than most others that now provide clinicians with secondary reports — including tabloids, glossy controlled-circulation “throw-aways” and medical Web sites, many with large readerships — the results of the study by Devereaux and colleagues are not easily generalizable; the study might have been more relevant if the authors had sampled a wider spectrum of publications (perhaps they will choose to do so in their next study). At the same time, if even “the best” of the secondary publications fall substantially short in completeness and accuracy, the results of the study by Devereaux and colleagues raise troubling questions about the quality of reporting in other, less carefully produced secondary publications.

The study by Devereaux and colleagues rests on 2 main premises: first, that certain elements of design and analysis are key in understanding the validity of primary research reports and, second, that it is possible to summarize all of the important information about a primary study in a synoptic report. Although there is considerable debate as to what elements of design and analysis are most critical in determining the strength of the evidence, there is little doubt that some elements are more important than others.7 In this context, it is particularly distressing that the authors found not a single mention in the secondary reports they studied of what is arguably one of the most important elements of randomized controlled trials, namely, concealment of the allocation of study subjects to treatment groups. Unfortunately, the authors do not tell us whether that problem, and others like it, lay in the failure of the primary publications to include that information or the failure of the secondary reports to pass that information along, or both, although it is a problem either way. And a study such as theirs cannot answer deeper but more intriguing questions about the strength of evidence, such as whether a clinical study is only as strong as its weakest link or is stronger than the sum of its parts.

There is also some doubt as to whether it is possible to report everything that is important about a clinical trial in the limited space of a secondary report. Devereaux and colleagues suggest that it is, and give a convincing example of how that might be done. If that were true, however, we would never need the full (primary) report. In fact, some journals have already been criticized for publishing “short” reports of full-scale clinical trials, on the grounds that it is impossible to record all of the relevant detail in such a limited space. And the CONSORT guidelines state that a proper report of a randomized clinical trial should include no fewer than 22 items of information, many of them rather complex, plus a flow diagram8 — hard to shoehorn into a few hundred words.

Reading the study by Devereaux and colleagues brings to mind the old children's game of telephone, in which one person whispers a message to a second, who in turn passes it along to a third, and so on around the circle. By the time it has made the rounds, the message almost always becomes garbled, wherein lies the fun. But medicine is not a game, and it is a serious matter if the quality of evidence degrades as it moves from the researchers' notebooks to the minds of practitioners and patients. In documenting the quality of reporting in secondary publications, Devereaux and colleagues have therefore done us a service. At the same time, it seems they have only scratched the surface of a very large, and very old, challenge in medicine: the difficulty of getting the empirical evidence out reliably to those who need it most.

Footnotes

Competing interests: None declared.

Correspondence to: Dr. Frank Davidoff, Editor, Annals of Internal Medicine, 190 North Independence Mall West, Philadelphia PA 19106-1572, USA

References

1. Tröhler U. To improve the evidence of medicine. The 18th century British origins of a critical approach. Edinburgh: Royal College of Physicians of Edinburgh; 2000. p. 2.
2. Tröhler U. To improve the evidence of medicine. The 18th century British origins of a critical approach. Edinburgh: Royal College of Physicians of Edinburgh; 2000.
3. Rangachari PK. Evidence-based medicine: old French wine with a new Canadian label? J R Soc Med 1997;90:280-4.
4. Wilson EO. Consilience: the unity of knowledge. New York: Knopf; 1998. p. 59.
5. Chalmers I, Tröhler U. Helping physicians to keep abreast of the medical literature: Medical and Philosophical Commentaries, 1773–1795. Ann Intern Med 2000;133:238-43.
6. Devereaux PJ, Manns BJ, Ghali WA, Quan H, Guyatt GH. Reviewing the reviewers: the quality of reporting in three secondary journals. CMAJ 2001;164(11):1573-6. Available: www.cma.ca/cmaj/vol-164/issue-11/1573.asp
7. Juni P, Witschi A, Block R, Egger M. The hazards of scoring the quality of clinical trials for meta-analyses. JAMA 1999;282:1054-60.
8. Moher D, Schulz KF, Altman DG, for the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Ann Intern Med 2001;134:657-62.
