Abstract
This week’s headline story about antidepressants highlights the ongoing problem of how study results are often distorted by a failure to access full datasets. Jeanne Lenzer and Shannon Brownlee report
New generation antidepressants aren’t all they’re cracked up to be. That seems to be the central message of the meta-analysis published this week by Irving Kirsch and colleagues in PLoS Medicine,1 and it was this message that made the headlines. Kirsch’s conclusion follows on the heels of similar studies showing that statins are useful in only a small subset of the patients taking them2 and earlier studies showing that the safety and performance of cyclo-oxygenase-2 inhibitors had seemed better than they proved to be,3 further reinforcing previous criticisms that regulators in the United Kingdom and the United States are not doing their duty to protect the public from useless and dangerous drugs. But there is another, deeper problem here, one that, ironically enough, was highlighted by GlaxoSmithKline’s news release stating that Kirsch’s conclusions are “incorrect” because he evaluated only a “small subset of the total data available.” How can regulators, the public, and doctors know how useful (or how potentially dangerous) drugs really are unless outside researchers have access to all the data?
The gist of the new study’s findings is that analysis of published and unpublished data from studies of antidepressants in adults shows that only a very small subset of patients seemed to benefit. Antidepressants “failed to separate” from placebo in almost all instances; the exception was a subset of severely depressed patients, those who scored 23 or higher on the Hamilton rating scale for depression (HRSD). Other studies have found different problems with the antidepressants. Recently the US Food and Drug Administration found safety problems, roughly a doubling of suicidality, among adolescents and young adults using antidepressants.4 But do we know the truth about antidepressants even now? Or statins? Or any one of many other drugs currently on the market?
The answer to that isn’t as simple as it might seem. Firstly, there’s the problem of publication bias, the tendency for positive studies to get published and negative studies to be filed away in a drawer. In the case of antidepressants, a 2008 analysis by Erick Turner and colleagues published in the New England Journal of Medicine found that only 8% of antidepressant trials with negative findings were reported as negative, while positive trials were reported as such 97% of the time.5 The problem is not limited to antidepressants, says Turner. A former reviewer for the FDA, Turner recently told the BMJ that it is critical for researchers to be able to obtain complete study protocols and full datasets in order to determine whether a study’s conclusions are valid. His concerns were highlighted by a 1999 study of five top medical journals, which found that in 18% to 68% of articles (depending on the journal) the authors’ conclusions as stated in the abstract were either unsupported or contradicted by data given in the body of the article.6
Many researchers try to overcome publication bias by requesting all study results or data from the FDA. That’s what Kirsch and his colleagues did. They filed a request under the Freedom of Information Act with the FDA for all study data on the six most widely prescribed of the new generation antidepressants on the market. They also asked for the data held by the UK National Institute for Health and Clinical Excellence. The FDA identified 47 relevant trials but, for reasons known only to itself, failed to release data from nine of them, all of which, as it happens, yielded negative results. The FDA told the BMJ yesterday that it would look into the reasons for the failure to release the nine studies. The excluded trials represented fully 38% of test participants in trials of sertraline and 23% in trials of citalopram, making it impossible for Kirsch to analyse two of the six antidepressants for overall efficacy.
The recurring drug scandals of the past decade have finally persuaded the US Congress of the need for greater transparency and oversight in the drug market. The Food and Drug Administration Amendments Act (FDAAA) of 2007 represents an important step towards better, and more complete, analyses of drugs.7 It requires companies to register all trials at ClinicalTrials.gov, the registry of clinical trials at the National Library of Medicine, a move that will allow outside researchers to know which trials have been, or are being, conducted. Until now this information was often hard to come by; companies could legally refuse to reveal that they were even conducting certain studies of drugs already on the market. The FDAAA also requires researchers to post the primary and secondary outcome measures of their studies at the time of registration (generally within 21 days of enrolment of the first patient) and to post results within one year (with extensions of up to two years) of the FDA approving the drug, taking some other action, or the trial concluding.
But the FDAAA is far from perfect, and we can expect more analyses like that of Kirsch and colleagues, in which everyone will express amazement that this or that drug made it to the market with little evidence of benefit—and often with signals of potential harms exceeding those benefits. The FDAAA does not require release of underlying data. Instead, researchers will post only the key results. Deborah Zarin, director of ClinicalTrials.gov, says that the registry may require some tabular data results to be posted, but that is yet to be determined.
The problem of lack of access to underlying data, rather than just to authors’ conclusions, has concrete and worrying effects, says Fred Geisler, a neurosurgeon at the Illinois Neuro-Spine Center. Geisler points to the use of high dose steroids in patients with spinal injury on the basis of a single, potentially flawed study funded by the National Institutes of Health (NIH).8 Geisler calculates that several thousand patients have died as a result of high dose steroids used to treat acute spinal cord injury. Two recent surveys show that most neurosurgeons agree with him: they believe that steroids are either useless or dangerous.9 Yet when asked why they continue to give the drug, most cite fear of malpractice claims based on the standard of practice set by the NIH study. Several researchers have lobbied unsuccessfully for the release of the underlying data, without which they cannot verify their concerns, or lay them to rest.9
Kirsch responded to GlaxoSmithKline’s claim that his conclusions are “incorrect,” stating, “If they would make available a complete dataset of all of the unpublished as well as published data I would be delighted to perform a meta-analysis, and I think they would be doing the public and medical community a great service if they did so.”
Sidney Wolfe, director of Public Citizen’s Health Research Group, an independent non-profit organisation based in Washington, DC, that evaluates drug safety and efficacy, agrees with others that the FDAAA is a valuable step towards needed transparency, but he points to another gap in the legislation: results do not need to be posted for up to two years. “This is when the drugs are most heavily promoted,” he told the BMJ.
Turner agrees with Wolfe and adds that one problem with relying on the registry is that “it is difficult to assess results that are divorced from the protocol.” Companies, he says, have to provide only the most rudimentary outcomes. A far more in-depth and detailed database, the FDA’s new drug approval analyses, already exists and should be expanded, Turner says. The FDA should use the Freedom of Information Act to make all of its analyses available to the public. That would solve two problems at once: it would provide detailed data, rather than just the authors’ conclusions about their data, and it could be expanded to include the many FDA analyses already on file for drugs currently on the market.10
Beyond the potential for ineffective or dangerous drugs to reach the market, the data secrecy that persists despite the laudable reforms brought about by the FDAAA has important implications for the conduct of clinical research. Failing to make all data available violates the covenant between patient and researcher. Human participants enter trials with the express understanding that the research is intended not for their benefit but for the benefit of medical knowledge. In return for this sacrifice, researchers implicitly promise to make that knowledge widely known. When data are hidden rather than placed in the public domain, that covenant is broken. How difficult will it be to recruit participants in the future, when people understand that they risk volunteering in vain because the data from a trial can be buried?
See News doi: 10.1136/bmj.39503.656852.DB.
The Food and Drug Administration Amendments Act of 2007 is at http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=110_cong_public_laws&docid=f:publ085.110.pdf.
References
1. Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT. Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med 2008;5(2):e45.
2. Abramson J, Wright JM. Are lipid-lowering guidelines evidence-based? Lancet 2007;369:168-9.
3. Jüni P, Reichenbach S, Egger M. COX 2 inhibitors, traditional NSAIDs, and the heart. BMJ 2005;330:1342-3.
4. Hammad TA, Laughren T, Racoosin J. Suicidality in pediatric patients treated with antidepressant drugs. Arch Gen Psychiatry 2006;63:332-9.
5. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.
6. Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA 1999;281:1110-1.
7. Groves T. Mandatory disclosure of trial results for drugs and devices. BMJ 2008;336:170.
8. Bracken MB, Shepard MJ, Holford TR, Leo-Summers L, Aldrich EF, Fazl M, et al. Administration of methylprednisolone for 24 or 48 hours or tirilazad mesylate for 48 hours in the treatment of acute spinal cord injury: results of the third national acute spinal cord injury randomized controlled trial. National acute spinal cord injury study. JAMA 1997;277:1597-604.
9. Lenzer J. NIH secrets: study break. New Republic 2006 Oct 30. www.ahrp.org/cms/index2.php?option=com_content&do_pdf=1&id=398
10. Turner EH. A taxpayer-funded clinical trials registry and results database. PLoS Med 2004;1(3):e60.