Editorial

BMJ. 2007 Jan 27;334(7586):163–164. doi: 10.1136/bmj.39104.362951.80

Translating animal research into clinical benefit

Daniel G Hackam 1
PMCID: PMC1782020  PMID: 17255568

Abstract

Poor methodological standards in animal studies mean that positive results rarely translate to the clinical domain


Most treatments are initially tested on animals for several reasons. Firstly, animal studies provide a degree of environmental and genetic manipulation rarely feasible in humans.1 Secondly, it may not be necessary to test new treatments on humans if preliminary testing on animals shows that they are not clinically useful. Thirdly, regulatory authorities concerned with public protection require extensive animal testing to screen new treatments for toxicity and to establish safety. Finally, animal studies provide unique insights into the pathophysiology and aetiology of disease, and often reveal novel targets for directed treatments. Yet in a systematic review reported in this week's BMJ Perel and colleagues find that therapeutic efficacy in animals often does not translate to the clinical domain.2

The authors conducted meta-analyses of all available animal data for six interventions that showed definitive proof of benefit or harm in humans. For three of the interventions—corticosteroids for brain injury, antifibrinolytics in haemorrhage, and tirilazad for acute ischaemic stroke—they found major discordance between the results of the animal experiments and human trials. Equally concerning, they found consistent methodological flaws throughout the animal data, irrespective of the intervention or disease studied. For example, only eight of the 113 animal studies on thrombolysis for stroke reported a sample size calculation, a fundamental step in helping to ensure an appropriately powered precise estimate of effect. In addition, the use of randomisation, concealed allocation, and blinded outcome assessment—standards that are considered the norm when planning and reporting modern human clinical trials—were inconsistent in the animal studies.
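The sample size calculation that most of the animal studies omitted can be sketched in a few lines. The figures below (event rates of 30% v 20%, two sided α of 0.05, 80% power) are hypothetical choices for illustration, not values from the review; the formula is the standard normal approximation for comparing two proportions.

```python
import math

def sample_size_two_proportions(p1, p2):
    """Per-group sample size for detecting a difference between two
    proportions, using the normal approximation with a two sided
    alpha of 0.05 and 80% power (critical values hard coded)."""
    z_alpha = 1.959964  # two sided 5% critical value
    z_beta = 0.841621   # corresponds to 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: detecting a fall in event rate from 30% to 20%
n_per_group = sample_size_two_proportions(0.30, 0.20)  # 294 animals per group
```

An experiment planned without such a calculation may be far too small to yield the "appropriately powered precise estimate of effect" the review refers to.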

A limitation of the review is that only six interventions for six conditions were analysed; this raises questions about its applicability across the spectrum of experimental medicine. Others have found consistent results, however. In an overview of similar correlative reviews between animal studies and human trials, Pound and colleagues found that the results of only one—thrombolytics for acute ischaemic stroke—showed similar findings for humans and animals.3 In our systematic review of 76 highly cited (and therefore probably influential) animal studies, we found that only just over a third translated at the level of human randomised trials.4 Similar results have been reported in cancer research.5

Why then are the results of animal studies often not replicated in the clinical domain? Several possible explanations exist. A consistent finding is the presence of methodological biases in animal experimentation; the lack of uniform requirements for reporting animal data has compounded this problem. A series of systematic reviews has shown that the effect size of animal studies is sensitive to the quality of the study and publication bias.6 7 8 A review of 290 animal experiments presented at emergency medicine meetings found that animal studies that did not use randomisation or blinding were much more likely to report a treatment effect than studies that were randomised or blinded.9

A second explanation is that animal models may not adequately mimic human pathophysiology. Test animals are often young, rarely have comorbidities, and are not exposed to the range of competing (and interacting) interventions that humans often receive. The timing, route, and formulation of the intervention may also introduce problems. Most animal experiments have a limited sample size. Animal studies with small sample sizes are more likely to report higher estimates of effect than studies with larger numbers; this distortion usually regresses when all available studies are analysed in aggregate.10 11 To compound the problem, investigators may select positive animal data but ignore equally valid but negative work when planning clinical trials, a phenomenon known as optimism bias.12
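The interaction between small samples and selective publication can be shown with a toy simulation (all numbers here are assumptions chosen for illustration, not data from the studies cited): when only statistically significant results of small studies reach print, the published estimates systematically overshoot the true effect.

```python
import random
import statistics

# Simulate many small two-arm studies of a treatment with a modest true
# effect, then "publish" only those reaching conventional significance.
random.seed(1)
true_effect = 0.2             # hypothetical true standardised mean difference
n_per_arm = 20                # a typically small animal study
se = (2 / n_per_arm) ** 0.5   # standard error of the estimated difference

published = []
for _ in range(10_000):
    estimate = random.gauss(true_effect, se)
    if abs(estimate / se) > 1.96:   # significance filter (p < 0.05)
        published.append(estimate)

# The mean of the "published" estimates is well above the true effect,
# mirroring the inflation that regresses once all studies are pooled.
mean_published = statistics.mean(published)
```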

What can be done to remedy this situation? Firstly, uniform reporting requirements are needed urgently and would improve the quality of animal research; as in the clinical research world, this would require cooperation between investigators, editors, and funders of basic scientific research. A more immediate solution is to promote rigorous systematic reviews of experimental treatments before clinical trials begin. Many clinical trials would probably not have gone ahead if all the data had been subjected to meta-analysis. Such reviews would also provide robust estimates of effect size and variance for adequately powering randomised trials.
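The pooling step behind such a review can be sketched with a minimal fixed effect (inverse variance) calculation; the three study results below are hypothetical, but the output is exactly the pooled effect size and variance that could then feed a power calculation for a randomised trial.

```python
# Each study contributes an effect estimate (for example a log odds
# ratio) and its variance; fixed effect pooling weights each study by
# the inverse of its variance. Hypothetical inputs for illustration.
studies = [        # (effect estimate, variance)
    (0.50, 0.10),
    (0.30, 0.20),
    (0.70, 0.05),
]

weights = [1 / var for _, var in studies]
pooled_effect = (sum(w * eff for w, (eff, _) in zip(weights, studies))
                 / sum(weights))
pooled_variance = 1 / sum(weights)  # precision of the pooled estimate
```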

A third solution, which Perel and colleagues call for, is a system for registering animal experiments, analogous to that for clinical trials. This would help to reduce publication bias and provide a more informed view before proceeding to clinical trials. Until such improvements occur, it seems prudent to be critical and cautious about the applicability of animal data to the clinical domain.

Competing interests: None declared.

References

  • 1.Lemon R, Dunnett SB. Surveying the literature from animal experiments. BMJ 2005;330:977-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Perel P, Roberts I, Sena E, Wheble P, Briscoe C, Sandercock P, et al. Comparison of treatment effects between animal experiments and clinical trials: systematic review. BMJ 2007. doi: 10.1136/bmj.39048.407928.BE [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Pound P, Ebrahim S, Sandercock P, Bracken MB, Roberts I. Where is the evidence that animal research benefits humans? BMJ 2004;328:514-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Hackam DG, Redelmeier DA. Translation of research evidence from animals to humans. JAMA 2006;296:1731-2. [DOI] [PubMed] [Google Scholar]
  • 5.Corpet DE, Pierre F. How good are rodent models of carcinogenesis in predicting efficacy in humans? A systematic review and meta-analysis of colon chemoprevention in rats, mice and men. Eur J Cancer 2005;41:1911-22. [DOI] [PubMed] [Google Scholar]
  • 6.Macleod MR, O'Collins T, Horky LL, Howells DW, Donnan GA. Systematic review and meta-analysis of the efficacy of FK506 in experimental stroke. J Cereb Blood Flow Metab 2005;25:713-21. [DOI] [PubMed] [Google Scholar]
  • 7.Macleod MR, O'Collins T, Howells DW, Donnan GA. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke 2004;35:1203-8. [DOI] [PubMed] [Google Scholar]
  • 8.O'Collins VE, Macleod MR, Donnan GA, Horky LL, van der Worp BH, Howells DW. 1,026 experimental treatments in acute stroke. Ann Neurol 2006;59:467-77. [DOI] [PubMed] [Google Scholar]
  • 9.Bebarta V, Luyten D, Heard K. Emergency medicine animal research: does use of randomization and blinding affect the results? Acad Emerg Med 2003;10:684-7. [DOI] [PubMed] [Google Scholar]
  • 10.Lee DS, Nguyen QT, Lapointe N, Austin PC, Ohlsson A, Tu JV, et al. Meta-analysis of the effects of endothelin receptor blockade on survival in experimental heart failure. J Card Fail 2003;9:368-74. [DOI] [PubMed] [Google Scholar]
  • 11.Roberts I, Kwan I, Evans P, Haig S. Does animal experimentation inform human health care? Observations from a systematic review of international animal experiments on fluid resuscitation. BMJ 2002;324:474-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Chalmers I, Matthews R. What are the implications of optimism bias in clinical research? Lancet 2006;367:449-50. [DOI] [PubMed] [Google Scholar]

Articles from BMJ : British Medical Journal are provided here courtesy of BMJ Publishing Group
