Environ Health Perspect. 2013 Jun 14;121(9):985–992. doi: 10.1289/ehp.1206389

Instruments for Assessing Risk of Bias and Other Methodological Criteria of Published Animal Studies: A Systematic Review

David Krauth,1 Tracey J. Woodruff,2,3 and Lisa Bero1,4
PMCID: PMC3764080  PMID: 23771496

Abstract

Background: Results from animal toxicology studies are critical to evaluating the potential harm from exposure to environmental chemicals or the safety of drugs prior to human testing. However, there is significant debate about how to evaluate the methodology and potential biases of the animal studies. There is no agreed-upon approach, and a systematic evaluation of current best practices is lacking.

Objective: We performed a systematic review to identify and evaluate instruments for assessing the risk of bias and/or other methodological criteria of animal studies.

Method: We searched Medline (January 1966–November 2011) to identify all relevant articles. We extracted data on risk of bias criteria (e.g., randomization, blinding, allocation concealment) and other study design features included in each assessment instrument.

Discussion: Thirty distinct instruments were identified, with the total number of assessed risk of bias, methodological, and/or reporting criteria ranging from 2 to 25. The most common criteria assessed were randomization (25/30, 83%), investigator blinding (23/30, 77%), and sample size calculation (18/30, 60%). In general, authors failed to empirically justify why these or other criteria were included. Nearly all (28/30, 93%) of the instruments have not been rigorously tested for validity or reliability.

Conclusion: Our review highlights a number of risk of bias assessment criteria that have been empirically tested for animal research, including randomization, concealment of allocation, blinding, and accounting for all animals. In addition, there is a need for empirically testing additional methodological criteria and assessing the validity and reliability of a standard risk of bias assessment instrument.

Citation: Krauth D, Woodruff TJ, Bero L. 2013. Instruments for assessing risk of bias and other methodological criteria of published animal studies: a systematic review. Environ Health Perspect 121:985–992 (2013); http://dx.doi.org/10.1289/ehp.1206389

Introduction

Results from animal toxicology studies are a critical—and often the only—input to evaluating potential harm from exposure to environmental chemicals or the safety of drugs before they proceed to human testing. However, there is significant debate about how to use animal studies in risk assessments and other regulatory decisions (Adami et al. 2011; European Centre for Ecotoxicology and Toxicology of Chemicals 2009; Weed 2005; Woodruff and Sutton 2011). An important part of this debate is how to evaluate the methodology and potential biases of the animal studies in order to establish how confident one can be in the data.

For the evaluation of human clinical research, there is a distinction between assessing risk of bias and methodological quality (Higgins and Green 2008). Risks of bias are methodological criteria of a study that can introduce a systematic error in the magnitude or direction of the results (Higgins and Green 2008). In controlled human clinical trials testing the efficacy of drugs, studies with a high risk of bias—such as those lacking randomization, allocation concealment, or blinding of participants, personnel, and outcome assessors—produce larger treatment effect sizes, thus falsely inflating the efficacy of the drugs compared with studies that have these design features (Schulz et al. 1995; Schulz and Grimes 2002a, 2002b). Biased human studies assessing the harms of drugs are less likely to report statistically significant adverse effects (Nieto et al. 2007). An assessment of a study’s methodology includes evaluation of additional study criteria related to how a study is conducted (e.g., in compliance with human subjects guidelines) or reported (e.g., study population described). Finally, risk of bias is not the same as imprecision (Higgins and Green 2008). Whereas bias refers to systematic error, imprecision refers to random error. Although smaller studies are less precise, they may not be more biased.

Although there is a well-developed and empirically based literature on how to evaluate the risk of bias of randomized controlled clinical trials, less is known about how to do this for animal studies. Some risks of bias in animal studies have been identified empirically. For example, analyses of animal studies examining interventions for stroke, multiple sclerosis, and emergency medicine have shown that lack of randomization, blinding, specification of inclusion/exclusion criteria, statistical power, and use of comorbid animals are associated with inflated effect estimates of pharmaceutical interventions (Bebarta et al. 2003; Crossley et al. 2008; Minnerup et al. 2010; Sena et al. 2010; Vesterinen et al. 2010). However, these studies used a variety of instruments to evaluate the methodology of animal studies and often mixed assessment of risks of bias, reporting, and other study criteria.

Several guidelines and instruments for evaluating the risks of bias and other methodological criteria of animal research have been published, but there has been no attempt to compare the criteria that they include; to determine whether risk of bias, reporting, or other criteria are assessed; or to determine whether the criteria are based on empirical evidence of bias. The purpose of this review was 2-fold: a) to systematically identify and summarize existing instruments for assessing risks of bias and other methodological criteria of animal studies, and b) to highlight the criteria that have been empirically tested for an association with bias in either animal or clinical models.

Methods

Inclusion/exclusion criteria. Articles were included if they met the following criteria: a) the article was a published report focusing on the development of an instrument for assessing the methodology of animal studies, and b) the article was in English. Where multiple analyses using a single instrument were published separately, the earliest publication was used. Modifications or updates of previously published instruments were considered new instruments and included. We did not include applications of previously reported instruments that were used, for example, to assess a certain area of animal research.

Search strategy. We searched Medline for articles published from January 1966 through November 2011 using a search term combination developed with input from expert librarians. Bibliographies from relevant articles were also screened to find any remaining articles that were not captured from the Medline search. Our search strategy contained the following MeSH terms, text words, and word variants:

{(animal experimentation[mh]) AND (standards[sh] OR research design[mh] OR bias[tw] OR biases[tw] OR checklist*[tw] OR translational research/ethics)} OR {(animals, laboratory[majr] OR disease models, animal[mh] OR drug evaluation, preclinical[mh] OR chemical evaluation OR chemical toxicity OR chemical safety) AND (research[majr:noexp] OR translational research[majr] OR research design[majr] OR “quality criteria”) AND (guideline* OR bias[tw] OR biases[tiab] OR reporting[tw])} OR {(animal*[ti] OR preclinical[ti] OR pre-clinical[ti] OR toxicology OR toxicological OR ecotoxicology OR environmental toxicology) AND (methodological quality OR research reporting OR study quality OR “risk of bias” OR “weight of evidence”)} OR {(CAMARADES[tiab] OR “gold standard publication checklist” OR exclusion inclusion criteria animals bias) OR (peer review, research/standards AND Animals[Mesh:noexp])} OR {(models, biological[mh] OR drug evaluation, preclinical[mh] OR toxicology[mh] OR disease models, animal[majr]) AND (research design[mh] OR reproducibility of results[mh] OR “experimental design”) AND (quality control[mh] OR guidelines as topic[mh] OR bias[tw] OR “critical appraisal”) AND (Animals[Mesh:noexp])} AND eng[la].
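For illustration only, the following minimal sketch (not part of the original review) shows how a comparable Medline query could be run programmatically through the NCBI E-utilities using Biopython. The query string here is an abbreviated stand-in; the full Boolean expression is the one given above.

```python
# A sketch of querying PubMed/Medline via NCBI E-utilities (Biopython).
# The term below abbreviates the review's full search string.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = (
    "(animal experimentation[mh]) AND "
    "(standards[sh] OR research design[mh] OR bias[tw] OR checklist*[tw]) "
    "AND eng[la]"
)

# Restrict to the review's window: January 1966 through November 2011.
handle = Entrez.esearch(
    db="pubmed", term=query,
    mindate="1966/01/01", maxdate="2011/11/30", datetype="pdat",
    retmax=5000,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records matched;", len(record["IdList"]), "PMIDs retrieved")
```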

Article selection. Studies were screened in two stages. Initially, we reviewed abstracts and article titles, and only those articles meeting our inclusion criteria were further scrutinized by reading the full text. Any articles that did not clearly meet the criteria after review of the full text were discussed by two authors, who made the decision about inclusion. Exact article duplicates were removed using Endnote X2 software (Thomson Reuters, Carlsbad, CA).

Data extraction. We extracted data on each criterion included in each instrument, as well as information on how the instrument was developed.

Instrument development and characteristics. We recorded the method used to develop each instrument (i.e., whether the criteria in the instrument were selected based on consensus, previous animal instruments, and/or clinical instruments). We also recorded whether or not the criteria in the instrument were empirically tested to determine if they were associated with biased effect estimates. Empirical testing was rated as completed if at least one of the individual criteria had been empirically tested.

Numerical methodological “quality” scores have been shown to be invalid for assessing risk of bias in clinical research (Jüni et al. 1999). The current standard in evaluating clinical research is to report each component of the assessment instrument separately and not calculate an overall numeric score (Higgins and Green 2008). Although the use of quality scores is now considered inappropriate, it is still a common practice. Therefore, we also assessed whether and how each instrument calculated a “quality” score.
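To illustrate why component-level reporting is preferred over summed scores, the following hypothetical sketch (ours, not drawn from any reviewed instrument) shows two studies with identical totals but very different bias profiles:

```python
# Hypothetical illustration: two studies earn the same summed "quality
# score" while carrying different risks of bias, which is why current
# practice reports each component separately rather than a total.
studies = {
    "Study A": {"randomization": True,  "allocation_concealment": True,
                "blinding": False, "all_animals_accounted": False},
    "Study B": {"randomization": False, "allocation_concealment": False,
                "blinding": True,  "all_animals_accounted": True},
}

for name, criteria in studies.items():
    total = sum(criteria.values())
    print(f"{name}: summed score = {total}/4  <- identical totals")
    for criterion, met in criteria.items():
        print(f"  {criterion}: {'low' if met else 'high/unclear'} risk of bias")
```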

We also noted whether the instrument had been tested for reliability and validity. Reliability in assessing risk of bias refers to the extent to which results are consistent between different coders or in trials or measurements that are repeated (Carmines and Zeller 1979). Validity refers to whether the instrument measures what it was intended to measure, that is, methodological features that could affect research outcomes (Golafshani 2003).
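Agreement between coders on a yes/no risk of bias judgment is commonly quantified with Cohen's kappa; the sketch below, using hypothetical ratings, illustrates the calculation:

```python
# Inter-rater reliability for a single yes/no criterion via Cohen's kappa.
# The ratings below are hypothetical, for illustration only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1, p2 = Counter(rater1), Counter(rater2)
    expected = sum(p1[c] * p2[c] for c in set(rater1) | set(rater2)) / n**2
    return (observed - expected) / (1 - expected)

coder_a = ["Y", "Y", "N", "Y", "N", "N", "Y", "Y"]
coder_b = ["Y", "N", "N", "Y", "N", "Y", "Y", "Y"]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # 0.47 here
```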

Study design criteria to assess risk of bias and other methodological criteria. Based on published risk of bias assessment instruments for clinical research, we developed an a priori list of criteria and included additional criteria if they occurred in the review of the animal instruments (Cho and Bero 1994; Higgins and Green 2008; Jadad et al. 1996; Schulz et al. 2010).

We collected risk of bias, methodological, and reporting criteria because these three types of assessment criteria were often mixed in the individual instruments. The final list of these criteria is as follows:

  • Treatment allocation/randomization. Describes whether or not treatment was randomly allocated to animal subjects so that each subject has an equal likelihood of receiving the intervention.

  • Concealment of allocation. Describes whether or not procedures were used to protect against selection bias by ensuring that the treatment to be allocated is not known by the investigator before the subject enters the study.

  • Blinding. Relates to whether or not the investigator involved with performing the experiment, collecting data, and/or assessing the outcome of the experiment was unaware of which subjects received the treatment and which did not.

  • Inclusion/exclusion criteria. Describes the process used for including or excluding subjects.

  • Sample size calculation. Describes how the total number of animals used in the study was determined.

  • Compliance with animal welfare requirements. Describes whether or not the research investigators complied with animal welfare regulations.

  • Financial conflict of interest. Describes whether or not the investigator(s) disclosed any financial conflicts of interest.

  • Statistical model explained. Describes whether the statistical methods used and the unit of analysis are stated and whether the statistical methods are appropriate to address the research question.

  • Use of animals with comorbidity. Describes whether or not the animals used in the study have one or more preexisting conditions that place them at greater risk of developing the health outcome of interest or responding differently to the intervention relative to animals without that condition.

  • Test animal descriptions. Describes the test animal characteristics including animal species, strain, substrain, genetic background, age, supplier, sex, and weight. At least one of these characteristics must be present for this criterion to be met.

  • Dose–response model. Describes whether or not an appropriate dose–response model was used given the research question and disease being modeled.

  • All animals accounted for. Describes whether or not the investigator accounts for attrition bias by providing details about when animals were removed from the study and for what reason they were removed.

  • Optimal time window investigated. Describes whether or not the investigator allowed sufficient time to pass before assessing the outcome. The optimal time window used in animal research should reflect the time needed to see the outcome and depends on the hypothesis being tested. The optimal time window investigated should not be confused with the “therapeutic time window of treatment,” which is defined as the time interval after exposure or onset of disease during which an intervention can still be effectively administered (Candelario-Jalil et al. 2005).

We extracted data on the study design criteria assessed by each instrument. We recorded the number of criteria assessed for each instrument, excluding criteria related only to journal reporting requirements (e.g., headers in an abstract).

Analysis. Here we report the frequency of each criterion assessed, as well as the frequency of any additional criteria that were included in the instruments.
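A minimal sketch of this tally follows, using three instruments transcribed from Table 2 below; the full analysis covers all 30 instruments.

```python
# Counting how many instruments include each criterion, as in the bottom
# row of Table 2. Only three of the 30 instruments are shown here.
criteria = ["randomization", "allocation_concealment", "blinding"]

instruments = {
    "Vesterinen et al. 2011": {"randomization": "Y", "allocation_concealment": "Y", "blinding": "Y"},
    "Bebarta et al. 2003":    {"randomization": "Y", "allocation_concealment": "N", "blinding": "Y"},
    "Klimisch et al. 1997":   {"randomization": "N", "allocation_concealment": "N", "blinding": "N"},
}

for criterion in criteria:
    n = sum(inst[criterion] == "Y" for inst in instruments.values())
    pct = 100 * n / len(instruments)
    print(f"{criterion}: {n}/{len(instruments)} ({pct:.0f}%)")
```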

Results

As shown in Figure 1, we identified 3,731 potentially relevant articles. After screening the article titles and abstracts, we identified 88 citations for full-text evaluation. After full-text review, 60 papers were excluded for at least one of three reasons: a) they did not meet inclusion criteria; b) they reviewed a preexisting instrument; or c) they reported application of an instrument. After screening bibliographies, two additional instruments were found. Overall, 30 instruments were identified and included in the final analysis.

Figure 1. Flow of included studies. n indicates the number of studies.

Table 1 lists the criteria of each instrument. Of the 30 instruments, 13 were derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; 3 were derived from previously developed clinically based risk of bias assessment instruments or citing clinical studies supporting the inclusion of specific criteria; 5 were developed using evidence from clinical research and either through consensus or citing past instrument publications; 3 were developed through consensus and citing past publications; and 6 had no description of how they were developed.

Table 1.

Description of instruments for assessing risk of bias and methodological criteria of animal studies (n = 30).

Instrument identifier Method used to develop instrument No. of criteria Quality score calculated Specific disease modeled Instrument criteria empirically tested Intended use of instrument
Vesterinen et al. 2011 Developed using evidence from clinical research and either through consensus or citing past animal instrument publications. Instrument development was based on previous research studies and new criteria not captured by past publications. 12 No None No Preclinical drug research
Agerstrand et al. 2011 Based on consensus and citing past guidelines. Authors collaborated with researchers and regulators to develop the criteria, relied on previously published reports, drew from their own professional experiences, and received additional suggestions from ecotoxicologists from Brixham Environmental Laboratories/AstraZeneca and researchers within the MistraPharma research program. 25 No None No Environmental toxicology research (specifically environmental risk assessment of pharmaceuticals)
National Research Council Institute for Laboratory Animal Research 2011 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria. Evidence-based rationale for including specific criteria is provided. Expert laboratory animal researchers with scientific publishing experience formed the committee that developed these guidelines. 19 No None No General animal research
Lamontagne et al. 2010 Developed using evidence from clinical research and either through consensus or citing past animal instrument publications; relied on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement for determining relevant risk of bias criteria. Some of the criteria were incorporated into the risk of bias assessment based on clinical evidence showing an association between the criterion and overestimated treatment effect (Montori et al. 2005). 9 No Sepsis No Preclinical drug research
Conrad and Becker 2010 Developed through consensus and citing past guidelines; constructed using five previously developed quality assessment guidelines. 10 Yesa None No General animal research
Vesterinen et al. 2010 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; derived from the consensus statement “Good Laboratory Practice” for modeling stroke (Macleod et al. 2009). 5 No Multiple sclerosis Yes Preclinical drug research
Kilkenny et al. 2010 (the ARRIVE Guidelines) Developed using evidence from clinical research and either through consensus or citing past animal instrument publications; developed using the CONSORT (CONsolidated Standards of Reporting Trials) criteria, consensus, and consultation among scientists, statisticians, journal editors, and research funders. 13 No None No General animal research
Minnerup et al. 2010 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; derived from the STAIR (Stroke Therapy Academic Industry Roundtable) recommendations (STAIR 1999). 11 Yesb Stroke No Preclinical drug research
Hooijmans et al. 2010 (the gold standard publication checklist; GSPC) Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria. Many of the criteria in the GSPC are supported by previous studies showing the importance of such parameters. The authors also discussed and optimized the GSPC with animal science experts. 17 No None No General animal research
van der Worp et al. 2010 Developed using evidence from clinical research and either through consensus or citing past animal instrument publications; recommendations based largely on CONSORT and to a smaller extent on animal guidelines (Altman et al. 2001; Dirnagl 2006; Macleod et al. 2009; Sena et al. 2007; STAIR 1999). 9 No Stroke No Preclinical drug research
Macleod et al. 2009 Developed using evidence from clinical research and either through consensus or citing past animal instrument publications; criteria based on past meta-analyses done by CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) researchers and CONSORT. 9 No Stroke No Preclinical drug research
Fisher et al. 2009 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; updated the original STAIR guidelines (STAIR 1999). No description of how the new instrument was developed. 15 No Stroke No Preclinical drug research
Rice et al. 2008 Derived from previously developed clinically based risk of bias assessment instruments or citing clinical studies supporting the inclusion of specific criteria; modified form of the Jadad criteria (Jadad et al. 1996) used to assess clinical interventions. 6 No Animal pain models No Preclinical drug research
Sniekers et al. 2008 No description of how the instrument was developed. 7 No Osteoarthritis Yes Preclinical drug research
Sena et al. 2007 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; derived from four previous checklists: STAIR (1999), Amsterdam criteria (Horn et al. 2001), CAMARADES (Macleod et al. 2004), and Utrecht criteria (van der Worp et al. 2005). 21 No Stroke Yes Preclinical drug research
Unger 2007 No description of how the instrument was developed. 4 No None No Preclinical drug research
Hobbs et al. 2005 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; modified version of Australasian ecotoxicity database (AED) quality assessment scheme (Markich et al. 2002). 18 Yesc None No Environmental toxicology research
Marshall et al. 2005 Derived from previously developed clinically based risk of bias assessment instruments or citing clinical studies supporting the inclusion of specific criteria; this instrument was based on CONSORT. 10 No Shock/sepsis No Preclinical drug research
van der Worp et al. 2005 (Utrecht criteria) Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria. The checklist was derived from the STAIR criteria (STAIR 1999), and recommendations resemble the scale used by Horn et al. (2001). 9 Yes Stroke No Preclinical drug research
de Aguilar-Nascimento 2005 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; motivated by past research describing the importance of certain study design features (Festing 2003; Festing and Altman 2002; Johnson and Besselsen 2002). 9 No None No General animal research
Macleod et al. 2004 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; informed by previously published criteria (Horn et al. 2001; Jonas et al. 1999). 10 Yesd Stroke Yes Preclinical drug research
Bebarta et al. 2003 Derived from previously developed clinically based risk of bias assessment instruments or citing clinical studies supporting the inclusion of specific criteria; randomization and blinding were included based on evidence from human clinical trials showing that lack of these features often overestimates the magnitude of treatment effects. 2 No None Yes Preclinical drug research
Verhagen etal. 2003 No description of how the instrument was developed. 10 No None No General animal research
Festing and Altman 2002 Developed based on consensus and citing past guidelines; derived from published guidelines for contributors to medical journals (Altman et al. 2000), in vitro models (Festing 2001), and a previously published checklist (Festing and van Zutphen 1997). 10 No None No General animal research
Johnson and Besselsen 2002 No description of how the instrument was developed. 7 No None No General animal research
Lucas et al. 2002 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria. An 8-point rating system was developed based on two previous recommendations (Horn et al. 2001; STAIR 1999). 8 Yesd,e None Yes Preclinical drug research
Horn et al. 2001 (Amsterdam criteria) Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; derived in part from the original STAIR guidelines (STAIR 1999). 8 Yesf Stroke No Preclinical drug research
Durda and Preziosi 2000 Derived by modifying or updating previously developed animal research methodology assessment instruments or citing animal studies supporting the inclusion of specific criteria; compiled methodological requirements and acceptance criteria for ecotoxicology testing published by national and international governmental and testing organizations. 15 No None No Environmental toxicology research
Klimisch etal. 1997 No description of how the instrument was developed. 9 No None No Environmental toxicology research
Hsu 1993 No description of how the instrument was developed. 6 No Stroke No Preclinical drug research
aAlthough no specific methodological score was proposed, the authors did rank their criteria based on their relative importance. The authors also favor a scoring system that could be used to assign credits/points each time a criterion is present in a study and proposed several ideas for how to assign scores.
bDevelopment of the methodological scores was based on previous studies (Minnerup et al. 2008, 2009). To calculate a quality score, one point was awarded for each quality assessment criterion that was mentioned in a study.
cTo calculate the quality score, points were awarded if the assessment criteria were satisfied in the article. The scores given for each question were added to give an overall score, which was expressed as a percentage of the total possible score. Data were classified as unacceptable (≤50%), acceptable (51–79%), or high (≥80%).
dTo calculate the methodological score, one point was given for each criterion mentioned in the article.
eStudies containing total quality scores <5 were considered to be of “poor methodological quality”; studies with 5 or 6 points were considered to have “moderate methodological quality”; and studies with 7 or 8 points were considered to have “good methodological quality.”
fTo calculate the methodological score, one point was given for each criterion mentioned in the article. Studies scoring <4 were considered to be of “poor methodological quality,” and studies scoring ≥4 points were considered to be of “good methodological quality.”

Six instruments contained at least one criterion that had been empirically shown to be associated with inflated estimates of drug efficacy in animal models.

Seven instruments calculated a score for assessing methodological “quality.” Descriptions of how these scores were calculated are provided in Table 1. Sixteen of the instruments were not designed around a specific disease model; among those that were, the most commonly modeled disease was stroke (9 of 30 instruments).

Only 1 instrument was tested for validity (Sena et al. 2007), and 1 instrument was tested for reliability (Hobbs et al. 2005). Overall, 18 instruments were designed specifically to evaluate preclinical drug studies, 8 instruments documented general animal research guidelines, and 4 instruments were designed to assess environmental toxicology research.

The total number of risk of bias, methodological, and/or reporting criteria assessed by each instrument ranged from 2 to 25. Table 2 shows the study design criteria used to assess risk of bias for each of the 30 instruments. Although these criteria were included in at least some of the instruments, they were not all supported by empirical evidence of bias. Randomization and blinding were the two most common criteria found in existing instruments: 25 instruments included randomization, and 23 instruments included blinding. The need to provide a sample size calculation was listed in 18 instruments. None of the instruments contained all 13 criteria from our initial list; 2 instruments contained 9 criteria, and 4 instruments contained only 1 or 2 of the criteria.

Table 2.

Study design criteria aimed at reducing bias by instrument.

Instrument reference Random allocation of treatment Allocation concealment Blinding Inclusion/exclusion criteria stated Sample size calculation Compliance with animal welfare requirements Conflict of interest disclosed Statistical model explained Animals with comorbidity Test animal details Dose–response model Every animal accounted for Optimal time window used No. (%) of criteria in each instrument (n = 13)
Vesterinen et al. 2011a Y Y Y Y Y N Y Y N Y N Y N 9 (69)
Agerstrand et al. 2011a Y N N N N N N Y N Y Y N Y 5 (38)
National Research Council Institute for Laboratory Animal Research 2011a Y N Y Y N N N N N Y N Y N 5 (38)
Lamontagne et al. 2010a Y Y Y N Y N N N Y N N N N 5 (38)
Conrad and Becker 2010a N N N N N N Y N N N N N N 1 (8)
Vesterinen et al. 2010 Y N Y N Y Y Y N N N N N N 5 (38)
Kilkenny et al. 2010a Y N Y N Y Y Y Y N Y N N N 7 (54)
Minnerup et al. 2010a Y N Y N N Y Y N Y Y N N N 6 (46)
Hooijmans et al. 2010a Y N Y Y Y Y N Y N Y N Y N 8 (62)
van der Worp et al. 2010a Y Y Y Y Y N N Y N N N Y N 7 (54)
Macleod et al. 2009a Y Y Y Y Y N Y N N Y N Y N 8 (62)
Fisher et al. 2009a Y Y Y Y Y N Y Y Y N Y N N 9 (69)
Rice et al. 2008a Y N Y N Y N N N N Y N Y N 5 (38)
Sniekers et al. 2008a N N Y N Y N N N N Y N N Y 4 (31)
Sena et al. 2007a Y Y Y N Y Y Y N Y N Y N N 8 (62)
Unger 2007 Y N Y N N N N Y N N N Y N 4 (31)
Hobbs et al. 2005a N N N N N N N Y N Y Y N N 3 (23)
Marshall et al. 2005a Y N Y N Y N N N N Y N Y N 5 (38)
van der Worp et al. 2005a Y N Y N Y N N N Y N N N N 4 (31)
de Aguilar-Nascimento 2005a Y N Y N Y N N N N N N N N 3 (23)
Macleod et al. 2004a Y N Y N Y Y Y N N N N N N 5 (38)
Bebarta et al. 2003 Y N Y N N N N N N N N N N 2 (15)
Verhagen et al. 2003a N N N N N N N Y N N Y N N 2 (15)
Lucas et al. 2002a Y N Y N N N N N N N Y N N 3 (23)
Festing and Altman 2002a Y N Y N Y N N Y N Y N N N 5 (38)
Johnson and Besselsen 2002a Y N N N Y N N Y N N N N Y 4 (31)
Horn et al. 2001a Y N Y N N N N N N N Y N N 3 (23)
Durda and Preziosi 2000a Y N N N N N N Y N Y Y N N 4 (31)
Klimisch et al. 1997a N N N N N N N N N Y Y N N 2 (15)
Hsu 1993a Y N Y N Y N N N N N Y N N 4 (31)
No. (%) of instruments containing criterion (n = 30) 25 (83) 6 (20) 23 (77) 6 (20) 18 (60) 6 (20) 9 (30) 12 (40) 6 (20) 14 (47) 10 (33) 7 (23) 3 (10)
Abbreviations: Y, the criterion was present; N, the criterion was not present. aThe instrument contained additional criteria (see Supplemental Material, Table S1).

Additional criteria assessed by each instrument are listed in Supplemental Material, Table S1. Some of these criteria related to reporting requirements for the abstract, introduction, methods, results, and conclusions, rather than risk of bias criteria. These reporting criteria were not included in the count of risk of bias criteria assessed by an instrument. For example, Kilkenny et al. (2010) described the ARRIVE Guidelines as a 20-criteria instrument; however, we counted the ARRIVE Guidelines as a 13-criteria instrument because 7 of the original criteria pertain to reporting requirements. Fourteen instruments contained criteria to describe animal housing, husbandry, or physiological conditions. Inclusion of these criteria is empirically supported by studies showing that changes in housing conditions affect physiological and behavioral parameters in rodents (Duke et al. 2001; Gerdin et al. 2012). Among the 5 instruments that did not specify the need to use randomization, 4 stated that a control group should be used.

Discussion

In this systematic review we identified 30 instruments for assessing risk of bias and other methodological criteria of animal research. Identifying bias, the systematic error or deviation from the truth in actual results or inferences (Higgins and Green 2008), in animal research is important because animal studies are often the major or only evidence that forms the basis for regulatory or further research decisions. Our review highlights the variability in the development and content of instruments that are currently used to assess bias in animal research.

Most of the instruments were not tested for reliability or validity. One notable exception is the CAMARADES (Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies) instrument developed by Sena et al. (2007); these authors combined criteria from four previous instruments and showed that the instrument appears to have validity. Similarly, Hobbs et al. (2005) tested the reliability of a modified version of the Australasian ecotoxicity database (AED) instrument and found an improvement in reliability compared with the original AED instrument. Furthermore, most of the instruments were not developed on the basis of empirical evidence showing an association between specific study design criteria and bias in research outcomes. Only six instruments included criteria that were supported by data showing an association between a particular methodological criterion and effect size in animal studies (Bebarta et al. 2003; Lucas et al. 2002; Macleod et al. 2004; Sena et al. 2007; Sniekers et al. 2008; Vesterinen et al. 2010). Most of the instruments contain criteria based on expert judgment, and others extrapolate from evidence of risk of bias in human studies. In addition, seven instruments calculated a “quality score”; however, these scores are not considered a valid measure of risk of bias, and this practice should be discontinued (Jüni et al. 1999).

Types of bias that are known to influence the results of research include selection, performance, detection, and exclusion. These biases have been demonstrated in animal studies, and methodological criteria that can protect against the biases have been empirically tested.

Selection bias, which introduces systematic differences between baseline characteristics in treatment and control groups, can be minimized by randomization and concealment of allocation. Lack of randomization or concealment of allocation in animal studies biases research outcomes by altering effect sizes (Bebarta et al. 2003; Macleod et al. 2008; Sena et al. 2007; Vesterinen et al. 2010). Performance bias is the systematic difference between treatment and control groups with regard to care or exposure other than the intervention (Higgins and Green 2008). Detection bias refers to systematic differences between treatment and control groups with regard to how outcomes are assessed (Higgins and Green 2008). Blinding of investigators can protect against performance bias, and there is substantial evidence that lack of blinding in a variety of types of animal studies is associated with exaggerated effect sizes (Bebarta et al. 2003; Sena et al. 2007; Vesterinen et al. 2010). Blinding of outcome assessors is a primary way of reducing detection bias. There are many ways to achieve adequate blinding in animal studies, such as having coded data (blinding to treatment assignment) analyzed by a statistician who is independent of the rest of the research team. Exclusion bias refers to the systematic difference between treatment and control groups in the number of animals that were included in and completed the study. Accounting for all animals used in the study and using intention-to-treat analysis can reduce exclusion bias (Marshall et al. 2005).
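As a hypothetical illustration of how randomization, allocation concealment, and blinding can be operationalized in an animal experiment, the following sketch (ours, not from any reviewed instrument) assigns animals to coded groups, with the code-to-group key held by an independent party until outcomes are assessed:

```python
# Hypothetical sketch of randomized, concealed allocation: each animal gets
# an opaque code; the code-to-group key is held by a third party until the
# analysis is unblinded, protecting against selection, performance, and
# detection bias as described above.
import random
import uuid

def allocate(animal_ids, groups=("treatment", "control"), seed=None):
    rng = random.Random(seed)
    shuffled = animal_ids[:]
    rng.shuffle(shuffled)
    key = {}     # held by an independent party, not the experimenters
    labels = {}  # what experimenters and outcome assessors see
    for i, animal in enumerate(shuffled):
        code = uuid.uuid4().hex[:8]
        key[code] = groups[i % len(groups)]  # balanced random assignment
        labels[animal] = code
    return labels, key

labels, key = allocate([f"rat_{i:02d}" for i in range(8)], seed=42)
print(labels)  # blinded view
print(key)     # unblinding key, revealed only after outcome assessment
```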

Some criteria included in the animal research assessment instruments are not associated with bias. For example, a statement of compliance with animal welfare requirements is a reporting issue. Sample size calculations are often included as a criterion in animal research assessment instruments, but bias is not the same as imprecision. Whereas bias refers to systematic error, imprecision refers to random error, meaning that multiple replications of the same study will produce different effect estimates because of sampling variation (Higgins and Green 2008). Although larger and more precise studies may give a more accurate estimate of an effect, they are not necessarily less biased. Furthermore, sample size calculations can be greatly affected by the underlying assumptions made for the calculation (Bacchetti 2010). Although a sample size calculation is not a risk of bias criterion, it is an important characteristic to consider in evaluating an overall body of evidence.
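For example, under the common two-group design, the required group size follows directly from the assumed effect size, variance, significance level, and power; the sketch below (with hypothetical numbers) shows how strongly the result depends on those underlying assumptions:

```python
# Worked example of the standard two-group sample size formula,
# n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2 per group.
# Numbers are hypothetical, to show sensitivity to the assumed SD.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Animals per group to detect a mean difference delta (SD sigma)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

print(n_per_group(delta=10, sigma=10))  # 16 per group
print(n_per_group(delta=10, sigma=15))  # 36: a larger assumed SD more than doubles n
```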

Some of the criteria listed in the instruments are unique to animal studies. For example, in preclinical drug research, testing animals with comorbidities is necessary to identify whether or not candidate drugs retain efficacy in light of additional health complications and to more closely resemble the health status of humans. Empirical evidence supports the use of this criterion because studies that included healthy animals instead of animals with comorbidities overestimated the effect sizes of experimental stroke interventions by > 10% (Crossley et al. 2008). For environmental chemicals, use of comorbid animals could result in the opposite influence on effect size (i.e., to decrease it), and considering this as a criterion is consistent with recommendations to evaluate the influence of biological factors that may influence risk (National Research Council 2009). Timing of exposure also influences study outcome (Benatar 2007; van der Worp et al. 2010; Vesterinen et al. 2010), and some effects may be observed only for exposures that occur during certain developmental periods (National Research Council 2009). Sex, the nutritional status of experimental animals, and animal housing and husbandry conditions (Duke et al. 2001; Gerdin et al. 2012) could also affect the response to an intervention or environmental chemical exposure, but these criteria should be studied to determine if they introduce a systematic bias in results. These unique criteria have not been sufficiently included in the study instruments; even if these criteria do not produce systematic bias, they should be clearly described and reported in animal studies to aid interpretation of the findings (Marshall et al. 2005).

Although some risk of bias criteria have been investigated primarily in human studies, they warrant consideration for animal studies. A review of clinical studies showed that study funding sources and financial ties of investigators (including university- or industry-affiliated investigators) are associated with favorable research outcomes for the sponsors (Lundh et al. 2011). In that review, favorable research outcomes were defined as either increased effect sizes for drug efficacy studies or decreased effect sizes for studies of drug harm. Selective reporting of outcomes and failure to publish entire studies are considered important sources of bias in clinical studies; however, little is known about the extent of these biases in animal research (Hart et al. 2012; Rising et al. 2008).
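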

Further research should consider potential interactions between criteria for assessing risk of bias. Studies of existing instruments have tested the association between individual study design criteria and effect size using univariate models. Multiple regression models should be used to ascertain the relationship between a study design criterion and effect size while taking into account the other criteria in the model. Covariance between methodological criteria should also be examined; for example, randomized studies may be less likely to omit blinding than nonrandomized studies (van der Worp et al. 2010). Knowing the relative importance of these criteria would provide additional support for inclusion of specific criteria in risk of bias assessment instruments.
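A sketch of such a multivariable analysis on simulated data (ours, purely illustrative) might look as follows, regressing effect size on randomization and blinding indicators simultaneously:

```python
# Illustrative multiple regression of effect size on two design criteria
# at once, so each criterion's association is adjusted for the other.
# Data are simulated; this is not an analysis of the reviewed studies.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_studies = 40
randomized = rng.integers(0, 2, n_studies)
blinded = rng.integers(0, 2, n_studies)
# Simulated effect sizes: non-randomized, non-blinded studies inflated.
effect = 0.5 - 0.3 * randomized - 0.2 * blinded + rng.normal(0, 0.2, n_studies)

X = sm.add_constant(np.column_stack([randomized, blinded]))
model = sm.OLS(effect, X).fit()
print(model.summary(xname=["const", "randomized", "blinded"]))
```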

Most of the instruments identified in our study exclude some criteria that appear to be important for assessing bias in animal studies (e.g., allocation concealment). It is important to recognize that some authors purposely exclude certain criteria from their instruments to reduce complexity and unnecessary detail. The most complex instrument had 25 criteria (Agerstrand et al. 2011). The detailed level of reporting needed to apply the gold standard publication checklist (GSPC), which has 17 criteria, was one of the main criticisms of the checklist (Hooijmans et al. 2010).

Because many journals now allow online publication of supplemental data, risk of bias assessment should be less limited by a lack of space for reporting detailed methods. Reporting of clinical research has improved because risk of bias assessments for systematic reviews and other purposes have become more prevalent and standards for reporting have been implemented by journals (Turner et al. 2012). Recent calls for reporting criteria for animal studies (Landis et al. 2012; National Research Council Institute for Laboratory Animal Research 2011) recognize the need for improved reporting of animal research. As happened for clinical research, reporting of animal research is likely to improve if risk of bias assessments become more common.

Many of the instruments identified in our review were derived to evaluate preclinical animal drug research, which could limit their potential application in environmental health research. Although selection, detection, and performance biases are relevant for all animal research, some of the preclinical instruments contain criteria specific for assessing the quality of stroke research, such as the “avoidance of anesthetics with marked intrinsic neuroprotective properties” (Macleod et al. 2008; Sena et al. 2007). On the other hand, investigation of an optimal time window for outcome assessment (National Research Council 2009), the timing of the exposure (National Research Council 2009), and measurement of outcomes that are sensitive to the exposure at the appropriate time (Wood 2000) are particularly important for assessing animal studies of environmental exposures.

Study limitations. A limitation of our study is that we may not have identified all published assessment instruments for animal research. Our inclusion criteria allowed only articles published in English; therefore, we may have missed some instruments published in other languages. Furthermore, because we limited our search to articles indexed in Medline, articles indexed exclusively in Embase or some other database would have been missed. However, both our consultation with a librarian and the large pool of studies identified through the electronic search suggest that it was comprehensive.

Conclusions

In this review we identified a wide variety of instruments developed to evaluate animal studies. The individual criteria included in animal risk of bias assessment instruments should be empirically tested to determine their influence on research outcomes. Furthermore, these instruments need to be tested for validity and reliability. Finally, existing instruments (many of which were developed using stroke models) need to be tested on other animal models to ensure their relevance and generalizability to other systems.

Supplemental Material

The Supplemental Material is available online as a PDF (475 KB).

Acknowledgments

We thank G. Won [University of California, San Francisco (UCSF) Mount Zion Campus] for her assistance with developing the search strategy. We also thank D. Apollonio (UCSF Laurel Heights Campus), D. Dorman (North Carolina State University), and R. Philipps (UCSF Laurel Heights Campus) for reviewing this manuscript.

Footnotes

This study was funded by grant R21ES021028 from the National Institute of Environmental Health Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

The authors declare they have no actual or potential competing financial interests.

References

  1. Adami HO, Berry SC, Breckenridge CB, Smith LL, Swenberg JA, Trichopoulos D, et al. Toxicology and epidemiology: improving the science with a framework for combining toxicological and epidemiological evidence to establish causal inference. Toxicol Sci. 2011;122(2):223–234. doi:10.1093/toxsci/kfr113.
  2. Agerstrand M, Kuster A, Bachmann J, Breitholtz M, Ebert I, Rechenberg B, et al. Reporting and evaluation criteria as means towards a transparent use of ecotoxicity data for environmental risk assessment of pharmaceuticals. Environ Pollut. 2011;159(10):2487–2492. doi:10.1016/j.envpol.2011.06.023.
  3. Altman DG, Gore SM, Garner MJ, Pocock SJ. Statistical Guidelines for Contributors to Medical Journals. 2nd ed. London: BMJ Books; 2000.
  4. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134(8):663–694. doi:10.7326/0003-4819-134-8-200104170-00012.
  5. Bacchetti P. Current sample size conventions: flaws, harms, and alternatives. BMC Med. 2010;8:17. doi:10.1186/1741-7015-8-17 [Online 22 March 2010].
  6. Bebarta V, Luyten D, Heard K. Emergency medicine animal research: does use of randomization and blinding affect the results? Acad Emerg Med. 2003;10(6):684–687. doi:10.1111/j.1553-2712.2003.tb00056.x.
  7. Benatar M. Lost in translation: treatment trials in the SOD1 mouse and in human ALS. Neurobiol Dis. 2007;26(1):1–13. doi:10.1016/j.nbd.2006.12.015.
  8. Candelario-Jalil E, Mhadu NH, González-Falcón A, García-Cabrera M, Muñoz E, León OS, et al. Effects of the cyclooxygenase-2 inhibitor nimesulide on cerebral infarction and neurological deficits induced by permanent middle cerebral artery occlusion in the rat. J Neuroinflammation. 2005;2:3. doi:10.1186/1742-2094-2-3 [Online 18 January 2005].
  9. Carmines EG, Zeller RA. Reliability and Validity Assessment. Beverly Hills, CA: SAGE Publications Inc; 1979.
  10. Cho MK, Bero LA. Instruments for assessing the quality of drug studies published in the medical literature. JAMA. 1994;272(2):101–104.
  11. Conrad JW Jr, Becker RA. Enhancing credibility of chemical safety studies: emerging consensus on key assessment criteria. Environ Health Perspect. 2010;119:757–764. doi:10.1289/ehp.1002737 [Online 15 December 2010].
  12. Crossley NA, Sena E, Goehler J, Horn J, van der Worp B, Bath PM, et al. Empirical evidence of bias in the design of experimental stroke studies: a metaepidemiologic approach. Stroke. 2008;39(3):929–934. doi:10.1161/STROKEAHA.107.498725.
  13. de Aguilar-Nascimento JE. Fundamental steps in experimental design for animal studies. Acta Cir Bras. 2005;20(1):2–8. doi:10.1590/s0102-86502005000100002.
  14. Dirnagl U. Bench to bedside: the quest for quality in experimental stroke research. J Cereb Blood Flow Metab. 2006;26:1465–1478. doi:10.1038/sj.jcbfm.9600298.
  15. Duke JL, Zammit TG, Lawson DM. The effects of routine cage-changing on cardiovascular and behavioral parameters in male Sprague-Dawley rats. Contemp Top Lab Anim Sci. 2001;40(1):17–20.
  16. Durda JL, Preziosi DV. Data quality evaluation of toxicological studies used to derive ecotoxicological benchmarks. Hum Ecol Risk Assess. 2000;6(5):747–765.
  17. European Centre for Ecotoxicology and Toxicology of Chemicals. Framework for the Integration of Human and Animal Data in Chemical Risk Assessment. Technical Report No. 104. Brussels: European Centre for Ecotoxicology and Toxicology of Chemicals; 2009. Available: http://www.ecetoc.org/uploads/Publications/documents/TR%20104.pdf [accessed 16 July 2013].
  18. Festing MF. Guidelines for the design and statistical analysis of experiments in papers submitted to ATLA. Altern Lab Anim. 2001;29(4):427–446. doi:10.1177/026119290102900409.
  19. Festing MF. Principles: the need for better experimental design. Trends Pharmacol Sci. 2003;24(7):341–345. doi:10.1016/S0165-6147(03)00159-7.
  20. Festing MF, Altman DG. Guidelines for the design and statistical analysis of experiments using laboratory animals. ILAR J. 2002;43(4):244–258. doi:10.1093/ilar.43.4.244.
  21. Festing MF, van Zutphen LM. Guidelines for reviewing manuscripts on studies involving live animals. Synopsis of the workshop. In: Animal Alternatives, Welfare and Ethics. Amsterdam: Elsevier; 1997:405–410.
  22. Fisher M, Feuerstein G, Howells DW, Hurn PD, Kent TA, Savitz SI, et al. Update of the Stroke Therapy Academic Industry Roundtable preclinical recommendations. Stroke. 2009;40(6):2244–2250. doi:10.1161/STROKEAHA.108.541128.
  23. Gerdin AK, Igosheva N, Roberson LA, Ismail O, Karp N, Sanderson M, et al. Experimental and husbandry procedures as potential modifiers of the results of phenotyping tests. Physiol Behav. 2012;106(5):602–611. doi:10.1016/j.physbeh.2012.03.026.
  24. Golafshani N. Understanding reliability and validity in qualitative research. Qualitative Rep. 2003;8(4):597–607.
  25. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012;344:d7202. doi:10.1136/bmj.d7202 [Online 3 January 2012].
  26. Higgins JP, Green S. Cochrane Handbook for Systematic Reviews of Interventions. Chichester, UK: John Wiley & Sons Ltd; 2008.
  27. Hobbs DA, Warne MSJ, Markich SJ. Evaluation of criteria used to assess the quality of aquatic toxicity data. Integr Environ Assess Manag. 2005;1(3):174–180. doi:10.1897/2004-003r.1.
  28. Hooijmans CR, Leenaars M, Ritskes-Hoitinga M. A gold standard publication checklist to improve the quality of animal studies, to fully integrate the Three Rs, and to make systematic reviews more feasible. Altern Lab Anim. 2010;38(2):167–182. doi:10.1177/026119291003800208.
  29. Horn J, de Haan RJ, Vermeulen M, Luiten PG, Limburg M. Nimodipine in animal model experiments of focal cerebral ischemia: a systematic review. Stroke. 2001;32(10):2433–2438. doi:10.1161/hs1001.096009.
  30. Hsu CY. Criteria for valid preclinical trials using animal stroke models. Stroke. 1993;24(5):633–636. doi:10.1161/01.str.24.5.633.
  31. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17(1):1–12. doi:10.1016/0197-2456(95)00134-4.
  32. Johnson PD, Besselsen DG. Practical aspects of experimental design in animal research. ILAR J. 2002;43(4):202–206. doi:10.1093/ilar.43.4.202.
  33. Jonas S, Ayigari V, Viera D, Waterman P. Neuroprotection against cerebral ischemia. A review of animal studies and correlation with human trial results. Ann NY Acad Sci. 1999;890:2–3. doi:10.1111/j.1749-6632.1999.tb07975.x.
  34. Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999;282(11):1054–1060. doi:10.1001/jama.282.11.1054.
  35. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8(6):e1000412. doi:10.1371/journal.pbio.1000412 [Online 29 June 2010].
  36. Klimisch HJ, Andreae M, Tillmann U. A systematic approach for evaluating the quality of experimental toxicological and ecotoxicological data. Regul Toxicol Pharmacol. 1997;25(1):1–5. doi:10.1006/rtph.1996.1076.
  37. Lamontagne F, Briel M, Duffett M, Fox-Robichaud A, Cook DJ, Guyatt G, et al. Systematic review of reviews including animal studies addressing therapeutic interventions for sepsis. Crit Care Med. 2010;38(12):2401–2408. doi:10.1097/CCM.0b013e3181fa0468.
  38. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490(7419):187–191. doi:10.1038/nature11556.
  39. Lucas C, Criens-Poublon LJ, Cockrell CT, de Haan RJ. Wound healing in cell studies and animal model experiments by low level laser therapy; were clinical studies justified? A systematic review. Lasers Med Sci. 2002;17(2):110–134. doi:10.1007/s101030200018.
  40. Lundh A, Lexchin J, Sismondo S, Busuioc O, Bero L. Industry sponsorship and research outcome. Cochrane Database Syst Rev. 2011;12:MR000033. doi:10.1002/14651858.MR000033.pub2 [Online 12 December 2012].
  41. Macleod MR, Fisher M, O’Collins V, Sena ES, Dirnagl U, Bath PM, et al. Good Laboratory Practice: preventing introduction of bias at the bench. Stroke. 2009;40(3):e50–e52. doi:10.1161/STROKEAHA.108.525386.
  42. Macleod MR, O’Collins T, Howells DW, Donnan GA. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke. 2004;35(5):1203–1208. doi:10.1161/01.STR.0000125719.25853.20.
  43. Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, Donnan GA. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke. 2008;39(10):2824–2829. doi:10.1161/STROKEAHA.108.515957.
  44. Markich SJ, Warne MS, Westbury AM, Roberts CJ. A compilation of data on the toxicity of chemicals to species in Australasia. Part 3: Metals. Australas J Ecotoxicol. 2002;8:1–138.
  45. Marshall JC, Deitch E, Moldawer LL, Opal S, Redl H, van der Poll T. Preclinical models of shock and sepsis: what can they tell us? Shock. 2005;24(suppl 1):1–6. doi:10.1097/01.shk.0000191383.34066.4b.
  46. Minnerup J, Heidrich J, Rogalewski A, Schabitz WR, Wellmann J. The efficacy of erythropoietin and its analogues in animal stroke models: a meta-analysis. Stroke. 2009;40(9):3113–3120. doi:10.1161/STROKEAHA.109.555789.
  47. Minnerup J, Heidrich J, Wellmann J, Rogalewski A, Schneider A, Schabitz WR. Meta-analysis of the efficacy of granulocyte-colony stimulating factor in animal models of focal cerebral ischemia. Stroke. 2008;39(6):1855–1861. doi:10.1161/STROKEAHA.107.506816.
  48. Minnerup J, Wersching H, Diederich K, Schilling M, Ringelstein EB, Wellmann J, et al. Methodological quality of preclinical stroke studies is not required for publication in high-impact journals. J Cereb Blood Flow Metab. 2010;30(9):1619–1624. doi:10.1038/jcbfm.2010.74.
  49. Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, Briel M, et al. Randomized trials stopped early for benefit: a systematic review. JAMA. 2005;294(17):2203–2209. doi:10.1001/jama.294.17.2203.
  50. National Research Council. Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press; 2009. Available: http://www.nap.edu/catalog.php?record_id=12209 [accessed 16 July 2013].
  51. National Research Council Institute for Laboratory Animal Research. Guidance for the Description of Animal Research in Scientific Publications. Washington, DC: National Academies Press; 2011.
  52. Nieto A, Mazon A, Pamies R, Linana JJ, Lanuza A, Jimenez FO, et al. Adverse effects of inhaled corticosteroids in funded and nonfunded studies. Arch Intern Med. 2007;167(19):2047–2053. doi:10.1001/archinte.167.19.2047.
  53. Rice AS, Cimino-Brown D, Eisenach JC, Kontinen VK, Lacroix-Fralish ML, Machin I, et al. Animal models and the prediction of efficacy in clinical trials of analgesic drugs: a critical appraisal and call for uniform reporting standards. Pain. 2008;139(2):243–247. doi:10.1016/j.pain.2008.08.017.
  54. Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med. 2008;5(11):e217. doi:10.1371/journal.pmed.0050217 [Online 25 November 2008].
  55. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332. doi:10.1136/bmj.c332 [Online 24 March 2010].
  56. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273(5):408–412. doi:10.1001/jama.273.5.408.
  57. Schulz KF, Grimes DA. Allocation concealment in randomised trials: defending against deciphering. Lancet. 2002a;359(9306):614–618. doi:10.1016/S0140-6736(02)07750-4.
  58. Schulz KF, Grimes DA. Blinding in randomised trials: hiding who got what. Lancet. 2002b;359(9307):696–700. doi:10.1016/S0140-6736(02)07816-9.
  59. Sena ES, Briscoe CL, Howells DW, Donnan GA, Sandercock PA, Macleod MR. Factors affecting the apparent efficacy and safety of tissue plasminogen activator in thrombotic occlusion models of stroke: systematic review and meta-analysis. J Cereb Blood Flow Metab. 2010;30(12):1905–1913. doi:10.1038/jcbfm.2010.116.
  60. Sena E, van der Worp HB, Howells D, Macleod M. How can we improve the pre-clinical development of drugs for stroke? Trends Neurosci. 2007;30(9):433–439. doi:10.1016/j.tins.2007.06.009.
  61. Sniekers YH, Weinans H, Bierma-Zeinstra SM, van Leeuwen JP, van Osch GJ. Animal models for osteoarthritis: the effect of ovariectomy and estrogen treatment—a systematic approach. Osteoarthritis Cartilage. 2008;16(5):533–541. doi:10.1016/j.joca.2008.01.002.
  62. STAIR (Stroke Therapy Academic Industry Roundtable). Recommendations for standards regarding preclinical neuroprotective and restorative drug development. Stroke. 1999;30(12):2752–2758. doi:10.1161/01.str.30.12.2752.
  63. Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012;1:60. doi:10.1186/2046-4053-1-60 [Online 29 November 2012].
  64. Unger EF. All is not well in the world of translational research. J Am Coll Cardiol. 2007;50(8):738–740. doi:10.1016/j.jacc.2007.04.067.
  65. van der Worp HB, de Haan P, Morrema E, Kalkman CJ. Methodological quality of animal studies on neuroprotection in focal cerebral ischaemia. J Neurol. 2005;252(9):1108–1114. doi:10.1007/s00415-005-0802-3.
  66. van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O’Collins V, et al. Can animal models of disease reliably inform human studies? PLoS Med. 2010;7(3):e1000245. doi:10.1371/journal.pmed.1000245 [Online 30 March 2010].
  67. Verhagen H, Aruoma OI, van Delft JH, Dragsted LO, Ferguson LR, Knasmüller S, et al. The 10 basic requirements for a scientific paper reporting antioxidant, antimutagenic or anticarcinogenic potential of test substances in in vitro experiments and animal studies in vivo. Food Chem Toxicol. 2003;41(5):603–610. doi:10.1016/s0278-6915(03)00025-5.
  68. Vesterinen HM, Egan K, Deister A, Schlattmann P, Macleod MR, Dirnagl U. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism. J Cereb Blood Flow Metab. 2011;31(4):1064–1072. doi:10.1038/jcbfm.2010.217.
  69. Vesterinen HM, Sena ES, ffrench-Constant C, Williams A, Chandran S, Macleod MR. Improving the translational hit of experimental treatments in multiple sclerosis. Mult Scler. 2010;16(9):1044–1055. doi:10.1177/1352458510379612.
  70. Weed DL. Weight of evidence: a review of concept and methods. Risk Anal. 2005;25(6):1545–1557. doi:10.1111/j.1539-6924.2005.00699.x.
  71. Wood PA. Phenotype assessment: are you missing something? Comp Med. 2000;50(1):12–15.
  72. Woodruff TJ, Sutton P. An evidence-based medicine methodology to bridge the gap between clinical and environmental health sciences. Health Aff (Millwood). 2011;30(5):931–937. doi:10.1377/hlthaff.2010.1219.

