Author manuscript; available in PMC: 2021 Jun 8.
Published in final edited form as: Exp Physiol. 2020 Jul 14;105(9):1459–1466. doi: 10.1113/EP088870

The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research

Nathalie Percie du Sert 1, Viki Hurst 2, Amrita Ahluwalia 3,4, Sabina Alam 5, Marc T Avey 6, Monya Baker 7, William J Browne 8, Alejandra Clark 9, Innes C Cuthill 10, Ulrich Dirnagl 11, Michael Emerson 12, Paul Garner 13, Stephen T Holgate 14, David W Howells 15, Natasha A Karp 16, Stanley E Lazic 17, Katie Lidster 18, Catriona J MacCallum 19, Malcolm Macleod 20,21, Esther J Pearl 22, Ole H Petersen 23, Frances Rawle 24, Penny Reynolds 25, Kieron Rooney 26, Emily S Sena 27, Shai D Silberberg 28, Thomas Steckler 29, Hanno Würbel 30
PMCID: PMC7610926  EMSID: EMS122927  PMID: 32666546

Abstract

Reproducible science requires transparent reporting. The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour, and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here, we introduce ARRIVE 2.0. The guidelines have been updated and information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise and divide the items of the guidelines into 2 sets, the “ARRIVE Essential 10,” which constitutes the minimum requirement, and the “Recommended Set,” which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation. This helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration document, which serves (1) to explain the rationale behind each item in the guidelines, (2) to clarify key concepts, and (3) to provide illustrative examples. We aim, through these changes, to help ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.

Why good reporting is important

In recent years, concerns about the reproducibility of research findings have been raised by scientists, funders, research users, and policy makers (Begley & Ioannidis, 2015; Goodman, Fanelli, & Ioannidis, 2016). Factors that contribute to poor reproducibility include flawed study design and analysis, variability and inadequate validation of reagents and other biological materials, insufficient reporting of methodology and results, and barriers to accessing data (Freedman, Venugopalan, & Wisman, 2017). The bioscience community has introduced a range of initiatives to address the problem, from open access and open practices to enable the scrutiny of all aspects of the research (Else, 2018; Kidwell et al., 2016) through to study preregistration to shift the focus towards robust methods rather than the novelty of the results (Chambers, Forstmann, & Pruszynski, 2017; Nosek, Ebersole, DeHaven, & Mellor, 2018), as well as resources to improve experimental design and statistical analysis (Bate & Clark, 2014; Percie du Sert et al., 2017; Lazic, 2016).

Transparent reporting of research methods and findings is an essential component of reproducibility. Without this, the methodological rigour of the studies cannot be adequately scrutinised, the reliability of the findings cannot be assessed, and the work cannot be repeated or built upon by others. Despite the development of specific reporting guidelines for preclinical and clinical research, evidence suggests that scientific publications often lack key information and that there continues to be considerable scope for improvement (Glasziou et al., 2014; Hackam & Redelmeier, 2006; Kilkenny et al., 2009; Macleod et al., 2009; 2015; McCance, 1995; Rice et al., 2009; van der Worp et al., 2010). Animal research is a good case in point, where poor reporting impacts on the development of therapeutics and irreproducible findings can spawn an entire field of research, or trigger clinical studies, subjecting patients to interventions unlikely to be effective (Begley & Ellis, 2012; Begley & Ioannidis, 2015; Scott et al., 2008).

In an attempt to improve the reporting of animal research, the ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) were published in 2010. The guidelines consist of a checklist of the items that should be included in any manuscript that reports in vivo experiments, to ensure a comprehensive and transparent description (Kilkenny & Altman, 2010; Kilkenny, Browne, Cuthill, Emerson, & Altman, 2010a; 2010b; 2010c; 2010d; 2010e; McGrath, Drummond, McLachlan, Kilkenny, & Wainwright, 2010; Kilkenny et al., 2011; Kilkenny et al., 2012a; 2012b). They apply to any area of research using live animal species and are especially pertinent to describe comparative research in the laboratory or other formal test setting. The guidelines are also relevant in a wider context, for example, for observational research, studies conducted in the field, and where animal tissues are used. In the 10 years since publication, the ARRIVE guidelines have been endorsed by more than a thousand journals from across the life sciences. Endorsement typically includes advocating their use in guidance to authors and reviewers. However, despite this level of support, recent studies have shown that important information as set out in the ARRIVE guidelines is still missing from most publications sampled. This includes details on randomisation (reported in only 30%-40% of publications), blinding (reported in only approximately 20% of publications), sample size justification (reported in less than 10% of publications), and animal characteristics (all basic characteristics reported in less than 10% of publications) (Avey et al., 2016; Leung et al., 2018; Macleod et al., 2015).

Evidence suggests that 2 main factors limit the impact of the ARRIVE guidelines. The first is the extent to which editorial and journal staff are actively involved in enforcing reporting standards. This is illustrated by a randomised controlled trial at PLOS ONE, designed to test the effect of requesting a completed ARRIVE checklist in the manuscript submission process. This single editorial intervention, which did not include further verification from journal staff, failed to improve the disclosure of information in published papers (Hair et al., 2019). In contrast, other studies using shorter checklists (primarily focused on experimental design) with more editorial follow-up have shown a marked improvement in the nature and detail of the information included in publications (Han et al., 2017; Ramirez et al., 2017; The NPQIP Collaborative group, 2019). It is likely that the level of resource required from journals and editors currently prohibits the implementation of all the items of the ARRIVE guidelines.

The second issue is that researchers and other individuals and organisations responsible for the integrity of the research process are not sufficiently aware of the consequences of incomplete reporting. There is some evidence that awareness of ARRIVE is linked to the use of more rigorous experimental design standards (Reichlin, Vogt, & Wurbel, 2016); however, researchers are often unfamiliar with the much larger systemic bias in the publication of research and in the reliability of certain findings and even of entire fields (Fraser, Parker, Nakagawa, Barnett, & Fidler, 2018; Hair et al., 2019; Hurst & Percie du Sert, 2017; The Academy of Medical Sciences, 2015). This lack of understanding affects how experiments are designed and grant proposals prepared, how animals are used and data recorded in the laboratory, and how manuscripts are written by authors or assessed by journal staff, editors, and reviewers.

Approval for experiments involving animals is generally based on a harm-benefit analysis, weighing the harms to the animals involved against the benefits of the research to society. If the research is not reported in enough detail, even when conducted rigorously, the benefits may not be realised, and the harm-benefit analysis and public trust in the research are undermined (Sena & Currie, 2019). As a community, we must do better to ensure that, where animals are used, the research is both well designed and analysed as well as transparently reported. Here, we introduce the revised ARRIVE guidelines, referred to as ARRIVE 2.0. The information included has been updated, extended, and reorganised to facilitate the use of the guidelines, helping to ensure that researchers, editors, and reviewers—as well as other relevant journal staff—are better equipped to improve the rigour and reproducibility of animal research.

Introducing ARRIVE 2.0

In ARRIVE 2.0, we have improved the clarity of the guidelines, prioritised the items, added new information, and generated the accompanying Explanation and Elaboration (E&E) document to provide context and rationale for each item (Percie du Sert et al., 2020) (also available at https://www.arriveguidelines.org). New additions comprise inclusion and exclusion criteria, which are a key aspect of data handling and prevent the ad hoc exclusion of data (Landis et al., 2012); protocol registration, a recently emerged approach that promotes scientific rigour and encourages researchers to carefully consider the experimental design and analysis plan before any data are collected (Kimmelman & Anderson, 2012); and data access, in line with the FAIR Data Principles (Findable, Accessible, Interoperable, Reusable) (Wilkinson et al., 2016). S1 Table summarises the changes.

The most significant departure from the original guidelines is the classification of items into 2 prioritised groups, as shown in Tables 1 and 2. There is no ranking of the items within each group. The first group is the “ARRIVE Essential 10,” which describes information that is the basic minimum to include in a manuscript, as without this information, reviewers and readers cannot confidently assess the reliability of the findings presented. It includes details on the study design, the sample size, measures to reduce subjective bias, outcome measures, statistical methods, the animals, experimental procedures, and results. The second group, referred to as the “Recommended Set,” adds context to the study described. This includes the ethical statement, declaration of interest, protocol registration, and data access, as well as more detailed information on the methodology such as animal housing, husbandry, care, and monitoring. Items on the abstract, background, objectives, interpretation, and generalisability also describe what to include in the more narrative parts of a manuscript.

Table 1. ARRIVE Essential 10.

ARRIVE Essential 10
Study design 1 For each experiment, provide brief details of study design including:
(a) The groups being compared, including control groups. If no control group has been used, the rationale should be stated.
(b) The experimental unit (e.g. a single animal, litter, or cage of animals).
Sample size 2 (a) Specify the exact number of experimental units allocated to each group, and the total number in each experiment. Also indicate the total number of animals used.
(b) Explain how the sample size was decided. Provide details of any a priori sample size calculation, if done.
Inclusion and exclusion criteria 3 (a) Describe any criteria used for including and excluding animals (or experimental units) during the experiment, and data points during the analysis. Specify if these criteria were established a priori. If no criteria were set, state this explicitly.
(b) For each experimental group, report any animals, experimental units, or data points not included in the analysis and explain why. If there were no exclusions, state so.
(c) For each analysis, report the exact value of n in each experimental group.
Randomisation 4 (a) State whether randomisation was used to allocate experimental units to control and treatment groups. If done, provide the method used to generate the randomisation sequence.
(b) Describe the strategy used to minimise potential confounders such as the order of treatments and measurements, or animal/cage location. If confounders were not controlled, state this explicitly.
Blinding 5 Describe who was aware of the group allocation at the different stages of the experiment (during the allocation, the conduct of the experiment, the outcome assessment, and the data analysis).
Outcome measures 6 (a) Clearly define all outcome measures assessed (e.g. cell death, molecular markers, or behavioural changes).
(b) For hypothesis-testing studies, specify the primary outcome measure, i.e., the outcome measure that was used to determine the sample size.
Statistical methods 7 (a) Provide details of the statistical methods used for each analysis, including software used.
(b) Describe any methods used to assess whether the data met the assumptions of the statistical approach, and what was done if the assumptions were not met.
Experimental animals 8 (a) Provide species-appropriate details of the animals used, including species, strain and substrain, sex, age or developmental stage, and, if relevant, weight.
(b) Provide further relevant information on the provenance of animals, health/immune status, genetic modification status, genotype, and any previous procedures.
Experimental procedures 9 For each experimental group, including controls, describe the procedures in enough detail to allow others to replicate them, including:
(a) What was done, how it was done, and what was used.
(b) When and how often.
(c) Where (including detail of any acclimatisation periods).
(d) Why (provide rationale for procedures).
Results 10 For each experiment conducted, including independent replications, report:
(a) Summary/descriptive statistics for each experimental group, with a measure of variability where applicable (e.g. mean and SD, or median and range).
(b) If applicable, the effect size with a confidence interval.

Explanations and examples for items 1 to 10 are available in the Explanation and Elaboration document (Percie du Sert et al., 2020) and on the website at https://www.arriveguidelines.org.

Abbreviations: ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments)
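Two of the Essential 10 items lend themselves to a concrete illustration: generating a reportable randomisation sequence (item 4) and reporting an effect size with a confidence interval (item 10). The following Python sketch is our illustration only, not part of the guidelines; the function names and the normal-approximation interval are assumptions made for brevity.

```python
import math
import random
import statistics

def allocation_sequence(n_units, groups, seed):
    """Randomly allocate experimental units to groups (item 4).
    Reporting the seed (or the method used) makes the sequence
    reproducible and therefore reportable."""
    # Balanced labels, then a seeded shuffle to randomise the order.
    labels = [groups[i % len(groups)] for i in range(n_units)]
    rng = random.Random(seed)
    rng.shuffle(labels)
    return labels

def mean_difference_ci(a, b, z=1.96):
    """Difference in group means with an approximate 95% CI (item 10).
    Uses a simple normal approximation; illustrative only."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff, (diff - z * se, diff + z * se)

seq = allocation_sequence(10, ["control", "treatment"], seed=42)
print(seq)
```

In a real study the planned statistical model would usually dictate the interval (e.g. a t-based interval rather than the normal approximation shown here); the point of the sketch is that both the allocation method and the effect size with its interval are concrete, checkable pieces of information to include in the manuscript.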

Table 2. ARRIVE Recommended Set.

Recommended Set
Abstract 11 Provide an accurate summary of the research objectives, animal species, strain and sex, key methods, principal findings, and study conclusions.
Background 12 (a) Include sufficient scientific background to understand the rationale and context for the study, and explain the experimental approach.
(b) Explain how the animal species and model used address the scientific objectives and, where appropriate, the relevance to human biology.
Objectives 13 Clearly describe the research question, research objectives and, where appropriate, specific hypotheses being tested.
Ethical statement 14 Provide the name of the ethical review committee or equivalent that has approved the use of animals in this study, and any relevant licence or protocol numbers (if applicable). If ethical approval was not sought or granted, provide a justification.
Housing and husbandry 15 Provide details of housing and husbandry conditions, including any environmental enrichment.
Animal care and monitoring 16 (a) Describe any interventions or steps taken in the experimental protocols to reduce pain, suffering, and distress.
(b) Report any expected or unexpected adverse events.
(c) Describe the humane endpoints established for the study, the signs that were monitored, and the frequency of monitoring. If the study did not have humane endpoints, state this.
Interpretation/scientific implications 17 (a) Interpret the results, taking into account the study objectives and hypotheses, current theory, and other relevant studies in the literature.
(b) Comment on the study limitations, including potential sources of bias, limitations of the animal model, and imprecision associated with the results.
Generalisability/translation 18 Comment on whether, and how, the findings of this study are likely to generalise to other species or experimental conditions, including any relevance to human biology (where appropriate).
Protocol registration 19 Provide a statement indicating whether a protocol (including the research question, key design features, and analysis plan) was prepared before the study, and if and where this protocol was registered.
Data access 20 Provide a statement describing if and where study data are available.
Declaration of interests 21 (a) Declare any potential conflicts of interest, including financial and nonfinancial. If none exist, this should be stated.
(b) List all funding sources (including grant identifier) and the role of the funder(s) in the design, analysis, and reporting of the study.

Together with the Essential 10, the Recommended Set represents best reporting practice. Explanations and examples for items 11 to 21 are available in the Explanation and Elaboration document (Percie du Sert et al., 2020) and on the website https://www.arriveguidelines.org.

Abbreviations: ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments)

Revising the guidelines has been an extensive and collaborative effort, with input from the scientific community carefully built into the process. The revision of the ARRIVE guidelines has been undertaken by an international working group—the authors of this publication—with expertise from across the life sciences community, including funders, journal editors, statisticians, methodologists, and researchers from academia and industry. We used a Delphi exercise (Moher, Schulz, Simera, & Altman, 2010) with external stakeholders to maximise diversity in fields of expertise and geographical location, with experts from 19 countries providing feedback on each item, suggesting new items, and ranking items according to their relative importance for assessing the reliability of research findings. This ranking resulted in the prioritisation of the items of the guidelines into the 2 sets. Demographics of the Delphi panel and full methods and results are presented in Supporting Information S1 Delphi and S1 Data. Following their publication on bioRxiv, the revised guidelines and the E&E were also road tested with researchers preparing manuscripts describing in vivo studies, to ensure that these documents were well understood and useful to the intended users. This study is presented in Supporting Information S1 Road Testing and S2 Data.

While reporting animal research in adherence to all 21 items of ARRIVE 2.0 represents best practice, the classification of the items into 2 groups is intended to facilitate the improved reporting of animal research by allowing an initial focus on the most critical issues. This better allows journal staff, editors, and reviewers to verify that the items have been adequately reported in manuscripts. The first step should be to ensure compliance with the ARRIVE Essential 10 as a minimum requirement. Items from the Recommended Set can then be added over time and in line with specific editorial policies until all the items are routinely reported in all manuscripts. ARRIVE 2.0 is fully compatible with and complementary to other guidelines that have been published in recent years. By providing a comprehensive set of recommendations specifically tailored to the description of in vivo research, the guidelines help authors reporting animal experiments adhere to the National Institutes of Health (NIH) standards (Landis et al., 2012) and the minimum standards framework and checklist (Materials, Design, Analysis and Reporting [MDAR]; Chambers et al., 2019). The revised guidelines are also in line with many journals’ policies and will assist authors in complying with information requirements on the ethical review of the research (Osborne, Payne, & Newman, 2009; Rands, 2011), data presentation and access (Giofre, Cumming, Fresc, Boedker, & Tressoldi, 2017; Michel, Murphy, & Motulsky, 2020; Vasilevsky, Minnier, Haendel, & Champieux, 2017), statistical methods (Giofre et al., 2017; Michel et al., 2020), and conflicts of interest (Ancker & Flanagin, 2007; Rowan-Legg, Weijer, Gao, & Fernandez, 2009).

Although the guidelines are written with researchers and journal editorial policies in mind, it is important to stress that researchers alone should not have to carry the responsibility for transparent reporting. Funders, institutions, and publishers’ endorsement of ARRIVE has been instrumental in raising awareness to date; they now have a key role to play in building capacity and championing the behavioural changes required to improve reporting practices. This includes embedding ARRIVE 2.0 in appropriate training, workflows, and processes to support researchers in their different roles. While the primary focus of the guidelines has been on the reporting of animal studies, ARRIVE also has other applications earlier in the research process, including in the planning and design of in vivo experiments. For example, requesting a description of the study design in line with the guidelines in funding or ethical review applications ensures that steps to minimise experimental bias are considered at the beginning of the research cycle (Updated RCUK guidance for funding applications involving animal research, 2015).

Conclusion

Transparent reporting is clearly essential if animal studies are to add to the knowledge base and inform future research, policy, and clinical practice. ARRIVE 2.0 prioritises the reporting of information related to study reliability. This enables research users to assess how much weight to ascribe to the findings and, in parallel, promotes the use of rigorous methodology in the planning and conduct of in vivo experiments (Reichlin et al., 2016), thus increasing the likelihood that the findings are reliable and, ultimately, reproducible.

The intention of ARRIVE 2.0 is not to supersede individual journal requirements but to promote a harmonised approach across journals to ensure that all manuscripts contain the essential information needed to appraise the research. Journals usually share a common objective of improving the methodological rigour and reproducibility of the research they publish, but different journals emphasise different pieces of information (Enhancing reproducibility, 2013; Curtis et al., 2018; Prager et al., 2018). Here, we propose an expert consensus on information to prioritise. This will provide clarity for authors, facilitate transfer of manuscripts between journals, and accelerate an improvement of reporting standards.

Concentrating the efforts of the research and publishing communities on the ARRIVE Essential 10 items provides a manageable approach to evaluate reporting quality efficiently and assess the effect of interventions and policies designed to improve the reporting of animal experiments. It provides a starting point for the development of operationalised checklists to assess reporting, ultimately enabling automated or semi-automated artificial intelligence tools that can rapidly detect missing information (Heaven, 2018).

Improving reporting is a collaborative endeavour, and concerted effort from the biomedical research community is required to ensure maximum impact. We welcome collaboration with other groups operating in this area, as well as feedback on ARRIVE 2.0 and our implementation strategy.

Supporting information

S1 Table. Noteworthy changes in ARRIVE 2.0.

This table recapitulates noteworthy changes in the ARRIVE guidelines 2.0, compared to the original ARRIVE guidelines published in 2010.

S1 Delphi. Delphi methods and results.

Methodology and results of the Delphi study that was used to prioritise the items of the guidelines into the ARRIVE Essential 10 and Recommended Set.

S1 Data. Delphi data.

Tabs 1, 2, and 3: Panel members’ scores for each of the ARRIVE items during rounds 1, 2, and 3, along with descriptive statistics. Tab 4: Qualitative feedback, collected from panel members during round 1, on the importance and the wording of each item. Tab 5: Additional items suggested for consideration in ARRIVE 2.0; similar suggestions were grouped together before processing. Tab 6: Justifications provided by panel members for changing an item’s score between round 1 and round 2.

S2 Data. Road testing data.

Tab 1: Participants’ demographics and general feedback on the guidelines and the E&E preprints. Tab 2: Outcome of each manuscript’s assessment and justifications provided by participants for not including information covered in the ARRIVE guidelines.

S1 Road Testing. Road testing methods and results.

Methodology used to road test the revised ARRIVE guidelines and E&E (as published in preprint) and how this information was used in the development of ARRIVE 2.0.

S1 Annotated Byline. Individual authors’ positions at the time this article was submitted.

Acknowledgements

We would like to thank the members of the expert panel for the Delphi exercise and the participants of the road testing for their time and feedback. We are grateful to the DelphiManager team for advice and use of their software. We would like to acknowledge the late Doug Altman’s contribution to this project; Doug was a dedicated member of the working group and his input to the guidelines’ revision has been invaluable.

Funding

This work was supported by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs, https://www.nc3rs.org.uk/). NPdS, KL, VH, and EJP are employees of the NC3Rs.

Abbreviations

ARRIVE

Animal Research: Reporting of In Vivo Experiments

E&E

Explanation and Elaboration

FAIR

Findable, Accessible, Interoperable, Reusable

MDAR

Materials, Design, Analysis and Reporting

NIH

National Institutes of Health

Additional information

Competing interests

AA: editor in chief of the British Journal of Pharmacology. WJB, ICC and ME: authors of the original ARRIVE guidelines. WJB: serves on the Independent Statistical Standing Committee of the funder CHDI foundation. AC: Senior Editor, PLOS ONE. AC, CJM, MMcL and ESS: involved in the IICARus trial. ME, MMcL and ESS: have received funding from NC3Rs. ME: sits on the MRC ERPIC panel. STH: chair of the NC3Rs board, trusteeship of the BLF, Kennedy Trust, DSRU and CRUK, member of Governing Board, Nuffield Council of Bioethics, member Science Panel for Health (EU H2020), founder and NEB Director Synairgen, consultant Novartis, Teva and AZ, chair MRC/GSK EMINENT Collaboration. VH, KL, EJP and NPdS: NC3Rs staff, role includes promoting the ARRIVE guidelines. SEL and UD: on the advisory board of the UK Reproducibility Network, CJMcC: shareholdings in Hindawi, on the publishing board of the Royal Society, on the EU Open Science policy platform. UD, MMcL, NPdS, CJMcC, ESS, TS and HW: members of EQIPD. MMcL: member of the Animals in Science Committee, on the steering group of the UK Reproducibility Network. NPdS and TS: associate editors of BMJ Open Science. OHP: vice president of Academia Europaea, editor in chief of Function, senior executive editor of the Journal of Physiology, member of the Board of the European Commission’s SAPEA (Science Advice for Policy by European Academies). FR: NC3Rs board member, shareholdings in GSK. FR and NAK: shareholdings in AstraZeneca. PR: member of the University of Florida Institutional Animal Care and Use Committee, editorial board member of Shock. ESS: editor in chief of BMJ Open Science. SDS: role is to provide expertise and does not represent the opinion of the NIH. TS: shareholdings in Johnson & Johnson. SA, MTA, MB, PG, DWH, and KR declared no conflict of interest.

Author contributions

NPdS: conceptualisation, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, supervision, visualisation, writing - original draft, writing – review and editing; VH: data curation, investigation, methodology, project administration, resources, writing - original draft; SEL, EJP: writing - review and editing; KL: investigation, project administration, writing - review and editing; AA, SA, MTA, MB, WJB, AC, ICC, UD, ME, PG, STH, DWH, NAK, CJMcC, MMcL, OHP, FR, PR, KR, ESS, SDS, TS, HW: investigation, methodology, resources, writing - original draft, writing - review and editing.

References

  1. Ancker JS, Flanagin A. A comparison of conflict of interest policies at peer-reviewed journals in different scientific disciplines. Science and Engineering Ethics. 2007;13(2):147–157. doi: 10.1007/s11948-007-9011-z. [DOI] [PubMed] [Google Scholar]
  2. Avey MT, Moher D, Sullivan KJ, Fergusson D, Griffin G, Grimshaw JM, Hutton B, Lalu MM, Macleod M, Marshall JS, Mei HJ, et al. Canadian Critical Care Translational Biology Group. The devil is in the details: incomplete reporting in preclinical animal research. PLoS ONE. 2016;11(11):e0166733. doi: 10.1371/journal.pone.0166733. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bate ST, Clark RA. The design and statistical analysis of animal experiments. Cambridge University Press; Cambridge, United Kingdom: 2014. p. 310. [Google Scholar]
  4. Begley CG, Ellis LM. Drug development: raise standards for preclinical cancer research. Nature. 2012;483(7391):531–533. doi: 10.1038/483531a. [DOI] [PubMed] [Google Scholar]
  5. Begley CG, Ioannidis JP. Reproducibility in science: improving the standard for basic and preclinical research. Circulation Research. 2015;116(1):116–126. doi: 10.1161/CIRCRESAHA.114.303819. [DOI] [PubMed] [Google Scholar]
  6. Chambers CD, Forstmann B, Pruszynski JA. Registered reports at the European Journal of Neuroscience: consolidating and extending peer-reviewed study pre-registration. European Journal of Neuroscience. 2017;45(5):627–628. doi: 10.1111/ejn.13519. [DOI] [PubMed] [Google Scholar]
  7. Chambers K, Collings A, Graf C, Kiermer V, Mellor DT, Macleod M, Swaminathan S, Sweet D, Vinson V. Towards minimum reporting standards for life scientists. MetaArXiv. 2019 doi: 10.31222/osf.io/9sm4x. [DOI] [Google Scholar]
  8. Curtis MJ, Alexander S, Cirino G, Docherty JR, George CH, Giembycz MA, Hoyer D, Insel PA, Izzo AA, Ji Y, MacEwan DJ, et al. Experimental design and analysis and their reporting II: updated and simplified guidance for authors and peer reviewers. British Journal of Pharmacology. 2018;175(7):987–993. doi: 10.1111/bph.14153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Percie du Sert N, Bamsey I, Bate ST, Berdoy M, Clark RA, Cuthill I, Fry D, Karp NA, Macleod M, Moon L, Stanford SC, et al. The experimental design assistant. PLoS Biology. 2017;15(9):e2003779. doi: 10.1371/journal.pbio.2003779. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Percie du Sert N, Ahluwalia A, Alam S, Avey MT, Baker M, Browne WJ, Clark A, Cuthill IC, Dirnagl U, Emerson M, et al. Reporting animal research: Explanation and Elaboration for the ARRIVE guidelines 2.0. PLoS Biology. 2020 doi: 10.1371/journal.pbio.3000411. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Else H. Radical open-access plan could spell end to journal subscriptions. Nature. 2018;561(7721):17–18. doi: 10.1038/d41586-018-06178-7. [DOI] [PubMed] [Google Scholar]
  12. Fraser H, Parker T, Nakagawa S, Barnett A, Fidler F. Questionable research practices in ecology and evolution. PLoS ONE. 2018;13(7):e0200303. doi: 10.1371/journal.pone.0200303. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Freedman LP, Venugopalan G, Wisman R. Reproducibility2020: Progress and priorities. F1000Research. 2017;6:604. doi: 10.12688/f1000research.11334.1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Giofre D, Cumming G, Fresc L, Boedker I, Tressoldi P. The influence of journal submission guidelines on authors’ reporting of statistics and use of open research practices. Plos One. 2017;12(4) doi: 10.1371/journal.pone.0175583. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, Michie S, Moher D, Wager E. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–276. doi: 10.1016/S0140-6736(13)62228-X. [DOI] [PubMed] [Google Scholar]
  16. Goodman SN, Fanelli D, Ioannidis JPA. What does research reproducibility mean? Science Translational Medicine. 2016;8(341):341ps12. doi: 10.1126/scitranslmed.aaf5027. [DOI] [PubMed] [Google Scholar]
  17. Hackam DG, Redelmeier DA. Translation of research evidence from animals to humans. JAMA. 2006;296(14):1731–1732. doi: 10.1001/jama.296.14.1731. [DOI] [PubMed] [Google Scholar]
  18. Hair K, Macleod MR, Sena ES, Howells D, Bath P, Irvine C, MacCallum C, Morrison G, et al. A randomised controlled trial of an intervention to improve compliance with the ARRIVE guidelines (IICARus). Research Integrity and Peer Review. 2019;4(1):12. doi: 10.1186/s41073-019-0069-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Han S, Olonisakin TF, Pribis JP, Zupetic J, Yoon JH, Holleran KM, Jeong K, Shaikh N, Rubio DM, Lee JS. A checklist is associated with increased quality of reporting preclinical biomedical research: a systematic review. Plos One. 2017;12(9):e0183591. doi: 10.1371/journal.pone.0183591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Heaven D. AI peer reviewers unleashed to ease publishing grind. Nature. 2018;563(7733):609–610. doi: 10.1038/d41586-018-07245-9. [DOI] [PubMed] [Google Scholar]
  21. Hurst V, Percie du Sert N. The ARRIVE guidelines survey. Open Science Framework. 2017 doi: 10.17605/OSF.IO/G8T5Q. [DOI] [Google Scholar]
  22. Kidwell MC, Lazarevic LB, Baranski E, Hardwicke TE, Piechowski S, Falkenberg LS, Kennett C, Slowik A, Sonnleitner C, Hess-Holden C, Errington TM, et al. Badges to acknowledge open practices: a simple, low-cost, effective method for increasing transparency. Plos Biology. 2016;14(5):e1002456. doi: 10.1371/journal.pbio.1002456. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Kilkenny C, Altman DG. Improving bioscience research reporting: ARRIVE-ing at a solution. Laboratory Animals. 2010;44(4):377–378. doi: 10.1258/la.2010.0010021. [DOI] [PubMed] [Google Scholar]
  24. Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG. Animal research: reporting in vivo experiments: the ARRIVE guidelines. The Journal of Gene Medicine. 2010a;12(7):561–563. doi: 10.1002/jgm.1473. [DOI] [PubMed] [Google Scholar]
  25. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Animal research: Reporting in vivo experiments: the ARRIVE guidelines. Experimental Physiology. 2010b;95(8):842–844. doi: 10.1113/expphysiol.2010.053793. [DOI] [PubMed] [Google Scholar]
  26. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Animal research: Reporting in vivo experiments: The ARRIVE guidelines. Journal of Physiology. 2010c;588(Pt 14):2519–2521. doi: 10.1113/jphysiol.2010.192278. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. Journal of Pharmacology & Pharmacotherapeutics. 2010d;1(2):94–99. doi: 10.4103/0976-500X.72351. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. Plos Biology. 2010e;8(6):e1000412. doi: 10.1371/journal.pbio.1000412. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG. Animal Research: Reporting In Vivo Experiments-the ARRIVE Guidelines. Journal of Cerebral Blood Flow and Metabolism. 2011 doi: 10.1038/jcbfm.2010.220. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. Osteoarthritis and Cartilage. 2012a;20(4):256–60. doi: 10.1016/j.joca.2012.02.010. [DOI] [PubMed] [Google Scholar]
  31. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. Veterinary Clinical Pathology. 2012b;41(1):27–31. doi: 10.1111/j.1939-165X.2012.00418.x. [DOI] [PubMed] [Google Scholar]
  32. Kilkenny C, Parsons N, Kadyszewski E, Festing MF, Cuthill IC, Fry D, Hutton J, Altman DG. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. Plos One. 2009;4(11):e7824. doi: 10.1371/journal.pone.0007824. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Kimmelman J, Anderson JA. Should preclinical studies be registered? Nature Biotechnology. 2012;30(6):488–489. doi: 10.1038/nbt.2261. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, Crystal RG, Darnell RB, Ferrante RJ, Fillit H, Finkelstein R, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490(7419):187–191. doi: 10.1038/nature11556. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Lazic SE. Experimental Design for Laboratory Biologists: Maximising Information and Improving Reproducibility. Cambridge University Press; Cambridge: 2016. [Google Scholar]
  36. Leung V, Rousseau-Blass F, Beauchamp G, Pang DSJ. ARRIVE has not ARRIVEd: Support for the ARRIVE (Animal Research: Reporting of in vivo Experiments) guidelines does not improve the reporting quality of papers in animal welfare, analgesia or anesthesia. Plos One. 2018;13(5):e0197882. doi: 10.1371/journal.pone.0197882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Macleod MR, Fisher M, O’Collins V, Sena ES, Dirnagl U, Bath PM, Buchan A, van der Worp HB, Traystman R, Minematsu K, Donnan GA, et al. Good laboratory practice: preventing introduction of bias at the bench. Stroke. 2009;40(3):e50–52. doi: 10.1161/STROKEAHA.108.525386. [DOI] [PubMed] [Google Scholar]
  38. Macleod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, Hirst T, Hemblade R, Bahor Z, Nunes-Fonseca C, Potluru A, et al. Risk of bias in reports of in vivo research: a focus for improvement. Plos Biology. 2015;13(10):e1002273. doi: 10.1371/journal.pbio.1002273. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. McCance I. Assessment of statistical procedures used in papers in the Australian Veterinary Journal. Australian Veterinary Journal. 1995;72(9):322–328. doi: 10.1111/j.1751-0813.1995.tb07534.x. [DOI] [PubMed] [Google Scholar]
  40. McGrath JC, Drummond GB, McLachlan EM, Kilkenny C, Wainwright CL. Guidelines for reporting experiments involving animals: the ARRIVE guidelines. British Journal of Pharmacology. 2010;160(7):1573–1576. doi: 10.1111/j.1476-5381.2010.00873.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Michel MC, Murphy TJ, Motulsky HJ. New author guidelines for displaying data and reporting data analysis and statistical methods in experimental biology. Molecular Pharmacology. 2020;97(1):49–60. doi: 10.1124/mol.119.118927. [DOI] [PubMed] [Google Scholar]
  42. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. Plos Medicine. 2010;7(2):e1000217. doi: 10.1371/journal.pmed.1000217. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. PNAS. 2018;115(11):2600–2606. doi: 10.1073/pnas.1708274114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Osborne NJ, Payne D, Newman ML. Journal editorial policies, animal welfare, and the 3Rs. The American Journal of Bioethics. 2009;9(12):55–9. doi: 10.1080/15265160903318343. [DOI] [PubMed] [Google Scholar]
  45. Prager EM, Chambers KE, Plotkin JL, McArthur DL, Bandrowski AE, Bansal N, Martone ME, Bergstrom HC, Bespalov A, Graf C. Improving transparency and scientific rigor in academic publishing. Journal of Neuroscience Research. 2018 doi: 10.1002/jnr.24340. [DOI] [PubMed] [Google Scholar]
  46. Ramirez FD, Motazedian P, Jung RG, Di Santo P, MacDonald ZD, Moreland R, Simard T, Clancy AA, Russo JJ, Welch VA, Wells GA, et al. Methodological rigor in preclinical cardiovascular studies: targets to enhance reproducibility and promote research translation. Circulation Research. 2017;120(12):1916–1926. doi: 10.1161/CIRCRESAHA.117.310628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Rands SA. Inclusion of policies on ethical standards in animal experiments in biomedical science journals. Journal of the American Association for Laboratory Animal Science: JAALAS. 2011;50(6):901–903. [PMC free article] [PubMed] [Google Scholar]
  48. Reichlin TS, Vogt L, Würbel H. The researchers’ view of scientific rigor: survey on the conduct and reporting of in vivo research. Plos One. 2016;11(12):e0165999. doi: 10.1371/journal.pone.0165999. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Rice AS, Cimino-Brown D, Eisenach JC, Kontinen VK, Lacroix-Fralish ML, Machin I, Preclinical Pain Consortium, Mogil JS, Stöhr T. Animal models and the prediction of efficacy in clinical trials of analgesic drugs: a critical appraisal and call for uniform reporting standards. Pain. 2009;139(2):243–247. doi: 10.1016/j.pain.2008.08.017. [DOI] [PubMed] [Google Scholar]
  50. Rowan-Legg A, Weijer C, Gao J, Fernandez C. A comparison of journal instructions regarding institutional review board approval and conflict-of-interest disclosure between 1995 and 2005. Journal of Medical Ethics. 2009;35(1):74–78. doi: 10.1136/jme.2008.024299. [DOI] [PubMed] [Google Scholar]
  51. Scott S, Kranz JE, Cole J, Lincecum JM, Thompson K, Kelly N, Bostrom A, Theodoss J, Al-Nakhala BM, Vieira FG, Ramasubbu J, et al. Design, power, and interpretation of studies in the standard murine model of ALS. Amyotroph Lateral Scler. 2008;9(1):4–15. doi: 10.1080/17482960701856300. [DOI] [PubMed] [Google Scholar]
  52. Sena ES, Currie GL. How our approaches to assessing benefits and harms can be improved. Animal Welfare. 2019;28(1):107–115. [Google Scholar]
  53. The Academy of Medical Sciences. Reproducibility and reliability of biomedical research: improving research practice. 2015 [cited 16 June 2020]. Available from: https://acmedsci.ac.uk/policy/policy-projects/reproducibility-and-reliability-of-biomedical-research.
  54. The NPQIP Collaborative group. Did a change in Nature journals’ editorial policy for life sciences research improve reporting? BMJ Open Science. 2019;3(1):e000035. doi: 10.1136/bmjos-2017-000035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O’Collins V, Macleod MR. Can animal models of disease reliably inform human studies? Plos Medicine. 2010;7(3):e1000245. doi: 10.1371/journal.pmed.1000245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Vasilevsky NA, Minnier J, Haendel MA, Champieux RE. Reproducible and reusable research: are journal data sharing policies meeting the mark? PeerJ. 2017;5 doi: 10.7717/peerj.3208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, da Silva Santos LB, Bourne PE, Bouwman J, et al. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data. 2016;3:160018. doi: 10.1038/sdata.2016.18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Updated RCUK guidance for funding applications involving animal research 2015. 2019 Nov 18 [cited 16 June 2020]. Available from: https://mrc.ukri.org/news/browse/updated-rcuk-guidance-for-funding-applications-involving-animal-research/
  59. Enhancing reproducibility. Nature Methods. 2013;10(5):367. doi: 10.1038/nmeth.2471. Epub 2013/06/14. [DOI] [PubMed] [Google Scholar]

Associated Data


Supplementary Materials

S1 Table. Noteworthy changes in ARRIVE 2.0.

This table recapitulates noteworthy changes in the ARRIVE guidelines 2.0, compared to the original ARRIVE guidelines published in 2010.

S1 Delphi. Delphi methods and results.

Methodology and results of the Delphi study that was used to prioritise the items of the guidelines into the ARRIVE Essential 10 and Recommended Set.

S1 Data. Delphi data.

Tabs 1, 2, and 3: Panel members’ scores for each of the ARRIVE items during rounds 1, 2, and 3, along with descriptive statistics. Tab 4: Qualitative feedback, collected from panel members during round 1, on the importance and the wording of each item. Tab 5: Additional items suggested for consideration in ARRIVE 2.0; similar suggestions were grouped together before processing. Tab 6: Justifications provided by panel members for changing an item’s score between round 1 and round 2.

S2 Data. Road testing data.

Tab 1: Participants’ demographics and general feedback on the guidelines and the E&E preprints. Tab 2: Outcome of each manuscript’s assessment and justifications provided by participants for not including information covered in the ARRIVE guidelines.

S1 Road Testing. Road testing methods and results.

Methodology used to road test the revised ARRIVE guidelines and E&E (as published in preprint) and how this information was used in the development of ARRIVE 2.0.

S1 Annotated Byline. Individual authors’ positions at the time this article was submitted.