Published in final edited form as: Trends Neurosci. 2019 Sep;42(9):568–572. doi: 10.1016/j.tins.2019.07.003

Practical considerations for navigating Registered Reports

Anastasia Kiyonaga 1, Jason M Scimeca 1
PMCID: PMC9380892  NIHMSID: NIHMS1675815  PMID: 31470913

Abstract

Recent open science efforts to improve rigor and reliability have sparked great enthusiasm. Among these, the Registered Report publication format integrates best practices in hypothesis-driven research with peer review that occurs before the research is conducted. Here, we detail practical recommendations to help researchers negotiate the mechanics of this developing format.

Keywords: open science, pre-registration, research practices


Pre-registering study plans and predictions—i.e., recording them before knowing the results—is a promising way to align research practices with the ideals of the scientific method [1-3]. The Registered Report publication route escalates and incentivizes the commitment to this practice: detailed study protocols and analysis procedures are peer-reviewed and accepted in-principle before the research is conducted [4,5]. Reviewers provide input that can modify the experimental design before data collection, and the research must then adhere to the approved protocol.

A growing list of neuroscience journals offer Registered Reports (https://cos.io/rr/) [4,6,7]. These journals provide standardized basic requirements for the format, but these can underplay the true practical demands of meeting stringent Registered Report criteria. For instance, it is clear from the guidelines that sample size estimates are required; yet it is unclear how to thoroughly meet this requirement. There is little concrete instruction available to guide the nuts and bolts of executing the Registered Report process, and researchers may wonder whether the approach would be suitable for their work (or whether the payoff will be worth the effort) [8]. This uncertainty may unnecessarily prolong the process or deter researchers from undertaking it in the first place. Here we offer recommendations to demystify and accelerate the Registered Report pipeline for researchers in any field of neuroscience; if one can make hypotheses and describe how they will be tested, then this is a fitting format (Figure 1; also see https://osf.io/5gazv/wiki/).

Figure 1: Practical steps for a thorough Registered Report.


Workflow delineating the underlying steps to address three of the primary criteria on which Registered Reports are evaluated. Additional practical resources are aggregated at https://osf.io/5gazv/wiki/

Negotiating the Registered Report process

Registered Report submissions must convince reviewers that the study will be valuable—regardless of how the results turn out—and many of the criteria to accomplish this are more multifaceted than they appear. However, it is precisely these efforts on the front end that promise more robust and reliable research. While it will likely be labor-intensive to achieve these benefits, the Registered Report process can be handled efficiently if one is equipped to address the requirements from the start.

Delineate confirmatory hypotheses

Because Registered Reports are intended to constrain the space of potential post-hoc interpretation, the proposal must be grounded in concrete, testable predictions. While this may seem like a straightforward and familiar criterion, it takes several steps to do this well (Figure 1). Nonetheless, Registered Reports can be an empowering venue for testing new theories or arbitrating between competing theories, because predictions are documented at the outset. Even when researchers are agnostic about the outcome, a set of feasible predictions can be proposed—as long as the associated tests and interpretations are clearly delineated. Moreover, serendipitous discoveries can be included (given they are marked as exploratory) to lay the groundwork for new hypotheses [2,7].

When specifying confirmatory hypotheses, it is insufficient to merely sketch the expected (or possible) findings. Instead, a strong Registered Report should systematically detail the motivation for the hypotheses (i.e., justify the scientific premise), how predictions will be tested (i.e., with statistical tests of particular measures), and what the results would mean (i.e., informing particular theories). For instance, directional hypotheses may take the form of: “We will use a t-test to compare mean firing rates between condition 1 and condition 2. If condition 1 is greater than condition 2, it would support theory A; if condition 2 is greater than condition 1, it would support theory B.”
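To make this concrete, a directional comparison like the example above can be scripted before the data exist. The sketch below (Python) is illustrative only: the firing-rate arrays are simulated placeholders, the alpha level is a hypothetical choice, and a paired test is assumed because the same units would contribute to both conditions.

```python
# Minimal sketch of a pre-specified directional comparison (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cond1 = rng.normal(12.0, 3.0, size=30)  # mean firing rate (Hz) per unit, condition 1
cond2 = rng.normal(10.0, 3.0, size=30)  # mean firing rate (Hz) per unit, condition 2

# Paired t-test, assuming the same units are measured in both conditions.
t_stat, p_value = stats.ttest_rel(cond1, cond2)

alpha = 0.05  # significance threshold fixed in the protocol
if p_value < alpha and t_stat > 0:
    print("Condition 1 > condition 2: consistent with theory A")
elif p_value < alpha and t_stat < 0:
    print("Condition 2 > condition 1: consistent with theory B")
else:
    print("No reliable difference: this test supports neither theory")
```

Writing the test this way, in advance of data collection, leaves no ambiguity about which comparison counts as the confirmatory test.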

This hypothesis-driven framework also demands that the design can generate discriminatory data patterns that clearly inform the predictions. An inadequately specified hypothesis might take the form of: “We will enter firing rates into an ANOVA with three factors. If there is a three-way interaction, it would support theory A; if there is no three-way interaction, it would support theory B.” Those potential data patterns could manifest or be interpreted in any number of ways, so that space must be narrowed. At each step of study development, researchers should critically assess whether the design is (a) truly diagnostic, so that positive findings will strongly support the proposed interpretation, and (b) optimized to prevent false negatives, so that null results will be credible [7].

Demonstrate sufficient statistical power

Registered Reports hinge on providing sufficient statistical power to minimize false positives and negatives [9]. While many journals and funders now require a priori sample size justification, this is often done ad hoc (“previous studies used this”) or even post hoc (“we stopped when we reached p <.05”). Such perfunctory justification is insufficient for a Registered Report (as for any rigorous scientific inference), and the considerations for a robust power analysis can be surprisingly extensive (Figure 1).

Registered Reports using frequentist statistics should be powered to detect the smallest effect that is both plausible and theoretically meaningful (but see https://osf.io/pukzy/ for alternatives). Yet, there is no one-size-fits-all procedure for estimating this value. Despite the temptation to rely on a single previous study or promising pilot data, this can produce misleading estimates [7,10]. Instead, one should incorporate a range of values from the relevant literature [11] and take into account that these may be inflated by publication bias. Depending on the research question at hand, one should also consider what would constitute a meaningful effect for the field (which may be smaller or larger than significant effects in the literature) [cf. 12]. This can be a nebulous requirement, and there is no universal definition of the “right” effect size. However, a successful Registered Report will ultimately require a thorough and compelling argument for the proposed approach.

Finally, the mechanics of the power analysis will depend on the proposed study design and statistical test. First, the input effect size should be based on comparable tests from the literature. Next, the sample size calculation should be conducted for the proposed tests and design. For instance, if the proposal tests an interaction in a repeated-measures ANOVA, the estimate should be calculated specifically for this test (e.g., rather than a main effect or between-subjects design). This necessary consideration can be onerous, however, because few power analysis tools are equipped for complex statistical tests. Therefore, Registered Reports favor straightforward analysis plans for which hypotheses can be clearly defined. Importantly, sophisticated scientific questions and complex datasets are still well-suited for Registered Reports; predictions should just be expressed as the simplest comparison between conditions.
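As one possible illustration, the sketch below (Python, using statsmodels) solves for the sample size of a simple paired t-test. The smallest effect size of interest, alpha level, and power target are hypothetical placeholders that would each need explicit justification in a real proposal; interactions in repeated-measures designs typically require simulation-based power estimates instead.

```python
# Sketch of an a priori sample size calculation for a paired (within-subject)
# t-test, powered for the smallest effect of interest (all values hypothetical).
import math
from statsmodels.stats.power import TTestPower

smallest_effect = 0.4  # smallest plausible, meaningful effect (Cohen's d), argued from the literature
alpha = 0.05           # pre-specified two-sided significance threshold
target_power = 0.90    # desired probability of detecting the effect if it is real

analysis = TTestPower()  # power calculations for one-sample / paired t-tests
n_required = analysis.solve_power(effect_size=smallest_effect,
                                  alpha=alpha,
                                  power=target_power,
                                  alternative='two-sided')
print(f"Required sample size: {math.ceil(n_required)} subjects")
```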

Ensure reproducibility and replicability

Because the purpose of Registered Reports is to limit experimenter degrees of freedom and ensure methodological rigor [2,3,5], the methods should be described exhaustively enough that a conscientious researcher in a different lab could recreate the study. Achieving this goal requires a thoughtful assessment of the many routine procedures or seemingly inconsequential details that are often omitted from manuscripts. These include experimental technicalities, inclusion/exclusion criteria, quality checks, and analytic choices—all of which must be described beforehand, rather than be determined during or after data collection (Figure 1). However, in neuroscience research, certain variables may be unknown or indefinable beforehand. Accordingly, decisions can be based on observation of the data, provided the rules guiding those decisions are clearly described (Box 1).

Box 1: Registration procedures for the neurosciences.

Although neuroscience study designs and data patterns are often complex, hypothesis-driven research in virtually all neuroscience disciplines is amenable to registration. Some likely challenges researchers might face, and suggested solutions, are highlighted below.

Cognitive and systems neuroscience:

Decisions about which measures will best address hypotheses may depend on observations of the data (e.g., determining informative cells or electrodes, relevant frequency bands or epochs, fMRI regions of interest). Registered Reports can specify an algorithmic but non-circular approach to govern these decisions. For instance, an omnibus ANOVA can determine which electrodes show task-relevant signal and will ultimately be analyzed, or a leave-one-subject-out procedure can be used to construct a region of interest for each subject/animal.
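For illustration, a leave-one-subject-out selection step might be sketched as follows (Python). The data array, relevance statistic, and number of channels are hypothetical; the point is only that each subject's channels are chosen without using that subject's own data, keeping the selection non-circular.

```python
# Sketch of leave-one-subject-out (LOSO) channel selection (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_channels = 12, 64
# task_effect[s, c]: some task-relevance statistic for subject s, channel c
task_effect = rng.normal(0.0, 1.0, size=(n_subjects, n_channels))

n_select = 5  # number of channels carried into the confirmatory analysis
selected = {}
for s in range(n_subjects):
    others = np.delete(task_effect, s, axis=0)         # drop the held-out subject
    group_mean = others.mean(axis=0)                   # channel relevance from the remaining subjects
    selected[s] = np.argsort(group_mean)[-n_select:]   # top channels to analyze for subject s

print(selected[0])  # channels selected for subject 0, chosen without their own data
```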

Behavioral and molecular neuroscience:

Experiments may comprise repetition over animals or samples with high variability, and researchers must decide which data points are relevant to the proposed hypothesis tests (e.g., which individual animals to include/exclude? which intervention parameters are effective?). Registered Reports can specify a set of criteria that each sample or manipulation must meet to confidently interpret outcomes (e.g., include only animals that display pattern X, advance to next stage using only intervention that shows X), or an algorithmic approach to identify relevant covariates for inclusion in final analysis (e.g., using step-wise linear regression). For all disciplines, an if-then decision tree can also help to define thresholds and constrain these analytical paths (Figure I) [2].
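As a simple illustration, such if-then inclusion rules can be written down as executable code before data collection (Python). The criteria and thresholds below are hypothetical examples, not recommended values; a real protocol would justify each one.

```python
# Sketch of a pre-registered if-then inclusion rule for individual animals
# (all criteria and thresholds are hypothetical placeholders).
def include_animal(baseline_trials, lesion_volume_mm3, acquired_task):
    """Return True only if an animal meets every pre-specified inclusion criterion."""
    if baseline_trials < 100:      # insufficient baseline behavior recorded
        return False
    if lesion_volume_mm3 < 2.0:    # intervention did not take effect
        return False
    if not acquired_task:          # failed to learn the baseline task
        return False
    return True

# Applying the rule to a (hypothetical) cohort yields a documented, reproducible sample.
cohort = [
    {"id": "A1", "baseline_trials": 250, "lesion_volume_mm3": 3.1, "acquired_task": True},
    {"id": "A2", "baseline_trials": 80,  "lesion_volume_mm3": 2.8, "acquired_task": True},
]
included = [a["id"] for a in cohort
            if include_animal(a["baseline_trials"], a["lesion_volume_mm3"], a["acquired_task"])]
print(included)  # ['A1']
```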

With standard pre-registration (or a study conducted without pre-registration), deviations from the planned design/methods carry no practical repercussions. In contrast, such deviations in a Registered Report could ultimately be grounds for article rejection. Accordingly, researchers should think carefully about how they define procedures, and whether they can adhere to them. The most vexing aspect of the entire Registered Report process may be anticipating potential complications, in order to choose parameters wisely. While it is important to be explicit, specifying inessential criteria (although well-intentioned) might disqualify much of the data or invoke counter-productive methodological rigidity. Instead, every criterion should be well-motivated to bolster confidence in the outcomes. Finally, researchers should consider adding cushioning for unexpected obstacles into their timeline, budget, and mind-set.

Concluding remarks

While the Registered Report process shifts some of the “heavy lifting” to the early stages of study design, data collection will entail ongoing monitoring for compliance with the proposed criteria, as well as troubleshooting of unforeseen issues. Thus, the Registered Report endeavor will likely require researchers to go above and beyond the typical study requirements. However, the preemptive review process and proactive planning should enhance the quality and credibility of the work, ultimately benefitting individual researchers and the field at large.

Figure I.


Schematic data processing decision tree

Acknowledgements

We thank Nicola Kuczewski and Oksana Zinchenko for providing feedback on their Registered Report experiences. This work was partially supported by National Institute of Mental Health award F32 MH111204 to A.K.

Footnotes

Resources

A collection of practical resources, including tools for addressing the criteria described here and examples of Registered Reports that have been accepted-in-principle, is available at: https://osf.io/5gazv/wiki/.

References

1. Nosek BA et al. (2015) Promoting an open research culture. Science 348, 1422–1425
2. Nosek BA et al. (2018) The preregistration revolution. Proc Natl Acad Sci U S A 115, 2600–2606
3. Poldrack RA et al. (2017) Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci 18, 115–126
4. Chambers CD (2013) Registered Reports: A new publishing initiative at Cortex. Cortex 49, 609–610
5. Munafò MR et al. (2017) A manifesto for reproducible science. Nat Hum Behav 1, 1–9
6. Baxter MG and Burwell RD (2017) Promoting transparency and reproducibility in Behavioral Neuroscience: Publishing replications, registered reports, and null results. Behav Neurosci 131, 275–276
7. Chambers CD et al. (2014) Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neurosci 1, 4–17
8. Poldrack RA (2019) The Costs of Reproducibility. Neuron 101, 11–14
9. Button KS et al. (2013) Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci 14, 365–376
10. Albers C and Lakens D (2018) When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. J Exp Soc Psychol 74, 187–195
11. Lakens D (2013) Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Front Psychol 4, 1–12
12. Keefe RSE et al. (2013) Defining a clinically meaningful effect for the design and interpretation of randomized controlled trials. Innov Clin Neurosci 10, 4S–19S
