Dev Cogn Neurosci. 2020 Dec 17;47:100902. doi: 10.1016/j.dcn.2020.100902

Table 1.

Selective overview of challenges in the field of developmental cognitive neuroscience.

Columns: Phase of study | Practical, technical and ethical issues hindering reproducibility & replicability | Potential or previously suggested solutions | Useful links / selected examples
STATISTICAL POWER
1. To consider prior to & throughout data collection
Issue: Low statistical power / small effect sizes. Solution: Power analysis (a minimal sketch follows this row). Links: G*Power; NeuroPowerTools; BrainPower; fmripower
If no prior reliable data exist, consider a "smallest effect size of interest" consistent with the broader psychological literature (e.g., ~.10 to .30, according to Gignac and Szodorai, 2016)
Use of age-adequate and appealing protocols to increase power
Sequential interim analyses (e.g., transparent data peeking to determine a cut-off point; Lakens, 2014)
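As an illustration of such a power analysis, the minimal Python sketch below uses statsmodels to estimate the per-group sample size needed to detect an assumed smallest effect size of interest (Cohen's d = 0.2, alpha = .05, power = .80). All parameter values are assumptions to be replaced by values justified for the study at hand.

```python
# Minimal a priori power analysis for a two-group comparison (independent-samples t-test).
# The effect size (Cohen's d = 0.2) is an assumed "smallest effect size of interest";
# substitute a value justified for your own design.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")
```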
Issue: Selective, small or non-representative samples
Selective/non-representative samples (e.g., Western, educated, industrialized, rich and democratic (WEIRD) populations). Solutions: Measurement invariance tests (e.g., Fischer and Karl, 2019); diversity considerations in study design & interpretation
Issue: Small N due to a rare population (e.g., patients or other groups that are more challenging to recruit). Solution: Strong a priori hypotheses (e.g., restrict the search space to a priori-defined ROIs; caution: (s)harking)
Increase power within subjects (e.g., consider fewer tasks with longer duration)
Data aggregation (e.g., more data through collaboration, consortia or data sharing, which also allows evidence synthesis through meta-analyses; see the pooling sketch after this row). Links/examples: Many Labs Study 1; Many Labs Study 2; ManyBabies Project; Psychological Science Accelerator; Play and Learning Across a Year (PLAY) Project
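The sketch below illustrates one simple way aggregated data can be synthesized: fixed-effect inverse-variance pooling of per-site effect estimates. The effect sizes and standard errors are placeholders, not values from any real dataset.

```python
# Fixed-effect inverse-variance meta-analysis of per-site effect estimates.
# The effect sizes and standard errors below are placeholders, not real data.
import numpy as np

effects = np.array([0.25, 0.10, 0.32, 0.18])  # per-site standardized effects (hypothetical)
ses = np.array([0.12, 0.09, 0.15, 0.11])      # per-site standard errors (hypothetical)

weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```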
Issue: Ethical concerns (e.g., privacy, vulnerability, subject protection, local IRB-bound restrictions). Solution: Data anonymization (e.g., following the Declaration of Helsinki; a minimal de-identification sketch follows this row). Link: Declaration of Helsinki
Sharing and consistent use of standardized consent material/wording. Link: Open Brain Consent sample consent forms
Disclosure / restricted access if required
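The following sketch shows basic de-identification of a tabular participants file before sharing; the file and column names (participants.csv, name, dob, subject_id) are hypothetical, and real anonymization workflows also need to address indirect identifiers, dates, and free text.

```python
# Basic de-identification of a tabular dataset before sharing (sketch only; real
# anonymization also has to handle indirect identifiers, dates, and free text).
# File and column names ("participants.csv", "name", "dob", "subject_id") are hypothetical.
import hashlib
import pandas as pd

def pseudonymize(df: pd.DataFrame, id_col: str, salt: str) -> pd.DataFrame:
    out = df.drop(columns=["name", "dob"], errors="ignore")  # drop direct identifiers
    out[id_col] = [
        hashlib.sha256((salt + str(v)).encode()).hexdigest()[:12]  # salted, truncated hash
        for v in out[id_col]
    ]
    return out

shared = pseudonymize(pd.read_csv("participants.csv"), id_col="subject_id", salt="project-secret")
shared.to_csv("participants_deid.csv", index=False)
```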
Issue: Biological considerations in DCN samples (e.g., distinct biology, reduced BOLD response, different physiology in MRI). Solutions: Subject-specific solutions (e.g., child-friendly head coils or response buttons, specific sequences, highly engaging tasks). Links/examples: CCHMC Pediatric Brain Templates; NIHPD pediatric atlases (4.5-18.5 y); Neurodevelopmental MRI Database
2. During & throughout data collection
FLEXIBILITY IN DATA COLLECTION STRATEGIES
Issue: Researchers' degrees of freedom I (non-transparent assessment choices; see Simmons et al., 2012, for a 21-word solution). Solution: Increase methods knowledge across scientists (e.g., through hackathons and workshops). Links: Brainhack Global; Open Science MOOC; NeuroHackademy
Teaching reproducible research practices. Links: Mozilla Open Leadership training; Framework for Open and Reproducible Research Training
Issue: Variability & biases in study administration. Solutions: Research project management tools (standardized training and protocols for data collection, use of logged lab notebooks, automation of processes). Links: Human Connectome Project Protocols; Open Science Framework
Standard operating procedures (public registry possible; see Lin and Green, 2016); Git version control (e.g., github.com). A provenance-logging sketch follows this row.
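One simple form of process automation is to log provenance with every output: the sketch below records the current Git commit hash and a timestamp alongside an analysis result. The file and script names are hypothetical.

```python
# Record the exact code version and timestamp alongside analysis output so results
# can be traced back to a specific commit. File and script names are hypothetical.
import json
import subprocess
from datetime import datetime, timezone

commit = subprocess.run(["git", "rev-parse", "HEAD"],
                        capture_output=True, text=True, check=True).stdout.strip()
provenance = {
    "git_commit": commit,
    "run_at": datetime.now(timezone.utc).isoformat(),
    "script": "preprocess_behavioral_data.py",  # hypothetical analysis script
}
with open("provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```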
Issue: Flexible choice of measurements, assessments or procedures. Solutions: Policies / standardization / use of fixed protocols / age-adequate tool- & answer boxes
Issue: Random choice of confounders. Solutions: Code sharing
Data manipulation checks
Clear documentation / detailed analysis plan / comprehensive data reporting. Links: FAIR (Findable, Accessible, Interoperable and Re-usable) data principles; JoVE video methods journal; Databrary for sharing video data
Preregistration
3. Issues arising post data collection & to consider throughout
ISSUES IN ANALYSIS CHOICES & INTERPRETATION
Issue: Generalizability / robustness. Solutions: Cross-validation (e.g., k-fold or leave-one-out methods)
Replication (direct replication or replication using alternative approaches). Links: Replication grant programs (e.g., NWO); replication awards (e.g., OHBM Replication Award)
Sensitivity analysis
Issue: Transparency (inadequate access to materials, protocols, analysis scripts, and experimental data). Solutions: Make data accessible, which also furthers meta-analytic options (e.g., sharing raw data or statistical maps (i.e., fMRI), sharing code, and sharing analytical choices together with the rationale for them), ideally in line with community standards. Links: NeuroVault for sharing unthresholded statistical maps; OpenNeuro for sharing raw imaging data; Dataverse open-source research data repository; Brain Imaging Data Structure (BIDS)
Make studies auditable
Transparent, clear labelling of confirmatory vs. exploratory analyses. Link: TOP (Transparency and Openness Promotion) guidelines
Analytical Flexibility
Issue: Researchers' degrees of freedom II (non-transparent analysis choices). Solution: Transparency Checklist (Aczel et al., 2019)
Hindsight bias (considering results more likely after they have occurred). Solution: Disclosure / properly labelling hypothesis-driven (confirmatory) vs. exploratory research
p-hacking (manipulating data or analyses until statistical significance is reached). Solutions: Preregistration resources (may be embargoed; time-stamped amendments possible); The Use of Preregistration Tools in Ongoing, Longitudinal Cohorts (SRCD 2019 Roundtable); Tools for Improving the Transparency and Replicability of Developmental Research (SRCD 2019 Workshop). A simulation illustrating the problem follows this block.
HARKing (hypothesizing after the results are known)
t-harking (transparently HARKing in the discussion section)
s-harking (secretly HARKing). Solution: Preregistration (e.g., OSF; AsPredicted.org)
Cherry-picking (running multiple tests and reporting only the significant ones). Solution: Registered Reports (review of the study, methods and analysis plan prior to data collection & independent of outcome). Links: Registered Reports resources (including a list of journals using RRs); secondary data preregistration template; fMRI preregistration template (Flannery, 2018); list of neuroimaging preregistration and Registered Report examples
Circularity (e.g., circular data analysis)
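The simulation below is a minimal illustration (not a method from the table) of why undisclosed flexibility matters: testing after every batch of added participants and stopping at the first p < .05 inflates the false-positive rate well above the nominal 5%, even when there is no true effect. All parameters are arbitrary.

```python
# "Data peeking" without correction: test after every 10 added subjects and stop as
# soon as p < .05. Even with no true group difference, the false-positive rate rises
# well above 5%. Parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_max, step = 2000, 100, 10
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=n_max)
    b = rng.normal(size=n_max)  # no true group difference
    for n in range(step, n_max + 1, step):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1
            break
print(f"False-positive rate with optional stopping: {false_positives / n_sims:.2%}")
```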
Issue: Need for multiple-comparison correction. Solution: p-curve analysis (testing for replicability). A minimal p-value-adjustment sketch follows this row.
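As a minimal illustration of handling the multiple-comparison issue itself (distinct from p-curve analysis), the sketch below adjusts a set of placeholder p-values, e.g., one per ROI, with the Benjamini-Hochberg FDR procedure in statsmodels.

```python
# Correct a set of p-values (e.g., one per ROI) for multiple comparisons with the
# Benjamini-Hochberg false discovery rate procedure. The p-values are placeholders.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.034, 0.046, 0.21, 0.74]
rejected, pvals_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, p_adj, sig in zip(pvals, pvals_corrected, rejected):
    print(f"p = {p:.3f} -> adjusted p = {p_adj:.3f} ({'significant' if sig else 'n.s.'})")
```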
Issue: Random choice of covariates. Solution: Specification curve analysis (a.k.a. multiverse analysis; allows quantification and visualization of the stability of an observed effect across different models; see the sketch after this row). Link: Specification curve analysis tutorial
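A minimal specification-curve sketch, assuming a hypothetical dataset (dataset.csv) with a 0/1-coded group predictor, an outcome, and four optional covariates: the same focal coefficient is estimated under every covariate combination so its stability across specifications can be inspected.

```python
# Specification-curve sketch: estimate the same focal effect ("group", assumed 0/1-coded)
# under every combination of optional covariates and inspect how stable it is.
# Variable names and the data file are hypothetical.
from itertools import combinations
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dataset.csv")
optional = ["age", "sex", "iq", "ses"]
estimates = []
for k in range(len(optional) + 1):
    for subset in combinations(optional, k):
        formula = "outcome ~ group" + "".join(f" + {c}" for c in subset)
        fit = smf.ols(formula, data=df).fit()
        estimates.append({"specification": formula,
                          "beta_group": fit.params["group"],
                          "p_group": fit.pvalues["group"]})
spec_curve = pd.DataFrame(estimates).sort_values("beta_group")
print(spec_curve)
```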
Issue: Overfitting. Solution: Cross-validation (tests for overfitting by repeatedly splitting the data into training and test subsets; a minimal sketch follows this row)
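A minimal k-fold cross-validation sketch with scikit-learn on simulated data; the model (ridge regression) and all dimensions are arbitrary choices for illustration.

```python
# K-fold cross-validation: evaluate a model on held-out folds rather than the data it
# was fit to, which guards against overfitting. Data here are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))            # e.g., 80 participants, 10 features
y = X[:, 0] * 0.5 + rng.normal(size=80)  # outcome with one informative feature

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=cv, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```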
Issue: Missing defaults (e.g., templates or atlases in MRI research), lack of representative comparison groups (e.g., age, gender), and more motion in neuroimaging studies. Solutions: Subject-specific solutions (e.g., online motion control or protocols for motion control; a framewise-displacement sketch follows this row group). Link: Framewise Integrated Real-time MRI Monitoring (FIRMM) software
Use of standardized toolboxes. Links/examples: standardized analysis pipelines for MRI, e.g., the fMRIPrep preprocessing pipeline; LONI Pipeline
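Head motion is commonly summarized as framewise displacement computed from the six realignment parameters (as in Power et al., 2012). The sketch below assumes a hypothetical motion-parameter file with translations in mm and rotations in radians, and converts rotations to arc length on a 50 mm sphere.

```python
# Framewise displacement (FD) from the six rigid-body realignment parameters
# (translations in mm, rotations in radians), converting rotations to arc length on a
# 50 mm sphere. The motion-parameter file is hypothetical.
import numpy as np

def framewise_displacement(motion_params: np.ndarray, radius_mm: float = 50.0) -> np.ndarray:
    diffs = np.abs(np.diff(motion_params, axis=0))  # volume-to-volume parameter changes
    diffs[:, 3:] *= radius_mm                       # rotations (rad) -> mm
    fd = diffs.sum(axis=1)
    return np.concatenate([[0.0], fd])              # FD is defined as 0 for the first volume

motion_params = np.loadtxt("sub-01_task-rest_motion.txt")  # hypothetical (n_volumes x 6) file
fd = framewise_displacement(motion_params)
print(f"Mean FD = {fd.mean():.2f} mm; volumes with FD > 0.5 mm: {(fd > 0.5).sum()}")
```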
Software issues
Issue: Variability due to differences in software versions and operating systems. Solution: Disclosure of relevant software information for any given analysis (see the version-logging sketch after this block). Link: Docker for containerizing software environments
Issue: Software errors. Solution: Making studies re-executable (e.g., Ghosh et al., 2017)
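A minimal sketch for disclosing software versions: it writes the Python interpreter version and the versions of a few example packages (the list is an assumption, not exhaustive) to a JSON file that can be shared alongside the analysis.

```python
# Record the interpreter and package versions used for an analysis so they can be
# reported with the results. The package list is an example, not exhaustive.
import json
import platform
from importlib import metadata

packages = ["numpy", "scipy", "pandas", "nibabel"]
versions = {"python": platform.python_version()}
for pkg in packages:
    try:
        versions[pkg] = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        versions[pkg] = "not installed"
with open("software_versions.json", "w") as f:
    json.dump(versions, f, indent=2)
```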
Research Culture
Issue: Publication bias (e.g., publication of positive findings only). Solutions: Incentives for publishing null results / unbiased publication opportunities
Issue: Biased selection and omission of null results (file drawer problem: only positive results are published, or publishing norms favoring novelty). Solutions: Post data for evaluation & independent review. Links for publishing null results: F1000Research; bioRxiv preprint server; PsyArXiv preprints for the psychological sciences
Less reliance on all-or-nothing significance testing (e.g., Wasserstein et al., 2019)
Use of confidence intervals (e.g., Cumming, 2013); a minimal sketch follows this block
Bayesian modeling (e.g., Etz and Vandekerckhove, 2016)
Behavior change interventions (see Norris and O'Connor, 2019)
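A minimal sketch of reporting an estimate with a 95% confidence interval rather than a bare significance decision; the two groups are simulated purely for illustration.

```python
# Report a mean difference with a 95% confidence interval instead of only a p-value
# threshold. Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.3, scale=1.0, size=40)
group_b = rng.normal(loc=0.0, scale=1.0, size=40)

n_a, n_b = len(group_a), len(group_b)
diff = group_a.mean() - group_b.mean()
sp2 = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(sp2 * (1 / n_a + 1 / n_b))          # pooled standard error
t_crit = stats.t.ppf(0.975, n_a + n_b - 2)       # two-sided 95% critical value
print(f"Mean difference = {diff:.2f}, 95% CI [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")
```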
Issue: Scientists' personal concerns (e.g., risk of being scooped, leading to non-transparent practices). Solution: Citizen science (co-producing research aims)
POPULATION SPECIFIC
Issue: Ethical reasons (e.g., that prohibit data sharing). Solutions: Anonymization, or sharing of group-level maps rather than individual data (i.e., t-maps). Links: De-identification Guidelines; Anonymisation Decision-making Framework
Follow reporting guidelines. Links: EQUATOR reporting guidelines; COBIDAS checklist
Maximize participants' contribution (ethical benefit)