Neuropsychopharmacology. 2024 Sep 6;50(1):67–84. doi: 10.1038/s41386-024-01973-5

Reporting checklists in neuroimaging: promoting transparency, replicability, and reproducibility

Hamed Ekhtiari 1,2,, Mehran Zare-Bidoky 3, Arshiya Sangchooli 4, Alireza Valyan 3, Anissa Abi-Dargham 5,6, Dara M Cannon 7, Cameron S Carter 8, Hugh Garavan 9, Tony P George 10,11, Peyman Ghobadi-Azbari 3, Christoph Juchem 12,13, John H Krystal 14,15, Thomas E Nichols 16, Dost Öngür 17,18, Cyril R Pernet 19, Russell A Poldrack 20, Paul M Thompson 21, Martin P Paulus 2
PMCID: PMC11525976  PMID: 39242922

Abstract

Neuroimaging plays a crucial role in understanding brain structure and function, but the lack of transparency, reproducibility, and reliability of findings is a significant obstacle for the field. To address these challenges, there are ongoing efforts to develop reporting checklists for neuroimaging studies to improve the reporting of fundamental aspects of study design and execution. In this review, we first define what we mean by a neuroimaging reporting checklist and then discuss how a reporting checklist can be developed and implemented. We consider the core values that should inform checklist design, including transparency, repeatability, data sharing, diversity, and supporting innovations. We then share experiences with currently available neuroimaging checklists. We review the motivation for creating checklists and whether checklists achieve their intended objectives, before proposing a development cycle for neuroimaging reporting checklists and describing each implementation step. We emphasize the importance of reporting checklists in enhancing the quality of data repositories and consortia, how they can support education and best practices, and how emerging computational methods, like artificial intelligence, can help checklist development and adherence. We also highlight the role that funding agencies and global collaborations can play in supporting the adoption of neuroimaging reporting checklists. We hope this review will encourage better adherence to available checklists and promote the development of new ones, and ultimately increase the quality, transparency, and reproducibility of neuroimaging research.

Subject terms: Neuroscience, Translational research, Psychiatric disorders, Cognitive neuroscience, Diseases of the nervous system

Introduction

The impact of neuroimaging research could be enhanced by robust methods for increasing the reliability and generalizability of findings across numerous studies. A critical issue hindering generalizability is the lack of transparent reporting of crucial methodological details in neuroimaging studies [1, 2]. Incomplete reporting reduces the value of individual studies and prevents replication by other researchers [1, 3]. Furthermore, biased reporting practices, particularly the emphasis on positive results, can mislead researchers [4, 5]. Several factors hinder the generalizability of neuroimaging studies, including low statistical power, flexibility in data analysis, software errors, and a lack of direct replication [3]. In the fields of psychiatry and neurology, these issues are compounded by the pitfalls of case-control designs, confounders in associational research, and a lack of individual-level analyses and causal mechanisms [6]. Furthermore, the inconsistency of individual studies due to small and heterogeneous samples, analytical flexibility, and publication bias can also affect the generalizability of neuroimaging findings [7]. To mitigate some of these challenges, improve rigor and reproducibility, and facilitate reliable meta-analyses of available research, there have been growing calls for protocol pre-registration [8], standardized processing guidelines [9], and transparent reporting [10–12].

The growing complexity and diversity of neuroimaging research methods, and their incomplete reporting, exacerbate these transparency and reproducibility issues. To improve the reporting of critical study design and execution details and promote robust and transparent research practices, we encourage the adoption of reporting checklists. These checklists outline the minimum set of details required to replicate studies, enable cross-study comparisons, and facilitate meta-analyses. Checklists can be broad in scope, outlining general reporting requirements across several neuroimaging modalities [13, 14], or they may target the reporting requirements of a single neuroimaging modality [15, 16].

Table 1 summarizes the terms used in neuroimaging checklists along with some examples or supporting evidence for each definition.

Table 1.

Key terms in neuroimaging reporting checklists: definitions and examples.

Best Practices: Guidelines promoting transparency, reproducibility, and rigor in research methodology and reporting [13]. Examples: best practices in structural neuroimaging of neurodevelopmental disorders [91]; best practices in data analysis and sharing in neuroimaging using MRI [40].

Biased Reporting: Systematic deviation from accurate and impartial reporting of research findings, methods, or interpretations, undermining the integrity and reliability of research outcomes [92]. Example: potential reporting bias in neuroimaging studies of sex differences [5].

Checklist Adherence: The extent to which researchers comply with requirements and recommendations in reporting checklists, indicating thoroughness and completeness in study reporting [39]. Example: high adherence to the CONSORT checklist in randomized controlled trials [93].

Checklist Development: Systematic creation of reporting checklists through collaboration/consensus among experts to identify essential elements for transparent and reproducible research reporting [86]. Example: guidelines for reporting health research [94].

Consensus Statements: Formalized documents representing collective agreement among experts in a field on specific topics or practices [95]. Examples: the STARD (Standards for Reporting of Diagnostic Accuracy Studies) statement for diagnostic accuracy studies [14]; the International Association for the Study of Pain consensus statements on the evaluation of neuroimaging measures of chronic pain [96].

Consensus-Making: Process of reaching agreement among a group of experts or stakeholders through discussion and negotiation [97]. Example: development of the PRISMA statement through consensus among systematic review experts [98].

Delphi: A process in which experts in the field approach consensus on a matter by participating in a series of commenting and/or item rating rounds with feedback [99]. Example: a checklist for assessing the methodological quality of concurrent tES-fMRI studies (ContES checklist): a consensus study and statement [16].

Data Repositories: Centralized platforms for storing and sharing research data, often with standardized formats and accessibility [100]. Examples: OpenNeuro, a repository hosting neuroimaging datasets with standardized formatting [61]; the National Institutes of Health (NIH) data repositories for the Alzheimer’s Disease Neuroimaging Initiative (ADNI) [101].

Data Sharing: The practice of making research data available to facilitate transparency, reproducibility, and collaboration in scientific research [102]. Example: sharing of raw neuroimaging data from the Human Connectome Project [103].

Efficiency: Achieving maximum output with minimal resources, time, or effort, often through streamlined processes or workflows [104]. Examples: use of automated data processing pipelines to analyze neuroimaging data more efficiently [105]; optimizing MRI scan protocols to reduce scan time while maintaining image quality, improving patient comfort and increasing scanner availability [106].

Ethical Principles: Fundamental guidelines governing the moral conduct of research, ensuring respect for participants and integrity [107]. Example: the Declaration of Helsinki, a landmark document in medical research ethics that outlines ethical principles for human experimentation, emphasizing participant welfare, informed consent, and research integrity [108].

Frameworks: Structured outlines or systems providing a foundation for organizing and addressing complex research concepts [109]. Example: the FAIR (Findable, Accessible, Interoperable, Reusable) framework for data management [110].

Generalizability: The extent to which research findings can be applied or generalized to other populations, settings, or conditions beyond the study sample [111]. Example: evaluating the generalizability of neuroimaging findings across diverse demographic groups and clinical populations [112].

Good Research Practices: Established methods and behaviors recognized as conducive to producing reliable and valid research outcomes [113]. Example: good practice in food-related neuroimaging [114].

Guideline: Detailed recommendations or instructions outlining best practices or standards for conducting research [115]. Example: the PRISMA 2020 statement, an updated guideline for reporting systematic reviews [116].

Harmonization: Process of aligning or standardizing practices, procedures, or guidelines across different research contexts [117]. Example: harmonization of neuroimaging data formats and analysis methods within the BIDS framework [2, 118].

Inclusiveness: Ensuring the involvement and representation of diverse individuals or perspectives in research processes [119]. Example: inclusion of participants from various demographic groups in neuroimaging studies to improve generalizability [120].

Manuals: Comprehensive guides or handbooks providing detailed instructions or procedures for conducting specific tasks [121]. Examples: the MEEG manual for standardized procedures in magnetoencephalography and electroencephalography [122]; harmonization efforts to ensure that data from different PET scanners can be compared effectively in multi-center studies [123].

Protocols: Detailed plans or procedures outlining the steps to be followed in research studies or experimental investigations [124]. Examples: the MarkVCID cerebral small vessel consortium: II. Neuroimaging protocols [125]; an MRI protocol specifying the type of scan sequence, imaging parameters, and participant instructions for an experiment [126].

Questionable Research Practices (QRPs): Questionable or unethical behaviors in research, such as selective reporting or data manipulation [127]. Examples: prevalence of research misconduct and questionable research practices [128]; excluding data from participants who performed poorly in an fMRI task to achieve statistically significant results would be a QRP [129].

Reliability: Consistency and dependability of research findings or measurements, indicating the extent to which results can be trusted and repeated [130]. Examples: test-retest reliability of the evoked BOLD signal [131]; test-retest reliability of common task-fMRI measures [132].

Repeatability: The ability of a study or experiment to produce consistent results when repeated under similar conditions [133]. Example: repeatability of neuroimaging findings demonstrated through replication studies using the same methodology [134].

Replicability: The ability of a research study or experiment to produce consistent results when conducted by different researchers, using different methods, materials, or conditions [135]. Example: the Open Science Collaboration, a collective effort to promote transparency, reproducibility, and openness in scientific research across various disciplines [136].

Reporting Checklist: Structured list of essential items or criteria designed to guide researchers in accurately reporting research methods, results, and conclusions [137]. Example: a methodological checklist for fMRI drug cue reactivity studies: development and expert consensus [15].

Reporting Formats: Standardized formats or templates for presenting research findings, facilitating clear communication and interpretation of results [138]. Example: the Introduction-Methods-Results-Discussion (IMRAD) structure [139].

Reproducibility: The ability of a study or experiment to produce the same outputs using the same methods, by different researchers [140], and possibly on different data. Example: reproducibility of neuroimaging findings demonstrated by independent research groups using the same dataset [139, 141].

Research Integrity: Adherence to ethical principles and professional standards in the conduct and reporting of research, ensuring honesty, accuracy, and transparency [142]. Examples: considerations for Open Science practices in neuroimaging [143]; ensuring informed consent, data security, and honest reporting of findings as parts of research integrity in neuroimaging studies [144].

Resource Sharing: The practice of making research data, materials, and tools openly accessible to the scientific community for replication, verification, and reuse [145]. Example: TemplateFlow, an open-science platform to facilitate the sharing and dissemination of neuroimaging templates and atlases [146].

Robust Methodologies: Reliable and well-established research methods and techniques that produce valid results across different conditions or contexts [147]. Examples: common pitfalls and limitations associated with low statistical power, and strategies for improving power estimation and reporting in network neuroscience research [148]; using validated image analysis pipelines in fMRI studies to ensure robust methodologies [149].

Scientific Rigor: The degree of thoroughness, accuracy, and reliability in the design, conduct, and reporting of scientific research [150]. Example: adoption of pre-registration practices in psychological science (i.e., registering study hypotheses, methods, and analyses before data collection) to enhance the credibility of research findings [151].

Standards: Published documents that establish technical specifications and procedures designed to maximize the reliability of the materials, products, methods, and/or services people use every day [152]. Example: the Brain Imaging Data Structure (BIDS) for organizing neuroimaging data [2].

Transferability: The extent to which research findings, methods, or interventions developed in one context can be applied or adapted to another context [153]. Example: neurolaw in the Netherlands, investigating the applicability of neuroscientific findings to legal contexts, particularly in the domain of adolescent criminal law [154].

Transparency: Openness, clarity, and completeness in the reporting of research methods, results, and interpretations, promoting scrutiny and reproducibility [155]. Example: enhancing transparency in neuroimaging research by providing comprehensive methodological details and sharing raw data [2].

Trial Registries: Databases or repositories where researchers prospectively register clinical trials, providing details about study design, interventions, and outcomes [156]. Examples: ClinicalTrials.gov [157]; the International Clinical Trials Registry Platform (ICTRP) [158].

Validity: The degree to which research findings accurately represent the phenomenon being studied, ensuring the robustness and trustworthiness of conclusions [159]. Example: the importance of validity in study design, data acquisition, and analysis, and the necessity of ensuring that neuroimaging findings accurately reflect underlying neural processes rather than confounding factors or methodological limitations [39].

The following sections will delve into the fundamental values and concepts underpinning the development of reporting checklists in neuroimaging. We will explore the characteristics of a reporting checklist, outline the development milestones, and leverage insights from existing checklists to propose a roadmap for advancing reporting practices in neuroimaging.

Definition and impact of reporting checklists

Scientific research relies on robust methodologies and transparent reporting to ensure the credibility and reproducibility of its methods, analytics, and findings. Reporting checklists serve as essential tools to achieve these goals. These systematic guides, often developed through consensus among researchers within a specific field, provide a comprehensive list of items to consider, implement, and report throughout the research process. By addressing key elements such as study design, data collection, analysis, and interpretation, reporting checklists promote the quality, integrity, and rigor of research methods. Ultimately, they enhance the reliability and validity of research findings and facilitate reproducibility [17–20].

The importance of reporting checklists is underscored by the emergence of related yet distinct concepts within the scientific community. These include protocols for specific methodologies (e.g., The Comorbidity and Cognition in Multiple Sclerosis (CCOMS) neuroimaging protocols [21]), reporting formats (e.g., APA style [22]), frameworks (e.g., Reviewers’ Competency Framework [23]), methodological manuals (e.g., Cochrane Risk of Bias Tool [24]), data standards (e.g., Brain Imaging Data Structure (BIDS) [25]), trial registries (e.g., https://www.clinicaltrials.gov), ethical codes/principles [26], and consensus statements [27, 28].

Reporting checklists can increase the quality of published studies by increasing the transparency of the research process, methodology, and analytics in peer-reviewed publications. By encouraging the detailed reporting of research methods and findings, regardless of their outcome and significance, they contribute to the trustworthiness of research. Opaque or incomplete reporting practices can erode trust in studies. While checklists alone cannot prevent data fabrication or misconduct, they can promote transparency in research practices by systematically prompting researchers to report key methodological details and results, thus facilitating greater scrutiny and verification of study findings by peers and the scientific community. This transparency can help detect and deter research misconduct, as well as enhance the credibility and reliability of research outputs.

The benefits of reporting checklists extend to a range of stakeholders, first and foremost the researchers themselves. Checklists are often used at the reporting stage and can facilitate clear and transparent reporting, but would be even more consequential before most study design and analysis decisions have been made: consulting methodological checklists early on in the research process can help investigators avoid poor design choices and serious errors. For example, checklists across neuroimaging modalities can (and often do) require researchers to justify their sample sizes, ideally with power analyses conducted before recruitment. They can also suggest or explicitly require various quality-checking procedures, depending on the specific modality. Both measures ultimately reduce wasted time and resources and improve research quality, and are especially critical in light of recent evidence about severe power issues [29] and the distorting impact of inadequate quality control [30] across neuroimaging research. Checklists could also help guide the interpretation of study results (for example, by outlining specific criteria for the biological interpretation of neuroimaging derivatives). Reporting checklists enhance the assessment and transparency of research reports for other stakeholders as well: reviewers can more clearly evaluate the quality of grant applications and manuscripts; journals and publishers can use these checklists to promote transparency in publications; editors, who may not specialize in every area, can use checklists to more comprehensively verify basic research rigor; members of industry can push for clearer reporting of sponsored research and place greater trust in academic reports; funding agencies can use checklists to help maximize the impact of their investments; and practitioners can depend on more reliable research findings to guide their practice.
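As an illustration, the sample-size justification that many checklists request can be as simple as an a priori power calculation. The sketch below uses Python's statsmodels; the effect size and thresholds are illustrative assumptions, not values prescribed by any particular checklist.

```python
# A minimal sketch of the a priori power analysis many checklists ask
# authors to report: required sample size for a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,        # assumed Cohen's d (hypothetical)
    alpha=0.05,             # two-sided significance level
    power=0.80,             # desired statistical power
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")  # ~64
```

Reporting the assumed effect size, alpha, and target power alongside the resulting sample size is what allows readers and meta-analysts to evaluate the justification, which is the point of the checklist item.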

Core values in developing methodological reporting checklists

Methodological reporting checklists reflect a pragmatic and proactive approach to increasing the reproducibility, transparency, and integrity of evidence in the field [31, 32]. To meet such an objective, the development of valid and useful checklists should follow some core values as suggested below.

Transparency

The checklist development process should be transparent, from the initial phase of developing the idea through proposing the final set of checklist items. Many checklist development steps are prone to bias or misreporting, such as inviting only familiar colleagues to the panel or proposing a mandatory item in which the authors have a conflict of interest. Pre-registering the checklist development methodology can help promote the transparency of the process [33].

Repeatability

A reporting checklist should be developed via a process that could be replicated by other teams; that is, it should be conducted and reported objectively enough that, were the development process hypothetically repeated, it would be feasible to follow and would yield nearly the same results. Repeatability can be addressed in the process of shaping the steering committee and expert panel, the methodological steps for reaching a consensus, the thresholds of agreement for selecting the mandatory and optional items in the checklist, and the process for continued updating and monitoring [32]. Note that this does not imply that an existing checklist's creation process should actually be replicated; simply duplicating an existing checklist would hardly be useful. Rather, the core value is that the development process be described precisely and objectively enough to be repeatable: repeatability here is an objective measure of transparent methodological reporting, not a recommendation to replicate available checklists.

Data sharing

To increase the transparency and reproducibility of reporting checklist development, all relevant data and analytics used in the process should be made publicly available, including the database of panelists and the anonymized comments and ratings of experts. This allows the scientific community to ascertain the validity of the reporting checklist [34].

Diversity, inclusiveness, and global collaborations

In the development process for reporting checklists in each field, it is important to consider voices from all active researchers in the field around the globe. The recruitment for the steering committee and expert panel should consider diversity and inclusiveness on multiple levels, e.g., gender, race, ethnicity, academic institutions, industries, and geographic location. Lack of diversity may result in missing perspectives and hinder widespread adoption. Efforts to enhance diversity could leverage existing initiatives like the ALBA Network [35] and the Organization for Human Brain Mapping’s Diversity and Inclusivity Committee [36]. While no relevant research exists on the importance of diversity for neuroimaging checklists and guideline development, it has been recognized that lack of diversity among clinical practice guideline authors can lead to inequities [37, 38].

Supporting innovation and new methodologies

While checklists are believed to help the scientific community in several ways, there are concerns that they might limit the diversity and novelty of scientific research endeavors [33]. It is thus important to highlight that a checklist is not usually meant to impose the precise methods employed in the formulation of specific studies. Rather, its purpose is to guide researchers in conscientiously addressing, documenting, reporting, and sharing the aspects of their study design, analysis, and reporting that could significantly influence the outcomes. There are, however, cases where checklists take a more prescriptive form: the recent REFORMS checklist (https://reforms.cs.princeton.edu/), for example, aims to prevent common pitfalls such as leakage of test data into training data, and the COBIDAS guidelines, as noted below, prescribe correction for multiple testing. Researchers who plan and present research outside the framework provided by the available reporting checklists are encouraged to articulate their methodological choices and variations.

Current experiences in developing reporting checklists in neuroimaging

This section explores the current landscape of reporting checklists in brain mapping, focusing on examples such as the Organization for Human Brain Mapping (OHBM) Committee on Best Practices in Data Analysis and Sharing (COBIDAS) checklists, the International Society for Magnetic Resonance in Medicine (ISMRM) Magnetic Resonance Spectroscopy (MRS) checklist, the Addiction Cue-Reactivity Initiative (ACRI) fMRI Drug Cue-Reactivity (FDCR) checklist, the International Network of Neuroimaging and Neuromodulation (INNN) Concurrent tES-fMRI (ContES) checklist, and the Content and Format of PET Brain Data checklist. It should be noted that this is not an exhaustive list, and several other neuroimaging reporting checklists are available on the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) website (https://www.equator-network.org/reporting-guidelines).

OHBM committee on best practices in data analysis and sharing (COBIDAS) checklists

One of the most widely used and comprehensive sets of checklists in neuroimaging was developed by the Organization for Human Brain Mapping (OHBM) (https://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=1). These checklists grew out of the need to apply open science principles to promote reproducibility and transparency in neuroimaging studies, building on earlier reporting guidelines developed informally within the field [39]. Although there are controversies over what reproducibility and transparency really mean, the ultimate aim of reporting was to allow the reader to completely understand the strengths and limitations of the reported study and to enable a researcher to exactly reproduce or manipulate each aspect of that study. To reach this aim, in 2014, motivated by growing concerns about reproducibility in science, the OHBM formed a committee to establish recommendations for best practices. The Committee on Best Practices in Data Analysis and Sharing (COBIDAS) first focused on MRI. It considered seven domains of practice: experimental design, acquisition, preprocessing, statistical modeling and inference, results, data sharing, and reproducibility. It quickly became apparent that prescribing the “right” practice was generally impossible because of the great variation in the objectives and methods of any particular study. Hence, the focus was on transparent reporting of practice, and a set of tabular items to be reported was created for each domain. One exception was statistical modeling and inference, where it was agreed that some practices are ill-advised (e.g., not correcting for multiple testing), and thus for this section the checklists took on a more prescriptive tone. The report was finished in 2016, and the community was invited to comment. A blog was set up to collect input and address each comment received. The document was updated according to that input, and after approval by the OHBM Council, the “Best Practices in Data Analysis and Sharing in Neuroimaging using MRI” was published on bioRxiv [40], with a short descriptive commentary in Nature Neuroscience [13]. Based on the positive response from the community, a COBIDAS Magneto-/Electroencephalography (MEEG) committee was established in 2017. It followed the same structure as the MRI report, again using community feedback collected through open posting on a blog [9, 41]. Since 2018, the newly established OHBM Best Practices Committee has coordinated both in-house COBIDAS reports (for instance, on brain network nomenclature [42] and clinical language fMRI [43]) and the endorsement of imaging best practices from other groups and societies (see https://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=4027). The OHBM approach is inclusive, from the diversity of the COBIDAS membership to the mandatory open posting of reports for community comment. These reports are also living documents, with updates planned every 3 to 6 years. The collective experience is that there is a difference between the minimal information needed to understand what was done in a study and the minimal information needed to reproduce it. Brain imaging involves an expanding set of complex data acquisition and processing steps with far too many details to report in full in a main text; these details can, however, be listed in appendices or supplementary materials, or linked in data and code repositories.

Minimum reporting standards for in vivo magnetic resonance spectroscopy (MRSinMRS)

Over the last 40 years, in vivo magnetic resonance spectroscopy (MRS) has evolved from a pure research method into a diagnostic neuroimaging tool [44]. The MRS field, however, continues to receive critiques for diverse and suboptimal methodologies [45], a lack of rigor and reproducibility, failure to report the fundamental aspects of studies needed for meta-analyses [46], and limited practical guidance for researchers in spite of the rich literature [47]. These limitations have been speculated to hinder MRS from prevailing in routine clinical applications. To tackle these issues, diverse international subgroups were formed in a coordinated community-wide effort that resulted in expert consensus recommendations on 13 major technical aspects of MRS, published in a ‘Special Issue: Advanced methodology for in vivo magnetic resonance spectroscopy’ between 2019 and 2021 [28]. Among these publications was a set of minimum recommendations and reporting standards for MRS studies, comprising a checklist among other recommendations. To this end, an initial set of guidelines established at the 2016 International Society for Magnetic Resonance in Medicine (ISMRM) workshop was reviewed by a diverse group of MRS researchers comprising both established and less experienced researchers, including trainees. The reporting guidelines were then evaluated by experts who met predefined criteria for inclusion in the consensus panel for checklist development. This led to a list of minimum standards comprising four sections: hardware, acquisition, data analysis methods and outputs, and data quality [48]. The checklist is intended to guide authors, serve as a quality assessment tool for journals, and standardize reporting for subsequent studies and meta-analyses.

Addiction cue-reactivity initiative (ACRI) fMRI drug cue-reactivity (FDCR) checklist

The field of fMRI drug cue-reactivity (FDCR) has seen a dramatic surge in publications over the last 25 years, with 415 articles published by 2023 [49]. In addition to some inherent difficulties of conducting FDCR studies, inaccurate reporting of the different aspects of these studies limits the overall reproducibility and rigor of the reports. After a series of meetings within the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Addiction working group (https://www.enigmaaddictionconsortium.com), experts noted a high level of heterogeneity in the FDCR field, poor methods reporting, and disagreement over the main methodological parameters. These discussions led to the formation of a steering committee with the aim of preparing a reporting checklist for FDCR [15].

After preparing the first draft of the reporting checklist and reaching agreement within the steering committee, a modified Delphi methodology was used: experts were invited to comment on and modify the checklist using predefined criteria. After the steering committee implemented the received comments, the checklist was sent to the experts for the rating phase, in which experts rated items from ‘not important’ (1) to ‘extremely important’ (5). Notably, the thresholds for including or excluding an item were pre-registered in the checklist development protocol. The comments received in the revision phase and the ratings in the rating phase were anonymized and made publicly available. Figure 1 shows a summary of the steps taken to develop the checklist and the quantitative measures of agreement on the essential items to be reported.

Fig. 1. Addiction Cue-Reactivity Initiative (ACRI) fMRI Drug Cue Reactivity (FDCR) checklist development process and outcomes.


a Procedure flowchart: the process is roughly divided into distinct stages: selection of the steering committee (black), use of the results of a previously mentioned systematic review to choose the initial checklist items and expert panel candidates (pink), the checklist development phase (red), expert panel selection (purple), the checklist commenting and revision phase (green), the checklist rating phase (yellow), and data analysis and Delphi process finalization (blue). The number of contributors to each section is displayed by ‘n =’. Within the graph, an overview of the structure of the checklist at each stage is presented in terms of the number of categories, essential items to be reported, and further recommendations and their categories. recom: recommendations. b Checklist rating by experts: each item was rated from 1 to 5 (not important to extremely important). All items met threshold 1, being rated as moderately, highly, or extremely important by >70% of the raters. In addition, 24 items reached the more stringent threshold 2 of being rated as either highly or extremely important by 80% of raters (those that did not reach this threshold are marked with ‘†’). The figure is adapted and modified from [15].

The checklist emphasizes the reporting of methodological details that are crucial to conducting an FDCR study and merit universal inclusion in study reports. It comprises seven main categories: participant characteristics, general fMRI information, general task information, cue information, craving assessment inside the scanner, craving assessment outside the scanner, and pre- and post-scanning considerations (Fig. 1b). By adhering to the ACRI-FDCR checklist, researchers can ensure comprehensive and informative reporting of their fMRI studies in the context of addiction research, facilitating the advancement of knowledge in this critical domain. After finalizing the reporting checklist, the researchers checked it against previous FDCR publications to assess the reporting status of the checklist items. This step was done to assess retrospective adherence to the proposed items and to motivate future evaluations of how the checklist's introduction improves the quality of reporting in the field.

International network of neuroimaging and neuromodulation (INNN) concurrent tES-fMRI (ContES) checklist

Integration of transcranial electrical stimulation (tES) with concurrent fMRI allows for mapping neural activity during neuromodulation, supporting causal studies of both brain function and tES effects. Given the potential for variability in the neural responses elicited by tES depending on methodological nuances, and the increasing number of concurrent tES-fMRI studies, guidelines on the factors that should be reported and/or controlled are essential to ensure accurate interpretation and reproducibility of research findings. Additionally, consistent methodology and reporting practices would facilitate meta-analyses. When these issues were raised in the International Network of Neuroimaging and Neuromodulation (INNN), experts concluded that the field of concurrent tES-fMRI (ContES) had an incoherent reporting system. As a result, a steering committee was formed to develop a reporting checklist for ContES [16]. The overall process of this Delphi study is similar to that of the ACRI-FDCR study described in the previous section. The checklist emphasizes the reporting of crucial methodological details for conducting a ContES study that merit universal application. It is categorized into three main sections: technological factors, safety and noise tests, and methodological factors. By adhering to the ContES checklist, researchers can enhance the methodological reporting quality of future concurrent tES-fMRI studies and increase methodological transparency and reproducibility, ultimately fostering more robust research and facilitating meta-analyses.

Content and format of PET brain data checklist

The need to replicate Positron Emission Tomography (PET) findings, ensure quality control, and enable multi-center collaborations through data sharing, along with the growing interest of funding agencies and journals in transparency and data sharing, prompted the field of PET neuroimaging to develop guidelines for data sharing and reporting. Data sharing in PET presents challenges beyond those faced by other neuroimaging modalities, including the need to share raw data alongside detailed reconstruction methods, blood sampling data and related analyses, and dynamic modeling techniques. Efforts to address these challenges started at the NeuroReceptor Mapping Conference in 2016 (NRM2016), where a panel comprising more than 250 PET specialists discussed the need for standards in PET data sharing. This meeting led to the formation of working groups charged with proposing standards for the content and structure of PET imaging studies to facilitate the sharing of PET data [50]. The resulting checklist includes more than 100 mandatory and recommended items related to radioligands, data acquisition, data analysis, and statistics. It is an important step toward improving the interpretation and replicability of published work, while also facilitating the archiving and sharing of PET data.

Development process for reporting checklists in neuroimaging

Generally, the aim of developing reporting checklists is to identify essential elements that should be considered over the whole life cycle of a research project, providing transparent and detailed information about study design, methodology, results, interpretation, and practical implications. Reporting checklists are therefore not merely designed to be used when drafting the paper; more importantly, they should be consulted in the initial stages of protocol design to ensure that all factors necessary for a rigorous study are taken into account and that the necessary details are recorded throughout the study lifecycle. These sets of criteria are often established by observing deficiencies in the current literature that may undermine the reliability and reproducibility of findings.

A typical reporting checklist often results from a consensus among experts in the field on the minimum set of items deemed critical to report in a research paper. The checklist often includes items categorized into different aspects of a study (e.g., participant characteristics, processing pipelines, results reporting), accompanied by suggestions on where and how to address each reporting requirement [51].
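As an illustration of this structure, a checklist can also be represented in machine-readable form, which eases the automated adherence checks discussed later in this review. The sketch below is hypothetical; its categories, items, and field names are illustrative and not drawn from any published checklist.

```python
# A minimal sketch of a machine-readable reporting checklist, with items
# grouped by study aspect and paired with guidance on where to report them.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    category: str    # study aspect, e.g., "Participant characteristics"
    item: str        # what must be reported
    where: str       # suggested manuscript section
    mandatory: bool  # essential item vs. optional recommendation

checklist = [
    ChecklistItem("Participant characteristics",
                  "Inclusion and exclusion criteria", "Methods", True),
    ChecklistItem("Processing pipeline",
                  "Software name and version", "Methods", True),
    ChecklistItem("Results reporting",
                  "Unthresholded statistical maps shared",
                  "Data availability", False),
]

for entry in checklist:
    flag = "ESSENTIAL" if entry.mandatory else "optional"
    print(f"[{flag}] {entry.category}: {entry.item} ({entry.where})")
```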

The development cycle for methodological reporting checklists consists of seven steps, each involving different activities and drawing on several tools and techniques (Fig. 2).

Fig. 2. Development cycle of reporting checklists.


The inner circle depicts the major steps, with examples of relevant activities/tasks shown in the middle circle, followed by examples of tools/mechanisms in the outer circle.

Genesis: What is the problem and what are its dimensions?

As the first step in developing a checklist, there should be a clear and defined problem statement reflecting what the checklist needs to address. These problems are often identified when experts are conducting research, reading published articles, or communicating with peers in meetings. To elaborate the problem and define its many dimensions, experts often communicate their concerns with peers and form a committee to address the issue. For example, the MRSinMRS checklist originated from a workshop organized by a society (ISMRM). Then, under the umbrella of a broader community-based consensus effort in the field of MRS, the reporting concerns were discussed and a dedicated subgroup was formed to establish a checklist [48].

Development: What is the current situation in the field?

To make sure that the defined problem is valid and important for the field, it is helpful to map the parameter space of the problem in the early developmental stages. This is preferably done through a thorough systematic literature search to identify the parameters affecting the main problem, along with their frequency and their impact on the transparency of procedures and the reproducibility of findings. For instance, a systematic review conducted in the field of FDCR [49] laid the foundations of the ACRI-FDCR checklist [15] and further enriched the reporting checklist by identifying parameters that had probably been overlooked in the initial steps.

Organization: Who are the main contributors?

Engaging the community of active researchers can be achieved by forming a steering committee and/or an expert panel, based on a set of systematic criteria for inviting experts, such as a minimum number of publications. With the help of the systematic literature search done in the previous step, it is possible to identify experts and organize them into the steering committee (SC) and/or the expert panel (EP) in a replicable and inclusive way. As an example, the ContES checklist predefined its expert panel criteria as holding at least one first, last, or corresponding authorship on a publication in the field of concurrent tES-fMRI. To make sure that no key expert is missed in the invitation process, invitees can also be asked to nominate further experts for review by the SC. In the checklist development process, the SC is responsible for developing the initial draft of the protocol and checklist and making the initial decisions to move development forward. The members of the EP, in turn, shape the reporting checklist with their revisions. The process should be transparently recorded, documented, and reported.

Consensus-making: Is there a common ground for the checklist items?

Reporting checklists are developed to pave the way for adherence to reporting standards in a specific research community, which can only be achieved if common ground is established among active community members. There should be a predefined process of community engagement (as described above) and consensus-making (often through survey mechanisms) to elicit expert opinion. One popular way to reach consensus is through a Delphi process [52]. Experts are first provided with the initial draft developed by the steering committee, listing all items the reporting checklist should include, and are asked to modify the checklist by changing items or suggesting the removal or addition of items. The steering committee collects all the input and updates the checklist items. In the next round, experts are provided with the updated checklist and are asked to rate items on a Likert scale from “not important to be reported” to “significantly important to be reported”. Based on thresholds predefined in the checklist development protocol, items are either removed or chosen for inclusion in the final checklist. An iterative process can be defined to develop agreement in further rounds of expert elicitation on items that passed an initial threshold but did not achieve sufficient consensus.

For instance, the ACRI-FDCR checklist used a modified Delphi process with two rounds of expert elicitation. In the first round, the initial draft of the checklist was sent to the expert panel for comments and for suggestions on adding or removing items. After the steering committee implemented the received comments, the checklist was sent to the experts for the rating phase, in which experts rated items from ‘not important’ (1) to ‘extremely important’ (5). Thresholds for including or excluding an item were pre-registered in the checklist protocol: the more stringent threshold was a rating of 4 or 5 by more than 80% of the experts, and the less stringent threshold was a rating of 3, 4, or 5 by more than 70% of the experts. For the additional recommendations, which were simply rated Yes/No, a Yes rating by more than 50% of the experts led to inclusion as a recommendation (not as an essential item).
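A minimal sketch of how such pre-registered cutoffs can be applied mechanically is shown below; the item names and ratings are hypothetical, while the thresholds follow the description above.

```python
# Classify checklist items against pre-registered Delphi thresholds:
# "stringent" = rated 4 or 5 by >80% of experts,
# "lenient"   = rated 3, 4, or 5 by >70% of experts.
def classify_item(item_ratings):
    n = len(item_ratings)
    stringent = sum(r >= 4 for r in item_ratings) / n
    lenient = sum(r >= 3 for r in item_ratings) / n
    if stringent > 0.80:
        return "essential (stringent threshold)"
    if lenient > 0.70:
        return "essential (lenient threshold)"
    return "excluded"

# Hypothetical expert ratings (1-5) for two illustrative items.
ratings = {
    "cue_duration": [5, 4, 4, 5, 4, 5, 4, 4, 5, 4],
    "scanner_vendor": [3, 4, 2, 3, 4, 3, 5, 3, 2, 4],
}
for item, rs in ratings.items():
    print(item, "->", classify_item(rs))
```

Pre-registering these cutoffs before the rating round, as done for the ACRI-FDCR checklist, removes any discretion over which items survive, which is precisely what makes the consensus process repeatable.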

Validation: Does the checklist reflect current publication norms?

After the checklist items are finalized in the consensus-making stage, the applicability of the checklist can be tested against the currently available literature. This can be achieved by rating a reasonable, predefined number of sample studies against the developed checklist, as was done for the MRSinMRS checklist; as reported in that checklist's manuscript, exemplar articles were rated to evaluate whether the checklist could practically be utilized. This step not only validates the checklist's effectiveness, supporting further utilization by scientific audiences, but can also help improve the checklist by flagging vague or irrelevant items for removal in subsequent revisions.

Adherence: How much is the checklist being used in future publications?

For a reporting checklist to be impactful, there should be a follow-up plan to reinforce adherence in upcoming publications. This plan can include promoting the checklist's use by other researchers in the field, providing education and support for checklist adopters, checking adherence in new publications, and finding ways to make the checklist easier to use. As an example, ACRI-FDCR actively monitors new publications in a live annual systematic review and checks their adherence to the checklist (https://med.umn.edu/addiction/network/acrin).

Notably, another systematic literature search is needed at this stage to determine the level of adherence to the checklist, as well as possible pitfalls and defects to be considered in the updating stage.
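Such an adherence assessment is straightforward to script once publications have been coded against the checklist. The sketch below is hypothetical: the item names, studies, and coding are illustrative, not drawn from the ACRI review.

```python
# A minimal sketch of adherence scoring across publications, assuming each
# publication has been coded as the set of checklist items it reports.
CHECKLIST_ITEMS = {
    "sample_size_justification", "cue_type", "craving_scale",
    "scanner_field_strength", "preprocessing_pipeline",
}

publications = {
    "study_A": {"cue_type", "craving_scale", "scanner_field_strength"},
    "study_B": CHECKLIST_ITEMS,  # fully adherent
}

for study, reported in publications.items():
    adherence = len(reported & CHECKLIST_ITEMS) / len(CHECKLIST_ITEMS)
    print(f"{study}: {adherence:.0%} of essential items reported")
```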

Updating: How can the checklist be improved?

Reporting checklists are context-specific and technology-bound [31], and the ever-changing nature of scientific research requires checklists to be living documents. This demands mechanisms to keep checklists updated according to the latest findings in the field and to feedback from stakeholders. Nevertheless, there comes a time when the volume of updates may necessitate reconsidering the initial problem and parameter space. Thus, the checklist development model is cyclical rather than linear, reflecting a continuous and ever-increasing effort toward further improvement (Fig. 2).

The evaluation of the above checklists based on the development cycle discussed in the previous section is summarized in Fig. 3.

Fig. 3. Checklist development process and research process coverage in sample neuroimaging checklists.


The top panel assesses 6 sample checklists using the developmental cycle model. Dark blue means that the checklist fully reported that item in its initial publication, light blue means partially reported, and light yellow means not reported. The bottom panel shows the main steps in a research process and how the various checklists covered those steps in their items. Light yellow indicates no coverage, light blue indirect coverage, and dark blue full coverage. ACRI-FDCR Addiction Cue-Reactivity Initiative fMRI drug cue-reactivity checklist, COBIDAS Committee on Best Practices in Data Analysis and Sharing, ContES Concurrent tES-fMRI Checklist, MRI Magnetic Resonance Imaging, MEEG Magnetoencephalography/Electroencephalography, MRSinMRS Minimum Reporting Standards for in vivo Magnetic Resonance Spectroscopy.

It is worth noting that not all steps of the proposed seven-step development cycle were followed by the checklists discussed above. However, the inclusion of these steps in the proposed cycle is intended to provide a structured framework and best practices for checklist development, rather than to suggest that every checklist must follow each step rigidly. It should also be noted that the development of a reporting checklist is not necessarily hampered by skipping certain steps, as the development process can vary depending on factors such as the specific objectives of the checklist, available resources, and the expertise of the developers. Each step in the development cycle nevertheless provides criteria for increasing the robustness, transparency, inclusiveness, comprehensiveness, and usability of the checklist, as noted in the description of each step.

Limitations and concerns regarding methodological checklists

Despite their potential benefits, it is important to note some limitations of reporting checklists and concerns about their proliferation in neuroimaging research. One concern, as previously noted, is that enforcing adherence to checklists might stifle innovation in neuroimaging research [33], particularly if the checklists are out of date. Given the rapid pace of development in the field, editors and reviewers need to exercise nuanced judgment when handling research that does not clearly conform to the scope of extant methodological checklists. A related challenge is that certain checklist recommendations and requirements may be too inflexible: for example, checklists might require sample size justification through power analysis, even though appropriate power can also be achieved through deep phenotyping [53] or the use of multivariate methods [54]. This highlights the necessity of nuance and flexibility when checklists are developed, of updating them in response to emerging concerns, and of judicious enforcement. A more fundamental concern is that checklists are often disseminated before evidence of their usefulness has been collected. As noted, such evidence is difficult to gather and requires longitudinal and interventional investigations, since checklist adherence on its own can only indicate reach and usability.

As the list of available methodological checklists grows, there are also concerns about “checklist fatigue”: researchers may be overwhelmed by the number of available checklists, and it may be difficult to select appropriate guidelines in the absence of clear scopes and of collaboration between checklist development teams and adherence-enforcing entities such as publishers. Enforcement of checklist adherence could also become taxing for publishers and add to reviewing overhead, necessitating creativity in checking adherence using a mix of automated tools and nuanced human intervention (for example, by reviewers).

A final point is that checklists are only one tool in improving the state of scientific research, and are indeed likely to have a fairly modest impact on their own. Without regular updating, nuanced enforcement, ongoing revision and impact assessment (among other endeavors), methodological checklists would merely add to the box-ticking exercises researchers have to follow. Even at their best, however, and as noted in the introduction, many factors hamper the generalizability and reliability of neuroimaging research studies. It is important to acknowledge the limitations of checklists in enhancing research integrity and emphasize the importance of complementary measures, such as robust peer review, data sharing, and replication efforts. Checklists can mitigate questionable research practices (QRPs) in part by promoting such measures, for example, preregistration and the availability of raw and processed data and analysis code [55]. But checklists alone cannot bring about substantial shifts in research culture and practices.

The future of reporting checklists in neuroimaging

As discussed above, reporting checklists are increasingly developed and adopted for neuroimaging research. The following sections are dedicated to discussing the potential of these checklists to facilitate neuroimaging research in the future, important challenges, and the promise of new technologies.

Reporting checklists to support neuroimaging data repositories

With the increased availability of neuroimaging technologies, the proliferation of hardware and informatics infrastructure, and years of open science advocacy, unprecedented volumes of neuroimaging data are available to researchers [56, 57]. “Big data” can be used to detect true effects with appropriate statistical power and ensure that scientific findings are generalizable, overcoming challenges inherent in neuroimaging studies with small sample sizes [29, 58]. Some such data comes from large studies overseen by centralized committees which adhere to harmonized imaging protocols, such as the UK Biobank [59] and the Adolescent Brain Cognitive Development (ABCD) Study [60]; but a substantial portion of available neuroimaging data comes from smaller studies shared on repositories such as OpenNeuro [61]. Currently, the OpenNeuro repository alone hosts data from 41,599 participants across 1,025 datasets.

The great strength of aggregating data across studies may, however, also create vulnerabilities to particular types of pitfalls. Aggregating large amounts of data can yield impressive group-level statistics while obscuring possible errors in subsets of the data and important differences between pooled datasets. Since its founding, OpenNeuro has sought to ensure that data shared on its platform are “FAIR”: findable, accessible, interoperable, and reusable [62]. This has largely been attempted through the development and promotion of the Brain Imaging Data Structure (BIDS), with all submitted data required to conform to this common standard [25]. Despite the rapid uptake of BIDS across the neuroimaging community, however, there are gaps that could be addressed through the use of checklists to ensure there is sufficient context, whether in the form of metadata or additional documentation, for data shared on OpenNeuro to be re-used and aggregated. Automatic BIDS validators do not currently enforce the reporting of all participant or task characteristics that are required to use data from a particular experiment. For example, in clinical research, even “caseness” is defined in various ways across studies, ranging from a formal diagnosis based on a clinical interview to exceeding a criterion score on a substance use questionnaire such as the AUDIT for assessing problematic alcohol use [63]. The full list of important variables, and how they should be reported, can be determined by checklist developers who are experts in the specific neuroimaging field.
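As an illustration, a field-specific checklist could be paired with a small script that flags missing participant-level metadata in a BIDS dataset. In the sketch below, the required column names and the dataset path are hypothetical examples of what such a checklist might mandate; they are not part of the BIDS specification or the OpenNeuro validator.

```python
# A minimal sketch of a checklist-driven metadata completeness check,
# inspecting the participants.tsv file of a BIDS dataset with pandas.
from pathlib import Path
import pandas as pd

# Hypothetical checklist-mandated phenotype fields (not BIDS-required).
REQUIRED_COLUMNS = {"age", "sex", "diagnosis_instrument", "audit_score"}

def missing_participant_fields(bids_root: str) -> set:
    """Return checklist-required columns absent from participants.tsv."""
    tsv = Path(bids_root) / "participants.tsv"
    participants = pd.read_csv(tsv, sep="\t")
    return REQUIRED_COLUMNS - set(participants.columns)

missing = missing_participant_fields("/data/ds_example")  # hypothetical path
if missing:
    print("Missing checklist-required fields:", sorted(missing))
```

A check of this kind could complement, rather than replace, the standard BIDS validation step, enforcing the field-specific context that generic validators cannot know about.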

Reporting checklists to support neuroimaging consortia

The most notable example of a neuroimaging consortium for systematic, large-scale aggregation of data from previous research is the ENIGMA consortium, founded in 2009 to provide the increased statistical power required for brain-gene association analyses through data pooling. It has conducted some of the largest neuroimaging-genetic analyses to date by gathering datasets for secondary analyses, including the discovery of genetic correlates of human cortical structure in a sample of over 50,000 participants [64, 65]. This efficient and cost-effective strategy has spawned over 30 disease- and syndrome-specific subgroups, which have adapted this approach to study the brain and genetic correlates of various clinical conditions, such as substance use disorders [66]. The large sample size provided by data pooling enables researchers to identify reproducible (e.g., via split-half confirmation) brain correlates of substance dependence, some common across substances and some substance-specific [67]. It can also provide better effect size estimates through meta-analyses [68].

ENIGMA has had to tackle the issue of heterogeneity and errors in subsets of the data. Taking ENIGMA-Addiction as an example, the consortium has pooled data from over a hundred studies contributed by 103 principal investigators at 71 institutions in 16 countries on 6 continents. Successful integration of these data requires careful attention to the details of each study. The neuroimaging data vary in the specifics of their acquisition, which occurs on different scanners from different manufacturers employing different image acquisition parameters. Functional data, both task and resting-state, show even more variety, with inter-study differences in task design including stimulus and response modalities and time-series duration. A newer ENIGMA group, focusing on neuromodulation, is systematically evaluating TMS protocols, examining how the frequency, duration, and placement of TMS coils can affect results. Similarly, ENIGMA's EEG working group [69] has been evaluating how choices of reference electrode and montage affect individual differences in EEG parameters. Lastly, to better streamline future data collection, ENIGMA's Brain Injury working group has created a set of guidelines termed ENIGMA's ‘simple seven’ [70] to improve the reproducibility of resting-state fMRI in traumatic brain injury research. Careful attention to cross-study differences is essential when processing each dataset for inclusion in large, pooled analyses.

However, even more complex than the heterogeneity of the neuroimaging data is the variety in the non-imaging phenotyping of participants. Study samples can vary in the depth of assessment and the definition of diagnostic groups, psychiatric co-morbidities, family history, sociodemographics, and so on. Assessment of other relevant psychological measures, such as impulsivity and risk-taking, can be extensive in some studies, meager in others, or left unreported entirely. Similarly, details of inclusion and exclusion criteria and sample characteristics can make ostensibly similar studies quite different. These and other factors can threaten the validity of pooled analyses. Conversely, adequate recording and annotation can enable in-depth analyses of the importance of these factors across studies, enhancing research with pooled data. Specific reporting checklists made available alongside databases can serve an important role in facilitating the careful and standardized reporting of these details.

Checklists providing standardized details on published studies would greatly enhance the data-pooling efforts of ENIGMA. Indeed, ongoing initiatives within ENIGMA towards harmonized analyses of structural and functional neuroimaging data make clear that many research details are regularly omitted from publications. Further, the fallibility of memory and the turnover of staff and trainees can impede secondary analyses. A standardized approach to recording and reporting these details (e.g., completion of a methods-related checklist at the time of submission to a journal) is a simple but valuable step to facilitate subsequent harmonized analyses.

It is important to note here that consortium datasets are often datasets of opportunity, where most of the data are retrospectively collated from studies that have already concluded. These studies would not necessarily have adhered to consortium standards when conducted and may not have consulted any neuroimaging checklists during study execution and reporting. In such cases, some key data may inevitably have been lost, and the use of checklists should not prevent the inclusion of such studies if researchers can still provide enough information for their data to be of potential use in mega-analyses. Researchers can still consult checklists when submitting data from completed studies to consortia, transparently reporting the details they can provide and clarifying which information required by the checklist is unavailable.

Promoting checklist adherence

So far, dozens of checklists have been developed to cover different aspects of neuroimaging studies. To measure the status of the field in terms of reporting the items deemed important by experts, some checklist developers assess the percentage of their items reported in the literature. The ACRI-FDCR developers evaluated the field in this way after finalizing the checklist and before publishing it, as depicted in Fig. 4: more than 40% of peer-reviewed publications in the field reported less than 70% of the essential items in the ACRI-FDCR checklist [15]. This evidence further supports the need for such checklists to enrich future publications with the items that increase transparency and reproducibility.

Fig. 4. Reporting status of the Addiction Cue-Reactivity Initiative (ACRI) fMRI Drug Cue Reactivity (FDCR) checklist.


This figure provides an overview of the reporting status of studies assessed with the ACRI-FDCR checklist, based on an analysis of 108 sample FDCR articles published before the checklist. Panel (a) displays the percentage of articles that reported each specific checklist item; each bar represents a checklist item, showing how frequently it was included in the published studies. Panel (b) summarizes the overall reporting status of the articles as a whole, aggregating the data to give a broader view of how well the articles adhere to the checklist criteria.

One study reported that enforcing adherence to three general reporting guidelines (STROBE for observational studies, CONSORT for randomized clinical trials, and PRISMA for systematic reviews), by asking authors to supplement their manuscripts with the relevant checklists, increased adherence to those checklists [71]. This was further emphasized by a systematic review of more than 16,000 trials, which showed that journal endorsement of the CONSORT checklist increased adherence to it [72]. Endorsement not only increases the number of reported items; reporting more checklist items has also been shown to increase the number of citations the resulting articles receive [73]. Although these findings need further supporting evidence, they partially support the checklist endorsement policies of high-impact journals, some of which consider adherence to certain general reporting checklists mandatory [74]. This growing interest among high-impact journals in endorsing reporting checklists to improve the reproducibility and transparency of their publications is also evident in the field of neuroimaging. For instance, the journal Molecular Psychiatry has added a note on “reporting standards for Magnetic Resonance Spectroscopy” to its submission process and strongly encourages researchers to use MRSinMRS (https://www.nature.com/mp/authors-and-referees/preparation-of-articles). In summary, journals can play a pivotal role in encouraging or enforcing checklist adherence. Note that we do not differentiate between general-purpose journals and those with a more technical audience: checklists specify the minimum level of detail that should be reported by all studies in a specific field or subfield of neuroimaging, regardless of where they are published, though authors can opt to report details deemed less relevant for a specific readership in supplementary materials. On the other hand, journals should be cautious not to overwhelm researchers with checklists that do not necessarily improve the reporting quality of published papers. Furthermore, while direct empirical evidence linking checklist usage to improved replication rates or increased utilization in meta-analyses is desirable, comprehensive data on these outcomes may be difficult to obtain because of confounding factors and methodological challenges. In this regard, indirect indicators of checklist utility, such as increasing adoption rates and ongoing discussions within the research community about checklist implementation and refinement, can give a sense of the long-term impact of checklist usage on research quality and reproducibility. However, checklist adherence remains an indirect and imperfect measure; more robust evaluations through longitudinal studies or comparative analyses could give deeper insight into the effectiveness of checklists and identify areas for further improvement.

Artificial Intelligence (AI) and Large Language Models (LLMs) for checklist development and adherence

LLMs can significantly enhance transparency and reproducibility in the development and application of methodological checklists for neuroimaging publications, especially given recent efforts to train LLMs specifically on scientific literature [75]. These technologies can help ensure adherence to reporting guidelines and methodological checklists, both during manuscript preparation and peer review, thereby improving the quality and reliability of scientific findings [76]. AI and LLMs, such as ChatGPT, can also play an important role in the development and enforcement of methodological checklists. These technologies can analyze vast quantities of existing research to identify gaps in reporting, helping to justify the need for specific reporting guidelines. By capturing the complexities of research methodologies, LLMs can aid in creating checklists that are both comprehensive and flexible, ensuring that they capture essential information without being overly restrictive. AI can support the development of these checklists through an evidence-based approach, analyzing primary research and meta-analyses to identify the key items that should be included.
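As a concrete illustration of adherence checking, the minimal sketch below passes a methods excerpt and a few CONSORT items to an LLM through the OpenAI Python client. The prompt wording, model choice, and item phrasing are illustrative assumptions, not a validated tool.

```python
# Minimal sketch of LLM-based checklist adherence checking.
# Prompt wording, model name, and checklist items are illustrative assumptions.
from openai import OpenAI  # assumes the `openai` Python package (v1.x)

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env variable

CHECKLIST_ITEMS = [
    "1a: Identification as a randomized trial in the title",
    "7a: How sample size was determined",
    "11a: Who was blinded after assignment to interventions",
]

def check_adherence(methods_text: str) -> str:
    """Ask the model whether each checklist item is reported in the text."""
    prompt = (
        "For each checklist item below, state whether the manuscript excerpt "
        "reports it (Yes/No/Partially) and quote the supporting sentence.\n\n"
        "Checklist items:\n" + "\n".join(CHECKLIST_ITEMS) +
        "\n\nManuscript excerpt:\n" + methods_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # keep output stable for auditing
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(check_adherence("Sixty participants were randomized to active or "
                          "sham tDCS using block randomization..."))
```

Any such output would, of course, need human verification; as with the Box 1 example, the value lies in flagging likely omissions for the authors or reviewers to confirm.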

Moreover, AI can facilitate the consensus process and the pre-registration of checklist development protocols, ensuring that these guidelines are developed transparently and rigorously. By leveraging AI and LLMs in this manner, the scientific community can significantly enhance the transparency and reproducibility of research findings. Taken together, AI and LLMs offer powerful tools for advancing the development and dissemination of, and adherence to, methodological checklists, thereby promoting a more transparent, replicable, and rigorous scientific process. Note that our focus here is on the use of emerging AI and LLM technologies for developing checklists and checking adherence. Much of the discussion about the role of these technologies in science has focused on their use in generating and editing analysis code and narrative text, which is beyond the scope of the present manuscript. Box 1 shows how GPT-4, an advanced LLM, can be used to improve scientific manuscript sections: an analysis of the methods section of a study using fMRI as an outcome measure, checked against the CONSORT 2010 guidelines for reporting parallel group randomized trials.

Box 1 Example of AI/LLM Enhancement of Methodological Reporting.

Using CONSORT_GPT [160], the adherence of a sample report with fMRI as an outcome measure [161] to the CONSORT 2010 guidelines for reporting parallel group randomized trials [77] was checked. This example underscores how AI and LLMs can assist researchers in developing more detailed, transparent, and reproducible methods sections by ensuring that all essential elements of study design and implementation are thoroughly addressed.

Here's an evaluation of the study methods and reporting based on the CONSORT 2010 guidelines for reporting parallel group randomized trials:

Title and Abstract (Items 1a, 1b)

  • The abstract provides a structured summary including the trial design, methods, results, and conclusions.

  • The title indicates it is a randomized trial.

Introduction (Items 2a, 2b)

  • The scientific background and rationale for the trial are provided.

  • The specific objectives and hypotheses are stated.

Methods

Trial Design (Items 3a, 3b)

  • Described as a randomized, triple-blind, sham-controlled parallel group clinical trial.

  • Important changes to methods after trial commencement are not mentioned.

Participants (Items 4a, 4b)

  • Eligibility criteria for participants are clearly defined.

  • Settings and locations where data were collected are provided.

Interventions (Item 5)

  • The interventions for each group (active and sham tDCS) are well-described, including procedural details.

Outcomes (Items 6a, 6b)

  • The primary and secondary outcome measures are clearly defined.

  • Any changes to trial outcomes after commencement are not mentioned.

Sample Size (Items 7a, 7b)

  • The projected sample size of 60 and power calculations are reported.

  • No interim analyses or stopping rules are mentioned.

Randomization

Sequence Generation (Items 8a, 8b)

  • The method of random sequence generation is described.

  • The type of randomization, with blocked randomization, is reported.

Allocation Concealment (Items 9, 10)

  • The allocation concealment mechanism is not described in detail.

  • Who generated, enrolled, and assigned participants is not clearly reported.

Blinding (Items 11a, 11b)

  • The triple-blinding procedure of participants, investigators, and assessors is well-described.

  • The method for unblinding, if needed, is not mentioned.

Statistical Methods (Items 12a, 12b)

  • Statistical methods for analyzing primary and secondary outcomes are outlined.

  • Methods for additional analyses, such as subgroup analyses, are not described.

Results

Participant Flow (Items 13a, 13b)

  • A participant flow diagram is not provided.

  • Reasons for non-participation and exclusions are briefly mentioned.

Recruitment (Item 14a)

  • Dates defining the periods of recruitment and follow-up are provided.

Baseline Data (Item 15)

  • Relevant baseline demographic and clinical characteristics are summarized for each group.

Numbers Analyzed (Item 16)

  • The number of participants included in each analysis is reported for primary outcomes.

Outcomes and Estimation (Items 17a, 17b)

  • Results for the primary outcome of craving self-reports are provided with effect estimates and confidence intervals.

  • Results for secondary neuroimaging outcomes are also comprehensively reported.

Ancillary Analyses (Item 18)

  • Results of additional exploratory electric field modeling analyses are presented.

Harms (Item 19)

  • Adverse events and side effects recorded for each group are reported.

Discussion

Limitations (Item 20)

  • Trial limitations, including potential biases, are discussed.

Generalizability (Item 21)

  • Generalizability of the findings to other populations is not explicitly discussed.

Interpretation (Item 22)

  • Interpretation of the results in the context of existing evidence is provided.

Other Information

Registration (Item 23)

  • The trial registration number and the name of the registry are reported.

Protocol (Item 24)

  • Access to the full trial protocol is not mentioned.

Funding (Item 25)

  • Sources of funding and other support are stated.

Overall, the paper follows CONSORT guidelines reasonably well in terms of describing the trial design, methods, analysis and results. Some areas that could be improved include providing a participant flow diagram, clearly reporting allocation concealment and blinding mechanisms, discussing generalizability of findings, and mentioning access to the full trial protocol.

Overlapping scopes and collaboration in reporting checklist development

Despite the vast diversity of neuroimaging modalities and research designs, there is substantial overlap in scope, methodological concerns, and good research practices across sub-disciplines. As a result, many existing neuroimaging reporting checklists and guidelines overlap in scope and items, and most neuroimaging research projects likely need more than a single checklist to ensure methodological rigor and transparency. This is especially so given that reporting guidelines and checklists with broader scope also apply to neuroimaging research: for example, a clinical trial using simultaneous transcranial electrical stimulation and fMRI with a drug cue reactivity task could use the ACRI-FDCR checklist [15], the ContES checklist [16], the COBIDAS MRI checklist [40], which applies to neuroimaging research more broadly, and the still more widely used CONSORT checklist [77] for clinical trials. To maximize the value of checklists, avoid duplication, and prevent user confusion, checklist development teams can collaborate with the developers of other checklists to properly specify the scope of their work. Where the scopes of several checklists overlap, care should be taken either to harmonize items across checklists or to refer readers to a single set of recommendations when possible. Such collaboration could also facilitate resource sharing between teams and allow them to support the uptake of each other’s checklists where appropriate.

As an example, the ACRI-FDCR checklist only points to some general participant characteristics and encourages researchers to use the PhenX Toolkit, which has already designated a core assessment for mental health and addiction [78]. The ACRI-FDCR checklist also includes only a broad item suggesting that fMRI analysis details be reported, and the checklist manuscript explicitly suggests that users consult the COBIDAS checklist for reporting statistical methods and results [79]. In the case of widely used checklists such as CONSORT, new methodological checklists can be developed as “extensions” of the existing checklist. A relevant example is the recent development of the SPIRIT-iNeurostim and CONSORT-iNeurostim extensions (to the SPIRIT and CONSORT guidelines) for clinical trial protocols and reports of neurostimulation devices [80]. The most ambitious solution would be to develop “a network of consortia”, i.e., infrastructure through which the development teams of neuroimaging guidelines and checklists can harmonize their work. A notable example is the EQUATOR network, an international network that maintains a central library of available reporting checklists, provides support for the development of new checklists, and aims to educate stakeholders about different checklists and guidelines [81].

Checklists and global neuroimaging education

Systematically developed reporting checklists and guidelines are distillations of evidence-based expert guidance on good research and reporting practices in neuroimaging, and as such could be helpful resources for promoting these practices among neuroimaging researchers and students. The EQUATOR network’s experience is again instructive: it has developed workshops to foster good reporting habits among young researchers and students, among other audiences, and to increase awareness of the network’s resources [81]. Courses on research integrity and good research practices often already incorporate methodological guidelines and checklists [82], and research institutions could endorse such checklists or use them in assessing trainees to teach adherence to best-practice guidelines [83]. Checklists could also be used in developing and evaluating courses on research practices, for example by assessing trainees’ knowledge of checklist items before and after instruction or by using a checklist to assess student projects completed during the course. A recent systematic review noted that such standardized assessments are lacking in much of the pedagogical research literature on reproducible research practices [84].

Neuroimaging checklists are often written in deliberately plain, clear, and concise language, and could serve to promote good, consistent research and reporting practices globally. Increased awareness of these standards, and access to a shared set of instructions and terms, would also serve diversity and inclusivity: it would help develop common frames of reference that facilitate the participation of researchers from under-represented backgrounds and countries in the production of robust research and the development of best practices. Reporting checklists could easily be introduced to a global audience of researchers in online seminars, workshops, and courses developed for this purpose, or in the context of more general neuroimaging and neuroscience courses. The Neuromatch Academy, for example, runs multi-week neuroscience summer courses for thousands of participants, who develop student research projects using neuroimaging data [85]. These courses already include guidance on best methodological practices and could naturally incorporate reporting checklists.

Funding, reinforcement, and sustainability

As the repertoire of available reporting checklists in neuroimaging expands, mechanisms are needed to ensure that their ongoing development can be sustainably funded and their use reinforced. Thus far, every neuroimaging methodological checklist we are aware of has been developed through institutional support and research grants to the individual researchers involved, or sometimes as an entirely unfunded effort, which poses problems for the sustainable funding of checklist development and enforcement. For instance, the experience of well-established reporting checklists suggests that a minimum of CAD $120,000 is needed for the costs of consensus meetings alone [86]. Checklists are more akin to public goods than to the typical outcomes of research projects, and justifying their impact to procure funding from traditional academic sources can be challenging. This can even affect whether researchers choose to spend their time developing checklists for the global good, with no obvious funding, or on individual research that advances their own careers. Further, most methodological checklists are developed by teams representing many, sometimes dozens of, institutions spread across countries, and need to be continually updated beyond the life cycle of individual research grants. As checklist development communities widen and involve different stakeholders, it can be difficult to decide how, and from which sources, such long-term funding can be procured. These challenges are not unique to checklist development and have, for example, been discussed extensively in the context of funding for other types of digital [87] and physical [88] research infrastructure, and by the research software engineering community [89]. Though there are no reviews of the landscape of, and gaps in, available funding for reporting checklists and guidelines, sustainable funding will likely not come from a single type of funder, particularly as these checklists disseminate beyond purely academic research and are more widely adopted by publishers. One funding mechanism aiming to fill this gap is the National Institutes of Health (NIH) BRAIN Initiative’s “Standards to Define Experiments Related to the BRAIN Initiative” grant program [90]. This program supports the development of data standards to be adhered to in new experiments in BRAIN Initiative priority areas, such as understanding circuit function in the nervous system, invasive devices for recording from and modulating the human nervous system, non-invasive neuromodulation, and next-generation imaging.

Conclusions

The development of neuroimaging reporting checklists is a crucial step towards increasing transparency, reproducibility, and reliability, core elements of scientific rigor. This cannot be achieved without a systematic checklist development process that itself follows the core values of transparency, reliability, diversity, and openness to innovation. The experience with available neuroimaging checklists highlights their practical utility and potential impact in improving reporting practices.

Neuroimaging checklists can set standards for reporting practices and data sharing, which matters given the growing number of data repositories and consortia. Emerging technologies such as large language models can, in the near future, play a role in checking the adherence of scientific manuscripts to checklist recommendations and identifying reporting aspects that could be improved, enhancing the practicality and efficiency of implementing neuroimaging reporting checklists. Global collaboration in neuroimaging checklist development, involving a diversity of researchers and stakeholders, harmonizing the aims and scopes of various checklists, promoting the use of neuroimaging checklists by stakeholders, and adopting them in educational settings can all foster a culture of scientific integrity. These steps cannot be taken without the support of funding agencies to ensure the development and continuity of these initiatives.

Taken together, neuroimaging reporting checklists can improve the quality and impact of research if they follow the core values and development process discussed in this article, paving the way for a more transparent and rigorous scientific future.

Author contributions

All authors contributed to the conception of the manuscript first through two panels held at the American College of Neuropsychopharmacology Annual Meeting (ACNP2023) and then through online rounds of discussions. HE, MZB, AS, AV, DMC, HG, TEN, CRP, PMT, and MPP contributed to drafting the first version of the manuscript. HE supervised the process of receiving and implementing the comments. All authors contributed to the manuscript revision, read, and approved the submitted version.

Funding

HE is supported by funds from the Laureate Institute for Brain Research, the Medical Discovery Team on Addiction, and the Brain and Behavior Foundation (NARSAD Young Investigator Award 27305). JK is supported by the Department of Veterans Affairs (National Center for PTSD), the NIAAA Center for the Translational Neuroscience of Alcohol (2P50AA012870-23), and a National Center for Advancing Translational Sciences Clinical and Translational Science Award (2UL1TR001863-06). He has stock or options in, or has received consultation fees from, the following companies: Aptinyx, Biogen, Biohaven Pharmaceuticals, Bionomics, Boehringer Ingelheim, Cartego Therapeutics, Damona Pharmaceuticals, Epiodyne, Epivario, Freedom Biosciences, Janssen Research and Development, Jazz Pharmaceuticals, Neumora Therapeutics, Otsuka America, Response Therapeutics, Rest Therapeutics, Spring Care, Sumitomo America, Terran Biosciences, and Tetricus, Inc. He is an inventor on patents licensed by Yale University to Biohaven Pharmaceuticals, Freedom Biosciences, Janssen Pharmaceuticals, and Novartis Pharmaceuticals. DO has received R01MH114982 funding and an honorarium from Boehringer-Ingelheim in the past 12 months. CRP is supported by the Novo Nordisk Fonden (NNF20OC0063277). MPP is partly supported by The William K. Warren Foundation, the National Institute of General Medical Sciences (center grant 2 P20 GM121312), and the National Institute on Drug Abuse (U01DA050989). He advises Spring Care, Inc., receives royalties from an article on methamphetamine in UpToDate, and has a compensated consulting agreement with Boehringer Ingelheim International GmbH. Other authors declare no conflicts of interest.

Competing interests

TPG is a co-editor of Neuropsychopharmacology. JK serves as the editor of Biological Psychiatry. Other authors declare no conflicts of interest.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

3/19/2025

A Correction to this paper has been published: 10.1038/s41386-025-02087-2

References

  • 1.Carp J. The secret lives of experiments: methods reporting in the fMRI literature. Neuroimage. 2012;63:289–300. [DOI] [PubMed] [Google Scholar]
  • 2.Gorgolewski KJ, Poldrack RA. A practical guide for improving transparency and reproducibility in neuroimaging research. PLOS Biol. 2016;14:e1002506. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Poldrack RA, Baker CI, Durnez J, Gorgolewski KJ, Matthews PM, Munafò MR, et al. Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci. 2017;18:115–26. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Fusar-Poli P, Radua J, Frascarelli M, Mechelli A, Borgwardt S, Di Fabio F, et al. Evidence of reporting biases in voxel-based morphometry (VBM) studies of psychiatric and neurological disorders: reporting biases in VBM Studies of Psychiatric and Neurological Disorders. Hum Brain Mapp. 2014;35:3052–65. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.David SP, Naudet F, Laude J, Radua J, Fusar-Poli P, Chu I, et al. Potential reporting bias in neuroimaging studies of sex differences. Sci Rep. 2018;8:6082. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Etkin A. A reckoning and research agenda for neuroimaging in psychiatry. AJP. 2019;176:507–11. [DOI] [PubMed] [Google Scholar]
  • 7.Robbins KA, Touryan J, Mullen T, Kothe C, Bigdely-Shamlo N. How sensitive are EEG results to preprocessing methods: a benchmarking study. IEEE Trans Neural Syst Rehabil Eng. 2020;28:1081–90. [DOI] [PubMed] [Google Scholar]
  • 8.Gentili C, Cecchetti L, Handjaras G, Lettieri G, Cristea IA. The case for preregistering all region of interest (ROI) analyses in neuroimaging research. Eur J Neurosci. 2021;53:357–61. [DOI] [PubMed] [Google Scholar]
  • 9.Pernet C, Garrido MI, Gramfort A, Maurits N, Michel CM, Pang E, et al. Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research. Nat Neurosci. 2020;23:1473–83. [DOI] [PubMed] [Google Scholar]
  • 10.Carp J. Better living through transparency: improving the reproducibility of fMRI results through comprehensive methods reporting. Cogn Affect Behav Neurosci. 2013;13:660–6. [DOI] [PubMed] [Google Scholar]
  • 11.Klapwijk ET, van den Bos W, Tamnes CK, Raschle NM, Mills KL. Opportunities for increased reproducibility and replicability of developmental neuroimaging. Dev Cogn Neurosci. 2021;47:100902. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Hupalo S, Jordan CJ, Bowen T, Mahar J, Yepez E, Kunath L, et al. NPP’s approach toward improving rigor and transparency in clinical trials research. Neuropsychopharmacology. 2023;48:429–31. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Nichols TE, Das S, Eickhoff SB, Evans AC, Glatard T, Hanke M, et al. Best practices in data analysis and sharing in neuroimaging using MRI. Nat Neurosci. 2017;20:299–303. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig L, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. Clin Chem. 2015;61:1446–52. [DOI] [PubMed] [Google Scholar]
  • 15.Ekhtiari H, Zare-Bidoky M, Sangchooli A, Janes AC, Kaufman MJ, Oliver JA, et al. A methodological checklist for fMRI drug cue reactivity studies: development and expert consensus. Nat Protoc. 2022;17:567–95. [DOI] [PMC free article] [PubMed]
  • 16.Ekhtiari H, Ghobadi-Azbari P, Thielscher A, Antal A, Li LM, Shereen AD, et al. A checklist for assessing the methodological quality of concurrent tES-fMRI studies (ContES checklist): a consensus study and statement. Nat Protoc. 2022;17:596–617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Kousta S, Pastrana E, Swaminathan S. Three approaches to support reproducible research. Sci Editor. 2020;42:77–82. [Google Scholar]
  • 18.The NPQIP Collaborative Group, Macleod M, Sena E, Howells D, et al. Did a change in Nature journals’ editorial policy for life sciences research improve reporting? BMJ Open Sci. 2019;3. Available from: http://access.portico.org/stable?au=phzq8gmxdp1. [DOI] [PMC free article] [PubMed]
  • 19.Feng X, Park DS, Walker C, Peterson AT, Merow C, Papeş M. A checklist for maximizing reproducibility of ecological niche models. Nat Ecol Evol. 2019;3:1382–95. [DOI] [PubMed] [Google Scholar]
  • 20.de Jong Y, van der Willik EM, Milders J, Voorend CGN, Morton RL, Dekker FW, et al. A meta-review demonstrates improved reporting quality of qualitative reviews following the publication of COREQ- and ENTREQ-checklists, regardless of modest uptake. BMC Med Res Methodol. 2021;21:184. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Uddin MN, Figley TD, Kornelsen J, Mazerolle EL, Helmick CA, O’Grady CB, et al. The comorbidity and cognition in multiple sclerosis (CCOMS) neuroimaging protocol: Study rationale, MRI acquisition, and minimal image processing pipelines. Front Neuroimaging [Internet]. 2022 Aug [cited 2024 Mar 10];1. Available from: https://www.frontiersin.org/articles/10.3389/fnimg.2022.970385. [DOI] [PMC free article] [PubMed]
  • 22.Appelbaum M, Cooper H, Kline RB, Mayo-Wilson E, Nezu AM, Rao SM. Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. Am Psychol. 2018;73:3–25. [DOI] [PubMed] [Google Scholar]
  • 23.Köhler T, González-Morales MG, Banks GC, O’Boyle EH, Allen JA, Sinha R, et al. Supporting robust, rigorous, and reliable reviewing as the cornerstone of our profession: introducing a competency framework for peer review. Ind Organ Psychol. 2020;13:1–27. [Google Scholar]
  • 24.Higgins JPT, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Gorgolewski KJ, Auer T, Calhoun VD, Craddock RC, Das S, Duff EP, et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci Data. 2016;3:160044. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Rid A, Schmidt H. The 2008 Declaration of Helsinki - first among equals in research ethics? J Law Med Ethics. 2010;38:143–8. [DOI] [PubMed] [Google Scholar]
  • 27.Buch ER, Santarnecchi E, Antal A, Born J, Celnik PA, Classen J, et al. Effects of tDCS on motor learning and memory formation: a consensus and critical position paper. Clin Neurophysiol. 2017;128:589–603. [DOI] [PubMed] [Google Scholar]
  • 28.Choi I, Kreis R. Advanced methodology for in vivo magnetic resonance spectroscopy. NMR Biomed. 2021;34:e4504. [DOI] [PubMed] [Google Scholar]
  • 29.Marek S, Tervo-Clemmens B, Calabro FJ, Montez DF, Kay BP, Hatoum AS, et al. Reproducible brain-wide association studies require thousands of individuals. Nature. 2022;603:654–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Elyounssi S, Kunitoki K, Clauss JA, Laurent E, Kane K, Hughes DE, et al. Uncovering and mitigating bias in large, automated MRI analyses of brain development. bioRxiv. 2023 Jan;2023.02.28.530498.
  • 31.Allen K, Geimer JL, Popp E. Context matters: developing peer reviewers to advance science and practice. Ind Organ Psychol. 2020;13:57–60. [Google Scholar]
  • 32.Nieminen P. Ten points for high-quality statistical reporting and data presentation. Appl Sci. 2020;10:3885. [Google Scholar]
  • 33.Eby LT, Shockley KM, Bauer TN, Edwards B, Homan AC, Johnson R, et al. Methodological checklists for improving research quality and reporting consistency. Ind Organ Psychol. 2020;13:76–83. [Google Scholar]
  • 34.Garcia-Costa D, Squazzoni F, Mehmani B, Grimaldo F. Measuring the developmental function of peer review: a multi-dimensional, cross-disciplinary analysis of peer review reports from 740 academic journals. PeerJ. 2022;10:e13539. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.ALBA Network. ALBA Declaration on Equity and Inclusion [Internet]. 2024 [cited 2024 Mar 22]. Available from: https://www.alba.network/declaration.
  • 36.Tzovara A, Amarreh I, Borghesani V, Chakravarty MM, DuPre E, Grefkes C, et al. Embracing diversity and inclusivity in an academic setting: Insights from the Organization for Human Brain Mapping. NeuroImage. 2021;229:117742. [DOI] [PubMed] [Google Scholar]
  • 37.Silver JK. Is a lack of diversity among clinical practice guideline authors contributing to health inequalities for patients? BMJ. 2023;381:p1035. [DOI] [PubMed] [Google Scholar]
  • 38.Synnot A, Hill S, Jauré A, Merner B, Hill K, Bates P, et al. Broadening the diversity of consumers engaged in guidelines: a scoping review. BMJ Open. 2022;12:e058326. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Poldrack RA, Fletcher PC, Henson RN, Worsley KJ, Brett M, Nichols TE. Guidelines for reporting an fMRI study. Neuroimage. 2008;40:409–14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Nichols TE, Das S, Eickhoff SB, Evans AC, Glatard T, Hanke M, et al. Best practices in data analysis and sharing in neuroimaging using MRI [Internet]. Neuroscience; 2016 May [cited 2024 Mar 18]. Available from: http://biorxiv.org/lookup/doi/10.1101/054262. [DOI] [PMC free article] [PubMed]
  • 41.Pernet CR, Garrido M, Gramfort A, Maurits N, Michel C, Pang E, et al. Best practices in data analysis and sharing in neuroimaging using MEEG [Internet]. Open Science Framework; 2018 Aug [cited 2024 Mar 21]. Available from: https://osf.io/a8dhx.
  • 42.Uddin LQ, Betzel RF, Cohen JR, Damoiseaux JS, De Brigard F, Eickhoff S, et al. Controversies and progress on standardization of large-scale brain network nomenclature [Internet]. Open Science Framework; 2022 Mar [cited 2024 Mar 21]. Available from: https://osf.io/25za6. [DOI] [PMC free article] [PubMed]
  • 43.Voets N, et al. COBIDAS Clinical fMRI for language mapping [Internet]. 2023 [cited 2024 Mar 21]. Available from: https://cobidasclinicalfmriforlanguagemapping.wordpress.com/.
  • 44.Oz G, Alger JR, Barker PB, Bartha R, Bizzi A, Boesch C, et al. Clinical proton MR spectroscopy in central nervous system disorders. Radiology. 2014;270:658–79. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Wilson M, Andronesi O, Barker PB, Bartha R, Bizzi A, Bolan PJ, et al. Methodological consensus on clinical proton MRS of the brain: review and recommendations. Magn Reson Med. 2019;82:527–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Peek AL, Rebbeck T, Puts NAJ, Watson J, Aguila MER, Leaver AM. Brain GABA and glutamate levels across pain conditions: a systematic literature review and meta-analysis of 1H-MRS studies using the MRS-Q quality assessment tool. NeuroImage. 2020;210:116532. [DOI] [PubMed] [Google Scholar]
  • 47.Öngür D. Making progress with magnetic resonance spectroscopy. JAMA Psychiatry. 2013;70:1265. [DOI] [PubMed] [Google Scholar]
  • 48.Lin A, Andronesi O, Bogner W, Choi I, Coello E, Cudalbu C, et al. Minimum Reporting Standards for in vivo Magnetic Resonance Spectroscopy (MRSinMRS): experts’ consensus recommendations. NMR Biomed. 2021;34:e4484. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Addiction Cue-Reactivity Initiative (ACRI) Network. Parameter Space and Potential for Biomarker Development in 25 Years of fMRI Drug Cue Reactivity: A Systematic Review. JAMA Psychiatry [Internet]. 2024 Feb [cited 2024 Feb 13]; Available from: 10.1001/jamapsychiatry.2023.5483. [DOI] [PMC free article] [PubMed]
  • 50.Knudsen GM, Ganz M, Appelhoff S, Boellaard R, Bormans G, Carson RE, et al. Guidelines for the content and format of PET brain data in publications and archives: a consensus paper. J Cereb Blood Flow Metab. 2020;40:1576–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7:e1000217. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Waggoner J, Carline JD, Durning SJ. Is there a consensus on consensus methodology? Descriptions and recommendations for future consensus research. Acad Med. 2016;91:663–8. [DOI] [PubMed] [Google Scholar]
  • 53.Gratton C, Nelson SM, Gordon EM. Brain-behavior correlations: two paths toward reliability. Neuron. 2022;110:1446–9. [DOI] [PubMed] [Google Scholar]
  • 54.Kragel PA, Han X, Kraynak TE, Gianaros PJ, Wager TD. Functional MRI can be highly reliable, but it depends on what you measure: a commentary on Elliott et al. (2020). Psychol Sci. 2021;32:622–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Banks GC, Rogelberg SG, Woznyj HM, Landis RS, Rupp DE. Editorial: evidence on questionable research practices: the good, the bad, and the ugly. J Bus Psychol. 2016;31:323–38. [Google Scholar]
  • 56.Ganz M, Poldrack RA. Data sharing in neuroimaging: experiences from the BIDS project. Nat Rev Neurosci. 2023;24:729–30. [DOI] [PubMed] [Google Scholar]
  • 57.Li X, Guo N, Li Q. Functional neuroimaging in the new era of big data. Genomics Proteom Bioinforma. 2019;17:393–401. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Webb-Vargas Y, Chen S, Fisher A, Mejia A, Xu Y, Crainiceanu C, et al. Big data and neuroimaging. Stat Biosci. 2017;9:543–58. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Sudlow C, Gallacher J, Allen N, Beral V, Burton P, Danesh J, et al. UK biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 2015;12:e1001779. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Casey BJ, Cannonier T, Conley MI, Cohen AO, Barch DM, Heitzeg MM, et al. The Adolescent Brain Cognitive Development (ABCD) study: Imaging acquisition across 21 sites. Dev Cogn Neurosci. 2018;32:43–54. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Markiewicz CJ, Gorgolewski KJ, Feingold F, Blair R, Halchenko YO, Miller E, et al. OpenNeuro: An open resource for sharing of neuroimaging data. bioRxiv. 2021. [DOI] [PMC free article] [PubMed]
  • 62.Wilkinson MD, Dumontier M, Aalbersberg IJJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3:160018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Saunders JB, Aasland OG, Babor TF, De La Fuente JR, Grant M. Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption-II. Addiction. 1993;88:791–804. [DOI] [PubMed] [Google Scholar]
  • 64.Grasby KL, Jahanshad N, Painter JN, Colodro-Conde L, Bralten J, Hibar DP, et al. The genetic architecture of the human cerebral cortex. Science. 2020;367:eaay6690. [Google Scholar]
  • 65.Thompson PM, Jahanshad N, Ching CRK, Salminen LE, Thomopoulos SI, Bright J, et al. ENIGMA and global neuroscience: a decade of large-scale studies of the brain in health and disease across more than 40 countries. Transl Psychiatry. 2020;10:1–28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Mackey S, Kan KJ, Chaarani B, Alia-Klein N, Batalla A, Brooks S, et al. Genetic imaging consortium for addiction medicine: from neuroimaging to genes. In: Ekhtiari H, Paulus MP, editors. Progress in Brain Research, vol. 224 (Neuroscience for Addiction Medicine: From Prevention to Rehabilitation - Methods and Interventions). Elsevier; 2016. p. 203–23. Available from: https://www.sciencedirect.com/science/article/pii/S0079612315001326 [cited 2021 Aug 14]. [DOI] [PMC free article] [PubMed]
  • 67.Mackey S, Allgaier N, Chaarani B, Spechler P, Orr C, Bunn J, et al. Mega-analysis of gray matter volume in substance dependence: general and substance-specific regional effects. AJP. 2019;176:119–28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Cao Z, McCabe M, Callas P, Cupertino RB, Ottino-González J, Murphy A, et al. Recalibrating single-study effect sizes using hierarchical Bayesian models. Front Neuroimaging [Internet]. 2023 Dec [cited 2024 Mar 22];2. Available from: https://www.frontiersin.org/articles/10.3389/fnimg.2023.1138193. [DOI] [PMC free article] [PubMed]
  • 69.Smit DJA, Andreassen OA, Boomsma DI, Burwell SJ, Chorlian DB, de Geus EJC, et al. Large-scale collaboration in ENIGMA-EEG: a perspective on the meta-analytic approach to link neurological and psychiatric liability genes to electrophysiological brain activity. Brain Behav. 2021;11:e02188. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Caeyenberghs K, Imms P, Irimia A, Monti MM, Esopenko C, de Souza NL, et al. ENIGMA’s simple seven: Recommendations to enhance the reproducibility of resting-state fMRI in traumatic brain injury. NeuroImage: Clin. 2024;42:103585. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Agha RA, Fowler AJ, Limb C, Whitehurst K, Coe R, Sagoo H, et al. Impact of the mandatory implementation of reporting guidelines on reporting quality in a surgical journal: A before and after study. Int J Surg. 2016;30:169–72. [DOI] [PubMed] [Google Scholar]
  • 72.Turner L, Shamseer L, Altman DG, Weeks L, Peters J, Kober T, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012;11:MR000030. Available from: 10.1002/14651858.MR000030.pub2. [DOI] [PMC free article] [PubMed]
  • 73.Vilaró M, Cortés J, Selva-O’Callaghan A, Urrutia A, Ribera JM, Cardellach F, et al. Adherence to reporting guidelines increases the number of citations: the argument for including a methodologist in the editorial process and peer-review. BMC Med Res Methodol. 2019;19:112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Shamseer L, Hopewell S, Altman DG, Moher D, Schulz KF. Update on the endorsement of CONSORT by high impact factor journals: a survey of journal “Instructions to Authors” in 2014. Trials. 2016;17:301. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Taylor R, Kardas M, Cucurull G, Scialom T, Hartshorn A, Saravia E, et al. Galactica: A Large Language Model for Science [Internet]. arXiv; 2022 [cited 2024 Mar 22]. Available from: http://arxiv.org/abs/2211.09085.
  • 76.Liu R, Shah NB ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing [Internet]. arXiv; 2023 [cited 2024 Mar 22]. Available from: http://arxiv.org/abs/2306.00622.
  • 77.Schulz KF, Altman DG, Moher D. the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010;8:18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Hamilton CM, Strader LC, Pratt JG, Maiese D, Hendershot T, Kwok RK, et al. The PhenX Toolkit: get the most from your measures. Am J Epidemiol. 2011;174:253–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Ekhtiari H, Zare-Bidoky M, Sangchooli A, Janes AC, Kaufman MJ, Oliver JA, et al. A methodological checklist for fMRI drug cue reactivity studies: development and expert consensus. Nat Protoc. 2022;17:567–95. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Duarte RV, Bresnahan R, Copley S, Eldabe S, Thomson S, North RB, et al. Reporting guidelines for clinical trial protocols and reports of implantable neurostimulation devices: protocol for the SPIRIT-iNeurostim and CONSORT-iNeurostim extensions. Neuromodulation Technol Neural Interface. 2022;25:1045–9. [DOI] [PubMed] [Google Scholar]
  • 81.Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010;8:24. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82.Sarafoglou A, Hoogeveen S, Matzke D, Wagenmakers EJ. Teaching good research practices: protocol of a research master course. Psychol Learn Teach. 2020;19:46–59. [Google Scholar]
  • 83.Kohrs FE, Auer S, Bannach-Brown A, Fiedler S, Haven TL, Heise V, et al. Eleven strategies for making reproducible research and open science training the norm at research institutions. Zaidi M, editor. eLife. 2023;12:e89736. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 84.Pownall M, Azevedo F, König LM, Slack HR, Evans TR, Flack Z, et al. Teaching open and reproducible scholarship: a critical review of the evidence base for current pedagogical methods and their outcomes. R Soc Open Sci. 2023;10:221255. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.van Viegen T, Akrami A, Bonnen K, DeWitt E, Hyafil A, Ledmyr H, et al. Neuromatch Academy: teaching computational neuroscience with global accessibility. Trends Cogn Sci. 2021;25:535–8. [DOI] [PubMed] [Google Scholar]
  • 86.Moher D, Altman DG, Schulz KF, Simera I. How to Develop a Reporting Guideline. In: Moher D, Altman DG, Schulz KF, Simera I, Wager E, editors. Guidelines for Reporting Health Research: A User’s Manual [Internet]. 1st ed. Wiley; 2014. p. 14–21. https://onlinelibrary.wiley.com/doi/10.1002/9781118715598.ch2. [Google Scholar]
  • 87.Bastow R, Leonelli S. Sustainable digital infrastructure. EMBO Rep. 2010;11:730–4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Zakaria S, Grant J, Luff J. Fundamental challenges in assessing the impact of research infrastructure. Health Res Policy Sys. 2021;19:119. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Barker M, Katz DS. Overview of research software funding landscape. 2022 Feb [cited 2024 Mar 22]; Available from: https://zenodo.org/records/6102487.
  • 90.RFA-MH-22-145: BRAIN Initiative: Standards to Define Experiments Related to the BRAIN Initiative (R01 Clinical Trial Not Allowed) [Internet]. [cited 2024 Mar 26]. Available from: https://grants.nih.gov/grants/guide/rfa-files/RFA-MH-22-145.html.
  • 91.Backhausen LL, Herting MM, Tamnes CK, Vetter NC. Best practices in structural neuroimaging of neurodevelopmental disorders. Neuropsychol Rev. 2022;32:400–18. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Wachinger C, Rieckmann A, Pölsterl S. Alzheimer’s Disease Neuroimaging Initiative. Detect and correct bias in multi-site neuroimaging datasets. Med Image Anal. 2021;67:101879. [DOI] [PubMed] [Google Scholar]
  • 93.Turner L, Shamseer L, Altman DG, Weeks L, Peters J, Kober T, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012;11:MR000030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94.Altman DG, Simera I, Hoey J, Moher D, Schulz K. EQUATOR: reporting guidelines for health research. Lancet. 2008;371:1149–50. [DOI] [PubMed] [Google Scholar]
  • 95.Ros T, Enriquez-Geppert S, Zotev V, Young KD, Wood G, Whitfield-Gabrieli S, et al. Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist). Brain. 2020;143:1674–85. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 96.Davis KD, Flor H, Greely HT, Iannetti GD, Mackey S, Ploner M, et al. Brain imaging tests for chronic pain: medical, legal and ethical issues and recommendations. Nat Rev Neurol. 2017;13:624–38. [DOI] [PubMed] [Google Scholar]
  • 97.Cisek P. Making decisions through a distributed consensus. Curr Opin Neurobiol. 2012;22:927–36. [DOI] [PubMed] [Google Scholar]
  • 98.Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097. [PMC free article] [PubMed] [Google Scholar]
  • 99.Jorm AF. Using the Delphi expert consensus method in mental health research. Aust N Z J Psychiatry. 2015;49:887–97. [DOI] [PubMed] [Google Scholar]
  • 100.Eickhoff S, Nichols TE, van Horn JD, Turner JA. Sharing the wealth: neuroimaging data repositories. Neuroimage. 2016;124:1065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.Petersen RC, Aisen PS, Beckett LA, Donohue MC, Gamst AC, Harvey DJ, et al. Alzheimer’s disease Neuroimaging Initiative (ADNI) clinical characterization. Neurology. 2010;74:201–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102.Poline JB, Breeze JL, Ghosh S, Gorgolewski K, Halchenko YO, Hanke M, et al. Data sharing in neuroimaging research. Front Neuroinform. 2012;6:9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.van Essen DC, Ugurbil K. The future of the human connectome. Neuroimage. 2012;62:1299–310. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 104.Zidane YJT, Olsson NOE. Defining project efficiency, effectiveness and efficacy. Int J Manag Proj Bus. 2017;10:621–41. [Google Scholar]
  • 105.Roy A, Colpitts J, Becker K, Brewer J, van Lutterveld R. Improving efficiency in neuroimaging research through application of Lean principles. PloS One. 2018;13:e0205232. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 106.Shapiro L, Staroswiecki E, Gold G. Magnetic resonance imaging of the knee: optimizing 3 Tesla imaging. Semin Roentgenol. 2010;45:238–49. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 107.Khodyakov D, Mikesell L, Schraiber R, Booth M, Bromley E. On using ethical principles of community-engaged research in translational science. Transl Res. 2016;171:52–62. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 108.Puri KS, Suresh KR, Gogtay NJ, Thatte UM. Declaration of Helsinki, 2008: implications for stakeholders in research. J Postgrad Med. 2009;55:131–4. [DOI] [PubMed] [Google Scholar]
  • 109.Rotstein HG, Santamaria F. Development of theoretical frameworks in neuroscience: a pressing need in a sea of data. arXiv preprint arXiv:220909953. 2022.
  • 110.Poline JB, Kennedy DN, Sommer FT, Ascoli GA, van Essen DC, Ferguson AR, et al. Is neuroscience FAIR? A call for collaborative standardisation of neuroscience data. Neuroinformatics. 2022;20:507–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Poldrack RA, Whitaker K, Kennedy D. Introduction to the special issue on reproducibility in neuroimaging. NeuroImage. 2020;218:116357. [DOI] [PubMed]
  • 112.Goldfarb MG, Brown DR. Diversifying participation: The rarity of reporting racial demographics in neuroimaging research. NeuroImage. 2022;254:119122. [DOI] [PubMed]
  • 113.Schwab S, Janiaud P, Dayan M, Amrhein V, Panczak R, Palagi PM, et al. Ten simple rules for good research practice. PLoS Comput Biol. 2022;18:e1010139. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Am Smeets P, Dagher A, Hare TA, Kullmann S, van der Laan LN, Poldrack RA, et al. Good practice in food-related neuroimaging. Am J Clin Nutr. 2019;109:491–503. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 115.Nakayama T. What are “clinical practice guidelines”? J Neurol. 2007;254:2–7. [Google Scholar]
  • 116.Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117.Pomponio R, Erus G, Habes M, Doshi J, Srinivasan D, Mamourian E, et al. Harmonization of large MRI datasets for the analysis of brain imaging patterns throughout the lifespan. Neuroimage. 2020;208:116450. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 118.Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Lancet. 1999;354:1896–900. [DOI] [PubMed] [Google Scholar]
  • 119.Matshabane OP. Promoting diversity and inclusion in neuroscience and neuroethics. EBioMedicine. 2021;67:103359. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 120.Noble S, Scheinost D, Constable RT. A decade of test-retest reliability of functional connectivity: a systematic review and meta-analysis. Neuroimage. 2019;203:116157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Strickland JC. Guide to research techniques in neuroscience. J Undergrad Neurosci Educ. 2014;13:R1. [Google Scholar]
  • 122.Gross J, Baillet S, Barnes GR, Henson RN, Hillebrand A, Jensen O, et al. Good practice for conducting and reporting MEG research. Neuroimage. 2013;65:349–63. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123.Shekari M, Verwer EE, Yaqub M, Daamen M, Buckley C, Frisoni GB, et al. Harmonization of brain PET images in multi-center PET studies using Hoffman phantom scan. EJNMMI Phys. 2023;10:68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Sullivan JA. The multiplicity of experimental protocols: a challenge to reductionist and non-reductionist models of the unity of neuroscience. Synthese. 2009;167:511–39. [Google Scholar]
  • 125.Lu H, Kashani AH, Arfanakis K, Caprihan A, DeCarli C, Gold BT, et al. MarkVCID cerebral small vessel consortium: II. Neuroimaging protocols. Alzheimer’s Dement. 2021;17:716–25. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 126.Murphy A, Weerakkody Y. MRI protocols. In: Radiopaedia.org. Radiopaedia.org; 2005.
  • 127.O’Boyle EH, Götz M. Questionable research practices. In: Jussim L, Krosnick JA, Stevens ST, editors. Research integrity: best practices for the social and behavioral sciences. 2022. p. 260–94.
  • 128.Xie Y, Wang K, Kong Y. Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Sci Eng Ethics. 2021;27:41. [DOI] [PubMed] [Google Scholar]
  • 129.Siritzky EM, Cox PH, Nadler SM, Grady JN, Kravitz DJ, Mitroff SR. Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples. Cogn Res: Princ Implic. 2023;8:66. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 130.Barch DM, Yarkoni T. Introduction to the special issue on reliability and replication in cognitive and affective neuroscience research. Cogn Affect Behav Neurosci. 2013;13:687–9. [DOI] [PubMed] [Google Scholar]
  • 131.Plichta MM, Schwarz AJ, Grimm O, Morgen K, Mier D, Haddad L, et al. Test–retest reliability of evoked BOLD signals from a cognitive–emotive fMRI test battery. Neuroimage. 2012;60:1746–58. [DOI] [PubMed] [Google Scholar]
  • 132.Elliott ML, Knodt AR, Ireland D, Morris ML, Poulton R, Ramrakha S, et al. What is the test-retest reliability of common task-functional MRI measures? New empirical evidence and a meta-analysis. Psychol Sci. 2020;31:792–806. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 133.Rudeck J, Vogl S, Banneke S, Schönfelder G, Lewejohann L. Repeatability analysis improves the reliability of behavioral data. PloS One. 2020;15:e0230900. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 134.Sörös P, Wölk L, Bantel C, Bräuer A, Klawonn F, Witt K. Replicability, repeatability, and long-term reproducibility of cerebellar morphometry. Cerebellum. 2021;20:439–53. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 135.Miłkowski M, Hensel WM, Hohol M. Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail. J Comput Neurosci. 2018;45:163–72. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 136.Dienlin T, Johannes N, Bowman ND, Masur PK, Engesser S, Kümpel AS, et al. An agenda for open science in communication. J Commun. 2021;71:1–26. [Google Scholar]
  • 137.Kenall A, Edmunds S, Goodman L, Bal L, Flintoft L, Shanahan DR, et al. Better reporting for better research: a checklist for reproducibility. Gigascience. 2015;4:32. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 138.Weissgerber TL, Garovic VD, Winham SJ, Milic NM, Prager EM. Transparent reporting for reproducible science. J Neurosci Res. 2016;94:859. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Heßler N, Rottmann M, Ziegler A. Empirical analysis of the text structure of original research articles in medical journals. PloS One. 2020;15:e0240288. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 140.Botvinik-Nezer R, Wager TD. Reproducibility in neuroimaging analysis: challenges and solutions. Biol Psychiatry Cogn Neurosci Neuroimaging. 2023;8:780–8. [DOI] [PubMed] [Google Scholar]
  • 141.Glatard T, Lewis LB, Ferreira da Silva R, Adalat R, Beck N, Lepage C, et al. Reproducibility of neuroimaging analyses across operating systems. Front Neuroinformatics. 2015;9:12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 142.Valkenburg G, Dix G, Tijdink J, de Rijcke S. Expanding research integrity: a cultural-practice perspective. Sci Eng Ethics. 2021;27:10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 143.Beauvais MJS, Knoppers BM, Illes J. A marathon, not a sprint–neuroimaging, Open Science and ethics. Neuroimage. 2021;236:118041. [DOI] [PubMed] [Google Scholar]
  • 144.Graham M, Hallowell N, Savulescu J. A just standard: the ethical management of incidental findings in brain imaging research. J Law Med Ethics. 2021;49:269–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Tedersoo L, Küngas R, Oras E, Köster K, Eenmaa H, Leijen Ä, et al. Data sharing practices and data availability upon request differ across scientific disciplines. Sci Data. 2021;8:192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Ciric R, Thompson WH, Lorenz R, Goncalves M, MacNicol EE, Markiewicz CJ, et al. TemplateFlow: FAIR-sharing of multi-scale, multi-species brain models. Nat Methods. 2022;19:1568–71. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 147.Hedge C, Powell G, Sumner P. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behav Res Methods. 2018;50:1166–86. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 148.Helwegen K, Libedinsky I, van den Heuvel MP. Statistical power in network neuroscience. Trends Cogn Sci. 2023;27:282–301. [DOI] [PubMed] [Google Scholar]
  • 149.Esteban O, Markiewicz CJ, Blair RW, Moodie CA, Isik AI, Erramuzpe A, et al. fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat Methods. 2019;16:111–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 150.Loss CM, Melleu FF, Domingues K, Lino-de-Oliveira C, Viola GG. Combining animal welfare with experimental rigor to improve reproducibility in behavioral neuroscience. Front Behav Neurosci. 2021;15:763428. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 151.Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci USA. 2018;115:2600–6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 152.Abrams MB, Bjaalie JG, Das S, Egan GF, Ghosh SS, Goscinski WJ, et al. A standards organization for open and FAIR neuroscience: the international neuroinformatics coordinating facility. Neuroinformatics. 2022;20:25–36. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 153.Barnes J, Conrad K, Demont-Heinrich C, Graziano M, Kowalski D, Neufeld J, et al. Understanding generalizability and transferability. Writing@ CSU. 2012.
  • 154.Schleim S. Real neurolaw in the Netherlands: the role of the developing brain in the new adolescent criminal law. Front Psychol. 2020;11:549375. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 155.Bradley SH, DeVito NJ, Lloyd KE, Richards GC, Rombey T, Wayant C, et al. Reducing bias and improving transparency in medical research: a critical overview of the problems, progress and suggested next steps. J R Soc Med. 2020;113:433–43. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 156.James S, Rao SV, Granger CB. Registry-based randomized clinical trials—a new clinical trial paradigm. Nat Rev Cardiol. 2015;12:312–6. [DOI] [PubMed] [Google Scholar]
  • 157.Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials. gov results database—update and key issues. N Engl J Med. 2011;364:852–60. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 158.Namiot ED, Smirnovová D, Sokolov AV, Chubarev VN, Tarasov VV, Schiöth HB. The international clinical trials registry platform (ICTRP): data integrity and the trends in clinical trials, diseases, and drugs. Front Pharmacol. 2023;14:1228148. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 159.Andrade C. Internal, external, and ecological validity in research design, conduct, and evaluation. Indian J Psychol Med. 2018;40:498–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Wrightson JJ. CONSORT_GPT [Internet]. [cited 2024 Apr 13]. Available from: https://chat.openai.com/g/g-jOiNJ3mhR-consort-gpt?utm_source=gptshunter.com.
  • 161.Ekhtiari H, Soleimani G, Kuplicki R, Yeh H, Cha Y, Paulus M. Transcranial direct current stimulation to modulate fMRI drug cue reactivity in methamphetamine users: a randomized clinical trial. Hum Brain Mapp. 2022;43:5340–57. [DOI] [PMC free article] [PubMed] [Google Scholar]
