F1000Research. 2017 Sep 22;6:1262. Originally published 2017 Jul 28. [Version 2] doi: 10.12688/f1000research.11964.2

Preprocessed Consortium for Neuropsychiatric Phenomics dataset

Krzysztof J Gorgolewski 1,a,#, Joke Durnez 1,2,b,#, Russell A Poldrack 1
PMCID: PMC5664981  PMID: 29152222

Version Changes

Revised. Amendments from Version 1

- We have extended the Introduction to give a clearer view of the purpose of this dataset and the possibilities it offers.
- We have changed the heading ‘Dataset validation’ to ‘Selected results’.
- We have expanded the description of preprocessing and the introduction to the FMRIPREP package.
- We have clarified that not all subjects performed all of the tasks.
- We added a data processing overview figure (new Figure 1).
- We have spelled out the BIDS acronym and added a reference to a paper with more information.
- We have added information about the NIfTI and GIfTI file formats.
- We have applied the suggested changes to the second paragraph of the Introduction.
- We have added information about the subjects.
- We have clarified that the FreeSurfer analysis was done in parallel with the alignment.
- We have added a few more details about preprocessing and an overview figure. Furthermore, we have made sure that the online documentation of the version of FMRIPREP used to generate this data has been deposited in the Internet Archive for long-term preservation.
- We have clarified why different mask strategies were applied.
- We have added the hyperslab figure to the manuscript (new Figure 3).

Abstract

Here we present preprocessed MRI data of 265 participants from the Consortium for Neuropsychiatric Phenomics (CNP) dataset. The preprocessed dataset includes minimally preprocessed data in the native, MNI and surface spaces, accompanied by potential confound regressors, tissue probability masks, brain masks and transformations. In addition, the preprocessed dataset includes unthresholded group-level and single-subject statistical maps from all tasks included in the original dataset. We hope that the availability of this dataset will greatly accelerate research.

Keywords: fMRI, human, cognition, preprocessed

Introduction

Recently, the Consortium for Neuropsychiatric Phenomics published a dataset 1 with neuroimaging as well as phenotypic information for 272 participants. The subject population consists of healthy controls (130 subjects), as well as participants with diagnoses of adult ADHD (43 subjects), bipolar disorder (49 subjects) and schizophrenia (50 subjects). The goal of the study is to examine brain function and anatomy in these common neuropsychiatric syndromes. The study focuses on memory and response inhibition, with a large battery of questionnaires, neurocognitive tasks, a neuropsychological assessment and multiple neuroimaging modalities. Details on the complete assessment for each subject can be found in the data descriptor 1. It is undoubtedly a rich resource for the academic community that can help shed light on the relationship between brain and behaviour, especially with respect to neuropsychiatric disorders. However, before any questions about brain-behaviour relationships can be answered, computationally expensive processing steps need to be performed 2. In addition to requiring a substantial amount of computing resources, a certain level of expertise in MRI data processing and fMRI task modelling is required before the data can be used to test scientific hypotheses.

To facilitate answering scientific questions using the CNP dataset, we have performed standard preprocessing as well as statistical modelling on the data, and are making the results of these analyses openly available. The preprocessing was designed to facilitate a wide range of analyses, and includes outputs in native (aligned with the participant's T1 weighted scan), MNI (volumetric) and fsaverage5 (surface) spaces. The data have not been denoised, but potential confound regressors have been calculated for each run, giving researchers the freedom to choose their own denoising schemes. In addition, we also include group and single-subject statistical maps for all tasks available in the original dataset. This preprocessed dataset joins the ranks of similar initiatives for other openly shared datasets 3–5, and we hope it will be equally useful to the scientific community.

The processed data can be found alongside the original unprocessed data in the OpenfMRI repository 6 under the revision 1.0.4.

Methods

Participants and procedures

The sample of subjects contains 155 men and 117 women, with ages between 21 and 50 years (mean: 33.23; median: 31.0). Each subject completed at least 8 years of formal education and had either English or Spanish as their primary language. Subjects were recruited by community advertisement and through outreach to local clinics and online portals. The consortium excluded participants with diagnoses in at least 2 different patient groups. Furthermore, the following exclusion criteria were used: left-handedness, pregnancy, history of head injury with loss of consciousness, or other contraindications to scanning.

Neuroimaging data were acquired on a 3T Siemens Trio scanner. Functional MRI data were collected with a T2*-weighted echoplanar imaging (EPI) sequence with the following parameters: slice thickness = 4mm, 34 slices, TR=2s, TE=30ms, flip angle=90°, matrix=64 × 64, FOV=192mm. A high-resolution T1-weighted anatomical scan (MPRAGE) was collected with the following parameters: slice thickness = 1mm, 176 slices, TR=1.9s, TE=2.26ms, matrix=256 × 256, FOV=250mm. Diffusion weighted imaging data were collected with the following parameters: slice thickness = 2mm, 64 directions, TR/TE=9000/93ms, flip angle=90°, matrix=96 × 96, axial slices, b=1000 s/mm².

The following fMRI protocols were used (see full details in 1):

a. A resting state fMRI session of 304 seconds (eyes open)

b. Balloon analog risk task. Participants were allowed to pump a series of virtual balloons. Experimental balloons (green) resulted either in an explosion or in a successful pump (no explosion and 5 points). Control (white) balloons neither resulted in points nor exploded. Participants could choose not to pump but to cash out and start with a new balloon.

c. Paired associate memory task, including a memory encoding task and a retrieval task. During the initial memory encoding task, two words were shown. Line drawings of those two objects were added after 1 second. During control trials, the line drawings were replaced with scrambled stimuli. On each trial, one of the drawings was black and white, while the other object was colored. Subjects were instructed to indicate by button press the side of the colored object. During the retrieval task, subjects were shown a pair of objects and rated their confidence in their memory of the pairing, with response options ranging from ‘sure correct’ to ‘sure incorrect’. During control trials, one of the response options was shown on one side of the screen and ‘XXXX’ on the other side of the screen. Subjects were asked to press the button that corresponded to the response option displayed.

d. Spatial working memory task. Subjects were shown an array of 1, 3, 5 or 7 circles pseudorandomly positioned around a central fixation cross. After a delay, subjects were shown a green circle and were asked to indicate whether the circle was in the same position as one of the target circles. In addition to the memory load, the delay period was manipulated, with delays of 1.5, 3 or 4.5s. Half the trials were true-positive and half were true-negative.

e. Stop signal task. Participants were instructed to respond quickly when a ‘go’ stimulus was presented on the computer screen, except on the subset of trials where the ‘go’ stimulus was paired with a ‘stop’ signal. The ‘go’ stimulus was a pointing arrow, a stop-signal was a 500 Hz tone presented through headphones.

f. Task-switching task. Stimuli were shown that varied in color (red or green) and in shape (triangle or circle). Participants were asked to respond to the stimulus based on the task cue (shape ‘S’ or color ‘C’). The task switched on 33% of the trials.

g. Breath holding task. Participants were asked to alternate between holding their breath and breathing regularly while resting.

The procedures were approved by the Institutional Review Boards at UCLA and the Los Angeles County Department of Mental Health.

Data processing overview

Data processing has been split into preprocessing and task analysis (model fitting). For an overview see Figure 1.

Figure 1. Overview of data processing and selected outputs.


Preprocessing

The input dataset was acquired from OpenfMRI.org 6 - accession number ds000030, revision 1.0.3. Even though the original dataset included data from 272 participants, seven were missing T1 weighted scans (see Table 1) and thus only data from 265 participants were preprocessed.

Table 1. Known issues.

List of problems with the raw data we were aware of at the time of writing that impacted preprocessing.

Participants affected: 10971, 10501, 70036, 70035, 11121, 10299, 10428
Issue: Lack of T1w files. Preprocessing and task modelling were not performed.

Participants affected: 11067
Issue: Signal dropout in the cerebellum during the BART, rest, SCAP, stop-signal and task-switching tasks.

Results included in this manuscript come from preprocessing performed using FMRIPREP version 0.4.4 ( http://fmriprep.readthedocs.io). This recently developed tool is a robust preprocessing pipeline based on the Nipype workflow engine 7. FMRIPREP aims to combine different implementations of various MR signal processing algorithms (from established software packages such as FSL, AFNI, or ANTs) to deliver a robust spatial normalization and nuisance estimation workflow. The tool was run with the following command line arguments:

--participant_label {sid} -w $LOCAL_SCRATCH --output-space T1w fsaverage5 template --nthreads 8 --mem_mb 20000

Where {sid} was the participant label and $LOCAL_SCRATCH was a temporary folder for storing intermediate results.
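
As an illustration of how such per-participant invocations can be scripted, the following Python sketch loops the same arguments over a list of participant labels. The dataset path, output path and labels are placeholders, not the actual paths used on the cluster:

import os
import subprocess

bids_dir = "/data/ds000030"               # placeholder: location of the raw BIDS dataset
out_dir = "/data/ds000030/derivatives"    # placeholder: output location
scratch = os.environ.get("LOCAL_SCRATCH", "/tmp/fmriprep_work")

participants = ["10159", "10171"]         # placeholders: in practice, all 265 labels with a T1w scan

for sid in participants:
    cmd = ["fmriprep", bids_dir, out_dir, "participant",
           "--participant_label", sid,
           "-w", scratch,
           "--output-space", "T1w", "fsaverage5", "template",
           "--nthreads", "8", "--mem_mb", "20000"]
    subprocess.run(cmd, check=True)       # in practice, one such call per cluster job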

Within the pipeline, each T1 weighted volume was corrected for bias field using ANTs N4BiasFieldCorrection v2.1.0 8, skullstripped using antsBrainExtraction.sh v2.1.0 (using the OASIS template), and coregistered to the skullstripped ICBM 152 Nonlinear Asymmetrical template, version 2009c 9, using a nonlinear transformation based on the symmetric image normalization (SyN) method with affine initialization, as implemented in ANTs v2.1.0 10.
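
For readers unfamiliar with Nipype, the snippet below shows the kind of interface call that FMRIPREP chains together in its anatomical workflow. It is a minimal, stand-alone sketch (input and output filenames are placeholders, and ANTs must be installed), not the actual workflow:

from nipype.interfaces.ants import N4BiasFieldCorrection

# Bias-field correct a single T1w image with ANTs N4 (requires ANTs on the PATH).
n4 = N4BiasFieldCorrection()
n4.inputs.input_image = "sub-01_T1w.nii.gz"           # placeholder input file
n4.inputs.output_image = "sub-01_T1w_preproc.nii.gz"  # placeholder output file
n4.inputs.dimension = 3
result = n4.run()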

Cortical surface was estimated from the bias field corrected T1 weighted volume (in subject space) using FreeSurfer v6.0.0 11. Due to its high quality, the brain mask derived by antsBrainExtraction.sh was used in the FreeSurfer pipeline instead of relying on the skullstripping algorithm included in FreeSurfer.

Functional data for each run were motion corrected using MCFLIRT v5.0.9 12. Functional data were skullstripped using a combination of the BET (from FSL) and 3dAutomask (from AFNI) tools, and were coregistered to the corresponding T1 weighted volume using boundary-based registration with 9 degrees of freedom, as implemented in FreeSurfer v6.0.0 13. The motion-correcting transformations, the transformation to T1 weighted space and the MNI template warp were applied in a single step using antsApplyTransforms v2.1.0 with Lanczos interpolation.

Three tissue classes were extracted from the T1 weighted images using FSL FAST v5.0.9 14. Voxels from cerebrospinal fluid and white matter were used to create a mask, in turn used to extract physiological noise regressors using the principal component analysis-based method known as aCompCor 15. The mask was eroded and limited to subcortical regions to limit overlap with grey matter, and six principal components were estimated. Framewise displacement and DVARS 16 were calculated for each functional run using their Nipype implementations. In addition to these regressors, the global signal and the mean white matter signal were also calculated.
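
For reference, framewise displacement can be computed from the six realignment parameters roughly as in the sketch below, following Power et al. 16. This is an illustration, not the exact Nipype implementation used here; it assumes the parameters are ordered as three translations followed by three rotations:

import numpy as np

def framewise_displacement(motion_params, radius=50.0):
    # motion_params: (n_volumes, 6) array of 3 translations (mm) and 3 rotations (radians);
    # rotations are converted to arc length on a sphere of the given radius (mm).
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:6] *= radius
    diffs = np.abs(np.diff(params, axis=0))    # absolute volume-to-volume differences
    fd = diffs.sum(axis=1)
    return np.concatenate([[0.0], fd])         # FD is undefined for the first volume; set to 0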

The whole dataset was preprocessed on the Stanford Sherlock supercomputer a total of three times. After each iteration, the decision to modify the preprocessing was based purely on visual evaluation of the preprocessed data, not on the results of model fitting. The first iteration (using FMRIPREP 0.4.2) uncovered inconsistent output image fields of view and issues with EPI skullstripping; the second iteration (using FMRIPREP 0.4.3) uncovered two cases of failed normalization due to poor initialization. In the final iteration all of these issues were resolved. In total, the preprocessing consumed ~22,556 single-CPU hours.

For more details of the pipeline see http://fmriprep.readthedocs.io/en/0.4.4/workflows.html (also archived in the Internet Archive at https://web.archive.org/web/20170913233706/http://fmriprep.readthedocs.io/en/0.4.4/workflows.html).

Volume-based task analysis

For a full description of the paradigms for each task, please refer to 1. We analysed the task data using FSL 17 and AFNI 18, implemented using Nipype 7. Spatial smoothing was applied using AFNI’s 3dBlurInMask with a Gaussian kernel of FWHM=5mm. Activity was estimated using a general linear model (GLM) with FEAT 17. Predictors were convolved with a double-gamma canonical haemodynamic response function 19. Temporal derivatives were added to all task regressors to compensate for variability in the haemodynamic response function. Furthermore, the following regressors were added to avoid confounding due to motion: standardised DVARS, absolute DVARS, the voxelwise standard deviation of DVARS, framewise displacement, and the six motion parameters (translation in 3 directions, rotation in 3 directions).
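
To make this concrete, the sketch below builds one such task regressor and its temporal derivative at the TR of this dataset, using one common parameterization of the double-gamma HRF and a boxcar sampled at the TR for simplicity. It is an illustration only; the actual regressors were built by FEAT:

import numpy as np
from scipy.stats import gamma

tr, n_vols = 2.0, 152
hrf_t = np.arange(0, 32, tr)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0   # peak ~6 s, undershoot ~16 s
hrf /= hrf.sum()

# Boxcar for a condition with 1-second events at the given onsets (seconds).
onsets, duration = [10.0, 40.0, 90.0, 200.0], 1.0
frame_times = np.arange(n_vols) * tr
boxcar = np.zeros(n_vols)
for onset in onsets:
    boxcar[(frame_times >= onset) & (frame_times < onset + duration)] = 1.0

regressor = np.convolve(boxcar, hrf)[:n_vols]    # convolved task regressor
derivative = np.gradient(regressor, tr)          # temporal derivative column
design = np.column_stack([regressor, derivative, np.ones(n_vols)])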

For the Balloon Analog Risk Task (BART), we included 9 task regressors: for each condition (accept, explode, reject), we added a regressor with equal amplitude across trials and a duration of 1 second per trial. Furthermore, we included the same regressors with the amplitude modulated by the number of trials before explosion (perceived as the probability of explosion). The modulator was mean-centered to avoid estimation problems due to collinearity. For the conditions that require a response (accept, reject), a regressor was added with equal amplitude and a duration equal to the reaction time. These regressors were orthogonalised with respect to their fixed-duration counterparts to separate the fixed effect of the trial from the effect covarying with the reaction time. A final regressor was added for the control condition.
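
The orthogonalisation and mean-centring steps mentioned above can be illustrated as follows. This is a simple least-squares sketch with synthetic stand-in regressors, not the FEAT implementation:

import numpy as np

def orthogonalize(target, reference):
    # Remove from `target` the component explained by `reference` (plus an intercept).
    X = np.column_stack([reference, np.ones(len(reference))])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

rng = np.random.default_rng(0)
fixed_dur = rng.random(152)                          # stand-in for a fixed-duration regressor
rt_dur = 0.8 * fixed_dur + 0.2 * rng.random(152)     # correlated reaction-time-duration regressor
rt_orth = orthogonalize(rt_dur, fixed_dur)           # shares no variance with fixed_dur

trials_before_explosion = rng.integers(1, 12, 30).astype(float)           # placeholder trial-wise modulator
modulator = trials_before_explosion - trials_before_explosion.mean()      # mean-centred to reduce collinearity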

In the retrieval phase of the Paired-Associate Memory Task (PAMRET), we modelled 4 conditions: true positives, false positives, true negatives and false negatives. For each condition, a regressor was modelled first with fixed durations (3s) and second with reaction-time durations, with the latter orthogonalised with respect to the former. With an extra regressor for control trials, there were 9 task regressors in total.

In the Spatial Capacity Task (SCAP), 25 task regressors were included. For each cognitive load (1, 3, 5 or 7 items) and each delay (1.5, 3 or 4.5s) with a correct response, two regressors were added: a regressor with fixed durations of 5 seconds and one with the duration equal to the reaction time, with the second orthogonalised with respect to the first. For both regressors, the onset was placed after the delay. The last regressor summarised all incorrect trials.

For the Stop-Signal Task (STOPSIGNAL), for each condition (go, stop-successful, stop-unsuccessful), one task regressor was included with a fixed duration of 1.5s. For the conditions requiring a response (go and stop-unsuccessful), an extra regressor was added with equal amplitude, but with the duration equal to the reaction time. Again, these regressors were orthogonalised with respect to the fixed-duration regressor of the same condition. A sixth regressor was added for erroneous trials.

In the Task Switching Task (TASKSWITCH), all manipulations were crossed (switch/no switch, congruent/incongruent, CSI delay short/long), resulting in 8 task conditions. As in the SCAP task, we added two regressors for each condition: a regressor with fixed durations of 1 second, and one with the duration equal to the reaction time, with the second orthogonalised with respect to the first. There were 16 regressors in total.

Not all subjects performed all tasks. Furthermore, for subjects who were missing at least one regressor used in the contrasts, the task data were discarded. This was the case, for example, when no correct answers were registered for a certain condition in the SCAP task. For the SCAP task, we discarded 16 subjects; 14 subjects were removed for TASKSWITCH, 2 subjects for STOPSIGNAL, 2 subjects for BART, and 12 for PAMRET. Thus the total number of subjects modelled in the BART task is 259, while 244 subjects were modelled for the SCAP task. 254 subjects were included in the TASKSWITCH task analysis, 197 subjects in the PAMRET task and 255 subjects in the STOPSIGNAL task.

All modelled contrasts are listed in the Supplementary material. As is shown, all contrasts are estimated and tested for both a positive and a negative effect.

Group level analysis

Subsequent to the single-subject analyses, all subjects were entered into a one-sample group-level analysis for each task. Three second-level analysis strategies were followed: (A) ordinary least squares (OLS) mixed modelling using FLAME 17, (B) generalized least squares (GLS) with a local estimate of random effects variance, using FSL 17, and (C) non-parametric modelling (NP) using RANDOMISE 20, with the whole-brain first-level parameter estimates for each subject as input, and 10,000 permutations. The first two analyses used a group brain mask containing only voxels that were present in 100% of the subjects, to ensure equal degrees of freedom in each voxel. For the permutation tests, a group mask was created in which voxels were discarded from further analysis if fewer than 80% of the subjects had data in those voxels, to cover a larger part of the brain, especially in more remote areas.
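
The two group-masking strategies can be reproduced from the single-subject brain masks along these lines. The file pattern follows the derivative naming described under "Data and software availability" and is illustrative:

import glob
import nibabel as nib
import numpy as np

mask_files = sorted(glob.glob(
    "fmriprep/sub-*/func/*task-bart*space-MNI152NLin2009cAsym_brainmask.nii.gz"))
masks = np.stack([nib.load(f).get_fdata() > 0 for f in mask_files])
coverage = masks.mean(axis=0)                 # proportion of subjects with data in each voxel

affine = nib.load(mask_files[0]).affine
nib.save(nib.Nifti1Image((coverage == 1.0).astype(np.uint8), affine), "group_mask_100pct.nii.gz")  # FLAME/GLS
nib.save(nib.Nifti1Image((coverage >= 0.8).astype(np.uint8), affine), "group_mask_80pct.nii.gz")   # randomise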

In addition to group-level statistical maps, activation count maps (ACMs) were generated to show the proportion of participants that show activation, rather than the average activation over subjects 21. These maps indicate whether the effects discovered in the group analyses are consistent over subjects. As in 21, the statistical map for each subject was binarized at z=+/-1.65. For each contrast, the average of these maps was computed over subjects. The average negative map (percentage of subjects showing a negative effect with z < -1.65) was subtracted from the average positive map to indicate the direction of effects.
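
A minimal sketch of the activation count map computation is shown below, assuming the single-subject z-maps have already been resampled to MNI space; the paths are placeholders:

import glob
import nibabel as nib
import numpy as np

zmap_files = sorted(glob.glob("task/sub-*/bart.feat/stats/zstat1_mni.nii.gz"))  # placeholder paths
z = np.stack([nib.load(f).get_fdata() for f in zmap_files])

acm_pos = (z > 1.65).mean(axis=0)     # proportion of subjects with z > 1.65
acm_neg = (z < -1.65).mean(axis=0)    # proportion of subjects with z < -1.65
acm = acm_pos - acm_neg               # signed activation count map

affine = nib.load(zmap_files[0]).affine
nib.save(nib.Nifti1Image(acm.astype(np.float32), affine), "bart_contrast1_acm.nii.gz")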

Selected results

To assess the quality of volumetric spatial normalization, we examined the overlap of the EPI-derived brain masks in MNI space (across all participants and runs, a total of 1,969 masks; see Figure 2) and visualized the alignment of a single line of voxels across all runs (see Figure 3). Within-subject coregistration and between-subject normalization worked well for the vast majority of participants, resulting in very good overlap. All of the issues observed while processing the dataset are listed in Table 1.

Figure 2. Overlap of the 1,969 EPI-derived brain masks in MNI space: voxels inside the blue outline were present within the mask for 85% of runs, purple: 95% of runs, black: 100% of runs.


Animated visualizations of all coregistrations are available inside the HTML reports included as part of this dataset.

Figure 3. Visualization of the coregistration quality (hyperslab).


Each line in all columns represents a single line of corresponding voxels from 1,969 preprocessed EPI images in MNI space (voxel coordinates i=20, k=50, t=10).
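
A hyperslab like the one in Figure 3 can be assembled from the preprocessed runs roughly as follows. Each run is loaded in full here for simplicity, and the file pattern is a placeholder:

import glob
import nibabel as nib
import numpy as np

epi_files = sorted(glob.glob(
    "fmriprep/sub-*/func/*space-MNI152NLin2009cAsym_preproc.nii.gz"))
lines = [nib.load(f).get_fdata()[20, :, 50, 10] for f in epi_files]   # voxels at i=20, k=50, volume t=10
hyperslab = np.stack(lines)    # one row per run; display as an image to inspect alignment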

A selection of the tested contrasts in the task analyses is shown in Figures 4 to 8. Figures were generated using nilearn 22.

Figure 4. Task analysis results for the BART task.


In the left plot, the statistical map of the one-sample group test, computed with randomise. The right plot shows the difference between the positive and the negative activation count maps.

Figure 5. Task analysis results for the PAMRET task.


In the left plot, the statistical map of the one-sample group test, computed with randomise. The right plot shows the difference between the positive and the negative activation count maps.

Figure 6. Task analysis results for the SCAP task.


In the left plot, the statistical map of the one-sample group test, computed with randomise. The right plot shows the difference between the positive and the negative activation count maps.

Figure 7. Task analysis results for the STOPSIGNAL task.


In the left plot, the statistical map of the one-sample group test, computed with randomise. The right plot shows the difference between the positive and the negative activation count maps.

Figure 8. Task analysis results for the TASKSWITCH task.


In the left plot, the statistical map of the one-sample group test, computed with randomise. The right plot shows the difference between the positive and the negative activation count maps.

Data and software availability

The preprocessed images were deposited alongside the original dataset in the OpenfMRI repository (accession number ds000030) 6, under revision 1.0.4. The preprocessed data are organized according to the draft extension of the Brain Imaging Data Structure (BIDS; see 23) specification for describing derived data. All FMRIPREP derivatives are organized under fmriprep/sub-<participant_label>/.

Derivatives related to T1 weighted files are in the anat subfolder:

  • *T1w_preproc.nii.gz - bias field corrected T1 weighted file, using ANTS’ N4BiasFieldCorrection

  • *T1w_brainmask.nii.gz - brain mask derived using ANTs.

  • *T1w_dtissue.nii.gz - tissue class map derived using FAST.

  • *T1w_class-CSF_probtissue.nii.gz, *T1w_class-GM_probtissue.nii.gz, *T1w_class-WM_probtissue.nii.gz - probability tissue maps.

All of the above are available in native and MNI space.

  • *T1w_smoothwm.[LR].surf.gii - smoothed gray/white matter interface (white) surfaces.

  • *T1w_pial.[LR].surf.gii - pial surface.

  • *T1w_midthickness.[LR].surf.gii - MidThickness surfaces.

  • *T1w_inflated.[LR].surf.gii - FreeSurfer inflated surfaces for visualization.

  • *T1w_space-MNI152NLin2009cAsym_class-CSF_probtissue.nii.gz, *T1w_space-MNI152NLin2009cAsym_class-GM_probtissue.nii.gz, *T1w_space-MNI152NLin2009cAsym_class-WM_probtissue.nii.gz - probability tissue maps, transformed into MNI space.

  • *T1w_target-MNI152NLin2009cAsym_warp.h5 - composite (warp and affine) transform used to map the participant's T1 weighted image into MNI space (HDF5 format).

Derivatives related to EPI files are in the func subfolder:

  • *bold_space-<space>_brainmask.nii.gz Brain mask for EPI files.

  • *bold_space-<space>_preproc.nii.gz Motion-corrected (using MCFLIRT for estimation and ANTs for interpolation) EPI file

All of the above are available in the native T1 weighted space as well as the MNI space.

  • *bold_space-fsaverage5.[LR].func.gii Motion-corrected EPI file sampled to surface.

  • *bold_confounds.tsv A tab-separated value file with one column per calculated confound (see Methods) and one row per timepoint/volume.

File formats: files with the .nii.gz extension are in the NIfTI file format (see https://nifti.nimh.nih.gov/), and files with the .gii extension are in the GIfTI file format (see https://www.nitrc.org/projects/gifti/).
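
As an example of working with the files listed above, the sketch below loads one preprocessed run together with its confounds table and selects a plausible set of nuisance regressors. The participant label is a placeholder, and the confound column names follow the FMRIPREP 0.4.x convention; check the TSV header of the release you download:

import nibabel as nib
import pandas as pd

sub = "sub-10159"   # placeholder participant label
bold = nib.load(f"fmriprep/{sub}/func/{sub}_task-rest_bold_"
                f"space-MNI152NLin2009cAsym_preproc.nii.gz")
confounds = pd.read_csv(f"fmriprep/{sub}/func/{sub}_task-rest_bold_confounds.tsv",
                        sep="\t")

# e.g. motion + aCompCor + framewise displacement; adapt to your own denoising scheme
selected = confounds.filter(regex="^(X|Y|Z|RotX|RotY|RotZ|aCompCor|FramewiseDisplacement)")
selected = selected.fillna(0)   # first-row values for some confounds are n/a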

In addition, the dataset includes 265 visual quality HTML reports (one per participant) generated by FMRIPREP that illustrate all major preprocessing steps (T1 skullstripping, T1 to MNI coregistration, EPI skullstripping, EPI to T1 coregistration, and CompCor regions of interest).

All the FreeSurfer derivatives are organized under freesurfer/sub-<participant_label>/ according to the FreeSurfer native file organization scheme.

The results of the single subject task modeling are available in task/sub-<participant_label>/ and the group level results can be found in task_group/. Each subject-specific folder holds 5 folders - bart.feat, scap.feat, pamret.feat, stopsignal.feat, taskswitch.feat - with the results from the respective task modeling, organised as standard FEAT output. The group-level folder contains a folder for every task, in turn containing a folder for each contrast (see Supplementary material for naming conventions) and below those folders are the results of the three modeling strategies.

The results for each contrast in the one-sample group task analyses are deposited and can be interactively viewed in NeuroVault 24: http://neurovault.org/collections/2606/.
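
For programmatic access, the collection can also be fetched with nilearn, for example as sketched below (assuming a recent nilearn with the NeuroVault fetchers; the displayed image index and threshold are arbitrary):

from nilearn import datasets, plotting

collection = datasets.fetch_neurovault_ids(collection_ids=[2606])
print(len(collection.images), "images downloaded")

# Display the first downloaded unthresholded map.
plotting.plot_stat_map(collection.images[0], threshold=2.3, display_mode="z")
plotting.show()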

Latest source code used to produce the task analyses: https://github.com/poldracklab/CNP_task_analysis

Archived source code as at the time of publication: http://doi.org/10.5281/zenodo.832319 25. License: MIT license.

All code was run inside a Singularity container 26, created from the Docker container poldracklab/cnp_task_analysis:1.0 available on Docker Hub ( https://hub.docker.com/r/poldracklab/cnp_task_analysis/).

To ensure long-term preservation, the code has been shared on Zenodo and assigned a DOI. This not only allows re-running of the analyses, but also regeneration of the Singularity container with all necessary dependencies. Furthermore, the data shared on NeuroVault and OpenfMRI are periodically archived in the Stanford Digital Repository.

Acknowledgements

We would like to thank all of the developers and beta testers of the FMRIPREP package - especially Oscar Esteban, Chris Markiewicz, and Ross Blair.

Funding Statement

This work has been funded by the Laura and John Arnold Foundation. JD has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 706561. The acquisition of the original dataset was supported by the Consortium for Neuropsychiatric Phenomics (NIH Roadmap for Medical Research grants UL1-DE019580, RL1MH083268, RL1MH083269, RL1DA024853, RL1MH083270, RL1LM009833, PL1MH083271, and PL1NS062410).

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

[version 2; referees: 2 approved]

Supplementary material

Supplementary File 1: Task fMRI Contrasts.

References

  • 1. Poldrack RA, Congdon E, Triplett W, et al.: A phenome-wide examination of neural and cognitive function. Sci Data. 2016;3:160110. 10.1038/sdata.2016.110
  • 2. Poldrack RA, Gorgolewski KJ: Making big data open: data sharing in neuroimaging. Nat Neurosci. 2014;17(11):1510–7. 10.1038/nn.3818
  • 3. Puccio B, Pooley JP, Pellman JS, et al.: The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data. Gigascience. 2016;5(1):45. 10.1186/s13742-016-0150-5
  • 4. Bellec P, Chu C, Chouinard-Decorte F, et al.: The Neuro Bureau ADHD-200 Preprocessed repository. Neuroimage. 2017;144(Pt B):275–86. 10.1016/j.neuroimage.2016.06.034
  • 5. Glasser MF, Sotiropoulos SN, Wilson JA, et al.: The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage. 2013;80:105–24. 10.1016/j.neuroimage.2013.04.127
  • 6. Poldrack RA, Barch DM, Mitchell JP, et al.: Toward open sharing of task-based fMRI data: the OpenfMRI project. Front Neuroinform. 2013;7:12. 10.3389/fninf.2013.00012
  • 7. Gorgolewski K, Burns CD, Madison C, et al.: Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Front Neuroinform. 2011;5:13. 10.3389/fninf.2011.00013
  • 8. Tustison NJ, Avants BB, Cook PA, et al.: N4ITK: improved N3 bias correction. IEEE Trans Med Imaging. 2010;29(6):1310–20. 10.1109/TMI.2010.2046908
  • 9. Fonov VS, Evans AC, McKinstry RC, et al.: Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. Neuroimage. 2009;47:S102. 10.1016/S1053-8119(09)70884-5
  • 10. Avants BB, Epstein CL, Grossman M, et al.: Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal. 2008;12(1):26–41. 10.1016/j.media.2007.06.004
  • 11. Dale AM, Fischl B, Sereno MI: Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage. 1999;9(2):179–94. 10.1006/nimg.1998.0395
  • 12. Jenkinson M, Bannister P, Brady M, et al.: Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage. 2002;17(2):825–41. 10.1006/nimg.2002.1132
  • 13. Greve DN, Fischl B: Accurate and robust brain image alignment using boundary-based registration. Neuroimage. 2009;48(1):63–72. 10.1016/j.neuroimage.2009.06.060
  • 14. Zhang Y, Brady M, Smith S: Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imaging. 2001;20(1):45–57. 10.1109/42.906424
  • 15. Behzadi Y, Restom K, Liau J, et al.: A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. Neuroimage. 2007;37(1):90–101. 10.1016/j.neuroimage.2007.04.042
  • 16. Power JD, Mitra A, Laumann TO, et al.: Methods to detect, characterize, and remove motion artifact in resting state fMRI. Neuroimage. 2013;84:320–41. 10.1016/j.neuroimage.2013.08.048
  • 17. Jenkinson M, Beckmann CF, Behrens TE, et al.: FSL. Neuroimage. 2012;62(2):782–90. 10.1016/j.neuroimage.2011.09.015
  • 18. Cox RW: AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 1996;29(3):162–73. 10.1006/cbmr.1996.0014
  • 19. Glover GH: Deconvolution of impulse response in event-related BOLD fMRI. Neuroimage. 1999;9(4):416–29. 10.1006/nimg.1998.0419
  • 20. Winkler AM, Ridgway GR, Webster MA, et al.: Permutation inference for the general linear model. Neuroimage. 2014;92:381–97. 10.1016/j.neuroimage.2014.01.060
  • 21. Barch DM, Burgess GC, Harms MP, et al.: Function in the human connectome: task-fMRI and individual differences in behavior. Neuroimage. 2013;80:169–89. 10.1016/j.neuroimage.2013.05.033
  • 22. Abraham A, Pedregosa F, Eickenberg M, et al.: Machine Learning for Neuroimaging with Scikit-Learn. arXiv [cs.LG]. 2014.
  • 23. Gorgolewski KJ, Auer T, Calhoun VD, et al.: The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci Data. 2016;3:160044. 10.1038/sdata.2016.44
  • 24. Gorgolewski KJ, Varoquaux G, Rivera G, et al.: NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Front Neuroinform. 2015;9:8. 10.3389/fninf.2015.00008
  • 25. Durnez J, Gorgolewski CJ, Poldrack RA: poldracklab/CNP_task_analysis: v0.1. Zenodo. 2017. 10.5281/zenodo.832319
  • 26. Kurtzer GM, Sochat V, Bauer MW: Singularity: Scientific containers for mobility of compute. PLoS One. 2017;12(5):e0177459. 10.1371/journal.pone.0177459
F1000Res. 2017 Oct 31. doi: 10.5256/f1000research.13763.r26274

Referee response for version 2

Angela R Laird 1

The authors have sufficiently addressed my previous concerns, and I now am happy to endorse publication of this manuscript.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2017 Oct 18. doi: 10.5256/f1000research.13763.r26275

Referee response for version 2

Anderson M Winkler 1,2

All suggestions I had for the first version have been addressed in this revised version, and I have no further concerns. I would like to thank and congratulate the authors for making this resource available.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2017 Aug 22. doi: 10.5256/f1000research.12934.r24602

Referee response for version 1

Anderson M Winkler 1,2

First of all, I would like to congratulate the authors for making the dataset available, which should allow interested scientists to explore the data and enrich the research they conduct with more information that can eventually lead to helpful new discoveries. It is remarkable that more than one processing stream was used (FSL and AFNI for functional, and volumetric and surface-based for structural), and further, three different inference approaches were considered, all of which are a great bonus in terms of comparisons among methods.

I have very few concerns about the current version of the manuscript (v1, dated 28/July/2017):

  • Page 2, 1st column, 2nd paragraph of the Introduction: "giving researchers the freedom to fit many different models that incorporate different denoising schemes": as stated, it may suggest that it would be adequate to simply run multiple models, without attention to excess error due to multiple testing. Perhaps a different wording such as "giving researchers the freedom to choose their own denoising schemes" could still accommodate what the authors may have wished to state.

  • I cannot find information about the subjects. Who were they? Where were the data collected? With what scanner and sequences? Who approved the protocol? Presumably this information is in reference #1, but it cannot hurt to have that information here.

  • It would be good if a few more details on what exactly FMRIPREP does could be given, without having to rely completely on external links that may no longer be available in the future. This is even more important considering that results did change after minor version changes.

  • Page 2, 1st column, 4th paragraph of the Methods: As written, a reader may think that the input images to FreeSurfer were those non-linearly aligned to MNI space, which surely was not the case. But if it was, then the FS analysis would have to be re-done, as the warps affect thickness and area measurements.

  • Page 3, 2nd column: Regarding the masks, one would have thought that using the mask from FEAT/FLAME could have been a good shortcut instead of creating a new one for randomise. Why wasn't that done?

  • Page 3, Validation section: The strategy of using the mask contours to investigate between-subjects registration is surely not a good one, as it does not show the overlap between structures. A hyperslab across subjects would have been more informative. Moreover, the contours shown in Figure 1 are a bit concerning for suggesting somewhat suboptimal registration.

  • Still in the Validation section: what exactly is being validated here? It doesn't seem to show that the dataset would be valid or not valid in any particular aspect. Consider investigating some specific validation parameters over different aspects (e.g., registration, bias correction, surface reconstruction, the tasks eliciting expected response, etc), or remove this section altogether, as it can be misleading for suggesting that the dataset is "valid" somehow.

  • Page 6: The description of the files is extremely helpful. I note that one of the files listed has extension .h5. Is this HDF5? If yes, please state so. I believe this format was used for the lack of another option, but in fact, this is a great format that probably should in the future be an option for most imaging data we use (both surface-based and volume-based).

  • Of the FreeSurfer surfaces, the white is the most important one, not pial or midthickness. The pial is computed after the white already exists, and its exactness depends on the white. The midthickness does not match any particular tissue border, and if one measures surface area from it, that area will depend on thickness, which would make it a poor phenotype. It would be great if the white surface files could be provided.

  • Page 7: I find it concerning that information and resources about this dataset are scattered over the internet: There is the current paper (PDF) and its Supplementary Material on F1000, then there are results stored in NeuroVault, source code on Github and Zenodo, and finally, a Docker container on DockerHub. Could not a copy of all these pieces be on a single place that can be simply downloaded and maintained on the long term, e.g., in DataDryad? How can the readers be sure that all these links will be alive in 10 or 20 years?

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

F1000Res. 2017 Sep 16.
Krzysztof J Gorgolewski 1

Dr. Winkler,

Thank you for the detailed review. Your comments helped us to improve the manuscript in the following way:

  • We have applied the suggested changes to the second paragraph of the introduction

  • We have added information about the subjects

  • We have clarified that the FS analysis was done in parallel with the alignment.

  • We have added a few more details about preprocessing and an overview figure. Furthermore, we have made sure that the online documentation of the version of FMRIPREP used to generate this data has been deposited in the Internet Archive for long term preservation.

  • We have clarified why different mask strategies were applied.

  • We have added the hyperslab figure to the manuscript.

  • The 95% and 85% brain mask overlap contours show good agreement across the normalized masks, with signal dropout in areas usually affected by susceptibility distortion artifacts. The 100% overlap is much worse, since it requires voxels to be present in all of the 1,969 evaluated masks.

  • We have changed the header ‘Validation’ to ‘Selected Results’ to not give the wrong impression that we validated the analyses.

  • We added clarification on the file format for the .h5 files (indeed it’s HDF5!)

  • The white surface is provided (both in GIfTI and native FreeSurfer formats) - we made this information more prominent.

  • We have clarified that there is long-term storage of the data and code.

F1000Res. 2017 Aug 8. doi: 10.5256/f1000research.12934.r24599

Referee response for version 1

Angela R Laird 1

This Data Note reports on the availability of an fMRI dataset from the Consortium for Neuropsychiatric Phenomics (CNP), which includes both original and processed data. The publication of shared fMRI datasets is strongly encouraged to amplify our community's efforts in promoting open science. Although dataset publications are on the rise, unfortunately, only a handful currently exist. I am delighted to see this work being published, and expect that it may serve as a representative fMRI dataset publication in the future. With that in mind, I think it would be helpful to revise the manuscript to include a more detailed description of the data, acquisition methods, and individual tasks. 

Introduction: The first paragraph of the Introduction is extremely brief and should be expanded to include a description of the purpose of the study, participants, and the experimental protocol (imaging and behavioral). Only a very short mention of these three important aspects of the study is provided, collapsed into a single (somewhat awkward) first sentence. Such brevity might limit a reader's understanding of the overall context of the data that are being shared. The second sentence of the Introduction alludes to "relationships" being "answered" - this should be restated and expanded to more fully describe what questions may be asked from these data - again, this relates to the overall purpose of the CNP project. While such additional descriptions will result in a longer paper, the information will be helpful in allowing readers to understand how their specific research questions may be addressed by a deeper exploration of these data. Lastly, the Introduction should also state where the data may be downloaded and summarize the different unprocessed and processed files that have been shared.

Methods: Demographic information on the participants should be provided, as well as a statement of IRB approval. A description of the MRI scanner should be included, along with the data acquisition parameters. Each of the fMRI tasks should be fully described - this will help clarify subsequent reference to different conditions (e.g., “accept, explode, reject”). Much of the Methods is written for those who are already very familiar with the software packages that are utilized. It would be helpful to improve accessibility by including a brief description of some of the newer, less ubiquitous software tools. In particular, given how the Methods is framed around use of FMRIPREP, a short intro should be included. The total numbers of participants for each of the tasks (shown on page 3) don't agree with the numbers of task datasets discarded for being incomplete - please note why the additional participants were omitted from the final dataset. Overall, the flow of the Methods section could be improved by adding a workflow or pipeline figure that summarizes the different analysis steps and the versions of the data (e.g., volume vs. surface approaches).

I’m not convinced that “Dataset validation” is an appropriate heading. Figure 1 is a good sanity check, but it’s not clear how Figures 2 - 6 are a validation.

Data and software availability: Some readers may not be familiar with the BIDS format - please add a description of this. In addition, some readers may be looking for information about the DICOM and NIfTI images, so an explicit mention may be helpful.

Minor comments:

- page 2: first report of DVARS is capitalized, but later mentions are not

- page 2: "Temporal derivatives were added to all task regressors to compensate for variability in the haemodynamic response function" 

- page 3: typo - “unsuccesful”

- ensure past tense used consistently throughout Methods (e.g., “tasks data were discarded”, “no correct answers were registered” on page 3).

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

F1000Res. 2017 Sep 16.
Krzysztof J Gorgolewski 1

Dr. Laird,

Thank you for your review and comments. They were very helpful in preparation of a new revision of the paper.

  • We have extended the introduction to give a clearer view on the purpose and possibilities this dataset gives. In the methods sections, we have given demographic information on the participants, as well as IRB approval and a description of the MRI scanner, scanning parameters and tasks.

  • We have changed the heading ‘Dataset validation’ to ‘Selected results’.

  • We have expanded the description of preprocessing and introduction to the FMRIPREP package.

  • We have clarified that not all subjects performed all of the tasks.

  • We added a data processing overview figure.

  • We have spelled out the BIDS acronym and added a reference to a paper with more information.

  • We have added information about NIFTI and GIFTI file formats (DICOM files are not part of this dataset).

  • Furthermore, in accordance with the review, we have fixed the reported typos and changed the tense of the paper.
