Data in Brief. 2020 Jan 25;29:105170. doi: 10.1016/j.dib.2020.105170

Single-trial fMRI activation maps measured during the InterTVA event-related voice localizer. A data set ready for inter-subject pattern analysis

Virginia Aglieri a, Bastien Cagna a, Pascal Belin a,b, Sylvain Takerkart a
PMCID: PMC7016221  PMID: 32071965

Abstract

Multivariate pattern analysis (MVPA) of functional neuroimaging data has emerged as a key tool for studying the cognitive architecture of the human brain. At the group level, we have recently demonstrated the advantages of an under-exploited scheme that consists of training a machine learning model on data from a set of subjects and evaluating its generalization ability on data from unseen subjects (see Inter-subject pattern analysis: A straightforward and powerful scheme for group-level MVPA [1]). We here provide a data set that is fully ready for inter-subject pattern analysis; it includes 5616 single-trial brain activation maps recorded in 39 participants who were scanned using functional magnetic resonance imaging (fMRI) with a voice localizer paradigm. This data set should therefore prove valuable for data scientists developing brain decoding algorithms as well as for cognitive neuroscientists interested in voice perception.

Keywords: Functional magnetic resonance imaging (fMRI), Multivariate pattern analysis (MVPA), Inter-subject pattern analysis (ISPA), Voice perception, Voice localizer, Single-trial betas


Specifications Table

Subject Cognitive Neuroscience
Specific subject area Functional neuroimaging, multivariate pattern analysis, brain decoding, voice perception
Type of data Image
How data were acquired Magnetic Resonance Imaging (3T Prisma scanner; Siemens, Erlangen, Germany)
Data format Raw
NIFTI (Neuroimaging Informatics Technology Initiative) format
Parameters for data collection A multi-band gradient echo-planar imaging (EPI) sequence with an acceleration factor of 5 was used to cover the whole brain and cerebellum with 60 slices per volume, with a TR (repetition time) of 955 ms, an isotropic resolution of 2 mm, a TE (echo time) of 35.2 ms, a flip angle of 56° and a field of view of 200 × 200 mm for each slice.
Description of data collection Data were recorded in 40 subjects. Each participant was asked to close their eyes while passively listening to 144 sounds, each lasting approximately 500 ms: 72 vocal sounds and 72 non-vocal sounds. The fMRI data were processed to estimate the brain activation map induced by hearing each of these sounds, yielding 5616 brain images (144 stimuli × 39 subjects, after the exclusion of one subject).
Data source location Institution: Aix-Marseille Université
City/Town/Region: Marseille
Country: France
Data accessibility Repository name: Zenodo
Data identification number: 10.5281/zenodo.2591038
Direct URL to data: https://doi.org/10.5281/zenodo.2591038
Related research article Wang Q., Cagna B., Chaminade T., Takerkart S. (2020). Inter-subject pattern analysis: A straightforward and powerful scheme for group-level MVPA. NeuroImage, 204: 116205. https://doi.org/10.1016/j.neuroimage.2019.116205
Value of the Data
  • These data can be used to replicate the results of [1]

  • These data can be exploited by data scientists developing brain decoding methods, in particular to perform inter-subject pattern analysis

  • These data can be analyzed by cognitive neuroscientists to further understand the processes involved in the perception of vocal information

1. Data description

The data made available through this article consist of a set of 5616 three-dimensional brain images in compressed NIFTI format, i.e., 144 images for each of the 39 participants. For a given participant, each of the 144 images contains estimates of the blood-oxygen-level-dependent (BOLD) response induced by hearing an audio stimulus, either vocal or non-vocal, as measured using fMRI. These brain maps were computed using the processing pipeline described below. The naming of the files is straightforward, with one directory per subject (called ‘sub-XX’, where XX is a two-digit number), and image names following the convention ‘beta_0YYY.nii.gz’, where YYY is a three-digit number between 1 and 144, padded with zeros when necessary. In order to allow reproducing the brain decoding results presented in Ref. [1], a tabulated file called ‘labels_voicelocalizer_voice_vs_nonvoice.tsv’ provides information on the stimuli; it contains 144 lines, each including the name of the file and the category of the corresponding stimulus (‘voice’ or ‘nonvoice’) that was presented for this experimental trial. Finally, a binary mask of the brain is included (called ‘brain_mask.nii.gz’).
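As an illustration, the following Python sketch shows how these files could be loaded for one subject once the archive has been downloaded from the Zenodo repository and extracted locally. It assumes that the pandas, numpy and nilearn packages are available; the local path and the position of the category column in the tsv file are assumptions to be adapted to your setup.

# Minimal sketch (not part of the released data set): load the 144 single-trial
# beta maps of one subject and the stimulus labels, then vectorize the maps
# within the provided brain mask. The local path below is hypothetical.
import os
import numpy as np
import pandas as pd
from nilearn.masking import apply_mask

data_dir = "/path/to/single_trial_betas"  # hypothetical extraction directory

# Stimulus labels: 144 rows, one per trial ('voice' or 'nonvoice')
labels = pd.read_csv(
    os.path.join(data_dir, "labels_voicelocalizer_voice_vs_nonvoice.tsv"),
    sep="\t",
)
y = labels.iloc[:, -1].to_numpy()  # assumes the category is the last column

# Beta maps of subject 01, following the 'beta_0YYY.nii.gz' naming convention
beta_files = [
    os.path.join(data_dir, "sub-01", "beta_{:04d}.nii.gz".format(i))
    for i in range(1, 145)
]

# Vectorize each map within the brain mask
brain_mask = os.path.join(data_dir, "brain_mask.nii.gz")
X = apply_mask(beta_files, brain_mask)  # shape: (144 trials, n_voxels)
print(X.shape, np.unique(y))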

2. Experimental design, materials, and methods

2.1. Participants

Forty healthy volunteers (28 females; mean age: 25.3 years, standard deviation: 5.5 years) participated in the InterTVA protocol, which aims to study, using multi-modal MRI recordings, the inter-individual differences observed in people's ability to perform voice perception and voice identification tasks. The full InterTVA data set is available online [2]. All participants provided written informed consent in agreement with the local guidelines of the South Mediterranean ethics committee.

2.2. Stimuli and paradigm

As part of this experiment, the participants were presented with an event-related voice localizer protocol in the MRI scanner. They were asked to close their eyes while passively listening to 144 sounds, each lasting approximately 500 ms: 72 vocal sounds and 72 non-vocal sounds. Most of the stimuli (95%) belonged to a database created for a previous study [3], while the others were downloaded from copyright-free public databases. Consecutive stimuli were separated by randomized inter-stimulus intervals (ISI) whose duration varied between 4 and 5 seconds.

2.3. Data acquisition

The images were acquired on a 3T Prisma MRI scanner (Siemens, Erlangen, Germany) with a 64-channel head coil. A multi-band gradient echo-planar imaging (EPI) sequence with an acceleration factor of 5 was used to cover the whole brain and cerebellum with 60 slices per volume, with a TR (repetition time) of 955 ms, an isotropic resolution of 2 mm, a TE (echo time) of 35.2 ms, a flip angle of 56° and a field of view of 200 × 200 mm for each slice. A total of 792 volumes were acquired in a single run of 12 min and 36 s.

In addition, a high resolution three-dimensional T1 image was acquired for each subject (isotropic voxel resolution of 0.8 mm, TR = 2400 ms, TE = 2.28 ms, field of view of 256 × 256 mm), as well as a phase-reversed pair of spin echo images with the same geometry as the fMRI data (field of view = 240 × 240 mm, slice thickness = 2.5 mm, TR = 677 ms, TE = 4.62 ms, encoding phase = anterior to posterior) in order to compute a field map.

2.4. Data processing

The data processing was performed in SPM12. The processing pipeline included co-registration of the EPIs with the T1 anatomical image, correction of the image distortions using the field maps, motion correction of the EPIs, construction of a population-specific anatomical template using the DARTEL method, transformation of the DARTEL template into MNI space and warping of the EPIs into this template space. The data from one of the forty subjects were excluded because of excessive motion. Then, a general linear model was set up with one regressor per trial, as well as regressors of no interest such as motion parameters, following the least-squares-all approach described in Ref. [4]. The estimation of the parameters of this model yielded a set of beta maps, each associated with a given experimental trial. We therefore obtained 144 single-trial beta maps for each of the 39 subjects. The beta values contained in these maps allow constructing the input vectors for decoding algorithms, which can therefore operate on single trials. No spatial smoothing was applied on these data.
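To make the least-squares-all idea more concrete, the short sketch below illustrates it with the nilearn library. Note that the released maps were actually estimated with SPM12, so this is only an illustrative reconstruction under stated assumptions: the event file name, its columns and the omission of confound regressors are all hypothetical.

# Illustration of the least-squares-all (LSA) approach with nilearn (the
# released maps were estimated with SPM12): giving each trial its own
# condition name yields one regressor, hence one beta map, per trial.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical BIDS-style events file with 'onset' and 'duration' columns (in s)
events = pd.read_csv("sub-01_task-voicelocalizer_events.tsv", sep="\t")
events["trial_type"] = ["trial_{:03d}".format(i + 1) for i in range(len(events))]

glm = FirstLevelModel(
    t_r=0.955,            # repetition time of the EPI sequence
    smoothing_fwhm=None,  # no spatial smoothing, as for the released maps
)
# In practice, motion parameters would also be passed as confound regressors
glm = glm.fit("sub-01_task-voicelocalizer_bold.nii.gz", events=events)

# One single-trial beta (effect size) map per experimental trial
beta_maps = [
    glm.compute_contrast("trial_{:03d}".format(i + 1), output_type="effect_size")
    for i in range(len(events))
]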

2.5. Inter-subject pattern analysis

The source code that performs group-level searchlight decoding using the inter-subject pattern analysis scheme on these 5616 images is available at http://www.github.com/SylvainTakerkart/inter_subject_pattern_analysis. It uses the searchlight decoding implementation available in the nilearn Python module [5].
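For readers who prefer a self-contained starting point, the hedged sketch below outlines the inter-subject pattern analysis scheme with nilearn's SearchLight estimator and a leave-one-subject-out split. The choice of classifier, the searchlight radius and the local data path are assumptions; the repository above remains the reference implementation.

# Hedged sketch of group-level searchlight decoding under the inter-subject
# pattern analysis (ISPA) scheme: the classifier is trained on the trials of
# all subjects but one and tested on the left-out subject.
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from nilearn.decoding import SearchLight

data_dir = "/path/to/single_trial_betas"  # hypothetical extraction directory
labels = pd.read_csv(
    os.path.join(data_dir, "labels_voicelocalizer_voice_vs_nonvoice.tsv"), sep="\t"
)
subjects = sorted(d for d in os.listdir(data_dir) if d.startswith("sub-"))

# Assemble the 5616 beta maps, their labels and their subject of origin
beta_imgs, y, groups = [], [], []
for sub in subjects:
    beta_imgs += [
        os.path.join(data_dir, sub, "beta_{:04d}.nii.gz".format(i))
        for i in range(1, 145)
    ]
    y += labels.iloc[:, -1].tolist()  # assumes the category is the last column
    groups += [sub] * 144

searchlight = SearchLight(
    mask_img=os.path.join(data_dir, "brain_mask.nii.gz"),
    radius=6.0,                      # searchlight radius in mm (assumption)
    estimator=LogisticRegression(),  # classifier choice is an assumption
    cv=LeaveOneGroupOut(),           # leave-one-subject-out cross-validation
    n_jobs=-1,
)
searchlight.fit(beta_imgs, y, groups=groups)
# searchlight.scores_ holds the cross-subject decoding accuracy at each voxel

Splitting by subject (via the groups argument) is what distinguishes the inter-subject scheme from classical within-subject cross-validation: the decoder is always evaluated on trials from a subject it has never seen, so the resulting accuracy maps reflect generalization across individuals.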

Acknowledgments

This work was carried out within the Institut Convergence ILCB (ANR-16-CONV-0002). The acquisition of the data, performed at the Centre IRM-INT in Marseille, France, was made possible thanks to the infrastructure France Life Imaging (11-INBS-0006) of the French program Investissements d’Avenir, as well as a grant from the Agence Nationale de la Recherche (ANR-15-CE23-0026).

Footnotes


Conflict of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary data

The following is the Supplementary data to this article:

Multimedia component 1
mmc1.xml (2.3KB, xml)

References

1. Wang Q., Cagna B., Chaminade T., Takerkart S. Inter-subject pattern analysis: a straightforward and powerful scheme for group-level MVPA. Neuroimage. 2020;204:116205. doi: 10.1016/j.neuroimage.2019.116205.
2. Aglieri V., Cagna B., Belin P., Takerkart S. InterTVA. A multimodal MRI dataset for the study of inter-individual differences in voice perception and identification. OpenNeuro. 2019. doi: 10.18112/openneuro.ds001771.v1.0.2.
3. Capilla A., Belin P., Gross J. The early spatio-temporal correlates and task independence of cerebral voice processing studied with MEG. Cerebr. Cortex. 2013;23(6):1388–1395. doi: 10.1093/cercor/bhs119.
4. Mumford J.A., Turner B.O., Ashby F.G., Poldrack R.A. Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. Neuroimage. 2012;59(3):2636–2643. doi: 10.1016/j.neuroimage.2011.08.076.
5. Abraham A., et al. Machine learning for neuroimaging with scikit-learn. Front. Neuroinf. 2014;8. doi: 10.3389/fninf.2014.00014.

