Abstract
The functional neuroanatomy of speech processing has been investigated using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) for more than 20 years. However, these approaches have relatively poor temporal resolution and/or challenges of acoustic contamination due to the constraints of echoplanar fMRI. Furthermore, these methods are contraindicated because of safety concerns in longitudinal studies and research with children (PET) or in studies of patients with metal implants (fMRI). High-density diffuse optical tomography (HD-DOT) permits presenting speech in a quiet acoustic environment, has excellent temporal resolution relative to the hemodynamic response, and provides noninvasive and metal-compatible imaging. However, the performance of HD-DOT in imaging the brain regions involved in speech processing is not fully established. In the current study, we use an auditory sentence comprehension task to evaluate the ability of HD-DOT to map the cortical networks supporting speech processes. Using sentences with two levels of linguistic complexity, along with a control condition consisting of unintelligible noise-vocoded speech, we recovered a hierarchical organization of the speech network that matches the results of previous fMRI studies. Specifically, hearing intelligible speech resulted in increased activity in bilateral temporal cortex and left frontal cortex, with syntactically complex speech leading to additional activity in left posterior temporal cortex and left inferior frontal gyrus. These results demonstrate the feasibility of using HD-DOT to map spatially distributed brain networks supporting higher-order cognitive faculties such as spoken language.
Graphical Abstract
Introduction
Cognitive neuroscientists who study how the brain perceives spoken language desire a quiet imaging technique that can record brain function noninvasively and provide reliable results. Such measurements have proven challenging to collect using functional magnetic resonance imaging (fMRI) due to the substantial acoustic noise associated with echoplanar imaging (Foster et al., 2000; McJury and Shellock, 2000; Moelker and Pattynama, 2003; Price et al., 2001; Ravicz et al., 2000). Background noise can interfere with the presentation of auditory stimuli and add perceptual and cognitive demands to the experimental task (Peelle, 2014). Such auditory task demands are likely to differentially affect participants with hearing impairment or reduced cognitive capacity (Caldwell and Nittrouer, 2013; Grimault et al., 2001; Peelle et al., 2011). In addition, the high magnetic fields generated by the scanner preclude studies of patients with metal implants, who cannot receive MRIs. Although electroencephalography (EEG), magnetoencephalography (MEG), and positron emission tomography (PET) provide quiet imaging settings, each of these modalities has limitations. For example, anatomical localization can be challenging with EEG and MEG (Baumgartner, 2004; He, 1999), and PET uses ionizing radiation and has relatively low temporal resolution (Cabeza and Nyberg, 1997).
In theory, optical neuroimaging offers an appealing alternative. Optical methods use a quiet, safe, and noninvasive technique based on near infrared spectroscopy (NIRS) to record hemodynamic activity from the brain. However, traditional functional NIRS (fNIRS) imaging suffers from low spatial resolution (sparse source-detector arrangements) and signal contamination from superficial tissues. More recently, the development of high-density diffuse optical tomography (HD-DOT) instrumentation has dramatically improved the spatial resolution and brain specificity of optical neuroimaging (Gregg et al., 2010; Joseph et al., 2006; Koch et al., 2010; Saager and Berger, 2008; White and Culver, 2010; Zeff et al., 2007). Further, algorithms incorporating realistic forward light models, spatial normalization methods, and advanced statistical tools have significantly improved overall image quality, coregistration to anatomy, and reliability (Custo et al., 2010; Eggebrecht et al., 2012; Ferradal et al., 2014; Hassanpour et al., 2014; Okamoto and Dan, 2005).
Early HD-DOT studies covered approximately one-eighth of the head, limiting imaging to small select regions of the brain. Recently we reported a large field of view HD-DOT system that covers approximately 50% of the head surface and is capable of mapping distributed brain functions and networks (Eggebrecht et al., 2014). We validated the performance of this system for functional imaging of distributed cognitive processes and networks through quantitative comparisons to coregistered fMRI, and were able to map the neuroanatomical organization of single-word processing (i.e., distinct cortical regions for hearing, reading, speaking, and subvocally generating single words). However, the ability of HD-DOT to capture the neural responses to connected speech has not yet been established. Connected speech comprehension is more complex than single word perception, incorporating syntactic structure and richer conceptual representations (Price, 2012). Processing these relationships requires a larger network of cortical regions that must be imaged simultaneously. Thus, the ability of a technique to capture a range of cortical responses during speech processing is critical to studies that aim to understand how the healthy brain processes speech, and to understand the impact of auditory noise, hearing loss, or cognitive deficits that modulate the brain’s strategy for speech comprehension.
To evaluate the performance of our HD-DOT system in imaging speech comprehension we presented listeners with spoken sentences that varied in their syntactic complexity. We chose this manipulation because in other neuroimaging modalities there are consistent differences in neural activity based on syntactic complexity, and because syntactic information is processed in a highly distributed and hierarchical fashion throughout the cortex at different cortical depths (e.g., sulci and gyri) (Bornkessel et al., 2005; Caplan et al., 2008; Friederici et al., 2003; Griffiths et al., 2013; Stromswold et al., 1996; Tyler et al., 2010). Using an event-related sentence comprehension task, we tested whether HD-DOT would be able to detect the effect of syntactic complexity caused by a word-order manipulation. Our imaging results show that HD-DOT is capable of mapping a hierarchical organization of the language system, and the spatial location of the functional maps are in good agreement with previous sentence comprehension studies using MRI and PET. Being able to detect the subtle changes in cortical activation induced by increased processing demand demonstrates that HD-DOT has the sensitivity and spatial specificity to serve as a general tool for cognitive neuroscience.
Materials and Methods
Participants
We scanned 10 healthy, right-handed, native English speakers (6 female) between the ages of 20 and 32 years (mean = 27.6, STD = 3.3). All had normal hearing by self-report and no history of neurological or psychiatric disorders. Written informed consent was obtained from all subjects under a protocol approved by the Human Research Protection Office at Washington University School of Medicine.
Subject-specific light models were generated using each subject’s own structural T1- and T2-weighted MRI images obtained from a previous study.
Materials
Auditory stimuli consisted of sentences and unintelligible noise (as a control condition). Sentences were constructed to contain a subject-relative (SR) or object-relative (OR) center-embedded clause. Sentences with object-relative clauses are reliably found to be more difficult to comprehend than sentences with subject-relative clauses (Gibson, 1998; Traxler et al., 2002), resulting in longer response times, more errors, or equivalent performance achieved through increased neural activity. These sentences were selected from a list of 60 meaningful 6-word sentences, each with a subject-relative embedded clause, used in previous studies (Peelle et al., 2004; Peelle et al., 2010b; Wingfield et al., 2003). In half of the sentences the character performing the action was male (e.g., king, brother) and in the other half female (e.g., queen, sister). The 60 original sentences were then re-worded to vary syntactic complexity (turning subject-relative into object-relative constructions) and whether a male or female was performing the action, as shown in the following examples:
Subject-relative clause, male agent: “Men that assist women are helpful.”
Object-relative clause, male agent: “Women that men assist are helpful.”
Subject-relative clause, female agent: “Women that assist men are helpful.”
Object-relative clause, female agent: “Men that women assist are helpful.”
These rearrangements resulted in 240 total sentences, each of which was presented a single time during the experiment (120 subject-relative sentences and 120 object-relative sentences). During the experiment, subjects were asked to indicate the gender of the character performing the action (male or female) using a button-press response.
In addition to intelligible sentences, we included unintelligible speech trials (“noise”) as a control condition. The noise stimuli consisted of one channel noise-vocoded speech, created by modulating white noise (bandpass filtered at 0-8 kHz) with the amplitude envelope of the sentence (low pass filtered at 30 Hz); the vocoded sentences were a subset of the intelligible sentences used in the study. Noise vocoding removes the spectral detail from the sentence while retaining its temporal amplitude envelope (Shannon et al., 1995).
The mean length of auditory stimuli (sentences or noise) was 1.76 ± 0.05 s (range: 1.32–1.89 s).
HD-DOT system
Full details on our HD-DOT system are reported by Eggebrecht et al. (2014). Briefly, our HD-DOT array contains 96 sources and 92 detectors that are coupled with fiber optic bundles to a flexible imaging cap. Source locations are illuminated by continuous-wave light emitting diodes at two wavelengths (750 nm and 850 nm) that enable hemoglobin spectroscopy. Light is detected by avalanche photodiodes (Hamamatsu C5460-01) and digitized by dedicated 24-bit analog-to-digital converters (MOTU HD192) (Zeff et al., 2007), which enable high dynamic range (>10⁶) and low crosstalk (<10⁻⁶). The dynamic range allows the detection of light from multiple source-detector distances (e.g., first through fourth nearest neighbors are 13, 30, 39 and 47 mm apart) (Supplementary Figure 1A and 1B). This array provides more than 1,200 usable source-detector measurements at a 10 Hz full-field frame rate.
Procedure
Subjects were seated in an adjustable chair in a sound-isolated room facing a 19-inch LCD screen (located 1 m from subjects at approximately eye level) and two stereo speakers (each located 1.5 m from subjects at approximately ear level). Subjects held a keyboard on their lap. The HD-DOT cap was placed on the subject’s head, covering portions of occipital, temporal, motor, and frontal cortices (Figure 1A). Once the cap was seated comfortably with good signal-to-noise ratio (Supplementary Figure 1C), its placement with respect to anatomical landmarks on the head and face of the subject (e.g., the nasion) was noted; this information was later used to generate a subject-specific light model.
Figure 1.
(A) Schematic view of the HD-DOT experimental set up, subject position and imaging cap structure (a subset of optical fibers is shown for clarity). (B) Group field of view on the cortical surface of an MNI atlas.
We presented stimuli using Psychophysics Toolbox 3 (Brainard, 1997), sending audio to the speakers via an external audio interface (M-Audio Fast Track Pro). We set the sound level at a comfortable listening level that did not change over the course of a session. Stimuli were presented in four separate runs, each of which contained 30 subject-relative (easy) sentences, 30 object-relative (complex) sentences, and 10 noise trials. These stimuli were presented in a pseudorandom order, with the order of conditions varied between runs but constant across subjects. Following each sentence, subjects were instructed to press a key with their left index finger if the person performing the action was female and a separate key with their right index finger if the person performing the action was male. A central fixation cross was displayed at the center of a gray screen; after each key press the cross was changed to an ‘x’ to inform the subject that a response was received. This sign remained on the screen during part of the interstimulus interval (ISI). ISIs were pseudorandomly distributed between 2–10 s; the subject’s reaction time on each trial was considered part of the ISI for that trial. If the reaction time was longer than the predetermined ISI for a given trial, the next stimulus was presented immediately following the subject’s key press. One second prior to the stimulus trial the ‘x’ was changed back to a cross to prepare the subject for listening to the next stimulus.
Prior to the experiment subjects were given a short practice session containing 24 trials (8 trials per condition) to explain the instructions and ensure they were performing the task correctly. None of these sentences appeared in the actual experiment.
Data preprocessing
A flow chart outlining data preprocessing is shown in Supplementary Figure 3. Raw detector data (sampling rate: 10 Hz) were decoded to source-detector pair data, and converted to log-ratio. The data were then bandpass filtered (0.02–0.5 Hz) to remove low-frequency trends and pulse artifacts. We averaged all signals from the first nearest neighbor channels to create a measure of superficial hemodynamics; we used linear regression to remove this nuisance signal from all channels. To create a realistic forward light model, we used subject-specific T1- and T2-weighted structural MRI images. After bias field correction using SPM8 software (Wellcome Trust Centre for Neuroimaging, London, UK), we used an in-house script to segment an individual head into five different tissue types (scalp/skin, skull, CSF, white matter, and gray matter) (Supplementary Figure 2A). We used the segmented images to create finite element head meshes using NIRview software (version 1.10, http://www.dartmouth.edu/~nir/nirfast/) (Jermyn et al., 2013). The light propagation inside the mesh was modeled using the diffusion approximation and a sensitivity matrix was generated using NIRFAST software (Dehghani et al., 2009a) (Supplementary Figure 2B). The sensitivity matrix was inverted, smoothed with a Gaussian kernel (σ = 2.4 mm), and used to reconstruct absorption coefficient changes for each wavelength (Eggebrecht et al., 2012). Relative changes in the concentrations of oxygenated hemoglobin (ΔHbO), deoxygenated hemoglobin (ΔHbR), and total hemoglobin (ΔHbT) were obtained from the absorption coefficient changes by the spectral decomposition of the extinction coefficients of HbO and HbR at the two wavelengths. Additionally, data were downsampled to 1 Hz. For group analysis, we registered all data to the Montreal Neurological Institute (MNI) 152 atlas using in-house developed linear affine transformation code and concurrently resampled data to a voxel size of 3 × 3 × 3 mm following Eggebrecht et al. (2014). Due to the cap fitting on a variety of head sizes and shapes, the field of view (FOV) measured within each subject varied across the group. For the current study, we included only voxels sampled with acceptable sensitivity in all subjects in the group FOV (displayed in white in Figure 1B). To find these voxels, we calculated a flat field reconstruction (Dehghani et al., 2009b) and considered voxels with a reconstructed value within two orders of magnitude of the maximum value to have acceptable sensitivity. The group FOV contains approximately 700 cm3 of head volume, covering occipital and parts of parietal, temporal, motor and frontal cortices, and spans up to 2 cm into the brain tissue.
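The superficial-signal regression step can be illustrated with a short sketch. This is a simplified stand-in for the pipeline described above; the function name and data layout are assumptions, and the actual pipeline operates per wavelength on log-ratio data.

```python
import numpy as np

def regress_superficial(channels, nn1_mask):
    """Remove the mean first-nearest-neighbor (superficial) signal from every
    channel by linear regression. `channels` is time x channel; `nn1_mask`
    flags the short source-detector separations dominated by scalp signal."""
    superficial = channels[:, nn1_mask].mean(axis=1, keepdims=True)
    X = np.hstack([superficial, np.ones_like(superficial)])  # regressor + intercept
    beta, *_ = np.linalg.lstsq(X, channels, rcond=None)      # fit each channel
    return channels - X @ beta                               # keep the residual
```

Because the fit is ordinary least squares, the residual of every channel is exactly orthogonal to the superficial regressor, which is the desired nuisance-removal property.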
For the cortical surface representation of results, we mapped volumetric results onto the mid-thickness surface of MNI152 atlas extracted using FreeSurfer software (version 5.1.0, Martinos Center for Biomedical Imaging, Massachusetts General Hospital) (Dale et al., 1999). Volumetric activations are overlaid on the T1 images of the MNI152 atlas.
Timeseries analysis
We used custom HD-DOT SPM code for statistical analyses (Hassanpour et al., 2014), outlined in Supplementary Figure 4. Five conditions were included in the general linear model (GLM) design matrix: subject-relative sentences, object-relative sentences, noise trials, and left and right button presses. All trials were included, regardless of behavioral accuracy. Auditory stimuli were modeled as events with 2 s duration and button presses as events with 0 s duration. Events were convolved with a canonical hemodynamic response function (HRF) to model hemodynamic responses to the predicted neural activity. We constructed the canonical HRF using a double-gamma function matched to the general properties of the hemodynamic response in primary auditory cortex averaged over all data (e.g., delay time of 2 s, time to peak of 7 s and undershoot at 17 s).
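A double-gamma HRF with roughly the properties quoted above (delay ~2 s, peak ~7 s, undershoot ~17 s) can be sketched as below. The gamma shape parameters and the undershoot ratio are assumptions for illustration, not the authors' fitted values.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(fs=10.0, duration=30.0, delay=2.0, peak=7.0,
                     undershoot=17.0, ratio=6.0):
    """Canonical HRF: a positive gamma peaking near `peak` s minus a smaller
    gamma for the undershoot near `undershoot` s, shifted by `delay` s."""
    t = np.arange(0, duration, 1.0 / fs)
    ts = np.clip(t - delay, 0.0, None)  # response is zero before the delay
    # Gamma (scale 1) has its mode at shape-1, so shape = target - delay + 1
    h = gamma.pdf(ts, peak - delay + 1) - gamma.pdf(ts, undershoot - delay + 1) / ratio
    return h / np.max(np.abs(h))        # normalize to unit peak

def design_column(onsets_s, dur_s, n_samples, fs, hrf):
    """Boxcar events convolved with the HRF to predict hemodynamic responses."""
    box = np.zeros(n_samples)
    for o in onsets_s:
        box[int(o * fs):int((o + dur_s) * fs)] = 1.0
    return np.convolve(box, hrf)[:n_samples]
```

Convolving each condition's event train with this kernel yields the predicted-response columns of the GLM design matrix.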
For each subject, we combined data for all four runs using a fixed effects analysis and generated linear contrast maps. We then assessed group-level activity using random effects analyses of these contrast maps and calculated statistical z-value maps for each contrast. We calculated voxelwise degrees of freedom and spatial smoothness from estimates of temporal and spatial autocorrelation structures of GLM residuals, respectively. Unless otherwise specified, all statistical maps are thresholded at p<0.001 (voxelwise, uncorrected) and corrected for multiple comparisons using a nonstationary cluster analysis technique at p < 0.05 (Hassanpour et al., 2014; Hayasaka et al., 2004; Worsley et al., 1998).
In the main text we focus on maps of ΔHbO as we have found ΔHbO signal to exhibit a higher contrast-to-noise ratio compared to ΔHbR or ΔHbT (Eggebrecht et al., 2014; Hassanpour et al., 2014). Results from other hemoglobin contrasts are reported in Supplementary Figures 5–7 and are generally consistent with ΔHbO results.
Finally, we also estimated the temporal profile of hemodynamic activity for each stimulus type using the GLM un-mixing method (also known as a finite impulse response, or FIR model) (Glover, 1999; Hassanpour et al., 2014; Miezin et al., 2000). This procedure allowed us to evaluate the timecourse of the evoked responses to ensure they were physiologically plausible.
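In a FIR (GLM un-mixing) model, each post-stimulus lag gets its own delta regressor, so the response timecourse is estimated from the data rather than assumed. A minimal single-condition sketch (the function name and interface are ours):

```python
import numpy as np

def fir_estimate(y, onset_idx, n_lags):
    """Estimate the evoked response at each of `n_lags` post-stimulus samples
    by least squares, with no assumed HRF shape."""
    n = len(y)
    X = np.zeros((n, n_lags + 1))
    for i in onset_idx:               # one column per post-stimulus lag
        for lag in range(n_lags):
            if i + lag < n:
                X[i + lag, lag] = 1.0
    X[:, -1] = 1.0                    # intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[:-1]                  # estimated timecourse, one value per lag
```

When events are spaced far enough apart that responses do not overlap, the estimated lags reproduce the underlying response exactly; with overlap, least squares un-mixes the superimposed responses.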
Results
Behavioral data
We collected behavioral measures including accuracy (percentage of correct responses) and response time (measured from stimulus onset to key press). The mean accuracy for the subject-relative (easy) sentences was 97.7% (STD = 3.13) and for the object-relative (complex) sentences was 97.6% (STD = 3.29); accuracy was equivalent between these two conditions, t(78) = 0.13. The mean response time for the subject-relative sentences was 2.0 s (STD = 0.36) and for the object-relative sentences 2.1 s (STD = 0.37), which also did not significantly differ, t(78) = 0.37.
Hierarchical processing of spoken language
Hearing unintelligible vocoded speech (“noise”) caused an increase in oxy-hemoglobin (HbO) concentration in parts of the temporal cortex bilaterally (Figure 2A). Beyond these regions, hearing intelligible sentences resulted in widespread activity in bilateral temporal cortex along with additional activity in the left frontal cortex (Figure 2B and 2C). Similar results were obtained from other hemoglobin contrasts, as shown in Supplementary Figure 5.
Figure 2.
Oxy-hemoglobin increase in response to (A) noise, (B) subject-relative sentences and (C) object-relative sentences. Individual data were spatially normalized to MNI152 space and group averaged. The volumetric activations are overlaid on T1 images of MNI152 atlas, and are shown in parasagittal, coronal and axial views (x = −49, y = −16 and z = −3). Images are thresholded at 0.18 μMol.
We next statistically compared the responses to each sentence type with responses to noise, shown in Figure 3A and 3B, with maxima listed in Supplementary Tables 1 and 2. Comparing activity in response to sentences with activity in response to noise helps differentiate higher-level speech processing regions from general auditory processing regions. Compared to noise, both subject-relative and object-relative sentences led to significant increases in activation in large portions of the left hemisphere including frontal cortex, lateral superior and middle temporal cortex, and ventral premotor cortex. In addition, we found a significant response to both types of sentences relative to noise in anterior parts of the right superior and middle temporal cortex.
Figure 3.
Cortical processing for spoken language. (A) Differential activation by subject-relative sentences > noise highlights brain areas involved in intelligible speech processing. (B) A similar map was obtained for object-relative sentences > noise. (C) Directly comparing object-relative sentences to subject-relative sentences shows the effect of syntactic complexity. z-maps are thresholded at voxelwise p<0.001 (z = 3.1) and (corrected) cluster significance threshold of p<0.05. These are results obtained from the oxy-hemoglobin signal.
We then identified regions showing more activity for the complex object-relative sentences than the easier subject-relative sentences, shown in Figure 3C and listed in Supplementary Table 3. We found that the increased processing load resulted in a significant increase in the response in bilateral ventral parts of posterior prefrontal cortex, left lateral middle and superior temporal gyri (MTG and STG) and posterior parts of bilateral temporal cortex.
Figure 4 illustrates the hierarchical organization of the speech network by overlaying maps for speech intelligibility (all sentences > noise) and syntactic complexity (object-relative > subject-relative sentences) on the cortical surface. The overlap between these two contrasts (shown in white) includes left lateral superior temporal gyrus and ventral inferior frontal gyrus, regions that have been previously associated with processing sentences containing center-embedded relative clauses (Caplan et al., 2008; Friederici et al., 2003; Peelle et al., 2010b; Stromswold et al., 1996). Regions in yellow, including posterior parts of temporal cortex and a more anterior region of ventral prefrontal cortex, are not recruited for the easier intelligible speech, but increase activity when the processing load increases. These two patterns highlight a “core” speech processing network that is active for more basic auditory sentence processing (Davis and Johnsrude, 2003; Peelle et al., 2010a; Rauschecker and Scott, 2009), and an expanded associative network that is differentially engaged as linguistic demands increase (Peelle, 2012; Wingfield and Grossman, 2006).
Figure 4.
Hierarchy in speech processing. Regions with a significant activity increase in response to syntactic complexity (yellow) are overlaid on regions with a significant activity increase in response to intelligibility (orange). Regions that overlap (white) are recruited by both types of sentences (easy and complex), but are recruited to a greater degree by syntactically complex sentences, implying a role in syntax processing. Yellow regions are recruited only to support the increased syntactic processing load.
To examine the temporal profile of subjects’ responses we extracted the hemodynamic response to sentences in several regions of interest (ROIs), listed in Table 1 and shown in Figure 5. Each ROI was defined as a cube 3 voxels per side centered on the peak voxel of a cluster that passed the significance threshold in either the noise > baseline or the sentences > noise contrast. Results show that the hemodynamic response starts with a delay (up to 3 seconds) and peaks approximately 4–8 seconds after the stimulus (e.g., Figure 5A and 5D). These responses serve as a quality control check and verify that our localized responses are consistent with evoked hemodynamic activity (Aguirre et al., 1998).
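The ROI extraction described above amounts to averaging the timecourse over a 3 × 3 × 3 voxel cube around each peak. A minimal sketch (the array layout, time first then three spatial axes, is an assumption):

```python
import numpy as np

def roi_timecourse(volume_ts, peak_ijk, half_width=1):
    """Mean timecourse over a cube (2*half_width + 1 voxels per side, i.e.
    3x3x3 for half_width=1) centered on the peak voxel.
    `volume_ts` has shape (time, x, y, z); `peak_ijk` is the peak voxel index."""
    i, j, k = peak_ijk
    cube = volume_ts[:,
                     i - half_width:i + half_width + 1,
                     j - half_width:j + half_width + 1,
                     k - half_width:k + half_width + 1]
    return cube.mean(axis=(1, 2, 3))
```

Averaging over the cube rather than reading a single voxel trades a little spatial specificity for a less noisy timecourse estimate.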
Table 1.
Coordinates of the centers of regions of interest
| Region | x (mm) | y (mm) | z (mm) | Z (noise) | Z (speech) |
|---|---|---|---|---|---|
| **Left:** | | | | | |
| Lateral middle temporal cortex a,b | −61.5 | −21 | 0 | 3.81 | 3.80 |
| Lateral superior temporal cortex a,b | −61.5 | −30 | −9 | 3.81 | 3.13 |
| Anterior superior temporal cortex b | −61.5 | 6 | −9 | 2.16 | 3.21 |
| Middle prefrontal cortex b | −52.5 | 14 | 23 | 1.82 | 4.18 |
| Dorsal prefrontal cortex b | −46.5 | 28 | 30 | 0.94 | 4.30 |
| Ventral prefrontal cortex b | −43.5 | 39 | −6 | −0.72 | 3.46 |
| **Right:** | | | | | |
| Anterior superior temporal cortex b | 55.5 | 6 | −9 | 2.55 | 4.20 |
| Ventral prefrontal cortex b | 52.5 | 27 | −9 | −0.70 | 3.12 |
a: ROI is within a cluster from the noise > baseline comparison
b: ROI is within a cluster from the sentences > noise comparison
Figure 5.
Temporal profile of hemodynamic response: (A-H) show oxy-hemoglobin changes in response to subject-relative (blue) and object-relative (green) sentences at different ROIs. Shaded areas show standard errors.
Finally, to ascertain how reliably we were able to detect hierarchical responses to speech comprehension in individual subjects and individual runs of data, we created single-subject and single-run renderings of the main contrasts (sentences > noise and complex > easy). The single-subject maps of the response to intelligibility were highly consistent across subjects (Supplementary Figure 8A). Although the contrast-to-noise ratio drops substantially at the single-run level, single-run maps qualitatively show the same common features (Supplementary Figure 8B).
Discussion
The goal of the present study was to evaluate the use of HD-DOT for imaging speech comprehension. We assessed the ability of HD-DOT to measure brain activity during multiple levels of speech processing by presenting listeners with spoken sentences that varied in intelligibility and linguistic complexity. Overall, our results highlight a network of neural areas that are described in the literature for supporting speech comprehension, and map the hierarchical organization of spoken language processing. Below we assess our results in more detail and discuss advantages and drawbacks of using HD-DOT in neurocognitive studies.
Cortical responses to intelligibility and syntactic complexity
Our results show a distributed network of cortical activity associated with the processing of spoken language. This network includes regions located in the temporal, parietal, and frontal cortices of both hemispheres. However, speech-related processes activated up to six times more cortical volume in the left hemisphere than in the right. The largest significant cluster spans several left frontotemporal sub-regions, classical language processing centers. While both ventral and dorsal parts of the left hemisphere are involved in speech comprehension, right hemisphere activity is mainly limited to temporal cortex. These results are largely consistent with previous fMRI and PET studies (Davis and Johnsrude, 2003; Hickok and Poeppel, 2007; Osnes et al., 2011; Pulvermuller et al., 2006; Tyler and Marslen-Wilson, 2008; Wilson et al., 2008).
Sentences with object-relative constructions are more complex than those with subject-relative clauses due to both memory and linguistic integration costs (Cooke et al., 2002; Fiebach et al., 2001). Although the behavioral performance of participants is sometimes poorer for object-relative sentences (Caplan et al., 2008; Wingfield et al., 2003), in our study differential processing was revealed only in the imaging results; one potential reason is the age of our subjects, as younger adults are less impacted by syntactic complexity. During successful comprehension of object-relative sentences, we found brain regions showing significantly stronger activity that overlapped the core speech processing network, as well as complementary regions not seen in response to the simpler subject-relative sentences. Overall, HD-DOT revealed an enlargement of the speech processing network when processing load increased.
We designed this study to focus on group-level results, and therefore acquired approximately 30 minutes of data per subject. Nonetheless, individual maps of the main effect of intelligibility were consistent across subjects: all subjects showed a significant increase in the activity of the left temporal and prefrontal cortices (the core speech processing centers) during intelligible speech processing compared to listening to unintelligible noise. In our previous study we showed that HD-DOT performance at the single-subject level is statistically comparable to fMRI (Eggebrecht et al., 2014). Further studies with a larger number of samples per subject could shed more light on whether the inter-subject variance is due to (a) a low number of samples per subject for detecting the subtle effect of a word-order change or (b) individual differences in the brain regions recruited for processing syntactically complex speech.
Using HD-DOT to measure neural responses to speech
Although the language subsystems of human brain have been studied for many years using both lesion studies and functional neuroimaging, HD-DOT has notable advantages compared to other methods. HD-DOT provides non-invasive, radiation-free and metal-compatible tomographic imaging of cortical hemodynamic activity in a noise-free environment with relatively good spatial and temporal resolution. These features provide an advantage in many settings including studies of cognitive development in children, studies of noise-degraded speech, and studies of subjects with implanted metal devices such as cochlear implants.
In particular, the quietness of the HD-DOT system is an important advantage compared to fMRI in speech studies, as the acoustic noise associated with fMRI scanning can interfere with normal auditory language processing in numerous ways. Loud sounds can affect the hearing threshold by causing a stapedius muscle reflex (Olsen, 1999; Ulmer et al., 1998), and affect the perception of stimuli by acoustic-spectral masking (Shah et al., 1999). At the physiological level, acoustic noise can saturate neuronal populations in the auditory cortex (Bandettini et al., 1998; Gaab et al., 2007). At a cognitive level, speech comprehension in the midst of scanner noise may require additional functional responses in extra-auditory frontal areas (Peelle, 2014; Schmidt et al., 2008; Skouras et al., 2013), making it difficult to separate executive processes required for language processing from those required to deal with the background noise. A common approach for reducing acoustic contamination in auditory fMRI studies is to use sparse imaging, which allows the presentation of stimuli in quiet by collecting fewer MRI volumes (Hall et al., 1999). However, sparse imaging reduces temporal resolution and thus the ability to efficiently detect the shape of hemodynamic responses. Thus, HD-DOT provides an acoustically superior alternative to fMRI for auditory neuroscience.
One potential drawback of using HD-DOT to assess language processing is that, unlike fMRI, HD-DOT imaging is limited to superficial cortex (i.e., ~ 1–2 cm into the brain) and cannot access deep cortical structures (e.g., insula or operculum) or subcortical brain structures (e.g., striatum or thalamus). Fortunately, a number of regions critical for speech processing are relatively near to the cortical surface, including large portions of frontal and temporal cortex frequently highlighted in anatomically-constrained models of language processing. A related limitation of the current study is that the HD-DOT system we used, while having a relatively large field of view, does not provide full head coverage—a limitation shared by all existing DOT systems. For the purposes of establishing the regional sensitivity to syntax processing the current system was sufficient. However, for more comprehensive mapping of the neural response to speech, particularly in frontal cortex, an HD-DOT system with greater coverage will be required. As with any modality, a full picture of network function can only be obtained through the use of converging evidence from multiple techniques.
Conclusion
Our results demonstrate the feasibility of imaging hierarchical cognitive processes during speech comprehension with HD-DOT. Our findings are in general agreement with previous fMRI studies in demonstrating that the increased processing demand of syntactically complex sentences results in greater activation in left temporal and prefrontal cortex. With the advantages of acoustic quietness, the ability to image subjects with electronic implants, accurate anatomical localization, and relatively high spatial resolution, HD-DOT is well suited for studying cortical responses to spoken language.
Supplementary Material
Highlights
We evaluated HD-DOT in imaging the hierarchical organization of speech processing.
HD-DOT identified both a core speech network and an expanded associative speech network.
The associative speech network supports processing under increased linguistic demands.
Results demonstrate the feasibility of imaging higher-level cognition using HD-DOT.
Acknowledgments
This work was supported in part by NIH grants R01EB009223 (J.P.C.), R01NS090874 (J.P.C.), R01NS046424 (S.E.P.), and R01AG038490 (J.E.P.); the McDonnell Center for Systems Neuroscience; the Dana Foundation; the McDonnell Foundation; and an Autism Speaks Postdoctoral Translational Research Fellowship 7962 (A.T.E.). The funding sources had no involvement in the study design; the collection, analysis, or interpretation of the data; the writing of the paper; or the decision to submit the paper for publication. J.P.C. and Washington University have financial interests in Cephalogics LLC based on a license of related optical imaging technology by the University to Cephalogics LLC.
Abbreviations
- FOV
Field of view
- PET
Positron emission tomography
- MRI
Magnetic resonance imaging
- HD-DOT
High-density diffuse optical tomography
- EEG
Electroencephalography
- MEG
Magnetoencephalography
- NIRS
Near infrared spectroscopy
- SR
Subject-relative
- OR
Object-relative
- LCD
Liquid-crystal display
- ISI
Interstimulus interval
- HbO
Oxygenated hemoglobin
- HbR
Deoxygenated hemoglobin
- HbT
Total hemoglobin
- GLM
General linear model
- HRF
Hemodynamic response function
- ROIs
Regions of interest
- STD
Standard deviation
- SD
Source-detector
- MFG
Middle frontal gyrus
- lIFG
Left inferior frontal gyrus
- STG
Superior temporal gyrus
- MTG
Middle temporal gyrus
- aSTG
Anterior superior temporal gyrus
- vPFC
Ventral prefrontal cortex
- dPFC
Dorsal prefrontal cortex
- MPFC
Middle prefrontal cortex
- STC
Superior temporal cortex
- MTC
Middle temporal cortex
- aSTC
Anterior superior temporal cortex
Footnotes
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
References
- Aguirre GK, Zarahn E, D’Esposito M. The variability of human, BOLD hemodynamic responses. Neuroimage. 1998;8:360–369. doi: 10.1006/nimg.1998.0369. [DOI] [PubMed] [Google Scholar]
- Bandettini PA, Jesmanowicz A, Van Kylen J, Birn RM, Hyde JS. Functional MRI of brain activation induced by scanner acoustic noise. Magnetic Resonance in Medicine. 1998;39:410–416. doi: 10.1002/mrm.1910390311. [DOI] [PubMed] [Google Scholar]
- Baumgartner C. Controversies in clinical neurophysiology. MEG is superior to EEG in the localization of interictal epileptiform activity: Con. Clinical Neurophysiology. 2004;115:1010–1020. doi: 10.1016/j.clinph.2003.12.010. [DOI] [PubMed] [Google Scholar]
- Bornkessel I, Zysset S, Friederici AD, von Cramon DY, Schlesewsky M. Who did what to whom? The neural basis of argument hierarchies during language comprehension. Neuroimage. 2005;26:221–233. doi: 10.1016/j.neuroimage.2005.01.032. [DOI] [PubMed] [Google Scholar]
- Brainard DH. The psychophysics toolbox. Spatial Vision. 1997;10:433–436. [PubMed] [Google Scholar]
- Cabeza R, Nyberg L. Imaging cognition: An empirical review of PET studies with normal subjects. J Cogn Neurosci. 1997;9:1–26. doi: 10.1162/jocn.1997.9.1.1. [DOI] [PubMed] [Google Scholar]
- Caldwell A, Nittrouer S. Speech perception in noise by children with cochlear implants. Journal of speech, language, and hearing research: JSLHR. 2013;56:13–30. doi: 10.1044/1092-4388(2012/11-0338). [DOI] [PMC free article] [PubMed] [Google Scholar]
- Caplan D, Stanczak L, Waters G. Syntactic and Thematic Constraint Effects on Blood Oxygenation Level Dependent Signal Correlates of Comprehension of Relative Clauses. J Cogn Neurosci. 2008;20:643–656. doi: 10.1162/jocn.2008.20044. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cooke A, Zurif EB, DeVita C, Alsop D, Koenig P, Detre J, Gee J, Pinango M, Balogh J, Grossman M. Neural basis for sentence comprehension: Grammatical and short-term memory components. Human Brain Mapping. 2002;15:80–94. doi: 10.1002/hbm.10006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Custo A, Boas DA, Tsuzuki D, Dan I, Mesquita R, Fischl B, Grimson WEL, Wells W. Anatomical atlas-guided diffuse optical tomography of brain activation. Neuroimage. 2010;49:561–567. doi: 10.1016/j.neuroimage.2009.07.033. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dale AM, Fischl B, Sereno MI. Cortical surface-based analysis - I. Segmentation and surface reconstruction. Neuroimage. 1999;9:179–194. doi: 10.1006/nimg.1998.0395. [DOI] [PubMed] [Google Scholar]
- Davis MH, Johnsrude IS. Hierarchical processing in spoken language comprehension. Journal of Neuroscience. 2003;23:3423–3431. doi: 10.1523/JNEUROSCI.23-08-03423.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dehghani H, Eames ME, Yalavarthy PK, Davis SC, Srinivasan S, Carpenter CM, Pogue BW, Paulsen KD. Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction. Communications in Numerical Methods in Engineering. 2009a;25:711–732. doi: 10.1002/cnm.1162. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dehghani H, White BR, Zeff BW, Tizzard A, Culver JP. Depth sensitivity and image reconstruction analysis of dense imaging arrays for mapping brain function with diffuse optical tomography. Applied Optics. 2009b;48:D137–D143. doi: 10.1364/ao.48.00d137. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eggebrecht AT, Ferradal SL, Robichaux-Viehoever A, Hassanpour MS, Dehghani H, Snyder AZ, Hershey T, Culver JP. Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics. 2014;8 doi: 10.1038/nphoton.2014.107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eggebrecht AT, White BR, Ferradal SL, Chen C, Zhan Y, Snyder AZ, Dehghani H, Culver JP. A quantitative spatial comparison of high-density diffuse optical tomography and fMRI cortical mapping. Neuroimage. 2012 doi: 10.1016/j.neuroimage.2012.01.124. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ferradal SL, Eggebrecht AT, Hassanpour M, Snyder AZ, Culver JP. Atlas-based head modeling and spatial normalization for high-density diffuse optical tomography: In vivo validation against fMRI. Neuroimage. 2014;85:117–126. doi: 10.1016/j.neuroimage.2013.03.069. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fiebach CJ, Schlesewsky M, Friederici AD. Syntactic working memory and the establishment of filler-gap dependencies: Insights from ERPs and fMRI. J Psycholinguist Res. 2001;30:321–338. doi: 10.1023/a:1010447102554. [DOI] [PubMed] [Google Scholar]
- Foster JR, Hall DA, Summerfield AQ, Palmer AR, Bowtell RW. Sound-level measurements and calculations of safe noise dosage during EPI at 3 T. Journal of Magnetic Resonance Imaging. 2000;12:157–163. doi: 10.1002/1522-2586(200007)12:1<157::aid-jmri17>3.0.co;2-m. [DOI] [PubMed] [Google Scholar]
- Friederici AD, Ruschemeyer SA, Hahne A, Fiebach CJ. The role of left inferior frontal and superior temporal cortex in sentence comprehension: Localizing syntactic and semantic processes. Cerebral Cortex. 2003;13:170–177. doi: 10.1093/cercor/13.2.170. [DOI] [PubMed] [Google Scholar]
- Gaab N, Gabrieli JDE, Glover GH. Assessing the influence of scanner background noise on auditory processing. II. An fMRI study comparing auditory processing in the absence and presence of recorded scanner noise using a sparse design. Human Brain Mapping. 2007;28:721–732. doi: 10.1002/hbm.20299. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gibson E. Linguistic complexity: locality of syntactic dependencies. Cognition. 1998;68:1–76. doi: 10.1016/s0010-0277(98)00034-1. [DOI] [PubMed] [Google Scholar]
- Glover GH. Deconvolution of impulse response in event-related BOLD fMRI. Neuroimage. 1999;9:416–429. doi: 10.1006/nimg.1998.0419. [DOI] [PubMed] [Google Scholar]
- Gregg NM, White BR, Zeff BW, Berger AJ, Culver JP. Brain specificity of diffuse optical imaging: improvements from superficial signal regression and tomography. Front Neuroenergetics. 2010;2 doi: 10.3389/fnene.2010.00014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Griffiths JD, Marslen-Wilson WD, Stamatakis EA, Tyler LK. Functional Organization of the Neural Language System: Dorsal and Ventral Pathways Are Critical for Syntax. Cerebral Cortex. 2013;23:139–147. doi: 10.1093/cercor/bhr386. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grimault N, Micheyl C, Carlyon RP, Arthaud P, Collet L. Perceptual auditory stream segregation of sequences of complex sounds in subjects with normal and impaired hearing. British journal of audiology. 2001;35:173–182. doi: 10.1080/00305364.2001.11745235. [DOI] [PubMed] [Google Scholar]
- Hall DA, Haggard MP, Akeroyd MA, Palmer AR, Summerfield AQ, Elliott MR, Gurney EM, Bowtell RW. “Sparse” temporal sampling in auditory fMRI. Human Brain Mapping. 1999;7:213–223. doi: 10.1002/(SICI)1097-0193(1999)7:3<213::AID-HBM5>3.0.CO;2-N. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hassanpour MS, White BR, Eggebrecht AT, Ferradal SL, Snyder AZ, Culver JP. Statistical analysis of high density diffuse optical tomography. Neuroimage. 2014;85(Pt 1):104–116. doi: 10.1016/j.neuroimage.2013.05.105. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hayasaka S, Phan KL, Liberzon I, Worsley KJ, Nichols TE. Nonstationary cluster-size inference with random field and permutation methods. Neuroimage. 2004;22:676–687. doi: 10.1016/j.neuroimage.2004.01.041. [DOI] [PubMed] [Google Scholar]
- He B. Brain electric source imaging: scalp Laplacian mapping and cortical imaging. Critical reviews in biomedical engineering. 1999;27:149–188. [PubMed] [Google Scholar]
- Hickok G, Poeppel D. Opinion - The cortical organization of speech processing. Nature Reviews Neuroscience. 2007;8:393–402. doi: 10.1038/nrn2113. [DOI] [PubMed] [Google Scholar]
- Jermyn M, Ghadyani H, Mastanduno MA, Turner W, Davis SC, Dehghani H, Pogue BW. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography. J Biomed Opt. 2013;18:86007. doi: 10.1117/1.JBO.18.8.086007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Joseph DK, Huppert TJ, Franceschini MA, Boas DA. Diffuse optical tomography system to image brain activation with improved spatial resolution and validation with functional magnetic resonance imaging. Applied Optics. 2006;45:8142–8151. doi: 10.1364/ao.45.008142. [DOI] [PubMed] [Google Scholar]
- Koch SP, Habermehl C, Mehnert J, Schmitz CH, Holtze S, Villringer A, Steinbrink J, Obrig H. High-resolution optical functional mapping of the human somatosensory cortex. Front Neuroenergetics. 2010;2:12. doi: 10.3389/fnene.2010.00012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McJury M, Shellock FG. Auditory noise associated with MR procedures: a review. J Magn Reson Imaging. 2000;12:37–45. doi: 10.1002/1522-2586(200007)12:1<37::aid-jmri5>3.0.co;2-i. [DOI] [PubMed] [Google Scholar]
- Miezin FM, Maccotta L, Ollinger JM, Petersen SE, Buckner RL. Characterizing the hemodynamic response: Effects of presentation rate, sampling procedure, and the possibility of ordering brain activity based on relative timing. Neuroimage. 2000;11:735–759. doi: 10.1006/nimg.2000.0568. [DOI] [PubMed] [Google Scholar]
- Moelker A, Pattynama PM. Acoustic noise concerns in functional magnetic resonance imaging. Human Brain Mapping. 2003;20:123–141. doi: 10.1002/hbm.10134. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Okamoto M, Dan I. Automated cortical projection of head-surface locations for transcranial functional brain mapping. Neuroimage. 2005;26:18–28. doi: 10.1016/j.neuroimage.2005.01.018. [DOI] [PubMed] [Google Scholar]
- Olsen SO. The relationship between the uncomfortable loudness level and the acoustic reflex threshold for pure tones in normally-hearing and impaired listeners - A meta-analysis. Audiology. 1999;38:61–68. doi: 10.3109/00206099909073004. [DOI] [PubMed] [Google Scholar]
- Osnes B, Hugdahl K, Specht K. Effective connectivity analysis demonstrates involvement of premotor cortex during speech perception. Neuroimage. 2011;54:2437–2445. doi: 10.1016/j.neuroimage.2010.09.078. [DOI] [PubMed] [Google Scholar]
- Peelle JE. The hemispheric lateralization of speech processing depends on what “speech” is: a hierarchical perspective. Front Hum Neurosci. 2012;6 doi: 10.3389/fnhum.2012.00309. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelle JE. Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in neuroscience. 2014;8:253. doi: 10.3389/fnins.2014.00253. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelle JE, Johnsrude IS, Davis MH. Hierarchical processing for speech in human auditory cortex and beyond. Front Hum Neurosci. 2010a;4 doi: 10.3389/fnhum.2010.00051. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelle JE, McMillan C, Moore P, Grossman M, Wingfield A. Dissociable patterns of brain activity during comprehension of rapid and syntactically complex speech: Evidence from fMRI. Brain Lang. 2004;91:315–325. doi: 10.1016/j.bandl.2004.05.007. [DOI] [PubMed] [Google Scholar]
- Peelle JE, Troiani V, Grossman M, Wingfield A. Hearing Loss in Older Adults Affects Neural Systems Supporting Speech Comprehension. Journal of Neuroscience. 2011;31:12638–12643. doi: 10.1523/JNEUROSCI.2559-11.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelle JE, Troiani V, Wingfield A, Grossman M. Neural Processing during Older Adults’ Comprehension of Spoken Sentences: Age Differences in Resource Allocation and Connectivity. Cerebral Cortex. 2010b;20:773–782. doi: 10.1093/cercor/bhp142. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage. 2012;62:816–847. doi: 10.1016/j.neuroimage.2012.04.062. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Price DL, De Wilde JP, Papadaki AM, Curran JS, Kitney RI. Investigation of acoustic noise on 15 MRI scanners from 0.2 T to 3 T. Journal of Magnetic Resonance Imaging. 2001;13:288–293. doi: 10.1002/1522-2586(200102)13:2<288::aid-jmri1041>3.0.co;2-p. [DOI] [PubMed] [Google Scholar]
- Pulvermuller F, Huss M, Kherif F, Martin FMDP, Hauk O, Shtyrov Y. Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences of the United States of America. 2006;103:7865–7870. doi: 10.1073/pnas.0509989103. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rauschecker JP, Scott SK. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci. 2009;12:718–724. doi: 10.1038/nn.2331. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ravicz ME, Melcher JR, Kiang NYS. Acoustic noise during functional magnetic resonance imaging. Journal of the Acoustical Society of America. 2000;108:1683–1696. doi: 10.1121/1.1310190. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saager R, Berger A. Measurement of layer-like hemodynamic trends in scalp and cortex: implications for physiological baseline suppression in functional near-infrared spectroscopy. J Biomed Opt. 2008;13:034017. doi: 10.1117/1.2940587. [DOI] [PubMed] [Google Scholar]
- Schmidt CF, Zaehle T, Meyer M, Geiser E, Boesiger P, Jancke L. Silent and continuous fMRI scanning differentially modulate activation in an auditory language comprehension task. Human Brain Mapping. 2008;29:46–56. doi: 10.1002/hbm.20372. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shah NJ, Jancke L, Grosse-Ruyken ML, Muller-Gartner HW. Influence of acoustic masking noise in fMRI of the auditory cortex during phonetic discrimination. J Magn Reson Imaging. 1999;9:19–25. doi: 10.1002/(sici)1522-2586(199901)9:1<19::aid-jmri3>3.0.co;2-k. [DOI] [PubMed] [Google Scholar]
- Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M. Speech Recognition with Primarily Temporal Cues. Science. 1995;270:303–304. doi: 10.1126/science.270.5234.303. [DOI] [PubMed] [Google Scholar]
- Skouras S, Gray M, Critchley H, Koelsch S. fMRI Scanner Noise Interaction with Affective Neural Processes. PLoS One. 2013;8 doi: 10.1371/journal.pone.0080564. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Stromswold K, Caplan D, Alpert N, Rauch S. Localization of syntactic comprehension by positron emission tomography. Brain Lang. 1996;52:452–473. doi: 10.1006/brln.1996.0024. [DOI] [PubMed] [Google Scholar]
- Traxler MJ, Morris RK, Seely RE. Processing subject and object relative clauses: Evidence from eye movements. Journal of Memory and Language. 2002;47:69–90. [Google Scholar]
- Tyler LK, Marslen-Wilson W. Fronto-temporal brain systems supporting spoken language comprehension. Philosophical Transactions of the Royal Society B-Biological Sciences. 2008;363:1037–1054. doi: 10.1098/rstb.2007.2158. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tyler LK, Wright P, Randall B, Marslen-Wilson WD, Stamatakis EA. Reorganization of syntactic processing following left-hemisphere brain damage: does right-hemisphere activity preserve function? Brain. 2010;133:3396–3408. doi: 10.1093/brain/awq262. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ulmer JL, Biswal BB, Mark LP, Mathews VP, Prost RW, Millen SJ, Garman JN, Horzewski D. Acoustic echoplanar scanner noise and pure tone hearing thresholds: the effects of sequence repetition times and acoustic noise rates. J Comput Assist Tomogr. 1998;22:480–486. doi: 10.1097/00004728-199805000-00022. [DOI] [PubMed] [Google Scholar]
- White BR, Culver JP. Quantitative evaluation of high-density diffuse optical tomography: in vivo resolution and mapping performance. J Biomed Opt. 2010;15 doi: 10.1117/1.3368999. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wilson SM, Molnar-Szakacs I, Iacoboni M. Beyond superior temporal cortex: Intersubject correlations in narrative speech comprehension. Cerebral Cortex. 2008;18:230–242. doi: 10.1093/cercor/bhm049. [DOI] [PubMed] [Google Scholar]
- Wingfield A, Grossman M. Language and the aging brain: Patterns of neural compensation revealed by functional brain imaging. J Neurophysiol. 2006;96:2830–2839. doi: 10.1152/jn.00628.2006. [DOI] [PubMed] [Google Scholar]
- Wingfield A, Peelle JE, Grossman M. Speech rate and syntactic complexity as multiplicative factors in speech comprehension by young and older adults. Aging Neuropsychology and Cognition. 2003;10:310–322. [Google Scholar]
- Worsley KJ, Cao J, Paus T, Petrides M, Evans AC. Applications of random field theory to functional connectivity. Human Brain Mapping. 1998;6:364–367. doi: 10.1002/(SICI)1097-0193(1998)6:5/6<364::AID-HBM6>3.0.CO;2-T. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zeff BW, White BR, Dehghani H, Schlaggar BL, Culver JP. Retinotopic mapping of adult human visual cortex with high-density diffuse optical tomography. Proceedings of the National Academy of Sciences of the United States of America. 2007;104:12169–12174. doi: 10.1073/pnas.0611266104. [DOI] [PMC free article] [PubMed] [Google Scholar]