Human Brain Mapping. 2019 Feb 27;40(9):2596–2610. doi: 10.1002/hbm.24547

Disparity level identification using the voxel‐wise Gabor model of fMRI data

Yuan Li 1, Chunping Hou 1, Li Yao 2,3, Chuncheng Zhang 4, Hongna Zheng 3, Jiacai Zhang 3, Zhiying Long 2
PMCID: PMC6865565  PMID: 30811782

Abstract

Perceiving disparities is the intuitive basis for our understanding of the physical world. Although many electrophysiology studies have revealed the disparity‐tuning characteristics of neurons in the visual areas of the macaque brain, neuron population responses to disparity processing have seldom been investigated. Many disparity studies using functional magnetic resonance imaging (fMRI) have revealed disparity‐selective visual areas in the human brain. However, it remains unclear how to characterize neuron population disparity‐tuning responses with the fMRI technique. In the present study, we constructed three voxel‐wise encoding Gabor models to predict voxel responses to novel disparity levels and used a decoding method to identify the new disparity levels from population responses in the cortex. Among the three encoding models, the fine‐coarse model (FCM), which fitted separate Gabor functions to the voxel responses to fine and coarse disparities, outperformed the single model and the uncrossed‐crossed model. Moreover, the FCM demonstrated high accuracy both in predicting voxel responses in V3A complex and in identifying novel disparities from those responses. Our results suggest that the FCM characterizes the voxel responses to disparities better than the other two models and that V3A complex is a critical visual area for representing disparity information.

Keywords: disparity, fMRI, Gabor, identify, voxel‐wise encoding model

1. INTRODUCTION

The horizontal offset of our eyes causes slightly different images of solid objects and surfaces to be generated on the two retinas. These differences, known as retinal disparities, are exploited by our brain to determine the three‐dimensional layout of the environment and lead to a stereoscopic perception of depth (Howard & Rogers, 1995; Julesz, 1971; Wheatstone, 1838). Characterizing the neural response to disparities advances our understanding of the neuronal mechanism underlying stereoscopic depth perception in the human brain.

Many electrophysiology studies have investigated the disparity‐tuning properties of neurons in various cortical areas of the macaque brain. Neurons with disparity‐tuned responses have been found in V1 (Poggio, 1995), V2 (Hubel & Livingstone, 1987), V3 (Adams & Zeki, 2001), V3A (Anzai, Chowdhury, & DeAngelis, 2011), MT (DeAngelis & Newsome, 1999), the parietal cortex (Gnadt & Mays, 1995), and the inferior temporal cortex (Janssen, Vogels, & Orban, 2000). Moreover, previous macaque studies demonstrated that the disparity‐tuning characteristics of neurons in visual areas could be modeled and fitted by Gabor functions (Anzai et al., 2011; DeAngelis & Uka, 2003; Prince, Cumming, & Parker, 2002; Prince, Pointon, Cumming, & Parker, 2002). Although those electrophysiology studies described the responses of individual neurons to disparity processing, they did not reveal neuron population responses to disparity processing (Cottereau, McKee, Ales, & Norcia, 2011).

The functional magnetic resonance imaging (fMRI) technique can be used to characterize the population‐level disparity responses. Several fMRI studies investigated population‐level disparity responses in the human cortex and revealed that V1, V2, V3, V3A, V3B, V7, hMT+/V5 (human motion complex), hV4 (human area V4), lateral occipital cortex, parietal cortex, and premotor cortex were selective for binocular disparities (Backus, Fleet, Parker, & Heeger, 2001; Brouwer, Van Ee, & Schwarzbach, 2005; Georgieva, Peeters, Kolster, Todd, & Orban, 2009; Gilaie‐Dotan, Ullman, Kushnir, & Malach, 2002; Jastorff, Abdollahi, Fasano, & Orban, 2016; Li et al., 2017; Minini, Parker, & Bridge, 2010; Neri, Bridge, & Heeger, 2004; Tsao et al., 2003; Welchman, Deubelius, Conrad, Bülthoff, & Kourtzi, 2005). Moreover, some studies applied multivoxel pattern analysis (MVPA) methods to explore the neural mechanisms that discriminate disparity levels and found that dorsal areas and posterior parietal areas had a higher predictive accuracy for decoding the disparity magnitude or depth sign than ventral areas (Patten & Welchman, 2015; Preston, Li, Kourtzi, & Welchman, 2008).

Although those fMRI studies revealed the neural correlates engaged in disparity processing and able to discriminate disparities, little is known about how the population responses in the visual cortex change with variations in binocular disparity. Cottereau et al. (2011) used source imaging of visual evoked potentials to measure neural population responses over a wide range of horizontal disparities (0.5–64 arcmin) and revealed the disparity‐tuning functions in the visual areas. However, it is hard to directly measure disparity‐tuning characteristics using fMRI because of the slow coupling of the blood oxygen level dependent signal to neural activity. Goncalves et al. (2015) used a Gabor function to describe voxel responses to variations in binocular disparity and found that a Gabor model with tuning parameters estimated from fMRI data was able to capture known variations in human psychophysical performance. Although Goncalves's study examined the relationship between the Gabor parameters and the preferred disparity magnitude, it remains unclear how disparity information is encoded and represented in the cortex. Therefore, it is essential to investigate the computational map between binocular disparities and voxel responses to understand the characteristics of neuron population disparity‐tuning responses and to reveal the neural representations of binocular disparity.

Recently, the voxel‐wise encoding model was proposed and applied to fMRI data to characterize the population tuning responses of each voxel to low‐level visual features, such as retinotopic location, spatial frequency, and orientation (Kay, Naselaris, Prenger, & Gallant, 2008). Compared to traditional univariate analysis methods and MVPA methods, the voxel‐wise encoding model can isolate the specific components of variation in activity that are due to low‐level visual features (Naselaris, Olman, Stansbury, Ugurbil, & Gallant, 2015). In this study, we used a voxel‐wise encoding approach that built a computational map between disparity levels and voxel responses to characterize the population disparity‐tuning responses in the cortex, and we used a decoding approach to identify binocular disparity from the population responses in the cortex. Three voxel‐wise Gabor encoding models were constructed to predict voxel responses to novel disparity levels: the single model (SM), the fine‐coarse model (FCM), and the uncrossed‐crossed model (UCM). Our results showed that FCM fitted the population disparity‐tuning responses better than SM and UCM. Moreover, V3A complex showed high prediction accuracy for responses to disparity levels and higher accuracy in identifying novel disparity levels than the other regions.

2. MATERIALS AND METHODS

2.1. Participants

Functional data were collected from six male and two female subjects: Subject 1 (male, age 25), Subject 2 (male, age 26), Subject 3 (male, age 22), Subject 4 (female, age 25), Subject 5 (male, age 25), Subject 6 (male, age 26), Subject 7 (female, age 24), and Subject 8 (male, age 26). All subjects had normal or corrected‐to‐normal vision and were screened for stereo deficits using four stereo tests based on random dot stereograms (Yan, 1985): a stereo‐blindness screening test, a stereo‐acuity test, a crossed disparity measuring test, and an uncrossed disparity measuring test. For the stereo‐blindness screening test, all subjects were asked to recognize three shapes (star, circle, and square) in the random dot stereogram within 60 s, and all of them passed. For the stereo‐acuity test, all subjects were asked to recognize six shapes ("5" [−800 arcsec], star [−400 arcsec], "8" [−200 arcsec], triangle [−100 arcsec], circle [−60 arcsec], and ring [−40 arcsec]) in six random dot stereograms. Subjects 2, 4, and 8 could not recognize the ring; the other five subjects recognized all shapes. For the crossed disparity measuring test, all subjects were asked to recognize six squares (−1,200, −2,400, −3,600, −4,800, −6,000, and −7,200 arcsec) in six random dot stereograms, and all of them passed. For the uncrossed disparity measuring test, all subjects were asked to recognize six circles (1,200, 2,400, 3,600, 4,800, 6,000, and 7,200 arcsec) in six random dot stereograms, and all of them passed. The experiment was approved by the ethics committee of Beijing Normal University. All subjects provided written informed consent according to the guidelines of the MRI Center of Beijing Normal University.

2.2. Data acquisition

Data were obtained using a 3‐T Siemens scanner. Echo‐planar imaging (EPI) and T1‐weighted (1.33 × 1 × 1 mm3) data were collected. EPI data (echo time [TE], 30 ms; repetition time [TR], 2,000 ms; flip angle, 90°) were acquired covering the visual, posterior parietal, and posterior temporal cortices (experimental runs: field of view [FOV] = 200 × 200 mm, voxel size = 3.13 × 3.13 × 4.2 mm3, 33 slices; localizer runs: FOV = 192 × 192 mm, voxel size = 3 × 3 × 3 mm3, 30 slices).

2.3. Stimuli

As shown in Figure 1a, the stimuli consisted of random dot stereograms (22° × 16°) on a midgray background surrounded by static black and white dots. The density of dots was 100 dots/deg2. As shown in Figure 1b, four wedges were equally distributed around the circular aperture (1.2°), each subtending 6° in the radial direction and 50° in the polar angle. We varied the depth of the wedges by modulating the disparity levels in relation to the fixation point (±3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, and 36 arcmin; ±1 arcmin jitter). Although Subjects 2, 4, and 8 did not recognize the ring whose disparity was −40 arcsec (2/3 arcmin) in the stereo‐acuity test, they could recognize the circle whose disparity was −60 arcsec (1 arcmin) in the stereo‐acuity test. Because the minimum disparity magnitude (2 arcmin) in the fMRI experiment was larger than 1 arcmin, Subjects 2, 4, and 8 still participated in the fMRI experiment.

Figure 1. Schematic illustrations of the stimuli and the experimental design. (a) Left and right views of random dot stereograms. (b) Shape perceived by the random dot stereograms. (c) Paradigms of the model‐fitting and model‐testing runs [Color figure can be viewed at http://wileyonlinelibrary.com]

2.4. Experimental design

The experimental runs, which used a block design, included model‐fitting runs and model‐testing runs. The localizer runs consisted of retinotopic mapping and a lateral occipital complex (LOC) localizer. Subjects 1 and 2 were scanned in one 3.25‐hr session that included the model‐fitting runs (8 runs), model‐testing runs (8 runs), retinotopic mapping (4 runs), LOC localizer (2 runs), and 3D T1 image (1 run). Subject 3 was scanned in one 2.25‐hr session and one 1.3‐hr session on the same day. The first session included the model‐fitting runs (4 runs), model‐testing runs (4 runs), retinotopic mapping (4 runs), LOC localizer (2 runs), and 3D T1 image (1 run). The second session included the model‐fitting runs (4 runs), model‐testing runs (4 runs), and 3D T1 image (1 run). Subject 4 was scanned in one 2.25‐hr session and two 0.7‐hr sessions, with an interval of 7 days between the 2.25‐hr session and the two 0.7‐hr sessions. The 2.25‐hr session included the model‐fitting runs (4 runs), model‐testing runs (4 runs), retinotopic mapping (4 runs), LOC localizer (2 runs), and 3D T1 image (1 run). The first 0.7‐hr session included the model‐fitting runs (4 runs) and 3D T1 image (1 run), and the second 0.7‐hr session included the model‐testing runs (4 runs) and 3D T1 image (1 run). Subjects 5, 6, 7, and 8 were scanned in two 1.3‐hr sessions and one 0.6‐hr session. The interval between the two 1.3‐hr sessions and the 0.6‐hr session was 1 day for Subjects 5, 7, and 8, whereas Subject 6 completed all sessions on the same day. Each 1.3‐hr session included the model‐fitting runs (4 runs), model‐testing runs (4 runs), and 3D T1 image (1 run). The 0.6‐hr session included the retinotopic mapping (4 runs), LOC localizer (2 runs), and 3D T1 image (1 run).

Stimuli with disparity levels of ±6, 12, 18, 24, 30, and 36 arcmin were presented during the eight model‐fitting runs, and stimuli with disparity levels of ±3, 9, 15, 21, 27, and 33 arcmin were presented during the eight model‐testing runs. Eight stimuli with an inter‐stimulus interval of 0.5 s were presented in each 12 s block. In each 468 s run, 12 task blocks corresponding to the 12 different disparity levels were repeated three times, for a total of 36 task blocks. To reduce adaptation, we applied a random polar rotation to the set of disparity wedges of each stimulus in each task block (i.e., a rigid‐body rotation of the four depth wedges together around the fixation point). Subjects were required to press a button with the left index finger if two consecutive stimuli were the same during each task block. In addition, there were three 12 s fixation blocks in each run. Each run started and ended with a fixation block, and the remaining fixation block and the 36 task blocks were presented in a counterbalanced, randomized order. The paradigms of the model‐fitting and model‐testing runs are shown in Figure 1c.

2.5. Mapping the visual regions of interest

For each participant, 15 regions of interest (ROIs) were drawn in BrainVoyager QX (Brain Innovation, Maastricht, The Netherlands) software. Retinotopic ROIs that included V1, V2d, V3d, V3A complex, V7, V2v, V3v, hV4, LO, and MTC (MT/V5 cluster) were defined using rotating wedge and expanding concentric ring stimuli (Warnking et al., 2002). In particular, hV4 was defined as the region of retinotopic activation in the ventral visual cortex adjacent to V3v (Tootell & Hadjikhani, 2001; Tyler et al., 2005). V3A complex, which includes four subregions (V3A, V3B, V3C, and V3D), and LO, which includes two subregions (LO1 and LO2), were defined according to a previous study (Abdollahi et al., 2014). V7 was defined as the region anterior and dorsal to V3A complex (Tootell et al., 1998; Tsao et al., 2003; Tyler et al., 2005). MTC, which includes the subregions MT/V5, pMSTv (putative ventral part of the medial superior temporal area), pFST (putative fundus of the superior temporal area), and pV4t (putative V4 transitional zone), was defined using rotating wedge and expanding concentric ring stimuli according to a previous study (Kolster, 2010). The higher ventral visual ROI, LOC, was defined using the LOC localizer scans as the set of voxels in the lateral occipito‐temporal cortex that produced significantly stronger responses (p < 10−4) to intact than to scrambled images of objects (Kourtzi & Kanwisher, 2000).

Previous studies indicated that parietal areas including POIPS (parieto‐occipital IPS), DIPSM (dorsal IPS medial), DIPSA (dorsal IPS anterior), and phAIP (putative human anterior intraparietal area) are involved in the processing of disparity (Durand, Peeters, Norman, Todd, & Orban, 2009; Georgieva et al., 2009). In this study, the four parietal ROIs relevant to disparity processing were defined based on the estimated disparity‐sensitive regions and the coordinates reported in previous studies (Binkofski et al., 1999; Orban, Sunaert, Todd, Van Hecke, & Marchal, 1999; Culham et al., 2001; Claeys, Lindsey, De Schutter, & Orban, 2003; Denys et al., 2004; Orban, Claeys, et al., 2006; Cavina‐Pratesi, Goodale, & Culham, 2007; Kroliczak, Cavina‐Pratesi, Goodman, & Culham, 2007). First, the disparity‐sensitive regions in the parietal cortex were estimated by contrasting all disparity levels against the fixation baseline in the model‐fitting runs (p < 10−6). Second, the mean coordinate across all reported coordinates within the disparity‐sensitive region in the parietal cortex was calculated for each parietal area. Third, the four parietal ROIs were drawn based on the mean coordinates of the four parietal areas and the disparity‐sensitive regions. Voxels lying between two ROIs were assigned to the ROI whose mean coordinate was closer to the voxel, as sketched below. To avoid the proliferation of new area labels, we tentatively refer to the four areas as POIPSd, DIPSMd, DIPSAd, and phAIPd, with the "d" standing for disparity. Supporting Information Tables S1 and S2 show the number of voxels in each ROI/subregion for each subject.
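Purely as an illustration of this assignment rule, the following minimal NumPy sketch assigns a boundary voxel to the ROI with the nearest mean coordinate. The function name assign_to_roi and the use of Euclidean distance in Talairach coordinates are assumptions; the paper does not specify the distance metric.

```python
import numpy as np

def assign_to_roi(voxel_xyz, roi_mean_coords):
    """Assign a voxel lying between ROIs to the ROI with the nearest mean coordinate.

    voxel_xyz:       (3,) Talairach coordinate of the voxel (assumed)
    roi_mean_coords: (n_rois, 3) mean coordinate of each candidate parietal ROI
    """
    dists = np.linalg.norm(roi_mean_coords - voxel_xyz, axis=1)  # Euclidean distances
    return int(np.argmin(dists))  # index of the closest ROI
```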

2.6. Data pre‐processing

The preprocessing steps were performed in BrainVoyager QX software. All functional images of each subject underwent slice‐timing correction and motion correction. The functional images were further co‐registered to correct for differences in head position across sessions using the following steps. First, all functional runs in each session were aligned to the 3D T1 image from the same session. Then, the 3D T1 image of the reference session, which included the localizer runs, was transformed to ACPC (AC = anterior commissure, PC = posterior commissure) space and Talairach space. Next, the 3D T1 images from the other sessions were aligned with the 3D ACPC T1 image of the reference session. The alignment of the 3D T1 images between sessions was not performed for Subjects 1 and 2 because these two subjects had only one session. Last, all functional runs in each session were transformed to ACPC space using the ACPC transformation parameters of the 3D T1 image from the same session and then transformed to Talairach space using the Talairach transformation parameters of the 3D ACPC T1 image of the reference session.

2.7. Brain activity estimation

For the eight model‐fitting runs of each subject, general linear model (GLM) analysis was applied to each voxel of the preprocessed fMRI data using the 12 disparity levels as regressors. Discrete cosine transform functions with a cutoff period of three cycles were added to the GLM as confound regressors to remove low‐frequency noise. After the 12 regressor weights (beta‐values) were estimated for each voxel, the beta‐value of each disparity level was taken as the voxel's response to that disparity level. The same GLM analysis was applied to the eight model‐testing runs.
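The GLM estimation itself was performed in BrainVoyager QX; the NumPy/SciPy sketch below merely illustrates how per‐condition beta‐values could be obtained for one voxel by ordinary least squares with low‐frequency DCT confound regressors. The function name estimate_betas, the array shapes, and the mapping of the three‐cycle cutoff onto six DCT‐II basis functions (basis k spans k/2 cycles per run) are assumptions, not details from the paper.

```python
import numpy as np
from scipy.linalg import lstsq

def estimate_betas(y, task_regressors, n_dct=6):
    """Least-squares GLM for one voxel's time series.

    y:               (T,) preprocessed BOLD time series (assumed shape)
    task_regressors: (T, 12) HRF-convolved regressors, one per disparity level
    n_dct:           DCT drift terms; k = 1..6 spans up to three cycles per run,
                     matching the stated three-cycle cutoff
    """
    T = len(y)
    t = np.arange(T)
    # Low-frequency DCT-II basis used as confound regressors (drift removal)
    dct = np.column_stack([np.cos(np.pi * k * (t + 0.5) / T)
                           for k in range(1, n_dct + 1)])
    X = np.column_stack([task_regressors, dct, np.ones(T)])  # design with intercept
    betas, *_ = lstsq(X, y)
    return betas[:task_regressors.shape[1]]  # one beta per disparity level
```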

2.8. Estimation of the voxel‐wise Gabor encoding model

Based on the range of disparity levels used for fitting, three voxel‐wise encoding models, SM, FCM, and UCM, were constructed using the voxel responses in each ROI from the model‐fitting runs. For SM, a single 1D Gabor model was used to fit each voxel's responses to all of the disparity levels (±6, 12, 18, 24, 30, and 36 arcmin). For FCM, one 1D Gabor model (fine model) was fitted to each voxel's responses to stimuli with fine disparity levels (±6, 12, and 18 arcmin), and the other 1D Gabor model (coarse model) was fitted to the voxel's responses to stimuli with coarse disparity levels (±24, 30, and 36 arcmin). For UCM, one 1D Gabor model (uncrossed model) was fitted to each voxel's responses to stimuli with uncrossed disparity levels (6, 12, 18, 24, 30, and 36 arcmin), and the other 1D Gabor model (crossed model) was fitted to the voxel's responses to stimuli with crossed disparity levels (−6, −12, −18, −24, −30, and −36 arcmin). The 1D Gabor model was defined as follows (Anzai et al., 2011; DeAngelis & Uka, 2003; Prince, Pointon, et al., 2002):

G(d) = A_0 + A\,e^{-\frac{(d - d_0)^2}{2\sigma^2}} \cos\!\left(2\pi f (d - d_0) + \varphi\right), \quad (1)

where G(d) is the voxel's response to disparity d, A is the amplitude, A_0 is the baseline, d_0 is the position of the Gaussian envelope, σ is the width of the Gaussian envelope, f is the frequency of the cosine, and φ is the phase shift between the cosine and the center of the Gaussian envelope. We used a constrained optimization function (fmincon) in MATLAB to estimate the parameters of the Gabor model that best described each voxel's responses. Based on a previous study (Goncalves et al., 2015), the parameters of the model were constrained to vary within a range before fitting to avoid gross overfitting. Given the disparity levels of the stimuli, parameter d_0 was set to range from −36 to 36 arcmin. Parameter A_0 varied between −1 and 1. Parameter A varied between 0.5 and 1.2 times the amplitude range of the voxel responses. Parameter σ varied between 5 and 12. Parameter f varied between 0 and 1/(d_max − d_min) cycles per arcmin, where d_max and d_min were the maximum (36 arcmin) and minimum (−36 arcmin) disparities, respectively. Parameter φ varied between −π/2 and π/2. For each voxel within the ROIs, SM, FCM, and UCM were estimated separately using the voxel responses to the different disparity levels in the model‐fitting runs. The outputs of the three models were used as the predicted responses of each voxel to the different disparity levels. The model‐fitting error of each voxel‐wise encoding model was defined as the L2 norm of the difference between the predicted and measured voxel responses (beta‐values) to all 12 disparity levels.
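The authors fitted Equation (1) with MATLAB's fmincon; as a rough Python analogue, the sketch below performs the same bounded least‐squares fit with scipy.optimize.minimize and shows how the data could be split into the fine and coarse submodels of FCM. The function names (fit_gabor, fit_fcm), the L‐BFGS‐B solver, and the mid‐range starting point are our assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gabor(d, A0, A, d0, sigma, f, phi):
    """1D disparity-tuning Gabor of Equation (1)."""
    return A0 + A * np.exp(-(d - d0) ** 2 / (2 * sigma ** 2)) \
              * np.cos(2 * np.pi * f * (d - d0) + phi)

def fit_gabor(disparities, betas):
    """Bounded least-squares fit of Equation (1) to one voxel's beta-values."""
    amp = betas.max() - betas.min()          # amplitude range of the responses
    bounds = [(-1.0, 1.0),                   # A0
              (0.5 * amp, 1.2 * amp),        # A
              (-36.0, 36.0),                 # d0 (arcmin)
              (5.0, 12.0),                   # sigma
              (0.0, 1.0 / 72.0),             # f, up to 1/(d_max - d_min) cycles/arcmin
              (-np.pi / 2, np.pi / 2)]       # phi
    x0 = np.array([(lo + hi) / 2 for lo, hi in bounds])  # mid-range start (assumed)
    sse = lambda p: np.sum((gabor(disparities, *p) - betas) ** 2)
    res = minimize(sse, x0, bounds=bounds, method="L-BFGS-B")
    fit_error = np.linalg.norm(gabor(disparities, *res.x) - betas)  # L2 fitting error
    return res.x, fit_error

def fit_fcm(disparities, betas):
    """Fine-coarse model: separate Gabor fits for |d| <= 18 and |d| >= 24 arcmin."""
    fine = np.abs(disparities) <= 18
    return fit_gabor(disparities[fine], betas[fine]), \
           fit_gabor(disparities[~fine], betas[~fine])
```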

2.9. Disparity identification

Disparity identification analysis was performed to reveal the disparity identification power of the different ROIs. Following the method used in a previous study (Kay et al., 2008), the new disparity levels presented during the model‐testing runs were identified using the three estimated voxel‐wise encoding models. To avoid an influence of the number of selected voxels in each ROI, we performed disparity identification on voxel populations of different sizes and calculated the identification accuracy of each voxel population. We then averaged the predicted disparity levels across voxel populations to calculate the identification accuracy of each ROI. Disparity identification was performed using the following three steps (a code sketch follows the steps):

In the first step, voxel selection was performed using the model‐fitting runs. For each voxel, the model‐fitting error was calculated after the voxel‐wise encoding model was estimated. The voxels within an ROI were sorted in ascending order of model‐fitting error, and the first N voxels of the sorted list formed a voxel population. N varied from 20 to 150 in increments of 1 voxel, so for each model a total of 131 voxel populations were generated in each ROI.

In the second step, each voxel population yielded 12 distinct predicted voxel response patterns corresponding to the 12 novel disparity levels in the model‐testing runs. For the kth voxel in the jth voxel population in the rth ROI, the predicted voxel response to the ith disparity level was denoted as v_{ijk}^r. For the jth voxel population in the rth ROI, the predicted voxel response pattern for the ith disparity level was denoted as V_{ij}^r = [v_{ij1}^r, v_{ij2}^r, …, v_{ijN}^r] (i = 1, 2, …, 12), where N indicates the number of voxels in the jth voxel population. Let V_i^r = [V_{i1}^r, V_{i2}^r, …, V_{i131}^r] represent the predicted voxel response patterns of the 131 voxel populations in the rth ROI for the ith disparity level. Likewise, each voxel population yielded 12 distinct measured voxel response patterns corresponding to the 12 novel disparity levels in the model‐testing runs. For the kth voxel in the jth voxel population in the rth ROI, the measured voxel response (beta‐value) to the ith disparity level was denoted as v̂_{ijk}^r; the corresponding measured voxel response pattern was denoted as V̂_{ij}^r = [v̂_{ij1}^r, v̂_{ij2}^r, …, v̂_{ijN}^r]; and V̂_i^r = [V̂_{i1}^r, V̂_{i2}^r, …, V̂_{i131}^r] represents the measured voxel response patterns of the 131 voxel populations in the rth ROI for the ith disparity level.

In the third step, the disparity level of each measured voxel response pattern in the model‐testing runs was identified. For the jth voxel population in the rth ROI and the ith disparity level, the Pearson correlation between the measured voxel response pattern V̂_{ij}^r and each of the 12 predicted response patterns V_{1j}^r, V_{2j}^r, …, V_{12j}^r corresponding to the 12 novel disparity levels was calculated. The disparity level d_{ij}^r whose predicted voxel activity pattern was most correlated with the measured voxel response pattern V̂_{ij}^r was selected as the predicted disparity level of that measured pattern. After the disparity levels of all 131 measured voxel response patterns in V̂_i^r were identified, the mean value md_i^r of the 131 predicted disparity levels was regarded as the predicted disparity level of the rth ROI's measured voxel response patterns induced by the ith disparity level.
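As a minimal illustration of steps 1 and 3, the following NumPy sketch builds the nested voxel populations and performs the correlation‐based identification; the function names and array shapes are our assumptions.

```python
import numpy as np

def voxel_populations(fit_errors, n_min=20, n_max=150):
    """Step 1: nested voxel populations ranked by ascending model-fitting error."""
    order = np.argsort(fit_errors)
    return [order[:n] for n in range(n_min, n_max + 1)]  # 131 populations

def identify_level(measured, predicted, levels):
    """Step 3: return the level whose predicted pattern best matches the measured one.

    measured:  (N,) measured response pattern of one voxel population
    predicted: (12, N) predicted patterns, one row per novel disparity level
    levels:    (12,) candidate disparity levels (arcmin)
    """
    r = np.array([np.corrcoef(measured, p)[0, 1] for p in predicted])
    return levels[np.argmax(r)]  # Pearson-correlation winner

def roi_predicted_level(measured_per_pop, predicted_per_pop, levels):
    """Mean predicted level md_i^r across the 131 voxel populations of one ROI."""
    picks = [identify_level(m, p, levels)
             for m, p in zip(measured_per_pop, predicted_per_pop)]
    return float(np.mean(picks))
```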

After performing the disparity identification analysis, we defined an identification accuracy for each voxel population and for each ROI. For the jth voxel population, the identification accuracy of the voxel population (IV) was calculated using Equation (2):

IV_j = \frac{\sum_{i=1}^{12} h_V\!\left(d_{ij}^{\,r}\right)}{12} \times 100\%, \quad (2)

where

h_V\!\left(d_{ij}^{\,r}\right) = \begin{cases} 1, & d_{ij}^{\,r} = d_i \\ 0, & d_{ij}^{\,r} \neq d_i \end{cases} \quad (3)

and d_i is the ith disparity level.

For the rth ROI, the identification accuracy of the ROI (IR) was calculated using Equation (4):

IR_r = \frac{\sum_{i=1}^{12} h_R\!\left(md_i^{\,r}\right)}{12} \times 100\%, \quad (4)

where

h_R\!\left(md_i^{\,r}\right) = \begin{cases} 1, & \left|md_i^{\,r} - d_i\right| < 3~\text{arcmin} \\ 0, & \left|md_i^{\,r} - d_i\right| \geq 3~\text{arcmin} \end{cases} \quad (5)

and d_i is the ith disparity level.
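The two accuracy measures are straightforward to compute. A minimal sketch, assuming the predicted and true levels are given as NumPy arrays over the 12 model‐testing disparities (the function names are ours):

```python
import numpy as np

def iv_accuracy(d_pred, d_true):
    """Equation (2): exact-match accuracy of one voxel population (%)."""
    return 100.0 * np.mean(np.asarray(d_pred) == np.asarray(d_true))

def ir_accuracy(md_pred, d_true, tol=3.0):
    """Equation (4): ROI accuracy; a hit when |md_i - d_i| < 3 arcmin (%)."""
    return 100.0 * np.mean(np.abs(np.asarray(md_pred) - np.asarray(d_true)) < tol)
```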

Moreover, we constructed confusion matrices to further examine the identification performance of each ROI. For the ith disparity level, the predicted disparity md_i^r of the rth ROI was regarded as a true positive instance if it was identified as the ith disparity level; otherwise, it was regarded as a false negative instance. Any other predicted disparity md_j^r (j = 1, 2, …, 12, j ≠ i) was regarded as a false positive instance if it was identified as the ith disparity level; otherwise, it was regarded as a true negative instance. Based on the confusion matrix, the true negative rate TNR_i^r was calculated for the ith disparity level in the rth ROI of each subject. Furthermore, the rth ROI's true negative rate TNR^r was calculated by averaging the true negative rate (TNR) across the 12 disparity levels.
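The paper does not spell out the criterion for "identified as the ith disparity level" at the ROI stage; the sketch below assumes the same 3 arcmin tolerance as Equation (5) and computes the mean TNR over the 12 levels. The function name mean_tnr is ours.

```python
import numpy as np

def mean_tnr(md_pred, d_true, tol=3.0):
    """Mean true negative rate across the 12 disparity levels.

    Assumes 'identified as the ith level' means within tol arcmin of d_i,
    mirroring the hit criterion of Equation (5) (our assumption).
    """
    pred = np.asarray(md_pred, dtype=float)
    levels = np.asarray(d_true, dtype=float)
    tnrs = []
    for i, d_i in enumerate(levels):
        others = np.delete(pred, i)               # predictions whose true level is not i
        tn = np.sum(np.abs(others - d_i) >= tol)  # not identified as level i
        tnrs.append(tn / others.size)
    return float(np.mean(tnrs))
```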

Because the identification accuracy in Equation (4) equals the mean true positive rate across the 12 disparity levels, the mean true positive rate was not separately calculated from the confusion matrix. For the identification accuracies and TNRs, two‐way repeated measures ANOVA tests using ROI (15 levels) and encoding model (three levels) as the two within‐subject factors were applied to the eight subjects to reveal the main effects and the interaction effect between the two factors.

2.10. Response prediction

Similar to the definition of the model‐fitting error, the model prediction error of each voxel‐wise encoding model was defined as the L2 norm of the difference between the predicted and measured voxel responses (beta‐values) to all of the disparity levels in the model‐testing runs. A lower model prediction error indicates that the encoding model has a higher prediction accuracy. For each model, the 150‐voxel population selected in the first step of the disparity identification was used to calculate each ROI's model prediction error by averaging the model prediction errors across the 150 voxels. For the model prediction errors, a two‐way repeated measures ANOVA using ROI (15 levels) and encoding model (three levels) as the two within‐subject factors was applied to reveal the main effects and the interaction effect between the two factors.
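A minimal sketch of the ROI‐level error, assuming the predicted and measured responses of the 150 selected voxels are stacked as NumPy arrays (the function name is ours):

```python
import numpy as np

def roi_prediction_error(predicted, measured):
    """Mean L2 model prediction error over the 150 selected voxels of one ROI.

    predicted, measured: (150, 12) predicted vs. measured (beta) responses
    to the 12 disparity levels of the model-testing runs (assumed shapes).
    """
    return float(np.mean(np.linalg.norm(predicted - measured, axis=1)))
```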

A permutation test was applied to each voxel in each ROI to identify voxels with significantly lower model prediction errors for each encoding model. First, each voxel's responses to the different disparity levels in the model‐testing runs were shuffled across disparity levels. Then, the model prediction error of the voxel was calculated. This process was repeated 1,000 times for each voxel, and the 1,000 model prediction errors were sorted in ascending order. The rank of the real model prediction error among the shuffled prediction errors gave a p‐value: if N shuffled prediction errors were lower than the real model prediction error, the p‐value was N/1,000. Voxels with a p‐value lower than 0.01 were selected as voxels with significantly lower model prediction errors. The p‐values of these voxels were mapped onto the left and right cortical surfaces of each subject. To investigate how the inferred Gabor models varied across subjects, the voxels with significantly lower model prediction errors (p < 0.01) were used to generate distribution histograms of the six parameters of each model for each subject.
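The permutation test can be sketched as follows; the function name permutation_pvalue and the fixed random seed are our additions.

```python
import numpy as np

def permutation_pvalue(measured, predicted, n_perm=1000, seed=0):
    """Permutation p-value for one voxel's model prediction error.

    measured / predicted: (12,) responses to the 12 model-testing disparity levels.
    The null distribution shuffles the measured responses across disparity levels;
    the p-value is the fraction of shuffled errors below the real error.
    """
    rng = np.random.default_rng(seed)
    real_err = np.linalg.norm(predicted - measured)
    null_errs = np.array([np.linalg.norm(predicted - rng.permutation(measured))
                          for _ in range(n_perm)])
    return np.sum(null_errs < real_err) / n_perm  # p < 0.01 marks a well-fit voxel
```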

3. RESULTS

3.1. Disparity identification

Figure 2 shows each ROI's mean identification accuracies for the three encoding models. Because Mauchly's tests of sphericity were significant for the ROI effect (p < 0.001) and the interaction effect (p < 0.001), the degrees of freedom were adjusted using the Greenhouse–Geisser correction (ROI: ε = 0.267; interaction: ε = 0.188). The two‐way repeated measures ANOVA revealed a significant model effect (F[2,14] = 24.504, p < 0.001), a significant ROI effect (F[3.732,26.123] = 4.654, p = 0.007), and a significant interaction effect (F[5.267,36.868] = 2.643, p = 0.036). Further simple effect analysis was performed to examine the differences between each pair of ROIs/models, corrected by the Bonferroni method. For SM, V7 had significantly higher mean identification accuracy than DIPSAd (T[7] = 6.24, p = 0.04). For FCM, V3A complex had marginally significantly higher mean identification accuracy than V2d (T[7] = 5.55, p = 0.09) and significantly higher mean identification accuracy than V3d, V7, MTC, V2v, V3v, hV4, LO, DIPSMd, DIPSAd, phAIPd, and LOC (V3d: T[7] = 7.13, p = 0.019; V7: T[7] = 8.93, p = 0.004; MTC: T[7] = 6.62, p = 0.032; V2v: T[7] = 8.08, p = 0.009; V3v: T[7] = 9.32, p = 0.003; hV4: T[7] = 6.75, p = 0.028; LO: T[7] = 11.91, p = 0.001; DIPSMd: T[7] = 6.55, p = 0.033; DIPSAd: T[7] = 8, p = 0.01; phAIPd: T[7] = 9.78, p = 0.003; LOC: T[7] = 6.42, p = 0.038). Moreover, V2d and hV4 had marginally significantly higher mean identification accuracies for SM versus UCM (V2d: T[7] = 2.74, p = 0.084; hV4: T[7] = 3.07, p = 0.056) and significantly higher mean identification accuracies for FCM versus UCM (V2d: T[7] = 3.21, p = 0.043; hV4: T[7] = 4.12, p = 0.013). V3d and V2v had significantly higher mean identification accuracies for FCM versus UCM (V3d: T[7] = 3.57, p = 0.028; V2v: T[7] = 4.25, p = 0.011). V3A complex and DIPSAd had significantly higher mean identification accuracies for FCM versus SM (V3A complex: T[7] = 4.79, p = 0.006; DIPSAd: T[7] = 9, p < 0.001) and for FCM versus UCM (V3A complex: T[7] = 13.62, p < 0.001; DIPSAd: T[7] = 3.57, p = 0.028). V7 had marginally significantly higher mean identification accuracy for SM versus UCM (T[7] = 2.7, p = 0.088). The identification accuracies of all ROIs for the three encoding models for each subject are shown in Supporting Information Figure S1. Among the three encoding models, only FCM revealed that V3A complex showed the highest identification accuracy for all eight subjects.

Figure 2. Mean identification accuracies of all ROIs of the three encoding models across eight subjects. The error bars represent the SEM identification accuracies. V3A* indicates V3A complex. * indicates p < 0.05, • indicates 0.05 < p < 0.1 (Bonferroni corrected). FCM: fine‐coarse model; ROIs: regions of interest; SM: single model; UCM: uncrossed‐crossed model

Figure 3 shows the mean TNRs of each ROI for the three encoding models. Because Mauchly's tests of sphericity were significant for the ROI effect (p < 0.001) and the interaction effect (p < 0.001), the degrees of freedom were adjusted by the Greenhouse–Geisser correction (ROI: ε = 0.279; interaction: ε = 0.197). The two‐way repeated measures ANOVA revealed a significant model effect (F[2,14] = 25.972, p < 0.001), a significant ROI effect (F[3.9,27.3] = 4.594, p = 0.006), and a significant interaction effect (F[5.529,38.706] = 2.606, p = 0.036). Further simple effect analysis was performed to examine the differences between each pair of ROIs/models, corrected by the Bonferroni method. For SM, V7 had significantly higher mean TNR than DIPSAd (T[7] = 7, p = 0.04). For FCM, V3A complex had marginally significantly higher mean TNR than V2d and V3d (V2d: T[7] = 5.17, p = 0.09; V3d: T[7] = 6, p = 0.053) and significantly higher mean TNR than V7, MTC, V2v, V3v, hV4, LO, DIPSMd, DIPSAd, phAIPd, and LOC (V7: T[7] = 8.5, p = 0.004; MTC: T[7] = 6.33, p = 0.045; V2v: T[7] = 7.25, p = 0.009; V3v: T[7] = 10.67, p = 0.003; hV4: T[7] = 6, p = 0.029; LO: T[7] = 12.67, p = 0.001; DIPSMd: T[7] = 6.17, p = 0.033; DIPSAd: T[7] = 7.75, p = 0.01; phAIPd: T[7] = 11, p = 0.003; LOC: T[7] = 6.33, p = 0.038). Moreover, V2d and hV4 had significantly higher mean TNRs for SM versus UCM (V2d: T[7] = 3.27, p = 0.041; hV4: T[7] = 4.67, p = 0.011) and for FCM versus UCM (V2d: T[7] = 4, p = 0.029; hV4: T[7] = 5.67, p = 0.001). V3d and V2v had significantly higher mean TNRs for FCM versus UCM (V3d: T[7] = 3.35, p = 0.037; V2v: T[7] = 4.5, p = 0.009). V3A complex and DIPSAd had significantly higher mean TNRs for FCM versus SM (V3A complex: T[7] = 4.5, p = 0.006; DIPSAd: T[7] = 12, p < 0.001) and for FCM versus UCM (V3A complex: T[7] = 10.33, p < 0.001; DIPSAd: T[7] = 3.67, p = 0.028). V7 had significantly higher mean TNR for SM versus UCM (T[7] = 3.25, p = 0.0498). The TNRs of all ROIs for the three encoding models for each subject are shown in Supporting Information Figure S2. Among the three encoding models, only FCM revealed that V3A complex showed the highest TNR for all subjects.

Figure 3. Mean true negative rates (TNRs) of all ROIs of the three encoding models across eight subjects. The error bars represent the SEM TNRs. V3A* indicates V3A complex. * indicates p < 0.05, • indicates 0.05 < p < 0.1 (Bonferroni corrected). FCM: fine‐coarse model; ROIs: regions of interest; SM: single model; UCM: uncrossed‐crossed model

Figures 4, S3, and S4 show the eight subjects' decoding analysis results for V3A complex for FCM, SM, and UCM, respectively. Figures 4a, S3A, and S4A show the real and predicted disparity levels of the eight subjects for the three encoding models. Figures 4b, S3B, and S4B show the mean correlation matrices, across the 131 voxel populations, between the measured and predicted voxel response patterns for the different disparity levels in the model‐testing runs. Compared with SM and UCM, FCM showed a smaller distance between the predicted and real disparity levels and a stronger correlation between the measured and predicted voxel response patterns in each subject's V3A complex.

Figure 4. Eight subjects' decoding analysis results of V3A complex for the fine‐coarse model (FCM). (a) Predicted disparities using the responses in V3A complex of the eight subjects for FCM. The black asterisks indicate the predicted mean disparity levels across the 131 voxel populations, and the red circles indicate the real disparity levels during the model‐testing runs. The error bars represent the SDs of the mean predicted disparity levels. IR represents the identification accuracy of V3A complex. (b) The mean correlation matrices between the measured voxel response patterns and predicted voxel response patterns across the 131 voxel populations [Color figure can be viewed at http://wileyonlinelibrary.com]

Because the identification accuracies of the voxel populations fluctuated similarly in all ROIs, only the results for V3A complex are presented. Figures 5, S5, and S6 show how the identification accuracies of the voxel populations in V3A complex varied with increasing voxel population size for FCM, SM, and UCM, respectively. For all eight subjects, the identification accuracies of most voxel populations in V3A complex were higher than the chance level (1/12); however, the identification accuracy fluctuated as the number of selected voxels increased, and increasing the voxel population size did not consistently improve the identification power of the model for any subject.

Figure 5. Variation of the identification accuracies of voxel populations in V3A complex for the fine‐coarse model. (a) Subject 1. (b) Subject 2. (c) Subject 3. (d) Subject 4. (e) Subject 5. (f) Subject 6. (g) Subject 7. (h) Subject 8. Each black dot represents the identification accuracy of one voxel population

3.2. Response prediction

Figure 6 shows the mean model prediction errors of each ROI for the three encoding models. Because Mauchly's tests of sphericity were significant for the model effect (p < 0.001), the ROI effect (p < 0.001), and the interaction effect (p < 0.001), the degrees of freedom were adjusted by the Greenhouse–Geisser correction (model: ε = 0.505; ROI: ε = 0.165; interaction: ε = 0.074). The two‐way repeated measures ANOVA revealed only a significant model effect (F[1.01,7.071] = 11.133, p = 0.012). For the model factor, the marginal mean of FCM across all ROIs was significantly lower than that of SM (T[7] = 23.76, p < 0.001) and UCM (T[7] = 3.9, p = 0.018). The model prediction errors of all ROIs for the three encoding models for each subject are shown in Supporting Information Figure S7. The results indicated that FCM produced lower model prediction errors than SM and UCM in all ROIs for seven of the eight subjects; Subject 5 was the exception.

Figure 6. Mean model prediction errors of all ROIs of the three encoding models across eight subjects. The error bars represent the SEM model prediction errors. V3A* indicates V3A complex. FCM: fine‐coarse model; ROIs: regions of interest; SM: single model; UCM: uncrossed‐crossed model

Figures 7, S8, and S10 show the occipital regions with significantly lower model prediction errors for FCM, SM, and UCM, respectively. Figures 8, S9, and S11 show the parietal regions with significantly lower model prediction errors for FCM, SM, and UCM, respectively. For FCM, voxels with significantly lower prediction errors were mainly located in MTC in eight subjects, in V1, V3A complex, V7, hV4, LO, and LOC in seven subjects, and in V2d and POIPSd in six subjects (see Figures 7 and 8). For SM, voxels with significantly lower prediction errors were mainly located in V7 and LOC in seven subjects and in V3d, V3A complex, MTC, and LO in six subjects (see Supporting Information Figures S8 and S9). For UCM, voxels with significantly lower prediction errors were mainly located in V7 and MTC in seven subjects and in LOC in six subjects (see Supporting Information Figures S10 and S11).

Figure 7. Statistical parametric maps showing significantly lower prediction errors of the fine‐coarse model from the occipital view. (a) Subject 1. (b) Subject 2. (c) Subject 3. (d) Subject 4. (e) Subject 5. (f) Subject 6. (g) Subject 7. (h) Subject 8. 1: V3D, 2: V3C, 3: V3B, 4: V3A, 5: MT/V5, 6: pV4t, 7: pFST, 8: pMSTv, 9: LO1, 10: LO2. The black dashed lines describe the areas V1, V2d, V3d, V3A complex (1, 2, 3, 4), V7, MTC (5, 6, 7, 8), V2v, V3v, hV4, and LO (9, 10) in each subject. The white dashed lines describe the visual area lateral occipital complex in each subject [Color figure can be viewed at http://wileyonlinelibrary.com]

Figure 8. Statistical parametric maps showing significantly lower model prediction errors of the fine‐coarse model from the parietal view. (a) Subject 1. (b) Subject 2. (c) Subject 3. (d) Subject 4. (e) Subject 5. (f) Subject 6. (g) Subject 7. (h) Subject 8. The black dashed lines describe the areas POIPSd, DIPSMd, DIPSAd, and phAIPd in each subject [Color figure can be viewed at http://wileyonlinelibrary.com]

Because the eight subjects showed similar parameter distributions for the three encoding models, only Subject 1's results are presented in this study. Figures 9, S12, and S13 show Subject 1's distribution histograms of the six parameters for FCM, SM, and UCM, respectively. For FCM, the d_0 distribution of the fine model was narrower than that of the coarse model (see Figure 9). For SM, the d_0 distribution showed one cluster in the positive values and one cluster in the negative values (see Supporting Information Figure S12). For UCM, the d_0 distribution of the crossed model mainly covered the negative values and that of the uncrossed model mainly covered the positive values (see Supporting Information Figure S13). The σ distribution histogram of FCM showed that fewer voxels reached the maximum σ value for the fine model than for the coarse model (see Figure 9). The f distribution histograms showed that the voxels mainly clustered at the maximum or minimum value for all three models (see Figures 9, S12, and S13). The φ distribution histograms showed that the voxels mainly clustered at the maximum or minimum value for FCM and UCM (see Figures 9 and S13).

Figure 9. Subject 1's model parameter distributions of voxels with significantly lower model prediction errors (p < 0.01) for the fine‐coarse model [Color figure can be viewed at http://wileyonlinelibrary.com]

4. DISCUSSION

This study constructed three voxel‐wise encoding models (SM, FCM, and UCM) to predict voxel responses to different disparity levels and to identify novel disparity levels from measured voxel responses. The following findings were revealed. First, FCM produced the smallest model prediction error among the three encoding models. Second, FCM results indicated that the identification accuracy of V3A complex was the highest among the 15 ROIs for all eight subjects. Third, FCM revealed that the occipital areas V1, V2d, V3A complex, V7, MTC, hV4, LO, and LOC and the parietal area POIPSd produced significantly lower response prediction errors.

4.1. Model fitting performance of the three voxel‐wise encoding models

Macaque studies demonstrated that Gabor or Gaussian functions could be used to describe the disparity‐tuning curves of neurons in visual areas (Anzai et al., 2011; DeAngelis & Newsome, 1999; DeAngelis & Uka, 2003; Prince, Cumming, et al., 2002; Prince, Pointon, et al., 2002). Although fMRI signals reflect hemodynamic responses and are only indirectly related to neural activity, we assumed that the disparity‐tuning curves of voxel responses to disparity levels could be fitted by the 1D Gabor function. In this study, three encoding models (SM, FCM, and UCM) were used to predict voxel responses to novel disparity levels.

The results of the encoding analysis showed that the mean model prediction error of FCM was significantly lower than those of SM and UCM in all ROIs (see Figure 6). The ANOVA results further indicated that the mean model prediction error of FCM across ROIs was significantly lower than that of SM/UCM. These results suggest that FCM fits voxel responses in the cortex better than the other two models. Notably, the spatial scale must be taken into account because neurons respond to a limited range of disparities and spatial frequencies (Menz & Freeman, 2003). Neurons tuned to low spatial frequencies encode a relatively wide range of disparities, while neurons tuned to high spatial frequencies process only a narrow range of disparities. Because the fine and coarse models used in FCM considered the spatial scale of disparity, FCM fitted the voxel responses better than SM and UCM. Moreover, some studies noted that disparity detectors might have larger receptive field sizes as the disparity magnitude increases (Goncalves et al., 2015; Lehky & Sejnowski, 1990; Stevenson, Cormack, Schor, & Tyler, 1992). Therefore, the width of the fitted Gabor model should be related to the disparity magnitude. Because FCM used a fine model to fit responses to fine disparities and a coarse model to fit responses to coarse disparities, the width of the Gabor function in the fine model should be smaller than that in the coarse model. Compared to SM and UCM, FCM is more adaptive to changes in receptive field size with disparity magnitude. Accordingly, FCM showed better model‐fitting performance than SM and UCM.

4.2. Identification of novel disparities

The ANOVA results indicated that V3A complex had significantly higher identification accuracy and TNR than most ROIs for FCM (see Figures 2 and 3). Among the three encoding models, only FCM obtained a consistent result indicating that V3A complex had the highest identification accuracy and TNR among all 15 ROIs for the eight subjects (see Supporting Information Figures S1 and S2); no consistent results were observed for SM and UCM. Moreover, FCM showed a better correspondence between the identified disparities and the real disparities in V3A complex than SM and UCM (see Figures 4, S3, and S4). It should be noted that V3A complex in this study contained V3A, which was demonstrated to be closely related to disparity processing in previous studies (Backus et al., 2001; Preston et al., 2008; Tsao et al., 2003). Some fMRI studies of depth perception found that V3A was involved in disparity processing and showed remarkable sensitivity to stereoscopic stimuli (Backus et al., 2001; Tsao et al., 2003). Moreover, it was demonstrated that V3A was able to decode the disparity magnitudes of viewed stimuli (Preston et al., 2008) and could reliably discriminate a large range of disparities (Minini et al., 2010), indicating that V3A is involved in disparity magnitude discrimination. Among the three encoding models, only FCM used separate fine and coarse models to discriminate disparity magnitudes. Therefore, only FCM obtained consistent results and revealed the highest identification accuracy in V3A complex for all subjects. Our results suggest that V3A complex plays an important role in disparity processing and that FCM characterizes the voxel responses in V3A complex better than those in the other ROIs, which is consistent with Goncalves's study, which found a significant positive trend between the tuning width of the 1D Gabor function and disparity magnitude in V3A by modeling voxel responses (Goncalves et al., 2015).

In this study, the identification accuracy of a voxel population did not rise monotonically and fluctuated considerably as the number of voxels in the population increased (see Figures 5, S5, and S6). Thus, it is very difficult to determine an optimal number of voxels for all ROIs. Although some studies selected a fixed number of voxels to perform decoding analysis (Kay et al., 2008; Preston et al., 2008), the present study used an averaging method that varied the number of voxels from 20 to 150 and took the mean of the predicted disparity levels across the 131 voxel populations as the final predicted disparity level of each ROI. The identification accuracy of each ROI was then calculated from the 12 final predicted disparity levels in the model‐testing runs. In contrast to using a fixed number of voxels, the averaging method removes the effect of the number of voxels and yields more stable results.

4.3. Visual areas with a high prediction accuracy

The results of the encoding analysis based on FCM showed that voxels with significantly lower model prediction errors were mainly located in the occipital areas (MTC: eight subjects; V1, V3A complex, V7, hV4, LO, and LOC: seven subjects; V2d: six subjects) and in the parietal area POIPSd for six subjects (see Figures 7 and 8). These results indicate that these areas represent disparity information and that the disparity responses of voxels in these areas were better fitted by Gabor functions than those in the other areas. Moreover, some electrophysiology studies found that neurons in V1, V3A, and MT had disparity‐tuning curves that were adequately described by Gabor functions (Anzai et al., 2011; DeAngelis & Uka, 2003; Prince, Pointon, et al., 2002), which is consistent with the results for V1, V3A complex, and MTC in this study. For the occipital areas, there is general agreement that disparity signals are spread throughout the visual cortex (Neri et al., 2004; Orban, Janssen, & Vogels, 2006; Parker, 2007; Preston et al., 2008). Previous studies also demonstrated that the visual areas V1, V2d, V3A, V7, hMT+/V5, hV4, LO, and LOC are related to disparity processing (Backus et al., 2001; Brouwer et al., 2005; Gilaie‐Dotan et al., 2002; Kourtzi, Erb, Grodd, & Bülthoff, 2003; Li et al., 2017; Nasr & Tootell, 2018; Preston et al., 2008; Tsao et al., 2003; Welchman et al., 2005). For the parietal area POIPSd, some electrophysiology studies found 3D shape‐selective neurons in the parietal cortex of the macaque (Durand et al., 2007; Srivastava, Orban, & Janssen, 2006). Some fMRI studies also suggested that the parietal cortex plays an important role in disparity processing (Durand et al., 2009; Georgieva et al., 2009; Gilaie‐Dotan et al., 2002; Patten & Welchman, 2015; Tsao et al., 2003). Therefore, it is reasonable that the occipital and parietal cortices showed high disparity prediction power in this study.

4.4. Difference between encoding and decoding analysis

In the present study, both encoding and decoding analyses were performed. Because the encoding model was built on a voxel‐by‐voxel basis, the encoding analysis that used disparity levels of visual stimuli to predict the voxel responses was a univariate method. By contrast, the decoding analysis that identified novel disparity levels from multivoxel activity patterns was a multivariate analysis method.

Both encoding and decoding analyses can be used to investigate some of the most common questions about how information is represented in the brain. In this study, encoding analysis found that the voxel responses to the disparity levels in V3A complex could be predicted by FCM with high accuracy. Decoding analysis revealed that the voxel responses in V3A complex could be used to identify novel disparity levels with high accuracy for FCM. Therefore, it can be inferred that V3A complex is a critical visual area for processing disparity information.

Naselaris, Kay, Nishimoto, and Gallant (2011) noted that an encoding model could, in principle, provide a complete functional description of a region of interest, while a decoding model could only provide a partial description (Naselaris et al., 2011). Therefore, the encoding analysis revealed that some additional regions (V1, V2d, V7, MTC, hV4, LO, LOC, and POIPSd) could better represent disparity information in contrast to the decoding analysis in this study. Our results suggest that encoding and decoding analyses can be used together to provide complementary information.

4.5. Parameter distributions of the three voxel‐wise encoding models

The parameter d_0 is the position of the Gaussian envelope of the Gabor model. The results indicated that the distribution of the estimated parameter d_0 was related to the disparity range used by FCM and UCM. Because the coarse model used a larger disparity range than the fine model in the model estimation for FCM, the coarse model had a wider distribution range of d_0 than the fine model (see Figure 9). For UCM, the crossed and uncrossed disparity levels were used to estimate the crossed model and the uncrossed model, respectively. Thus, the d_0 distribution of the crossed model mainly clustered in the negative (crossed) disparities, while that of the uncrossed model mainly clustered in the positive (uncrossed) disparities (see Supporting Information Figure S13). Moreover, the parameter d_0 of all three encoding models was mainly distributed between −30 and +30 arcmin for most voxels (see Figures 9, S12, and S13). This suggests that most voxels were sensitive to disparity levels with a magnitude of less than 30 arcmin, which is consistent with a previous study (Backus et al., 2001). The parameter f determines the contribution of the cosine term to the Gabor model: if f equals zero, the Gabor model reduces to a Gaussian model. FCM had fewer voxels with f equal to zero than SM and UCM, which may suggest that more voxels of SM and UCM effectively used a Gaussian model to fit the disparity responses (see Figures 9, S12, and S13). Although some electrophysiology studies used a Gaussian model to fit disparity‐tuning curves (DeAngelis & Uka, 2003; Prince, Pointon, et al., 2002), a recent fMRI study suggested that the Gaussian model was insufficient to capture the different response profiles of the voxels compared to the Gabor model (Goncalves et al., 2015). Therefore, the f distribution may partly explain the better performance of FCM relative to SM and UCM. For fMRI data, the responses to a specific cognitive task typically extend over multiple voxels, and neighboring voxels generally show similar responses to a task. Thus, the model parameters of neighboring voxels may vary smoothly across voxels.

5. CONCLUSION

This study constructed three 1D Gabor encoding models (SM, FCM, and UCM) to explore the relationship between disparity levels and brain activities in the human brain. Encoding analysis revealed that FCM was the best model to fit disparity responses in the human brain. Meanwhile, both encoding and decoding analyses indicated that V3A complex showed a high prediction accuracy for brain activity and identification accuracy for disparity levels. These results indicate that V3A complex may be more important for processing disparity information than the other brain regions.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

Supporting information

Figure S1 Identification accuracies of all ROIs of the three encoding models for Subject 1 (A), Subject 2 (B), Subject 3 (C), Subject 4 (D), Subject 5 (E), Subject 6 (F), Subject 7 (G), and Subject 8 (H). V3A* indicates V3A complex

Figure S2 True negative rates of all ROIs of the three encoding models for Subject 1 (A), Subject 2 (B), Subject 3 (C), Subject 4 (D), Subject 5 (E), Subject 6 (F), Subject 7 (G), and Subject 8 (H). V3A* indicates V3A complex

Figure S3 Eight subjects' decoding analysis results of V3A complex for SM. (A) Predicted disparities using the responses in V3A complex of the eight subjects for SM. The black asterisks indicate the predicted mean disparity levels across the 131 voxel populations, and the red circles indicate the real disparity levels during the model‐testing runs. The error bars represent the SDs of the mean predicted disparity levels. IR represents the identification accuracy of V3A complex. (B) The mean correlation matrices between the measured voxel response patterns and predicted voxel response patterns across the 131 voxel populations

Figure S4 Eight subjects' decoding analysis results of V3A complex for UCM. (A) Predicted disparities using the responses in V3A complex of the eight subjects for UCM. The black asterisks indicate the predicted mean disparity levels across the 131 voxel populations, and the red circles indicate the real disparity levels during the model‐testing runs. The error bars represent the SDs of the mean predicted disparity levels. IR represents the identification accuracy of V3A complex. (B) The mean correlation matrices between the measured voxel response patterns and predicted voxel response patterns across the 131 voxel populations

Figure S5 Variation of the identification accuracies of voxel populations in V3A complex for SM. (A) Subject 1. (B) Subject 2. (C) Subject 3. (D) Subject 4. (E) Subject 5. (F) Subject 6. (G) Subject 7. (H) Subject 8. Each black dot represents the identification accuracy of each voxel population

Figure S6 Variation of the identification accuracies of voxel populations in V3A complex for UCM. (A) Subject 1. (B) Subject 2. (C) Subject 3. (D) Subject 4. (E) Subject 5. (F) Subject 6. (G) Subject 7. (H) Subject 8. Each black dot represents the identification accuracy of each voxel population

Figure S7 Model prediction errors of all ROIs of the three encoding models for Subject 1 (A), Subject 2 (B), Subject 3 (C), Subject 4 (D), Subject 5 (E), Subject 6 (F), Subject 7 (G), and Subject 8 (H). V3A* indicates V3A complex

Figure S8 Statistical parametric maps showing significantly lower model prediction errors of SM from the occipital view. (A) Subject 1. (B) Subject 2. (C) Subject 3. (D) Subject 4. (E) Subject 5. (F) Subject 6. (G) Subject 7. (H) Subject 8. 1: V3D, 2: V3C, 3: V3B, 4: V3A, 5: MT/V5, 6: pV4t, 7: pFST, 8: pMSTv, 9: LO1, 10: LO2. The black dashed lines describe the areas V1, V2d, V3d, V3A complex (1, 2, 3, 4), V7, MTC (5, 6, 7, 8), V2v, V3v, hV4, and LO (9, 10) in each subject. The white dashed lines describe the visual area LOC in each subject

Figure S9 Statistical parametric maps showing significantly lower model prediction errors of SM from the parietal view. (A) Subject 1. (B) Subject 2. (C) Subject 3. (D) Subject 4. (E) Subject 5. (F) Subject 6. (G) Subject 7. (H) Subject 8. The black dashed lines delineate the areas POIPSd, DIPSMd, DIPSAd, and phAIPd in each subject

Figure S10 Statistical parametric maps showing significantly lower model prediction errors of UCM from the occipital view. (A) Subject 1. (B) Subject 2. (C) Subject 3. (D) Subject 4. (E) Subject 5. (F) Subject 6. (G) Subject 7. (H) Subject 8. 1: V3D, 2: V3C, 3: V3B, 4: V3A, 5: MT/V5, 6: pV4t, 7: pFST, 8: pMSTv, 9: LO1, 10: LO2. The black dashed lines delineate the areas V1, V2d, V3d, V3A complex (1, 2, 3, 4), V7, MTC (5, 6, 7, 8), V2v, V3v, hV4, and LO (9, 10) in each subject. The white dashed lines delineate the visual area LOC in each subject

Figure S11 Statistical parametric maps showing significantly lower model prediction errors of UCM from the parietal view. (A) Subject 1. (B) Subject 2. (C) Subject 3. (D) Subject 4. (E) Subject 5. (F) Subject 6. (G) Subject 7. (H) Subject 8. The black dashed lines delineate the areas POIPSd, DIPSMd, DIPSAd, and phAIPd in each subject

Figure S12 Subject 1's model parameter distributions of voxels with significantly lower model prediction errors (p < 0.01) for SM

Figure S13 Subject 1's model parameter distributions of voxels with significantly lower model prediction errors (p < 0.01) for UCM

Table S1 The numbers of voxels in 15 ROIs for the eight subjects. V3A* indicates V3A complex

Table S2 The numbers of voxels in four subregions of V3A complex, four subregions of MTC, and two subregions of LO for the eight subjects

Li Y, Hou C, Zhang C, et al. Disparity level identification using the voxel‐wise Gabor model of fMRI data. Hum Brain Mapp. 2019;40:2596–2610. https://doi.org/10.1002/hbm.24547

Funding information Key Program of National Natural Science Foundation of China, Grant/Award Number: 61731003; National Natural Science Foundation of China, Grant/Award Numbers: 61671067, 61471262, 61520106002; the Fundamental Research Fund for the Central Universities, Grant/Award Number: 2017XTCX04; the Interdiscipline Research Fund of Beijing Normal University

REFERENCES

1. Abdollahi, R. O., Kolster, H., Glasser, M. F., Robinson, E. C., Coalson, T. S., Dierker, D., … Orban, G. A. (2014). Correspondences between retinotopic areas and myelin maps in human visual cortex. NeuroImage, 99, 509–524.
2. Adams, D. L., & Zeki, S. (2001). Functional organization of macaque V3 for stereoscopic depth. Journal of Neurophysiology, 86(5), 2195–2203.
3. Anzai, A., Chowdhury, S. A., & DeAngelis, G. C. (2011). Coding of stereoscopic depth information in visual areas V3 and V3A. The Journal of Neuroscience, 31(28), 10270–10282.
4. Backus, B. T., Fleet, D. J., Parker, A. J., & Heeger, D. J. (2001). Human cortical activity correlates with stereoscopic depth perception. Journal of Neurophysiology, 86(4), 2054–2068.
5. Binkofski, F., Buccino, G., Posse, S., Seitz, R. J., Rizzolatti, G., & Freund, H. J. (1999). A fronto‐parietal circuit for object manipulation in man: Evidence from an fMRI‐study. European Journal of Neuroscience, 11(9), 3276–3286.
6. Brouwer, G. J., Van Ee, R., & Schwarzbach, J. (2005). Activation in visual cortex correlates with the awareness of stereoscopic depth. The Journal of Neuroscience, 25(45), 10403–10413.
7. Cavina‐Pratesi, C., Goodale, M. A., & Culham, J. C. (2007). FMRI reveals a dissociation between grasping and perceiving the size of real 3D objects. PLoS One, 2(5), e424.
8. Claeys, K. G., Lindsey, D. T., De Schutter, E., & Orban, G. A. (2003). A higher order motion region in human inferior parietal lobule: Evidence from fMRI. Neuron, 40(3), 631–642.
9. Cottereau, B. R., McKee, S. P., Ales, J. M., & Norcia, A. M. (2011). Disparity‐tuned population responses from human visual cortex. The Journal of Neuroscience, 31(3), 954–965.
10. Culham, J. C., DeSouza, J., Woodward, S., Kourtzi, Z., Gati, J., Menon, R., & Goodale, M. (2001). Visually‐guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Journal of Vision, 1(3), 194.
11. DeAngelis, G. C., & Newsome, W. T. (1999). Organization of disparity‐selective neurons in macaque area MT. The Journal of Neuroscience, 19(4), 1398–1415.
12. DeAngelis, G. C., & Uka, T. (2003). Coding of horizontal disparity and velocity by MT neurons in the alert macaque. Journal of Neurophysiology, 89(2), 1094–1111.
13. Denys, K., Vanduffel, W., Fize, D., Nelissen, K., Peuskens, H., Van Essen, D., & Orban, G. A. (2004). The processing of visual shape in the cerebral cortex of human and nonhuman primates: A functional magnetic resonance imaging study. The Journal of Neuroscience, 24(10), 2551–2565.
14. Durand, J.‐B., Nelissen, K., Joly, O., Wardak, C., Todd, J. T., Norman, J. F., … Orban, G. A. (2007). Anterior regions of monkey parietal cortex process visual 3D shape. Neuron, 55(3), 493–505.
15. Durand, J.‐B., Peeters, R., Norman, J. F., Todd, J. T., & Orban, G. A. (2009). Parietal regions processing visual 3D shape extracted from disparity. NeuroImage, 46(4), 1114–1126.
16. Georgieva, S., Peeters, R., Kolster, H., Todd, J. T., & Orban, G. A. (2009). The processing of three‐dimensional shape from disparity in the human brain. The Journal of Neuroscience, 29(3), 727–742.
17. Gilaie‐Dotan, S., Ullman, S., Kushnir, T., & Malach, R. (2002). Shape‐selective stereo processing in human object‐related visual areas. Human Brain Mapping, 15(2), 67–79.
18. Gnadt, J. W., & Mays, L. E. (1995). Neurons in monkey parietal area LIP are tuned for eye‐movement parameters in three‐dimensional space. Journal of Neurophysiology, 73(1), 280–297.
19. Goncalves, N. R., Ban, H., Sánchez‐Panchuelo, R. M., Francis, S. T., Schluppeck, D., & Welchman, A. E. (2015). 7 tesla fMRI reveals systematic functional organization for binocular disparity in dorsal visual cortex. The Journal of Neuroscience, 35(7), 3056–3072.
20. Howard, I. P., & Rogers, B. J. (1995). Binocular vision and stereopsis. New York, NY: Oxford University Press.
21. Hubel, D. H., & Livingstone, M. S. (1987). Segregation of form, color, and stereopsis in primate area 18. The Journal of Neuroscience, 7(11), 3378–3415.
22. Janssen, P., Vogels, R., & Orban, G. A. (2000). Selectivity for 3D shape that reveals distinct areas within macaque inferior temporal cortex. Science, 288(5473), 2054–2056.
23. Jastorff, J., Abdollahi, R. O., Fasano, F., & Orban, G. A. (2016). Seeing biological actions in 3D: An fMRI study. Human Brain Mapping, 37(1), 203–219.
24. Julesz, B. (1971). Foundations of cyclopean perception. Chicago, IL: University of Chicago Press.
25. Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from human brain activity. Nature, 452(7185), 352–355.
26. Kolster, H., Peeters, R., & Orban, G. A. (2010). The retinotopic organization of the human middle temporal area MT/V5 and its cortical neighbors. The Journal of Neuroscience, 30(29), 9801–9820.
27. Kourtzi, Z., Erb, M., Grodd, W., & Bülthoff, H. H. (2003). Representation of the perceived 3‐D object shape in the human lateral occipital complex. Cerebral Cortex, 13(9), 911–920.
28. Kourtzi, Z., & Kanwisher, N. (2000). Cortical regions involved in perceiving object shape. The Journal of Neuroscience, 20(9), 3310–3318.
29. Kroliczak, G., Cavina‐Pratesi, C., Goodman, D., & Culham, J. (2007). What does the brain do when you fake it? An fMRI study of pantomimed and real grasping. Journal of Neurophysiology, 97(3), 2410–2422.
30. Lehky, S. R., & Sejnowski, T. J. (1990). Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity [published erratum appears in J Neurosci 1991 Mar;11(3)]. The Journal of Neuroscience, 10(7), 2281–2299.
31. Li, Y., Zhang, C., Hou, C., Yao, L., Zhang, J., & Long, Z. (2017). Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex. BMC Neuroscience, 18(1), 80. https://doi.org/10.1186/s12868-017-0395-7
32. Menz, M. D., & Freeman, R. D. (2003). Stereoscopic depth processing in the visual cortex: A coarse‐to‐fine mechanism. Nature Neuroscience, 6(1), 59–65.
33. Minini, L., Parker, A. J., & Bridge, H. (2010). Neural modulation by binocular disparity greatest in human dorsal visual stream. Journal of Neurophysiology, 104(1), 169–178.
34. Naselaris, T., Kay, K. N., Nishimoto, S., & Gallant, J. L. (2011). Encoding and decoding in fMRI. NeuroImage, 56(2), 400–410.
35. Naselaris, T., Olman, C. A., Stansbury, D. E., Ugurbil, K., & Gallant, J. L. (2015). A voxel‐wise encoding model for early visual areas decodes mental images of remembered scenes. NeuroImage, 105, 215–228.
36. Nasr, S., & Tootell, R. B. (2018). Visual field biases for near and far stimuli in disparity selective columns in human visual cortex. NeuroImage, 168, 358–365.
37. Neri, P., Bridge, H., & Heeger, D. J. (2004). Stereoscopic processing of absolute and relative disparity in human visual cortex. Journal of Neurophysiology, 92(3), 1880–1891.
38. Orban, G. A., Claeys, K., Nelissen, K., Smans, R., Sunaert, S., Todd, J. T., … Vanduffel, W. (2006). Mapping the parietal cortex of human and non‐human primates. Neuropsychologia, 44(13), 2647–2667.
39. Orban, G. A., Janssen, P., & Vogels, R. (2006). Extracting 3D structure from disparity. Trends in Neurosciences, 29(8), 466–473.
40. Orban, G. A., Sunaert, S., Todd, J. T., Van Hecke, P., & Marchal, G. (1999). Human cortical regions involved in extracting depth from motion. Neuron, 24(4), 929–940.
41. Parker, A. J. (2007). Binocular depth perception and the cerebral cortex. Nature Reviews Neuroscience, 8(5), 379–391.
42. Patten, M. L., & Welchman, A. E. (2015). fMRI activity in posterior parietal cortex relates to the perceptual use of binocular disparity for both signal‐in‐noise and feature difference tasks. PLoS One, 10(11), e0140696.
43. Poggio, G. F. (1995). Mechanisms of stereopsis in monkey visual cortex. Cerebral Cortex, 5(3), 193–204.
44. Preston, T. J., Li, S., Kourtzi, Z., & Welchman, A. E. (2008). Multivoxel pattern selectivity for perceptually relevant binocular disparities in the human brain. The Journal of Neuroscience, 28(44), 11315–11327.
45. Prince, S., Cumming, B., & Parker, A. (2002). Range and mechanism of encoding of horizontal disparity in macaque V1. Journal of Neurophysiology, 87(1), 209–221.
46. Prince, S., Pointon, A., Cumming, B., & Parker, A. (2002). Quantitative analysis of the responses of V1 neurons to horizontal disparity in dynamic random‐dot stereograms. Journal of Neurophysiology, 87(1), 191–208.
47. Srivastava, S., Orban, G., & Janssen, P. (2006). Selectivity for three‐dimensional shape in macaque posterior parietal cortex. Paper presented online. https://lirias.kuleuven.be/handle/123456789/311835
48. Stevenson, S. B., Cormack, L. K., Schor, C. M., & Tyler, C. W. (1992). Disparity tuning in mechanisms of human stereopsis. Vision Research, 32(9), 1685–1694.
49. Tootell, R. B., & Hadjikhani, N. (2001). Where is ‘dorsal V4' in human visual cortex? Retinotopic, topographic and functional evidence. Cerebral Cortex, 11(4), 298–311.
50. Tootell, R. B., Hadjikhani, N., Hall, E. K., Marrett, S., Vanduffel, W., Vaughan, J. T., & Dale, A. M. (1998). The retinotopy of visual spatial attention. Neuron, 21(6), 1409–1422.
51. Tsao, D. Y., Vanduffel, W., Sasaki, Y., Fize, D., Knutsen, T. A., Mandeville, J. B., … Van Essen, D. C. (2003). Stereopsis activates V3A and caudal intraparietal areas in macaques and humans. Neuron, 39(3), 555–568.
52. Tyler, C. W., Likova, L. T., Chen, C.‐C., Kontsevich, L. L., Schira, M. M., & Wade, A. R. (2005). Extended concepts of occipital retinotopy. Current Medical Imaging Reviews, 1(3), 319–329.
53. Warnking, J., Dojat, M., Guérin‐Dugué, A., Delon‐Martin, C., Olympieff, S., Richard, N., … Segebarth, C. (2002). fMRI retinotopic mapping—Step by step. NeuroImage, 17(4), 1665–1683.
54. Welchman, A. E., Deubelius, A., Conrad, V., Bülthoff, H. H., & Kourtzi, Z. (2005). 3D shape perception from combined depth cues in human visual cortex. Nature Neuroscience, 8(6), 820–827.
55. Wheatstone, C. (1838). Contributions to the physiology of vision—Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions of the Royal Society of London, 128, 371–394.
56. Yan, S. (1985). Digital stereoscopic test charts. Beijing, China: People's Medical Publishing House.
