Author manuscript; available in PMC: 2023 Jan 2.
Published in final edited form as: Neuroimage. 2021 Nov 16;245:118737. doi: 10.1016/j.neuroimage.2021.118737

A Method for Mapping Retinal Images in Early Visual Cortical Areas

Matthew Defenderfer 1,2, Pinar Demirayak 1,2,*, Kristina M Visscher 1,2
PMCID: PMC9807285  NIHMSID: NIHMS1861313  PMID: 34798232

Abstract

The visual cortex has been a heavily studied region in neuroscience due to many factors, not the least of which is its well-defined retinotopic organization. This organization makes it possible to predict the general location of cortical regions that stimuli will activate during visual tasks. However, the precise and accurate mapping of these regions in human patients takes time, effort, and participant compliance that can be difficult to obtain in many patient populations. In humans, this retino-cortical mapping has typically been done using functional localizers that maximally activate the area of interest; the activation profile is then thresholded and converted to a binary mask region of interest (ROI). An alternative method involves performing population receptive field (pRF) mapping of the whole visual field and choosing vertices whose pRF centers fall within the stimulus. This method ignores the spatial extent of the pRF, which changes dramatically between central and peripheral vision. Both methods require a dedicated functional scan and depend on participants’ stable fixation. The aim of this project was to develop a user-friendly method that can transform a retinal object of interest (for example, an image, a retinal lesion, or a preferred locus for fixation) from retinal space to its expected representation on the cortical surface without a functional scan. We modeled the retinal representation of each cortical vertex as a 2D Gaussian with a location and spatial extent given by a previously published retinotopic atlas. To identify how affected any cortical vertex would be by a given retinal object, we took the product of the retinal object with the Gaussian pRF of that cortical vertex. Normalizing this value gives the expected response of a given vertex to the retinal object. This method was validated using BOLD data obtained with a localizer presenting discrete visual stimuli, and showed good agreement with predicted values.
Cortical localization of a visual stimulus or retinal defect can be obtained using our publicly available software, without a functional scan. Our software may benefit research with disease populations who have trouble maintaining stable fixation.

Keywords: Retino-cortical Mapping, Visual Cortex, population receptive field, fMRI, retinotopic map

1. INTRODUCTION

Identifying the predicted location of neural activity due to a given visual stimulus is useful in a range of experimental contexts. Example contexts include examining changes in the visual system after retinal degenerations, and examining patterns of connectivity to cortical regions representing different retinal locations. Previous research has used both retinotopy and functional localizers to map stimulus and retinal lesion locations from retinal or visual space onto the cortical surface (Baseler et al., 2011; Plank et al., 2014; Sunness et al., 2004). However, precise retinotopic localization and mapping can be difficult in certain low-vision patient populations. For example, macular degeneration is a degenerative eye disorder that causes loss of central vision (Bressler et al., 1988; Krill and Deutman, 1972). Loss of central vision and reliance on peripheral vision can cause a drastic decrease in fixation stability (Crossland et al., 2004; Rohrschneider et al., 1995). Poor fixation can lead to imprecise retinotopic mapping and localization. If the sight of a fixation target or stimulus is lost during the scan, visual search can also lead to increased head motion in these populations, causing motion artifacts. For populations where individual retinotopic mapping or functional localizers are not feasible, either because of vision issues or the extensive time required for data acquisition, a reliable method for transferring regions of interest in visual space to the cortical surface without needing functional scans is necessary. The method described here addresses this need, using a previously published retinotopic atlas (Benson et al., 2014) to define predicted visual activations in early visual areas.

Topographic maps are common in the cortical organization of sensory information such as tonotopic mapping (Formisano et al., 2003; Romani et al., 1982; Saenz and Langers, 2014), somatotopic mapping (Grafton et al., 1991; Rao et al., 1995), and retinotopic maps (Engel et al., 1997; Fox et al., 1987; Zeff et al., 2007). Previous studies have shown that the human visual system is retinotopically organized (Fox et al., 1987). Retinotopic organization is reliable across individuals. For example, posterior V1 responds to stimuli from the fovea and central vision whereas anterior V1 responds to peripheral vision. Further, stimuli in the lower vertical meridian, upper vertical meridian, and horizontal meridian are processed at the superior V1 border, inferior V1 border, and middle of V1, respectively (Wandell et al., 2007). Precise retinotopy is present in several brain areas, notably V1, V2, and V3, and because of this precise organization, it is possible to predict how retinal activity or injury maps to the cortical surface.

In general, the healthy human brain has a predictable large-scale folding pattern but some variability in local folding (Amunts et al., 2000). Typical examples are that the central sulcus lies along a vertical/horizontal axis near the middle of the lateral cortical surface and the calcarine sulcus lies along the medial surface of the occipital pole. This general folding pattern allows researchers to identify functional areas to a certain extent across the population (Frost and Goebel, 2013). Anatomical images can provide cortical landmarks that are accurate for delineating the visual cortex. The anatomy is also useful for predicting the visual field representations in the early visual cortex without needing functional scans (Benson et al., 2014, 2012). These atlases have been shown to accurately predict population receptive field (pRF) properties for each vertex in the early visual cortex (V1, V2, and V3) across individuals.

Here we provide a method to map retinal locations of interest (e.g. the location of a retinal scar) or visual stimuli to early visual areas on the human cortex. Our method allows use of a retinotopic atlas (such as Benson et al., 2014) to make a cortical surface-space representation of the object. The method iteratively applies a 2-D Gaussian with position and size equal to the pRF eccentricity, polar angle, and sigma (Dumoulin and Wandell, 2008) assigned to each vertex by the atlas to a visual or retinal space representation of an object of interest, for instance, a stimulus or lesion. Importantly, this method is appropriate for localizing visual objects that have highly irregular shapes such as some lesions due to macular degeneration. The accuracy of the model was tested and compared with beta values from a functional localizer experiment for both right and left visual field stimulation. Results indicated that our approach can precisely predict the spatial pattern of cortical response to a visual stimulus. The predicted responses are very accurate for a mean across participants, though there is variability in the fit for individual participants. The proposed anatomy-based model can shorten scanning time by reducing or removing the need for functional localizers in some study designs, especially important in patient populations where these scans can be unreliable.

2. MATERIALS AND METHODS

2.1. Cortical projection procedure.

The image projection to surface space begins by transferring a standard retinotopic atlas to the individual’s freesurfer cortical reconstruction. The atlas (Benson and Winawer, 2018; Benson et al., 2014) assigns predicted pRF positional properties to each vertex in early visual areas including V1, V2, and V3. These positional properties include eccentricity in degrees visual angle and angle in degrees of polar angle, as well as the radius of the pRF corresponding to the full width at half maximum (FWHM) of the modeled 2D Gaussian (Figure 1). Using these properties, for every vertex in a chosen area, we can calculate how much that vertex’s pRF overlaps with a given region of interest in visual space (Figure 1). A graphical representation of the workflow is summarized in Figure 2. All cortical regions of interest (ROIs) from V1 to V3 are created by following the same steps; however, only V1 is shown in Figure 2 to simplify the schematic diagram.

Figure 1.


Schematic representation of positional properties of each pixel (left) and projected representation of retinotopic atlases for eccentricity (in degrees of visual angle), angle (in degrees of polar angle), and radius of population receptive field (in degrees visual angle) in V1 to V3 on the anatomical cortical surface of a representative participant. The image is referenced to visual space where 90° polar angle is the upper vertical meridian.

Figure 2.


Visual flowchart showing the calculation of continuous fractional overlap values in V1, which correspond to the predicted response to a visual stimulus. Each step in the loop is performed for each vertex in a given visual region (V1, V2, or V3) according to the retinotopic atlas. A. For a given vertex, pRF properties are extracted from the atlas. B. In the stimulus image (black arc), relative to the given fovea location (gray dot), the pixel with eccentricity and polar angle with the closest match to the current vertex is defined as the center of the pRF in the image space. A 2D Gaussian representing the pRF is created with the center at the chosen pixel with full width at half maximum given by the sigma value for the current vertex. Example pRF widths are plotted with yellow, orange, green, and blue circles (sigma of 1, 2, 3, and 4 degrees visual angle, respectively). C. The stimulus image (C1) is dot multiplied by the pRF (a 2D Gaussian, C2) resulting in D. a pRF weighted by its overlap with the stimulus. E. The sum of the values in the weighted Gaussian (D) is then divided by the sum of the values in the original 2D Gaussian (C2) giving a fractional overlap. F. All fractional overlap values are stored in a label file that can be plotted on the cortical surface. The predicted response to the stimulus in (B) is plotted on the left hemisphere surface with fractional overlap values ranging from high (red) to low (dark blue).

The process begins with two 2D images, one representing a visual or retinal space image, and another image with the same dimensions showing the position of the center of the fovea. The fovea image is shifted to place the fovea in the image center, and the same shift is applied to the visual or retinal space image. After image centering, the algorithm checks whether the images are in retinal space or visual space, an option specified by the user when executing the script. If the images are in retinal space, they are flipped to visual space across the horizontal meridian defined by the placement of the fovea (because the retina is inverted relative to visual space). If the image is in visual space, no conversion is performed, and the pipeline continues. Each pixel is then assigned a polar angle and eccentricity based on distance from the center of the image. These distances are calculated using a degrees-per-pixel conversion factor input by the user. The pRF position for any given vertex is identified in image space by finding the pixel whose eccentricity and polar angle most closely match those of the vertex (Figure 2B). After finding the center of the pRF in image space, a 2D Gaussian is created using the following formula:
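The pixel-to-coordinate assignment described above can be sketched as follows. This is an illustrative sketch, not the authors’ released software; the function name and arguments are assumptions:

```python
import numpy as np

def pixel_to_visual_coords(shape, fovea_rc, deg_per_pix, retinal_space=False):
    """Assign an eccentricity and polar angle to every pixel.

    shape         -- (rows, cols) of the input image
    fovea_rc      -- (row, col) of the fovea after centering
    deg_per_pix   -- degrees of visual angle per pixel (user-supplied)
    retinal_space -- if True, flip across the horizontal meridian,
                     because the retina is inverted relative to visual space
    """
    rows, cols = np.indices(shape)
    # Offsets from the fovea, in degrees; +y points up in visual space.
    x = (cols - fovea_rc[1]) * deg_per_pix
    y = (fovea_rc[0] - rows) * deg_per_pix
    if retinal_space:
        y = -y  # retina-to-visual-space flip across the horizontal meridian
    eccentricity = np.hypot(x, y)                     # degrees visual angle
    polar_angle = np.degrees(np.arctan2(y, x)) % 360  # 90 deg = upper vertical meridian
    return eccentricity, polar_angle
```

With this convention, a pixel directly above the fovea maps to 90° of polar angle, matching the visual-space reference used in Figure 1.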

f(x, y) = exp(-((x - x0)^2 / (2σx^2) + (y - y0)^2 / (2σy^2)))

Where x, x0, y, and y0 are the horizontal position of a pixel, the horizontal position for the center of the pRF, the vertical position of a pixel, and the vertical position of the center of the pRF, respectively. σx and σy represent the Gaussian width in horizontal and vertical directions, respectively, and are both set to be equal to the sigma value derived from the atlas for that vertex. A visual representation of the pRF is shown in Figure 2C2.
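A minimal sketch of this Gaussian, assuming the atlas sigma has already been converted to pixel units (the function name is illustrative, not part of the released code):

```python
import numpy as np

def prf_gaussian(shape, center_rc, sigma_pix):
    """2D Gaussian pRF with equal width in x and y (sigma from the atlas).

    Implements f(x, y) = exp(-((x - x0)^2 / (2 sigma^2)
                             + (y - y0)^2 / (2 sigma^2))).
    center_rc -- (row, col) pixel closest to the vertex's atlas position
    sigma_pix -- atlas sigma value converted to pixels
    """
    rows, cols = np.indices(shape)
    return np.exp(-(((cols - center_rc[1]) ** 2) / (2 * sigma_pix ** 2)
                    + ((rows - center_rc[0]) ** 2) / (2 * sigma_pix ** 2)))
```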

The dot product of the 2D Gaussian image and the ROI image is then calculated. The result is the original pRF masked by the ROI. The fractional overlap is then calculated as the sum of the pixel values in the masked pRF image divided by the sum of the pixel values in the unmasked pRF image. This process is repeated for every vertex in the chosen visual area for both hemispheres. The resulting fractional overlap values correspond to the predicted response of a cortical vertex to a given retinal input. Maximum eccentricity was restricted to 60 degrees visual angle to avoid errors in V3 area boundaries (Benson et al., 2014). These fractional overlap values are stored in freesurfer formatted label files. Values in these labels range from 0 to 1. If the area of the pRF is filled by the visual ROI (a visual stimulus, scotoma, etc.), then the fractional overlap will be 1. If the area of the pRF does not overlap much with the visual ROI, the fractional overlap will be close to 0.
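The per-vertex overlap computation can be sketched as follows; `fractional_overlap` is an illustrative name, and `prf` is any 2D Gaussian built as described above:

```python
import numpy as np

def fractional_overlap(stimulus, prf):
    """Predicted response of one vertex: the stimulus (ROI) image is
    multiplied element-wise by the vertex's Gaussian pRF, and the sum of
    the masked pRF is divided by the sum of the unmasked pRF (0 to 1)."""
    masked = stimulus * prf
    return masked.sum() / prf.sum()
```

A stimulus that completely covers the pRF yields 1, while one that misses it entirely yields 0, matching the label values described in the text.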

These values take into account the standard Gaussian shape of the receptive field and are continuous. Alternatively, cortical labels can be made with binarized values that show whether the vertex has a predicted pRF center inside the visual or retinal ROI. In this case, no Gaussian is modelled, and the value of each vertex is determined by whether the pixel with the closest assigned eccentricity and polar angle is a part of the stimulus or not.

2.2. Structural and fMRI data acquisition.

Participants were scanned on a 3T Siemens Prisma scanner. High-resolution, three-dimensional MPRAGE, T1-weighted anatomical images (TR = 2400 ms, TE = 2.14 ms, flip angle = 8 degrees, FOV = 224 × 224 mm2, voxel size = 1 mm isotropic, iPAT = 2) were acquired with a 64-channel phased-array head coil. For localizer fMRI scans we used an echo-planar imaging (EPI) sequence (voxel size = 2 mm isotropic; TR = 800 ms; TE = 33.1 ms; flip angle = 52 degrees; multiband factor = 8; FOV = 208 × 180 mm2; phase encode direction = anterior-posterior; number of volumes = 375). The functional data were acquired in a single 5-minute run. Stimuli (Figure 3) were shown to participants, with the colors switching white-to-black at 8 Hz for 10 seconds at a time, with rest periods of 10 seconds during which only the fixation cross was presented.

Figure 3.


Flickering checkerboard stimulus used in an fMRI task to activate both visual hemifields. Participants were asked to fixate on the yellow cross located at the center of the screen. Each visual stimulus had a radius of 2.5 degrees of visual angle, with its center located 8.5 degrees of visual angle from the fixation cross along the horizontal axis.

2.3. Structural and Functional MRI Data Preprocessing.

We followed HCP pipelines to perform distortion correction, surface construction, and alignment to standard space for anatomical data (Glasser et al., 2013). These anatomical preprocessing steps include preparation of anatomical data for freesurfer, correction for gradient distortions, alignment of the T1w and T2w images, correction for magnetic field inhomogeneities, and downsampling to 1 mm isotropic voxels. The preprocessing steps output the corrected T1 anatomical file in CIFTI and GIFTI formats. Based on these anatomical data, a mid-thickness surface was identified on each hemisphere between the white and pial surfaces, and the anatomical data were then registered to standard space.

We also followed the HCP pipeline to preprocess functional data, first in native volumetric space. These steps include distortion correction, motion correction, alignment to anatomical data, and registration to standard volumetric space (Glasser et al., 2013). We then converted the volumetric functional data into 32k standard surface space.

2.4. Processing functional localizer data.

The individual subjects’ functional localizer data were analyzed with a GLM, and the beta weights for each vertex were extracted, creating an activation map. The functional localizer time course was analyzed with a single-subject general linear model (GLM) in FSL (FEAT, fMRI Expert Analysis Tool (Jenkinson et al., 2012)), using the onset of the flickering checkerboard stimuli as a predictor. The model was defined for 25 volumes, where each block had a length of 12 volumes. The design of the study was entered in three-column format in FEAT and convolved with a canonical hemodynamic response function (HRF) along with its temporal derivative. Cluster-based thresholding was set to p < 0.05, z > 2.3 with an implementation of Gaussian Random Field Theory (Friston et al., 1994). Beta weights extracted from the GLM analysis are shown in Figure 4C.

Figure 4.


Representation of created regions of interest and cortical activity based on the mean responses to a functional localizer. A. Early visual areas from V1 to V3, defined based on the Benson retinotopic atlas. B. Fractional overlap regions of interest were created in all early visual areas; these correspond to predicted activity in response to the stimuli in Figure 3. Yellow areas represent vertices whose pRFs have a high overlap with the stimulus (fractional overlap values > 0.5) whereas orange and red areas have medium to low fractional overlap. Vertices with less than 0.1 fractional overlap were not colored. C. Activation maps in response to functional localizer stimuli are represented on the 32k standard space cortical surface. This figure represents the mean activation map across all participants.

2.5. Participants.

We included data from 37 participants with normal or corrected-to-normal vision (M/F: 8/29; mean age = 23.92 years, age range = 18–30 years; visual acuity range: 20/10 – 20/25). The present study was approved by the University of Alabama at Birmingham (UAB) Institutional Review Board, and all participants provided informed consent for their participation.

2.6. Localizer fMRI stimuli.

The experiment was conducted in complete darkness, other than the visual stimulus presentation screen. The stimulus consisted of a centrally presented cross and one peripherally presented stimulus in each visual field (Figure 3). The stimulus on each side was a circle filled with a flickering checkerboard with a radius of 2.5 degrees of visual angle, located 8.5 degrees of visual angle to the left or right of the fixation cross. This evoked neural activity in both right and left hemisphere visual areas. The background of the screen was light gray (input to the R, G, and B channels was 128). To ensure that participants fixated at the center of the screen, eye position was tracked with an MR-compatible EyeLink 1000 eye tracker. Stimuli in both visual fields were shown simultaneously in a blocked fMRI design; each active and passive block was 10 seconds long. There were 15 active blocks during the 300-second run.

2.7. Testing Accuracy of Retino-Cortical Mapping by Comparing to Functional Localizers

The resulting fractional overlap ROIs as well as the functional localizer General Linear Model (GLM) analysis results were projected onto the cortical surface in 32k standard cifti space (Figure 4). V1, V2, and V3 boundaries were established and results of the functional localizer scan revealed that activity maps were accurately localized within the boundaries (Figure 4).

2.8. Comparison of Cortical Region of Interest and Mean Localizer Activity.

The Dice Similarity Coefficient quantifies the spatial overlap between two spatial maps (Novosad et al., 2020; Zou et al., 2004). Weighted cortical regions of interest, created by projecting the task-based fMRI stimulus onto the cortex using our method, were thresholded at 1.5. In addition, the mean cortical activity from the task-based functional localizer was thresholded at beta = 2. To compare the spatial overlap of the mean cortical activity and the cortical regions of interest, we used the Dice Similarity Coefficient, given by the following formula:

Dice Similarity Coefficient = 2|X ∩ Y| / (|X| + |Y|)
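The Dice computation can be sketched as follows; `dice_coefficient` is an illustrative name, not part of the released software:

```python
import numpy as np

def dice_coefficient(x, y):
    """Dice Similarity Coefficient between two binary masks:
    2 * |X intersect Y| / (|X| + |Y|)."""
    x = np.asarray(x).astype(bool)
    y = np.asarray(y).astype(bool)
    denom = x.sum() + y.sum()
    # Two empty masks are treated as perfectly overlapping.
    return 2.0 * np.logical_and(x, y).sum() / denom if denom else 1.0
```

Identical masks give a coefficient of 1, and disjoint masks give 0.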

3. RESULTS

Mapping of a visual stimulus or a part of the retina to the cortex was performed to create images representing how strongly each vertex is expected to respond to a retinal object. The accuracy of the proposed method was tested by stimulating both visual hemifields with flickering checkerboard stimuli and comparing the responses to predicted responses based on weighted fractional overlap values derived using the proposed method. The transfer method is described in Figure 2. Figure 4 compares the pattern of predicted responses to actual activity during the localizer across multiple participants. In Figure 5, for each hemisphere in each individual, we chose the vertex with the maximum fractional overlap with the stimulus as a central vertex. From this central vertex, the geodesic distance to every other vertex on the surface can be calculated in terms of vertex steps. Figure 5 illustrates this concept, from the central vertex at 0 steps out to a “ring” 10 steps from the central vertex. When the central vertex is located in the middle of the calcarine sulcus on Connectome Workbench’s fs_LR 32k atlas, midway from anterior to posterior V1, a 10-step geodesic distance is roughly the distance from the middle of the calcarine sulcus to the top of the gyrus at the V1/V2 border.

Figure 5.


Mean fractional overlap values from predicted responses and GLM beta values from the functional localizer plotted against geodesic distance from the center of the predicted response. A. Concept of steps from a central vertex shown on the left hemisphere fs_LR 32k surface. Gray shading represents curvature. Boundary lines represent areal borders from Multimodal Surface Matching algorithm (MSMAll) standard atlas. Given a chosen central vertex, steps are defined as the geodesic distance from that central vertex. RGB color represents this geodesic distance up to a maximum of 10 steps. Each color “ring” represents vertices of a given geodesic distance. B. Beta values were normalized to the mean beta at the center of the predicted response defined as 0 steps from the center on the x-axis. The same was done to fractional overlap values. Error bars show ±1 standard error. Ranges and standard deviations of these values are shown in Table 1.

Using these geodesic distances, we calculated, for each participant, the average beta value and the average fractional overlap value at each “ring” from the center of the predicted response. To set beta values and fractional overlap values on the same scale for ease of comparison, we normalized both using the following procedure: mean fractional overlap and beta values were each normalized to their respective between-subject mean at the central vertex. In other words, the across-subject average beta value was calculated at the chosen central vertex, and each individual beta value was divided by this mean; the same procedure was applied to the fractional overlap values. The sample mean betas and fractional overlaps for each step and each hemisphere are shown in Figure 5. The pattern of descent in fractional overlap values closely matches the descent of mean normalized beta values moving outward from the center of the label. This indicates that, on average, the pattern of activity from a functional localizer is well predicted using retinotopic information from a standard atlas, without the need for any additional functional scans. Furthermore, because the mean normalized beta values only decrease with distance from the center of the label, the placement of the center of the label in the visual cortex on average matches the placement of the area activated by the functional localizer. In other words, there is no reliable offset in location between the predicted response and the functionally activated region.
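The normalization described above can be sketched as follows, assuming each subject’s per-ring averages are stored as one row of a matrix (rows = subjects, columns = steps from the central vertex); the function name is an assumption:

```python
import numpy as np

def normalize_to_center(values_by_subject):
    """Normalize each subject's per-ring values by the across-subject
    mean at step 0 (the central vertex), putting beta values and
    fractional overlap values on the same scale."""
    values = np.asarray(values_by_subject, dtype=float)
    center_mean = values[:, 0].mean()  # across-subject mean at the central vertex
    return values / center_mean
```

After this normalization, the across-subject mean at step 0 is 1 by construction, so the two measures can be compared ring by ring.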

Additionally, we used the Dice similarity coefficient to explore the spatial similarity between cortical projections of the functional localizer stimulus created using our method (i.e., predicted cortical regions) and mean functional activity obtained from the GLM results (i.e., actual cortical regions active during stimulus presentation). We found that the Dice similarity coefficient (averaged across the left and right hemispheres) was 0.99, showing that our method accurately maps the corresponding cortical areas.

While comparisons of fractional overlaps and normalized beta values from functional localizers are well matched at the population-average level, comparisons at the individual level can show much more variation. This variation is shown in Figure 6, which details four representative subjects’ data. PVTS117 (upper left) shows an overall close match between the activity pattern of the localizer and the fractional overlap values predicted using our method. Other participants show relatively close matches in terms of shape on the cortical surface, but the magnitude of activity is lower (PVTS133, lower left) or higher (PVTS123, upper right) than predicted. On the other hand, PVTS111 (lower right) shows a poor fit between our model and the actual functional activity, evidenced by the very low normalized beta values at the central steps and by the lack of punctate cortical activity on the surface.

Figure 6.


Representation of regions of interest and cortical activity based on the functional localizer from 4 representative individuals. While some individuals’ normalized functional activity matches the predicted activity (“Overlap”), others show differences in the amplitude or distribution of activity. Early visual areas from V1 to V3 were defined based on individual subject anatomical data. Fractional overlap regions of interest were created in all early visual areas. Both activation maps in response to functional localizer stimuli and weighted ROIs are represented on the 32k standard space cortical surface; functional activity was thresholded at 0.2 and regions of interest were thresholded at 0.1 in all representative participants. The heat map scales for beta values and fractional overlap values are identical across all images.

While predicted fractional overlap values using our model are very accurate at a population level, caution should be applied when using the model to draw conclusions from a very low sample size due to variability in how well the population-based retinotopic atlas maps onto an individual participant’s visual cortex.

While predicting areas of cortical activity due to stimuli is one way to use this method, it can also be used to map lesions or other retinal areas of interest onto an individual’s visual cortex. Figure 7 illustrates an example of this: identifying the predicted cortical projection of a retinal lesion in a patient who has macular degeneration. Macular degeneration is an eye disease that leads to central vision loss due to a scotoma at and surrounding the fovea. Figure 7 shows that the scotoma region can be mapped on the early areas of the cortex that are known to process central vision. In this case, binary ROIs were made that represent vertices whose receptive field center is located within the scotoma boundary.

Figure 7.


Example transfer of a region of interest from retinal space to cortical space. A. Photograph of a fundus from an individual with a lesion in central vision. B. The lesion has been outlined (black) with the placement of fovea marked (blue). Binary versions of the transferred labels (yellow) are shown in C. V1, D. V2, and E. V3. The black outlines represent the given visual area defined by the Benson retinotopic atlas.

4. DISCUSSION

In this study, we proposed a novel post-processing method for localizing the cortical representation of a visual stimulus or retinal region. Due to the reliable organization of the visual cortex across individuals, cortical maps can be obtained using an atlas-based approach. In a typical experiment, taking a fully personalized approach requires a great deal of fMRI data to localize cortical regions of interest. These may include a functional localizer and retinotopy experiments that will increase the time in the scanner and scanning cost. The proposed approach is an alternative, using solely anatomical data along with freely available code to precisely predict the activation patterns due to visual stimuli or localize the representation of a retinal location. The precision of the proposed retino-cortical mapping method was verified with a visual localizer fMRI study on a healthy population, showing a good agreement between measured and predicted responses. The retino-cortical mapping method enables localization of not only visual stimuli but also retinal defects such as scars on the early cortical visual areas.

4.1. Atlas-Based Retinotopic Estimations of Early Visual Areas

Prior studies showed that the fundus of the calcarine sulcus represents the horizontal azimuth of the perceived visual field (Frost and Goebel, 2013; Rajimehr and Tootell, 2009). Polar angle, eccentricity, and pRF estimations are tightly coupled to the pattern of gyral and sulcal curvature on the flattened 2D surface (Benson et al., 2012). Algebraic models of V1-V3 use a log-polar transformation to map visual field position to the flattened cortical surface (Balasubramanian et al., 2002). Functional retinotopy data were incorporated into the algebraic model to improve boundaries (Benson et al., 2018, 2014). These approaches result in accurate predictions of retinotopy and in reliable boundaries between early visual areas. Analyses showed that atlas-based retinotopic predictions are as accurate as standard retinotopic mapping when compared against the gold standard of extensive retinotopic data collection, which is not feasible in most experimental paradigms (Benson et al., 2018, 2014).

4.2. Prediction Error in Atlas-Based Estimations

Overall prediction error for retinotopic atlases for V1 to V3 has been reported as 10.93° of polar angle and 0.51° of eccentricity (Benson et al., 2014), indicating good prediction accuracy. V1-V3 had generally small and uniform prediction bias; however, Benson and colleagues found that their anatomical template consistently overpredicts observed polar angle values at the V3/V3A border (Benson et al., 2014). Although we did not include the V3A region, Benson and colleagues (2014) noted higher error rates near the dorsal V3 boundary, possibly because poor across-subject registration of the V3A region extends into V3. To account for this erroneous overprediction at the dorsal V3 boundary, we restricted maximum eccentricity to 60 degrees of visual angle.

4.3. Utility of the Method

Visual field representations can be successfully identified on the human cortex using retinotopy experiments (DeYoe et al., 1996; Engel et al., 1994). However, these mapping methods largely depend on human-drawn boundary estimations, which can differ between investigators. The proposed method can reduce these between-investigator differences by automating the retino-cortical mapping process. It also reduces the number of scans required for cortical mapping.

Functional localizers and retinotopic mapping both increase time in the scanner and are subject to motion artifacts. They are also dependent on proper fixation ability from the participant, which can be an issue in certain patient populations (Crossland et al., 2004). Additionally, standard retinotopic mapping is subject to the spatial constraints of the scanner and the stimulus presentation device and so is only able to accurately map a relatively small portion of the overall visual field. Population-based retinotopic atlases (e.g., Benson et al., 2014) have been shown to be very reliable in their pRF prediction accuracy, especially in V1, and are only dependent on having a good-quality structural scan, which is generally easier to obtain than functional data. The proposed method takes the population-based atlases a step further and uses their pRF estimations in place of standard visual field mapping in order to accurately transfer ROIs of arbitrary size and shape from visual or retinal space to the cortical surface. We showed that the resulting map of fractional overlap values accurately matches functional activity profiles from a visual stimulus (Figure 5), meaning the method can provide a reasonable replacement for functional localizers.

One important application of an atlas-based approach is the comparison of individuals with diseases to a healthy population (Brewer, 2009; Marquand et al., 2016; Wolfers et al., 2018). In some patient populations, retinotopy or other functional localizers may not be feasible. For example, in retinal diseases, fixation instability means that functional localizer or retinotopic mapping data may not yield accurate localization. The proposed method enables users to characterize localized differences in structure and function in early visual cortical areas (V1-V3) in populations with poor fixation capabilities, for comparison to the healthy population.

The utility of this method is far-reaching. Mapping locations of visual stimulation is the most obvious use case, but the method also offers the possibility of precisely locating projections of retinal landmarks, such as retinal lesions, on the cortical surface. Additionally, the idea behind this method could be used beyond the visual cortex. Analogous sensory and motor maps occur in auditory cortex (Wessinger et al., 1997), somatosensory cortex (Sanchez-Panchuelo et al., 2010), and primary motor cortex (Lotze et al., 2000); similar mapping to cortical space is possible wherever an accurate map or atlas exists.

4.4. Creating Regions Of Interest

The overlap values described here represent the association between a vertex in visual cortex and a visual or retinal region of interest, for example, a retinal lesion or a visual stimulus. These values can be continuous, taking into account the Gaussian shape of the pRF and how much it overlaps with the visual or retinal ROI (Figures 4 and 6). These types of labels are appropriate for studies where one wants to compare a predicted neural response to an actual neural response, or where one wants to weight vertices by their contribution to the representation of a given portion of visual space. These values can also be binarized, taking into account only whether the center of the receptive field is inside the ROI (Figure 7). This would be appropriate in a classical region-of-interest approach where each vertex is either 1 or 0, part of the region or not. For example, one may want to know which vertices have pRF centers within a certain location in visual space.
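The two labeling schemes can be illustrated side by side. This is a toy sketch: the three vertices, their atlas values, and the disc-shaped ROI are hypothetical.

```python
import numpy as np

# Hypothetical atlas values for three cortical vertices (all numbers illustrative)
x0 = np.array([0.5, 3.0, 7.5])          # pRF center x (deg visual angle)
y0 = np.array([0.2, -1.0, 4.0])         # pRF center y (deg visual angle)
overlap = np.array([0.93, 0.41, 0.02])  # continuous fractional-overlap values

# Continuous labels: weight each vertex by its contribution to the ROI's
# representation, e.g., for computing a weighted-mean response
weights = overlap / overlap.sum()

# Binary labels: a vertex belongs to the ROI iff its pRF center lies inside it;
# here the ROI is a disc of radius 5 deg centered at fixation
in_roi = (x0 ** 2 + y0 ** 2) <= 5.0 ** 2
```

The continuous scheme credits partially overlapping pRFs in proportion to their overlap, while the binary scheme discards that graded information in exchange for a classical 1/0 mask.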

Using an atlas to define retinotopically specific regions of interest, or expected neural responses to a given stimulus, has not been trivial until now. As we outlined in Figure 2, there is no simple one-to-one relationship between a location in visual space and a cortical vertex. Our method makes it straightforward for researchers to map a given image to its expected cortical representation. The fMRI data we present here are a proof of concept, showing in a relevant dataset that this method works; they are not meant to be an assessment of the across-subject reliability of the atlases (for that, see e.g. Benson et al., 2014). In fact, this method could be applied with any retinotopic atlas.

4.5. Limitations and Future Directions

This method is limited in some respects. Because a population-averaged atlas is used, mapping will not be perfectly accurate on an individual basis. While the human visual cortex is reliably organized across a population, differences in retinotopy among healthy individuals do exist (Benson and Winawer, 2018; Lage-Castellanos et al., 2020), and maps can be even more distorted in some patient populations (Morland et al., 2001). If this method is used in a patient population, care should be taken to verify that the population's average retinotopic organization matches that of healthy controls whenever the tested hypotheses rely on that assumption.

Figure 5 demonstrates that our model is capable of making accurate overall cortical localization predictions. However, at intermediate distances from the center, prediction accuracy is reduced compared to the minimum and maximum geodesic distances from the center. Although the method does not perfectly predict each subject's data on an individual level, it makes a very good estimate on an aggregate level. Beta weights within the activated regions also showed more variance near the center than far from the center. As the number of steps from the center increases, the number of vertices in the ring contributing to the mean increases. Thus, a possible explanation for the lower variance of peripheral rings is simply the larger number of contributing vertices, which decreases the impact of noise.
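This noise-averaging explanation follows from the standard scaling of the error of a mean with sample size, and can be checked numerically. The simulation below is purely illustrative, with made-up beta and noise values rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
true_beta, noise_sd = 0.2, 1.0  # hypothetical true response and vertex-level noise

stds = []
for n_vertices in (10, 100, 1000):  # ring size grows with distance from center
    # Simulate 5000 "rings": each vertex's beta is the true value plus Gaussian
    # noise, then average across the vertices within each ring
    ring_means = rng.normal(true_beta, noise_sd, size=(5000, n_vertices)).mean(axis=1)
    stds.append(ring_means.std())
# The spread of the ring mean shrinks roughly as noise_sd / sqrt(n_vertices)
```

With a hundred times more vertices in the ring, the variability of the ring mean drops by roughly a factor of ten, consistent with the lower variance observed for peripheral rings.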

4.6. Conclusion

We proposed a new method for retino-cortical mapping which applies the estimated population receptive field (pRF) for each vertex in early visual cortex (Benson and Winawer, 2018; Benson et al., 2014, 2012) onto images of visual stimuli to calculate the predicted response of each cortical vertex to a stimulus. We found that this method could predict the functional activity pattern for a visual stimulus in the primary visual cortex while only requiring acquisition of a T1-weighted structural MRI scan.

The proposed retino-cortical mapping method is not limited to mapping visual stimuli; it can also be used to define the cortical projections of retinal regions of interest such as lesions (see Figure 7 for an example). It can be used while planning an experiment to estimate the activation profile of a visual stimulus, or applied to a previously acquired dataset that lacks retinotopic or functional localizer scans when hypotheses involve comparisons at specific retinotopic locations. It can also be used to identify expected cortical projections of retinal features in populations with vision loss. Overall, this method provides a novel and multipurpose tool for accurately mapping retinal and visual stimuli onto the cortical surface while potentially reducing patient burden in the scanner and reliance on scan types that are difficult to collect in patient populations.

Table 1.

Maximum (max), minimum (min), standard deviation (sd), and mean (m) values from individual subjects as in Figure 5B. Dist. is the distance from the center of the predicted response location, in units of steps from the center.

Localizer
Dist.	min	max	sd	m
0	−0.57	2.15	0.59	1.00
1	−0.44	1.88	0.51	0.91
2	−0.22	1.73	0.41	0.78
3	−0.09	1.56	0.32	0.59
4	−0.11	1.20	0.25	0.39
5	−0.13	0.70	0.19	0.24
6	−0.19	0.43	0.15	0.13
7	−0.24	0.34	0.12	0.07
8	−0.23	0.28	0.10	0.04
9	−0.22	0.23	0.08	0.02
10	−0.17	0.17	0.07	0.01

Overlap
Dist.	min	max	sd	m
0	0.99	1.01	0.00	1.00
1	0.92	0.95	0.01	0.93
2	0.74	0.81	0.02	0.77
3	0.49	0.56	0.02	0.53
4	0.29	0.32	0.01	0.30
5	0.14	0.17	0.01	0.15
6	0.06	0.09	0.01	0.07
7	0.02	0.04	0.01	0.03
8	0.01	0.02	0.00	0.01
9	0.00	0.01	0.00	0.00
10	0.00	0.00	0.00	0.00

Acknowledgments & Funding

We thank Leland L. Fleming, Mandy Biles, and Paul Stewart for technical assistance with Matlab coding and magnetic resonance data acquisition. This study was supported by the Connectomes in Human Diseases Grant to KMV (NIH NEI 1 U01 EY025858-01A1), the UAB Center for Clinical and Translational Science (UL1 TR000165), the Vision Science Research Center (P30 EY003039), the Civitan International Research Center, the McKnight Brain Research Foundation, the Edward R. Roybal Center for Translational Research on Aging and Mobility (NIA 2 P30 AG022838), and the UAB Comprehensive Center for Healthy Aging.

Footnotes

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Data/Code Availability Statement

Code developed for this study is openly available in a GitHub repository: https://github.com/Visscher-Lab/image-to-surface

References

  1. Amunts K, Malikovic A, Mohlberg H, Schormann T, Zilles K, 2000. Brodmann’s areas 17 and 18 brought into stereotaxic space-where and how variable? Neuroimage 11, 66–84. doi: 10.1006/nimg.1999.0516
  2. Balasubramanian M, Polimeni J, Schwartz EL, 2002. The V1–V2–V3 complex: quasiconformal dipole maps in primate striate and extra-striate cortex. Neural Netw. 15, 1157–1163. doi: 10.1016/S0893-6080(02)00094-1
  3. Benson NC, Butt OH, Brainard DH, Aguirre GK, 2014. Correction of distortion in flattened representations of the cortical surface allows prediction of V1-V3 functional organization from anatomy. PLoS Comput. Biol. 10, e1003538. doi: 10.1371/journal.pcbi.1003538
  4. Benson NC, Butt OH, Datta R, Radoeva PD, Brainard DH, Aguirre GK, 2012. The retinotopic organization of striate cortex is well predicted by surface topology. Curr. Biol. 22, 2081–2085. doi: 10.1016/j.cub.2012.09.014
  5. Benson NC, Jamison KW, Arcaro MJ, Vu AT, Glasser MF, Coalson TS, Van Essen DC, Yacoub E, Ugurbil K, Winawer J, Kay K, 2018. The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis. J. Vis. 18, 23. doi: 10.1167/18.13.23
  6. Benson NC, Winawer J, 2018. Bayesian analysis of retinotopic maps. eLife 7. doi: 10.7554/eLife.40224
  7. Brewer JB, 2009. Fully-automated volumetric MRI with normative ranges: translation to clinical practice. Behav. Neurol. 21, 21–28. doi: 10.3233/BEN-2009-0226
  8. Crossland MD, Culham LE, Rubin GS, 2004. Fixation stability and reading speed in patients with newly developed macular disease. Ophthalmic Physiol. Opt. 24, 327–333. doi: 10.1111/j.1475-1313.2004.00213.x
  9. DeYoe EA, Carman GJ, Bandettini P, Glickman S, Wieser J, Cox R, Miller D, Neitz J, 1996. Mapping striate and extrastriate visual areas in human cerebral cortex. Proc Natl Acad Sci USA 93, 2382–2386. doi: 10.1073/pnas.93.6.2382
  10. Dumoulin SO, Wandell BA, 2008. Population receptive field estimates in human visual cortex. Neuroimage 39, 647–660. doi: 10.1016/j.neuroimage.2007.09.034
  11. Engel SA, Glover GH, Wandell BA, 1997. Retinotopic organization in human visual cortex and the spatial precision of functional MRI. Cereb. Cortex 7, 181–192. doi: 10.1093/cercor/7.2.181
  12. Engel SA, Rumelhart DE, Wandell BA, Lee AT, Glover GH, Chichilnisky EJ, Shadlen MN, 1994. fMRI of human visual cortex. Nature 369, 525. doi: 10.1038/369525a0
  13. Formisano E, Kim DS, Di Salle F, van de Moortele PF, Ugurbil K, Goebel R, 2003. Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron 40, 859–869. doi: 10.1016/s0896-6273(03)00669-x
  14. Fox PT, Miezin FM, Allman JM, Van Essen DC, Raichle ME, 1987. Retinotopic organization of human visual cortex mapped with positron-emission tomography. J. Neurosci. 7, 913–922.
  15. Friston KJ, Worsley KJ, Frackowiak RS, Mazziotta JC, Evans AC, 1994. Assessing the significance of focal activations using their spatial extent. Hum. Brain Mapp. 1, 210–220. doi: 10.1002/hbm.460010306
  16. Frost MA, Goebel R, 2013. Functionally informed cortex based alignment: an integrated approach for whole-cortex macro-anatomical and ROI-based functional alignment. Neuroimage 83, 1002–1010. doi: 10.1016/j.neuroimage.2013.07.056
  17. Glasser MF, Sotiropoulos SN, Wilson JA, Coalson TS, Fischl B, Andersson JL, Xu J, Jbabdi S, Webster M, Polimeni JR, Van Essen DC, Jenkinson M, WU-Minn HCP Consortium, 2013. The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage 80, 105–124. doi: 10.1016/j.neuroimage.2013.04.127
  18. Grafton ST, Woods RP, Mazziotta JC, Phelps ME, 1991. Somatotopic mapping of the primary motor cortex in humans: activation studies with cerebral blood flow and positron emission tomography. J. Neurophysiol. 66, 735–743. doi: 10.1152/jn.1991.66.3.735
  19. Jenkinson M, Beckmann CF, Behrens TE, Woolrich MW, Smith SM, 2012. FSL. Neuroimage 62, 782–790. doi: 10.1016/j.neuroimage.2011.09.015
  20. Lage-Castellanos A, Valente G, Senden M, De Martino F, 2020. Investigating the Reliability of Population Receptive Field Size Estimates Using fMRI. Front. Neurosci. 14, 825. doi: 10.3389/fnins.2020.00825
  21. Lotze M, Erb M, Flor H, Huelsmann E, Godde B, Grodd W, 2000. fMRI evaluation of somatotopic representation in human primary motor cortex. Neuroimage 11, 473–481. doi: 10.1006/nimg.2000.0556
  22. Marquand AF, Rezek I, Buitelaar J, Beckmann CF, 2016. Understanding Heterogeneity in Clinical Cohorts Using Normative Models: Beyond Case-Control Studies. Biol. Psychiatry 80, 552–561. doi: 10.1016/j.biopsych.2015.12.023
  23. Morland AB, Baseler HA, Hoffmann MB, Sharpe LT, Wandell BA, 2001. Abnormal retinotopic representations in human visual cortex revealed by fMRI. Acta Psychol (Amst) 107, 229–247. doi: 10.1016/S0001-6918(01)00025-7
  24. Novosad P, Fonov V, Collins DL, Alzheimer’s Disease Neuroimaging Initiative, 2020. Accurate and robust segmentation of neuroanatomy in T1-weighted MRI by combining spatial priors with deep convolutional neural networks. Hum. Brain Mapp. 41, 309–327.
  25. Rajimehr R, Tootell RBH, 2009. Does retinotopy influence cortical folding in primate visual cortex? J. Neurosci. 29, 11149–11152. doi: 10.1523/JNEUROSCI.1835-09.2009
  26. Rao SM, Binder JR, Hammeke TA, Bandettini PA, Bobholz JA, Frost JA, Myklebust BM, Jacobson RD, Hyde JS, 1995. Somatotopic mapping of the human primary motor cortex with functional magnetic resonance imaging. Neurology 45, 919–924. doi: 10.1212/wnl.45.5.919
  27. Romani GL, Williamson SJ, Kaufman L, 1982. Tonotopic organization of the human auditory cortex. Science 216, 1339–1340. doi: 10.1126/science.7079770
  28. Saenz M, Langers DRM, 2014. Tonotopic mapping of human auditory cortex. Hear. Res. 307, 42–52. doi: 10.1016/j.heares.2013.07.016
  29. Sanchez-Panchuelo RM, Francis S, Bowtell R, Schluppeck D, 2010. Mapping human somatosensory cortex in individual subjects with 7T functional MRI. J. Neurophysiol. 103, 2544–2556. doi: 10.1152/jn.01017.2009
  30. Wandell BA, Dumoulin SO, Brewer AA, 2007. Visual field maps in human cortex. Neuron 56, 366–383. doi: 10.1016/j.neuron.2007.10.012
  31. Wessinger CM, Buonocore MH, Kussmaul CL, Mangun GR, 1997. Tonotopy in human auditory cortex examined with functional magnetic resonance imaging. Hum. Brain Mapp. 5, 18–25.
  32. Wolfers T, Doan NT, Kaufmann T, Alnæs D, Moberget T, Agartz I, Buitelaar JK, Ueland T, Melle I, Franke B, Andreassen OA, Beckmann CF, Westlye LT, Marquand AF, 2018. Mapping the heterogeneous phenotype of schizophrenia and bipolar disorder using normative models. JAMA Psychiatry 75, 1146–1155. doi: 10.1001/jamapsychiatry.2018.2467
  33. Zeff BW, White BR, Dehghani H, Schlaggar BL, Culver JP, 2007. Retinotopic mapping of adult human visual cortex with high-density diffuse optical tomography. Proc Natl Acad Sci USA 104, 12169–12174. doi: 10.1073/pnas.0611266104
  34. Zou KH, Warfield SK, Bharatha A, Tempany CMC, Kaus MR, Haker SJ, Wells WM, Jolesz FA, Kikinis R, 2004. Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 11, 178–189. doi: 10.1016/S1076-6332(03)00671-8
