eLife. 2017 Jan 19;6:e21397. doi: 10.7554/eLife.21397

What the success of brain imaging implies about the neural code

Olivia Guest 1,*, Bradley C Love 1,2,*
Editor: Russell Poldrack3
PMCID: PMC5245971  PMID: 28103186

Abstract

The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

DOI: http://dx.doi.org/10.7554/eLife.21397.001

Research Organism: Human

eLife digest

We can appreciate that a cat is more similar to a dog than to a truck. The combined activity of millions of neurons in the brain somehow captures these everyday similarities, and this activity can be measured using imaging techniques such as functional magnetic resonance imaging (fMRI). However, fMRI scanners are not particularly precise – they average together the responses of many thousands of neurons over several seconds, which provides a blurry snapshot of brain activity. Nevertheless, the pattern of activity measured when viewing a photograph of a cat is more similar to that seen when viewing a picture of a dog than a picture of a truck. This tells us a lot about how the brain codes information, as only certain coding methods would allow fMRI to capture these similarities given the technique’s limitations.

There are many different models that attempt to describe how the brain codes similarity relations. Some models are based on neural networks, in which neurons can be thought of as arranged into interconnected layers. In such models, neurons transmit information from one layer to the next.

By investigating which models are consistent with fMRI’s ability to capture similarity relations, Guest and Love have found that certain neural network models are plausible accounts of how the brain represents and processes information. These models include the deep learning networks that contain many layers of neurons and are popularly used in artificial intelligence. Other modeling approaches do not account for the ability of fMRI to capture similarity relations.

As neural networks become deeper with more layers, they should be less readily understood using fMRI: as the number of layers increases, the representations of objects with similarities (for example, cats and dogs) become more unrelated. One question that requires further investigation is whether this finding explains why certain parts of the brain are more difficult to image.

DOI: http://dx.doi.org/10.7554/eLife.21397.002

Introduction

Neuroimaging, and especially functional magnetic resonance imaging (fMRI), has come a long way since the first experiments in the early 1990s, yielding a wealth of impressive findings. These successes are curious in light of fMRI’s limitations. The blood-oxygen-level-dependent (BOLD) response measured by fMRI is a noisy and indirect measure of neural activity (Logothetis, 2002, 2008; O'Herron et al., 2016) from which researchers try to infer neural function.

The BOLD response trails neural activity by 2 s, peaks at 5 to 6 s, and returns to baseline around 10 s, whereas neural activity occurs on the order of milliseconds and can be brief (Huettel et al., 2009). In terms of spatial resolution, the BOLD response may spill over millimeters away from neural activity due to contributions from venous signals (Turner, 2002). Likewise, differences in BOLD response can arise from incidental differences in the vascular properties of brain regions (Ances et al., 2009). Such sources of noise can suggest differences in neural activity where none exist.

The data acquisition process itself places limits on fMRI measurement. Motion artefacts (e.g., head movements by human subjects) and non-uniformity in the magnetic field reduce data quality. Moreover, three-dimensional images are constructed from slices acquired at slightly different times. Once collected, fMRI data are typically smoothed during analyses (Carp, 2012). All these factors place limits on what fMRI can measure.

Despite these weaknesses, fMRI has proved to be an incredibly useful tool. For example, we now know that basic cognitive processes involved in language (Binder et al., 1997) and working memory (Pessoa et al., 2002) are distributed throughout the cortex. Such findings challenged notions that cognitive faculties are in a one-to-one correspondence with brain regions.

Advances in data analysis have increased what can be inferred from fMRI data (De Martino et al., 2008). One of these advances is multivariate pattern analysis (MVPA), which decodes a pattern of neural activity in order to assess the information contained within (Cox and Savoy, 2003). Rather than computing univariate statistical contrasts, such as comparing overall BOLD activity for a region when a face or house stimulus is shown, MVPA takes voxel patterns into account.

Using MVPA, so-called ‘mind reading’ can be carried out — specific brain states can be decoded given fMRI activity (Norman et al., 2006), revealing cortical representation and organization in impressive detail. For example, using these analysis techniques paired with fMRI we can know whether a participant is being deceitful in a game (Davatzikos et al., 2005), and we can determine whether a participant is reading an ambiguous sentence as well as infer the semantic category of a word they are reading (Mitchell et al., 2004).

Representational similarity analysis (RSA), another multivariate technique, is particularly suited to examining representational structure (Kriegeskorte et al., 2008; Kriegeskorte, 2009). We will focus on RSA later in this contribution, so we will consider this technique in some detail. RSA directly compares the similarity (e.g., by using correlation) of brain activity arising from the presentation of different stimuli. For example, the neural activity arising from viewing a robin and a sparrow may be more similar to each other than the activity arising from viewing a robin and a penguin.

These pairwise neural similarities can be compared to those predicted by a particular theoretical model to determine correspondences. For example, Mack et al. (2013) identified brain regions where the neural similarity structure corresponded to that of a cognitive model of human categorization, which was useful in inferring the function of various brain regions. The neural similarities themselves can be visualized by applying multidimensional scaling to further understand the properties of the space (Davis et al., 2014). RSA has been useful in a number of other endeavors, such as understanding the role of various brain areas in reinstating past experiences (Tompary et al., 2016; Mack and Preston, 2016).

Given fMRI’s limitations in measuring neural activity, one might ask how it is possible for methods like RSA to be successful. The BOLD response is temporally and spatially imprecise, yet it appears that researchers can infer general properties of neural representations that link sensibly to stimulus and behavior. The neural code must have certain properties for this state of affairs to hold. What kinds of models or computations are consistent with the success of fMRI? If the brain is a computing device, it would have to be of a particular type for fMRI to be useful given its limitations in measuring neural activity.

Smoothness and the neural code

For fMRI to recover neural similarity spaces, the neural code must display certain properties. First, the neural code cannot be so fine-grained that fMRI’s temporal and spatial resolution limitations make it impossible to resolve representational differences. Second, a notion of functional smoothness, which we introduce and define below, must also be at least partially satisfied.

Voxel inhomogeneity across space and time

The BOLD response summates neural activity over space and time, which places hard limits on what fMRI can measure. To make an analogy, 3+5 and 6+2 both equal 8 through different routes. If different ‘routes’ of neural activity are consequential to the neural code and are summated in the BOLD response, then fMRI will be blind to representational differences.

To capture representational differences, voxel response must be inhomogeneous both between voxels and within a voxel across time. Consider the fMRI analogues shown in Figure 1, in which pixels parallel neurons and the squares of the superimposed grid parallel voxels. The top-left image depicts neural activity that smoothly varies such that the transitions from red to yellow occur in progressive increments. Summating within a square, i.e., a voxel, will not dramatically alter the high-level view of a smooth transition from red to yellow (bottom-left image). Voxel response is inhomogeneous, which would allow decoding by fMRI (cf. Kamitani and Tong, 2005; Alink et al., 2013). Altering the grid (i.e., voxel) size will not have a dramatic impact on the results as long as the square does not become so large as to subsume most of the pixels (i.e., neurons). This result is in line with basic concepts from information theory, such as the Nyquist-Shannon sampling theorem. The key is that the red and yellow pixels/neurons are topologically organized: their relationship to each other is for all intents and purposes invariant to the granularity of the squares/voxels (for more details see: Chaimow et al., 2011; Freeman et al., 2011; Swisher et al., 2010).

Figure 1. The activity of neurons in the top-left panel gradually changes from left to right, whereas changes are more abrupt in the top-middle and top-right panels.


Each square in the grid represents a voxel which summates activity within its frame as shown in the bottom panels. For the smoother pattern of neural activity, the summation of each voxel (bottom left) captures the changing gradient from left to right depicted in the top-left, whereas for the less smooth representation in the middle panel all voxels sum to the same orange value (bottom middle). Thus, differences in activation of yellow vs. red neurons are detectable using fMRI for the smooth case, but not for the less smooth case because voxel response is homogenous. Improving spatial resolution (right panels) by reducing voxel size overcomes these sampling limits, resulting in voxel inhomogeneity (bottom-right panel).

DOI: http://dx.doi.org/10.7554/eLife.21397.003

In contrast, the center-top image in Figure 1 involves dramatic representational changes within voxel. Each voxel (square in the grid), in this case, will produce a homogenous orange color when its contents are summated. Thus, summating the contents of a voxel in this case obliterates the representational content (red and yellow), returning instead squares/voxels of a single uniform color (orange). This failure is due to sampling limits that could be addressed by smaller voxels (see rightmost column). Unfortunately, arbitrarily small voxels with high sampling rates are not a luxury afforded to fMRI.
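The logic of Figure 1 can be illustrated with a few lines of code. The following Python sketch (our illustration, with assumed grid and voxel sizes, not the original analysis code) block-averages a smooth gradient and a fine-grained checkerboard; only the former survives ‘voxel’ summation:

```python
# Sketch of Figure 1's logic: block-averaging ("voxel" summation) preserves a
# smooth spatial gradient but destroys a fine-grained pattern. Array sizes are
# illustrative assumptions, not the paper's exact setup.
import numpy as np

n = 96       # "neurons" per side
voxel = 8    # neurons per voxel side

smooth = np.tile(np.linspace(0.0, 1.0, n), (n, 1))   # gradual left-to-right gradient
fine = np.indices((n, n)).sum(axis=0) % 2 * 1.0      # checkerboard: abrupt changes

def scan(activity, voxel):
    """Summate neural activity within each voxel (block average)."""
    m = activity.shape[0]
    blocks = activity.reshape(m // voxel, voxel, m // voxel, voxel)
    return blocks.mean(axis=(1, 3))

print(scan(smooth, voxel).std())  # > 0: the gradient survives, voxels are inhomogeneous
print(scan(fine, voxel).std())    # 0: every voxel averages to the same uniform value
```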

The success of fMRI given its sampling limits is consistent with proposed neural coding schemes, such as population coding (Averbeck et al., 2006; Panzeri et al., 2015; Pouget et al., 2000) in cases where neurons with similar tunings spatially cluster (e.g., Maunsell and Van Essen, 1983). In population coding, neurons jointly contribute to represent a stimulus in much the same way as pixels were contributing to represent different colors in the leftmost column of Figure 1. When this inhomogeneity breaks down, similarity structures should be difficult to recover using fMRI. Indeed, a recent study with macaque monkeys, which considered both single-cell and fMRI measures, supports this viewpoint: stimulus aspects that were poorly spatially clustered in terms of single-cell selectivity were harder to decode from the BOLD response (Dubois et al., 2015).

The same principles extend from the spatial to the temporal domain. The BOLD response will be blind to the differences between representations to the extent that the brain relies on the precise timing of neural activity to code information. For example, in burstiness coding, neural representations are distinguished from one another not by their average firing rate but by the variance of their activity (Fano, 1947; Katz, 1996). Under this coding scheme, more intense stimulus values are represented by burstier units, not units that fire more overall. Neural similarity is not recoverable by fMRI under a burstiness coding scheme. Because the BOLD signal roughly summates through time (Boynton et al., 1996), firing events will sum together to the same number irrespective of their burstiness.
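This blindness to burstiness is easy to illustrate. The sketch below (a toy example of ours; bin counts and burst sizes are arbitrary assumptions) constructs two spike trains with identical total spike counts but very different variability; their sums, the quantity the BOLD response roughly reflects, are indistinguishable:

```python
# Two spike trains with equal total spike counts but very different burstiness
# are indistinguishable once activity is summated, as the BOLD response roughly
# does (Boynton et al., 1996). All numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
bins = 1000
regular = np.ones(bins)                        # steady firing: 1 spike per bin
bursty = np.zeros(bins)                        # same 1000 spikes packed into bursts
bursty[rng.choice(bins, 100, replace=False)] = 10

print(regular.sum(), bursty.sum())             # identical sums: 1000.0 1000.0
print(regular.var(), bursty.var())             # burstiness differs: 0.0 vs ~9.0
```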

Likewise, BOLD activity may be a composite of synchronized activity at multiple frequencies. Although gamma-band local field potential is most associated with BOLD response, oscillations at other frequency bands may also contribute to the BOLD response (Magri et al., 2012; Scheeringa et al., 2011). If so, fMRI would fail to distinguish between representational states that are differentiated by the balance of contributions across bands, much like the arithmetic example at the start of this subsection in which different addends yield the same sum. As before, basic concepts in information theory, such as the Nyquist-Shannon sampling theorem, imply that temporally demanding coding schemes will be invisible to fMRI (cf. Nevado et al., 2004).

The success of fMRI does not imply that the brain does not utilize precise timing information, but it does mean that such temporally demanding coding schemes cannot be the full story given fMRI’s successes in revealing neural representations. Instead, the neural code must include in its mixture at least some coding schemes that are consistent with fMRI's successes. For example, rate coding (Adrian and Zotterman, 1926), in which neurons' firing frequency is a function of stimulus intensity, is consistent with the success of fMRI because changes in firing rate for a population of cells should be recoverable by fMRI as more blood flows to more active cells (O'Herron et al., 2016).

These examples make clear that the neural code must be somewhat spatially and temporally smooth with respect to neural activity (which is several orders of magnitude smaller than voxels) for fMRI to be successful. Whatever is happening in the roughly one million neurons within a voxel (Huettel et al., 2009) through time is being partially reflected by the BOLD summation, which would not be the case if each neuron were computing something dramatically different (for in-depth discussion, see Kriegeskorte et al., 2010).

Functional smoothness

One general conclusion is that important aspects of the neural code are spatially and temporally smooth. In a sense, this notion of smoothness is trivial as it merely implies that changes in neural activity must be visible in the BOLD response (i.e., across-voxel inhomogeneity) for fMRI to be successful. In this section, we focus on a more subtle sense of smoothness that must also be satisfied, namely functional smoothness.

Neighboring voxels predominantly contain similar representations (Norman et al., 2006), i.e., they are topologically organized as in Figure 1. However, super-voxel smoothness is neither necessary nor sufficient for fMRI to succeed in recovering similarity structure. Instead, a more general notion of functional smoothness must be satisfied in which similar stimuli map to similar internal representations. Although super-voxel and functional smoothness are both specified at the super-voxel level, these distinct concepts should not be confused. A function f that maps from some input x to output y is functionally smooth if and only if

sim(x1, x2) ∝ sim(y1, y2), (1)

where f(x1)=y1 and f(x2)=y2. For example, x and y could be the beta estimates for voxels in two brain regions and sim could be Pearson correlation. To measure functional smoothness, the degree of proportionality between all possible similarity pairs sim(x1,x2) and sim(y1,y2) could also be assessed by Pearson correlation (i.e., does the similarity between a y1 and y2 pair increase as the similarity increases between the corresponding x1 and x2 pair). By definition, functional smoothness needs to be preserved in the neural code for fMRI to recover similarity correspondences (as in RSA), whether these correspondences are between stimuli (e.g., xs) and neural activity (e.g., ys), multiple brain regions (e.g., xs and ys), or a model's measures (e.g., xs) and some brain region (e.g., ys).
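As an illustrative sketch of this measurement (our own toy code, not the authors' pipeline), functional smoothness can be computed by correlating all pairwise input similarities with the corresponding output similarities:

```python
# A minimal sketch of quantifying functional smoothness under Equation 1: compute
# sim() for all input pairs and all output pairs, then measure their degree of
# proportionality with a Pearson correlation. The identity mapping used as f
# below is a placeholder assumption.
import numpy as np
from itertools import combinations

def functional_smoothness(X, Y, sim=lambda a, b: np.corrcoef(a, b)[0, 1]):
    pairs = list(combinations(range(len(X)), 2))
    sim_x = [sim(X[i], X[j]) for i, j in pairs]
    sim_y = [sim(Y[i], Y[j]) for i, j in pairs]
    return np.corrcoef(sim_x, sim_y)[0, 1]    # 1 = perfectly functionally smooth

X = np.random.default_rng(0).standard_normal((10, 50))
print(functional_smoothness(X, X))            # identity f: smoothness is exactly 1.0
```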

From this definition, it should be clear that functional smoothness is distinct from super-voxel smoothness. For example, a brain area that showed smooth activity patterns across voxels for each individual face stimulus but whose activity did not reflect the similarity structure of the stimuli would be super-voxel smooth, but not functionally smooth with respect to the stimulus set. Conversely, a later section of this contribution discusses how neural networks with random weights are functionally but not super-voxel smooth.

To help introduce the concept of functional smoothness, we consider two coding schemes used in engineering applications, factorial and hash coding, which are both inconsistent with the success of fMRI because they do not preserve functional smoothness. In the next section, we consider coding schemes, such as deep learning networks, that are functionally smooth to varying extents and are consistent with the success of fMRI.

Factorial design coding

Factorial design is closely related to the notion of hierarchy. For example, hierarchical approaches to human object recognition (Serre and Poggio, 2010) propose that simple visual features (e.g., a horizontal or vertical line) are combined to form more complex features (e.g., a cross). From a factorial perspective, the simple features can be thought of as main effects and the complex features, which reflect the combination of simple features, as interactions.

In Table 1, a 2³ two-level full factorial design is shown with three factors A, B, C, three two-way interactions AB, AC, BC, and a three-way interaction ABC, as well as an intercept term. All columns in the design matrix are pairwise orthogonal.

Table 1.

Design matrix for a 2³ full factorial design.

DOI: http://dx.doi.org/10.7554/eLife.21397.004

I   A   B   C   AB  AC  BC  ABC
1  -1  -1  -1    1   1   1   -1
1   1  -1  -1   -1  -1   1    1
1  -1   1  -1   -1   1  -1    1
1   1   1  -1    1  -1  -1   -1
1  -1  -1   1    1  -1  -1    1
1   1  -1   1   -1   1  -1   -1
1  -1   1   1   -1  -1   1   -1
1   1   1   1    1   1   1    1

Applying the concept of factorial design to modeling the neural code involves treating each row in Table 1 as a representation. For example, each entry in a row could correspond to the activity level of a voxel. Interestingly, if any region in the brain solely had such a distribution of voxels, neural similarity would be impossible to recover by fMRI. The reason for this is that every representation (i.e., row in Table 1) is orthogonal to every other row, which means the neural similarity is the same for any pair of items. Thus, under this coding scheme, fMRI could never reveal, for example, that low distortions of a category prototype are more similar to the prototype than are high distortions (the stimulus structure used in the simulations below).

Rather than demonstrating this by simulation, we can supply a simple proof using basic linear algebra. Dividing each item in the n×n design matrix (i.e., Table 1) by √n makes each column orthonormal, i.e., each column represents a unit vector and is orthogonal to the other columns. This condition means that the design matrix is orthogonal. For an orthogonal matrix Q, like our design matrix, the following property holds: Q×Qᵀ = Qᵀ×Q = I, where Qᵀ is the transpose of Q (a matrix obtained by swapping rows and columns), and I is the identity matrix. This property of orthogonal matrices implies that rows and columns in the factorial design matrix are interchangeable, and that both rows and columns are orthogonal.
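This proof is easy to verify numerically. The following sketch (our illustration under the assumptions above) constructs the eight treatment combinations of Table 1 (row order may differ), scales the design matrix by 1/√8, and confirms that it is orthogonal:

```python
# Numerical check of the proof: build the 2^3 full factorial design matrix of
# Table 1 and scale by 1/sqrt(n). The result is orthogonal (Q Q^T = I), so every
# pair of distinct rows (representations) has zero dot product.
import numpy as np
from itertools import product

levels = np.array(list(product([-1, 1], repeat=3)))     # factor columns A, B, C
A, B, C = levels[:, 0], levels[:, 1], levels[:, 2]
Q = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C]) / np.sqrt(8)

print(np.allclose(Q @ Q.T, np.eye(8)))   # True: rows and columns are orthonormal
```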

The internal representations created using a factorial design matrix do not cluster in ways that meaningfully reflect the categorical structure of the inputs. Because each representation is constructed to be orthogonal to every other, informative correlations within and between categories cannot emerge. Two inputs varying in just one dimension (i.e., pixel) would have zero similarity; this is inherently not functionally smooth. In terms of Equation 1 and Table 1, an x would be a three-dimensional vector consisting of the values of A, B, and C, whereas its y would be the entire corresponding row from the table. Setting aside the degenerate case of self-similarity, there is no proportional relationship between similarity pairs because all y pairs have zero similarity. If the neural code for a region employed a technique similar to factorial design, neuroimaging studies would never uncover similarity structures by looking at the activity patterns of voxels in that region.

Hash function coding

Hash functions assign arbitrary unique outputs to unique inputs, which is potentially useful for any memory system, be it digital or biological. However, such a coding scheme is not functionally smooth by design. Hashing inputs allows for a content-addressable memory, a data store known as a hash table (Hanlon, 1966; Knott, 1975); content addressability is also a property of certain types of artificial neural network (Hopfield, 1982; Kohonen et al., 1987). Using a cryptographic hash function means that the arbitrary location in memory of an input is a function of the input itself.

We employed the secure hash algorithm 1 (SHA-1), an often-used cryptographic hash function, and applied it to each value in the input vectors generated by the procedure described below (National Institute of Standards and Technology, 2015). Two very similar inputs (e.g., members of the same category) are extremely unlikely to produce similar SHA-1 hashes. Thus, they will be stored distally to each other, and no meaningful within-category correlation will arise (i.e., functional smoothness is violated). Indeed, in cryptography applications any such similarities could be exploited to make predictions about the input.

If the neural code in a brain area were underpinned by behavior akin to that of a hash function, imaging would be unable to detect correlations with the input. This is because hash functions are engineered to destroy any correlations, while nonetheless allowing for the storage of the input in hash tables.
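The following toy example illustrates the point (a sketch of ours; for brevity it hashes a whole vector rather than each element, but the similarity-destroying property is the same):

```python
# SHA-1 destroys input similarity: two input vectors differing in a single
# element produce digests that agree at roughly chance level, so no similarity
# structure survives for fMRI-style analyses to recover.
import hashlib
import numpy as np

def sha1_bits(vector):
    """Hash a vector and return its SHA-1 digest as an array of 160 bits."""
    digest = hashlib.sha1(np.asarray(vector, dtype=np.float64).tobytes()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

x1 = np.zeros(100)
x2 = x1.copy()
x2[0] = 1e-6                       # a nearly identical input
agreement = (sha1_bits(x1) == sha1_bits(x2)).mean()
print(agreement)                   # ~0.5: digests agree at chance, similarity is gone
```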

Although hash tables do not seem well-matched to the demands of cognitive systems that generalize inputs, they would prove useful in higher-level mental functions such as source memory monitoring. Indeed, to foreshadow a result below, the advanced layers of very deep artificial neural networks approximate a cryptographic hash function, which consequently makes it difficult to recover the similarity structure in those layers.

Model

In this section, we consider whether neural networks with random weights are consistent with the success of fMRI given its limitations in measuring neural activity. Simulations in the next section revisit these issues through the lens of a deep learning model trained to classify photographs of real-world categories, such as moped, tiger, guitar, robin, etc.

Each simulation is analogous to performing fMRI on the candidate neural code. These simple simulations answer whether in principle neural similarity can be recovered from fMRI data taken from certain neural coding schemes. Stimuli are presented to a model while its internal representations are measured by a simulated fMRI scanner.

The methods were as follows. The stimuli consisted of 100-dimensional vectors that were distortions of an underlying prototype. As noise is added to the prototype and the distortion level increases, the neural similarity (measured using Pearson’s correlation coefficient ρ) between the prototype and the distorted item should decrease. The question is whether we can recover this change in neural similarity with our simulated fMRI scanner.

First, each fully-connected network was initialized to random weights drawn from a Gaussian distribution (μ=0, σ=1). Then, a prototype was created from 100 draws from a Gaussian distribution (μ=0, σ=1). Nineteen distortions of the prototype were created by adding levels of Gaussian noise with increasing standard deviation (σ = σprev + 0.05) to the prototype. Finally, each item was re-normalized and mean centered, so that μ=0 and σ=1 regardless of the level of distortion. This procedure was repeated for 100 random networks. In the simulations that follow, the models considered involved some portion of the randomly-initialized 8-layer network (8×100² weights).
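A minimal single-network version of this procedure can be sketched as follows (the authors' full code, which averages over 100 networks, is available at the OSF link in the Acknowledgements):

```python
# Single-network sketch of the simulation described above: a prototype, 19
# increasingly noisy distortions, and stacked random tanh layers. The simulated
# scanner correlates each item's representation with the prototype's.
import numpy as np

rng = np.random.default_rng(0)
dim, n_layers = 100, 8
weights = [rng.standard_normal((dim, dim)) for _ in range(n_layers)]

def normalize(v):
    return (v - v.mean()) / v.std()          # re-normalize: mu = 0, sigma = 1

prototype = normalize(rng.standard_normal(dim))
items = [normalize(prototype + rng.standard_normal(dim) * (0.05 * k))
         for k in range(1, 20)]              # 19 increasingly distorted members

def represent(x, depth):
    for W in weights[:depth]:                # stacked perceptrons: tanh(Wx)
        x = np.tanh(W @ x)
    return x

for depth in (1, 4, 8):
    rho = [np.corrcoef(represent(prototype, depth), represent(x, depth))[0, 1]
           for x in items]
    print(depth, np.round(rho[::9], 2))      # similarity falls with distortion,
                                             # and falls faster in deeper networks
```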

The coding schemes that follow are important components in artificial neural network models. The order of presentation is from the most basic components to more complex configurations of networks. To foreshadow the results shown in Figure 2, fMRI can recover the similarity structure for all of these models to varying degrees with the simpler models faring better than the more complex models.

Figure 2. As models become more complex with added layers, similarity structure becomes harder to recover, which might parallel function along the ventral stream.


(A) For the artificial neural network coding schemes, similarity to the prototype falls off with increasing distortion (i.e., noise). The models, numbered 1–11, are (1) vector space coding, (2) gain control coding, (3) matrix multiplication coding, (4) perceptron coding, (5) 2-layer network, (6) 3-layer network, (7) 4-layer network, (8) 5-layer network, (9) 6-layer network, (10) 7-layer network, and (11) 8-layer network. The darker a model is, the simpler the model is and the more the model preserves similarity structure under fMRI. (B) A deep artificial neural network and the ventral stream can be seen as performing related computations. As in our simulation results, neural similarity should be more difficult to recover in the more advanced layers.

DOI: http://dx.doi.org/10.7554/eLife.21397.005

Vector space coding

The first model considered in this line is vector space coding (i.e., ℝⁿ), in which stimuli are represented as a vector of real-valued features. Representing concepts in multidimensional spaces has a long and successful history in psychology (Shepard, 1987). For example, in a large space, lions and tigers should be closer to each other than lions and robins because they are more similar. The kinds of operations that are naturally done in vector spaces (e.g., additions and multiplications) are particularly well suited to the BOLD response. For example, the haemodynamic response to individual stimuli roughly summates across a range of conditions (Dale and Buckner, 1997) and this linearity seems to extend to representational patterns (Reddy et al., 2009).

In this neural coding scheme, each item (e.g., a dog) is represented as the set of values in its input vector (i.e., a set of numbers with range [-1,1]). This means that for a given stimulus, the representation this model produces is identical to the input. Vector space coding is thus functionally smooth in a trivial sense, as the function is the identity. As shown in Figure 2, neural similarity gradually falls off with added distortion (i.e., noise). Therefore, this very simple coding scheme creates representational spaces that would be successfully detected by fMRI.

Gain control coding

Building on the basic vector space model, this scheme encodes each input vector by passing it through a monotonic non-linear function, the hyperbolic tangent function (tanh), which is functionally smooth. This results in each vector element being transformed, or squashed, to a value in [-1, 1]. Such functions are required by artificial neural networks (and perhaps the brain) for gain control (Priebe and Ferster, 2002). The practical effect of this model is to push the values in the model’s internal representation toward either -1 or 1. As can be seen in Figure 2, neural similarity is well-captured by the gain control neural coding model.

Matrix multiplication coding

This model performs more sophisticated computations on the input stimuli. In line with early connectionism and Rescorla-Wagner modeling of conditioning, this model receives an input vector and performs matrix multiplication on it, i.e., computes the weighted sums of the inputs to pass on to the output layer (Knapp and Anderson, 1984; Rescorla and Wagner, 1972). These simple one-layer neural networks can be surprisingly powerful and account for a range of complex behavioral findings (Ramscar et al., 2013). As we will see in later subsections, when a non-linearity is added (e.g., tanh), one-layer networks can be stacked on one another to build deep networks.

This neural coding scheme takes an input stimulus (e.g., an image of a dog) and multiplies it by a weight matrix to create an internal representation, as shown in Figure 3. Interestingly, the internal representation of this coding scheme is completely nonsensical to the human eye and is not super-voxel smooth, yet it successfully preserves similarity structure (see Figure 2): matrix multiplication maps similar inputs to similar internal representations. In other words, the result is not super-voxel smooth, but it is functionally smooth, which we conjecture is critical for fMRI to succeed.

Figure 3. The effect of matrix multiplication followed by the tanh function on the input stimulus.


The output of this one-layer network is shown, as well as the outcome of applying a non-linearity to the output of the matrix multiplication. In this example, functional smoothness is preserved whereas super-voxel smoothness is not. The result of applying this non-linearity can serve as the input to the next layer of a multi-layer network.

DOI: http://dx.doi.org/10.7554/eLife.21397.006

Perceptron coding

The preceding coding scheme was a single-layer neural network. To create multi-layer networks that are potentially more powerful than an equivalent single-layer network, a non-linearity (such as tanh) must be added to each network layer post-synaptically. Here, we consider a single-layer network with the tanh non-linearity included (see Figure 3). As with matrix multiplication previously, this neural coding scheme is successful (see Figure 2), with ‘similar inputs lead[ing] to similar outputs’ (Rumelhart et al., 1995, p. 31).

Multi-layered neural network coding

The basic network considered in the previous section can be combined with other networks, creating a potentially more powerful multi-layered network. These multi-layered models can be used to capture a stream of processing as is thought to occur for visual input to the ventral stream, shown in Figure 2B (DiCarlo and Cox, 2007; Riesenhuber and Poggio, 1999, 2000; Quiroga et al., 2005; Yamins and DiCarlo, 2016).

In this section, we evaluate whether the similarity preserving properties of single-layer networks extend to deeper, yet still untrained, networks. The simulations consider networks with 2 to 8 layers. The models operate in a fashion identical to the perceptron neural coding model considered in the previous section. The perceptrons are stacked such that the output of a layer serves as the input to the next layer. We only perform simulated fMRI on the final layer of each model. These simulations consider whether the representations that emerge in multi-layered networks are plausible given the success of fMRI in uncovering similarity spaces (see also: Cox et al., 2015; Cowell et al., 2009; Edelman et al., 1998; Goldrick, 2008; Laakso and Cottrell, 2000). Such representations, as found in deep artificial neural network architectures, are uncovered by adding layers that discover increasingly abstract commonalities between inputs (Graves et al., 2013; Hinton et al., 2006; Hinton, 2007; Hinton et al., 2015; LeCun et al., 2015).

As shown in Figure 2, the deeper the network, the less clear the similarity structure becomes. However, even the deepest network preserves some level of similarity structure. In effect, as layers are added, functional smoothness declines such that small perturbations to the initial input result in final-layer representations that tend to lie in arbitrary corners of the representational space, as the output takes on values that are +1 or -1 due to tanh. As layers are added, the network becomes potentially more powerful, but less functionally smooth, which makes it less suitable for analysis by fMRI because the similarity space breaks down. In other words, two similar stimuli can engender near orthogonal (i.e., dissimilar) representations at the most advanced layers of these networks. We measured functional smoothness for a large set of random input vectors using Equation 1 with Pearson correlation serving as both the similarity measure and measure of proportionality. Consistent with Figure 2’s results, at layer 1 (equivalent to the perceptron coding model) functional smoothness was 0.86, but declined to 0.22 by the eighth network layer. These values were calculated using all item pairs consisting of a prototype and one of its distortions. In the Discussion section, we consider the theoretical significance of these results in tandem with the deep learning network results (next section).
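The following sketch illustrates this measurement (exact values will differ from the 0.86 and 0.22 reported above, which depend on the particular random draws and item pairs; the declining trend is the point):

```python
# Sketch of the functional-smoothness measurement: the Pearson correlation
# (Equation 1) between input-pair similarities and representation-pair
# similarities at each depth of a random tanh network. For simplicity this toy
# uses all pairs within one prototype family.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
dim = 100
weights = [rng.standard_normal((dim, dim)) for _ in range(8)]

prototype = rng.standard_normal(dim)
X = np.array([prototype + rng.standard_normal(dim) * (0.05 * k)
              for k in range(1, 20)])        # a prototype and its distortions

pairs = list(combinations(range(len(X)), 2))
def pair_sims(Z):
    return [np.corrcoef(Z[i], Z[j])[0, 1] for i, j in pairs]

input_sims, Z = pair_sims(X), X
for depth, W in enumerate(weights, start=1):
    Z = np.tanh(Z @ W.T)                     # advance all items one layer
    print(depth, round(np.corrcoef(input_sims, pair_sims(Z))[0, 1], 2))
```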

Deep learning networks

Deep learning networks (DLNs) have led to a revolution in machine learning and artificial intelligence (Krizhevsky et al., 2012; LeCun et al., 1998; Serre et al., 2007; Szegedy et al., 2015a). DLNs outperform existing approaches on object recognition tasks by training complex multi-layer networks with millions of parameters (i.e., weights) on large databases of natural images. Recently, neuroscientists have become interested in how the computations and representations in these models relate to the ventral stream in monkeys and humans (Cadieu et al., 2014; Dubois et al., 2015; Guclu and van Gerven, 2015; Hong et al., 2016; Khaligh-Razavi and Kriegeskorte, 2014; Yamins et al., 2014; Yamins and DiCarlo, 2016). For these reasons, we chose to examine these models, which also allow for RSA at multiple representational levels.

In this contribution, one key question is whether functional smoothness breaks down at more advanced layers in DLNs as it did in the untrained random neural networks considered in the previous section. We address this question by presenting natural image stimuli (i.e., novel photographs) to a trained DLN, specifically Inception-v3 GoogLeNet (Szegedy et al., 2015b), and applying RSA to evaluate whether the similarity structure of items would be recoverable using fMRI.

Architecture

The DLN we consider, Inception-v3 GoogLeNet, is a convolutional neural network (CNN), which is a type of DLN especially adept at classification and recognition of visual inputs. CNNs excel in computer vision, learning from huge amounts of data. For example, human-like accuracy on test sets has been achieved by: LeNet, a pioneering CNN that identifies handwritten digits (LeCun et al., 1998); HMAX, trained to detect objects, e.g., faces, in cluttered environments (Serre et al., 2007); and AlexNet, which classifies photographs into 1000 categories (Krizhevsky et al., 2012).

The high-level architecture of CNNs consists of many layers (Szegedy et al., 2015a). These are stacked on top of each other, in much the same way as the stacked multi-layer perceptrons described previously. A key difference is that CNNs show more variety, especially in breadth (number of units), between layers.

In many CNNs, some of the network’s layers are convolutional: their components do not receive input from the whole of the previous layer, but from a small subset of it (Szegedy et al., 2015b). Many convolutional components are required to process the whole of the previous layer, creating an overlapping tiling of small patches. Often, convolutional layers are interleaved with max-pooling layers (LeCun et al., 1998), which also contain tile-like components that act as local filters over the previous layer. This type of processing and architecture is both empirically driven by what works best and inspired by the visual ventral stream, specifically receptive fields (Fukushima, 1980; Hubel and Wiesel, 1959, 1968; Serre et al., 2007).

Convolutional and max-pooling layers provide a structure that is inherently hierarchical. Lower layers perform computations on small localized patches of the input, while deeper layers perform computations on increasingly larger, more global, areas of the stimuli. After such localized processing, it is typical to include layers that are fully-connected, i.e., more classically connectionist. Finally, there is a layer with the required output structure, e.g., units that represent classes or a yes/no response, as appropriate.

Inception-v3 GoogLeNet uses a specific arrangement of these aforementioned layers, connected both in series and in parallel (Szegedy et al., 2015b, 2015a, 2016). In total it has 26 layers and 25 million parameters inclusive of connection weights (Szegedy et al., 2015b). The final layer is a softmax layer that is trained to activate a single unit per class. These units correspond to labels that have been applied to sets of photographs by humans, e.g., ‘space shuttle’, ‘ice cream’, ‘sock’, within the ImageNet database (Russakovsky et al., 2015).

Inception-v3 GoogLeNet has been trained on millions of human-labeled photographs from 1000 of ImageNet’s synsets (sets of photographs). The 1000-unit wide output produced by the network when presented with a photograph represents the probabilities of the input belonging to each of those classes. For example, if the network is given a photograph of a moped, it may also activate the output unit that corresponds to bicycle with activation 0.03. This is interpreted as the network expressing the belief that there is a 3% probability that the appropriate label for the input is ‘bicycle’. In addition, this interpretation is useful because it allows for multiple classes to co-exist within a single input. For example, a photo containing both a guillotine and a wig will activate both corresponding output units. Thus, the network is held to have learned a distribution of appropriate labels that reflects the most salient items in a scene. Inception-v3 GoogLeNet achieves human-level accuracy on test sets, producing the correct label among its five most probable guesses approximately 95% of the time (Szegedy et al., 2015b).

Deep learning network simulation

We consider whether functional smoothness declines as inputs are processed by the more advanced layers of Inception-v3 GoogLeNet. If so, fMRI should be less successful in brain regions that instantiate computations analogous to the more advanced layers of such networks. Unlike the previous simulations, we present novel photographs of natural categories to these networks. The key question is whether items from related categories (e.g., banjos and guitars) will be similar at various network layers. The 40 photographs (i.e., stimuli) are divided equally amongst 8 subordinate categories: banjos, guitars, mopeds, sportscars, lions, tigers, robins, and partridges, which in turn aggregate into 4 basic-level categories: musical instruments, vehicles, mammals, and birds; which in turn aggregate into 2 superordinates: animate and inanimate.

We consider how similar the internal network representations are for pairs of stimuli by comparing the resulting network activity, which is analogous to comparing neural activity over voxels in RSA. Correlations for all possible pairings of the 40 stimuli were calculated for both a mid and a later network layer (see Figure 4).
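A hedged sketch of this procedure is shown below, using a pretrained Inception-v3 via tf.keras rather than the authors' original pipeline (their code is at the OSF link in the Acknowledgements); the layer names 'mixed5' and 'mixed10' are our stand-ins for a middle and a later layer, and the random array stands in for the 40 preprocessed photographs:

```python
# Sketch of layer-wise RSA on a pretrained Inception-v3 (tf.keras weights, which
# may differ from the checkpoint used in the paper). Replace the placeholder
# stimuli with 40 real photographs resized to 299x299 for a meaningful analysis.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet")
extractor = tf.keras.Model(
    inputs=base.input,
    outputs=[base.get_layer("mixed5").output, base.get_layer("mixed10").output])

images = np.random.default_rng(0).uniform(0, 255, (40, 299, 299, 3)).astype("float32")
images = tf.keras.applications.inception_v3.preprocess_input(images)  # placeholder stimuli

mid, late = (a.numpy().reshape(40, -1) for a in extractor(images))
rsm_mid, rsm_late = np.corrcoef(mid), np.corrcoef(late)  # 40x40 matrices, cf. Figure 4
```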

Figure 4. Similarity structure becomes more difficult to recover in the more advanced layers of the DLN.


(A) The similarity structure in a middle layer of a DLN, Inception-v3 GoogLeNet. The mammals (lions and tigers) and birds (robins and partridges) correlate, forming a high-level animate domain and rendering the upper-left quadrant a darker shade of red, whereas the vehicles (sportscars and mopeds) and musical instruments (guitars and banjos) form two further high-level categories. (B) In contrast, at a later layer in this network, the similarity space shows high within-category correlations and weakened correlations between categories. While some structure between categories is preserved, mopeds are no more similar to sportscars than they are to robins.

DOI: http://dx.doi.org/10.7554/eLife.21397.007

The middle layer (Figure 4A) reveals cross-category similarity at both the basic and superordinate levels. For example, lions are more like robins than guitars. However, at the later layer (Figure 4B) the similarity structure has broken down such that subordinate-category similarity dominates (i.e., a lion is like another lion, but not so much like a tiger). Interestingly, the decline in functional smoothness is not a consequence of sparseness at the later layer: the Gini coefficient, a measure of sparseness (Gini, 1909), is 0.947 for the earlier middle layer (Figure 4A) and 0.579 for the later advanced layer (Figure 4B), indicating that network representations are distributed in general and even more so at the later layer. Thus, the decline in functional smoothness at later layers does not appear to be a straightforward consequence of training these networks to classify stimuli, although it would be interesting to compare to unsupervised approaches that can perform at equivalent accuracy levels (no such network currently exists).
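For concreteness, one common formulation of the Gini coefficient as a sparseness measure is sketched below (the authors' exact implementation may differ; their code is linked in the Acknowledgements):

```python
# One standard formulation of the Gini coefficient over sorted activation
# magnitudes: 0 means perfectly distributed activity, 1 means maximally sparse.
import numpy as np

def gini(values):
    """Gini coefficient of a vector of activation magnitudes."""
    v = np.sort(np.abs(values))              # sort ascending
    n = len(v)
    ranks = np.arange(1, n + 1)
    return 1 - 2 * np.sum(v * (n - ranks + 0.5)) / (n * np.sum(v))

print(gini(np.ones(100)))                    # 0.0: perfectly distributed
print(gini(np.r_[np.zeros(99), 1.0]))        # ~1.0: maximally sparse
```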

These DLN results are directly analogous to those with random untrained networks (see Figure 2). In those simulations, similar input patterns mapped to orthogonal (i.e., dissimilar) internal representations in later layers. Likewise, the trained DLN at later layers can only capture similarity structure within subordinate categories (e.g., a tiger is like another tiger) which the network was trained to classify. The effect of training the network was to create equivalence classes based on the training label (e.g., tiger) such that members of that category are mapped to similar network states. Violating functional smoothness, all other similarity structure is discarded such that a tiger is no more similar to a lion than to a banjo from the network’s perspective. Should brain regions operate in a similar fashion, fMRI would not be successful in recovering similarity structure therein. In the Discussion, we consider the implications of these findings on our understanding of the ventral stream and the prospects for fMRI.

Discussion

Neuroscientists would rightly prefer a method that had both excellent spatial and temporal resolution for measuring brain activity. However, as we demonstrate in this article, the fact that fMRI has proven useful in examining neural representations, despite limitations in both its temporal and spatial resolution, says something about the nature of the neural code. One general conclusion is that the neural code must be smooth, both at voxel (such that voxel responses are inhomogeneous across time and space) and functional levels.

The latter notion of smoothness is often overlooked or confused with super-voxel smoothness, but is necessary for fMRI to recover similarity spaces in the brain. Coding schemes, such as factorial and hash coding, are useful in numerous real-world applications and have an inverse function (i.e., one can go backwards from the internal representation to recover the unique stimulus input). However, these schemes are incompatible with the success of fMRI because they are not functionally smooth. For example, if the brain solely used such coding schemes, the neural representation of a robin would be no more similar to that of a sparrow than to that of a car. The fact that such neural similarities are recoverable by fMRI suggests that the neural code differs from these schemes in many cases.

In contrast, we found that the types of representations used and generated by artificial neural networks, including deep learning networks, are broadly compatible with the success of fMRI in assessing neural representations. These coding schemes are functionally smooth in that similar inputs tend toward similar outputs, which allows item similarity to be reflected in neural similarity (as measured by fMRI). However, we found that functional smoothness breaks down as additional network layers are added. Specifically, we have shown that multi-layer networks eventually converge to something akin to a hash function, as arbitrary locations in memory correspond to categories of inputs.

These results take on additional significance given the recent interest in deep artificial neural networks as computational accounts of the ventral stream. One emerging view is that more advanced layers of these models correspond to more advanced regions along the ventral stream (Cadieu et al., 2014; Dubois et al., 2015; Guclu and van Gerven, 2015; Hong et al., 2016; Khaligh-Razavi and Kriegeskorte, 2014; Yamins et al., 2014; Yamins and DiCarlo, 2016).

If this viewpoint is correct, our results indicate that neural representations should become progressively less functionally smooth and more abstract as one moves along the ventral stream (recall Figure 2). Indeed, neural representations appear to become more abstract, encoding whole concepts or categories, as a function of how far along the ventral stream they are located (Bracci and Op de Beeck, 2016; DiCarlo and Cox, 2007; Riesenhuber and Poggio, 1999, 2000; Yamins and DiCarlo, 2016). For example, early in visual processing, the brain may extract so-called basic features, such as in broadly-tuned orientation columns (Hubel and Wiesel, 1959, 1968). In contrast, later in processing, cells may respond selectively to particular stimulus classes, i.e., so-called Jennifer Aniston, grandmother, concept, or gnostic cells (Gross, 2002; Konorski, 1967; Quiroga et al., 2005), irrespective of orientation, etc.

Likewise, we found that Inception-v3 GoogLeNet’s representations became symbol-like at advanced network layers such that items sharing a category label (e.g., tigers) engendered related network states, while items in other categories engendered orthogonal states (recall Figure 4). Our simulations of random networks also found reduced functional smoothness at advanced network layers, suggesting a basic geometric property of multi-layer networks. The effect of training seems limited to creating network states in which stimuli that share the same label (e.g., multiple viewpoints of Jennifer Aniston) become similar and items from all other categories (even if conceptually related) become orthogonal. If so, areas further along the ventral stream should prove less amenable to imaging (recall Figure 2). Indeed, a recent object recognition study found that the ceiling on observable correlation values becomes lower as one moves along the ventral stream (Bracci and Op de Beeck, 2016).

Here, we focused on using fMRI to recover non-degenerate similarity spaces (i.e., where there are similarities beyond self-similarities). However, functional smoothness is also important for other analysis approaches. For example, MVPA decoders trained to classify items (e.g., is a house or a face being shown?) based on fMRI activity will only generalize to novel stimuli when functional smoothness holds. Likewise, univariate clusters (e.g., a house or face area) will most likely be found and generalize to novel stimuli when functional smoothness holds because functional smoothness implies similar activation profiles for similar stimuli. Functional smoothness should be an important factor in determining how well classifiers perform and how statistically robust univariate clusters of voxels are.

In cognitive science, research is often divided into levels of analysis. In Marr’s levels, the top level is the problem description, the middle level captures how the problem is solved, and the bottom level concerns how the solution is implemented in the brain (Marr, 1982). Given that the ‘how’ and ‘where’ of cognition appear to be merging, some have questioned the utility of this tripartite division (Love, 2015).

Our results suggest another inadequacy of these three levels of description, namely that the implementation level itself should be further subdivided. What is measured by fMRI is at a vastly more abstract scale than what can be measured at the level of individual neurons. For example, major efforts, like the European Human Brain Project and the Machine Intelligence from Cortical Networks project (Underwood, 2016), are chiefly concerned with fine-grained aspects of the brain that are outside the reach of fMRI (Chi, 2016; Frégnac and Laurent, 2014). Likewise, models of spiking neurons (e.g., Wong and Wang, 2006) are at a level of analysis lower than that at which fMRI applies.

Nevertheless, fMRI has proven useful in understanding neural representations that are consequential to behavior. Perhaps this success suggests that the appropriate level for relating brain to behavior is close to what fMRI measures. This does not mean lower-level efforts do not have utility when the details are of interest. However, fMRI’s success might mean that when one is interested in the nature of computations carried out by the brain, the level of analysis where fMRI applies may be preferred. To draw an analogy, one could construct a theory of macroeconomics based on quantum physics, but it would be incredibly cumbersome and no more predictive nor explanatory than a theory that contained abstract concepts such as money and supply. Reductionism, while seductive, is not always the best path forward.

Acknowledgement

This work was supported by the Leverhulme Trust (Grant RPG-2014-075), the NIH (Grant 1P01HD080679), and a Wellcome Trust Investigator Award (Grant WT106931MA) to BCL, as well as The Alan Turing Institute under the EPSRC grant EP/N510129/1. Correspondences regarding this work can be sent to o.guest@ucl.ac.uk or b.love@ucl.ac.uk. The authors declare that they have no competing interests. The authors would like to thank Sabine Kastner, Russ Poldrack, Tal Yarkoni, Niko Kriegeskorte, Sam Schwarzkopf, Christiane Ahlheim, and Johan Carlin for their thoughtful feedback on the preprint version, http://dx.doi.org/10.1101/071076. The code used to run these experiments is freely available, http://osf.io/v8baz.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Funding Information

This paper was supported by the following grants:

  • Leverhulme Trust RPG-2014-075 to Bradley C Love.

  • Wellcome WT106931MA to Bradley C Love.

  • National Institutes of Health 1P01HD080679 to Bradley C Love.

Additional information

Competing interests

The authors declare that no competing interests exist.

Author contributions

OG, Conceptualization, Data curation, Software, Formal analysis, Visualization, Writing—original draft, Writing—review and editing.

BCL, Conceptualization, Resources, Supervision, Funding acquisition, Writing—original draft, Project administration, Writing—review and editing.

References

  1. Adrian ED. The impulses produced by sensory nerve endings: Part I. The Journal of Physiology. 1926;61:49–72. doi: 10.1113/jphysiol.1926.sp002273.
  2. Alink A, Krugliak A, Walther A, Kriegeskorte N. fMRI orientation decoding in V1 does not require global maps or globally coherent orientation stimuli. Frontiers in Psychology. 2013;4. doi: 10.3389/fpsyg.2013.00493.
  3. Ances BM, Liang CL, Leontiev O, Perthen JE, Fleisher AS, Lansing AE, Buxton RB. Effects of aging on cerebral blood flow, oxygen metabolism, and blood oxygenation level dependent responses to visual stimulation. Human Brain Mapping. 2009;30:1120–1132. doi: 10.1002/hbm.20574.
  4. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nature Reviews Neuroscience. 2006;7:358–366. doi: 10.1038/nrn1888.
  5. Binder JR, Frost JA, Hammeke TA, Cox RW, Rao SM, Prieto T. Human brain language areas identified by functional magnetic resonance imaging. Journal of Neuroscience. 1997;17:353–362. doi: 10.1523/JNEUROSCI.17-01-00353.1997.
  6. Boynton GM, Engel SA, Glover GH, Heeger DJ. Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience. 1996;16:4207–4221. doi: 10.1523/JNEUROSCI.16-13-04207.1996.
  7. Bracci S, Op de Beeck H. Dissociations and associations between shape and category representations in the two visual pathways. Journal of Neuroscience. 2016;36:432–444. doi: 10.1523/JNEUROSCI.2314-15.2016.
  8. Cadieu CF, Hong H, Yamins DL, Pinto N, Ardila D, Solomon EA, Majaj NJ, DiCarlo JJ. Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Computational Biology. 2014;10:e1003963. doi: 10.1371/journal.pcbi.1003963.
  9. Carp J. The secret lives of experiments: methods reporting in the fMRI literature. NeuroImage. 2012;63:289–300. doi: 10.1016/j.neuroimage.2012.07.004.
  10. Chaimow D, Yacoub E, Ugurbil K, Shmuel A. Modeling and analysis of mechanisms underlying fMRI-based decoding of information conveyed in cortical columns. NeuroImage. 2011;56:627–642. doi: 10.1016/j.neuroimage.2010.09.037.
  11. Chi KR. Neural modelling: Abstractions of the mind. Nature. 2016;531:S16–S17. doi: 10.1038/531S16a.
  12. Cowell RA, Huber DE, Cottrell GW. Virtual brain reading: A connectionist approach to understanding fMRI. In: 31st Annual Meeting of the Cognitive Science Society; 2009.
  13. Cox CR, Seidenberg MS, Rogers TT. Connecting functional brain imaging and parallel distributed processing. Language, Cognition and Neuroscience. 2015;30:380–394. doi: 10.1080/23273798.2014.994010.
  14. Cox DD, Savoy RL. Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage. 2003;19:261–270. doi: 10.1016/S1053-8119(03)00049-1.
  15. Dale AM, Buckner RL. Selective averaging of rapidly presented individual trials using fMRI. Human Brain Mapping. 1997;5:329–340. doi: 10.1002/(SICI)1097-0193(1997)5:5<329::AID-HBM1>3.0.CO;2-5.
  16. Davatzikos C, Ruparel K, Fan Y, Shen DG, Acharyya M, Loughead JW, Gur RC, Langleben DD. Classifying spatial patterns of brain activity with machine learning methods: application to lie detection. NeuroImage. 2005;28:663–668. doi: 10.1016/j.neuroimage.2005.08.009.
  17. Davis T, Xue G, Love BC, Preston AR, Poldrack RA. Global neural pattern similarity as a common basis for categorization and recognition memory. Journal of Neuroscience. 2014;34:7472–7484. doi: 10.1523/JNEUROSCI.3376-13.2014.
  18. De Martino F, Valente G, Staeren N, Ashburner J, Goebel R, Formisano E. Combining multivariate voxel selection and support vector machines for mapping and classification of fMRI spatial patterns. NeuroImage. 2008;43:44–58. doi: 10.1016/j.neuroimage.2008.06.037.
  19. DiCarlo JJ, Cox DD. Untangling invariant object recognition. Trends in Cognitive Sciences. 2007;11:333–341. doi: 10.1016/j.tics.2007.06.010.
  20. Dubois J, de Berker AO, Tsao DY. Single-unit recordings in the macaque face patch system reveal limitations of fMRI MVPA. Journal of Neuroscience. 2015;35:2791–2802. doi: 10.1523/JNEUROSCI.4037-14.2015.
  21. Edelman S, Grill-Spector K, Kushnir T, Malach R. Toward direct visualization of the internal shape representation space by fMRI. Psychobiology. 1998;26:309–321.
  22. Fano U. Ionization yield of radiations. II. The fluctuations of the number of ions. Physical Review. 1947;72:26–29. doi: 10.1103/PhysRev.72.26.
  23. Freeman J, Brouwer GJ, Heeger DJ, Merriam EP. Orientation decoding depends on maps, not columns. Journal of Neuroscience. 2011;31:4792–4804. doi: 10.1523/JNEUROSCI.5160-10.2011.
  24. Frégnac Y, Laurent G. Neuroscience: Where is the brain in the human brain project? Nature. 2014;513:27–29. doi: 10.1038/513027a.
  25. Fukushima K. Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics. 1980;36:193–202. doi: 10.1007/BF00344251.
  26. Gini C. Il diverso accrescimento delle classi sociali e la concentrazione della ricchezza. Giornale degli Economisti. 1909;38:27–83.
  27. Goldrick M. Does like attract like? Exploring the relationship between errors and representational structure in connectionist networks. Cognitive Neuropsychology. 2008;25:287–313. doi: 10.1080/02643290701417939.
  28. Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing; 2013. pp. 6645–6649.
  29. Gross CG. Genealogy of the "grandmother cell". The Neuroscientist. 2002;8:512–518. doi: 10.1177/107385802237175.
  30. Güçlü U, van Gerven MA. Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. Journal of Neuroscience. 2015;35:10005–10014. doi: 10.1523/JNEUROSCI.5023-14.2015.
  31. Hanlon AG. Content-addressable and associative memory systems: a survey. IEEE Transactions on Electronic Computers. 1966;EC-15:509–521. doi: 10.1109/PGEC.1966.264358.
  32. Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv. 2015. https://arxiv.org/abs/1503.02531
  33. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Computation. 2006;18:1527–1554. doi: 10.1162/neco.2006.18.7.1527.
  34. Hinton GE. Learning multiple layers of representation. Trends in Cognitive Sciences. 2007;11:428–434. doi: 10.1016/j.tics.2007.09.004.
  35. Hong H, Yamins DL, Majaj NJ, DiCarlo JJ. Explicit information for category-orthogonal object properties increases along the ventral stream. Nature Neuroscience. 2016;19:613–622. doi: 10.1038/nn.4247.
  36. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. PNAS. 1982;79:2554–2558. doi: 10.1073/pnas.79.8.2554.
  37. Hubel DH, Wiesel TN. Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology. 1959;148:574–591. doi: 10.1113/jphysiol.1959.sp006308.
  38. Hubel DH, Wiesel TN. Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology. 1968;195:215–243. doi: 10.1113/jphysiol.1968.sp008455.
39. Huettel SA, Song AW, McCarthy G. Functional Magnetic Resonance Imaging. Sinauer Associates; 2009.
40. Kamitani Y, Tong F. Decoding the visual and subjective contents of the human brain. Nature Neuroscience. 2005;8:679–685. doi: 10.1038/nn1444.
41. Katz SM. Distribution of content words and phrases in text and language modelling. Natural Language Engineering. 1996;2:15–59. doi: 10.1017/S1351324996001246.
42. Khaligh-Razavi SM, Kriegeskorte N. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Computational Biology. 2014;10:e1003915. doi: 10.1371/journal.pcbi.1003915.
43. Knapp AG, Anderson JA. Theory of categorization based on distributed memory storage. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1984;10:616–637. doi: 10.1037/0278-7393.10.4.616.
44. Knott GD. Hashing functions. The Computer Journal. 1975;18:265–278. doi: 10.1093/comjnl/18.3.265.
45. Kohonen T. Content-Addressable Memories. Springer-Verlag; 1987.
46. Konorski J. Integrative Activity of the Brain. University of Chicago Press; 1967.
47. Kriegeskorte N, Cusack R, Bandettini P. How does an fMRI voxel sample the neuronal activity pattern: compact-kernel or complex spatiotemporal filter? NeuroImage. 2010;49:1965–1976. doi: 10.1016/j.neuroimage.2009.09.059.
48. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. 2008;2:4. doi: 10.3389/neuro.06.004.2008.
49. Kriegeskorte N. Relating population-code representations between man, monkey, and computational models. Frontiers in Neuroscience. 2009;3:363–373. doi: 10.3389/neuro.01.035.2009.
50. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems; 2012. pp. 1097–1105.
51. Laakso A, Cottrell G. Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology. 2000;13:47–76. doi: 10.1080/09515080050002726.
52. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539.
53. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86:2278–2324. doi: 10.1109/5.726791.
54. Logothetis NK. The neural basis of the blood-oxygen-level-dependent functional magnetic resonance imaging signal. Philosophical Transactions of the Royal Society B: Biological Sciences. 2002;357:1003–1037. doi: 10.1098/rstb.2002.1114.
55. Logothetis NK. What we can do and what we cannot do with fMRI. Nature. 2008;453:869–878. doi: 10.1038/nature06976.
56. Love BC. The algorithmic level is the bridge between computation and brain. Topics in Cognitive Science. 2015;7:230–242. doi: 10.1111/tops.12131.
57. Mack ML, Preston AR, Love BC. Decoding the brain's algorithm for categorization from its neural implementation. Current Biology. 2013;23:2023–2027. doi: 10.1016/j.cub.2013.08.035.
58. Mack ML, Preston AR. Decisions about the past are guided by reinstatement of specific memories in the hippocampus and perirhinal cortex. NeuroImage. 2016;127:144–157. doi: 10.1016/j.neuroimage.2015.12.015.
59. Magri C, Schridde U, Murayama Y, Panzeri S, Logothetis NK. The amplitude and timing of the BOLD signal reflects the relationship between local field potential power at different frequencies. Journal of Neuroscience. 2012;32:1395–1407. doi: 10.1523/JNEUROSCI.3985-11.2012.
60. Marr D. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. New York, USA: Henry Holt and Co., Inc; 1982.
61. Maunsell JH, Van Essen DC. Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation. Journal of Neurophysiology. 1983;49:1127–1147. doi: 10.1152/jn.1983.49.5.1127.
62. Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just M, Newman S. Learning to decode cognitive states from brain images. Machine Learning. 2004;57:145–175. doi: 10.1023/B:MACH.0000035475.85309.1b.
63. National Institute of Standards and Technology. Secure Hash Standard (SHS). FIPS PUB 180. Gaithersburg, USA; 2015.
64. Nevado A, Young MP, Panzeri S. Functional imaging and neural information coding. NeuroImage. 2004;21:1083–1095. doi: 10.1016/j.neuroimage.2003.10.043.
65. Norman KA, Polyn SM, Detre GJ, Haxby JV. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences. 2006;10:424–430. doi: 10.1016/j.tics.2006.07.005.
66. O'Herron P, Chhatbar PY, Levy M, Shen Z, Schramm AE, Lu Z, Kara P. Neural correlates of single-vessel haemodynamic responses in vivo. Nature. 2016;534:378–382. doi: 10.1038/nature17965.
67. Panzeri S, Macke JH, Gross J, Kayser C. Neural population coding: combining insights from microscopic and mass signals. Trends in Cognitive Sciences. 2015;19:162–172. doi: 10.1016/j.tics.2015.01.002.
68. Pessoa L, Gutierrez E, Bandettini P, Ungerleider L. Neural correlates of visual working memory: fMRI amplitude predicts task performance. Neuron. 2002;35:975–987. doi: 10.1016/s0896-6273(02)00817-6.
69. Pouget A, Dayan P, Zemel R. Information processing with population codes. Nature Reviews Neuroscience. 2000;1:125–132. doi: 10.1038/35039062.
70. Priebe NJ, Ferster D. A new mechanism for neuronal gain control (or how the gain in brains has mainly been explained). Neuron. 2002;35:602–604. doi: 10.1016/S0896-6273(02)00829-2.
71. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. Invariant visual representation by single neurons in the human brain. Nature. 2005;435:1102–1107. doi: 10.1038/nature03687.
72. Ramscar M, Dye M, Klein J. Children value informativity over logic in word learning. Psychological Science. 2013;24:1017–1023. doi: 10.1177/0956797612460691.
73. Reddy L, Kanwisher NG, VanRullen R. Attention and biased competition in multi-voxel object representations. PNAS. 2009;106:21447–21452. doi: 10.1073/pnas.0907330106.
74. Rescorla RA, Wagner AR. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Black AH, Prokasy WF, editors. Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts; 1972. pp. 64–99.
75. Riesenhuber M, Poggio T. Hierarchical models of object recognition in cortex. Nature Neuroscience. 1999;2:1019–1025. doi: 10.1038/14819.
76. Riesenhuber M, Poggio T. Models of object recognition. Nature Neuroscience. 2000;3 Suppl:1199–1204. doi: 10.1038/81479.
77. Rumelhart DE, Durbin R, Golden R, Chauvin Y. Backpropagation: The basic theory. In: Chauvin Y, Rumelhart DE, editors. Backpropagation: Theory, Architectures, and Applications. USA: Lawrence Erlbaum Associates; 1995. pp. 1–34.
78. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. ImageNet large scale visual recognition challenge. International Journal of Computer Vision. 2015;115:211–252. doi: 10.1007/s11263-015-0816-y.
79. Scheeringa R, Fries P, Petersson KM, Oostenveld R, Grothe I, Norris DG, Hagoort P, Bastiaansen MC. Neuronal dynamics underlying high- and low-frequency EEG oscillations contribute independently to the human BOLD signal. Neuron. 2011;69:572–583. doi: 10.1016/j.neuron.2010.11.044.
80. Serre T, Poggio T. A neuromorphic approach to computer vision. Communications of the ACM. 2010;53:54–61. doi: 10.1145/1831407.1831425.
81. Serre T, Wolf L, Bileschi S, Riesenhuber M, Poggio T. Robust object recognition with cortex-like mechanisms. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2007;29:411–426. doi: 10.1109/TPAMI.2007.56.
82. Shepard RN. Toward a universal law of generalization for psychological science. Science. 1987;237:1317–1323. doi: 10.1126/science.3629243.
83. Swisher JD, Gatenby JC, Gore JC, Wolfe BA, Moon CH, Kim SG, Tong F. Multiscale pattern analysis of orientation-selective activity in the primary visual cortex. Journal of Neuroscience. 2010;30:325–330. doi: 10.1523/JNEUROSCI.4811-09.2010.
84. Szegedy C, Ioffe S, Vanhoucke V. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv. 2016 https://arxiv.org/abs/1602.07261
85. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015a.
86. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. arXiv. 2015b https://arxiv.org/abs/1512.00567
87. Tompary A, Duncan K, Davachi L. High-resolution investigation of memory-specific reinstatement in the hippocampus and perirhinal cortex. Hippocampus. 2016;26:995–1007. doi: 10.1002/hipo.22582.
88. Turner R. How much cortex can a vein drain? Downstream dilution of activation-related cerebral blood oxygenation changes. NeuroImage. 2002;16:1062–1067. doi: 10.1006/nimg.2002.1082.
89. Underwood E. Barcoding the brain. Science. 2016;351:799–800. doi: 10.1126/science.351.6275.799.
90. Wong KF, Wang XJ. A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience. 2006;26:1314–1328. doi: 10.1523/JNEUROSCI.3733-05.2006.
91. Yamins DL, DiCarlo JJ. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience. 2016;19:356–365. doi: 10.1038/nn.4244.
92. Yamins DL, Hong H, Cadieu CF, Solomon EA, Seibert D, DiCarlo JJ. Performance-optimized hierarchical models predict neural responses in higher visual cortex. PNAS. 2014;111:8619–8624. doi: 10.1073/pnas.1403112111.
eLife. 2017 Jan 19;6:e21397. doi: 10.7554/eLife.21397.008

Decision letter

Editor: Russell Poldrack

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "What the Success of Brain Imaging Implies about the Neural Code" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Sabine Kastner as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Rogier Andrew Kievit (Reviewer #1); Stefano Panzeri (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

This paper asks what the success of fMRI (particularly in the context of representational analysis) implies about the structure of the neural code. The authors first argue that neural representations must be smooth enough in the spatial and temporal domains to afford effective measurement with fMRI. They then argue that the neural code must also obey a principle of "functional smoothness", such that similar stimuli map to similar neural representations. Using a range of coding schemes, they examine which ones could potentially satisfy this requirement for functional smoothness and thus allow successful neural decoding from fMRI. They show that a number of simple neural models demonstrate such representational isomorphism; in particular, they demonstrate that the earlier layers of a deep convolutional neural network (CNN) show functional smoothness, whereas the later layers in the hierarchy do not. The authors conclude that the success of fMRI places limits on the range of plausible coding schemes, and in particular that it implies that neural coding schemes must be both spatiotemporally and functionally smooth.

Essential revisions:

The three reviewers and I all agree that this paper raises an interesting set of questions. However, there were a number of concerns about the paper as it is currently written. Here are the main concerns as I see them, and how they should be addressed:

– Two reviewers raise the question of whether your claims are fully supported by the results, suggesting in many cases that you are making strong claims that are not fully supported by your results. I think that in general the revision needs to better calibrate the claims to the evidence, and avoid overreaching. In particular, the fact that fMRI is successful does not mean that it is capturing all, or even the majority, of interesting neural signals. This issue needs to be clearly addressed in the revision.

– The concept of "functional smoothness" is central to your manuscript, but the reviewers raised multiple questions regarding the clarity of this concept in your paper. The revision needs to be much clearer about what this concept means and how it relates more specifically to the other concepts of smoothness.

– Reviewer 2 points out that your paper needs to make better contact with the literature that has investigated the neural basis of the BOLD signal. In particular, specific attention must be paid to the points raised by reviewer 2 regarding the implications for temporal aspects of the BOLD signal.

– Reviewer 3 points out that the paper lacks methodological details, and that they were not able to access the materials linked at OSF. In the revision, please include a detailed description of your methods along with a working link to the code and materials.

I am including the reviewers' comments below, which provide greater detail regarding the issues outlined above.

Reviewer #1:

In this paper, Guest and Love propose an interesting and provocative claim: The success of fMRI writ large can, and should, constrain theories about the underlying neural code. The paper is engaging and provides a considerable amount of food for thought. However, upon close reading a number of problems are present, including the scope and precision of the central claims, the use and definition of terminology, the choice of models and appropriate engagement with relevant literature. These problems are outlined below. Given the scope of the paper the comments are lengthy, but necessary to clearly describe what we found challenging about the manuscript.

1) What are the scopes of the claims being made?

One main issue with the paper is the precise nature of the central claim. This claim varies in scope throughout the paper from more modest and well-evidenced to more all-encompassing and arguably beyond the evidence put forth. The main claim can loosely be phrased as: The fact that fMRI writ large “works” tells us something about the underlying neural mechanisms. The precise cashing out of this claim includes the following descriptions:

Abstract “certain neural coding schemes are more likely than others” (Plausible but not necessarily true).

Subsection “Sub-voxel Smoothness”, “the rate of change (i.e., smoothness) of neural activity must not exceed what can be measured within a voxel.” (This phrasing seems overly general – There is no reason why rates of change couldn't also exceed what can be measured. All that is compellingly shown is that a non-trivial part of the neural code behaves in a temporally smooth manner).

In the same section “temporally demanding coding schemes cannot carry the day” (Not clear what “carry the day” means – If it means “Can't fully explain all patterns” then it is likely true and well supported, if it means “temporally demanding coding schemes cannot play an important role also” I think it's too strong a claim).

Also in this section “the neural code must be spatially and temporally smooth with respect to neural activity” (Overly general in that it implies there is only one neural code ('The')).

Discussion section first paragraph “the neural code must be smooth, both at the sub-voxel and functional levels.” (overly general – It can be smooth and non-smooth simultaneously to currently unknown degrees, e.g. Figure 1 left and right could be overlapping).

Discussion section paragraph ten “However, fMRI's success might mean that when one is interested in the nature of computations carried out by the brain, the level of analysis where fMRI applies should be preferred.” (A considerable overgeneralisation – All we know for sure is that fMRI can provide important information, but there is no reason why lower (or higher) levels of temporal or spatial abstraction can't be even more informative.).

More limited and arguably more accurate are:

– Abstract section “Deep neural network approaches, are consistent with the success of fMRI” (True and well-supported).

– Subsection “Sub-voxel Smoothness”, “The success of fMRI does not imply that the brain does not utilize precise timing information” (an important explication of the point raised above that multiple coding schemes could operate concurrently, and that fMRI would 'work' if it only picks up parts).

Similarly in subsection “Factorial Design Coding”, discussing factorial coding, the authors state “Interestingly, if any region in the brain had such a distribution of voxels, neural similarity would be impossible to recover by fMRI.” And later “If the neural code for a region was employing a technique similar to factorial design, neuroimaging studies would never recover similarity structures by looking at the patterns of active voxels in that region.” These statements are only true if other coding schemes aren't allowed to co-exist. It may well be that factorial coding schemes co-exist alongside spatially smooth codes but simply cannot be detected using current methods and/or analyses. Or, to couch it in terms of Figure 1, there is no reason why the neural code might not be an overlay, or mixture, of A and B. This implicit suggestion that there must be a single coding scheme is reflected in the title, which refers to 'The' neural code.

As can be seen when read in succession, these are not interchangeable epistemic claims – Some are 'merely' existence claims whereas others rule out alternative explanations.

In our view, the main claim of that article, that is important and well supported, can be phrased as follows:

The success of fMRI makes it highly likely that at least some non-trivial subset of the signal/computations made by the neural code must be smooth.

Or, in the authors' words, “One general conclusion is that important aspects of the neural code are spatially and temporally smooth at the sub-voxel level”.

In other words, the article provides solid evidence for a more modest positive claim but at times shifts into language that suggests it has ruled out (or rendered exceedingly unlikely) other neural codes. We do not believe this is the case (nor, for that matter, do I think this is an achievable goal in the context of any single paper, for both principled mathematical reasons that every data pattern is compatible with an infinity of data generating mechanisms, and the pragmatic reasons of scope). In other words, we would suggest that the precise scope of the central claim is delineated more clearly, and that this scope is retained throughout.

2) Is "sub-voxel smoothness" an accurate and clear description of what is required for successful fMRI?

The other main challenge in the article is the introduction of two new terms, spatial and functional smoothness. It seems as though these are not entirely precisely defined, and to the extent that they are, one might wonder whether a) 'smoothness' is the best term and b) what, if anything, is similar about spatial and functional smoothness such that they share a conceptual term. We outline these challenges below.

With regards to spatial smoothness, I wonder whether the term inhomogeneity isn't more appropriate or central to the claim being made. For instance, it seems as though one could rearrange the voxel columns in Figure 1B whilst preserving sub-voxel smoothness as well as functional smoothness – The arrangement of the voxels as a gradient could be misunderstood as being about super-voxel smoothness (especially in the light of evidence for early visual retinotopic mapping). (more detail below).

The paper distinguishes between the concepts of sub-voxel and super-voxel smoothness. These concepts appear to be independent – i.e. one could have subvoxel smoothness combined with non-smooth super-voxel patterns. i.e., it's not entirely clear why “In such a spatially smooth representation, the transitions from red to yellow occur in progressive increments” follows necessarily from subvoxel smoothness? The later definition that voxel size can be changed arbitrarily does capture this, but for a *given* voxel size a pattern could be subvoxel smooth and super-voxel non-smooth. Note that the empirical examples outlined in subsection “Sub-voxel Smoothness” are compelling evidence that sub and super-voxel smoothness are related, but in principle they needn't be (this point is made by the authors in subsection “Functional Smoothness” with respect to functional smoothness).

It might be helpful to show in Figure 1 examples of all four possible combinations. In the current Figure 1, the left panels appear to show both sub- and super-voxel smoothness (there is a smooth gradient of functional tunings both within a voxel, and across voxels once tunings have been averaged within each voxel). The right panel appears to show a case where neither hold (within a single voxel, tunings are arranged into jagged repeating stripes, and across voxels all tunings are identical once averaged). It would also be possible to have sub-voxel smoothness without super-voxel smoothness (e.g. if one were to shuffle the "columns" of the arrangement on the left), and vice versa (e.g. a version of the arrangement on the right in which the red stripes became progressively thinner and the yellow stripes progressively wider as one moved across the cortical sheet).

Picturing these four alternatives, it seems that in order for different stimuli to create different patterns at the level of fMRI voxels, the important factor is neither sub-voxel nor super-voxel smoothness, but something which might be termed "across-voxel inhomogeneity" – i.e. that different voxels sample populations of neurons with different functional properties. For example, in the example just suggested, wherein the 'columns' of the left-hand panel are shuffled, there would no longer be a smooth gradient in the functional selectivities of voxels as one moves across the cortical sheet, but different stimuli could still evoke unique multi-voxel patterns.

It is important here to engage with previous literature on fMRI decoding, especially Kamitani & Tong (2005) and subsequent reflections on why stimulus orientation can be decoded from V1 via fMRI (e.g. Alink et al. 2013; Carlson, 2014). Orientation columns in V1 are arranged in a similar fashion to the "sub-voxel non-smooth" depiction in Figure 1, yet orientation can reliably be decoded. One plausible reason is that the chance of each voxel sampling all orientation selectivities exactly equally is very low, so even if neuronal selectivities are intermixed at a sub-voxel level, slight differences in orientation preference are likely to emerge at the voxel level.
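
For concreteness, this sampling argument is easy to check numerically. Below is a minimal sketch (our construction, not the authors' code; all sizes are illustrative): orientation preferences are fully intermixed within simulated voxels, yet finite sampling leaves each voxel with a slight orientation bias that a simple decoder can exploit.

```python
# Sketch: decoding from intermixed sub-voxel orientation columns.
# Illustrative sizes only; not taken from the paper under review.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, neurons_per_voxel = 200, 500

# Each neuron prefers one of two orientations; preferences are randomly
# intermixed within every voxel rather than arranged in smooth maps.
prefs = rng.integers(0, 2, size=(n_voxels, neurons_per_voxel))

def voxel_response(stimulus_orientation):
    # A neuron fires ~1.0 to its preferred orientation, ~0.5 otherwise,
    # plus noise; the voxel signal is the mean over its neurons.
    rates = np.where(prefs == stimulus_orientation, 1.0, 0.5)
    rates = rates + rng.normal(0, 0.5, size=rates.shape)
    return rates.mean(axis=1)

# Many trials of each orientation, decoded by a nearest-centroid rule
# fit on half of the trials.
trials = np.array([voxel_response(o) for o in [0, 1] * 100])
labels = np.array([0, 1] * 100)
train, test = slice(0, 100), slice(100, 200)
centroids = [trials[train][labels[train] == o].mean(axis=0) for o in (0, 1)]
dists = np.stack([np.linalg.norm(trials[test] - c, axis=1) for c in centroids])
accuracy = (dists.argmin(axis=0) == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # reliably above chance (0.5)
```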

The caption of Figure 1 suggests non-smoothness is preserved “regardless of the precise boundaries of the voxels which quantize the brain” but it seems that this would only be true conditional on preserving approximate voxel size? If I am allowed to change both the size and boundaries I could create mostly-yellow and mostly-red voxels from Figure 1B? Moreover, as mentioned above, random variations in sampling can lead to successful decoding of schemes similar to the non-smooth pattern.

Sub-voxel smoothness might not be the most intuitive term to use, not only for the reasons mentioned under point 2, but also because it implies spatial and not temporal smoothness. It seems more appropriate to talk about spatial and temporal scales at which useful information (about stimuli, thoughts, actions) is represented by brain activity.

3) "Functional smoothness" is a relational property between two models or between data and a model, not an inherent property of one model.

Arguably the most challenging term introduced by the paper is "functional smoothness." It is not made clear what this is a property of. From the definition, that "similar stimuli map to similar representations," a model is "functionally smooth" with respect to some second representational model if it preserves the representational geometry of that target model. In the first set of simulations involving noise-corrupted images, the target model is pixel space. In the second set of simulations involving objects of different categories being presented to a deep neural network, the target model is a semantic or conceptual space. In other words, functional smoothness is not an objective property of the model, but a relational property between an input and an output space. This seems to be recognised in some sections, e.g. subsection “Vector Space Coding”: "vector space coding is functionally smooth in the trivial sense as the function is identity."

However, many other phrases in the paper imply that smoothness is intrinsic to models, e.g.:

– subsection “Deep Learning Networks”: "one key question is whether functional smoothness breaks down at more advanced layers in DLNs…"

– subsection “Deep Learning Network Simulation”: "We consider whether functional smoothness declines as stimuli progress to more advanced layers…"

– and: "Violating functional smoothness, all other similarity structure is discarded…"

The simulations underlying Figures 2 and 3 add to this confusion rather than illustrating functional smoothness. Presumably the weights within the weight matrices used in the models from "matrix multiplication coding" onwards were random? If so, then Figure 3 seems to illustrate the trivial effect that as one performs increasingly many random nonlinear transformations on an input, distances between stimuli in the input space will be less and less well correlated with distances in the output. This result doesn't seem to reveal anything fundamental about the merit of (non-random) deep nonlinear networks as models of brain representation.
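
The effect described here is straightforward to reproduce. A sketch (ours; width, depth, and stimulus counts are arbitrary choices) applies successive random ReLU layers to a set of stimuli and tracks how well output distances rank-correlate with input distances:

```python
# Sketch of the "trivial effect": pairwise distances in the input become
# progressively decorrelated from output distances as more random
# nonlinear transformations are stacked. Illustrative construction only.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n_stimuli, dim, depth = 40, 100, 10

x = rng.normal(size=(n_stimuli, dim))
input_dists = pdist(x)

h = x
for layer in range(1, depth + 1):
    w = rng.normal(size=(dim, dim)) / np.sqrt(dim)  # random, untrained weights
    h = np.maximum(0.0, h @ w)                      # ReLU nonlinearity
    rho, _ = spearmanr(input_dists, pdist(h))
    print(f"layer {layer:2d}: rank correlation with input distances = {rho:.2f}")
```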

To highlight the fact that functional smoothness is a relational property, it might instead be helpful to create a simulation akin to these, but using a trained network, and adding noise in a more abstract "target space" (e.g. noise could be added to the location of an object within an image, for example by sliding around the location of a dog image superimposed on a natural background). The earliest layer of the network should be "functionally smooth" w.r.t. changes in pixels but not location (i.e. pixelwise differences will make a large difference to early representations); middle layers may be "functionally smooth" w.r.t. location but not pixels (e.g. the same dog at nearby locations will activate the same feature detectors looking for eyes, legs, etc), while the final categorical layer would ideally be wholly invariant both to pixel and location changes provided the image continues to contain a dog.

It would also help to distinguish between coding schemes that have the capacity to be functionally smooth, vs. those that actually are functionally smooth, with respect to a particular input space. Factorial and hash coding are raised as examples of codes that are "not functionally smooth," while neural-network-style encodings of various complexity are evaluated for whether the simulated examples happen to be functionally smooth with respect to the input spaces (pixel space, then semantic similarity space). However, there is an important difference in the sense in which factorial coding is "not functionally smooth" and that in which the late layers of GoogLeNet are not – factorial and hash coding are not capable of being functionally smooth (because every representation is orthogonal to every other), whereas the neural networks considered are capable in principle of strongly preserving the representational geometry of any input space desired (although see Point 6).
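
To make the capability distinction concrete, here is a toy illustration (ours, not from the paper): a hash code maps even minimally different inputs to essentially uncorrelated patterns, so it cannot express that "dog" is closer to "dogs" than to "truck".

```python
# Sketch: hash codes destroy input similarity by construction.
import hashlib
import numpy as np

def hash_code(text, n_bits=256):
    # Interpret a SHA-256 digest as a +/-1 pattern over n_bits "units".
    digest = hashlib.sha256(text.encode()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n_bits]
    return bits * 2.0 - 1.0

a, b = hash_code("dog"), hash_code("dogs")   # near-identical inputs
c = hash_code("truck")                        # dissimilar input
# Both correlations hover near 0: the code is incapable of preserving
# any graded similarity structure, no matter what the input space is.
print(np.corrcoef(a, b)[0, 1], np.corrcoef(a, c)[0, 1])
```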

4) What is the relationship, if any, between spatial and functional smoothness?

It seems as if spatial and functional smoothness, as defined in the paper, should be completely independent of one another, and they are described in paragraph two of subsection “Functional Smoothness” as "distinct concepts." However:

Subsection “Matrix Multiplication Coding”: "…the internal representation of this coding scheme is completely nonsensical to the human eye and is not super-voxel smooth."

This is confusing for two reasons. First, there is no reason for any internal representation to be "sensible to the human eye." The fact that the input representation is "sensible" is just an artefact of using images as the inputs. Even then, the image is only understandable by the eye if one preserves the exact spatial order of units and arranges them into rows and columns of exactly the right dimensions. Any shuffling or re-arrangement of the pixels would constitute an identical representation, and yet would no longer be sensible to the eye – so sensibility to the eye does not seem relevant.

Second, how does this minimal model (a transformation of the input by a single matrix multiplication) specify properties about the spatial arrangement of voxels? Paragraph two subsection “Functional Smoothness” states that functional smoothness is defined at the level of voxels (although this seems a little counter-intuitive, since brains and neural networks encode information at the level of neurons). So in the "matrix multiplication code," we should imagine a case where there are as many voxels in the output as there are pixels in the input image, and the activation level of each one is determined by an arbitrary linear combination of all input pixels. This output will be "functionally smooth" w.r.t. pixel space if the matrix transformation is one that preserves representational geometry (e.g. a rotation matrix; see first comment under "Smaller points" below for more on this…). This will be true however one arranges the voxels spatially. Some possible arrangements will appear super-voxel smooth (e.g. if voxels are placed next to those with the most similar selectivities), and some will not (e.g. if voxels are randomly placed), but all arrangements will be functionally smooth.
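
The independence of functional smoothness from spatial arrangement noted here can be verified directly, since pairwise distances are invariant to any permutation of the units/voxels. A one-line check (ours):

```python
# Sketch: representational geometry does not depend on how units/voxels
# are spatially arranged; permuting dimensions leaves all pairwise
# distances identical. Sizes are arbitrary.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)
acts = rng.normal(size=(20, 64))            # 20 stimuli x 64 "voxels"
perm = rng.permutation(64)                  # arbitrary spatial rearrangement
print(np.allclose(pdist(acts), pdist(acts[:, perm])))  # True
```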

If there is some deeper connection between functional and spatial smoothness, this needs to be more clearly explained and illustrated.

5) Different neural code properties may be required for "successful fMRI" when doing mean-activation vs. decoding vs. representational similarity analyses.

The title-bearing central claim refers to the “success” of brain imaging, but it is not entirely clear how this should be conceived. It might be worth briefly describing what counts as success, and how each result constrains possible neural codes. For example, it would be good to separately consider what findings from (1) old school mean-activation “blobology”, (2) classifiers performing above chance, (3) RSA imply about neural coding. It seems uncontroversial that all require "temporal smoothness" and some form of across-voxel spatial inhomogeneity in order for different stimuli to create detectably different fMRI activations. But they may have different further implications, for example:

1) The success of "blobology" in finding multi-voxel clusters with similar functional properties suggests some degree of super-voxel smoothness.

2) Above-chance decoding does not seem to even require a neural code capable of functional smoothness. E.g. in a factorial code, although every stimulus elicits a pattern orthogonal to that elicited by any other, one could still do successful "mind-reading" as long as one had access to data from previous trials on which subjects had viewed that stimulus.
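
This decoding point is easy to demonstrate: with mutually orthogonal stimulus patterns there is no similarity structure to recover, yet classification from reproducible patterns is essentially perfect. A sketch (ours; sizes illustrative):

```python
# Sketch: "mind-reading" succeeds on a code with zero similarity
# structure, because decoding only needs each stimulus pattern to be
# reproducible across trials. Toy construction, not the paper's code.
import numpy as np

rng = np.random.default_rng(2)
n_stimuli, n_voxels, n_trials = 8, 64, 20

# Mutually orthogonal stimulus patterns (rows of a random orthogonal matrix).
patterns, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))
patterns = patterns[:n_stimuli]

def trial(stim):
    # One noisy "measurement" of the pattern evoked by a stimulus.
    return patterns[stim] + rng.normal(0, 0.1, size=n_voxels)

# Estimate each stimulus's mean pattern from earlier trials, then predict
# new trials by matching against those means.
means = np.stack([np.mean([trial(s) for _ in range(n_trials)], axis=0)
                  for s in range(n_stimuli)])
correct = sum(int(np.argmax(means @ trial(s)) == s)
              for s in range(n_stimuli) for _ in range(n_trials))
print(correct / (n_stimuli * n_trials))  # ~1.0 despite orthogonal codes
```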

3) The success of RSA (i.e. finding interesting and nuanced similarity patterns between patterns evoked by different stimuli, which seem to bear some relation to the geometry of those stimuli within other models such as pixel space or semantic space and can be compared to predictions from computational models) does require a neural code which is capable of functional smoothness. The importance of functional smoothness only to RSA does seem to be recognised in the paper, but could be made more explicit, e.g.:

Subsection “Factorial Design Coding” paragraph five: "If the neural code for a region was employing a technique similar to factorial design, neuroimaging studies would never recover similarity structures by looking at the patterns of active voxels in that region."

One thing to note here is that model comparisons do not necessarily require RSA. Does the success of other analysis methods that compare models (e.g. voxel-receptive-field modelling) also point to functional smoothness? Or is this term strongly linked to RSA as an analysis framework, to the extent that it does not have a meaning outside it? If so, why do the authors focus so strongly on functional smoothness? (RSA is successful, but so are (linear) classifiers).

6) Simulations with a network trained on a task other than categorisation would help justify the claim that "non-smoothness" is an inevitable property of deep nonlinear neural networks.

The final simulations, in which distances within a “semantic space” are informally compared to distances within successive layers of the deep neural network GoogLeNet, conclude that later layers are less functionally smooth w.r.t. semantic space than early ones (since they lose between-category similarity information), and that “the decline in functional smoothness at later layers does not appear to be a straightforward consequence of training these networks to classify stimuli.” This latter conclusion is likely to be controversial, and is not strongly supported by the sparsity analysis.

The simplest way to clarify the contribution of the training task to the representational geometry in the final layers would be to show RDMs from (a) randomly-weighted networks, and (b) an unsupervised network, or one trained on a task orthogonal to categorisation. A good candidate would be the unsupervised seven layer neural network in Wang & Gupta (2015), which is available from https://github.com/xiaolonw/caffe-video_triplet
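
Whichever comparison network is used, the suggested analysis reduces to computing RDMs and rank-correlating them. For concreteness, a minimal sketch of that step (ours; the synthetic activations merely stand in for layer activations from a trained, randomly weighted, or unsupervised network):

```python
# Sketch of the RDM-comparison step; illustrative sizes and data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_stimuli, n_units) -> condensed correlation-distance RDM
    return pdist(activations, metric="correlation")

def compare_rdms(acts_a, acts_b):
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

rng = np.random.default_rng(4)
labels = np.repeat(np.arange(5), 6)                 # 5 categories x 6 items
signal = rng.normal(size=(5, 512))[labels]          # shared category structure
acts_a = signal + rng.normal(0, 0.5, size=(30, 512))
acts_b = signal + rng.normal(0, 0.5, size=(30, 512))
print(compare_rdms(acts_a, acts_b))                 # high: geometry is shared
```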

Again though, this would not reveal anything about the "functional smoothness of the model," since that is not an inherent property, but only the similarity between the representations within the model and in semantic space.

7) Previous work

One key feature missing from this paper is closer engagement with previous literature on decoding, such as the seminal findings in Kamitani & Tong (2005) (discussed above in Point 2), computational accounts (e.g. Kriegeskorte, Cusack, & Bandettini, 2010; Op de Beeck, 2010), and those discussing the plausibility of more trivial structural explanations such as differences in vasculature (e.g. Shmuel, Chaimow, Raddatz, Ugurbil, & Yacoub, 2010).

Miscellaneous

Subsection “Matrix Multiplication Coding” states “Matrix multiplication maps similar inputs to similar internal representations.” The claim seems to refer to multiplication of a 1xn input by an arbitrary nxn matrix (such as would be implemented by a 1-layer fully connected linear neural network). Although some such matrix multiplications would perfectly preserve distances between different inputs in their original vs. transformed spaces (e.g. rotation matrices), and with random matrices distances in the original and transformed spaces will tend to correlate (as your simulations show), the claim is not generally true. There are many matrix multiplications which will completely disrupt representational geometry.
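
A quick numerical check of this point (our sketch, with arbitrary sizes): orthogonal matrices preserve pairwise distances exactly, generic random matrices preserve them approximately, and degenerate (e.g. rank-1) matrices can largely destroy the geometry.

```python
# Sketch: not all matrix multiplications preserve representational geometry.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
x = rng.normal(size=(50, 100))
d_in = pdist(x)

q, _ = np.linalg.qr(rng.normal(size=(100, 100)))          # orthogonal (rotation-like)
w = rng.normal(size=(100, 100)) / 10.0                     # generic random matrix
u = np.outer(rng.normal(size=100), rng.normal(size=100))   # rank-1, degenerate

for name, m in [("orthogonal", q), ("random", w), ("rank-1", u)]:
    rho, _ = spearmanr(d_in, pdist(x @ m))
    print(f"{name:10s} distance rank-correlation: {rho:.2f}")
```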

Subsection “Deep Learning Network Simulation” paragraph three states that sparseness of representation does not decline for a "later advanced layer" of the category-supervised deep neural net. Which layer is this? It seems surprising that sparseness does not increase in at least the final layer (i.e. the output of the softmax operation). Relatedly, is it worth showing more layers in Figure 5? If not, why are these two layers shown?
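
For reference, the Gini coefficient appears in the paper's reference list, which suggests sparseness was quantified along these lines; the sketch below (ours, possibly differing in detail from the authors' actual measure) shows one standard Gini-based sparseness computation over a layer's activations.

```python
# Sketch of a Gini-coefficient sparseness measure (assumed, not confirmed,
# to match the paper's analysis): 0 = perfectly uniform activation,
# values near 1 = activation concentrated on a single unit.
import numpy as np

def gini(x):
    x = np.sort(np.abs(np.asarray(x, dtype=float)))  # ascending magnitudes
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini(np.ones(100)))                  # 0.0, maximally dense
print(gini(np.r_[np.zeros(99), 1.0]))      # 0.99, maximally sparse
```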

Related to Point 3 above, it would help to use more precise language about the nature of the sensory inputs or brain representations being discussed. For example, subsection “Matrix Multiplication Coding” says that a particular coding scheme "takes an input stimulus (e.g. a dog) and multiplies it by a weight matrix" – given that a dog is not the sort of thing that can be multiplied by a weight matrix, does this mean either (specifically) an image of a dog, or (generally) the activity within a preceding layer of neurons in a neural network model? Referring to the (arbitrary) input images in the simulations as "prototypes" is also confusing, as it suggests they have some special status to the models.

– Although I like the hash coding example, subsection “Hash Function Coding” says it's “potentially useful for biological systems” – it might be worth elaborating briefly in what circumstances a biological system would evolve something akin to hash coding for certain stimuli? It seems rather inefficient and hard to reconcile with basic facts about learning (e.g. co-occurrence increasing association strengths, and thereby similarity) and memory (e.g. semantic co-activation)? I appreciate the later evidence for the compatibility of higher layers with hash coding but the above claim seems more general – This relates to the above discussion on what

– The selection of coding schemes covers interesting (and rhetorically convincing) ground, but the choice of this particular set doesn't seem to be motivated. E.g., why focus on hash coding and factorial mapping – are those a subset of a broader range that could be considered? Something similar could be said about the selection of NN algorithms. The selection of codes seems to constitute an input being transformed by successively more complex neural networks ("vector space coding" = no transformation of the input; "gain control coding" = one nonlinearity; "matrix multiplication coding" = one linear transformation; "perceptron coding" = one linear transformation plus one nonlinearity; "multi-layer neural network"….etc). Although this is logical, the descriptions (e.g. "vector space coding") misleadingly imply that these are qualitatively distinct strategies for encoding a stimulus, and leave it to the reader to discover the logic of selecting these particular "codes".

– The paper by Bracci & Op de Beeck (2016) seems worth discussing in more detail, as it provides potential direct evidence for the hierarchy of smoothness. One wonders whether there are plausible alternative explanations that should be taken into account w.r.t. varying levels of prediction accuracy across the ventral stream? For instance, the noise ceiling also often goes down (i.e. there is less signal to be explained in principle).

Reviewer #2:

This paper addresses an interesting subject, that of what we can learn about the neural code from fMRI. The paper makes a valuable conceptual effort to think about which neural codes are supported by fMRI observations and which are not. Much as I like the paper and I think it is important to discuss these issues, I think that the connection between neural activity and fMRI, which should be central to this topic, is not sufficiently well discussed. My fear is that parts of the current manuscript would look insufficiently developed to neurophysiologists investigating neural coding. In the following I draw the authors' attention to what are, in my view, problems in the current manuscript that need addressing, and I provide a few suggestions. I hope that this will improve their paper.

The current writing of the paper may be taken at specific places to argue that the success of fMRI implies that a coding scheme that does not come through fMRI is not one used by the brain to compute. ("Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity"… "The neural code must have certain properties for this state of affairs to hold. What kinds of models or computations are consistent with the success of fMRI?" … "The success of fMRI does not imply that the brain does not utilize precise timing information, but it does mean that such temporally demanding coding schemes cannot carry the day given the successes fMRI has enjoyed in understanding neural representations.")

Of course there would be no basis for such a strong claim, and the authors should state and discuss this clearly. It is for example possible that fMRI gets only a part of the neural code used by the brain, and that other parts, perhaps as important as others, are simply lost by the limitations of fMRI but are important for brain function. Another example of the possible dangers of this argument is reported in my comments about the temporal domain. I think that the authors should carefully reconsider how they write these statements.

Subsection “Sub-voxel Smoothness” paragraph four: the problems related to the temporal domain seem to be conceptualized in a way that is at odds with what we know of how fMRI is sensitive to the timing of neural population activity. The authors seem to put forward the idea that BOLD fMRI roughly corresponds to a firing rate averaged over long time windows, and that it will be insensitive to the timing of spikes, for example synchronous firing:

"BOLD will fail to measure other temporal coding schemes, such as neural coding chemes that rely on the precise timing of neural events, as required by accounts that posit that the synchronous firing of neurons is relevant to coding (Abeles, Bergman, Margalit, & Vaadia, 1993; Gray & Singer, 1989). Unless synchronous firing is accompanied by changes in activity that fMRI can measure, such as mean population activity, it will be invisible to fMRI".. "Because the BOLD signal roughly summates through time.."

This reasoning appears at odds with what concurrent recordings of neural activity and fMRI show. First, as the pioneering work of Logothetis et al. (Nature 2001) already revealed and many studies from his group confirmed over the years, the BOLD signal correlates strongly with LFPs (a measure of mass synaptic activity) and it correlates with spike rates only when those correlate with LFPs. Second, the degree of millisecond-scale synchronization among neurons is not only picked up by BOLD: it actually seems to be a primary correlate of fMRI BOLD, and much more so than the firing rate or multi-unit activity computed over long windows. One study from the Logothetis group (Magri et al. J Neurosci 2012) showed that the primary correlate of BOLD is γ-band LFP power. γ-band power expresses the strength of local neural synchronization over scales of a few ms to one or two tens of ms. These results are also reported in human studies using EEG with fMRI (see Scheeringa et al. Neuron 2011).

So, my suggestion is to completely rewrite the "temporal dimension" part of this paper. This can also serve as an example of how dangerous it would be to rule out a coding scheme based on the success of fMRI and its spatio-temporal limitations (see my comments above).

Reviewer #3:

Summary:

The authors applied an fMRI data analysis method called representational similarity analysis (RSA) to artificial neural network data. They argued that the neural code must be both sub-voxel smooth and functionally smooth for RSA to recover the neural similarities from fMRI data.

Comments:

1) What is the definition of the term "functional smoothness"? In the "Functional smoothness" section, the authors only stated that factorial design coding and hash function coding are not functionally smooth, but neural network models are functionally smooth. I only see examples but no definition.

2) If the main contribution of the paper is that the neural code must be smooth for the RSA method to decode, then the authors should provide necessity and sufficiency proofs of this statement (Discussion section first paragraph): (1) if RSA can decode the similarity in the fMRI data, then the neural code must be sub-voxel smooth and functionally smooth; (2) as long as the neural code is sub-voxel smooth and functionally smooth, RSA can decode the similarity in the fMRI data.

3) The authors should explain the reason they chose Deep Neural Networks for their experiments. Friston's (2003) dynamic causal model is a popular model for fMRI data simulation. Spiking neural networks are another candidate used to study the neural code. Please explain why Deep Neural Networks are favorable for the experiments in this paper.

4) Experimental detail is lacking. There is no methods section. I also tried to look at the code the authors provided at http://osf.io/v8baz, but the access was forbidden. It seems like the code folder is private. So there is not much I can say about the methods used in this paper.

eLife. 2017 Jan 19;6:e21397. doi: 10.7554/eLife.21397.009

Author response


Essential revisions:

The three reviewers and I all agree that this paper raises an interesting set of questions. However, there were a number of concerns about the paper as it is currently written. Here are the main concerns as I see them, and how they should be addressed:

– Two reviewers raise the question of whether your claims are fully supported by the results, suggesting in many cases that you are making strong claims that are not fully supported by your results. I think that in general the revision needs to better calibrate the claims to the evidence, and avoid overreaching. In particular, the fact that fMRI is successful does not mean that it is capturing all, or even the majority, of interesting neural signals. This issue needs to be clearly addressed in the revision.

As detailed below, we made an effort throughout the manuscript to qualify our claims. The changes, following the reviewers’ suggestions, include being clear that the brain may use other coding schemes that are inconsistent with the success of fMRI. We make clear that our results do not rule out these coding schemes, but do suggest that the brain is using coding schemes that are to some extent consistent with fMRI’s success. To flesh out this idea, we introduce the notion that the neural code could consist of a mixture of coding schemes and that at least some portion of this mixture is consistent with the success of fMRI.
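
As an illustrative sketch of this mixture idea (a toy construction with arbitrary sizes, not the manuscript's actual simulations), similarity structure remains recoverable from voxel patterns that sum a geometry-preserving component with an orthogonal, factorial-like component, degrading gracefully as the orthogonal share grows:

```python
# Sketch: similarity recovery from a mixture of a smooth (identity) code
# and an orthogonal (factorial-like) code. Illustrative only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_stim, n_vox = 20, 400

stimuli = rng.normal(size=(n_stim, n_vox))
smooth = stimuli                                    # identity: preserves geometry
q, _ = np.linalg.qr(rng.normal(size=(n_vox, n_vox)))
factorial = q[:n_stim] * np.sqrt(n_vox)             # mutually orthogonal patterns

for w in (0.0, 0.5, 0.95):                          # share of the factorial code
    mixed = (1 - w) * smooth + w * factorial
    rho, _ = spearmanr(pdist(stimuli), pdist(mixed))
    print(f"factorial share {w:.2f}: recovered similarity rho = {rho:.2f}")
```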

– The concept of "functional smoothness" is central to your manuscript, but the reviewers raised multiple questions regarding the clarity of this concept in your paper. The revision needs to be much clearer about what this concept means and how it relates more specifically to the other concepts of smoothness.

As detailed below, we made substantial improvements that include formalizing functional smoothness as a straightforward equation and applying this measure to the simulation results. We also made an effort to improve the discussion of functional smoothness, as well as to disentangle it from related concepts.
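
To give the flavor of that formalization here (schematic notation only; the revised manuscript states the exact equation, which may differ in detail), functional smoothness of a mapping f over a stimulus set S can be cast as the degree to which distances among stimuli are preserved among their representations:

```latex
% Schematic only; the exact equation appears in the revised manuscript.
% A mapping f is functionally smooth over stimuli S to the extent that
% pairwise distances are (rank-)preserved by the mapping:
\[
  \mathrm{smoothness}(f; S) \;=\;
  \rho\Big( \big\{ d(s_i, s_j) \big\}_{i<j},\;
            \big\{ d\!\left(f(s_i), f(s_j)\right) \big\}_{i<j} \Big),
\]
% where d(.,.) is a distance measure over the relevant space and rho is a
% (rank) correlation computed across all stimulus pairs s_i, s_j in S.
```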

– Reviewer 2 points out that your paper needs to make better contact with the literature that has investigated the neural basis of the BOLD signal. In particular, specific attention must be paid to the points raised by reviewer 2 regarding the implications for temporal aspects of the BOLD signal.

We have incorporated all the literature that reviewer 2 suggested. In particular, reviewer 2’s pointers helped us improve the subsection on the temporal aspects of the BOLD signal.

– Reviewer 3 points out that the paper lacks methodological details, and that they were not able to access the materials linked at OSF. In the revision, please include a detailed description of your methods along with a working link to the code and materials.

We regret that the OSF repository was set to private at the time of submission. We have made the repository public; it contains all scripts and readme files to guide the user. We also now provide complete methods for the simulations in the main text.

I am including the reviewers' comments below, which provide greater detail regarding the issues outlined above.

Reviewer #1:

[…]

1) What are the scopes of the claims being made?

One main issue with the paper is the precise nature of the central claim. This claim varies in scope throughout the paper from more modest and well-evidenced to more all-encompassing and arguably beyond the evidence put forth. The main claim can loosely be phrased as: The fact that fMRI writ large “works” tells us something about the underlying neural mechanisms. The precise cashing out of this claim includes the following descriptions:

Abstract “certain neural coding schemes are more likely than others” (Plausible but not necessarily true).

Subsection “Sub-voxel Smoothness”, “the rate of change (i.e., smoothness) of neural activity must not exceed what can be measured within a voxel.” (This phrasing seems overly general – There is no reason why rates of change couldn't also exceed what can be measured. All that is compellingly shown is that a non-trivial part of the neural code behaves in a temporally smooth manner).

Thank you, we now make clear that multiple codes could coexist and that our results imply that the mixture will contain components with certain properties.

In the same section “temporally demanding coding schemes cannot carry the day” (Not clear what “carry the day” means – If it means “Can't fully explain all patterns” then it is likely true and well supported, if it means 'temporally demanding coding schemes cannot play an important role also' I think it's too strong a claim).

Also in this section “the neural code must be spatially and temporally smooth with respect to neural activity” (Overly general in that it implies there is only one neural code ('The')).

Edited.

Discussion section first paragraph “the neural code must be smooth, both at the sub-voxel and functional levels.” (overly general – It can be smooth and non-smooth simultaneously to currently unknown degrees, e.g. Figure 1 left and right could be overlapping).

We revised Figure 1 and the supporting discussion to make the concepts clearer.

Discussion section paragraph ten “However, fMRI's success might mean that when one is interested in the nature of computations carried out by the brain, the level of analysis where fMRI applies should be preferred.” (A considerable overgeneralisation – All we know for sure is that fMRI can provide important information, but there is no reason why lower (or higher) levels of temporal or spatial abstraction can't be even more informative.).

There were already qualifications in surrounding passages, but we have further tempered this statement.

More limited and arguably more accurate are:

– Abstract section “Deep neural network approaches, are consistent with the success of fMRI” (True and well-supported).

– Subsection “Sub-voxel Smoothness”, “The success of fMRI does not imply that the brain does not utilize precise timing information” (an important explication of the point raised above that multiple coding schemes could operate concurrently, and that fMRI would “work” if it only picks up parts).

We agree and have increased discussion of what fMRI may be missing and the possibility of code mixtures.

Similarly in subsection “Factorial Design Coding”, discussing factorial coding, the authors state “Interestingly, if any region in the brain had such a distribution of voxels, neural similarity would be impossible to recover by fMRI.” And later “If the neural code for a region was employing a technique similar to factorial design, neuroimaging studies would never recover similarity structures by looking at the patterns of active voxels in that region.” These statements are only true if other coding schemes aren't allowed to co-exist. It may well be that factorial coding schemes co-exist alongside spatially smooth codes but simply cannot be detected using current methods and/or analyses. Or, to couch it in terms of Figure 1, there is no reason why the neural code might not be an overlay, or mixture, of A and B. This implicit suggestion that there must be a single coding scheme is reflected in the title, which refers to “The” neural code.

This is addressed by the mixture concept, as well as by qualifying statements throughout that make clear such results would hold only if that were the “sole” code used. One contribution of this paper is to work through the properties of several coding schemes with respect to the BOLD response. In some circumstances, it is helpful to consider the properties of single coding schemes in isolation for purposes of clarity, though we make clear that the brain may utilize mixtures.
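To see concretely why a sole factorial code would defeat similarity recovery, consider a minimal sketch (illustrative only, not the code in our OSF repository):

```python
import numpy as np

# Illustrative only: under a purely factorial/orthogonal code, each
# stimulus receives a pattern orthogonal to every other stimulus's pattern.
patterns = np.eye(4)          # 4 stimuli x 4 "voxels", one-hot codes

r = np.corrcoef(patterns)     # pattern-by-pattern correlation matrix
print(np.round(r, 2))         # every off-diagonal entry equals -0.33
# All pairs of distinct stimuli are equally dissimilar, so there is no
# graded similarity structure for fMRI analyses to recover.
```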

As can be seen when read in succession, these are not interchangeable epistemic claims – Some are “merely” existence claims whereas others rule out alternative explanations.

In our view, the main claim of the article, which is important and well supported, can be phrased as follows:

The success of fMRI makes it highly likely that at least some non-trivial subset of the signal/computations made by the neural code must be smooth.

Or, in the authors' words: “One general conclusion is that important aspects of the neural code are spatially and temporally smooth at the sub-voxel level”.

In other words, the article provides solid evidence for a more modest positive claim but at times shifts into language that suggests it has ruled out (or rendered exceedingly unlikely) other neural codes. We do not believe this is the case (nor, for that matter, do we think this is an achievable goal in the context of any single paper, for both the principled mathematical reason that every data pattern is compatible with an infinity of data-generating mechanisms, and the pragmatic reason of scope). We would therefore suggest that the precise scope of the central claim be delineated more clearly, and that this scope be retained throughout.

We believe that the claims and the supporting evidence are now better aligned.

2) Is "sub-voxel smoothness" an accurate and clear description of what is required for successful fMRI?

The other main challenge in the article is the introduction of two new terms, spatial and functional smoothness. It seems as though these are not entirely precisely defined, and to the extent that they are, one might wonder whether a) “smoothness” is the best term and b) what, if anything, is similar about spatial and functional smoothness such that they share a conceptual term. We outline these challenges below.

With regards to spatial smoothness, I wonder whether the term inhomogeneity isn't more appropriate or central to the claim being made. For instance, it seems as though one could rearrange the voxel columns in Figure 1B whilst preserving sub-voxel smoothness as well as functional smoothness – The arrangement of the voxels as a gradient could be misunderstood as being about super-voxel smoothness (especially in the light of evidence for early visual retinotopic mapping). (more detail below).

We are essentially in agreement, though perhaps this was not clear in the original submission. Indeed, we retitled a subsection “Voxel Inhomogeneity Across Space and Time” and have used this language throughout in light of your comments. Subvoxel smoothness in the original manuscript was motivated by the mathematical concept of smoothness, which involves “clumpiness” and lack of sharp discontinuities. In other words, the concept does not imply smooth global changes in responses and would be consistent with your thought experiment of rearranging elements. One peril in related thought experiments is assuming that the brain somehow “knows” the voxel size or that voxels favorably align.

The paper distinguishes between the concepts of sub-voxel and super-voxel smoothness. These concepts appear to be independent – i.e. one could have sub-voxel smoothness combined with non-smooth super-voxel patterns. For instance, it's not entirely clear why “In such a spatially smooth representation, the transitions from red to yellow occur in progressive increments” follows necessarily from sub-voxel smoothness. The later definition that voxel size can be changed arbitrarily does capture this, but for a *given* voxel size a pattern could be sub-voxel smooth and super-voxel non-smooth. Note that the empirical examples outlined in subsection “Sub-voxel Smoothness” are compelling evidence that sub- and super-voxel smoothness are related, but in principle they needn't be (this point is made by the authors in subsection “Functional Smoothness” with respect to functional smoothness).

[…]

Picturing these four alternatives, it seems that in order for different stimuli to create different patterns at the level of fMRI voxels, the important factor is neither sub-voxel nor super-voxel smoothness, but something which might be termed "across-voxel inhomogeneity" – i.e. that different voxels sample populations of neurons with different functional properties. For example, in the example just suggested, wherein the “columns” of the left-hand panel are shuffled, there would no longer be a smooth gradient in the functional selectivities of voxels as one moves across the cortical sheet, but different stimuli could still evoke unique multi-voxel patterns.

We now better define functional smoothness, going as far as to formalize it. We made an effort in the original manuscript to note how functional and super-voxel smoothness were distinct, which is what motivated the inclusion of Figure 4 (now Figure 3). We now go further throughout and have added a paragraph discussing how these concepts diverge (subsection “Functional Smoothness”).

We also discussed the importance of voxel size in the original manuscript, but these points were too obscured. We now highlight these points more (subsection “Voxel Inhomogeneity Across Space and Time”) and have amended the Figure 1 example to demonstrate the role of voxel size. This might be a case of agreement that was not made sufficiently clear in the original manuscript.

It is important here to engage with previous literature on fMRI decoding, especially Kamitani & Tong (2005) and subsequent reflections on why stimulus orientation can be decoded from V1 via fMRI (e.g. Alink et al. 2013; Carlson, 2014). Orientation columns in V1 are arranged in a similar fashion to the "sub-voxel non-smooth" depiction in Figure 1, yet orientation can reliably be decoded. One plausible reason is that the chance of each voxel sampling all orientation selectivities exactly equally is very low, so even if neuronal selectivities are intermixed at a sub-voxel level, slight differences in orientation preference are likely to emerge at the voxel level.

We enjoyed reading these papers and now cite Kamitani and Tong (2005) and Alink et al. (2013) as additional examples of successful decoding.

The caption of Figure 1 suggests non-smoothness is preserved “regardless of the precise boundaries of the voxels which quantize the brain” but it seems that this would only be true conditional on preserving approximate voxel size? If I am allowed to change both the size and boundaries I could create mostly-yellow and mostly-red voxels from Figure 1B? Moreover, as mentioned above, random variations in sampling can lead to successful decoding of schemes similar to the non-smooth pattern.

Issues of voxel size are covered above.
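To make the voxel-size point concrete, here is a toy sketch (our illustration, not the paper's simulations) of averaging a one-dimensional "cortical sheet" into voxels of different sizes:

```python
import numpy as np

# Sketch: average a 1-D "cortical sheet" of neural responses into voxels
# of a given size and ask whether two stimuli remain distinguishable at
# the voxel level. The patterns and sizes below are illustrative choices.
def voxelize(activity, voxel_size):
    n = len(activity) // voxel_size * voxel_size
    return activity[:n].reshape(-1, voxel_size).mean(axis=1)

sheet = np.arange(64, dtype=float)
smooth_a = np.sin(sheet / 10.0)            # smoothly varying response, stimulus A
smooth_b = np.sin(sheet / 10.0 + 0.5)      # shifted smooth response, stimulus B

interleaved_a = np.tile([1.0, 0.0], 32)    # selectivities interleaved at fine scale
interleaved_b = np.tile([0.0, 1.0], 32)    # complementary interleaved pattern

for size in (2, 8):
    da = voxelize(smooth_a, size) - voxelize(smooth_b, size)
    db = voxelize(interleaved_a, size) - voxelize(interleaved_b, size)
    print(size, np.abs(da).max(), np.abs(db).max())
# Smooth patterns stay distinguishable as voxels grow; the perfectly
# interleaved patterns average to identical voxel responses unless voxels
# happen to sample selectivities unevenly (cf. Kamitani & Tong, 2005).
```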

Sub-voxel smoothness might not be the most intuitive term to use, not only for the reasons mentioned under point 2, but also because it implies spatial and not temporal smoothness. It seems more appropriate to talk about spatial and temporal scales at which useful information (about stimuli, thoughts, actions) is represented by brain activity.

This issue has also been covered above.

3) "Functional smoothness" is a relational property between two models or between data and a model, not an inherent property of one model.

Arguably the most challenging term introduced by the paper is "functional smoothness." It is not made clear what this is a property of. From the definition, that "similar stimuli map to similar representations," a model is "functionally smooth" with respect to some second representational model if it preserves the representational geometry of that target model. In the first set of simulations involving noise-corrupted images, the target model is pixel space. In the second set of simulations involving objects of different categories being presented to a deep neural network, the target model is a semantic or conceptual space. In other words, functional smoothness is not an objective property of the model, but a relational property between an input and an output space. This seems to be recognised in some sections, e.g. subsection “Vector Space Coding”: "vector space coding is functionally smooth in the trivial sense as the function is identity."

We now make the definition of functional smoothness more clear and discuss after the definition how it can apply.

We also removed the random network simulations involving images and instead report the simulations with the Gaussian vectors. We had only included the simulations with the images because we thought they would be more intuitive, but that seems not to be the case, and they might lead some readers to confuse them with the deep learning simulations that classify photographs.

However, many other phrases in the paper imply that smoothness is intrinsic to models, e.g.:

– subsection “Deep Learning Networks”: "one key question is whether functional smoothness breaks down at more advanced layers in DLNs…"

– subsection “Deep Learning Network Simulation”: "We consider whether functional smoothness declines as stimuli progress to more advanced layers…"

– and: "Violating functional smoothness, all other similarity structure is discarded…"

These issues are addressed in a comment above.

The simulations underlying Figures 2 and 3 add to this confusion rather than illustrating functional smoothness. Presumably the weights within the weight matrices used in the models from "matrix multiplication coding" onwards were random? If so, then Figure 3 seems to illustrate the trivial effect that as one performs increasingly many random nonlinear transformations on an input, distances between stimuli in the input space will be less and less well correlated with distances in the output. This result doesn't seem to reveal anything fundamental about the merit of (non-random) deep nonlinear networks as models of brain representation.

We made the methods for these simulations clearer. Yes, the random network simulations show that functional smoothness breaks down even in an untrained network, such that it should be harder to recover similarity spaces from more advanced layers using fMRI. Prior to this paper, we don't think people appreciated that the basic geometry of repeatedly projecting vectors would have these effects, as every neural network paper we have read focuses on the role of training. Correspondingly, imaging researchers speak of the signal-to-noise ratio or noise ceiling of various regions as if these were basic properties of those regions, as opposed to a consequence of receiving input that has been quasi-linearly transformed several times over.
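The logic of these simulations can be conveyed with a minimal sketch (illustrative only; the dimensions, seed, and ReLU nonlinearity are our choices for this example, and the actual simulation code is in the OSF repository):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# Sketch: repeatedly apply random linear maps plus a nonlinearity and
# track how well inter-stimulus distances in each layer correlate with
# distances in the input space.
rng = np.random.default_rng(0)
dim, n_stim, n_layers = 100, 20, 8

x = rng.normal(size=(n_stim, dim))          # Gaussian "stimuli"
d_input = pdist(x)                          # pairwise input distances

h = x
for layer in range(n_layers):
    w = rng.normal(size=(dim, dim)) / np.sqrt(dim)
    h = np.maximum(h @ w, 0.0)              # random projection + ReLU
    r, _ = pearsonr(d_input, pdist(h))
    print(f"layer {layer + 1}: r = {r:.2f}")
```

Even this toy version shows the qualitative effect: with no training at all, the correspondence between input geometry and layer geometry tends to degrade with depth.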

To highlight the fact that functional smoothness is a relational property, it might instead be helpful to create a simulation akin to these, but using a trained network, and adding noise in a more abstract "target space" (e.g. noise could be added to the location of an object within an image, for example by sliding around the location of a dog image superimposed on a natural background). The earliest layer of the network should be "functionally smooth" w.r.t. changes in pixels but not location (i.e. pixelwise differences will make a large difference to early representations); middle layers may be "functionally smooth" w.r.t. location but not pixels (e.g. the same dog at nearby locations will activate the same feature detectors looking for eyes, legs, etc), while the final categorical layer would ideally be wholly invariant both to pixel and location changes provided the image continues to contain a dog.

I think the simulations should now be clearer as we have replaced the dog and truck image inputs with vectors drawn from a Gaussian.

It would also help to distinguish between coding schemes that have the capacity to be functionally smooth vs. those that actually are functionally smooth, with respect to a particular input space. Factorial and hash coding are raised as examples of codes that are "not functionally smooth," while neural-network-style encodings of various complexity are evaluated for whether the simulated examples happen to be functionally smooth with respect to the input spaces (pixel space, then semantic similarity space). However, there is an important difference between the sense in which factorial coding is "not functionally smooth" and that in which the late layers of GoogLeNet are not – factorial and hash coding are not capable of being functionally smooth (because every representation is orthogonal to every other), whereas the neural networks considered are capable in principle of strongly preserving the representational geometry of any input space desired (although see Point 6).

Interestingly, the neural networks essentially perform hash coding as additional layers are added. We note this connection in two places in the manuscript: the final paragraph of subsection “Hash Function Coding” and paragraph three of the Discussion section. Like the neural networks, intermediate steps in a hash coding algorithm may be functionally smooth, with only the end step non-smooth.

4) What is the relationship, if any, between spatial and functional smoothness?

It seems as if spatial and functional smoothness, as defined in the paper, should be completely independent of one another, and they are described in paragraph two of subsection “Functional Smoothness” as "distinct concepts." However:

Subsection “Matrix Multiplication Coding”: "…the internal representation of this coding scheme is completely nonsensical to the human eye and is not super-voxel smooth."

This is confusing for two reasons. First, there is no reason for any internal representation to be "sensible to the human eye." The fact that the input representation is "sensible" is just an artefact of using images as the inputs. Even then, the image is only understandable by the eye if one preserves the exact spatial order of units and arranges them into rows and columns of exactly the right dimensions. Any shuffling or re-arrangement of the pixels would constitute an identical representation, and yet would no longer be sensible to the eye – so sensibility to the eye does not seem relevant.

You are grasping our intended point perfectly – that figure and caption were included to demonstrate a case in which functional smoothness is preserved but super-voxel smoothness is not. We now make this clearer, as well as expand on the differences between these concepts. Particular attention was paid to this issue in the section on Voxel Inhomogeneity Across Space and Time and in the caption of what is now Figure 3.

Second, how does this minimal model (a transformation of the input by a single matrix multiplication) specify properties about the spatial arrangement of voxels? Paragraph two subsection “Functional Smoothness” states that functional smoothness is defined at the level of voxels (although this seems a little counter-intuitive, since brains and neural networks encode information at the level of neurons). So in the "matrix multiplication code," we should imagine a case where there are as many voxels in the output as there are pixels in the input image, and the activation level of each one is determined by an arbitrary linear combination of all input pixels. This output will be "functionally smooth" w.r.t. pixel space if the matrix transformation is one that preserves representational geometry (e.g. a rotation matrix; see first comment under "Smaller points" below for more on this…). This will be true however one arranges the voxels spatially. Some possible arrangements will appear super-voxel smooth (e.g. if voxels are placed next to those with the most similar selectivities), and some will not (e.g. if voxels are randomly placed), but all arrangements will be functionally smooth.

We moved away from the images in the random networks to reduce this confusion.

If there is some deeper connection between functional and spatial smoothness, this needs to be more clearly explained and illustrated.

Thank you, as mentioned above this was one of the major revisions of the paper. We also view it as critical to make this distinction clear.

5) Different neural code properties may be required for "successful fMRI" when doing mean-activation vs. decoding vs. representational similarity analyses.

The title-bearing central claim refers to the “success” of brain imaging, but it is not entirely clear how this should be conceived. It might be worth briefly describing what counts as success, and how each result constrains possible neural codes. For example, it would be good to separately consider what findings from (1) old school mean-activation “blobology”, (2) classifiers performing above chance, (3) RSA imply about neural coding. It seems uncontroversial that all require "temporal smoothness" and some form of across-voxel spatial inhomogeneity in order for different stimuli to create detectably different fMRI activations. But they may have different further implications, for example:

1) The success of "blobology" in finding multi-voxel clusters with similar functional properties suggests some degree of super-voxel smoothness.

2) Above-chance decoding does not seem to even require a neural code capable of functional smoothness. E.g. in a factorial code, although every stimulus elicits a pattern orthogonal to that elicited by any other, one could still do successful "mind-reading" as long as one had access to data from previous trials on which subjects had viewed that stimulus.

3) The success of RSA (i.e. finding interesting and nuanced similarity patterns between patterns evoked by different stimuli, which seem to bear some relation to the geometry of those stimuli within other models such as pixel space or semantic space and can be compared to predictions from computational models) does require a neural code which is capable of functional smoothness. The importance of functional smoothness only to RSA does seem to be recognised in the paper, but could be made more explicit, e.g.:

These are good points which we tried to bring forward throughout and in particular in an added paragraph in the Discussion, paragraph seven.

Subsection “Factorial Design Coding” paragraph five: "If the neural code for a region was employing a technique similar to factorial design, neuroimaging studies would never recover similarity structures by looking at the patterns of active voxels in that region."

One thing to note here is that model comparisons do not necessarily require RSA. Does the success of other analysis methods that compare models (e.g. voxel-receptive-field modelling) also point to functional smoothness? Or is this term strongly linked to RSA as an analysis framework, to the extent that it does not have a meaning outside it? If so, why do the authors focus so strongly on functional smoothness? (RSA is successful, but so are (linear) classifiers).

We focused on recovering similarity spaces, but do discuss the implications for other analysis approaches in the aforementioned added Discussion paragraph.
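The contrast between points (2) and (3) above can be made concrete with a small sketch (a hypothetical example of ours, not an analysis from the paper):

```python
import numpy as np

# Sketch: with a factorial/orthogonal code, "mind reading" by template
# matching still works, even though there is no graded similarity
# structure for RSA to find.
rng = np.random.default_rng(1)
templates = np.eye(8)                                 # 8 stimuli, orthogonal patterns

trial = templates[3] + rng.normal(scale=0.2, size=8)  # noisy repeat of stimulus 3
print(int(np.argmax(templates @ trial)))              # -> 3: decoding succeeds

rdm = 1.0 - np.corrcoef(templates)                    # representational dissimilarity
print(np.round(rdm, 2))                               # all off-diagonal values equal
# A flat off-diagonal means RSA cannot relate this code to any model of
# graded stimulus similarity, even though decoding is above chance.
```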

6) Simulations with a network trained on a task other than categorisation would help justify the claim that "non-smoothness" is an inevitable property of deep nonlinear neural networks.

The final simulations, in which distances within a “semantic space” are informally compared to distances within successive layers of the deep neural network GoogLeNet, conclude that later layers are less functionally smooth w.r.t. semantic space than early ones (since they lose between-category similarity information), and that “the decline in functional smoothness at later layers does not appear to be a straightforward consequence of training these networks to classify stimuli.” This latter conclusion is likely to be controversial, and is not strongly supported by the sparsity analysis.

The simplest way to clarify the contribution of the training task to the representational geometry in the final layers would be to show RDMs from (a) randomly-weighted networks, and (b) an unsupervised network, or one trained on a task orthogonal to categorisation. A good candidate would be the unsupervised seven layer neural network in Wang & Gupta (2015), which is available from https://github.com/xiaolonw/caffe-video_triplet

Again though, this would not reveal anything about the "functional smoothness of the model," since that is not an inherent property, but only the similarity between the representations within the model and in semantic space.

This was a very nice suggestion. We spent some weeks getting their code to run and made some interesting discoveries. In this literature, classification decisions are made by unsupervised networks by first training the networks on unlabelled data (i.e., unsupervised learning). The final layer of such networks is then considered to have distilled the meaning of the image. To evaluate the performance of the unsupervised network, a simple classifier is then trained using this final layer as input. Wang and Gupta did not include this critical component in their code repository, but we corresponded with them extensively over email to discuss the myriad ways in which they boosted performance with their network.

Unfortunately, the bottom line is that their unsupervised network coupled with a classifier operating over the final layer only retrieves the correct label for an image in its top 20 guesses 40% of the time, which is not in the same league as the InceptionV3 model, which achieves 97% accuracy within only its top 5 guesses. In short, we can’t use this mostly unsupervised network as a comparison to the supervised network because it can’t properly classify images. In light of their model’s troubles, we edited our discussion of unsupervised methods so that it now focuses on performance rather than scope.

7) Previous work

One key feature missing from this paper is closer engagement with previous literature on decoding, such as the seminal findings in Kamitani & Tong (2005) (discussed above in Point 2), computational accounts (e.g. Kriegeskorte, Cusack, & Bandettini, 2010; Op de Beeck, 2010), and those discussing the plausibility of more trivial structural explanations such as differences in vasculature (e.g. Shmuel, Chaimow, Raddatz, Ugurbil, & Yacoub, 2010).

We included some of this work in places where we could find a fit.

Miscellaneous

Subsection “Matrix Multiplication Coding” states “Matrix multiplication maps similar inputs to similar internal representations.” The claim seems to refer to multiplication of a 1xn input by an arbitrary nxn matrix (such as would be implemented by a 1-layer fully connected linear neural network). Some such matrix multiplications would perfectly preserve distances between different inputs in their original vs. transformed spaces (e.g. rotation matrices), and with random matrices, distances in the original and transformed spaces will tend to correlate (as your simulations show), but the claim is not generally true: there are many matrix multiplications which will completely disrupt representational geometry.

The methods now make clear that the matrices were random, which makes it extremely unlikely they will be rank deficient. As a further safeguard, the simulation results are averaged over 100 different networks, which is now made clear in the methods description.
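To make the contrast concrete, here is a brief sketch (a hypothetical example, not the repository code) comparing an orthogonal matrix, a generic random matrix, and a degenerate rank-1 matrix:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# Sketch: how different kinds of weight matrices treat representational
# geometry. Dimensions and seed are illustrative choices.
rng = np.random.default_rng(2)
x = rng.normal(size=(20, 50))
d0 = pdist(x)

q, _ = np.linalg.qr(rng.normal(size=(50, 50)))              # orthogonal (rotation-like)
w = rng.normal(size=(50, 50)) / np.sqrt(50)                 # generic random matrix
low = rng.normal(size=(50, 1)) @ rng.normal(size=(1, 50))   # rank-1 (degenerate) matrix

for name, m in [("orthogonal", q), ("random", w), ("rank-1", low)]:
    r, _ = pearsonr(d0, pdist(x @ m))
    print(f"{name}: r = {r:.2f}")
# Orthogonal matrices preserve distances exactly (r = 1); random matrices
# preserve them approximately; degenerate matrices can badly distort them.
```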

Subsection “Deep Learning Network Simulation” paragraph three states that sparseness of representation does not decline for a "later advanced layer" of the category-supervised deep neural net. Which layer is this? It seems surprising that sparseness does not increase in at least the final layer (i.e. the output of the softmax operation). Relatedly, is it worth showing more layers in Figure 5? If not, why are these two layers shown?

We tried to keep it simple and ensure that the comparison was not biased in our favor. We chose these layers because sparsity (which we quantified with the Gini coefficient) goes against our hypothesis. We also tried to compare two layers that had the same structure (both pooling layers) to make sure we were comparing apples to apples. We didn’t explicitly number the layers because the architecture of the model is so complicated that three people may number the layers three different ways. Fortunately, the OSF repository and code make clear which structures we used from InceptionV3 for those interested.
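For readers who want the idea without consulting the repository, one common formulation of the Gini coefficient as a sparsity measure looks like the following (a sketch; the exact implementation we used is in the OSF repository):

```python
import numpy as np

# Sketch: the Gini coefficient of a vector of non-negative activations,
# used as a sparsity measure (0 = perfectly uniform activity; values near
# 1 = nearly one-hot, highly sparse activity).
def gini(activations):
    a = np.sort(np.abs(np.asarray(activations, dtype=float)))
    n = a.size
    index = np.arange(1, n + 1)
    return np.sum((2 * index - n - 1) * a) / (n * np.sum(a))

print(gini(np.ones(100)))              # ~0.0: dense, uniform activity
print(gini(np.r_[np.zeros(99), 1.0]))  # ~0.99: nearly one-hot, sparse
```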

Related to Point 3 above, it would help to use more precise language about the nature of the sensory inputs or brain representations being discussed. For example, subsection “Matrix Multiplication Coding” says that a particular coding scheme "takes an input stimulus (e.g. a dog) and multiplies it by a weight matrix" – given that a dog is not the sort of thing that can be multiplied by a weight matrix, does this mean either (specifically) an image of a dog, or (generally) the activity within a preceding layer of neurons in a neural network model? Referring to the (arbitrary) input images in the simulations as "prototypes" is also confusing, as it suggests they have some special status to the models.

Fixed. Again, we switched to Gaussian vectors here to avoid such confusions. We do retain the one figure with the image matrix multiplication because it makes it easier to see that even when the network states look “scrambled” they actually preserve functional smoothness, which was discussed above.

– Although I like the hash coding example, subsection “Hash Function Coding” says it’s “potentially useful for biological systems” – it might be worth elaborating briefly on the circumstances in which a biological system would evolve something akin to hash coding for certain stimuli. It seems rather inefficient and hard to reconcile with basic facts about learning (e.g. co-occurrence increasing association strengths, and thereby similarity) and memory (e.g. semantic co-activation). I appreciate the later evidence for the compatibility of higher layers with hash coding, but the above claim seems more general – this relates to the above discussion of the scope of the claims.

We mention one possible mental function in the final paragraph of subsection “Hash Function Coding”.
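To illustrate why hash coding preserves identity but not similarity, consider a small sketch (hypothetical; SHA-256 stands in here for whatever mechanism a biological system might use):

```python
import hashlib
import numpy as np

# Sketch: a hash-function code maps even nearly identical inputs to
# unrelated representations, preserving identity but discarding similarity.
def hash_code(x, n_bits=32):
    digest = hashlib.sha256(x.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:n_bits]

a = np.zeros(10)
b = a.copy()
b[0] = 1e-9                                 # a minuscule perturbation

print(hash_code(a))
print(hash_code(b))
# The two codes agree only at chance (~50% of bits), even though the
# inputs are almost indistinguishable: identity is retrievable, but
# similarity structure is not.
```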

– The selection of coding schemes covers interesting (and rhetorically convincing) ground, but the choice of this particular set doesn't seem to be motivated. E.g., why focus on hash coding and factorial mapping – are these a subset of a broader range that could be considered? Something similar could be said about the selection of NN algorithms. The selection of codes seems to constitute an input being transformed by successively more complex neural networks ("vector space coding" = no transformation of the input; "gain control coding" = one nonlinearity; "matrix multiplication coding" = one linear transformation; "perceptron coding" = one linear transformation plus one nonlinearity; "multi-layer neural network"….etc). Although this is logical, the descriptions (e.g. "vector space coding") misleadingly imply that these are qualitatively distinct strategies for encoding a stimulus, and leave it to the reader to discover the logic of selecting these particular "codes".

We chose popular coding schemes and models that helped illustrate the key concepts. We state in subsection “Vector Space Coding” that the neural network models are all components that can be assembled to form one larger model, as opposed to distinct coding strategies. Hopefully, this nicely transitions to the deep learning network simulation.
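The progression the reviewer identifies can be made explicit by writing the schemes as composable building blocks (an illustrative sketch; the function bodies are our shorthand for the idea, not the paper's definitions):

```python
import numpy as np

# Sketch of the progression: each "code" is a building block that can be
# composed into a larger network. Implementations are illustrative.
rng = np.random.default_rng(3)
dim = 50
W = rng.normal(size=(dim, dim)) / np.sqrt(dim)

vector_space = lambda x: x                        # identity: no transformation
gain_control = lambda x: x / np.linalg.norm(x)    # one nonlinearity (normalization)
matrix_mult  = lambda x: W @ x                    # one linear transformation
perceptron   = lambda x: np.maximum(W @ x, 0.0)   # linear map + nonlinearity

x = rng.normal(size=dim)
deep = perceptron(perceptron(x))                  # stacking yields a multi-layer
                                                  # network (W reused for brevity)
```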

– The paper by Bracci & Op de Beeck (2016) seems worth discussing in more detail, as it provides potential direct evidence for the hierarchy of smoothness. One wonders whether there are plausible alternative explanations that should be taken into account w.r.t. varying levels of prediction accuracy across the ventral stream. For instance, the noise ceiling also often goes down (i.e. there is less signal to be explained in principle).

We agree and actually mention this work both in the original submission and revision (Discussion section paragraph six). We suggest that our work may help explain this noise ceiling.

Reviewer #2:

This paper addresses an interesting subject: what we can learn about the neural code from fMRI. The paper makes a valuable conceptual effort to think about which neural codes are supported by fMRI observations and which are not. Much as I like the paper, and much as I think it is important to discuss these issues, the connection between neural activity and fMRI, which should be central to this topic, is not sufficiently well discussed. My fear is that parts of the current manuscript would look insufficiently developed to neurophysiologists investigating neural coding. In the following I draw the authors' attention to what are, in my view, problems in the current manuscript that need addressing, and I provide a few suggestions. I hope that this will improve their paper.

The current writing of the paper may, in specific places, be taken to argue that the success of fMRI implies that a coding scheme that does not come through fMRI is not one used by the brain to compute. ("Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity"… "The neural code must have certain properties for this state of affairs to hold. What kinds of models or computations are consistent with the success of fMRI?" … "The success of fMRI does not imply that the brain does not utilize precise timing information, but it does mean that such temporally demanding coding schemes cannot carry the day given the successes fMRI has enjoyed in understanding neural representations.").

As mentioned in response to reviewer 1, we have made efforts to moderate these claims.

Of course there would be no basis for such a strong claim, and the authors should state and discuss this clearly. It is, for example, possible that fMRI captures only part of the neural code used by the brain, and that other parts, perhaps just as important for brain function, are simply lost to the limitations of fMRI. Another example of the possible dangers of this argument is reported in my comments about the temporal domain. I think that the authors should carefully reconsider how they write these statements.

Thank you. We have done so and made an effort throughout to correct the tone and try to find the balancing point between claims and supporting evidence.

Subsection “Sub-voxel Smoothness” paragraph four: the problems related to the temporal domain seem to be conceptualized in a way that is at odds with what we know of how fMRI is sensitive to the timing of neural population activity. The authors seem to put forward the idea that BOLD fMRI roughly corresponds to a firing rate averaged over long time windows, and that it will be insensitive to the timing of spikes, for example synchronous firing:

"BOLD will fail to measure other temporal coding schemes, such as neural coding chemes that rely on the precise timing of neural events, as required by accounts that posit that the synchronous firing of neurons is relevant to coding (Abeles, Bergman, Margalit, & Vaadia, 1993; Gray & Singer, 1989). Unless synchronous firing is accompanied by changes in activity that fMRI can measure, such as mean population activity, it will be invisible to fMRI".. "Because the BOLD signal roughly summates through time.."

This was not our intent and these passages have been heavily edited in response to reviewer 2’s comments. The overarching intended point was that BOLD will be blind to such temporal coding schemes unless they have a correlate that BOLD is sensitive to.

This reasoning appears at odds with what concurrent recordings of neural activity and fMRI show. First, as the pioneering work of Logothetis et al. (Nature 2001) already revealed and many studies from his group confirmed over the years, BOLD correlates strongly with LFPs (a measure of mass synaptic activity) and correlates with spike rates only when those correlate with LFPs. Second, the degree of millisecond-scale synchronization among neurons is not only picked up by BOLD: it actually seems to be a primary correlate of fMRI BOLD, much more so than the firing rate or multi-unit activity computed over long windows. One study from Logothetis's group (Magri et al. J Neurosci 2012) showed that the primary correlate of BOLD is γ-band LFP power. Gamma-band power expresses the strength of local neural synchronization over scales of a few ms to one or two tens of ms. These results are also reported in human studies using EEG with fMRI (see Scheeringa et al. Neuron 2011).

We have reworked the discussion to center upon these contributions. One point from Magri et al. (2012) that we enjoyed, and that is now central to this section, is that different mixes of activity in different bands can lead to an identical BOLD response. These kinds of points are exactly what we tried to bring out in this section in the original manuscript. We now believe we are more successful in doing so.

So, my suggestion is to completely rewrite the "temporal dimension" part of this paper. This can also serve as an example of how very dangerous it would be to rule out a coding scheme based on the success of fMRI and its spatio-temporal limitations (see my comments above).

That was always part of our intended message, which should now be clearer. At a very broad level, we needed to bring out these issues clearly because establishing these views makes it surprising (i.e., informative) at some level that BOLD is useful in recovering similarity spaces, despite limitations in what it measures.

Reviewer #3:

Summary:

The authors applied an fMRI data analysis method called representational similarity analysis (RSA) to artificial neural network data. They argued that the neural code must be both sub-voxel smooth and functionally smooth for RSA to recover neural similarities from fMRI data.

Thank you for your time and useful input. As discussed below, we have improved the formal presentation in light of reviewer 3’s comments.

Comments:

1) What is the definition of the term "functional smoothness"? In the "Functional smoothness" section, the authors only stated that factorial design coding and hash function coding are not functionally smooth, but neural network models are functionally smooth. I only see examples but no definition.

We tried to be clear, but at times a formal definition is useful. We now offer a formal definition (Equation 1) and apply this measure to the simulations. For example, the measure of functional smoothness does decline as layers in a neural network are traversed.
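The intuition behind such a definition is that the mapping from stimuli to representations must not amplify small differences without bound. A Lipschitz-style condition conveys the general form (a sketch for illustration only, and our assumption about the form; see Equation 1 in the revised text for the definition actually used):

```latex
% Sketch of the general form (illustrative; not necessarily the exact
% form of Equation 1 in the revised manuscript):
\[
  \lVert f(x_1) - f(x_2) \rVert \le K \, \lVert x_1 - x_2 \rVert
  \qquad \text{for all stimuli } x_1, x_2,
\]
% where $f$ maps stimuli to internal representations and a small constant
% $K$ guarantees that similar stimuli yield similar representations.
```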

2) If the main contribution of the paper is that the neural code must be smooth for the RSA method to decode, then the authors should provide necessity and sufficiency proofs of this statement (Discussion section first paragraph): (1) if RSA can decode the similarity in the fMRI data, then the neural code must be sub-voxel smooth and functionally smooth. (2) As long as the neural code is sub-voxel smooth and functionally smooth, RSA can recover the similarity in the fMRI data.

We now make clear that similarity spaces by definition require functional smoothness (Equation 1). We discuss how RSA solutions and decoding solutions without functional smoothness would be degenerate in that any positive results would need to be driven by self-similarity.

3) The authors should explain why they chose Deep Neural Networks for their experiments. Friston's (2003) dynamic causal model is a popular model for fMRI data simulation. Spiking neural networks are another candidate used to study the neural code. Please explain why Deep Neural Networks are favorable for the experiments in this paper.

We now explain the motivation for this choice in subsection “Deep Learning Networks”. Importantly, these models are not only simulating neural data, but are actually completing the behavioral task (in this case object recognition) at human-level proficiency.

4) Experimental detail is lacking. There is no methods section. I also tried to look at the code the authors provided at http://osf.io/v8baz, but the access was forbidden. It seems like the code folder is private, so there is not much I can say about the methods used in this paper.

We apologise for not making the repository public at the time of submission. It is now public with all code fully documented. We also have provided full methods for the simulations in the main text.

