eLife. 2020 Nov 26;9:e59360. doi: 10.7554/eLife.59360

Tracking prototype and exemplar representations in the brain across learning

Caitlin R Bowman 1,2, Takako Iwashita 1, Dagmar Zeithamova 1
Editors: Timothy E Behrens3, Morgan Barense4
PMCID: PMC7746231  PMID: 33241999

Abstract

There is a long-standing debate about whether categories are represented by individual category members (exemplars) or by the central tendency abstracted from individual members (prototypes). Neuroimaging studies have shown neural evidence for either exemplar representations or prototype representations, but not both. Presently, we asked whether it is possible for multiple types of category representations to exist within a single task. We designed a categorization task to promote both exemplar and prototype representations and tracked their formation across learning. We found only prototype correlates during the final test. However, interim tests interspersed throughout learning showed prototype and exemplar representations across distinct brain regions that aligned with previous studies: prototypes in ventromedial prefrontal cortex and anterior hippocampus and exemplars in inferior frontal gyrus and lateral parietal cortex. These findings indicate that, under the right circumstances, individuals may form representations at multiple levels of specificity, potentially facilitating a broad range of future decisions.

Research organism: Human

Introduction

The ability to form new conceptual knowledge is a key aspect of healthy memory function. There has been a longstanding debate about the nature of the representations underlying conceptual knowledge, which is exemplified in the domain of categorization. Some propose that categories are represented by their individual category members and that generalizing the category label to new examples involves joint retrieval and consideration of individual examples encountered in the past (i.e., exemplar models, Figure 1A; Kruschke, 1992; Medin and Schaffer, 1978; Nosofsky, 1986). Others propose that categories are represented by their central tendency – an abstract prototype containing all the most typical features of the category (i.e., prototype models, Figure 1B; Homa, 1973; Posner and Keele, 1968; Reed, 1972). Category generalization then involves consideration of a new item’s similarity to relevant category prototypes.

Figure 1. Category-learning task.


Conceptual depiction of (A) exemplar and (B) prototype models. Exemplar: categories are represented as individual exemplars. New items are classified into the category with the most similar exemplars. Prototype: categories are represented by their central tendencies (prototypes). New items are classified into the category with the most similar prototype. (C) Example stimuli. The leftmost stimulus is the prototype of category A and the rightmost stimulus is the prototype of category B, which shares no features with prototype A. Members of category A share more features with prototype A than prototype B, and vice versa. (D) During the learning phase, participants completed four study-test cycles while undergoing fMRI. In each cycle, there were two runs of observational study followed by one run of an interim generalization test. During observational study runs, participants saw training examples with their species labels without making any responses. During interim test runs, participants classified training items as well as new items at varying distances. (E) After all study-test cycles were complete, participants completed a final generalization test that was divided across four runs. Participants classified training items as well as new items at varying distances.

Both the prototype and exemplar accounts have been formalized as quantitative models and fit to behavioral data for decades, with numerous studies supporting each model (exemplar meta-analysis: Nosofsky, 1988; prototype meta-analysis: Smith and Minda, 2000). Neuroimaging studies have also provided support for these models. Studies using univariate contrasts showed overlap between neural systems supporting categorization and recognition (Nosofsky et al., 2012), as well as medial temporal lobe involvement in categorization (Koenig et al., 2008; Lech et al., 2016; Nomura et al., 2007), both of which have been interpreted as indicating a role of exemplar retrieval in categorization. More recently, studies have used parameters generated from formal prototype and exemplar models with neuroimaging data, but with conflicting results. Mack et al., 2013 found similar behavioral fits for the two models, but better fit of the exemplar model to brain data. Parts of the lateral occipital, lateral prefrontal and lateral parietal cortices tracked exemplar model predictors. No region tracked prototype predictors. The authors concluded that categorization decisions are based on memory for individual items rather than abstract prototypes. In contrast, Bowman and Zeithamova, 2018 found better fit of the prototype model in both brain and behavior. The ventromedial prefrontal cortex and anterior hippocampus tracked prototype predictors, demonstrating that neural category representations can involve more than representing the individual category members, even in regions like the hippocampus typically thought to support memory for specific episodes.

Interestingly, the different brain regions identified across these two studies aligned well with the larger literature contrasting memory specificity with memory integration and generalization. Lateral prefrontal regions are thought to resolve interference between similar items in memory (Badre and Wagner, 2005; Bowman and Dennis, 2016; Jonides et al., 1998; Kuhl et al., 2007), and lateral parietal cortex supports recollective experience (Vilberg and Rugg, 2008) and maintains high fidelity representations of individual items during memory retrieval (Kuhl and Chun, 2014; Xiao et al., 2017). That these regions also tracked exemplar predictors suggests that these functions may also support categorization by maintaining representations of individual category members as distinct from one another and from non-category members. In contrast, the VMPFC and hippocampus are known to support episodic inference through memory integration of related episodes (Schlichting et al., 2015; Shohamy and Wagner, 2008; Zeithamova et al., 2012) and encoding of new information in light of prior knowledge (van Kesteren et al., 2012). That these regions also tracked prototype predictions suggests that prototype extraction may involve integrating across category exemplars, linking across items sharing a category label to form an integrated, abstract category representation. However, as neural prototype and exemplar representations were identified across studies that differed in both task details and in the categorization strategies elicited, it has not been possible to say whether differences in the brain regions supporting categorization were due to differential strength of prototype versus exemplar representations or some other aspect of the tasks.

It is possible that the seemingly conflicting findings regarding the nature of category representations arose because individuals are capable of forming either type of representation. Prior studies have compared different category structures and task instructions to identify multiple memory systems supporting categorization (e.g., Aizenstein et al., 2000; Ashby et al., 1998; Ell et al., 2010; Zeithamova et al., 2008). While such findings show that the nature of concept representations depends on task demands, it is unclear if both prototype and exemplar representations can co-exist within the same task. Such mixed representations have been identified in episodic memory tasks, with individuals sometimes forming both integrated and separated representations for the same events (Schlichting et al., 2015) and a single episode sometimes represented at multiple levels of specificity, even within the hippocampus (Collin et al., 2015). We also know that individuals sometimes use a mix of strategies in categorization, for example when most category members are classified according to a simple rule while others are memorized as exceptions to that rule (Davis et al., 2012; Nosofsky et al., 1994). These differing representations may emerge because they allow for flexibility in future decision-making, as abstract representations that discard details of individual items are well suited to making generalization judgments but are poorly suited to judgments that require specificity. Alternatively, prototype representations may emerge as a byproduct of retrieving category exemplars, and they may themselves be encoded via recurrent connections, becoming an increasingly robust part of the concept representation (Hintzman, 1986; Koster et al., 2018; Zeithamova and Bowman, 2020). Thus, under some circumstances, both prototype and exemplar representations may be apparent within the same task.

To test this idea, we used fMRI in conjunction with a categorization task designed to balance encoding of individual examples vs. abstract information. This task used a training set with examples relatively close to the prototype, which has been shown to promote prototype abstraction (Bowman and Zeithamova, 2018; Bowman and Zeithamova, 2020). To promote exemplar encoding, we used an observational training task rather than feedback-based training (Cincotta and Seger, 2007; Heindel et al., 2013; Poldrack et al., 2001). We then looked for evidence of prototype and exemplar representations in the brain and in behavioral responses. In behavior, the prototype model assumes that categories are represented by their prototypes and predicts that subjects should be best at categorizing the prototypes themselves, with decreasing accuracy for items with fewer shared features with prototypes. The prototype model does not make differential predictions for new and old (training) items at the same distance from the prototype. The exemplar model assumes that categories are represented by the previously encountered exemplars and predicts that subjects should be best at categorizing old items and new items closest to the old exemplars. The mathematical formalizations of the models further take into account that a participant may not pay equal attention to all stimulus features and that perceived distance increases non-linearly with physical distance (see Methods for more details). We note that it is sometimes possible to observe behavioral evidence for both types of representations. For example, in our prior study (Bowman and Zeithamova, 2018), participants’ behavior was better explained by the prototype model than the exemplar model, but we also observed an advantage for old items relative to new items at the same distance to prototypes, in line with exemplar but not prototype model predictions.

The key behavioral prediction of each model is the trial-by-trial probability of responding category A vs category B. These probabilities are determined for each trial by the relative similarity of the test item to the category A and category B representations proposed by each model. Once these probabilities are generated for each model, they are compared to the participant’s actual responses to determine which model better predicted the subject’s observed behavior. We also used output from the models to generate subject-specific, trial-by-trial fMRI predictions. These were derived from the similarity of each test item to either an exemplar-based or prototype-based category representation (see Methods for details). We then measured the extent to which prototype- and exemplar-tracking brain regions could be identified, focusing on the VMPFC and anterior hippocampus as predicted prototype-tracking regions, and lateral occipital, prefrontal, and parietal regions as predicted exemplar-tracking regions.
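As a minimal sketch of this step, assuming the ratio (Luce choice) rule that is standard in this modeling tradition for converting the two similarity values into a response probability (the function below is illustrative, not the authors' code):

```python
def p_respond_A(sim_to_A, sim_to_B):
    """Trial-by-trial probability of responding 'category A', given the item's
    similarity to the category A and category B representations proposed by a
    model (prototype similarity for the prototype model, summed exemplar
    similarity for the exemplar model)."""
    return sim_to_A / (sim_to_A + sim_to_B)

# An item twice as similar to the A representation as to the B representation
# is predicted to be labeled 'A' on about two-thirds of trials.
print(p_respond_A(0.4, 0.2))  # ~0.667
```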

We also asked whether there are shifts across learning in the type of concept representation individuals rely on to make categorization judgments. While some have suggested that memory systems compete with one another during learning (Poldrack and Packard, 2003; Seger, 2005), prior studies fitting exemplar and prototype models to fMRI data have done so only during a categorization test that followed extensive training, potentially missing dynamics occurring earlier in concept formation. Notably, memory consolidation research suggests that memories become abstract over time, often at the expense of memory for specific details (McClelland et al., 1995; Moscovitch et al., 2016; Payne et al., 2009; Posner and Keele, 1970), suggesting that early concept representations may be exemplar-based. In contrast, research on schema-based memory shows that abstract knowledge facilitates learning of individual items by providing an organizational structure into which new information can be incorporated (Bransford and Johnson, 1972; Tse et al., 2007; van Kesteren et al., 2012). Thus, early learning may instead emphasize formation of prototype representations, with exemplars emerging later. Finally, abstract and specific representations need not trade-off in either direction. Instead, the brain may form these representations in parallel (Collin et al., 2015; Schlichting et al., 2015) without trade-off between concept knowledge and memory for individual items (Schapiro et al., 2017), generating the prediction that both prototype and exemplar representations may grow in strength over the course of learning.

In the present study, participants underwent fMRI scanning while learning two novel categories or ‘species,’ which were represented by cartoon animals varying on eight binary dimensions (Figure 1C). The learning phase consisted of two types of runs: observational study runs and interim generalization test runs (Figure 1D). During study runs, participants passively viewed individual category members with their accompanying species label (‘Febble’ or ‘Badoon’). All of the items presented during study runs differed by two features from their respective prototypes (for example, exemplars depicted in Figure 1A). After completing two runs of observational study, participants underwent an interim generalization test run in which participants classified cartoon animals into the two species. Test items included the training items as well as new items at varying distances from category prototypes. Across the entire learning phase, there were four study-test cycles, with different new test items at every cycle. The learning phase was followed by a final generalization test, whose structure was similar to the interim test runs but more extensive (Figure 1E).

To test for evidence of prototype and exemplar representations in behavior across the group, we compared accuracy for items varying in distance from category prototypes and tested for an accuracy advantage for training items relative to new items matched for distance from category prototypes. We also fit formal prototype and exemplar models to behavior in individual subjects, which involves computing the similarity of a given test item to either the prototype of each category (prototype model) or the individual training items from each category (exemplar model) and then using that similarity to predict how likely it is that an item will be classified into one category versus the other. The model whose predictions better match a given subject’s actual classification responses will have better fit. However, it is also possible that evidence for each of the models will be similar, potentially reflecting a mix of representations.

To test for co-existing prototype and exemplar correlates in the brain during interim and final generalization tests, we used latent metrics generated from each model as trial-by-trial predictors of BOLD activation in six regions of interest (Figure 2): ventromedial prefrontal cortex, anterior hippocampus, posterior hippocampus, lateral occipital cortex, inferior frontal gyrus, and lateral parietal cortex. To identify potential changes with learning, we tested these effects separately in the first half of the learning phase (interim tests 1 and 2) and second half of the learning phase (interim tests 3 and 4) as well as in the final test.

Figure 2. Regions of interest from a representative subject.


Regions were defined in the native space of each subject using automated segmentation in Freesurfer.

Results

Behavioral

Accuracy

Interim tests

Categorization performance across the four interim tests is presented in Figure 3A. We first tested whether generalization accuracy improved across the learning phase and whether generalization of category labels to new items differed across items of varying distance to category prototypes. There was a significant main effect of interim test number [F(3,84)=3.27, p=0.03, ηp2 = 0.11], with a significant linear effect [F(1,28)=9.91, p=0.004, ηp2 = 0.26] driven by increasing generalization accuracy across the interim tests. There was also a significant main effect of item distance [F(3,84)=51.75, p<0.001, ηp2 = 0.65] with a significant linear effect [F(1,28)=126.04, p<0.001, ηp2 = 0.82] driven by better accuracy for items closer to category prototypes. The interim test number x item distance interaction effect was not significant [F(9,252)=0.62, p=0.78, ηp2 = 0.02]. We next tested whether accuracy for old training items was higher than for new items of the same distance (i.e., distance 2) and whether that differed over the course of the learning phase. There was a linear effect of interim test number [F(1,28)=16.78, p<0.001, ηp2 = 0.38] driven by increasing accuracy across the tests. There was also a significant main effect of item type (old vs. new) [F(1,28)=8.76, p=0.01, ηp2 = 0.24], driven by higher accuracy for old items (M = 0.83, SD = 0.11) relative to new items of the same distance from the prototypes (M = 0.77, SD = 0.10). The interim test number x item type interaction effect was not significant [F(3,84)=0.35, p=0.79, ηp2 = 0.01], indicating that the advantage for old compared to new items was relatively stable across learning. To summarize, we observed a reliable typicality gradient in which accuracy decreased with distance from the prototypes, and both old and new items at distance 2 numerically fell between distance 1 and distance 3 items (Figure 3A). However, within distance 2 items, we also observed a reliable advantage for old items compared to new items, an aspect of the data that would not be predicted by the prototype model.

Figure 3. Behavioral accuracy for interim and final tests.


(A) Mean generalization accuracy across each of four interim tests completed during the learning phase. Source data can be found in Figure 3—source data 1. (B) Mean categorization accuracy in the final test. Source data can be found in Figure 3—source data 2. In both cases, accuracies are separated by distance from category prototypes (0–3) and old vs. new (applicable to distance two items only). Error bars represent the standard error of the mean. 

Figure 3—source data 1. Behavioral accuracy - interim tests.
Figure 3—source data 2. Behavioral accuracy - final test.

Final test

Accuracies for generalization items at each distance from the prototype as well as for training items (all training items were at distance two from the prototypes) are presented in Figure 3B. A repeated measures ANOVA on new items that tested the effect of distance from category prototypes on generalization accuracy showed a main effect of item distance [F(3,84)=53.61, p<0.001, ηp2 = 0.66] that was well characterized by a linear effect [F(1,28)=124.55, p<0.001, ηp2 = 0.82]. Thus, the categorization gradient driven by higher accuracy for items closer to category prototypes observed during learning was also strong during the final test. In contrast, a paired t-test for accuracy on old relative to new items at distance two showed that the numeric advantage for old relative to new items was not statistically significant in the final test [t(28)=0.93, p=0.36, CI95[−0.03,.08], d = 0.22].

Behavioral model fits

Figure 4a-c presents model fits in terms of raw negative log likelihood for each phase (lower numbers mean lower model fit error and thus better fit). Fits from the two models tend to be correlated. If a subject randomly guesses on the majority of trials (such as early in learning), neither model will fit the subject’s responses well and the subject will have higher (mis)fit values for both models. As a subject learns and does better on the task, fits of both models will tend to improve because items close to the old exemplars of category A tend to be, on average, closer to the category A prototype than the category B prototype and vice versa. For example, even if a subject had a purely exemplar representation, the prototype model would still fit that subject’s behavior quite well, albeit not as well as the exemplar model. Due to the correlation between model fits, the exact fit value for one model is not sufficient to determine a subject’s strategy, only the relative fit of one model compared to the other. Visually, in Figure 4a–c, subjects above the diagonal are better fit by the exemplar model, participants below the line are better fit by the prototype model, and participants near the line are fit comparably well by both models. Thus, although the model fits tend to be correlated across subjects, the within-subject advantage for one model over another is still detectable and meaningful. To quantify which model fits are comparable and which are reliably different, we took a Monte Carlo approach and compared the observed model fit differences to a null distribution expected by chance alone (see Methods for details). Figure 4d–f presents the percentage of subjects that were classified as having used a prototype strategy, exemplar strategy, or having model fits that were not reliably different from one another (‘similar’ fit). In the first half of learning, the majority of subjects (66%) had similar prototype and exemplar model fits. In the second half of learning and the final test, the majority of subjects (56% and 66%, respectively) were best fit by the prototype model. Prototype and exemplar model fits may not differ reliably for a given subject, such as when the subject’s responses are perfectly consistent with both models (as can happen in high-performing subjects) or when some responses are more consistent with one model while other responses are more consistent with the other model. In such cases, a subject may be relying on a single representation but we cannot discern which, or the subject may rely to some extent on both types of representations.
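For concreteness, a negative log likelihood fit of this kind can be computed from a model's trial-by-trial response probabilities and a participant's actual responses, as in the sketch below (values are illustrative; lower numbers mean better fit). The Monte Carlo classification of subjects is a separate step described in the Methods and is not shown here.

```python
import numpy as np

def negative_log_likelihood(p_A, responses):
    """Model misfit in negative log likelihood.

    p_A:       model-predicted probability of an 'A' response on each trial.
    responses: observed responses (1 = 'A', 0 = 'B').
    """
    p_A = np.clip(p_A, 1e-10, 1 - 1e-10)              # avoid log(0)
    p_observed = np.where(responses == 1, p_A, 1 - p_A)
    return -np.sum(np.log(p_observed))

# Hypothetical predictions from a fitted prototype and a fitted exemplar model
responses = np.array([1, 1, 0, 1, 0, 0, 1, 0])
nll_proto = negative_log_likelihood(np.array([.9, .8, .3, .7, .2, .1, .6, .4]), responses)
nll_exemp = negative_log_likelihood(np.array([.8, .7, .4, .6, .3, .2, .5, .5]), responses)
# The model with the lower negative log likelihood fits this subject better.
print(nll_proto, nll_exemp)
```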

Figure 4. Behavioral model fits.


Scatter plots indicate the relative exemplar vs. prototype model fits for each subject. Fits are given in terms of negative log likelihood (i.e., model error) such that lower values reflect better model fit. Each dot represents a single subject and the trendline represents equal prototype and exemplar fit. Dots above the line have better exemplar relative to prototype model fit. Dots below the line have better prototype relative to exemplar model fit. Pie charts indicate the percentage of individual subjects classified as best fit by the prototype model (in blue), the exemplar model (in red), and those similarly fit by the two models (in grey). Model fits were computed separately for the 1st half of the learning phase (interim tests 1–2, A,D), the 2nd half of the learning phase (interim tests 3–4, B,E), and the final test (C,F). Source data for all phases can be found in Figure 4—source data 1.

Figure 4—source data 1. Behavioral model fits - all phases.

We formally compared model fits for interim tests across the first and second half of the learning phase using a repeated-measures ANOVA on raw model fits. There was a significant main effect of learning phase [F(1,28)=39.74, p<0.001, ηp2 = 0.59] with better model fits (i.e., lower error) in the second half of the learning phase (M = 5.98, SD = 5.81) compared to the first half (M = 10.64, SD = 6.72). There was also a significant main effect of model [F(1,28)=17.50, p<0.001, ηp2 = 0.39] with better fit for the prototype model (M = 7.86, SD = 5.95) compared to the exemplar model (M = 8.77, SD = 6.02). The learning phase x model interaction effect was not significant [F(1,28)=0.01, p=0.91, ηp2 = 0.001], with a similar prototype advantage in the first half (d = 0.13, CI95[0.31,1.45]) as in the second half (d = 0.16, CI95[0.22,1.65]). When we compared prototype and exemplar model fits in the final test, we again found a significant advantage for the prototype model over the exemplar model [t(28)=3.53, p=0.001, CI95[0.89, 3.39], d = 0.23]. Thus, the prototype model provided an overall better fit to behavioral responses throughout the learning phase and final test, and the effect size of the prototype advantage was largest in the final test.

fMRI

Model-based MRI

The behavioral model fitting described above maximizes the correspondence between response probabilities generated by the two models and participants’ actual patterns of responses. Once the parameters for the best fitting prototype and best fitting exemplar representations were estimated from the behavioral data, we utilized them to construct model-based fMRI predictors, one exemplar-based predictor and one prototype-based predictor for each participant. For each test item, a model prediction was computed as the similarity of the item to the underlying prototype or exemplar representation regardless of category (representational match; see Methods for details). The trial-by-trial model predictions from both models were then used for fMRI analysis to identify regions that have signal consistent with either model. Importantly, even when behavioral fits are comparable between the two models, the neural model predictions can remain dissociable as they more directly index the underlying representations that are different between the models (Mack et al., 2013). For example, the prototypes would be classified into their respective categories with high probability by either model because they are much closer to one category’s representation than the other, generating similar behavioral predictions for that trial. However, the representational match will be much higher for the prototype model than the exemplar model as the prototype is not particularly close to any old exemplars. Thus, the neural predictors can dissociate the models to a greater degree than behavioral predictions (Mack et al., 2013). Furthermore, the neural model fits can help detect evidence of both kinds of representations, even if one dominates the behavior.
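Assuming, as described above, that the representational match sums an item's similarity to the representations of both categories regardless of the item's own category, a minimal sketch of the two trial-by-trial fMRI predictors could look like this (function names and parameter values are ours, not the authors'):

```python
import numpy as np

def similarity(item, ref, weights, c):
    # Exponential-decay similarity over weighted city-block distance (r = 1).
    return np.exp(-c * np.sum(weights * np.abs(np.asarray(item) - np.asarray(ref))))

def prototype_predictor(item, prototypes, weights, c):
    # Representational match under the prototype model: summed similarity of
    # the test item to the category prototypes, regardless of category.
    return sum(similarity(item, p, weights, c) for p in prototypes)

def exemplar_predictor(item, exemplars, weights, c):
    # Representational match under the exemplar model: summed similarity of
    # the test item to all stored training exemplars, regardless of category.
    return sum(similarity(item, e, weights, c) for e in exemplars)

# Illustration: equal attention weights over 8 binary features.
w, c = np.ones(8) / 8, 4.0
proto_A, proto_B = np.ones(8), np.zeros(8)
print(prototype_predictor(proto_A, [proto_A, proto_B], w, c))  # high: exact prototype match
```

For example, a category prototype itself receives the maximal prototype-based match but only a moderate exemplar-based match (it is two features from every training exemplar), which is what allows the two neural predictors to dissociate even when the behavioral predictions agree.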

Learning phase

We first tested the degree to which prototype and exemplar information was represented across ROIs and across different points of the learning phase. Using the data from the interim generalization tests, we compared neural model fits in our six ROIs across the first and second half of the learning phase. Full ANOVA results are presented in Table 1. Figure 5 presents neural model fits for each ROI. Figure 5A represents the 1st half of the learning phase, Figure 5B represents the 2nd half of the learning phase, and Figure 5C represents fits collapsed across the entire learning phase (to illustrate the main effects of ROI and model and the ROI x model interaction).

Table 1. ANOVA results for model-based fMRI during the learning phase.
Effect                         df             F      p      ηp2
ROI                            3.4, 95.6 GG   3.90   .002   .12
Model                          1, 28          2.60   .12    .09
Learning half                  1, 28          2.18   .15    .07
ROI x Model                    2.9, 80.3 GG   5.91   .001   .17
ROI x Learning half            3.1, 86.9 GG   0.53   .67    .02
Model x Learning half          1, 28          0.09   .76    .003
ROI x Model x Learning half    3.2, 89.6 GG   2.31   .08    .08
GG = Greenhouse-Geisser corrected degrees of freedom.

Figure 5. Neural prototype and exemplar model fits.


Neural model fits for each region of interest for (A) the first half of the learning phase, (B) the second half of the learning phase, (C) the overall learning phase (averaged across the first and second half of learning), and (D) the final test. Prototype fits are in blue, exemplar fits in red. Neural model fit is the effect size: the mean/SD of ß-values within each ROI, averaged across appropriate runs. VMPFC = ventromedial prefrontal cortex, ahip = anterior hippocampus, phip = posterior hippocampus, LO = lateral occipital cortex, IFG = inferior frontal gyrus, and Lat. Par. = lateral parietal cortex. Source data for the interim tests can be found in Figure 5—source data 1 and for the final test in Figure 5—source data 2.

Figure 5—source data 1. Neural model fits - interim tests.
Figure 5—source data 2. Neural model fits - final test.
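As noted in the Figure 5 caption, the neural model fit for a region is an effect size: the mean divided by the standard deviation of the model regressor's ß-values within the ROI, averaged across runs. A minimal sketch of that computation, assuming the mean and SD are taken across the ROI's voxels (array names are illustrative):

```python
import numpy as np

def neural_model_fit(beta_maps, roi_mask):
    """Effect size of a model-based regressor within an ROI.

    beta_maps: list of 3-D arrays of ß-values (one per run) for the prototype
               or exemplar predictor.
    roi_mask:  boolean 3-D array selecting the ROI's voxels (e.g., from the
               subject's Freesurfer segmentation).
    Returns the mean/SD of ß-values across ROI voxels, averaged over runs.
    """
    per_run = []
    for betas in beta_maps:
        roi_betas = betas[roi_mask]
        per_run.append(roi_betas.mean() / roi_betas.std())
    return float(np.mean(per_run))
```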

As predicted, there was a significant ROI x Model interaction effect, indicating that there were differences across regions in the type of category information that they tracked. To understand the nature of this interaction, we computed follow-up t-tests on the neural model fits in each ROI, collapsed across the first and second half of the learning phase. Consistent with prior work (Bowman and Zeithamova, 2018), the VMPFC and anterior hippocampus (our predicted prototype regions) significantly tracked prototype information [VMPFC: t(28) = 2.86, p=0.004, CI95[µ >0.06], d = 0.75]; [anterior hippocampus: t(28) = 1.88, p=0.04, CI95[µ >0.009], d = 0.49]. Prototype correlates were numerically but not significantly stronger than exemplar correlates in both regions [VMPFC: t(28) = 1.23, p=0.11, d = 0.34, CI95[µ >−0.03]]; (anterior hippocampus: t(28) = 0.87, p=0.19, d = 0.22, CI95[µ >−0.05]). For the predicted exemplar regions, we found that both lateral parietal cortex and inferior frontal gyrus significantly tracked exemplar model predictions [lateral parietal: t(28) = 2.06, p=0.02, CI95[µ >0.02], d = 0.54]; [inferior frontal: t(28) = 2.40, p=0.01, CI95[µ >0.03], d = 0.63], with numerically positive exemplar correlates in lateral occipital cortex that were not statistically significant [t(28)=0.78, p=0.22, CI95[µ >−0.05], d = 0.20]. When comparing neural exemplar fits to neural prototype fits, there was a significant exemplar advantage in both lateral parietal cortex [t(28)=3.00, p=0.003, d = 0.71, CI95[µ >0.09]], and in inferior frontal gyrus [t(28)=2.63, p=0.01, d = 0.67, CI95[µ >0.06]], that did not reach significance in the lateral occipital cortex [t(28)=1.44, p=0.08, d = 0.36, CI95[µ >−0.06]].

As in our prior study, the posterior hippocampus showed numerically better fit of the exemplar predictor, but neither the exemplar effect [t(28)=1.88, p=0.07, CI95[−0.01,.13], d = 0.49] nor the prototype effect reached significance [t(28)=−1.14, p=0.26, CI95[−0.12,.03], d = 0.30]. Comparing the effects in the two hippocampal regions as part of a 2 (hippocampal ROI: anterior, posterior) x 2 (model: prototype, exemplar) repeated-measures ANOVA, we found a significant interaction [F(1,28)=9.04, p=0.006, ηp2 = 0.24], showing that there is a dissociation along the hippocampal long axis in the type of category information represented. Taken together, we found evidence for different types of category information represented across distinct regions of the brain.

We were also interested in whether there was a shift in representations that could be detected across learning. The only effect that included learning phase that approached significance was the three-way ROI x model x learning phase interaction, likely reflecting the more pronounced region x model differences later in learning (Figure 5A vs. Figure 5B).

Final test

Figure 5D presents neural model fits from each ROI during the final test. We tested whether the differences across ROIs identified during the learning phase were also present in the final test. As during the learning phase, we found a significant main effect of ROI [F(2.9,79.8)=9.13, p<0.001, ηp2 = 0.25, GG] and no main effect of model [F(1,28)=1.65, p=0.21, ηp2 = 0.06]. However, unlike the learning phase, we did not find a significant model x ROI interaction effect [F(3.3,91.2)=1.81, p=0.15, ηp2 = 0.06, GG]. Because this was a surprising finding, we wanted to better understand what had changed from the learning phase to the final test. Thus, although the ROI x model interaction was not significant in the final test, we computed follow-up tests on regions that had significantly tracked prototype and exemplar predictors during the learning phase. As in the learning phase, both the VMPFC and anterior hippocampus continued to significantly track prototype predictors during the final test with effect sizes similar to those observed during learning [VMPFC: t(28) = 2.83, p=0.004, CI95[µ >0.06], d = 0.74]; [anterior hippocampus: t(28) = 1.98, p=0.03, CI95[µ >0.01], d = 0.52]. Here, prototype correlates were significantly stronger than exemplar correlates in the anterior hippocampus [t(28)=2.28, p=0.02, d = 0.63, CI95[µ >0.02]] and marginally stronger in the VMPFC [t(28)=1.67, p=0.053, d = 0.46, CI95[µ >−0.03]]. However, exemplar correlates did not reach significance in any of the predicted exemplar regions (all t < 1.18, p>0.12, d < 0.31).

Discussion

In the present study, we tested whether exemplar- and prototype-based category representations could co-exist in the brain within a single task under conditions that favor both exemplar memory and prototype extraction. We found signatures of both types of representations across distinct brain regions when participants categorized items during the learning phase. Consistent with predictions based on prior studies, the ventromedial prefrontal cortex and anterior hippocampus tracked abstract prototype information, and the inferior frontal gyrus and lateral parietal cortex tracked specific exemplar information. In addition, we tested whether individuals relied on different types of representations over the course of learning. We did not find evidence of representational shifts either from specific to abstract or vice versa. Instead, results suggested that both types of representations emerged together during learning, although prototype correlates came to dominate by the final test. Together, we show that specific and abstract representations may instead exist in parallel for the same categories.

A great deal of prior work in the domain of category learning has focused on whether classification of novel category members relies on retrieval of individual category exemplars (Kruschke, 1992; Medin and Schaffer, 1978; Nosofsky, 1986; Nosofsky and Stanton, 2005; Zaki et al., 2003) or instead on abstract category prototypes (Dubé, 2019; Homa, 1973; Posner and Keele, 1968; Reed, 1972; Smith and Minda, 2002). These two representations are often pitted against one another with one declared the winner over the other, which is based largely on typical model-fitting procedures for behavioral data. Indeed, fitting exemplar and prototype models to behavioral data in the present study generally showed better fit of the prototype model over the exemplar model. However, using neuroimaging allowed us to detect both types of representations apparent across different parts of the brain. These results thus contribute to the ongoing debate about the nature of category representations in behavioral studies of categorization by showing that individuals may maintain multiple representations simultaneously even when one model shows better overall fit to behavior.

In addition to contributing novel findings to a longstanding debate in the behavioral literature, the present study also helps to resolve between prior neuroimaging studies fitting prototype and exemplar models to brain data. Specifically, two prior studies found conflicting results: one study found only exemplar representations in the brain (Mack et al., 2013) whereas another found only prototype representations (Bowman and Zeithamova, 2018). Notably the brain regions tracking exemplar predictions were different than those identified as tracking prototype predictions, showing that these studies engaged different brain systems in addition to implicating different categorization strategies. However, because the category structures, stimuli and analysis details also differed between these studies, the between-studies differences in the identified neural systems could not be uniquely attributed to the distinct category representations that participants presumably relied on. The present data newly show that neural prototype and exemplar correlates can exist not only across different task contexts but also within the same task, providing evidence that these neural differences reflect distinct category representations rather than different task details.

Moreover, our results aligned with those found separately across two studies, replicating the role of the VMPFC and anterior hippocampus in tracking prototype information (Bowman and Zeithamova, 2018) and replicating the role of inferior prefrontal and lateral parietal cortices in tracking exemplar information (Mack et al., 2013). Prior work has shown that the hippocampus and VMPFC support integration across related experiences in episodic inference tasks (for reviews, see Schlichting and Preston, 2017; Zeithamova and Bowman, 2020). We have now shown for the second time that these same regions also track prototype information during category generalization, suggesting that they may play a common role across seemingly distinct tasks. That is, integrating across experiences may not only link related elements as in episodic inference tasks, but may also serve to derive abstract information such as category prototypes. We also replicated a dissociation within the hippocampus from Bowman and Zeithamova, 2018 in which the anterior hippocampus showed significantly stronger prototype representations than the posterior hippocampus. Our findings are consistent with a proposed gradient along the hippocampal long axis, with representations becoming increasingly coarse in spatial and temporal scale moving from posterior to anterior portions of the hippocampus (Brunec et al., 2018; Poppenk et al., 2013). Lastly, we note that while the VMPFC significantly tracked prototype predictions, there was only a marginal difference between prototype and exemplar correlates in this region. Thus, it remains an open question whether representations in VMPFC are prototype specific or instead may reflect some mix of coding.

Our finding that IFG and lateral parietal cortices tracked exemplar predictions is consistent not only with prior work showing exemplar correlates in these regions during categorization (Mack et al., 2013), but also with the larger literature on their role in maintaining memory specificity. In particular, IFG is thought to play a critical role in resolving interference between similar items (Badre and Wagner, 2005; Bowman and Dennis, 2016; Jonides et al., 1998; Kuhl et al., 2007) while lateral parietal cortices often show high fidelity representations of individual items and features necessary for task performance (Kuhl and Chun, 2014; Xiao et al., 2017). The present findings support and further this prior work by showing that regions supporting memory specificity across many memory tasks may also contribute to exemplar-based concept learning.

In addition to IFG and lateral parietal cortex, we predicted that lateral occipital cortex would track exemplar information. This prediction was based both on its previously demonstrated exemplar correlates in the Mack et al. study and on evidence that representations in visual regions shift with category learning (Folstein et al., 2013; Freedman et al., 2001; Myers and Swan, 2012; Palmeri and Gauthier, 2004). Such shifts are posited to be the result of selective attention to visual features most relevant for categorization (Goldstone and Steyvers, 2001; Medin and Schaffer, 1978; Nosofsky, 1986). Consistent with the selective attention interpretation, Mack et al. showed that LO tracked similarity between items when feature weights estimated by the exemplar model were taken into account, above-and-beyond tracking physical similarity. In the present study, LO showed a pattern overall similar to that of IFG and lateral parietal cortex, but exemplar correlates did not reach significance during any phase of the experiment, providing only weak evidence for exemplar coding in this region. However, in contrast to this prior work, all stimulus features in our study were equally relevant for determining category membership. This aspect of our task may have limited the role of selective attention in the present study and thus the degree to which perceptual regions tracked category information.

In designing the present study, we aimed to increase exemplar strategy use as compared to our prior study in which the prototype model fit reliably better than the exemplar model in 73% of the sample (Bowman and Zeithamova, 2018). We included a relatively coherent category structure that was likely to promote prototype formation (Bowman and Zeithamova, 2018; Bowman and Zeithamova, 2020), but tried to balance it with an observational rather than feedback-based training task in hopes of emphasizing individual items and promoting exemplar representations. The results suggest some shift in model fits, albeit modest. The prototype strategy was still identified as dominant in the latter half of learning and the final test, but we also observed more participants who were comparably fit by both models. Moreover, we detected exemplar correlates in the brain in the present study, albeit only during the second half of the learning phase. Thus, while the behavioral shift in model fits was modest, it may have been sufficient to make exemplar representations detectable despite prototype dominance in behavior. Notably, our prior study did show some evidence of exemplar-tracking regions (including portions of LO and lateral parietal cortex) but only when we used a lenient, uncorrected threshold. This suggests that exemplar-based representations may form in the brain even though they are not immediately relevant for the task at hand.

It may be that representations form at multiple levels of specificity to promote flexibility in future decision-making because it is not always clear what aspects of current experience will become relevant (Zeithamova and Bowman, 2020). Consistent with this idea, research shows that category representations can spontaneously form alongside memory for individuals, even when task instructions emphasize distinguishing between similar individuals (Ashby et al., 2020). In the present context, accessing prototype representations may be efficient for making generalization judgments, but they cannot on their own support judgments that require discrimination between separate experiences or between members of the same category. Thus, exemplar representations may also form to support judgments requiring specificity. Precedence for co-existing representations also comes from neuroimaging studies of spatial processing (Brunec et al., 2018), episodic inference (Schlichting et al., 2015), and memory for narratives (Collin et al., 2015). These studies have all shown evidence for separate representations of individual experiences alongside representations that integrate across experiences. The present results show that these parallel representations may also be present during category learning.

While co-existing prototype and exemplar representations were clear during the learning phase of our task, they were not present during the final test phase. The VMPFC and anterior hippocampus continued to track prototypes during the final test, but exemplar-tracking regions no longer emerged. The lack of exemplar correlates in the brain was matched by a weaker exemplar effect in behavior. While we observed a reliable advantage for old relative to new items matched for distance during the interim tests, the old advantage was no longer significant in the final test. The effect size for the prototype advantage in model fits was also larger in the final test than in the learning phase. This finding was unexpected, but we offer several possibilities that can be investigated further in future research. One possibility is that exemplar representations were weakened in the absence of further observational study runs that had boosted exemplars in earlier phases. Similarly, framing it as a ‘final test’ may have switched participants from trying to gather multiple kinds of information that might improve later performance (i.e., both exemplar and prototype) to simply deploying the strongest representation that they had, which seems to have been prototype-based. Alternatively, there may be real, non-linear dynamics in how prototype and exemplar representations develop. For example, exemplar representations may increase up to some threshold while individuals are encoding these complex stimuli, then decrease as a result of repetition suppression (Desimone, 1996; Gonsalves et al., 2005; Henson et al., 2002) once individual items are sufficiently well represented. Of course, future studies will be needed to both replicate this finding and directly test these differing possibilities.

In addition to identifying multiple, co-existing types of category representations during learning, we sought to test whether there were representational shifts as category knowledge developed. While there was a prototype advantage in the brain during the final test, we found no evidence for a shift between exemplar and prototype representations over the course of learning. Both prototype and exemplar correlates showed numerical increases across learning in brain and behavior, suggesting strengthening of both types of representations in parallel. Prior work has shown that individuals may use both rules and memory for individual exemplars throughout learning without strong shifts from one to the other (Thibaut et al., 2018). Others have suggested that there may be representational shifts during category learning, but rather than shifting between exemplar and prototype representations, early learning may be focused on detecting simple rules and testing multiple hypotheses (Johansen and Palmeri, 2002; Nosofsky et al., 1994; Paniukov and Davis, 2018), whereas similarity-based representations such as prototype and exemplar representations may develop later in learning (Johansen and Palmeri, 2002). Our findings are consistent with this framework, with strong prototype and exemplar representations emerging across distinct regions primarily in the second half of learning. Our results are also consistent with recent neuroimaging studies showing multiple memory representations forming in parallel without need for competition (Collin et al., 2015; Schlichting et al., 2015), potentially allowing individuals to flexibly use prior experience based on current decision-making demands.

Conclusion

In the present study, we found initial evidence that multiple types of category representations may co-exist across distinct brain regions within the same categorization task. The regions identified as prototype-tracking (anterior hippocampus and VMPFC) and exemplar-tracking (IFG and lateral parietal cortex) in the present study align with prior studies that have found only one or the other. These findings shed light on the multiple memory systems that contribute to concept representation, provide novel evidence that the brain can flexibly represent information at different levels of specificity, and suggest that these representations may not always compete during learning.

Materials and methods

Participants

Forty volunteers were recruited from the University of Oregon and surrounding community and were financially compensated for their research participation. This sample size was determined based on effect sizes for neural prototype-tracking and exemplar-tracking regions estimated from prior studies (Bowman and Zeithamova, 2018; Mack et al., 2013), allowing for detection of the minimum effect size (prototype correlates in anterior hippocampus, d = 0.43 with n = 29) using a one-tailed, one-sample t-test with at least 80% power. All participants provided written informed consent, and Research Compliance Services at the University of Oregon approved all experimental procedures. All participants were right-handed, native English speakers and were screened for neurological conditions, medications known to affect brain function, and contraindications for MRI.

A total of 11 subjects were excluded. Six subjects were excluded prior to fMRI analyses: three subjects for chance performance (<0.6 by the end of the training phase and/or <0.6 for trained items in the final test), one subject for excessive motion (>1.5 mm within multiple runs), and two subjects for failure to complete all phases. An additional five subjects were excluded for high correlation between fMRI regressors that precluded model-based fMRI analyses of the first or second half of learning phase: three subjects had r > 0.9 for prototype and exemplar regressors and two subjects had a rank deficient design matrix driven by a lack of trial-by-trial variability in the exemplar predictor. In all five participants, attentional weight parameter estimates from both models indicated that most stimulus dimensions were ignored, which in some cases may lead to a lack of variability in model fits. This left 29 subjects (age: M = 21.9 years, SD = 3.3 years, range 18–30 years; 19 females) reported in all analyses. Additionally, we excluded single runs from three subjects who had excessive motion limited to that single run.

Materials

Stimuli consisted of cartoon animals that differed on eight binary features: neck (short vs. long), tail (straight vs. curled), foot shape (claws vs. round), snout (rounded vs. pig), head (ears vs. antennae), color (purple vs. red), body shape (angled vs. round), and design on the body (polka dots vs. stripes) (Bozoki et al., 2006; Zeithamova et al., 2008; available for download osf.io/8bph2). The two possible versions of all features can be seen across the two prototypes shown in Figure 1C. For each participant, the stimulus that served as the prototype of category A was randomly selected from four possible stimuli and all other stimuli were re-coded in reference to that prototype. The stimulus that shared no features with the category A prototype served as the category B prototype. Physical distance between any pair of stimuli was defined by their number of differing features. Category A stimuli were those that shared more features with the category A prototype than the category B prototype. Category B stimuli were those that shared more features with the category B prototype than the category A prototype. Stimuli equidistant from the two prototypes were not used in the study.

Training set

The training set included four stimuli per category, each differing from their category prototype by two features (see Table 2 for training set structure). The general structure of the training set with regard to the category prototypes was the same across subjects, but the exact stimuli differed based on the prototypes selected for a given participant. The training set structure was selected to generate many pairs of training items that were four features apart both within the same category and across the two categories. This design ensured that categories could not be learned via unsupervised clustering based on similarity of exemplars alone.

Table 2. Dimension values for example prototypes and training stimuli from each category.
Stimulus      Dimension:  1  2  3  4  5  6  7  8
Prototype A               1  1  1  1  1  1  1  1
A1                        1  1  1  1  1  1  0  0
A2                        0  1  1  1  0  1  1  1
A3                        1  0  1  0  1  1  1  1
A4                        1  1  0  1  1  0  1  1
Prototype B               0  0  0  0  0  0  0  0
B1                        0  0  0  0  0  0  1  1
B2                        1  0  0  0  1  0  0  0
B3                        0  1  0  1  0  0  0  0
B4                        0  0  1  0  0  1  0  0
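As a concrete check of this design property, the sketch below computes pairwise feature distances for the example Table 2 stimuli (NumPy code written for illustration, not taken from the authors' materials): every training item is two features from its own prototype, all within-category training pairs are four features apart, and most across-category pairs are also four features apart, so exemplar similarity alone does not reveal the category boundary.

```python
import numpy as np
from itertools import combinations, product

proto_A = np.ones(8, dtype=int)
train_A = np.array([[1,1,1,1,1,1,0,0],
                    [0,1,1,1,0,1,1,1],
                    [1,0,1,0,1,1,1,1],
                    [1,1,0,1,1,0,1,1]])
train_B = 1 - train_A  # Table 2 category B items are the feature-wise complements

def dist(x, y):
    # Physical distance = number of binary features on which two stimuli differ.
    return int(np.sum(x != y))

print([dist(item, proto_A) for item in train_A])                 # [2, 2, 2, 2]
print([dist(a, b) for a, b in combinations(train_A, 2)])         # all six within-category pairs: 4
print(sorted(dist(a, b) for a, b in product(train_A, train_B)))  # 12 of 16 across-category pairs: 4
```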

Interim test sets

Stimuli in the interim generalization tests included 22 unique stimuli: the eight training stimuli, both prototypes, and two new stimuli at each distance (1, 2, 3, 5, 6, 7) from the category A prototype. Distance 1, 2, and 3 items were scored as correct when participants labeled them as category A members. Items at distance 5, 6, and 7 from the category A prototype (thus distance 3, 2, and 1 from the B prototype) were scored as correct when participants labeled them as category B members. While new unique distance 1–3, 5–7 items were selected for each interim test set, the old training stimuli and the prototypes were necessarily the same for each test.

Final test set

Stimuli in the final test included 58 unique stimuli. Forty-eight of those consisted of 8 new stimuli selected at each distance 1–3, 5–7 from the category A prototype, each presented once during the final test. These new items were distinct from those used in either the training set or the interim test sets with the exception of the items that differed by only one feature from their respective prototypes. Because there are only 8 distance one items for each prototype, they were all used as part of the interim test sets before being used again in the final test set. The final test also included the eight training stimuli and the two prototypes, each presented twice in this phase (Bowman and Zeithamova, 2018; Kéri et al., 2001; Smith et al., 2008). The stimulus structure enabled dissociable behavioral predictions from the two models. While stimuli near the prototypes also tend to be near old exemplars, the correlation is imperfect. For example, when attention is equally distributed across features, the prototype model would make the same response probability prediction for all distance three items. However, some of those distance three items were near an old exemplar while others were farther from all old exemplars, creating distinct exemplar model predictions. Because we varied the test stimuli to include all distances from the prototypes, and because within each distance to the prototype there was variability in how far the stimuli are from the old exemplars, the structure was set up to facilitate dissociation between the model predictions.
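To illustrate this point with the example Table 2 training items, the two hypothetical test items below are both three features from prototype A, yet one is only one feature from a stored category A exemplar while the other is at least three features from every exemplar, so the exemplar model (but not an equal-attention prototype model) treats them differently.

```python
import numpy as np

proto_A = np.ones(8, dtype=int)
train_A = np.array([[1,1,1,1,1,1,0,0],
                    [0,1,1,1,0,1,1,1],
                    [1,0,1,0,1,1,1,1],
                    [1,1,0,1,1,0,1,1]])

def dist(x, y):
    return int(np.sum(np.asarray(x) != np.asarray(y)))

item_near_exemplar = [1,1,1,1,1,0,0,0]   # distance 3 from prototype A
item_far_exemplars = [0,0,0,1,1,1,1,1]   # also distance 3 from prototype A
for item in (item_near_exemplar, item_far_exemplars):
    print(dist(item, proto_A), [dist(item, ex) for ex in train_A])
# Output: 3 [1, 5, 5, 3]  -> one training exemplar is only one feature away
#         3 [5, 3, 3, 3]  -> no training exemplar is closer than three features
```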

Experimental design

The study consisted of two sessions: one session of neuropsychological testing and one experimental session. Only results from the experimental session are reported in the present manuscript. In the experimental session, subjects underwent four cycles of observational study and interim generalization tests (Figure 1D), followed by a final generalization test (Figure 1E), all while undergoing fMRI.

In each run of observational study, participants were shown individual animals on the screen with a species label (Febbles and Badoons) and were told to try to figure out what makes some animals Febbles and others Badoons without making any overt responses. Each stimulus was presented on the screen for 5 s followed by a 7 s ITI. Within each study run, participants viewed the training examples three times in a random order. After two study runs, participants completed an interim generalization test. Participants were shown cartoon animals without their labels and classified them into the two species without feedback. Each test stimulus was presented for 5 s during which time they could make their response, followed by a 7 s ITI. After four study-test cycles, participants completed a final categorization test, split across four runs. As in the interim tests, participants were asked to categorize animals into one of two imaginary species (Febbles and Badoons) using the same button press while the stimulus was on the screen. Following the MRI session, subjects were asked about the strategies they used to learn the categories, if any, and then indicated which version of each feature they thought was most typical for each category. Lastly, subjects were verbally debriefed about the study.

fMRI Data Acquisition

Raw MRI data are available for download via OpenNeuro (openneuro.org/datasets/ds002813). Scanning was completed on a 3T Siemens MAGNETOM Skyra scanner at the University of Oregon Lewis Center for Neuroimaging using a 32-channel head coil. Head motion was minimized using foam padding. The scanning session started with a localizer scan followed by a standard high-resolution T1-weighted MPRAGE anatomical image (TR 2500 ms; TE 3.43 ms; TI 1100 ms; flip angle 7°; matrix size 256 × 256; 176 contiguous slices; FOV 256 mm; slice thickness 1 mm; voxel size 1.0 × 1.0 × 1.0 mm; GRAPPA factor 2). Then, a custom anatomical T2 coronal image (TR 13,520 ms; TE 88 ms; flip angle 150°; matrix size 512 × 512; 65 contiguous slices oriented perpendicularly to the main axis of the hippocampus; interleaved acquisition; FOV 220 mm; voxel size 0.4 × 0.4 × 2 mm; GRAPPA factor 2) was collected. This was followed by 16 functional runs using a multiband gradient echo pulse sequence [TR 2000 ms; TE 26 ms; flip angle 90°; matrix size 100 × 100; 72 contiguous slices oriented 15° off the anterior commissure–posterior commissure line to reduce prefrontal signal dropout; interleaved acquisition; FOV 200 mm; voxel size 2.0 × 2.0 × 2.0 mm; generalized autocalibrating partially parallel acquisitions (GRAPPA) factor 2]. One hundred and forty-five volumes were collected for each observational study run, 133 volumes for each interim test run, and 103 volumes for each final test run.

Behavioral accuracies

Interim tests

To assess changes in generalization accuracy across train-test cycles, we computed a 4 (interim test run: 1–4) x 4 (distance: 0–3) repeated-measures ANOVA on accuracy for new items only. We were particularly interested in linear effects of interim test run and distance. We also tested whether accuracy across training differed for the training items themselves versus new items at the same distance from their prototypes, which indexes how much participants learn about specific items above and beyond what would be expected based on their typicality. We thus computed a 4 (interim test run: 1–4) x 2 (item type: training, new) repeated-measures ANOVA on accuracies for distance two items.

Final test

First, to assess the effect of item typicality, we computed a one-way repeated-measures ANOVA on final test accuracy (collapsed across runs) for new items at distances 0–3 from either prototype. Second, we assessed whether there was an old-item advantage by comparing accuracy for training items and new items of equal distance from the prototypes (distance 2) using a paired-samples t-test. For all analyses (including the fMRI analyses described below), a Greenhouse-Geisser correction was applied whenever the assumption of sphericity was violated, denoted by 'GG' in the results.
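As an illustration, a minimal sketch of these two analyses, assuming the trial data have already been aggregated into per-subject accuracies; the DataFrame and array names are placeholders, and pingouin/SciPy stand in for whatever statistics software was actually used:

```python
# Hedged sketch of the final-test behavioral analyses, not the authors' code.
import pingouin as pg
from scipy import stats

# `df` is assumed to be long-format with columns: subject, distance (0-3), accuracy
aov = pg.rm_anova(data=df, dv='accuracy', within='distance',
                  subject='subject', correction=True)  # GG correction when sphericity is violated
print(aov)

# Old-item advantage: per-subject accuracy for training items vs. new distance-2 items
t_stat, p_val = stats.ttest_rel(acc_training_d2, acc_new_d2)
```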

Prototype and exemplar model fitting

As no responses were made during the study runs, prototype and exemplar models were only fit to test runs – interim and final tests. As the number of trials in each interim test was kept low to minimize exposure to non-training items during the learning phase, we concatenated across interim tests 1 and 2 and across interim tests 3 and 4 to obtain more robust model fit estimates for the first half vs. second half of the learning phase. Model fits for the final test were computed across all four runs combined. Each model was fit to trial-by-trial data in individual participants.

Prototype similarity

As in prior studies (Bowman and Zeithamova, 2018; Maddox et al., 2011; Minda and Smith, 2001), the similarity of each test stimulus to each prototype was computed, assuming that perceptual similarity is an exponential decay function of physical distance (Shepard, 1957) and taking into account potential differences in attention to individual features. Formally, similarity between the test stimulus and the prototypes was computed as follows:

$$S_A(x)=\exp\left[-c\left(\sum_{i=1}^{m} w_i\,\lvert x_i-\mathrm{protoA}_i\rvert^{r}\right)^{1/r}\right] \tag{1}$$

where S_A(x) is the similarity of item x to category A, x_i is the value of item x on the i-th of its m binary dimensions (m = 8 in this study), protoA_i is the value of the category A prototype on that dimension, and r is the distance metric (fixed at 1, the city-block metric, for these binary-valued dimensions). Parameters estimated from each participant's pattern of behavioral responses were w (a vector of eight attention weights, one per stimulus feature, constrained to sum to 1) and c (sensitivity: the rate at which similarity declines with physical distance, constrained to lie between 0 and 100).
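A minimal re-implementation sketch of Equation 1, written in Python rather than the authors' MATLAB and with illustrative variable names:

```python
import numpy as np

def prototype_similarity(x, proto, w, c, r=1.0):
    """Similarity of item x to a category prototype (Equation 1).

    x, proto: binary feature vectors (NumPy arrays); w: attention weights
    summing to 1; c: sensitivity; r: distance metric (1 = city-block).
    """
    # Weighted city-block distance between the item and the prototype
    dist = np.sum(w * np.abs(x - proto) ** r) ** (1.0 / r)
    # Exponential decay of similarity with distance (Shepard, 1957)
    return np.exp(-c * dist)
```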

Exemplar similarity

Exemplar models assume that categories are represented by their individual exemplars and that test items are classified into the category with the highest summed similarity across category exemplars (Figure 1A). As in the prototype model, a nonlinear exponential decay function transforms physical distance into subjective similarity, based on research on how perceived similarity relates to physical similarity (Shepard, 1957). The nonlinearity weights the most similar exemplars more heavily than the least similar ones: the similarity value for two items that are physically one feature apart is more than twice the similarity value for two items that are two features apart. Summing similarity across all exemplars within a category allows multiple highly similar exemplars to inform the decision, so the model generates different predictions when there is a single close exemplar versus multiple close exemplars. Together, this means that the most similar training exemplars drive the summed similarity metric, but exemplars beyond the closest one still differentiate the predictions. This is canonically how an item's similarity to each category is computed in exemplar models (Nosofsky, 1987; Zaki et al., 2003). Formally, similarity of each test stimulus to the exemplars of each category was computed as follows:

$$S_A(x)=\sum_{y\in A}\exp\left[-c\left(\sum_{i=1}^{m} w_i\,\lvert x_i-y_i\rvert^{r}\right)^{1/r}\right] \tag{2}$$

where y represents the individual training stimuli from category A, and the remaining notation and parameters are as in Equation 1.
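Continuing the sketch above, Equation 2 simply sums the same exponential-decay kernel over a category's training exemplars:

```python
def exemplar_similarity(x, exemplars, w, c, r=1.0):
    """Summed similarity of item x to a category's training exemplars (Equation 2)."""
    # `exemplars` is an iterable of feature vectors; closer exemplars dominate the sum
    return sum(prototype_similarity(x, y, w, c, r) for y in exemplars)
```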

Parameter estimation

For both models, the probability of assigning a stimulus x to category A is equal to the similarity to category A divided by the summed similarity to categories A and B, formally, as follows:

$$P(A\mid x)=\frac{S_A(x)}{S_A(x)+S_B(x)} \tag{3}$$

Using these equations, the best-fitting w1–8 (attention to each feature) and c (sensitivity) parameters were estimated from the behavioral data of each participant, separately for the first half of the learning phase, the second half of the learning phase, and the final test, and separately for the prototype and exemplar models. To estimate these parameters for a given model, the trial-by-trial predictions generated by Equation 3 were compared with the participant's actual series of responses, and model parameters were tuned to minimize the difference between predicted and observed responses. An error metric (the negative log likelihood of the entire string of responses) was computed for each model by summing the negatives of the log-transformed response probabilities, and this value was minimized by adjusting w and c using standard maximum likelihood methods implemented in MATLAB (MathWorks, Natick, MA) with the 'fminsearch' function.
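A sketch of this fitting step, using SciPy's Nelder-Mead optimizer in place of MATLAB's fminsearch; the constraint handling, response coding, and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, stimuli, responses, sim_fun, rep_a, rep_b, eps=1e-10):
    """Summed negative log likelihood of a participant's responses under one model."""
    w = np.abs(params[:8])
    w = w / w.sum()                        # attention weights constrained to sum to 1
    c = np.clip(params[8], 0.0, 100.0)     # sensitivity constrained to 0-100
    nll = 0.0
    for x, resp in zip(stimuli, responses):
        s_a = sim_fun(x, rep_a, w, c)      # similarity to the category A representation
        s_b = sim_fun(x, rep_b, w, c)      # similarity to the category B representation
        p_a = s_a / (s_a + s_b)            # Equation 3
        p_resp = p_a if resp == 'A' else 1.0 - p_a
        nll -= np.log(max(p_resp, eps))
    return nll

# Prototype-model fit; for the exemplar model, pass exemplar_similarity and the
# arrays of training exemplars instead of the prototypes.
start = np.r_[np.full(8, 1.0 / 8), 10.0]
result = minimize(neg_log_likelihood, start, method='Nelder-Mead',
                  args=(stimuli, responses, prototype_similarity, proto_a, proto_b))
```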

Group analyses

After optimization, we computed a 2 (model: prototype, exemplar) x 2 (learning phase half: 1st, 2nd) repeated-measures ANOVA on the model fit values (i.e., negative log likelihood) to determine which model provided a better fit to behavioral responses at the group level and if there were shifts across learning in which model fit best. We used a paired-samples t-test comparing model fits during the final test to determine whether the group as a whole was better fit by the prototype or exemplar model by the end of the experiment.

We also tested whether individual subjects were reliably better fit by one model or the other using a permutation analysis. For each subject in each phase of the experiment, we created a null distribution of model fits by shuffling the order of stimuli associated with the subject's actual string of responses and then fitting the prototype and exemplar models to this randomized set of response–stimulus mappings. We repeated this process 10,000 times for each subject in each phase. We first confirmed that the actual prototype and exemplar model fits were reliably better than would be expected if subjects were responding randomly by comparing these real fits to the null distributions of prototype and exemplar model fits (alpha = 0.05, one-tailed). Indeed, both the prototype and exemplar models fit reliably better than chance for all subjects in all phases of the experiment. Next, we tested whether one model reliably outperformed the other by taking the difference between the prototype and exemplar fits on each permutation, yielding a null distribution of model fit differences. We then compared the observed difference in model fits to this null distribution and determined whether the observed difference appeared with a frequency of less than 5% (alpha = 0.05, two-tailed). Using this procedure, we labeled each subject as having used a prototype strategy, an exemplar strategy, or as having fits that did not differ reliably from one another ('similar' model fits) for each phase of the experiment.
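A compact sketch of this permutation scheme, assuming a helper `fit_model` that wraps the optimization above and returns the minimized negative log likelihood, and assuming `stimuli` is a NumPy array of feature vectors (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
null_diffs = np.empty(10_000)
for i in range(10_000):
    order = rng.permutation(len(stimuli))          # shuffle the stimulus-response pairing
    nll_proto = fit_model(stimuli[order], responses, model='prototype')
    nll_exemp = fit_model(stimuli[order], responses, model='exemplar')
    null_diffs[i] = nll_proto - nll_exemp

observed = (fit_model(stimuli, responses, model='prototype') -
            fit_model(stimuli, responses, model='exemplar'))
# Two-tailed permutation test at alpha = 0.05 on the fit difference
p_value = np.mean(np.abs(null_diffs) >= abs(observed))
```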

fMRI Preprocessing

The raw data were converted from DICOM to NIfTI format using the dcm2niix tool from MRIcron (https://www.nitrc.org/projects/mricron). Functional images were skull-stripped using BET (Brain Extraction Tool), part of FSL (http://www.fmrib.ox.ac.uk/fsl). Within-run motion correction was computed using MCFLIRT in FSL to realign each volume to the middle volume of the run. Across-run motion correction was then computed using ANTS (Advanced Normalization Tools) by registering the first volume of each run to the first volume of the first functional run (i.e., the first training run). Each computed transformation was then applied to all volumes in the corresponding run. Brain-extracted and motion-corrected images from each run were entered into FEAT (fMRI Expert Analysis Tool) in FSL for high-pass temporal filtering (100 s) and spatial smoothing (4 mm FWHM kernel).

Regions of interest

Regions of interest (ROIs, Figure 2) were defined anatomically in individual participants' native space using the cortical parcellation and subcortical segmentation from Freesurfer version 6 (https://surfer.nmr.mgh.harvard.edu/) and collapsed across hemispheres to create bilateral masks. Past research has indicated that there may be a functional gradient along the hippocampal long axis, with detailed, fine-grained representations in the posterior hippocampus and increasingly coarse, generalized representations proceeding toward the anterior hippocampus (Brunec et al., 2018; Frank et al., 2019; Poppenk et al., 2013). As such, we divided the hippocampal ROI into anterior and posterior portions at the middle slice. When a participant had an odd number of hippocampal slices, the middle slice was assigned to the posterior hippocampus. Based on our prior report (Bowman and Zeithamova, 2018), we expected the anterior portion of the hippocampus to track prototype predictors, together with VMPFC (medial orbitofrontal label in Freesurfer). Based on the prior study by Mack et al., 2013, we expected lateral occipital cortex, inferior frontal gyrus (combination of the pars opercularis, pars orbitalis, and pars triangularis Freesurfer labels), and lateral parietal cortex (combination of the inferior parietal and superior parietal Freesurfer labels) to track exemplar predictors. The posterior hippocampus was also included as an ROI to test for an anterior/posterior dissociation within the hippocampus. While one might expect the posterior hippocampus to track exemplar predictors based on the aforementioned functional gradient, our prior report (Bowman and Zeithamova, 2018) found only a numeric trend in this direction, and Mack et al., 2013 did not report any hippocampal findings despite significant exemplar correlates found in the cortex. Thus, we did not have strong predictions regarding the posterior hippocampus, other than that it would be distinct from the anterior hippocampus.

Model-based fMRI analyses

fMRI data were modeled using a general linear model (GLM). Three task-based regressors were included in the GLM: one for all trial onsets, one modulating each trial by the prototype model predictions, and one modulating each trial by the exemplar model predictions. Events were modeled with a duration of 5 s, the fixed length of the stimulus presentation. Onsets were then convolved with the canonical hemodynamic response function as implemented in FSL (a gamma function with a phase of 0 s, an SD of 3 s, and a mean lag of 6 s). The six standard timepoint-by-timepoint motion parameters were included as regressors of no interest.

The regressor for all trial onsets was included to account for activation associated with performing a categorization task generally but not tracking either model specifically. The modulation values for each model were computed as the summed similarity across categories A and B (the denominator of Equation 3) generated under each model's assumptions (from Equations 1 and 2). This summed similarity metric indexes how similar the current item is to the existing category representations as a whole (regardless of which category it is closer to) and has been used in prior studies to identify regions that contain such category representations (Bowman and Zeithamova, 2018; Davis and Poldrack, 2014; Mack et al., 2013). Correlations between prototype and exemplar summed similarity values ranged from r = −0.73 to 0.82 for included subjects, with a mean absolute value of r = 0.32. The vast majority (80%) of included runs had prototype–exemplar correlations between ±0.5. To account for any shared variance between the regressors, we included both model predictors in the same GLM. We verified that the pattern of results remained the same when analyses were limited to participants with absolute correlations r < 0.5 in all runs, with most correlations being quite small.
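For concreteness, a sketch of how the trial-wise modulators and the per-run collinearity check could be computed from the fitted models, reusing the similarity functions sketched above; `run_stimuli`, `train_a`, `train_b`, `w`, and `c` are placeholder names for the run's test items, the training exemplars of each category, and a participant's fitted parameters:

```python
import numpy as np

# Summed similarity across both categories on each trial (denominator of Equation 3)
proto_mod = np.array([prototype_similarity(x, proto_a, w, c) +
                      prototype_similarity(x, proto_b, w, c) for x in run_stimuli])
exemp_mod = np.array([exemplar_similarity(x, train_a, w, c) +
                      exemplar_similarity(x, train_b, w, c) for x in run_stimuli])

# Correlation between the two parametric modulators for this run
r = np.corrcoef(proto_mod, exemp_mod)[0, 1]
```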

For region-of-interest analyses, we obtained an estimate of how much the BOLD signal in each region tracked each model predictor by dividing the mean ROI parameter estimate by the standard deviation of the parameter estimates (i.e., computing an effect size measure). Normalizing the beta values by the error of the estimate down-weights values associated with large uncertainty, similar to how lower-level estimates are used in group analyses as implemented in FSL (Smith et al., 2004). These normalized beta values were then averaged across the appropriate runs (interim tests 1–2, interim tests 3–4, all four runs of the final test) and submitted to group analyses.
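Under one reading of this normalization (a mean beta scaled by the variability of its estimate, then averaged over the relevant runs), the computation reduces to something like the following sketch; the input arrays are assumed to have been extracted from the run-level GLMs.

```python
import numpy as np

# beta_mean_per_run, beta_sd_per_run: mean ROI parameter estimate and the
# standard deviation of that estimate for each run of interest (assumed inputs)
normalized = [m / sd for m, sd in zip(beta_mean_per_run, beta_sd_per_run)]
roi_effect = np.mean(normalized)   # e.g., averaged over interim test runs 1-2
```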

We tested whether prototype and exemplar correlates emerged across different regions and/or at different points during the learning phase. To do so, we computed a 2 (model: prototype, exemplar) x 2 (learning phase: 1st half, 2nd half) x 6 (ROI: VMPFC, anterior hippocampus, posterior hippocampus, lateral occipital, lateral prefrontal, and lateral parietal cortices) repeated-measures ANOVA on parameter estimates from the interim test runs. We were interested in a potential model x ROI interaction effect, indicating differences across brain regions in the type of category information represented. Following any significant interaction effect, we computed one-sample t-tests to determine whether each region significantly tracked a given model and paired-samples t-tests to determine whether the region tracked one model reliably better than the other. Given a priori expectations about the nature of these effects, we computed one-tailed tests only on the effects of interest: for example, in hypothesized prototype-tracking ROIs (anterior hippocampus and VMPFC), we computed one-sample t-tests comparing prototype effects to zero and paired-samples t-tests testing whether prototype correlates were stronger than exemplar correlates. We followed a similar procedure in hypothesized exemplar-tracking ROIs (inferior frontal gyrus, lateral parietal cortex, lateral occipital cortex). We were also interested in potential interactions with learning phase, which would indicate a shift in category representations across learning. Following any such interaction, follow-up ANOVAs or t-tests were performed to better understand the drivers of the effect.

We next tested ROI differences in the final generalization phase. To do so, we computed a 2 (model: prototype, exemplar) x 6 (ROI: see above) repeated-measures ANOVA on parameter estimates from the final generalization test. We were particularly interested in the model x ROI interaction effect, which would indicate that regions differ in which model they tracked. Because each participant’s neural model fit is inherently dependent on their behavioral model fit, we focused on group-average analyses and did not perform any brain-behavior individual differences analyses.

Acknowledgements

Funding for this work was provided in part by the National Institute of Neurological Disorders and Stroke Grant R01-NS112366 (DZ), National Institute on Aging Grant F32-AG054204 (CRB), and Lewis Family Endowment, which supports the Robert and Beverly Lewis Center for Neuroimaging at the University of Oregon (DZ).

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Caitlin R Bowman, Email: cbowman@uoregon.edu.

Dagmar Zeithamova, Email: dasa@uoregon.edu.

Timothy E Behrens, University of Oxford, United Kingdom.

Morgan Barense, University of Toronto, Canada.

Funding Information

This paper was supported by the following grants:

  • National Institute on Aging F32-AG-054204 to Caitlin R Bowman.

  • National Institute of Neurological Disorders and Stroke R01-NS112366 to Dasa Zeithamova.

  • University of Oregon Robert and Beverly Lewis Center for Neuroimaging to Dagmar Zeithamova.

Additional information

Competing interests

No competing interests declared.

Author contributions

Caitlin R Bowman, Conceptualization, Resources, Data curation, Software, Formal analysis, Supervision, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration.

Takako Iwashita, Resources, Investigation, Project administration, Writing - review and editing.

Dagmar Zeithamova, Conceptualization, Resources, Supervision, Funding acquisition, Methodology, Writing - review and editing.

Ethics

Human subjects: All participants provided written informed consent, and Research Compliance Services at the University of Oregon approved all experimental procedures (approval code 10162014.010).

Additional files

Transparent reporting form

Data availability

Raw MRI data have been deposited at openneuro.org/datasets/ds002813. Source data have been provided for Figures 3-5.

The following dataset was generated:

Bowman CR, Iwashita T, Zeithamova D. 2020. Model-based fMRI reveals co-existing specific and generalized concept representations. OpenNeuro. ds002813

References

  1. Aizenstein HJ, MacDonald AW, Stenger VA, Nebes RD, Larson JK, Ursu S, Carter CS. Complementary category learning systems identified using event-related functional MRI. Journal of Cognitive Neuroscience. 2000;12:977–987. doi: 10.1162/08989290051137512. [DOI] [PubMed] [Google Scholar]
  2. Ashby FG, Alfonso-Reese LA, Turken AU, Waldron EM. A neuropsychological theory of multiple systems in category learning. Psychological Review. 1998;105:442–481. doi: 10.1037/0033-295X.105.3.442. [DOI] [PubMed] [Google Scholar]
  3. Ashby SR, Bowman CR, Zeithamova D. Perceived similarity ratings predict generalization success after traditional category learning and a new paired-associate learning task. Psychonomic Bulletin & Review. 2020;27:791–800. doi: 10.3758/s13423-020-01754-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Badre D, Wagner AD. Frontal lobe mechanisms that resolve proactive interference. Cerebral Cortex. 2005;15:2003–2012. doi: 10.1093/cercor/bhi075. [DOI] [PubMed] [Google Scholar]
  5. Bowman CR, Dennis NA. The neural basis of recollection rejection: increases in Hippocampal–Prefrontal Connectivity in the Absence of a Shared Recall-to-Reject and Target Recollection Network. Journal of Cognitive Neuroscience. 2016;28:1194–1209. doi: 10.1162/jocn_a_00961. [DOI] [PubMed] [Google Scholar]
  6. Bowman CR, Zeithamova D. Abstract memory representations in the ventromedial prefrontal cortex and Hippocampus support concept generalization. The Journal of Neuroscience. 2018;38:2605–2614. doi: 10.1523/JNEUROSCI.2811-17.2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bowman CR, Zeithamova D. Training set coherence and set size effects on concept generalization and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2020;46:1442–1464. doi: 10.1037/xlm0000824. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bozoki A, Grossman M, Smith EE. Can patients with Alzheimer's disease learn a category implicitly? Neuropsychologia. 2006;44:816–827. doi: 10.1016/j.neuropsychologia.2005.08.001. [DOI] [PubMed] [Google Scholar]
  9. Bransford JD, Johnson MK. Contextual prerequisites for understanding: some investigations of comprehension and recall. Journal of Verbal Learning and Verbal Behavior. 1972;11:717–726. doi: 10.1016/S0022-5371(72)80006-9. [DOI] [Google Scholar]
  10. Brunec IK, Bellana B, Ozubko JD, Man V, Robin J, Liu ZX, Grady C, Rosenbaum RS, Winocur G, Barense MD, Moscovitch M. Multiple scales of representation along the hippocampal anteroposterior Axis in humans. Current Biology. 2018;28:2129–2135. doi: 10.1016/j.cub.2018.05.016. [DOI] [PubMed] [Google Scholar]
  11. Cincotta CM, Seger CA. Dissociation between striatal regions while learning to categorize via feedback and via observation. Journal of Cognitive Neuroscience. 2007;19:249–265. doi: 10.1162/jocn.2007.19.2.249. [DOI] [PubMed] [Google Scholar]
  12. Collin SH, Milivojevic B, Doeller CF. Memory hierarchies map onto the hippocampal long Axis in humans. Nature Neuroscience. 2015;18:1562–1564. doi: 10.1038/nn.4138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Davis T, Love BC, Preston AR. Learning the exception to the rule: model-based fMRI reveals specialized representations for surprising category members. Cerebral Cortex. 2012;22:260–273. doi: 10.1093/cercor/bhr036. [DOI] [PubMed] [Google Scholar]
  14. Davis T, Poldrack RA. Quantifying the internal structure of categories using a neural typicality measure. Cerebral Cortex. 2014;24:1720–1737. doi: 10.1093/cercor/bht014. [DOI] [PubMed] [Google Scholar]
  15. Desimone R. Neural mechanisms for visual memory and their role in attention. PNAS. 1996;93:13494–13499. doi: 10.1073/pnas.93.24.13494. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Dubé C. Central tendency representation and exemplar matching in visual short-term memory. Memory & Cognition. 2019;47:589–602. doi: 10.3758/s13421-019-00900-0. [DOI] [PubMed] [Google Scholar]
  17. Ell SW, Weinstein A, Ivry RB. Rule-based categorization deficits in focal basal ganglia lesion and Parkinson's disease patients. Neuropsychologia. 2010;48:2974–2986. doi: 10.1016/j.neuropsychologia.2010.06.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Folstein JR, Palmeri TJ, Gauthier I. Category learning increases discriminability of relevant object dimensions in visual cortex. Cerebral Cortex. 2013;23:814–823. doi: 10.1093/cercor/bhs067. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Frank LE, Bowman CR, Zeithamova D. Differential functional connectivity along the long Axis of the Hippocampus aligns with differential role in memory specificity and generalization. Journal of Cognitive Neuroscience. 2019;31:1958–1975. doi: 10.1162/jocn_a_01457. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Freedman DJ, Riesenhuber M, Poggio T, Miller EK. Categorical representation of visual stimuli in the primate prefrontal cortex. Science. 2001;291:312–316. doi: 10.1126/science.291.5502.312. [DOI] [PubMed] [Google Scholar]
  21. Goldstone RL, Steyvers M. The sensitization and differentiation of dimensions during category learning. Journal of Experimental Psychology: General. 2001;130:116–139. doi: 10.1037/0096-3445.130.1.116. [DOI] [PubMed] [Google Scholar]
  22. Gonsalves BD, Kahn I, Curran T, Norman KA, Wagner AD. Memory strength and repetition suppression: multimodal imaging of medial temporal cortical contributions to recognition. Neuron. 2005;47:751–761. doi: 10.1016/j.neuron.2005.07.013. [DOI] [PubMed] [Google Scholar]
  23. Heindel WC, Festa EK, Ott BR, Landy KM, Salmon DP. Prototype learning and dissociable categorization systems in Alzheimer's disease. Neuropsychologia. 2013;51:1699–1708. doi: 10.1016/j.neuropsychologia.2013.06.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Henson RN, Shallice T, Gorno-Tempini ML, Dolan RJ. Face repetition effects in implicit and explicit memory tests as measured by fMRI. Cerebral Cortex. 2002;12:178–186. doi: 10.1093/cercor/12.2.178. [DOI] [PubMed] [Google Scholar]
  25. Hintzman DL. "Schema abstraction" in a multiple-trace memory model. Psychological Review. 1986;93:411–428. doi: 10.1037/0033-295X.93.4.411. [DOI] [Google Scholar]
  26. Homa D. Prototype abstraction and classification of new instances as a function of number of instances defining the prototype. Journal of Experimental Psychology. 1973;101:116–122. doi: 10.1037/h0035772. [DOI] [Google Scholar]
  27. Johansen MK, Palmeri TJ. Are there representational shifts during category learning? Cognitive Psychology. 2002;45:482–553. doi: 10.1016/S0010-0285(02)00505-4. [DOI] [PubMed] [Google Scholar]
  28. Jonides J, Smith EE, Marshuetz C, Koeppe RA, Reuter-Lorenz PA. Inhibition in verbal working memory revealed by brain activation. PNAS. 1998;95:8410–8413. doi: 10.1073/pnas.95.14.8410. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Kéri S, Kelemen O, Benedek G, Janka Z. Intact prototype learning in schizophrenia. Schizophrenia Research. 2001;52:261–264. doi: 10.1016/S0920-9964(00)00092-X. [DOI] [PubMed] [Google Scholar]
  30. Koenig P, Smith EE, Troiani V, Anderson C, Moore P, Grossman M. Medial temporal lobe involvement in an implicit memory task: evidence of collaborating implicit and explicit memory systems from FMRI and Alzheimer's disease. Cerebral Cortex. 2008;18:2831–2843. doi: 10.1093/cercor/bhn043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Koster R, Chadwick MJ, Chen Y, Berron D, Banino A, Düzel E, Hassabis D, Kumaran D. Big-Loop recurrence within the hippocampal system supports integration of information across episodes. Neuron. 2018;99:1342–1354. doi: 10.1016/j.neuron.2018.08.009. [DOI] [PubMed] [Google Scholar]
  32. Kruschke JK. ALCOVE: an exemplar-based connectionist model of category learning. Psychological Review. 1992;99:22–44. doi: 10.1037/0033-295X.99.1.22. [DOI] [PubMed] [Google Scholar]
  33. Kuhl BA, Dudukovic NM, Kahn I, Wagner AD. Decreased demands on cognitive control reveal the neural processing benefits of forgetting. Nature Neuroscience. 2007;10:908–914. doi: 10.1038/nn1918. [DOI] [PubMed] [Google Scholar]
  34. Kuhl BA, Chun MM. Successful remembering elicits event-specific activity patterns in lateral parietal cortex. Journal of Neuroscience. 2014;34:8051–8060. doi: 10.1523/JNEUROSCI.4328-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Lech RK, Güntürkün O, Suchan B. An interplay of fusiform gyrus and Hippocampus enables prototype- and exemplar-based category learning. Behavioural Brain Research. 2016;311:239–246. doi: 10.1016/j.bbr.2016.05.049. [DOI] [PubMed] [Google Scholar]
  36. Mack ML, Preston AR, Love BC. Decoding the brain's algorithm for categorization from its neural implementation. Current Biology. 2013;23:2023–2027. doi: 10.1016/j.cub.2013.08.035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Maddox WT, Glass BD, Zeithamova D, Savarie ZR, Bowen C, Matthews MD, Schnyer DM. The effects of sleep deprivation on dissociable prototype learning systems. Sleep. 2011;34:253–260. doi: 10.1093/sleep/34.3.253. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. McClelland JL, McNaughton BL, O'Reilly RC. Why there are complementary learning systems in the Hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review. 1995;102:419–457. doi: 10.1037/0033-295X.102.3.419. [DOI] [PubMed] [Google Scholar]
  39. Medin DL, Schaffer MM. Context theory of classification learning. Psychological Review. 1978;85:207–238. doi: 10.1037/0033-295X.85.3.207. [DOI] [Google Scholar]
  40. Minda JP, Smith JD. Prototypes in category learning: the effects of category size, category structure, and stimulus complexity. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2001;27:775–799. doi: 10.1037/0278-7393.27.3.775. [DOI] [PubMed] [Google Scholar]
  41. Moscovitch M, Cabeza R, Winocur G, Nadel L. Episodic memory and beyond: the Hippocampus and neocortex in transformation. Annual Review of Psychology. 2016;67:105–134. doi: 10.1146/annurev-psych-113011-143733. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Myers EB, Swan K. Effects of category learning on neural sensitivity to non-native phonetic categories. Journal of Cognitive Neuroscience. 2012;24:1695–1708. doi: 10.1162/jocn_a_00243. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Nomura EM, Maddox WT, Filoteo JV, Ing AD, Gitelman DR, Parrish TB, Mesulam MM, Reber PJ. Neural correlates of rule-based and information-integration visual category learning. Cerebral Cortex. 2007;17:37–43. doi: 10.1093/cercor/bhj122. [DOI] [PubMed] [Google Scholar]
  44. Nosofsky RM. Attention, similarity, and the identification–categorization relationship. Journal of Experimental Psychology: General. 1986;115:39–57. doi: 10.1037/0096-3445.115.1.39. [DOI] [PubMed] [Google Scholar]
  45. Nosofsky RM. Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1987;13:87–108. doi: 10.1037/0278-7393.13.1.87. [DOI] [PubMed] [Google Scholar]
  46. Nosofsky RM. Exemplar-based accounts of relations between classification, recognition, and typicality. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1988;14:700–708. doi: 10.1037/0278-7393.14.4.700. [DOI] [Google Scholar]
  47. Nosofsky RM, Palmeri TJ, McKinley SC. Rule-plus-exception model of classification learning. Psychological Review. 1994;101:53–79. doi: 10.1037/0033-295X.101.1.53. [DOI] [PubMed] [Google Scholar]
  48. Nosofsky RM, Little DR, James TW. Activation in the neural network responsible for categorization and recognition reflects parameter changes. PNAS. 2012;109:333–338. doi: 10.1073/pnas.1111304109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Nosofsky RM, Stanton RD. Speeded classification in a probabilistic category structure: contrasting exemplar-retrieval, decision-boundary, and prototype models. Journal of Experimental Psychology: Human Perception and Performance. 2005;31:608–629. doi: 10.1037/0096-1523.31.3.608. [DOI] [PubMed] [Google Scholar]
  50. Palmeri TJ, Gauthier I. Visual object understanding. Nature Reviews Neuroscience. 2004;5:291–303. doi: 10.1038/nrn1364. [DOI] [PubMed] [Google Scholar]
  51. Paniukov D, Davis T. The evaluative role of rostrolateral prefrontal cortex in rule-based category learning. NeuroImage. 2018;166:19–31. doi: 10.1016/j.neuroimage.2017.10.057. [DOI] [PubMed] [Google Scholar]
  52. Payne JD, Schacter DL, Propper RE, Huang LW, Wamsley EJ, Tucker MA, Walker MP, Stickgold R. The role of sleep in false memory formation. Neurobiology of Learning and Memory. 2009;92:327–334. doi: 10.1016/j.nlm.2009.03.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Poldrack RA, Clark J, Paré-Blagoev EJ, Shohamy D, Creso Moyano J, Myers C, Gluck MA. Interactive memory systems in the human brain. Nature. 2001;414:546–550. doi: 10.1038/35107080. [DOI] [PubMed] [Google Scholar]
  54. Poldrack RA, Packard MG. Competition among multiple memory systems: converging evidence from animal and human brain studies. Neuropsychologia. 2003;41:245–251. doi: 10.1016/S0028-3932(02)00157-4. [DOI] [PubMed] [Google Scholar]
  55. Poppenk J, Evensmoen HR, Moscovitch M, Nadel L. Long-axis specialization of the human Hippocampus. Trends in Cognitive Sciences. 2013;17:230–240. doi: 10.1016/j.tics.2013.03.005. [DOI] [PubMed] [Google Scholar]
  56. Posner MI, Keele SW. On the genesis of abstract ideas. Journal of Experimental Psychology. 1968;77:353–363. doi: 10.1037/h0025953. [DOI] [PubMed] [Google Scholar]
  57. Posner MI, Keele SW. Retention of abstract ideas. Journal of Experimental Psychology. 1970;83:304–308. doi: 10.1037/h0028558. [DOI] [PubMed] [Google Scholar]
  58. Reed SK. Pattern recognition and categorization. Cognitive Psychology. 1972;3:382–407. doi: 10.1016/0010-0285(72)90014-X. [DOI] [Google Scholar]
  59. Schapiro AC, McDevitt EA, Chen L, Norman KA, Mednick SC, Rogers TT. Sleep benefits memory for semantic category structure while preserving Exemplar-Specific information. Scientific Reports. 2017;7:5. doi: 10.1038/s41598-017-12884-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Schlichting ML, Mumford JA, Preston AR. Learning-related representational changes reveal dissociable integration and separation signatures in the Hippocampus and prefrontal cortex. Nature Communications. 2015;6:8151. doi: 10.1038/ncomms9151. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Schlichting ML, Preston AR. The hippocampus and memory integration: Building knowledge to navigate future decisions. In: Duff M. C, Hannula D. E, editors. The Hippocampus From Cells to System: Structure, Connectivity, and Functional Contributions to Memory and Flexible Cognition. Springer; 2017. pp. 405–437. [DOI] [Google Scholar]
  62. Seger CA. The roles of the caudate nucleus in human classification learning. Journal of Neuroscience. 2005;25:2941–2951. doi: 10.1523/JNEUROSCI.3401-04.2005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Shepard RN. Stimulus and response generalization: a stochastic model relating generalization to distance in psychological space. Psychometrika. 1957;22:325–345. doi: 10.1007/BF02288967. [DOI] [PubMed] [Google Scholar]
  64. Shohamy D, Wagner AD. Integrating memories in the human brain: hippocampal-midbrain encoding of overlapping events. Neuron. 2008;60:378–389. doi: 10.1016/j.neuron.2008.09.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Smith SM, Jenkinson M, Woolrich MW, Beckmann CF, Behrens TE, Johansen-Berg H, Bannister PR, De Luca M, Drobnjak I, Flitney DE, Niazy RK, Saunders J, Vickers J, Zhang Y, De Stefano N, Brady JM, Matthews PM. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage. 2004;23 Suppl 1:S208–S219. doi: 10.1016/j.neuroimage.2004.07.051. [DOI] [PubMed] [Google Scholar]
  66. Smith JD, Redford JS, Haas SM. Prototype abstraction by monkeys (Macaca mulatta) Journal of Experimental Psychology: General. 2008;137:390–401. doi: 10.1037/0096-3445.137.2.390. [DOI] [PubMed] [Google Scholar]
  67. Smith JD, Minda JP. Thirty categorization results in search of a model. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2000;26:3–27. doi: 10.1037/0278-7393.26.1.3. [DOI] [PubMed] [Google Scholar]
  68. Smith JD, Minda JP. Distinguishing prototype-based and exemplar-based processes in dot-pattern category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2002;28:800–811. doi: 10.1037/0278-7393.28.4.800. [DOI] [PubMed] [Google Scholar]
  69. Thibaut JP, Gelaes S, Murphy GL. Does practice in category learning increase rule use or exemplar use-or both? Memory & Cognition. 2018;46:530–543. doi: 10.3758/s13421-017-0782-4. [DOI] [PubMed] [Google Scholar]
  70. Tse D, Langston RF, Kakeyama M, Bethus I, Spooner PA, Wood ER, Witter MP, Morris RG. Schemas and memory consolidation. Science. 2007;316:76–82. doi: 10.1126/science.1135935. [DOI] [PubMed] [Google Scholar]
  71. van Kesteren MT, Ruiter DJ, Fernández G, Henson RN. How schema and novelty augment memory formation. Trends in Neurosciences. 2012;35:211–219. doi: 10.1016/j.tins.2012.02.001. [DOI] [PubMed] [Google Scholar]
  72. Vilberg KL, Rugg MD. Memory retrieval and the parietal cortex: a review of evidence from a dual-process perspective. Neuropsychologia. 2008;46:1787–1799. doi: 10.1016/j.neuropsychologia.2008.01.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Xiao X, Dong Q, Gao J, Men W, Poldrack RA, Xue G. Transformed neural pattern reinstatement during episodic memory retrieval. The Journal of Neuroscience. 2017;37:2986–2998. doi: 10.1523/JNEUROSCI.2324-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Zaki SR, Nosofsky RM, Stanton RD, Cohen AL. Prototype and exemplar accounts of category learning and attentional allocation: a reassessment. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2003;29:1160–1173. doi: 10.1037/0278-7393.29.6.1160. [DOI] [PubMed] [Google Scholar]
  75. Zeithamova D, Maddox WT, Schnyer DM. Dissociable prototype learning systems: evidence from brain imaging and behavior. Journal of Neuroscience. 2008;28:13194–13201. doi: 10.1523/JNEUROSCI.2915-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Zeithamova D, Dominick AL, Preston AR. Hippocampal and ventral medial prefrontal activation during retrieval-mediated learning supports novel inference. Neuron. 2012;75:168–179. doi: 10.1016/j.neuron.2012.05.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Zeithamova D, Bowman CR. Generalization and the Hippocampus: more than one story? Neurobiology of Learning and Memory. 2020;175:107317. doi: 10.1016/j.nlm.2020.107317. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision letter

Editor: Morgan Barense1
Reviewed by: Morgan Barense2, Alexa Tompary

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

The ability to form categories is critical to organizing our knowledge about the world. However, the nature by which categories are formed has been the subject of longstanding debate, with the field divided between two competing ideas. One class of theories holds that categories are represented in terms of their individual members ("exemplars"), whereas another class of theories states that categories are represented by the abstracted average of the most typical members of the category ("prototypes"). Little progress has been made on this debate; here, Bowman and colleagues show that both exemplar and prototype representations exist simultaneously in the brain, allowing for flexible knowledge at multiple levels of specificity.

Decision letter after peer review:

Thank you for submitting your article "Model-based fMRI reveals co-existing specific and generalized concept representations" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Morgan Barense as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Timothy Behrens as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Alexa Tompary (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

As the editors have judged that your manuscript is of interest, but as described below that additional experiments are required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

Bowman et al. investigate a long-standing debate in cognitive science: whether categories are represented by exemplar category members or by prototypes that are extracted from individual members. In this experiment, participants undergo fMRI as they learn to categorize stimuli that vary along 8 dimensions. Model fitting is used to characterize participants' decisions and neural responses as supported by either the prototype or exemplar model of categorization. They find that there are separate sets of regions that relate to the prototype-based and exemplar-based models, suggesting that the two can coexist as strategies employed by different systems.

The reviewers were very enthusiastic about the question and the model-based fMRI approach, and we all appreciated the attempt to reconcile disparate findings in the field. That said, there were serious concerns about some of the analyses, consistency of results, and thus the ultimate implications for the bigger picture. We thought that the paper would benefit from a more extensive comparison to the authors' past work, as well as a deeper synthesis of the broader debate in general. The methods could be better grounded in the theoretical issues at stake. The reviewers were initially split on how to proceed. However, the reviewers were unanimous in seeing great value in the theoretical question at hand, and in the opinion that model-based fMRI has a lot of potential for our field in general. Thus, we wanted to give the opportunity for the authors to revise the manuscript, with the clear advice that any revision must address these concerns if it is to be successful. Below is a synthesis of the reviewers' comments and the discussion that emerged from our consultation session.

Essential revisions:

1) There were concerns that the pattern similarity analysis had underlying flaws. Specifically, the two comparisons (of item representations and category representations) do not appear to be on equal grounding. An item will have a highly similar representation to itself because it is visually identical to itself. This reviewer felt that it was not surprising (or necessarily interesting) that there are regions that show higher item self-similarity than cross-similarity, and we would hesitate to say these regions contain a more abstracted representation of the item or exemplar versus just responding to identical visual or perceptual processing of a given image. The category representation task is much harder (showing differences based on perceptually matched but learned category distinctions), and indeed it doesn't appear to come out from these data. What might be a stronger test than looking at pattern similarities during this learning period, would be to look during the test periods. The authors mention concerns about motor responses, but those could be regressed out. It was suggested that the authors could create two representational dissimilarity matrices based on the two models (exemplar and prototype) for each stimulus. For the exemplar model, an ROI pattern would be predicted with similarity to exemplars previously shown, while for the prototype model, an ROI pattern would be predicted with similarity to the average. This should be a more direct test.

2) Although the pattern similarity analyses were not the primary analyses in the paper, there were two very serious concerns about the primary model-based fMRI analyses. These will be essential to address if a revision is to be successful:

a) How correlated are the prototype model predictions and the exemplar model predictions? It seems like they might be highly correlated (e.g., very distant items would be distant from the prototype as well as previous exemplars). If so, then the authors may need to correct for multicollinearity in the GLMs – their separate contributions cannot be interpreted if they are correlated (because their shared variance is an arbitrary split between the two).

b) The within-ROI analyses were all one-sample t-tests only for the hypothesized model in a hypothesized region of interest. However, it is equally important to know: 1) is there evidence for the other model in that region as well (e.g., is there both prototype and exemplar information in the VMPFC, and if so, how can one interpret their co-existence?) and 2) is one model stronger than the other in a region (e.g., if the VMPFC doesn't have significantly higher prototype model fit than an exemplar one, can one really say it is a prototype-specific region?)

3) We discussed the fact that the evidence the paper presents for exemplar-based processing is not strong. Behaviorally there is significantly more evidence for the prototype model, and although the exemplar correlate appears in the hypothesized regions during the 2nd phase, it does not appear in the 1st phase or the final phase. The pattern similarity results also do not directly address exemplar-based coding, for the reasons mentioned above. The above-mentioned RSA may help with this, but there was concern that the paper presents only weak evidence for the existence of an exemplar representation. On the flip side, we were concerned that the prototype model might show significantly higher fits than the exemplar model in most regions when pitted directly against it (vs. the current analysis, which just compares the hypothesized model to 0 – per Comment 2b above). As such, there was concern that the final set of results does not show a message that is consistent across timepoints, behaviour, and analyses. Pending the results of additional analyses, we felt that the conclusions should be tempered, or it may simply be the case that the results are not strong enough to warrant publication in eLife.

4) In the authors' prior paper (J Neuro 2018), participants were classified according to whether their responses were well fit by the exemplar model, prototype model, or neither. Here, these fits are displayed on a continuous scale rather than classifying participants according to their dominant strategy, and it seems that there is wide variability in these fits and with a majority of subjects either well or poorly fit by both (participants along the diagonal line).

The reviewers were curious how participants would be classified according to their prior approach, as a potential means to answer the question of whether performance in this dataset is indeed more exemplar-based than in the authors' prior dataset. We agreed that the manuscript would benefit from a justification of why the authors switched to this analysis rather than the classification approach they had previously adopted. If possible, it would be nice to show both for a more direct comparison.

5) It was unclear how a participant could be a good fit to both models, as they should predict opposing behaviors (e.g. an advantage for old items over new ones in the exemplar model versus no difference in the prototype model, when testing items 2 features away from their prototype). An example or a high-level description of what a good fit to both models may look like would be helpful here.

6) Related to the first point, the authors state that task demands and/or category structures could bias categorization to be either exemplar- or prototype-based. To get at both, the authors used a category structure that is supported mostly by a prototype model, but used a task that is meant to promote exemplar encoding. However, on average participants' decisions were still better fit by the prototype model (although again, most people fall along the diagonal). And from Figure 4, there seem to be very few participants who were better fit by the exemplar model.

With this in mind, is there any evidence that the observational task actually shifted participants to be more exemplar-based? If not, why is there evidence of exemplar representations in LO/IFG/Lateral parietal in this experiment and not in the authors' earlier paper, where ~10% of participants were better fit by an exemplar model? Are there any other differences between the two procedures or stimulus sets that could account for this? Regardless of the differences between the experiments, the notion that exemplar-based representations exist in the brain even though they are not relevant for behavior is worth engaging with in more detail in the Discussion.

7) In general, it was felt that the paper could use more depth. Many of the citations regarding the prototype vs. exemplar debate are very old (~30+ years old) and the more recent papers mentioned later focus more on modeling and neuroimaging. Could the authors describe work that consolidates this debate? Also, we felt that there could be more elaboration on the Materials and methods in the main manuscript, specifically as they relate to the theoretical questions at stake. For example, what are the general principles underlying these models and what different hypotheses do they make for this specific stimulus space?

8) In addition to engaging more with the literature, we wished for more of a discussion of the mechanistic reasons behind why these ROIs have separate sensitivity to these two models, as well as what the present results show about what underlying computations are occurring in the brain. To quote directly from our consultation session: "I think some attempt at a mechanistic explanation may also be necessary. (Though I understand this can be delicate because they may want to avoid reverse inference.) One thing that stands out to me here is that LO is claimed to be a region showing exemplar representations. LO is believed by many (most?) to be a "lower level" visual area (e.g., in comparison to PPA which may have more abstract information) that represents visual shape information. So even if LO shows an exemplar representation, this could be because the item is more similar in shape to other items that were seen (the exemplars) versus a prototype that was not seen. So, a purely low-level visual account could possibly explain these results, rather than something deeper about mental representations. Thus I am also concerned about whether these findings necessarily mean something deep reconciling this larger theoretical debate, or may reflect some comparison of low level visual features across exemplars." In short, we all agreed that a deeper discussion of the neural computations and mechanisms would improve the contribution of the current paper considerably.

9) Six participants were excluded for highly correlated prototype and exemplar regressors – can the authors provide a short explanation of what pattern of behavioral responses would give rise to this? And what is the average correlation between these regressors in included participants?

10) For behavior, the authors investigated performance as it varied with distance from prototypes. It would also be interesting to investigate how behavior varies with distance from studied exemplars.

11) Did the authors find different weighting on different manipulated features?

12) The manuscript states that for the exemplar model, "test items are classified into the category with the highest summed similarity across category exemplars." One reviewer wondered whether this is a metric that is agreed upon across the field, as they would have anticipated other (possibly non-linear) calculations. For example, could it be quantified as the maximum similarity across exemplars?

13) Is there any way to explore whether the model fits from the neural data can be used to explain variance in participants' behavioral fits? For instance, do neural signals in vmPFC and anterior hippocampus better relate to behavior relative to signals in LO, IFG and lateral parietal for participants with better prototype fits? There may not be sufficient power for this so this was suggested only as a potential exploratory analysis.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your article "Tracking prototype and exemplar representations in the brain across learning" for consideration by eLife. Your revised article has been reviewed by two peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Timothy Behrens as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Alexa Tompary (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

We all agreed that the revised paper was substantially improved. In particular, the removal of the pattern similarity analyses made the paper sharper and more straightforward, as well as eliminated the methodological concerns that were highlighted in our original reviews. The pairwise t-tests of prototype vs. exemplar fits in the ROIs has clarified the results, while still providing good support of the authors' conclusions. The relationship between prototype and exemplar models is much clearer and is thoughtfully discussed, and the added text in the Introduction and Discussion have nicely framed the paper and motivated the analyses. In general, the research reads as being much more impactful and the current work much stronger.

Although the paper is much improved, one concern still remains regarding the correlation between exemplar and prototype regressors. To quote reviewer #2 specifically:

"I'm convinced that mathematically, the regressors are not highly correlated and the explanation in the authors' response regarding the most distant items is helpful in getting an intuition of how this can be. At the same time, I'm still trying to understand how the correlation between regressors is on average only r=.3 when the behavioral fits themselves are so highly correlated across participants. If there is a way to clarify this, I think that would go a long way in helping readers get on board with the idea that both exemplar and prototype representations can exist in the brain.

Incorporating some of the authors' explanation from the response letter could be useful for this. In particular, it was nice that they explicitly laid out how the advantage for old items over new ones would result in a better fit to an exemplar model but still a decent fit to a prototype model, because accuracy for old/new items at distance 2 falls between accuracy for items at distances of 1 and 3. Their response to Comment 7 seems like it could be informative to include too – although here they talk about the models' differences in terms of confidence, which isn't explained in the manuscript."

eLife. 2020 Nov 26;9:e59360. doi: 10.7554/eLife.59360.sa2

Author response


Essential revisions:

1) There were concerns that the pattern similarity analysis had underlying flaws. Specifically, the two comparisons (of item representations and category representations) do not appear to be on equal grounding. An item will have a highly similar representation to itself because it is visually identical to itself. This reviewer felt that it was not surprising (or necessarily interesting) that there are regions that show higher item self-similarity than cross-similarity, and we would hesitate to say these regions contain a more abstracted representation of the item or exemplar versus just responding to identical visual or perceptual processing of a given image. The category representation task is much harder (showing differences based on perceptually matched but learned category distinctions), and indeed it doesn't appear to come out from these data.

We thank the reviewers for noting that the rationale and interpretation for the pattern similarity analyses were not clear in our original manuscript. We were debating whether or not to include the pattern similarity analysis in our original submission as the additional experimental phases (training runs, interim tests) already made the methods and results quite long compared to our previous 2018 paper. Additionally, because the model-based fMRI and the pattern similarity analyses focus on very distinct notions of what “abstract” category representations can mean, we were worried that it may lead to confusion. During the revisions, we decided to drop the pattern similarity analyses from the paper to streamline it and allow us to better flesh out the model-based analyses. Nevertheless, we do think that the pattern similarity analyses could be interesting in their own merit and hope to explain it better in our response than we did in the original submission.

Regarding category representations, we agree that the bar we set for the category representation operationalization was high. This was intentional: we were specifically interested in whether we could find category representations under the "strict" conditions of equated physical similarity within and across categories. We realize that a positive outcome would have been much more interesting from the reader's perspective, but we reported the negative outcome nevertheless.

For the reviewers' information, the lack of category representations is interesting to us. The reason is that we have another study currently in progress that uses different stimuli (face blends) in which we do find such category representations in a strict sense. Moreover, they emerge over the course of learning across many brain regions, including visual cortices. We felt that the comparison between the studies might help interpret both findings. For example, category learning often includes learning which features are relevant vs. irrelevant for category membership. Greater within-category and smaller between-category similarity may then result from the hypothesized stretching and shrinking of perceptual space along those relevant and irrelevant perceptual dimensions, as one learns to attend to relevant information and ignore irrelevant information. Because all stimulus features are relevant in the current study, one cannot achieve an increase in perceived within-category similarity or a decrease in perceived between-category similarity by ignoring irrelevant stimulus dimensions. This may explain why category representations in the strict sense were not found in the current study (although they could still emerge in higher cognitive regions). However, we cannot reference the in-progress study in the current manuscript, leaving the category representations to remain a seemingly non-informative null finding at this time.

Regarding item representations, we did not aim to argue that the observed item representations are abstract and apologize if it came across that way. However, we would also like to note that item representations are not necessarily trivial, nor do they seem purely visually driven in our data. First, there were no stable item representations early in learning; they emerged only late in learning. This indicates that the formation of stable representations of individual items across repetitions involves learning and memory [as also illustrated in, e.g., Xue et al., 2010, Science]. Furthermore, item representations were not strongest in a perceptual ROI (LO) but instead were strongest in the lateral prefrontal and especially lateral parietal cortices. We thought this was interesting in light of research on memory fidelity and protection from interference among similar memories, which motivated these regions of interest in the first place (we expand on this point in response to comment 11 as well). However, we realize we did not verbalize this well in the manuscript.

During the revision, we were again torn about whether to keep the pattern similarity analyses and expand on their rationale and interpretation, or to drop them. We ended up dropping them for brevity and the other reasons listed above, and agree that the manuscript is indeed much easier to digest without these secondary analyses. This also gave us more space to expand on the model-based results in response to other comments, as we outline below.

A stronger test than looking at pattern similarities during this learning period might be to look during the test periods. The authors mention concerns about motor responses, but those could be regressed out.

We agree that it would be interesting to do the pattern similarity analysis during the tests but opted against it initially because we wanted to focus on the strictest form of category representation – increased within-category representational similarity despite equal physical similarity. One challenge of using both model-based fMRI and pattern similarity analyses is that it is difficult to find a category structure and a set of category examples that are well suited to both approaches. The stimulus set that we selected for the test periods was optimized for model-based fMRI and included a variety of examples that differed in their distance from the category prototypes and from the training examples. This variability helped generate prototype and exemplar regressors that were not too correlated with one another and included a range of model-predicted values for the parametric modulation to track. However, this stimulus set was less well suited to pattern similarity analyses because physical similarity within and across categories is no longer equated (as it was among the training stimuli). Thus, there are fewer PSA comparisons where physical similarity was matched within and across categories for the “strict” category representation analysis.

However, for the reviewers’ information, we ran the category PSA for the test portions and include the results in Author response image 1. First, we focused on testing “strict” category representations, analogous to the original learning analysis, using only comparisons of items 4 features apart to equate physical similarity within and between categories (left panel). No region showed reliable category representations during the interim or final tests. Second, we ran a “normal” category PSA that included all pairwise comparisons of all test stimuli (ignoring that two stimuli from the same category also tend to have higher physical similarity). Here, we see that in VMPFC and LO, items within a category tend to be represented more similarly than items from different categories by the final test.

Author response image 1.

Because we no longer report any pattern similarity analyses, we did not include these results in the revised manuscript, but wanted to share them with the reviewers.

It was suggested that the authors could create two representational dissimilarity matrices based on the two models (exemplar and prototype) for each stimulus. For the exemplar model, an ROI pattern would be predicted with similarity to exemplars previously shown, while for the prototype model, an ROI pattern would be predicted with similarity to the average. This should be a more direct test.

We thank the reviewers for engaging deeply with our paper and coming up with additional suggestions for analyses. It would indeed be interesting to develop an analysis that directly compares neural RDMs with model-derived RDMs, in the tradition of other RSA work (e.g., Kriegeskorte, 2008, Frontiers).

The first approach we tried in response to this comment was to construct the model-based RDMs by computing the predicted perceived distance between all pairs of stimuli using the attention weights derived from the respective models. This was the approach taken by Mack et al., 2013. Let’s consider a hypothetical example using 4-dimensional stimuli, with prototype-based weight estimates of [.5 .5 0 0] and exemplar-based weight estimates of [.25 .25 .25 .25]. Then the dissimilarity between two stimuli, S1 = [1 1 1 1] and S2 = [1 1 0 0], would be zero according to the prototype model (because it assumes that features 3 and 4 are ignored) but 0.5 per the exemplar model (because it assumes that features 3 and 4 are attended just as much as features 1 and 2). Model-based RDMs can be constructed this way for all pairs of stimuli and compared with neural RDMs, as done by Mack et al.
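To make this computation concrete, here is a minimal sketch of how such attention-weighted, model-predicted dissimilarities could be computed for binary-feature stimuli. This is our own illustration rather than code from either study; the function name and the use of a weighted city-block distance are our assumptions.

```python
import numpy as np

def weighted_rdm(stimuli, attention_weights):
    """Model-predicted dissimilarity: attention-weighted city-block
    distance between all pairs of binary feature vectors."""
    stimuli = np.asarray(stimuli, dtype=float)
    w = np.asarray(attention_weights, dtype=float)
    n = len(stimuli)
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            rdm[i, j] = np.sum(w * np.abs(stimuli[i] - stimuli[j]))
    return rdm

# Hypothetical 4-dimensional example from the text
s1, s2 = [1, 1, 1, 1], [1, 1, 0, 0]
w_prototype = [.5, .5, 0, 0]         # features 3 and 4 ignored
w_exemplar = [.25, .25, .25, .25]    # all features attended equally

print(weighted_rdm([s1, s2], w_prototype)[0, 1])  # 0.0
print(weighted_rdm([s1, s2], w_exemplar)[0, 1])   # 0.5
```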

Unfortunately, when we tried this analysis, we found it unsuitable for differentiating between the models in our data. Specifically, the two models produce highly correlated feature weight estimates (mean r = 0.85, range 0.53–1.00; see the histogram of correlation values in Author response image 2). We have previously shown in a separate large behavioral sample that feature weight estimates are well correlated between the models when using the current stimuli and a range of category structures (Bowman and Zeithamova, 2020), so the model agreement on feature weights is not unique to this particular data set. When the prototype model fits indicate that Subject X ignored dimensions 3 and 4 and paid most attention to dimension 2, the exemplar model typically indicates the same. From one perspective, this is a good validation that model fitting does what it should and that the attention weight estimates are reliable. However, it also means that the prototype model-predicted and exemplar model-predicted RDMs are highly similar in our data and cannot serve to resolve which model better matches the neural RDMs. This was not the case in the Mack paper, where the attention weight estimates differed enough between models to generate distinct predictions. We speculate that because the training stimuli in Mack et al. were less coherent and not centered around a prototype, the attention weight estimates per the prototype model were perhaps noisier. As we have a larger number of test stimuli in our studies and both exemplar and prototype strategies are in principle suitable for the current stimulus structure, both models may have sufficient information to estimate a participant’s attention to different features well enough. Because this analysis is essentially a physical similarity analysis with weighting of features added, the model-based similarity predictions do not differ between the prototype and exemplar models when the feature weight estimates are the same.

Author response image 2.

The wording of the comment also suggests an alternative analysis in which the predicted similarity between two stimuli could be computed through a comparison of each stimulus to the exemplars/prototypes. We considered such an approach, but it may not be possible to generate model-based similarity predictions this way. For instance, let’s consider what a similarity prediction should be for a pair of stimuli, S1 and S2, based on their similarity to the prototypes. Let’s assume that both S1 and S2 are 2 features away from prototype A and 6 features away from prototype B. Even though S1 and S2 are both 2 features away from prototype A, they can still differ from each other by 0, 1, 2, 3, or 4 features. Thus, it is unclear what prediction we should make about how similar S1 and S2 are to each other based on how similar they are to the two prototypes. The same holds for the exemplar model, but gets even more complicated given that two stimuli may each be highly similar to a category’s exemplar representation through similarity to very different category exemplars. Because of this ambiguity in what the model-based similarity predictions should be, we did not aim to implement this particular analysis.

2) Although the pattern similarity analyses were not the primary analyses in the paper, there were two very serious concerns about the primary model-based fMRI analyses. These will be essential to address if a revision is to be successful:

a) How correlated are the prototype model predictions and the exemplar model predictions? It seems like they might be highly correlated (e.g., very distant items would be distant from the prototype as well as previous exemplars). If so, then the authors may need to correct for multicollinearity in the GLMs – their separate contributions cannot be interpreted if they are correlated (because their shared variance is an arbitrary split between the two).

We appreciate this concern and took it into consideration when designing the task and during the analysis. It is indeed the case that stimuli further from old exemplars will also tend to be further from the prototypes. That’s also why the behavioral predictions of the two models are generally correlated and one can use either representation to solve the task. But this correlation will not be perfect, allowing distinct-enough predictions from the two models to differentiate them in behavior and in the brain in most cases.

Let’s take the example from the comment, focusing on the most distant items. The furthest a new exemplar in our task can be from the prototype is 3 features (4 features would make it equidistant from the two prototypes, and such stimuli were not used). When designing the task, we aimed to use a stimulus set that would be unlikely to generate highly correlated regressors and made sure that stimuli furthest from the prototype are not always far away from all the training exemplars. We used 8 new final test stimuli at distance 3 from prototype A. Out of the 8 stimuli, some are near an old exemplar and some are not near any of the old exemplars. The prototype model will have the same prediction for all distance 3 stimuli, but the exemplar model will not. This will also be the case for stimuli at all other distances from prototype A. Because we varied the test stimuli to include all distances from the prototypes, and because within each distance to the prototype there is variability in how far the stimuli are from the old exemplars, the structure is set up to facilitate dissociation between the model predictions. And, in most cases, this variability among stimuli indeed resulted in sufficiently distinct predictions from the two models to permit analysis.
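To illustrate this dissociation, here is a small sketch (our own, using an illustrative rather than actual training set) showing that stimuli matched at distance 3 from prototype A nonetheless vary in their distance to the nearest category A training exemplar:

```python
import numpy as np
from itertools import combinations

n_features = 8
prototype_a = np.ones(n_features, dtype=int)

# All stimuli exactly 3 features away from prototype A
distance3_items = []
for flipped in combinations(range(n_features), 3):
    item = prototype_a.copy()
    item[list(flipped)] = 0
    distance3_items.append(item)

# Hypothetical category A training exemplars, each 2 features from prototype A
# (illustrative only; not the training set used in the study)
training_a = np.array([[0, 0, 1, 1, 1, 1, 1, 1],
                       [1, 1, 0, 0, 1, 1, 1, 1],
                       [1, 1, 1, 1, 0, 0, 1, 1],
                       [1, 1, 1, 1, 1, 1, 0, 0]])

# Distance to prototype A is 3 for every item, but distance to the nearest
# training exemplar differs from item to item
min_dists = [min(int(np.sum(np.abs(item - ex))) for ex in training_a)
             for item in distance3_items]
print(sorted(set(min_dists)))  # [1, 3] with this illustrative training set
```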

Although the objective structure was set up to facilitate dissociability, the reviewers are correct that correlated predictors are still a potential issue. We have control over the physical distances among the stimuli. However, each participant’s attention to different stimulus dimensions affected perceived similarity, and we do not have control over that. For example, if the physical distance between two stimuli is 2, but a participant’s responses indicate that they did not really pay attention to the two features that happen to differ, the perceived distance between those stimuli becomes zero. Thus, the attentional parameters that we estimated for each subject affected the actual correlation between model predictions.

For that reason, we always check the correlation between regressors for each subject after the individual model fitting and exclude participants for whom the correlation is too high to permit the GLM analysis. In this study, that was 5 participants, as originally reported in the Participants section and as noted in Comment 12. In Author response image 3, we plot the histogram of the correlation values across all runs and all subjects that were included in the analysis.
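For illustration, a per-run check of this kind could look like the following sketch (our own; the variable names, example values, and exclusion threshold are placeholders, not the analysis code used in the study):

```python
import numpy as np

def regressor_correlation(prototype_modulator, exemplar_modulator, max_abs_r=0.9):
    """Correlate the trial-wise parametric modulators for the two models and
    flag runs whose regressors are too collinear to include in the GLM."""
    r = np.corrcoef(prototype_modulator, exemplar_modulator)[0, 1]
    return r, abs(r) > max_abs_r

# Hypothetical trial-wise modulation values for one run
prototype_vals = [0.8, 0.6, 0.9, 0.4, 0.7, 0.5]
exemplar_vals = [0.7, 0.5, 0.6, 0.5, 0.9, 0.4]
r, too_correlated = regressor_correlation(prototype_vals, exemplar_vals)
print(round(r, 2), too_correlated)
```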

Author response image 3.

The majority of correlations were small or moderate. However, there were some runs in which the absolute correlation was relatively high. While none of these runs were flagged by FSL as rank-deficient or too highly correlated to estimate, we wanted to verify that our results were not affected by the correlation between the regressors. For example, it could be that the lack of exemplar correlates in the final test was due to the shared variance being mis-assigned. Thus, in Author response image 4 we re-created the final test ROI x Model analysis in the subset of subjects (n = 20) whose regressor correlations fell between +/- 0.5 for all final test runs (with most runs much closer to zero than to .5). The overall pattern of results remained the same: VMPFC and anterior hippocampus tracked prototypes, and there were no above-chance exemplar correlates. Thus, it does not appear that the specific pattern of results was due to the limitations of running a GLM (multiple regression) with correlated predictors.

Author response image 4.

We did not re-compute the ROI x model analysis for the second half of learning because all runs in all subjects during this phase had regressor correlations between -.32 and +.46, with the vast majority close to zero. Thus, the presence of both prototype and exemplar correlates across regions during this phase does not seem to be an artefact of correlated regressors.

In the revised manuscript, we expanded the model description and regressor generation section to better explain how the potential correlation between regressors was taken into account:

“The modulation values for each model were computed as the summed similarity across category A and category B (denominator of Equation 3) generated under the assumptions of each model (from Equations 1 and 2). […] We verified that the pattern of results remained the same when analyses are limited to participants with absolute correlations below .5 in all runs, with most correlations being quite small.”

b) The within-ROI analyses were all one-sample t-tests only for the hypothesized model in a hypothesized region of interest. However, it is equally important to know: 1) is there evidence for the other model in that region as well (e.g., is there both prototype and exemplar information in the VMPFC, and if so, how can one interpret their co-existence?) and 2) is one model stronger than the other in a region (e.g., if the VMPFC doesn't have significantly higher prototype model fit than an exemplar one, can one really say it is a prototype-specific region?)

The reported analyses were based on our primary goal to replicate the prototype correlates in VMPFC and AHIP from our 2018 paper and to test whether we could replicate the exemplar correlates in the IFG, lateral parietal, and LO regions reported by Mack et al., 2013. However, we agree that analyses more directly testing a potential regional dissociation would be informative and should be reported as well. We went back and forth about these analyses during the initial submission and ended up removing them to streamline the Results section. We have added these analyses to the Results section and briefly summarize them here:

1) We did not observe a region with significant prototype AND significant exemplar signals. As a side note, we do think that such a result is theoretically possible and offer a speculation about it at the end of the response to this comment, in case the reviewers are interested.

2) The direct comparison of prototype vs. exemplar signal did not reach significance in the predicted prototype regions during the interim tests (anterior hippocampus p = .19; VMPFC p = .11) but was significant during the final test in the anterior hippocampus (p = .015) and marginal in VMPFC (p = .053). The exemplar signal was significantly greater than the prototype signal in the majority of predicted exemplar regions (lateral parietal, IFG, PHIP; all p < .04) and marginal in LO (p = .08) during the interim tests. During the final test, there was no reliable exemplar signal and we also did not find any exemplar > prototype region (all ps > .5). These results are now reported in detail in the revised manuscript.

In line with these results, we have revised our conclusion regarding the VMPFC. First, we emphasize that it contains generalized category representations abstracted across exemplars, akin to its role in memory integration across events. Second, we note that whether VMPFC is prototype-specific is still an open question in our view, and we hope to dive into that question in future research. The most relevant revised discussion text can be found here:

“Prior work has shown that the hippocampus and VMPFC support integration across related experiences in episodic inference tasks (for reviews, see Schlichting and Preston, 2017; Zeithamova and Bowman, 2020). […] Thus, it remains an open question whether representations in VMPFC are prototype specific or instead may reflect some mix of coding.”

As a side note, we think that the VMPFC in particular could in principle show both types of signal (prototype and exemplar) and that such a result could be meaningful. For example, exemplar-based category representations could emerge in VMPFC, perhaps with a less coherent category structure. We have shown behaviorally that less coherent training exemplars are less likely to produce a prototype representation (Bowman and Zeithamova, 2020). But a category label could serve as overlapping information linking category exemplars even if the exemplars are too distinctive to produce a coherent prototype. This would resemble the arbitrary overlapping pairs used in associative inference tasks: S1-Badoon, S2-Badoon. If we had found exemplar signals in VMPFC, we would be curious to follow up to find out whether distinct representations form in distinct subregions of VMPFC (perhaps akin to the separated (AB, BC) vs. integrated (ABC) representations identified in Schlichting et al., 2015). Or perhaps the same neural region would sometimes form exemplar and sometimes prototype representations, depending on the task or on the participant. Given that we did not actually find a region that would show both signals, we did not reproduce the above discussion in the paper itself but are including it here to answer the reviewers’ question.

3) We discussed the fact that the evidence the paper presents for exemplar-based processing is not strong. Behaviorally, there is significantly more evidence for the prototype model, and although exemplar coding appears in the hypothesized regions during the 2nd phase, it does not appear in the 1st phase or the final phase. The pattern similarity results also do not directly address exemplar-based coding, for the reasons mentioned above. The above-mentioned RSA may help with this, but there was concern that the paper presents only weak evidence for the existence of an exemplar representation. On the flip side, we were concerned about whether the prototype model shows significantly higher fits than the exemplar model in most regions when pitted directly against each other (vs. the current analysis, which just compares the hypothesized model to 0 – per Comment 2b above). As such, there was concern that the final set of results does not show a message that is consistent across timepoints, behaviour, and analyses. Pending the results of additional analyses, we felt that the conclusions should be tempered, or it may simply be the case that the results are not strong enough to warrant publication in eLife.

We agree with the reviewers that the original submission may have sold the idea of co-existing representations too strongly. As noted in the response to Comment 4b, one piece of supporting evidence was added: an analysis showing that the exemplar correlates in the hypothesized exemplar regions were not only above chance, but also reliably greater than the prototype correlates. While this helps increase confidence in the exemplar findings during the 2nd phase, it does not alleviate the concern that the exemplar correlates are only apparent in part of the task and do not carry consistently across timepoints. To better reflect the limits of the exemplar evidence, we have revised the title of the manuscript to “Tracking prototype and exemplar representations in the brain across learning”. We have made revisions to the Abstract and throughout the Discussion to give it the nuance warranted by the results. Removing the RSA analyses has also given us space to expand our discussion of the model-based results.

4) In the authors' prior paper (J Neuro 2018), participants were classified according to whether their responses were well fit by the exemplar model, prototype model, or neither. Here, these fits are displayed on a continuous scale rather than classifying participants according to their dominant strategy, and it seems that there is wide variability in these fits, with a majority of subjects either well or poorly fit by both (participants along the diagonal line).

The reviewers were curious how participants would be classified according to the authors' prior approach, as a potential means to answer the question of whether performance in this dataset is indeed more exemplar-based than in the authors' prior dataset. We agreed that the manuscript would benefit from a justification of why the authors switched to this analysis rather than the classification approach they had previously adopted. If possible, it would be nice to show both for a more direct comparison.

We thank the reviewers for this suggestion. We agree that the classification approach provides a nice way to visualize the behavioral model fits and offers the most straightforward way to evaluate whether we succeeded in shifting participants’ strategy compared to our prior paper. We are now including it in the manuscript. For the reviewers’ information, we used the alternative report because we were asked to report model fits using a scatter plot for our recent behavioral paper (Bowman and Zeithamova, 2020) and it also matched how Mack et al., 2013, visualized their model fits (we also note why scatter plots are a popular way to display model fits in the response to Comment 7). We ended up not including the model classification approach in the initial submission to shorten the paper but are happy to include it in the revision.

From the classification analysis above, we see that we succeeded in shifting participants’ strategies only partially. In our prior study, which had shorter, feedback-based training and included only a final test, 73% of participants were best fit by the prototype model. In the current study, the prototype model wins by a smaller margin and there are more participants who are comparably fit by both models. However, the prototype model still dominates behavior starting with the second part of training.

In the revised manuscript, we added the strategy classification analysis and Figure 4. The relevant Materials and methods and Results are reprinted below:

Materials and methods:

“We also tested whether individual subjects were reliably better fit by one model or the other using a permutation analysis. […] We then compared the observed difference in model fits to the null distribution of model fit differences and determined whether the observed difference appeared with a frequency of less than 5% (α=.05, two-tailed). Using this procedure, we labeled each subject as having used a prototype strategy, exemplar strategy, or having fits that did not differ reliably from one another (“similar” model fits) for each phase of the experiment.”

Results:

“Figure 4D-F presents the percentage of subjects that were classified as having used a prototype strategy, exemplar strategy, or having model fits that were not reliably different from one another (“similar”). In the first half of learning, the majority of subjects (66%) had similar prototype and exemplar model fits. In the second half of learning and the final test, the majority of subjects (56% and 66%, respectively) were best fit by the prototype model.”

The Results section now also includes additional explanation of the alternative, continuous measure visualization using the scatter plots, which are detailed in response to Comment 7.

Finally, we added a discussion of the modest strategy shifts between the current and 2018 studies, as we detail in response to Comment 8.

Please note that in our original 2018 paper, we used the label “neither” in the sense that neither model outperformed the other model. However, we found that this label was confusing because it sounded as though neither model fit the behavior well. That was not the case – both models outperformed chance (in our 2018 paper as well as here). We are now using “similar fit” instead of “neither” to label participants whose exemplar and prototype model fits are comparable, in order to avoid confusion about this point.

5) It was unclear how a participant could be a good fit to both models, as they should predict opposing behaviors (e.g. an advantage for old items over new ones in the exemplar model versus no difference in the prototype model, when testing items 2 features away from their prototype). An example or a high-level description of what a good fit to both models may look like would be helpful here.

We thank the reviewers for noting that, in our attempt to explain the dissociable predictions from the two models, we failed to note their similarities. In general, the two models will predict similar category labels, albeit with different confidence. For instance, prototype A will be predicted with high confidence to be a category A item by the prototype model. But it will also be predicted with moderate confidence to be a category A item by the exemplar model, as it is also closer to category A training exemplars than to category B training exemplars. Category A training exemplars will be classified with high confidence as category A items by the exemplar model, but also classified with moderate confidence as category A items by the prototype model because they are only 2 features away from prototype A but 6 features away from prototype B. When participants are just guessing or pressing random buttons, neither model will do well. But as participants learn and do better on the task, the fit of both models will tend to improve. Even if a participant had a purely prototype representation and a perfect prototype fit, the exemplar model would still predict behavioral responses (based on the similarity to category exemplars) reasonably well and certainly better than chance. Conversely, if a participant had a purely exemplar representation, items close to the old exemplars of category A will – on average – also be closer to prototype A than to prototype B. Thus, the prototype model will still fit quite well, but not as well as the exemplar model.
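As a toy illustration of this point, consider the following sketch (ours; it assumes equal attention weights, an arbitrary sensitivity parameter, and a hypothetical set of training exemplars) showing that both models classify the category A prototype into category A, just with different confidence:

```python
import numpy as np

def sim(x, y, c=2.0):
    """Similarity as an exponential decay of the (equally weighted) distance
    between two 8-feature binary stimuli; distance is scaled to [0, 1]."""
    d = np.mean(np.abs(np.asarray(x, float) - np.asarray(y, float)))
    return np.exp(-c * d)

proto_a, proto_b = np.ones(8), np.zeros(8)

# Hypothetical training exemplars: each category A item is 2 features from
# prototype A; category B items mirror them (illustrative, not the real set)
train_a = np.array([[0, 0, 1, 1, 1, 1, 1, 1], [1, 1, 0, 0, 1, 1, 1, 1],
                    [1, 1, 1, 1, 0, 0, 1, 1], [1, 1, 1, 1, 1, 1, 0, 0]])
train_b = 1 - train_a

item = proto_a  # classify the category A prototype itself

# Prototype model: relative similarity to the two prototypes
p_proto = sim(item, proto_a) / (sim(item, proto_a) + sim(item, proto_b))

# Exemplar model: relative summed similarity to the two sets of training items
sum_a = sum(sim(item, ex) for ex in train_a)
sum_b = sum(sim(item, ex) for ex in train_b)
p_exemplar = sum_a / (sum_a + sum_b)

print(round(p_proto, 2), round(p_exemplar, 2))  # ~0.88 vs ~0.73: same label, different confidence
```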

Because the model fits are usually correlated, they are often plotted as a scatter plot (as we did in the original Figure 4 and explained in response to Comment 6), with the diagonal line representing equal fits. Participants above the line are better fit by one model, participants below the line are better fit by the other model, and participants on the line are fit equally well by both models. (Our Monte Carlo procedure allows us to make a somewhat more principled decision about when to call the fits “equal”.) Participants with higher (mis)fit values are those who learned less and guessed on more trials, which means that both models will fit relatively poorly. And, as noted above, the more consistent the responses become with an underlying representation (prototype or exemplar), the better the fit of both models will tend to be. This is the reason why the exact fit value for one model is not sufficient to determine a participant’s strategy; only the relative fit of one model compared to the other is diagnostic. In some cases, the responses will be highly consistent with both models and we end up unable to determine the strategy.

Up to this point, we have discussed the mathematical reasons why a great fit of one model will likely be accompanied by a decent fit of the other model. However, it is worth noting that there may also be cognitive reasons why a participant’s behavior may be well fit by both models. We do not see prototype vs. exemplar representations as necessarily an either-or scenario; it is plausible that a participant forms a generalized representation (prototype) that guides categorization decisions but also has memory for specific old instances that can likewise inform categorization. For example, consider a participant whose behavior mirrors what we see across the group: they have an accuracy gradient based on the distance from the category prototypes but also have an accuracy advantage for old items compared to new items at distance 2 (Figure 3B in the current manuscript, also observed in the 2018 paper). The reviewers are correct that the prototype model does not predict an old/new difference. However, accuracy for both old and new distance 2 items still falls between distance 1 and distance 3 items, consistent with the prototype model predictions. Thus, the observed old/new advantage will not cause a serious reduction in model fit for the prototype model, as both old and new items at a given distance fall along the expected distance gradient. At the same time, the old > new advantage suggests that the prototype model does not offer the whole story.

In the revised manuscript, we included more information about the models to clarify why behavioral model fits in general track one another and why both models may fit behavior well:

“Figure 4A-C presents model fits in terms of raw negative log likelihood for each phase (lower numbers mean lower model fit error and thus better fit). Fits from the two models tend to be correlated. If a subject randomly guesses on the majority trials (such as early in learning), neither model will fit the subject’s responses well and the subject will have higher (mis)fit values for both models. […] In such cases, a subject may be relying on a single representation but we cannot discern which, or the subject may rely to some extent on both types of representations.”
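For readers unfamiliar with this fit metric, here is a minimal sketch (ours, not the manuscript's code) of how a raw negative log likelihood can be computed from model-predicted response probabilities:

```python
import numpy as np

def negative_log_likelihood(p_respond_a, chose_a):
    """Model misfit: summed -log probability that the model assigned to the
    response the subject actually gave on each trial (lower = better fit)."""
    p = np.asarray(p_respond_a, float)
    chose_a = np.asarray(chose_a, bool)
    p_observed = np.where(chose_a, p, 1 - p)
    return -np.sum(np.log(p_observed))

# Hypothetical model-predicted P(respond A) and observed responses
p_a = [0.9, 0.8, 0.3, 0.7, 0.2]
chose_a = [True, True, False, True, False]
print(round(negative_log_likelihood(p_a, chose_a), 2))  # ~1.26
```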

We have also provided more motivation for the idea that prototype and exemplar representations may co-exist in the Introduction:

“It is possible that the seemingly conflicting findings regarding the nature of category representations arose because individuals are capable of forming either type of representation. […] Thus, under some circumstances, both prototype and exemplar representations may be apparent within the same task.”

6) Related to the first point, the authors state that task demands and/or category structures could bias categorization to be either exemplar or prototype based. To get at both, the authors used a category structure that is supported mostly by a prototype model, but a task that is meant to promote exemplar encoding. However, on average participants' decisions were still better fit by the prototype model (although again, most people fall along the diagonal). And from Figure 4, there seem to be very few participants who were better fit by the exemplar model.

With this in mind, is there any evidence that the observational task actually shifted participants to be more exemplar-based? If not, why is there evidence of exemplar representations in LO/IFG/Lateral parietal in this experiment and not in the authors' earlier paper, where ~10% of participants were better fit by an exemplar model? Are there any other differences between the two procedures or stimulus sets that could account for this? Regardless of the differences between the experiments, the notion that exemplar-based representations exist in the brain even though they are not relevant for behavior is worth engaging with in more detail in the Discussion.

The reviewers’ point that the difference between the behavioral and neural model fits should be discussed in more detail is well taken. First, as the reviewers noted (and as we discussed in response to Comment 6), the strategy shift turned out to be modest. Despite that, we found exemplar correlates (at least in the interim tests after the second half of training) in the present study, while we did not find them in our 2018 study. Why is that? First, we should note that although our 2018 study did not find any above-threshold exemplar regions when using a standard corrected threshold, we did find some parts of the brain, including lateral occipital and lateral parietal regions, that tracked exemplar predictions at a lenient threshold (z = 2, FDR cluster correction p = .1). We conducted this exploratory lenient-threshold analysis in an attempt to reconcile our findings with those of Mack et al., 2013, and reported it on p. 2611 (last Results paragraph) of our 2018 J Neuro paper. Thus, the present results are not entirely different from our prior study in that there was some evidence of exemplar tracking despite a better fit of the prototype model to behavior.

One possible reason for observing exemplar correlates in the current study may be that the observational learning was actually successful in promoting exemplar representations, even though the prototype strategy still dominated behavior. There are fewer of the pure “prototypists” and more subjects who are comparably fit by both models, providing some indication that there was a small strategy shift. The small shift in strategy may have been sufficient for the previously subthreshold exemplar correlates to become more pronounced.

The structure of the training was indeed the primary difference between the two studies. Our prior study included a single feedback-based training session outside the scanner. The present study included cycles of observational study runs (to promote exemplar memory) that were followed by interim generalization tests (to measure learning and estimate model fits). We also used a different set of cartoon animals, but their feature space is very similar and the structure of the training items was the same across studies. While the training portion was necessarily longer in the current study because it was scanned and included both observational learning and interim tests, the final tests were nearly identical across studies. Thus, the differences in training are likely the most important from the learning perspective.

Importantly, we agree with the reviewers that prototype dominance in categorization behavior may not preclude the existence of exemplar-specific memory representations. Exemplar-based representations may form in the brain even though they are not immediately relevant for the task at hand. Exploring the possibility that specific and generalized memories form in parallel is of high interest to our lab, and we have behaviorally studied both specific memories after generalization learning (Bowman and Zeithamova, 2020) and generalization after learning focused on specific details (Ashby, Bowman and Zeithamova, 2020, Psychonomic Bulletin and Review). Thus, we are eager to dedicate more discussion to this topic.

In the revised manuscript, we expanded the comparison between the current study and our 2018 study throughout the discussion. We also discuss the existence of neural exemplar-based representations in the context of prototype dominance in behavior:

“In designing the present study, we aimed to increase exemplar strategy use as compared to our prior study in which the prototype model fit reliably better than the exemplar model in 73% of the sample (Bowman and Zeithamova, 2018). […] The present results show that these parallel representations may also be present during category learning.”

7) In general, it was felt that the paper could use more depth. Many of the citations regarding the prototype vs. exemplar debate are very old (~30+ years old) and the more recent papers mentioned later focus more on modeling and neuroimaging. Could the authors describe work that consolidates this debate?

We agree that our paper could benefit from broader connections to the existing literature and a deepening of the discussion. We have revised the Introduction and especially the Discussion and incorporated a number of additional references relevant to the current work (noted in blue in the manuscript). We specifically aimed to include more recent papers focusing on prototype and/or exemplar models (e.g., Dubé, 2019; Thibaut, Gelaes and Murphy, 2018; Lech, Güntürkün and Suchan, 2016). Unfortunately, to our knowledge, such papers are not very prevalent and the best attempts at consolidation (such as Smith and Minda, 2000; Minda and Smith, 2001) are indeed two decades old. We were unsure whether there was another specific pocket of literature the reviewers had in mind that we missed. If so, we apologize and would be happy to incorporate it.

Also, we felt that there could be more elaboration on the Materials and methods in the main manuscript, specifically as they relate to the theoretical questions at stake. For example, what are the general principles underlying these models and what different hypotheses do they make for this specific stimulus space?

We have added more information on the models and their relationship to the present study to the Introduction:

“We then looked for evidence of prototype and exemplar representations in the brain and in behavioral responses. In behavior, the prototype model assumes that categories are represented by their prototypes and predicts that subjects should be best at categorizing the prototypes themselves, with decreasing accuracy for items with fewer shared features with prototypes. […] We then measured the extent to which prototype- and exemplar-tracking brain regions could be identified, focusing on the VMPFC and anterior hippocampus as predicted prototype-tracking regions, and lateral occipital, prefrontal, and parietal regions as predicted exemplar-tracking regions.”

“To test for evidence of prototype and exemplar representations in behavior across the group, we compared accuracy for items varying in distance from category prototypes and for an accuracy advantage for training items relative to new items matched for distance from category prototypes. […] The model whose predictions better match a given subject’s actual classification responses will have better fit. However, it is also possible that evidence for each of the models will be similar, potentially reflecting a mix of representations.”

8) In addition to engaging more with the literature, we wished for more of a discussion of the mechanistic reasons behind why these ROIs have separate sensitivity to these two models, as well as what the present results show about what underlying computations are occurring in the brain. To quote directly from our consultation session: "I think some attempt at a mechanistic explanation may also be necessary. (Though I understand this can be delicate because they may want to avoid reverse inference.) One thing that stands out to me here is that LO is claimed to be a region showing exemplar representations. LO is believed by many (most?) to be a "lower level" visual area (e.g., in comparison to PPA which may have more abstract information) that represents visual shape information. So even if LO shows an exemplar representation, this could be because the item is more similar in shape to other items that were seen (the exemplars) versus a prototype that was not seen. So, a purely low-level visual account could possibly explain these results, rather than something deeper about mental representations. Thus I am also concerned about whether these findings necessarily mean something deep reconciling this larger theoretical debate, or may reflect some comparison of low level visual features across exemplars." In short, we all agreed that a deeper discussion of the neural computations and mechanisms would improve the contribution of the current paper considerably.

As suggested, we have added more details about the proposed computations performed by individual ROIs and their relevance for prototype and exemplar coding. Introduction:

“Mack and colleagues (2013) found similar behavioral fits for the two models, but much better fit of the exemplar model to brain data. Parts of the lateral occipital, lateral prefrontal and lateral parietal cortices tracked exemplar model predictors. No region tracked prototype predictors. The authors concluded that categorization decisions are based on memory for individual items rather than abstract prototypes. In contrast, Bowman and Zeithamova (2018) found better fit of the prototype model in both brain and behavior. […] However, as neural prototype and exemplar representations were identified across studies that differed in both task details and in the categorization strategies elicited, it has not been possible to say whether differences in the brain regions supporting categorization were due to differential strength of prototype versus exemplar representations or some other aspect of the task.”

Discussion:

“Moreover, our results aligned with those found separately across two studies, replicating the role of the VMPFC and anterior hippocampus in tracking prototype information (Bowman and Zeithamova, 2018) and replicating the role of inferior prefrontal and lateral parietal cortices in tracking exemplar information (Mack et al., 2013). […]The present findings support and further this prior work by showing that regions supporting memory specificity across many memory tasks may also contribute to exemplar-based concept learning.”

Specifically regarding perceptual representations in categorization, there is prior evidence that representations in LO shift as a function of category learning (Palmeri and Gauthier, 2004; Folstein, Palmeri and Gauthier, 2013), which has typically been interpreted as the result of selective attention to category-relevant features (Goldstone and Steyvers, 2001; Medin and Schaffer, 1978; Nosofsky, 1986). The study by Mack et al. showed that exemplar representations in LO were not only related to the physical similarity between items, but were driven by subjective similarity in individual subjects estimated from their behavioral responses. Thus, these lower-level perceptual regions can have strong category effects that are driven by more than just the physical similarity between items. Nonetheless, we note that of our predicted exemplar ROIs, LO showed the weakest evidence of exemplar coding. We have edited the Discussion to explicitly point out that one of our hypothesized exemplar regions did not significantly track exemplar predictors. The following is the revised Discussion text:

“In addition to IFG and lateral parietal cortex, we predicted that lateral occipital cortex would track exemplar information. […] This aspect of our task may have limited the role of selective attention in the present study and thus the degree to which perceptual regions tracked category information.”

9) Six participants were excluded for highly correlated prototype and exemplar regressors – can the authors provide a short explanation of what pattern of behavioral responses would give rise to this? And what is the average correlation between these regressors in included participants?

There were five participants excluded due to high correlations in one of the learning phases: 3 participants due to a high correlation between the prototype and exemplar regressors and 2 due to a rank-deficient design driven by a lack of trial-by-trial variability in the exemplar predictions. In all five cases, the attention weights from their behavioral model fits indicated that the participants ignored most stimulus dimensions in the given phase. As noted in the response to Comment 4A, the models’ predictions are based on estimated perceived distances, which in turn depend on the estimated feature attention weights. When too many stimulus features are ignored, too many stimuli are perceived as equivalent and there may not be enough differences between the predictions of the two models (3 subjects) or enough trial-by-trial variability in the model predictions (2 subjects) to permit analysis. We have added a note to the exclusion section providing more explanation regarding the correlation-driven exclusions:

“An additional 5 subjects were excluded for high correlation between fMRI regressors that precluded model-based fMRI analyses of the first or second half of the learning phase: 3 subjects had r > .9 for prototype and exemplar regressors and 2 subjects had a rank-deficient design matrix driven by a lack of trial-by-trial variability in the exemplar predictor. In all 5 participants, attentional weight parameter estimates from both models indicated that most stimulus dimensions were ignored, which in some cases may lead to a lack of variability in model fits.”

The mean absolute correlation between regressors in included participants was .32. For a detailed response and manuscript edits regarding the regressor correlations for subjects that were retained for analyses, please see Comment 4A.

10) For behavior, the authors investigated performance as it varied with distance from prototypes. It would also be interesting to investigate how behavior varies with distance from studied exemplars.

We agree with the reviewers that, in principle, it is of interest to examine accuracy in terms of the exemplar model. However, in practice, this can be tricky to accomplish. Because there is only one prototype per category, and because the further a stimulus is from one prototype, the closer it becomes to the other prototype, it is easy to display accuracy in terms of the distance from category prototypes. With exemplar representations, that is not the case. There are multiple exemplars per category, so one has to choose whether to consider just the closest exemplar or attempt to consider all of them simultaneously. Taking a linear average of the distance to all exemplars in our category structure would simply be the prototype distance again. The exemplar model uses a nonlinear metric to compute the similarity of each item to each exemplar before summing across exemplars, but the exact nature of this nonlinearity is estimated for each subject, making it difficult to compile into a group-level analysis. As discussed further in response to Comment 15, getting an aggregate measure of distance/similarity across all training items is an important part of how the exemplar model works. Considering just the single closest exemplar provides the best approximation to the prototype analysis, but it is necessarily simplified, as it does not consider that classifying an item close to two exemplars of one category is easier than classifying an item close to a single exemplar of the category (see also our response to Comment 15).

To give the reviewers some sense of what exemplar-based accuracy looks like, we took the approach of calculating the physical distance between a given test item and all of the category A and category B training items. We then took the minimum distance to each category (i.e., the physical distance to the closest exemplar from each category) and divided one by the other to compute a “distance ratio.” For example, if the minimum distance to a category A exemplar is 1 (one feature different) and the minimum distance to a category B member is 2 (two features different), then the distance ratio would be 1/2 = 0.5. We focused on the ratio of distances to the two categories because measuring distance to exemplars from only one category does not provide enough information about distance to the other category (unlike the distances to the prototypes, which always sum to 8 and are perfectly inversely related).

The ratio will be 0 for old items themselves, as the numerator will be 0 and the denominator will be 4 or 8 given the structure of our training exemplars, either way resulting in a ratio of zero. The ratio will be 1 if the test item is equidistant from the closest exemplar of each category. For all other test items, the ratio will be between zero and one, with numbers closer to zero indicating that a test item is close to an exemplar from one category and far from all exemplars of the opposing category. In practice with our category structure, the possible distance ratio values were 0, 0.2, 0.33, 0.5, and 1. We computed the “proportion correct” metric assuming that category membership is determined by the closer distance. For example, for an item with a minimum distance of 1 to a category A exemplar and a minimum distance of 2 to a category B member, category A would be considered the correct response, as the closest category A item is closer than the closest category B item. We note that for items with a distance ratio of 1 (equidistant), there is no “correct” answer, so we instead report how often those items were labeled as category A members.
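A minimal sketch of this distance-ratio computation (ours; the function name and example stimuli are illustrative, not the actual stimulus set):

```python
import numpy as np

def distance_ratio(test_item, train_a, train_b):
    """Ratio of the distance to the closest exemplar of the nearer category
    over the distance to the closest exemplar of the other category."""
    d_a = min(int(np.sum(np.abs(test_item - ex))) for ex in train_a)
    d_b = min(int(np.sum(np.abs(test_item - ex))) for ex in train_b)
    closer = "A" if d_a <= d_b else "B"  # nearer category ("correct"; ambiguous when ratio == 1)
    return min(d_a, d_b) / max(d_a, d_b), closer

# Hypothetical 8-feature stimuli (illustrative exemplar sets)
train_a = np.array([[0, 0, 1, 1, 1, 1, 1, 1], [1, 1, 0, 0, 1, 1, 1, 1]])
train_b = 1 - train_a
new_item = np.array([0, 0, 1, 1, 1, 1, 1, 0])  # 1 feature from the nearest A exemplar
print(distance_ratio(new_item, train_a, train_b))  # (0.33..., 'A')
```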

In Author response image 5, panel A shows accuracy as a function of distance ratio during the interim tests. Panel B shows accuracy as a function of distance ratio during the final test. What we find is that the gradient based on exemplar distance is less clean than the one based on prototype distance. During the interim tests, distance ratios 0–0.5 are clustered together with relatively high accuracy and do not show an obvious gradient. As would be expected, items that were equidistant between exemplars (distance ratio = 1) are labeled as category A members ~50% of the time after the first test. During the final test, we see that distance ratios 0–0.2 have similar levels of relatively high accuracy, followed by similar levels for ratios 0.33–0.5. Subjects responded category A to about 50% of items with distance ratio = 1, nicely capturing the ambiguity of classifying items equidistant from exemplars of opposing categories. Given the issues noted above with generating a raw-distance-to-exemplars analog of the raw-distance-to-prototype approach, we have elected not to include this analysis in the manuscript.

Author response image 5.

11) Did the authors find different weighting on different manipulated features?

We present the attention weights returned by each model averaged across subjects for each phase in Author response table 1:

Author response table 1.

Phase                 Model      Feature weights (w)                                      Sensitivity (c)
                                 Neck   Tail   Feet   Nose   Ears   Color   Body   Pattern
Immediate tests 1-2   Exemplar   .12    .15    .16    .18    .10    .11     .10    .08      36.6
Immediate tests 1-2   Prototype  .13    .13    .15    .17    .09    .12     .10    .09      24.7
Immediate tests 3-4   Exemplar   .09    .15    .15    .11    .15    .08     .14    .13      56.1
Immediate tests 3-4   Prototype  .10    .15    .16    .11    .12    .10     .14    .13      49.3
Final test            Exemplar   .11    .16    .14    .15    .13    .06     .14    .12      57.9
Final test            Prototype  .11    .15    .16    .13    .11    .09     .17    .09      49.7

As there were 8 features and the attention weights sum to 1, perfectly equal weighting of all features would place a weight of .125 on each feature. We see that there are some differences in how individual features are weighted, but the differences are relatively small. It does not seem that any feature was completely ignored by most subjects, nor were there 1–2 features that captured the attention of all subjects. We note that there was also good consistency across the models in how the features were prioritized. We have added the information about these parameters to the data source files, in addition to the model fits that were included in the original submission.

12) The manuscript states that for the exemplar model, "test items are classified into the category with the highest summed similarity across category exemplars." One reviewer wondered whether this is a metric that is agreed upon across the field, as they would have anticipated other (possibly non-linear) calculations. For example, could it be quantified as the maximum similarity across exemplars?

Thank you for pointing out that we did not make this clear. Indeed, it is standard for the exemplar model to compute an item’s similarity to a category as the sum of its similarities to all of that category’s training exemplars. The probability of responding A is then computed as the relative similarity to each category: the summed similarity to category A exemplars divided by the sum of the summed similarities to category A and category B exemplars. We adopted this model formalization as used in the traditional cognitive modeling literature.

We note, however, that non-linearity is still included in the calculation. Because we (and others) use a nonlinear function to transform physical distance into subjective similarity before summing across items, the most similar items are weighted more heavily than the least similar items. Intuitively, one can recognize a tiger as a mammal because it looks similar to other types of cats, and it does not matter that it does not look like some other mammals, such as elephants. Formally, this stems from computing similarity as an exponential decay function of physical distance (so close distances outweigh far distances) rather than as a linear function. As a result, two items that are physically 1 feature apart will have a subjective similarity value that is more than twice the similarity value for two items that are 2 features apart. This is canonically how distance/similarity is computed in both models, and it is derived from research on how perceived similarity relates to physical similarity (Shepard, 1957).

Although taking the maximum similarity would be a possible function to use for decision making, summing across all exemplars provides an opportunity for multiple highly similar exemplars to be considered in making decisions. For example, imagine a scenario in which there are multiple category A exemplars that are only 1 feature away from a given generalization item vs. a scenario in which there is only one category A exemplar that is 1 feature away (and the other exemplars are 2+ features away). Without summing across the training examples, these two scenarios would generate the same prediction. By summing across them, the model would assign a higher probability of the item being labeled a category A member when there are multiple distance 1 items compared to when there is a single distance 1 item. Because the similarity itself includes a non-linear computation, using the summed similarity function provides the best of both worlds: highly matching exemplars drive the prediction but there is still a differentiation in the resulting probability values informed by other exemplars beyond the closest exemplar.
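The following sketch (ours; the distances, sensitivity value, and exemplar counts are arbitrary illustrations) shows the difference: a maximum-similarity rule cannot distinguish the two scenarios described above, whereas summed similarity can:

```python
import numpy as np

def summed_similarity(distances, c=2.0):
    """Summed exponential-decay similarity across exemplars
    (distances expressed as a fraction of the 8 features)."""
    return float(np.sum(np.exp(-c * np.asarray(distances, float))))

# Distances from one generalization item to the category A training exemplars
scenario_multiple_close = [1/8, 1/8, 1/8, 3/8]  # several exemplars 1 feature away
scenario_single_close = [1/8, 3/8, 3/8, 3/8]    # only one exemplar 1 feature away
distances_to_b = [5/8, 5/8, 5/8, 7/8]           # same in both scenarios

for dists_a in (scenario_multiple_close, scenario_single_close):
    sum_a, sum_b = summed_similarity(dists_a), summed_similarity(distances_to_b)
    print(round(sum_a / (sum_a + sum_b), 2))    # ~0.73 vs ~0.68
# A maximum-similarity rule would give the same answer in both scenarios,
# because the closest exemplar is 1 feature away either way.
```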

In response to this comment and also Comment 7, we expanded the model description section in the manuscript to better explain, conceptually, the similarity computation and the response probability computation. We also made it clearer that these are the canonical ways of computing similarity and response probabilities in these models:

“Exemplar models assume that categories are represented by their individual exemplars, and that test items are classified into the category with the highest summed similarity across category exemplars (Figure 1A). […] This is canonically how an item’s similarity to each category is computed in exemplar models (Nosofsky, 1987; Zaki, Nosofsky, Stanton, and Cohen, 2003).”

13) Is there any way to explore whether the model fits from the neural data can be used to explain variance in participants' behavioral fits? For instance, do neural signals in vmPFC and anterior hippocampus better relate to behavior relative to signals in LO, IFG and lateral parietal for participants with better prototype fits? There may not be sufficient power for this so this was suggested only as a potential exploratory analysis.

In addition to the power issue, we have been hesitant to try to relate the neural and behavioral model fits to one another because they are not independent. Unlike univariate activation (or another independent neural measure), the brain model fits are computed from regressors derived from the behavioral model fits and are thus inherently dependent on them. Thus, even if we found a correlation between neural model fits and behavioral model fits, we would not be able to interpret the neural fits as “explaining” individuals’ variability in behavior (since the neural fits are computed from the behavioral fits). Furthermore, for subjects whose behavior is not well fit by the model, we would be biased not to find neural evidence matching the model predictions. Because of this, we have focused on main effects (group averages) when conducting model-based fMRI analyses in this and our 2018 paper and avoided across-subject brain-behavior correlations.

Out of curiosity, we looked at how the behavioral prototype advantage (the degree of evidence for a prototype strategy over an exemplar strategy) relates to prototype and exemplar signals in the ROIs using correlations. We did not find any significant correlations, but the strongest relationship was between the behavioral prototype advantage and the VMPFC prototype signal (r = 0.35, p = 0.06 uncorrected), indicating that those who are the strongest “prototypists” tend to have a greater prototype signal in VMPFC. This is a nice sanity check, but given the non-independence of the behavioral and neural measures (and the power issues, especially given the number of possible correlations), we did not include this analysis/result in the revised paper. We have, however, included a note explaining why the neural model fits are inherently linked to behavior and why traditional brain-behavior correlations may not be suitable here.

“Because each participant’s neural model fit is inherently dependent on their behavioral model fit, we focused on group-average analyses and did not perform any brain-behavior individual differences analyses.”

As a side note, we are currently starting a large individual differences study (now on hold … of course) where we hope to explore the neural predictors of individual differences in strategy, using independent neural markers. So we hope to be able to give a more comprehensive answer to this question in a couple of years…

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

We all agreed that the revised paper was substantially improved. In particular, the removal of the pattern similarity analyses made the paper sharper and more straightforward, and eliminated the methodological concerns that were highlighted in our original reviews. The pairwise t-tests of prototype vs. exemplar fits in the ROIs have clarified the results while still providing good support for the authors' conclusions. The relationship between prototype and exemplar models is much clearer and is thoughtfully discussed, and the added text in the Introduction and Discussion has nicely framed the paper and motivated the analyses. In general, the research reads as much more impactful and the current work much stronger.

Although the paper is much improved, one concern still remains regarding the correlation between exemplar and prototype regressors. To quote reviewer #2 specifically:

"I'm convinced that mathematically, the regressors are not highly correlated and the explanation in the authors' response regarding the most distant items is helpful in getting an intuition of how this can be. At the same time, I'm still trying to understand how the correlation between regressors is on average only r=.3 when the behavioral fits themselves are so highly correlated across participants. If there is a way to clarify this, I think that would go a long way in helping readers get on board with the idea that both exemplar and prototype representations can exist in the brain.”

That is correct – high across-subject correlation does not preclude detection of within-subject differences. Intuitively, one can easily tell, for each student, whether they scored better on Midterm 1 or Midterm 2, and whether one of the midterms was harder than the other on average, even though scores on the two midterms would be highly correlated across students. We added an explicit note to clarify that within-subject fit differences are detectable and meaningful even though the fits are highly correlated across subjects.

“Thus, although the model fits tend to be correlated across-subjects, the within-subject advantage for one model over the other is still detectable and meaningful.”
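To make the midterm analogy concrete, here is a toy simulation (hypothetical numbers, not our data) in which a shared subject-level factor drives both model fits, producing a high across-subject correlation, while a small but consistent within-subject advantage for one model remains detectable with a paired test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj = 30                                    # hypothetical sample size

# a shared "subject quality" factor drives both fits -> high across-subject correlation
quality = rng.normal(0, 1, n_subj)
fit_prototype = quality + 0.4 + rng.normal(0, 0.3, n_subj)   # small, consistent advantage
fit_exemplar  = quality       + rng.normal(0, 0.3, n_subj)

r, _ = stats.pearsonr(fit_prototype, fit_exemplar)    # strong across-subject correlation
t, p = stats.ttest_rel(fit_prototype, fit_exemplar)   # yet the paired difference is reliable
print(f"across-subject r = {r:.2f}, paired t = {t:.2f}, p = {p:.4f}")
```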

We have also expanded the stimulus set description to incorporate the example of the most distant items that the reviewer found helpful for understanding the model dissociations:

“The stimulus structure enabled dissociable behavioral predictions from the two models. While stimuli near the prototypes also tend to be near old exemplars, the correlation is imperfect. For example, when attention is equally distributed across features, the prototype model would make the same response probability prediction for all distance 3 items. However, some of those distance 3 items were near an old exemplar while others were farther from all old exemplars, creating distinct exemplar model predictions. Because we varied the test stimuli to include all distances from the prototypes, and because within each distance to the prototype there was variability in how far the stimuli are from the old exemplars, the structure was set up to facilitate dissociation between the model predictions.”

What may be less intuitive is that the within-subject correlations between the neural model predictions (summed similarity across categories, with an average absolute r = .3 in our data) can be relatively small even when the behavioral predictions (predicted response probabilities) within that subject end up similar. An extreme example was observed in Mack et al., 2013, where the behavioral predictions for each of the 16 stimuli in their category structure (9 training + 7 transfer) were nearly indistinguishable for the two models and the behavior of all but one subject was fit similarly well by both models. Yet, the neural predictors were largely uncorrelated and the neural model fits were reliably more consistent with the exemplar model than the prototype model in the majority of subjects. As Mack et al. noted, the two models might predict identical behavioral responses on any given trial, but the latent representations that support those decisions differ, which helps to tease the models apart.
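A toy numeric example (made-up similarity values, following the construction described above in which the neural predictor tracks summed similarity and the behavioral prediction is the response-probability ratio) shows how the two can dissociate:

```python
# Two hypothetical test items with identical behavioral predictions
# but very different neural predictors (illustrative numbers only).
items = {"item_X": (4.0, 1.0),    # (summed similarity to A, summed similarity to B)
         "item_Y": (0.8, 0.2)}

for name, (s_a, s_b) in items.items():
    p_a = s_a / (s_a + s_b)        # behavioral prediction: P("A") = 0.80 for both items
    neural = s_a + s_b             # neural predictor: 5.0 vs. 1.0
    print(f"{name}: P(A) = {p_a:.2f}, summed-similarity predictor = {neural:.1f}")
```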

Our category structure was constructed to facilitate distinct behavioral predictions from the two models, so the difference between dissociating behavioral model predictions and dissociating neural model predictions is less stark in our data. However, to help with an intuitive understanding of the neural regressors, we added a conceptual description of the model-based fMRI analysis to the Results section, including an explanation of how the behavioral vs. neural predictors are constructed. We also explicitly noted that the representational match to the underlying representations (the neural model predictions) may dissociate the models better than the predicted response probabilities (the behavioral model predictions).

“The behavioral model fitting described above maximizes the correspondence between response probabilities generated by the two models and the actual participants’ patterns of responses. […] Furthermore, the neural model fits can help detect evidence of both kinds of representations, even if one dominates the behavior.”

Incorporating some of the authors' explanation from the response letter could be useful for this. In particular, it was nice that they explicitly laid out how the advantage for old items over new ones would result in a better fit to an exemplar model but still a decent fit to a prototype model, because accuracy for old/new items at distance 2 falls between accuracy for items at distances of 1 and 3. Their response to Comment 7 seems like it could be informative to include too – although here they talk about the models' differences in terms of confidence, which isn't explained in the manuscript.

Thank you for pointing out that some of the response could be beneficial to include in the paper itself. In the revision, we included the example that the reviewer found helpful (old>new advantage in the context of the overall typicality gradient) as a piece of evidence that both representations may play a role, even when one is more dominant in the behavior:

“To summarize, we observed a reliable typicality gradient where accuracy decreased with the distance from the prototypes and both old and new items at distance 2 numerically fell between distance 1 and distance 3 items (Figure 3A). However, within distance 2 items, we also observed a reliable advantage for the old items compared to new items, an aspect of the data that would not be predicted by the prototype model.”

Some of the points from Response 7 are also incorporated in the new high-level model-based fMRI overview and reprinted above.

We hope that these revisions have improved the conceptual understanding of the models and clarified how they can help detect both kinds of representations in the brain.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

    1. Bowman CR, Iwashita T, Zeithamova D. 2020. Model-based fMRI reveals co-existing specific and generalized concept representations. OpenNeuro. ds002813

    Supplementary Materials

    Figure 3—source data 1. Behavioral accuracy - interim tests.
    Figure 3—source data 2. Behavioral accuracy - final test.
    Figure 4—source data 1. Behavioral model fits - all phases.
    Figure 5—source data 1. Neural model fits - interim tests.
    Figure 5—source data 2. Neural model fits - final test.
    Transparent reporting form

    Data Availability Statement

    Raw MRI data have been deposited at openneuro.org/datasets/ds002813. Source data have been provided for Figures 3-5.

    The following dataset was generated:

    Bowman CR, Iwashita T, Zeithamova D. 2020. Model-based fMRI reveals co-existing specific and generalized concept representations. OpenNeuro. ds002813

