2025 Dec 1;37(12):2589–2615. doi: 10.1162/JOCN.a.60

Semantic Dimensions Support the Cortical Representation of Object Memorability

Matthew Slayton 1, Cortney M Howard 1, Shenyang Huang 1, Mariam Hovhannisyan 2, Roberto Cabeza 1, Simon W Davis 1
PMCID: PMC12371987  NIHMSID: NIHMS2102493  PMID: 40523805

Abstract

Recent work in vision sciences contends that objects carry an intrinsic property called memorability that describes the likelihood that an object can be successfully encoded and later retrieved from memory. It has been shown that object memorability is supported by semantic information, but the neural correlates of this relationship are largely unexplored. The present study explores these premises and asks whether neural correlates of object memorability can be accounted for by semantic dimensions. We combine three data sets: (1) feature norms for a database of ∼1000 natural object images, (2) normative conceptual and perceptual memory data for those objects, and (3) neuroimaging data from an fMRI study collected using a subset (n = 360) of those objects. We found that object-wise memorability elicits consistent brain activation across participants in key mnemonic regions, including the hippocampus and rhinal cortex, and that the variance in this neural activity is mediated by the semantic factors describing these images. We propose that the features of memorable images may facilitate memory formation by engaging encoding processes more deeply.

INTRODUCTION

Studies of the memory for object images have found that certain images are remembered better than others in a manner that holds across participants (Bainbridge et al., 2019; Bylinskii, Isola, Bainbridge, Torralba, & Oliva, 2015; Isola, Xiao, Parikh, Torralba, & Oliva, 2014; Isola, Parikh, Torralba, & Oliva, 2011), as well as across stimulus categories (Bainbridge, Isola, & Oliva, 2013), data sets (Khosla, Raju, Torralba, & Oliva, 2015), and testing conditions (Wakeland-Hart, Cao, deBettencourt, Bainbridge, & Rosenberg, 2022). It has been proposed that such image memorability is so consistent across contexts because it constitutes an intrinsic attribute of visual stimuli (Bainbridge, 2022). Other groups contend that memorability is the result of several lower-level properties of images, such as the relative distinctiveness of their features (Hovhannisyan et al., 2021). Whether memorability is decomposable into further properties or not, the reliability of this measure is an ongoing focus of investigation (Needell & Bainbridge, 2022), and fundamental questions remain concerning the means by which the human brain supports such memory biases toward certain object stimuli. Indeed, recent work has suggested that the visual and semantic features that constitute image stimuli (e.g., flamingo: “is pink,” “has wings”) can explain a large proportion of object-level variability in memorability, pointing to a possible mechanistic basis for the phenomenon (Deng, Beck, & Federmeier, 2024; Kramer, Hebart, Baker, & Bainbridge, 2023; Bainbridge, 2022; Hovhannisyan et al., 2021; Dubey, Peterson, Khosla, Yang, & Ghanem, 2015). Given the growing appreciation for the role that complex semantic representations play in memory encoding (Frisby, Halai, Cox, Ralph, & Rogers, 2023; Liu, Hou, Koen, & Rugg, 2022; Davis et al., 2021), semantic features are a leading candidate to explain cortical activation biases toward specific object stimuli during object encoding. 
The present study therefore seeks to identify how such memorability-related cortical activity can be explained by the constituent semantic features of real-world objects.

The neural basis of the encoding of visual and semantic features of images spans a rigorously explored hierarchy from early visual cortex, through the ventral stream, toward anterior temporal and frontal cortices (Tyler et al., 2013; Peelen & Caramazza, 2012; Bird & Burgess, 2008). Conceptual features are encoded in multiple regions along the ventral stream that serve numerous functional roles, such as object representation and feature differentiation in perirhinal cortex (Clarke & Tyler, 2014; Tyler et al., 2013) and contextual information in parahippocampal cortex (Eichenbaum, Yonelinas, & Ranganath, 2007). Perirhinal cortex and parahippocampal cortex project to entorhinal cortex, which in turn maintains reciprocal projections with the hippocampus, where "what" and "where" information converge to support memory encoding (Jin & Maren, 2015). Previous reports based on the current study's data set have found a relationship between object category and memory in parahippocampal regions, retrosplenial cortex, and fusiform gyrus (Yu et al., 2025), as well as memory phase-related effects in the hippocampus (Howard, Huang, Hovhannisyan, Cabeza, & Davis, 2024). Given their roles in resolving object identity, the hippocampus and rhinal cortex may be critical for consolidating object-specific visual representations into lasting representations in memory, and further work is needed to understand the relationship between activity in these regions, memory performance, and stimulus memorability.

The development of feature-based models of stimulus representation has engendered a deeper understanding of how structures like perirhinal cortex and hippocampus index distributed cortical representations and how changes in these representations influence their subsequent recollection. The mnemonic benefit of processing the constituent perceptual and conceptual features of a to-be-remembered stimulus is one of the most established findings in the study of episodic memory (Hargreaves, Pexman, Johnson, & Zdrazilova, 2012; Seamon & Murray, 1976; Craik & Lockhart, 1972). Modern techniques for outlining the granularity of these visual and semantic features offer new possibilities for establishing why semantic elaboration benefits subsequent memory and how brain regions coding for those elaborative dimensions may support deeper encoding (Walsh & Rissman, 2023). Hippocampal activity modulates the cortical representation of visual information in occipital regions and semantic information in frontoparietal regions in a transfer-appropriate manner that supports successful perceptual and conceptual retrieval (Huang et al., 2024). Nonetheless, although relational properties such as image similarity have been shown to be represented in hippocampus (Morton, Zippi, Noh, & Preston, 2021), the processing of features specifically known to predict memorability is not well understood. Previous work has observed memorability effects in the medial temporal lobe (MTL; including parahippocampal cortex) and pFC (Bainbridge, Dilks, & Oliva, 2017). Using memorability-based model representational dissimilarity matrices, the same group found memorability representations for both scenes and faces in MTL, ventral stream, and lateral pFC. They argue that the interaction between ventral stream regions and MTL in the processing of various image features (such as distinctiveness) is an important part of the encoding of differentially memorable images (Bainbridge et al., 2017).
Similar evidence for memorability-related representations has been found in other studies, including both ventral visual stream and MTL, as well as frontal regions, although neural pattern similarity related to stimulus memorability and individual memory performance did not overlap completely (Bainbridge & Rissman, 2018). Evidence further suggests that memorability may be a rapid, perceptual phenomenon whereby memorable images are processed differently from less memorable ones (Bainbridge, 2020; Xie, Bainbridge, Inati, Baker, & Zaghloul, 2020), and studies of computational models of vision suggest that this phenomenon of privileged processing occurs in both humans and monkeys (Jaegle et al., 2019). Building on this work, we aim to develop a deeper understanding of what properties drive the mnemonic processing of visually presented objects and how these processes are modulated by object memorability.

The growing understanding of what processes underlie semantic representations has led to an emerging consensus regarding the role of conceptual feature encoding and memorability. Researchers use two main classes of methods to model features and derive semantic dimensions that describe object features and the relationships between different objects: computing semantic similarity as the correlation of feature vectors derived from human ratings of images (Davis et al., 2021; Hovhannisyan et al., 2021; Devereux, Clarke, Marouchos, & Tyler, 2013) or inferring semantic similarity from a computational model based on human similarity judgments (Contier, Baker, & Hebart, 2023; Kramer et al., 2023; Bainbridge, 2022; Kramer, Hebart, Baker, & Bainbridge, 2022; Hebart, Zheng, Pereira, & Baker, 2020). Hovhannisyan and colleagues showed that complex semantic features (e.g., the distinctiveness of features for a given image) are generally much better predictors of image memorability (i.e., both hit rates and false alarm rates) than either simple semantics (e.g., image frequency) or the visual properties of the image (Hovhannisyan et al., 2021). It can be inferred, therefore, that semantic properties carry a greater importance for memorability than do perceptual properties in such a paradigm. Similarly, Kramer and colleagues demonstrated that computationally derived semantic features are stronger predictors of memorability than a general estimate of object typicality, and that such data-driven semantic features captured 88% of the variance in memorability captured by the feature space (Kramer et al., 2023). An important theoretical point throughout such work concerns whether memorability is an intrinsic property of images on its own or a composite of other image properties. 
Although we support the latter position that memorability is the result of the way particular image features are processed, we argue that both approaches support the idea that the brain's propensity to modulate the processing of object images according to semantic properties is key to understanding the cortical representation of image memorability.

The current study investigates to what extent object-wise memorability can predict hippocampal activity during object encoding and to what extent this activity is mediated by object semantics. We used a previously published object encoding study (Davis et al., 2021), as well as feature norms and memorability norms (Hovhannisyan et al., 2021), to build a model of image memorability in the brain. We further compared the relative roles of human-derived image memorability ratings and semantic dimensions based on stimulus feature norms in predicting trial-wise brain activity. We expected that object-wise memorability would elicit consistent brain activation in key mnemonic regions (e.g., hippocampus) and that a substantial portion of the variance in this brain activity could be explained by the semantic factors underlying the images.

METHODS

Overview

The present study examined extant data sets from three related experiments, which have been analyzed in previous behavioral work (Hovhannisyan et al., 2021) and work on the neural basis of object memory (Huang et al., 2024; Davis et al., 2021). The first experiment collected a feature-norming database based on online descriptions of a set of object images, the second conducted an online experiment testing conceptual memory for the same objects, and a third neuroimaging experiment used objects from this database to investigate the role of cortical representations in mediating conceptual memory. While the full description of these data can be found in the original publications (Davis et al., 2021; Hovhannisyan et al., 2021), here, we describe details of study design, acquisition, and analysis sufficient to understand the current study.

Stimuli

The total set of stimuli includes 995 object images drawn from a variety of object categories, including mammals, birds, fruits, reptiles, sea animals, vegetables, tools, clothing items, foods, musical instruments, vehicles, furniture items, buildings, and other objects found within specific cultural or functional contexts. The 12 object categories were balanced for frequency based on the Corpus of Contemporary American English (Davies, 2008). The full database of objects and features can be accessed at: https://mariamh.shinyapps.io/dinolabobjects/.

Experiment 1: Feature Norming

Data Collection

Five hundred sixty-six participants were recruited via Amazon Mechanical Turk (AMT) (all with greater than or equal to 95% approval) for the initial feature-norming experiment. On the basis of self-identification, this population comprised 347 women and 219 men, mean age = 34.6 years (range = 19–75 years), all self-reported native speakers of American English. The sample size is in line with previous work, such as from Devereux and colleagues, who collected norms for 638 concepts from 123 participants (Devereux, Tyler, Geertzen, & Randall, 2014), or McRae and colleagues, who collected norms for 541 concepts from 725 participants (McRae, Cree, Seidenberg, & McNorgan, 2005). The images were selected to represent a wide range of everyday object categories and were presented on a white background to isolate semantic and visual properties without interference from an image background. Such a choice facilitates the sort of analysis we wanted to do to understand granular perceptual and semantic item-wise statistics, but does not capture naturalistic scenes in the manner of a database such as THINGSplus (Stoinski, Perkuhn, & Hebart, 2024).

Participants completed between one and five sessions, each of which lasted 1 hr and included 40 presented images. Participants were paid $3.00 per hour for their participation in the property norming study. Informed consent was obtained from all participants under a protocol approved by the Duke Medical School institutional review board. All procedures and analyses were performed in accordance with institutional review board guidelines and regulations for experimental testing.

Participants were shown an object (e.g., a porcupine) and were given a space to add five unique features, similar to previous feature-norming paradigms (Devereux et al., 2014; McRae et al., 2005). Participants were given a sentence (e.g., “Porcupine is/has _____”) and would fill in a feature, such as “an animal” or “quills.” Stimuli were randomized across participants. No two images from the same category appeared consecutively, and each image was presented to at least 20 participants.

Features were then processed following the manual procedures used by McRae and colleagues (McRae et al., 2005) and Devereux and colleagues (Devereux et al., 2014), including: (1) removal of adverbs; (2) feature splitting, for example, a feature such as “has a round face” was rewritten as “has a round face” and “has a face”; (3) combining close synonyms, for example, replacing “groups, packs, and flocks” with “groups”; (4) correction of spelling mistakes; (5) morphological mapping, for example, combining “is used in cooking” and “is used by cooks” into “is used in cooking”; (6) removal of plural forms; and (7) removal of features not present in at least two objects. Afterward, a feature × concept production frequency matrix was created to describe the normalized frequency of a given feature for a given concept. Each feature was further assigned a feature category label first used by McRae and colleagues (2005), for example, encyclopedic (e.g., “lives in the US,” n = 1930 features) and visual (e.g., “is red,” n = 1886 features). These feature categories help to classify features, and there is high interrater reliability (intraclass coefficient > 0.8).
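As an illustration, the counting and thresholding logic behind the production frequency matrix can be sketched in a few lines; the responses, feature strings, and normalization choice below are hypothetical and simplified relative to the full manual procedure.

```python
from collections import Counter

# Hypothetical cleaned (concept, feature) responses after steps 1-6.
responses = [
    ("porcupine", "has quills"), ("porcupine", "is an animal"),
    ("porcupine", "has quills"), ("flamingo", "is pink"),
    ("flamingo", "is an animal"), ("flamingo", "has wings"),
]

# Production frequency: how often each feature was given for each concept.
counts = Counter(responses)

# Step 7: drop features not produced for at least two objects.
concepts_per_feature = Counter(feat for _, feat in set(responses))
kept = {pair: n for pair, n in counts.items()
        if concepts_per_feature[pair[1]] >= 2}

# Concept x feature matrix, normalized by responses per concept
# (one simple normalization; the study's exact scheme may differ).
concepts = sorted({c for c, _ in kept})
features = sorted({f for _, f in kept})
totals = Counter(c for c, _ in responses)
matrix = [[kept.get((c, f), 0) / totals[c] for f in features]
          for c in concepts]
```

In this toy example, only "is an animal" survives the two-object threshold, so the resulting matrix has a single feature column.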

Both human-derived and text corpus–derived methods are used for modeling and experimental investigations in human cognitive neuroscience, and consensus on which method is best for the study of semantics and human memory is still evolving. Feature norm approaches have a nearly 20-year history in the field and have been used by several groups (Borovsky, Peters, Cox, & McRae, 2024; Devereux et al., 2014; Vinson & Vigliocco, 2008; McRae et al., 2005; Van Overschelde, Rawson, & Dunlosky, 2004). We chose a method with a history of use in cognitive neuroscience to facilitate comparison with previous results and maximize reliability.

Experiment 2: Online Memory Testing

A separate group of participants (AMT, > 95% approval rating, all self-reported native speakers of English) participated in the conceptual (n = 200) recognition memory task (Figure 1). After excluding responses from seven users due to a computer error, the conceptual memory task included 193 users (108 women and 85 men, 19–87 years of age, mean = 39.7 years) and the perceptual memory task included 263 users (137 women and 126 men, 18–76 years of age, mean = 37.1 years). Such recognition memory norms typically include a few hundred participants, ranging from early studies of recognition memory, such as Light and colleagues, who collected data from 219 participants (Light, Kayra-Stuart, & Hollander, 1979), to more recent clinically oriented studies such as that from Barvas and colleagues, whose normative data set included 280 participants (Barvas, Mattavelli, Meli, Guttmann, & Papagno, 2022), or Goodwill and colleagues, who included 368 participants in their baseline study and 291 in the follow-up (Goodwill et al., 2019). Normative conceptual and perceptual memorability values used throughout this analysis come from the conceptual memory and perceptual memorability tasks, respectively. The subsequent analysis focuses on conceptual memorability, although perceptual memorability results are included in the Appendix.

Figure 1. 

Memorability task. Participants viewed a series of images and rated whether the object was living or nonliving. After a 24-hr delay, participants completed the conceptual recognition memory task, determining whether a given image or word was seen on the previous day (i.e., old) or was not (i.e., new).

This study used the same 995 objects used in the feature norming study. The AMT workers completed the encoding task on Day 1, determining if an object was living or nonliving. Each image was presented for 2 sec with a 1 sec intertrial interval. On Day 2 (±4 hr; mean lag between encoding and retrieval = 29.34 hr), participants completed a two-alternative forced-choice oldness decision, indicating whether an object was seen on Day 1 (i.e., "old") or not (i.e., "new"). One hundred sixty-eight old and 168 new stimuli were presented, balanced across object categories. Workers were paid $0.50 for completing the Day 1 encoding session (mean time to finish was 16.07 min), and they were paid $4.50 for completing the Day 2 retrieval session (mean time to finish the tasks was 23.5 min). After data collection, mean hit rates and false alarm rates for each object were calculated based on the percentage of correct responses across participants to old or new trials, giving object-wise average memorability ratings. Corrected recognition rates (hit rate – false alarm rate) were calculated for each object. Normative memorability values used throughout this analysis come from the conceptual memory and perceptual memorability tasks, and thus all references to conceptual or perceptual memorability refer to the normative memorability in Experiment 2.
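The object-wise memorability norms described above reduce to a simple aggregate per object; the objects and response counts in this sketch are invented for illustration.

```python
# Per-object pooled responses: (old trials, hits, new trials, false alarms).
responses = {
    "flamingo":  (20, 18, 20, 4),
    "porcupine": (20, 11, 20, 6),
}

memorability = {}
for obj, (n_old, hits, n_new, fas) in responses.items():
    hit_rate = hits / n_old          # "old" responses to old trials
    fa_rate = fas / n_new            # "old" responses to new trials
    # Corrected recognition = hit rate - false alarm rate.
    memorability[obj] = hit_rate - fa_rate
```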

Experiment 3: Neuroimaging

Details for the neuroimaging experiment have been published previously (Huang et al., 2024; Howard, Huang, Hovhannisyan, Cabeza, & Davis, 2024; Davis et al., 2021). In brief, 26 participants were recruited (native English speakers, 14 women; age mean = 20.4 years, SD = 2.4 years, range = 18–26 years) in accordance with a protocol approved by the Duke University Health System institutional review board. Previous studies of recognition memory include a range of sample sizes, including 23 participants in an EEG study (Schneider, Coll, Schnider, & Ptak, 2024), 16 participants for a study using univariate fMRI to study visual memory (Bainbridge & Rissman, 2018), and 25 participants for a study of verbal memory (Frithsen & Miller, 2014). There is some discussion in the literature that calls for larger sample sizes in fMRI studies of recognition memory (Grady, Rieck, Nichol, Rodrigue, & Kennedy, 2021; Turner, Paul, Miller, & Barbey, 2018), although our sample size remains in line with the current standard.

Out of the original group of participants, four participants were excluded because of poor performance related to tiredness on Day 1 of the experiment, one experienced a fainting episode during the MRI session on Day 2, and two were removed due to excessive motion, leaving 19 total participants in the final analysis. During the study, each object was presented alone on a white background in the center of the screen. Participants performed memory tasks across two separate days: Day 1 for encoding and Day 2 for retrieval. On Day 1, they viewed object images and were asked to covertly name the image. They also pressed a button to indicate that the image matched a single letter probe shown immediately before (e.g., a letter “f” followed by a picture of a flamingo). For each presented stimulus, participants viewed a fixation cross for 500 msec, a single-letter probe for 250 msec, the object image (or word) for 500 msec, followed by 2–7 sec of jitter. On Day 2, participants completed a conceptual memory test in the scanner where they were presented with word labels for old objects that were previously seen (n = 300) and new objects that were not previously seen (n = 100). Participants rated their confidence (1 = “definitely new,” 2 = “probably new,” 3 = “probably old,” and 4 = “definitely old”) for whether they had seen the object the word referred to on Day 1 (Figure 2).

Figure 2. 

Memory task for Experiment 3. Participants viewed images during an MRI scan and covertly identified that image, such as bowtie pasta or flamingo. On the second day, participants were scanned while viewing words in a recognition memory task. If the image was seen on Day 1, they would rate 3 or 4 for probably or definitely old. If the image was not seen on Day 1, they would rate 1 or 2 for definitely or probably new.

MRI Acquisition

Scans were performed on a GE MR 750 3 T scanner. Coplanar functional images were acquired using an inverse spiral sequence: 37 axial slices, 64 × 64 matrix, in-plane resolution 4 × 4 mm2, 3.8-mm slice thickness, flip angle = 77°, repetition time = 2000 msec, echo time = 31 msec, field of view = 24 cm. Anatomical images were acquired using a 3-D T1-weighted echo-planar sequence (68 slices, 256 × 256 matrix, in-plane resolution 2 × 2 mm2, 1.9-mm slice thickness, repetition time = 12 msec, echo time = 5 msec, field of view = 24 cm). Encoding was completed in two separate runs, and retrieval was completed in four separate runs.

Data preprocessing was performed using SPM12 and custom MATLAB (The MathWorks) scripts. Functional images were realigned to the first image of the first run using rigid-body transformation with six motion parameters. Functional images were then corrected for slice acquisition time (reference slice = first slice) and linear signal drift, temporally smoothed using a high-pass filter of 190 sec, segmented into different tissue types (gray matter, white matter, and cerebrospinal fluid), and normalized to the MNI152 standard space. Automatic independent component analysis–based denoising was applied to remove artifacts due to motion or susceptibility, and the effects of head motion, button presses, white matter signals, and cerebrospinal fluid signals were included as first-level covariates of no interest. We used least-squares separate, a technique designed for signal estimation in event-related designs (Mumford, Davis, & Poldrack, 2014; Mumford, Turner, Ashby, & Poldrack, 2012; Rissman, Gazzaley, & D'Esposito, 2004). For each gray matter voxel, the activity estimate on each encoding trial was obtained by constructing a first-level general linear model with one regressor for that trial and all other trials collapsed into a second regressor. This process was repeated for each trial, resulting in one beta estimate per trial.
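A minimal sketch of the least-squares separate idea follows, assuming the trial regressors have already been convolved with a hemodynamic response function; the real analysis used SPM12 design matrices with additional nuisance covariates, so this is a conceptual stand-in rather than the study's code.

```python
import numpy as np

def lss_betas(y, trial_regressors):
    """For each trial, fit a GLM with one regressor for that trial and a
    second regressor collapsing all other trials; keep the trial beta."""
    y = np.asarray(y, dtype=float)
    X_all = np.asarray(trial_regressors, dtype=float)
    betas = []
    for i in range(X_all.shape[0]):
        others = X_all.sum(axis=0) - X_all[i]
        # Design: [current trial, all other trials, intercept].
        X = np.column_stack([X_all[i], others, np.ones_like(y)])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas.append(b[0])
    return np.array(betas)
```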

Neuroimaging Analysis

For each participant, beta images from each single-trial model were used to estimate activity associated with each object, by averaging all voxel values within a given ROI using the Brainnetome atlas (Fan et al., 2016). Each ROI was selected to understand the relationship between object memorability, image semantics, and hippocampal activity while viewing images. In particular, each ROI was chosen based on its role in the effective encoding and consolidation of episodic memories. Hippocampus supports encoding, consolidation, and storage of episodic memories (among other forms of memory; Davis et al., 2021; Hainmueller & Bartos, 2020), and we selected the left and right hippocampi. Subsequent analyses extended our focus to additional temporal brain regions that have been demonstrated to support aspects of the encoding of visually presented objects. Rhinal cortex includes perirhinal cortex, which plays a role in representing object semantic information (Clarke & Tyler, 2014; Tyler et al., 2013; Staresina & Davachi, 2008) and entorhinal cortex, which mediates between the hippocampus and cortex, as well as other parahippocampal areas, during memory processes (Roesler & McGaugh, 2022; Hales et al., 2014). Parahippocampal cortex supports the processing of spatial and contextual information (Ritchey, Wang, Yonelinas, & Ranganath, 2019; Eichenbaum & Lipton, 2008) and object recognition, as does fusiform gyrus (Bainbridge & Rissman, 2018; Staresina & Davachi, 2006). Retrosplenial cortex supports episodic and spatial memory, integrating information from cortical and sensory areas that are then processed by hippocampus and associated regions (Alexander, Place, Starrett, Chrastil, & Nitz, 2023; Milczarek & Vann, 2020; Shallice et al., 1994).

We used unilateral ROIs for all five regions (10 ROIs in total). In the 10-ROI analysis, we applied the Benjamini–Hochberg False Discovery Rate (FDR) procedure to p values and confidence intervals (CIs) using α = .05 (Benjamini & Hochberg, 1995), as it is a commonly used method in neuroimaging (Bennett, Wolford, & Miller, 2009) that provides better statistical power while controlling for multiple comparisons across ROIs. In this procedure, the p values are ranked from smallest to largest, and each p value is compared with its critical value (its rank divided by the number of tests, multiplied by α = .05). Each p value is then adjusted by multiplying it by the number of tests divided by its rank. In the case of investigating the total effect of memorability on brain activity, we chose the Bonferroni correction, which is more appropriate for a small number of comparisons. (Two ROIs showed a significant effect for the subsequently remembered trials, Table 1, and one ROI was significant for all experimental trials; Appendix Table A5.) For each participant, the trial-level activation level was computed as the average of activity estimates (betas) for a given object stimulus on a single trial across voxels within an ROI mask using a general linear model. Brain images were visualized using the FSLeyes toolbox (fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSLeyes). Subsequent analyses were restricted to regions with a significant memorability effect.
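The ranking-and-adjustment procedure described above can be written compactly; this sketch adds the standard cumulative-minimum step (implicit in most implementations) that keeps adjusted p values monotone.

```python
def bh_adjust(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: rank p values, adjust each by
    (number of tests / rank), and flag those passing alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = min(running_min, 1.0)
    rejected = [adj <= alpha for adj in adjusted]
    return adjusted, rejected
```

For example, with the hypothetical input `[0.01, 0.04, 0.03, 0.5]`, only the smallest p value survives at α = .05.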

Table 1.

Effects of Memorability on Cortical Activity

Region Beta SE df t Value p Value
Hippocampus R 0.38 0.18 240 2.18 .0302
Rhinal cortex R 0.32 0.15 240 2.07 .0397
Table A5.

Effects of Memorability on Cortical Activity, All Experimental Trials

Region Beta SE df t Value p Value
Hippocampus R 0.33 0.15 240 2.16 .0303

Nonnegative Matrix Factorization

A principal outcome of Experiment 1 was a concept × feature frequency matrix describing 5520 unique features for each of 995 real-world objects. Each individual cell indicates the frequency with which a given feature was provided for a given object. Two individual feature matrices were used in the analysis based on McRae feature norms: encyclopedic and visual (McRae et al., 2005). Additional feature categories, such as taxonomic features, were not isolated for further analysis, although they remained present in the original feature matrix.

To determine a robust set of semantic factors from the feature norms we obtained in Experiment 1, we used nonnegative matrix factorization (NMF; Paatero & Tapper, 1994). NMF takes an input matrix V and factorizes it into two matrices, W and H, that, when multiplied, give a lower-rank approximation of the original matrix. NMF is preferable to other commonly used dimensionality reduction techniques, such as principal components analysis and factor analysis, when a data set is sparse and linearity cannot be assumed (Paatero & Tapper, 1994). Furthermore, nonnegativity tends to produce more interpretable dimensions when compared with other dimensionality reduction methods that do not use such a constraint (Roads & Love, 2024). Although many object features are shared (e.g., "has legs," "is made of metal"), most features are unique to a small minority of objects, and as such, the object × feature matrix is quite sparse (5,450,054 zero elements out of 5,492,400 elements, for a sparsity ratio of 0.9923). NMF is especially appropriate for such sparse matrices because it derives matrices W (feature weights) and H (coefficients), both nonnegative factors, which together constitute a lower-rank approximation of a given matrix (see Figure 3).
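For illustration, the V ≈ W × H factorization can be sketched with multiplicative updates on a toy matrix; the study itself used MATLAB's nnmf with alternating least squares, so this Python version is a stand-in, not the analysis code.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Toy NMF via Lee-Seung multiplicative updates: find nonnegative
    W (rows x rank) and H (rank x columns) with V approximately W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-4
    H = rng.random((rank, m)) + 1e-4
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)
    return W, H
```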

Figure 3. 

NMF of feature norms. Conceptual description of the NMF procedure applied to the feature norm matrix. The V matrix consisted of object stimuli and the number of times a given feature was provided by participants in the feature norming study (Experiment 1). NMF decomposes this matrix into a feature matrix W and coefficient matrix H, which, when recombined, give an approximation of the original V matrix.

Applying NMF

After completing the thresholding procedure (see Appendix Figure A1), we applied a standard implementation of NMF from MATLAB using the nnmf function. We chose the standard alternating least squares algorithm (Paatero, 1997). The use of NMF in cognitive neuroscience has shown mixed utility. Although the method has shown success for structural MRI (Patel et al., 2020; Sotiras, Resnick, & Davatzikos, 2015), it has been less effective for fMRI (Xie, Douglas, Wu, Brody, & Anderson, 2017). NMF is more widely used for psychology and computing applications involving language (Hassani, Iranmanesh, & Mansouri, 2021; Mangin, Filliat, & Oudeyer, 2015) and emotion (Calvo & Mac Kim, 2013), suggesting that it is an appropriate choice for modeling object features and semantics. We applied several methods for determining the optimal number of factors to extract from the W matrix, identifying 20 to use in subsequent analyses (see Appendix Figure A2). It is important to note that although NMF does not rank factors in order of explanatory importance, we discovered that approximately the first five factors exhibited greater explanatory power. We assessed this by reconstructing the original V matrix from the component W and H matrices, excluding one factor at a time.

Figure A1. 

Knee Point Detection. One common approach to determining the optimal rank of NMF is to investigate the residual error between the original and reconstructed matrices as factors are serially added. The knee point is the point at which the gains in model fit become smaller for each new factor added.
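One simple way to operationalize the knee point is the distance-to-chord heuristic sketched below; this is an illustrative rule, not necessarily the exact detection method used in the study.

```python
def knee_point(errors):
    """Return the index of the error-curve point farthest (perpendicularly)
    from the straight line joining the first and last points."""
    n = len(errors)
    x1, y0, y1 = n - 1, errors[0], errors[-1]
    best, best_d = 0, float("-inf")
    for i, e in enumerate(errors):
        # Distance to the chord, up to a constant scale factor.
        d = abs((y1 - y0) * i - x1 * e + x1 * y0)
        if d > best_d:
            best, best_d = i, d
    return best
```

On a typical elbow-shaped residual curve (e.g., errors of 100, 50, 30, 25, 23, 22, 21 across increasing ranks), the heuristic selects the third point, where the curve flattens.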

Figure A2. 

Another strategy for determining NMF rank is to evaluate the performance of NMF fit to a training set by its reconstruction error on held-out test data. The data are split into training and testing sets by removing individual elements from the matrix (i.e., cross-validation) to ensure robustness and avoid bias.

Identifying Cortical Memorability Effects

To model the relationship between brain activity, normative memorability, and memory performance for the participants in the neuroimaging study, we first isolated trials in which participants successfully remembered a particular object. This restriction ensured reasonable confidence that participants were oriented and engaged in the task. We then calculated corrected recognition (hit rate – false alarm rate) for conceptual memory for each of the 19 participants in Experiment 3. Two distinct memory measures entered the model: first, the normative memorability of the stimuli, reflecting the overall performance of a separate group of participants in Experiment 2, and, second, the individual memory performance of the participants in Experiment 3. To control for differences in individual subject performance in Experiment 3, we first calculated an adjusted trial-level brain activity: trial-level brain activity was submitted to a linear mixed-effects model using the "lme4" package (Bates, Mächler, Bolker, & Walker, 2015) in R (Version 4.3.2), with fixed effects of conceptual corrected recognition and object ID and a random effect of subject. This model yields the predicted activity associated with each individual object while controlling for individual subject performance. A second model then tested the effect of normative memorability on this adjusted brain activity, identifying a subset of significant regions from our list of ROIs. Importantly, we included only subsequently remembered trials in this analysis because subsequently forgotten trials may reflect inattention to a particular image and, in turn, a lack of representation of that image.
Analyses were then repeated with all experimental trials (i.e., subsequently remembered and forgotten), yielding similar results (Appendix Tables A5, A6, and A7).
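The corrected recognition measure is straightforward to compute; as a toy stand-in for the lme4 adjustment, the sketch below simply removes each subject's mean activity (all data and dimensions here are synthetic; the study itself used a mixed-effects model with fixed effects of corrected recognition and object ID and a random effect of subject):

```python
import numpy as np

def corrected_recognition(hits, n_old, false_alarms, n_new):
    """Corrected recognition = hit rate - false-alarm rate."""
    return hits / n_old - false_alarms / n_new

# Hypothetical counts for one participant: 40/50 - 5/50 = 0.70
cr = corrected_recognition(hits=40, n_old=50, false_alarms=5, n_new=50)

# Crude stand-in for the subject adjustment: demean each subject's
# activity, then average across subjects to get object-wise activity
rng = np.random.default_rng(0)
activity = rng.normal(size=(19, 360))             # subjects x objects (synthetic)
adjusted = activity - activity.mean(axis=1, keepdims=True)
object_activity = adjusted.mean(axis=0)           # per-object activity estimate
```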

Table A6.

Mediation Results for Encyclopedic Factors, All Experimental Trials

Region Estimate SE z Value p Value Adj p Value t Value CI Lower CI Upper
Total indirect effect for five encyclopedic factors
Hippocampus R* 0.14 0.056 2.54 .011 .011 2.54 0.033 0.25
Total effect for five encyclopedic factors
Hippocampus R 0.38 0.15 2.16 .03 .03 2.16 0.031 0.63

Statistical significance was defined as *p < .05, **p ≤ .01, and ***p ≤ .001. Adj p value = adjusted p value after Bonferroni correction; Adj CI lower, Adj CI upper = same adjustment applied to confidence intervals; SE = standard error; R = right; L = left.

Table A7.

All ROI Mediation Analysis with Encyclopedic Factors, All Experimental Trials

Region Estimate SE z Value p Value Adj p Value t Value CI Lower CI Upper
Fusiform gyrus L 0.16 0.08 2.13 .033 .054 2.13 0.013 0.32
Fusiform gyrus R* 0.16 0.07 2.22 .026 .032 2.22 0.018 0.29
Hippocampus L* 0.15 0.06 2.51 .012 .032 2.51 0.033 0.27
Hippocampus R 0.14 0.06 2.54 .011 .069 2.54 0.033 0.25
Parahippocampal cortex L* 0.15 0.08 1.86 .062 .032 1.86 −0.008 0.3
Parahippocampal cortex R* 0.15 0.07 2.08 .037 .032 2.08 0.009 0.29
Rhinal L 0.15 0.06 2.65 .008 .053 2.64 0.039 0.26
Rhinal R 0.12 0.05 2.49 .013 .053 2.49 0.025 0.21
Retrosplenial cortex L 0.26 0.15 1.82 .069 .069 1.81 −0.021 0.55
Retrosplenial cortex R 0.18 0.1 1.85 .064 .069 1.85 −0.011 0.38

Statistical significance was defined as *p < .05, **p ≤ .01, and ***p ≤ .001. Results for the right hippocampal and right rhinal cortex ROIs are identical to those in Table 2, except for the adjusted p value. Adj p value = adjusted p value after applying FDR correction; Adj CI lower, Adj CI upper = same adjustment applied to confidence intervals; SE = standard error; R = right; L = left.

Mediation Analysis

To investigate whether semantic factors mediate the relationship between memorability and brain activity, we performed a mediation analysis using the lavaan package for latent variable analysis in R (Rosseel et al., 2024). A multilevel mediation model was run to investigate conceptual memorability, five semantic factors, and all ROIs found to have a significant total effect (i.e., a significant relationship between object memorability and trial-wise brain activity). Subsequent models were run to investigate the mediating influence of two factor types (encyclopedic and visual factors). (An additional set of models was run on visual factors and perceptual memorability; Appendix Tables A3 and A4.) The multilevel mediation model considered each mediator in parallel, such that the relationship between memorability and each semantic factor is a separate a path, the relationship between semantic factors and activity in each ROI is given by a separate b path, and the relationship between memorability and brain activity in each ROI is the c path (Figure 4). A significant a × b indirect effect indicated that semantic factors were significant mediators of the relationship between memorability and activity; the a × b effects were tested with bootstrap CIs. An additional benefit of multilevel mediation is that it affords control over multiple repeated factors (e.g., each stimulus). The variables at Level 1 included trial-wise activity in a given ROI for each participant, and the intercepts were added at Level 2, along with object-wise memorability and the object-wise loading on each semantic factor. In this way, we can model the estimated brain activity across participants for each stimulus and complete an object-wise mediation analysis. Mediation models were fit using the structural equation modeling function in R (Fox et al., 2022), after which Huber–White standard errors were calculated and parameter estimates were extracted.
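The a × b logic and the bootstrap test can be illustrated with a single-mediator, object-level simplification (synthetic data; the actual analysis used lavaan's multilevel model with five parallel mediators):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # stimuli

# Synthetic object-level data: memorability (X) -> semantic factor (M) -> activity (Y)
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(scale=0.8, size=n)
Y = 0.4 * M + 0.1 * X + rng.normal(scale=0.8, size=n)

def indirect_effect(X, M, Y):
    """a*b product: a from M ~ X, b is the partial slope of M in Y ~ M + X."""
    a = np.polyfit(X, M, 1)[0]
    D = np.column_stack([np.ones_like(X), M, X])
    beta = np.linalg.lstsq(D, Y, rcond=None)[0]
    return a * beta[1]

ab = indirect_effect(X, M, Y)  # true value here is 0.5 * 0.4 = 0.2

# Percentile bootstrap CI for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0 <= hi)  # CI excluding zero indicates mediation
```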

Table A3.

Mediation Analysis Results for Visual Memorability and Both Encyclopedic and Visual Factors, Hit Trials Only

Region Estimate SE z Value p Value Adj p Value t Value CI Lower CI Upper
Visual factors
Fusiform gyrus L 0.084 0.088 0.95 .34 .27 0.95 −0.09 0.25
Fusiform gyrus R 0.086 0.076 1.13 .26 .49 1.13 −0.062 0.23
Hippocampus L 0.14 0.067 2.12 .034 .63 2.11 0.01 0.27
Hippocampus R 0.11 0.06 1.89 .059 .76 1.89 −0.004 0.23
Parahippocampal cortex L 0.074 0.1 0.73 .46 .27 0.74 −0.12 0.27
Parahippocampal cortex R 0.03 0.1 0.31 .76 .49 0.31 −0.16 0.22
Rhinal L 0.06 0.06 1.01 .31 .27 1.004 −0.06 0.18
Rhinal R 0.028 0.05 0.57 .57 .49 0.57 −0.07 0.13
Retrosplenial cortex L 0.26 0.15 1.75 .08 .58 1.75 −0.032 0.56
Retrosplenial cortex R 0.15 0.1 1.29 .2 .49 1.29 −0.076 0.36
Encyclopedic factors
Fusiform gyrus L 0.11 0.06 1.80 .072 .092 1.80 −0.01 0.22
Fusiform gyrus R 0.10 0.05 1.95 .051 .092 1.95 0.00 0.21
Hippocampus L 0.11 0.05 2.19 .028 .095 2.19 0.01 0.20
Hippocampus R 0.12 0.05 2.48 .013 .107 2.48 0.03 0.21
Parahippocampal cortex L 0.10 0.06 1.66 .097 .092 1.66 −0.02 0.22
Parahippocampal cortex R 0.09 0.06 1.53 .127 .095 1.52 −0.02 0.20
Rhinal L 0.09 0.04 2.13 .033 .092 2.13 0.01 0.18
Rhinal R 0.07 0.04 1.92 .055 .127 1.92 0.00 0.14
Retrosplenial cortex L 0.21 0.11 1.96 .050 .092 1.96 0.00 0.42
Retrosplenial cortex R 0.13 0.07 1.77 .076 .092 1.77 −0.01 0.28
Table A4.

Total Indirect Effects for Visual Factors, All Experimental Trials

Region Estimate SE z Value p Value Adj p Value t Value CI Lower CI Upper
Fusiform gyrus L 0.58 0.073 0.79 .43 .33 0.79 −0.085 0.2
Fusiform gyrus R 0.05 0.065 0.77 .44 .33 0.77 −0.077 0.18
Hippocampus L 0.12 0.057 2.15 .032 .44 2.15 0.011 0.23
Hippocampus R 0.093 0.051 1.83 .068 .62 1.82 −0.007 0.19
Parahippocampal cortex L 0.054 0.092 0.58 .56 .55 0.58 −0.13 0.23
Parahippocampal cortex R 0.025 0.088 0.29 .77 .77 0.29 −0.15 0.2
Rhinal L 0.076 0.05 1.53 .12 .32 1.53 −0.021 0.17
Rhinal R 0.046 0.043 1.08 .28 .33 1.08 −0.037 0.13
Retrosplenial cortex L 0.21 0.14 1.51 .13 .55 1.51 −0.063 0.49
Retrosplenial cortex R 0.097 0.1 0.94 .35 .55 0.94 −0.1 0.3
Figure 4.

Multilevel mediation in one ROI. In this 2–2–1 model, the outcome variable is at Level 1 because it refers to the brain activity of individual participants, whereas the predictor and mediator are at Level 2 because these properties of the stimuli remain constant across participants. In this way, the mediation analysis takes place at Level 2, using the intercepts to capture the brain activity across participants for each stimulus. Such an object-wise model allows for the examination of how stimulus characteristics influence neural responses while accounting for both between-stimulus and between-subject variability. The stimuli presented at each trial determine the predictor (i.e., memorability) and mediators (i.e., the semantic factors). We estimate each path in the mediation model and can calculate the direct effect of the predictor on the outcome variable and the indirect effect through the mediators.

RESULTS

Semantic Dimensions

Our NMF procedure identified five factors capable of reconstructing the concept-feature frequency matrices for both encyclopedic and visual features. Each factor gave a unique pattern of loadings for different objects (W matrix) and features (H matrix; Figure 3). For the encyclopedic factors, Factor 1 was associated with the following features (sorted in descending order by their coefficients): “is useful,” “is helpful,” and “is portable” (Figure 5). The coefficients indicate how important that feature is for the factor. By focusing on the largest loading values, we can approximate an identity for that factor. In this case, utility is a defining feature of Factor 1. Each object is associated with a loading on each factor in the W matrix. The top objects for Factor 1 include bucket, trolley, and camel. Although these objects do not belong to one group based on a standard hierarchical categorization scheme, they clearly reflect the concept of utility in their respective contexts. Utility is an important feature of tools, as well as animals or other objects that serve some function. In contrast, Factor 2, which has top features that include “does fly,” “does eat fish,” and “does lay eggs,” more clearly recapitulates a traditional category reflecting the general property of animacy. Factor 3, characterized by top features like “is fun” and “is entertaining,” has top-loading objects such as swing set, saxophone, and pickle, indicating that the semantic attribute of “fun” can apply to toys, musical instruments, and food, among others. Therefore, each factor captures semantic features common to objects from a wide range of categories.
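Naming a factor by its top-loading features and objects amounts to sorting one row of H and one column of W; a minimal sketch (all labels and loadings below are hypothetical, not the study's actual values):

```python
import numpy as np

# Hypothetical labels and NMF outputs (W: objects x factors, H: factors x features)
objects = ["bucket", "trolley", "camel", "flamingo", "swing set", "saxophone"]
features = ["is useful", "is helpful", "is portable", "does fly", "is fun"]
rng = np.random.default_rng(0)
W = rng.random((len(objects), 3))
H = rng.random((3, len(features)))

def top_items(weights, labels, k=3):
    """Labels sorted by descending loading, used to characterize a factor."""
    order = np.argsort(weights)[::-1][:k]
    return [labels[i] for i in order]

# Defining features and top-loading objects for each factor
factor_features = {f: top_items(H[f], features) for f in range(H.shape[0])}
factor_objects = {f: top_items(W[:, f], objects) for f in range(W.shape[1])}
```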

Figure 5.

Semantic factor features. Top factors based on NMF for encyclopedic (A) and visual (B) features, with the top three features contributing to each factor listed. The reconstruction error is a proxy for a factor's importance because it quantifies how much that factor contributes to the overall reconstruction of the original concept-feature frequency matrix (a higher reconstruction error when a factor is excluded means that factor is more important). To visualize which features contribute to the top encyclopedic and top visual factors, word clouds representing the relative importance of features are presented. (See Appendix Figure A3 for the top 10 features for all 20 factors.)

The top visual factors offer dimensions with a more straightforward taxonomy, often clearly based on the material of objects, such as being made of metal (Factor 1), wood (Factor 2), plastic (Factor 3), or glass (Factor 5), as well as reflecting the general category of tools (Factor 4). Interestingly, NMF applied to the original feature matrix (which includes all feature types) identifies material-based features (e.g., made of metal or wood), as well as object categories such as animal or tool.

Memorability-related Regions

Our first analysis of fMRI data focused on identifying cortical regions showing significant sensitivity to variability in object memorability (measured by corrected recognition). Using object-wise conceptual normative memorability from Experiment 2 and the adjusted trial-wise brain activity from Experiment 3, we performed an exploratory analysis using linear modeling to determine which ROIs showed a significant relationship between brain activity and object memorability (Table 1). Of our original 10 ROIs, two showed a significant effect with uncorrected p values, such that more memorable objects elicited greater brain activity: right hippocampus (β = 0.38, t(2) = 2.18, p = .03) and right rhinal cortex (β = 0.32, t(2) = 2.07, p = .04). (See Appendix Tables A4, A5, A6, and A7 for similar results based on all experimental trials.) On the basis of these results, we performed mediation analyses on these regions to understand the relationship between memorability and brain activity. It is important to note that there is ongoing debate regarding whether nonsignificant total effects can support mediation: after multiple comparisons correction, neither the hippocampus nor rhinal cortex showed a significant effect. Some would therefore argue that mediation analysis cannot be conducted in the absence of a total effect for the potential mediator to mediate (Baron & Kenny, 1986). Although such a prohibition is appropriate in some cases, more recent work has argued that a significant total effect is not a prerequisite for mediation analysis when a model contains multiple mediators (Zhao, Lynch, & Chen, 2010; Hayes, 2009). Hayes argues that inferring indirect effects through a series of separate hypothesis tests on the constituent paths of the model can inappropriately reduce statistical power; more contemporary approaches quantify and test the indirect effects directly.
In our case, although neither the hippocampus nor rhinal cortex showed significant total effects after multiple comparisons correction, we proceeded to test specific and total indirect effects to try to understand what relationship there might be, if any, between memorability, semantic factors, and brain activity. We considered it worthwhile to conduct the analysis in this manner given the mounting evidence for a strong relationship between the semantic properties of images and their memorability (Morales-Torres, Wing, Deng, Davis, & Cabeza, 2024; Kramer et al., 2022, 2023; Bainbridge, 2022; Davis et al., 2021; Hovhannisyan et al., 2021), while bearing in mind that future research may suggest additional statistical considerations as our understanding of this type of modeling develops (Figure 6).
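In sketch form, the exploratory screen amounts to a per-ROI regression of adjusted activity on memorability, keeping regions with uncorrected p < .05 (ROI names follow the text, but the slopes and data below are synthetic, so the resulting p values carry no meaning):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_objects = 360
memorability = rng.uniform(0.3, 0.95, n_objects)

# Hypothetical adjusted object-wise activity for two ROIs (synthetic data)
rois = {
    "hippocampus_R": 0.38 * memorability + rng.normal(0, 1.0, n_objects),
    "rhinal_R": 0.32 * memorability + rng.normal(0, 1.0, n_objects),
}

# One simple linear model per ROI; slope tests the memorability effect
results = {name: linregress(memorability, act) for name, act in rois.items()}
sig = [name for name, r in results.items() if r.pvalue < .05]  # uncorrected
```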

Figure 6.

Memorability-related regions. Using a general linear model, we examined the relationship between stimulus memorability and univariate brain activity in our ROIs for hit trials only. Two regions showed significant modulation of trial-wise activity in response to the memorability of the stimuli presented during those trials. The figure displays a t-map, which is colored based on the corresponding t value from the analysis. Degrees of freedom were reported as model df and residual df.

Multilevel Mediation Analyses

Encyclopedic Factors and Conceptual Memorability

We performed multilevel mediation analyses to explore what proportion, if any, of the relationship between memorability and brain activity in the right hippocampus and right rhinal cortex was mediated by semantic factors (Figure 7). Table 2 lists the results from the mediation analyses that showed significant mediation effects, determined by 95% CIs that do not include zero after Bonferroni correction. Here, the encyclopedic factors mediate the relationship between conceptual memorability and stimulus-specific brain activity. Importantly, the right hippocampus, which showed a significant memorability effect, also showed mediation effects for encyclopedic factors. Estimates of the total indirect effect were 0.161 (uncorrected p = .013, Bonferroni-corrected p = .021, 95% CI [0.051, 0.28]) for the right hippocampus and 0.124 (uncorrected p = .018, Bonferroni-corrected p = .036, 95% CI [0.033, 0.22]) for right rhinal cortex. The proportion mediated, defined as the total indirect effect divided by the total effect, was 0.42 for the right hippocampus and 0.39 for right rhinal cortex.
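The reported proportions follow directly from dividing the indirect effects by the total effects in Table 2:

```python
# Proportion mediated = total indirect effect / total effect (Table 2 estimates)
hippocampus_R = 0.161 / 0.38   # ≈ 0.42
rhinal_R = 0.124 / 0.32        # ≈ 0.39
```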

Figure 7.

Mediation of memorability-related activity in the hippocampus by semantic features. The relationship between stimulus memorability and activity in the right hippocampus is mediated by the first encyclopedic factor, characterized generally as utility. The proportion mediated (42%) is the quotient of the total indirect effect and total effect. Statistical significance was defined as *(p < .05), **(p ≤ .01), and ***(p ≤ .001).

Table 2.

Mediation Results for Encyclopedic Factors

Region Estimate SE z Value p Value Adj p Value t Value CI Lower CI Upper
Total indirect effect for five encyclopedic factors
Hippocampus R* 0.16 0.06 2.57 .010 .021 2.56 0.04 0.29
Rhinal cortex R* 0.12 0.05 2.36 .018 .036 2.36 0.02 0.23
Total effect for five encyclopedic factors
Hippocampus R 0.38 0.18 2.18 .029 .058 2.18 0.04 0.73
Rhinal cortex R 0.32 0.15 2.07 .038 .076 2.07 0.02 0.62

Statistical significance was defined as *p < .05, **p ≤ .01, and ***p ≤ .001. Adj p value = adjusted p value after Bonferroni correction; Adj CI lower, Adj CI upper = same adjustment applied to confidence intervals; SE = standard error; R = right; L = left.

Interestingly, although the right hippocampus showed a significant total indirect effect, the only significant specific indirect effect was for Factor 1: 0.11 (z = 2.24, p = .025; Appendix Table A2). Although individual factors may not be significant mediators on their own, they can operate as significant mediators of the relationship between stimulus memorability and stimulus-specific brain activity as a group of several factors. This effect may be explained by the fact that NMF generates factors that capture semantics in an aggregate manner. Multilevel mediation has several advantages over single-mediator models, not only for assessing the presence of an overall effect but also for comparing the relative importance of each mediator and reducing omitted variable bias (Preacher & Hayes, 2008). In such a model, if the total indirect effect is significant when individual indirect effects are not significant, it may indicate that the effect of semantics is better captured by multiple factors.

Table A2.

Additional Mediation Analysis Results for Individual Factors

Region Specific Indirect Effect of Factor 1 “Is Useful”
Encyclopedic factors, conceptual memorability
Fusiform R 0.13 (z = 2.14, p = .033, adj p = .055)
Hippocampus L 0.1 (z = 2.03, p = .043, adj p = .043)
Hippocampus R 0.11 (z = 2.24, p = .025, adj p = .125)
Rhinal L 0.1 (z = 2.06, p = .039, adj p = .043)
Rhinal R 0.09 (z = 2.14, p = .032, adj p = .08)
Region Specific Effect of Factor 4 “Is a Tool”
Visual factors, perceptual memorability
Fusiform L 0.11 (z = 2.12, p = .033, adj p = .066)
Fusiform R 0.1 (z = 2.18, p = .029, adj p = .166)
Hippocampus L 0.08 (z = 2.13, p = .033, adj p = .044)
Parahippocampal cortex R 0.16 (z = 2.5, p = .04, adj p = .04)

In this case, we can infer that Factor 1 (i.e., utility) is especially important given that it is the only factor for which there is a significant specific indirect effect. Utility and material may be so important for the mediating effect of semantic factors because they are general semantic properties that apply to our entire data set of concrete objects: every object either is or is not useful, and either is or is not made of metal, so these properties may serve a central conceptual role in how objects are perceived and remembered. When considering the feature norms from which the factors are derived, we can see that "is useful" (the top-loading feature on encyclopedic Factor 1) is correlated with several other features, including "is helpful," "is practical," and "is needed." Although features with a high number of mentions in the feature norm data set (such as "is useful" and "is dangerous") load most highly on each factor, we can see that they are representative of conceptual clusters within the feature matrix.

A similar result can be seen in mediation models exploring the indirect effect of visual memorability on brain activity through visual and encyclopedic factors. After correcting for multiple comparisons, no total indirect effects were significant, and yet, the specific indirect effect of Factor 4 (i.e., “is a tool”) is significant for the left fusiform (β = 0.11, z = 2.12, p = .033), the right fusiform (β = 0.1, z = 2.18, p = .029), the left hippocampus (β = 0.08, z = 2.13, p = .033), and right parahippocampal cortex (β = 0.16, z = 2.5, p = .04; Appendix Table A2). Because the factors characterized by utility and tools are significant mediators on their own, we can infer that they are important for the memorability effect on stimulus-specific brain activity during encoding. (The full set of results for visual factors and perceptual memorability can be seen in Appendix Tables A3 and A4.)

We can see that the right hippocampus shows particular sensitivity to stimulus memorability during encoding, and that relationship is mediated by semantic factors that model the features of those stimuli, especially features having to do with the utility of the objects. This analysis aligns with previous work that has shown that hippocampus processes semantic properties of images during memory encoding (Miyashita, 2019; Clarke & Tyler, 2014) and further suggests that these processes may constitute a mechanistic basis by which memorability influences neural activity during encoding.

Encyclopedic Factors and Conceptual Memorability in All ROIs

Although our initial mediation analysis focused on regions that showed direct memorability effects, work from our group and others has highlighted the role of a wider array of cortical regions in the encoding of visual and conceptual information into memory (Yu et al., 2025; Howard et al., 2024; Huang et al., 2024). We therefore conducted two additional mediation analyses to explore the relationship between memorability, semantic factors, and brain activity in all 10 ROIs: fusiform gyrus, hippocampus, parahippocampal cortex, rhinal cortex, and retrosplenial cortex. Because each region was selected for its established role in processing semantic information to support memory encoding, it is possible that one or more of these regions processes semantic features in a way that contributes to memorability even if there is no overall memorability effect.

We ran multilevel mediation models on all 10 ROIs. After applying FDR correction, we found significant total indirect effects in the fusiform gyrus (left: β = 0.19, FDR-corrected p = .047, 95% CI [0.03, 0.36]; right: β = 0.18, FDR-corrected p = .047, 95% CI [0.04, 0.31]), the hippocampus (left: β = 0.16, FDR-corrected p = .045, 95% CI [0.05, 0.28]; right: β = 0.16, FDR-corrected p = .045, 95% CI [0.07, 0.26]), rhinal cortex (left: β = 0.15, FDR-corrected p = .045, 95% CI [0.05, 0.25]; right: β = 0.12, FDR-corrected p = .045, 95% CI [0.04, 0.21]), and right parahippocampal cortex (β = 0.18, FDR-corrected p = .047, 95% CI [0.03, 0.33]; Table 3, Figure 8). These findings suggest that semantic factors do indeed play a role in the mediation of the relationship between memorability and brain activity in several brain regions that have been previously shown to facilitate the processing of conceptual information to support memory encoding.
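The Benjamini–Hochberg FDR adjustment used here (equivalent to R's p.adjust(method = "BH")) can be reproduced in a few lines; the input p values are the uncorrected values from Table 3, in row order:

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg step-up adjusted p values."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p value downward
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.minimum(scaled, 1.0)
    return adj

# Uncorrected p values for the 10 ROIs, in Table 3 row order
p = [.033, .024, .017, .010, .058, .032, .015, .018, .046, .059]
adj = fdr_bh(p)  # e.g., adj[3] (right hippocampus) ≈ .045, matching Table 3
```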

Table 3.

All ROI Mediation Analysis with Encyclopedic Factors

Region Estimate SE z Value p Value Adj p Value t Value CI Lower CI Upper
Fusiform gyrus L* 0.19 0.09 2.13 .033 .047 2.13 0.02 0.37
Fusiform gyrus R* 0.18 0.08 2.26 .024 .047 2.26 0.02 0.33
Hippocampus L* 0.16 0.07 2.39 .017 .045 2.39 0.03 0.30
Hippocampus R* 0.16 0.06 2.57 .010 .045 2.56 0.04 0.29
Parahippocampal cortex L 0.17 0.09 1.90 .058 .059 1.89 −0.01 0.35
Parahippocampal cortex R* 0.18 0.08 2.14 .032 .047 2.14 0.01 0.34
Rhinal L* 0.15 0.06 2.44 .015 .045 2.44 0.03 0.28
Rhinal R* 0.12 0.05 2.36 .018 .045 2.36 0.02 0.23
Retrosplenial cortex L 0.29 0.15 1.99 .046 .058 1.99 0.00 0.58
Retrosplenial cortex R 0.20 0.10 1.89 .059 .059 1.89 −0.01 0.40

Statistical significance was defined as *p < .05, **p ≤ .01, and ***p ≤ .001. Results for the right hippocampal and right rhinal cortex ROIs are identical to those in Table 2, except for the adjusted p value. Adj p value = adjusted p value after applying FDR correction; Adj CI lower, Adj CI upper = same adjustment applied to confidence intervals; SE = standard error; R = right; L = left.

Figure 8.

Total indirect effect for all ROIs. Representative figure showing the total indirect effects from mediation models exploring the effect of the top five encyclopedic factors on the relationship between memorability and activity in each region.

DISCUSSION

The current study sought to use a broad empirical approach to evaluate determinant factors for the neural basis of object memorability. First, we identified an object-wise effect of memorability in the hippocampus and rhinal cortex, medial temporal regions known to play a role in memory and conceptual processing. Second, we found that semantic factors play an important role in the relationship between memorability and brain activity, mediating up to 42% of the memorability–activity relationship in the right hippocampus and 39% in right rhinal cortex, as defined by the total indirect effect divided by the total effect. Finally, we extended the mediation analysis to all 10 ROIs and showed that several regions exhibited a significant indirect effect, meaning that the five semantic factors mediate the relationship between object memorability and brain activity across multiple regions. We discuss these findings in context below.

Our first principal finding is that multiple mnemonic regions in the temporal lobe (hippocampus and rhinal cortex) demonstrated significant relationships between BOLD activation and object-wise memorability. We selected an object image data set that covers a wide range of categories to understand the relationship between memorability, image features, and brain activity. Previous work has suggested hippocampus acts as an interface to represent information from a range of semantic categories that are themselves processed in distributed cortical networks (such as place information in a posterior-medial network or person information in an anterior-temporal network; Morton et al., 2021). The role of the hippocampus in the successful encoding of object stimuli has been explored using both neuropsychological (Barense et al., 2005) and neuroimaging methodologies (Dalton, Zeidman, McCormick, & Maguire, 2018), but our object-focused analysis adds greater dimensionality to its central role in object encoding. Whereas some groups focus on the role of the hippocampus for binding arbitrary relations among individual elements in a scene (Ryals, Wang, Polnaszek, & Voss, 2015; Zeidman, Mullally, & Maguire, 2015) or experience (Konkel & Cohen, 2009), others emphasize its role in the binding of component features of objects (Dalton et al., 2018). Parametric object-level modulation in hippocampal activity, according to object memorability, suggests that encoding success effects are strongest for more memorable objects, perhaps due to comparatively stronger binding of features.

It is also not surprising that activity in rhinal cortex showed a memorability effect given its component regions, especially perirhinal cortex (Miyashita, 2019). An evolving body of work that uses representational similarity analyses (Kriegeskorte & Kievit, 2013) to investigate image information places structures like the perirhinal cortex at the end of a gradient of informational specificity along the ventral stream, from visual properties in primary visual cortex to coarse categorical representations in fusiform gyrus, to more fine-grained semantic representations in rhinal cortex (Clarke & Tyler, 2014; Tyler et al., 2013). The observed memorability effects, although modest, help connect trial-level variation in mean brain activity with more traditional analyses that rely on condition-level variation to understand subsequent memory in the hippocampus and rhinal cortex (Fernández et al., 1999). Both our first and second findings support this view.

Our second principal finding is that semantic factors, derived from NMF applied to image features, mediate the relationship between memorability and brain activity. Both regions that showed a memorability effect (right hippocampus and right rhinal cortex) showed significant mediation effects for the semantic factors based on image features. Given that semantic factors mediate a large proportion of the relationship between memorability and brain activity, we can infer that successful binding of image features may underlie the encoding of comparatively more memorable images. In a previous study (Hovhannisyan et al., 2021), we showed the degree to which highly correlated features (e.g., most mammals have fur, have legs, and so on) are strong predictors of memorability. The role of visual feature binding in promoting memory has been known for decades (Ueno, Allen, Baddeley, Hitch, & Saito, 2011; Chalfonte & Johnson, 1996), but there has been less attention given to how the processing of constituent conceptual features of an image may promote its subsequent recall. Binding theories of conceptual information tend to focus on stimulus–response mappings (Henson, Eckstein, Waszak, Frings, & Horner, 2014) and not the abstract, constituent features of related concepts. Nonetheless, our finding is consistent with previous work that demonstrates the importance of abstract features for memorability (Kramer et al., 2022; Bainbridge & Rissman, 2018). In this study, Factor 1 (characterized by utility) served as a significant individual mediator. We can further suggest that the encoding of groups of complementary, correlated features related to the utility of an object (“does work,” “is portable”) may explain why some objects are more memorable than others.

We chose mediation analysis because it helps to bridge memorability, semantics, and brain activity. It has been long appreciated that the deeper information is processed, the longer a memory trace will last (Craik & Lockhart, 1972). Craik and Lockhart defined depth as the meaningfulness extracted from the stimulus, where deeper levels of processing are organized along structural, phonetic, and semantic levels. More recently, several studies have tried to quantify these levels with semantic dimensions to understand their representation in the brain while participants perform perception tasks (Contier et al., 2023; Kramer et al., 2022; Muttenthaler & Hebart, 2022) and memory tasks (Huang et al., 2024; Kramer et al., 2023; Davis et al., 2021).

Our final principal finding, based on post hoc mediation analyses, showed that several additional memory-related regions (bilateral fusiform gyrus, bilateral hippocampus, and right parahippocampal cortex) also showed significant mediation effects. Furthermore, individual dimensions moderate the relationship between memorability and brain activity (Appendix Table A1), such that the magnitude of an image's loading on a particular dimension determines whether memorability drives brain activity in a particular region.

Table A1.

Moderation Analysis Results for Semantic Factors and Brain Activity

Region | Factor | Beta | SE | t Value | p Value | Adj p Value | F
Inferior frontal gyrus L | Encycl F3 | −0.63 | 0.24 | −2.60 | .0100 | .0299 | 1.23
Parahippocampal cortex L | Encycl F5 | −1.27 | 0.54 | −2.35 | .0198 | .0394 | 3.44
Parahippocampal gyrus L | Encycl F5 | −0.97 | 0.39 | −2.51 | .0129 | .0387 | 3.95
Rhinal cortex L | Encycl F5 | −0.85 | 0.35 | −2.41 | .0166 | .0498 | 3.71
Rhinal cortex R | Encycl F1 | 0.17 | 0.07 | 2.44 | .0156 | .0467 | 0.66
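
A moderation model of the kind summarized in Table A1 regresses activity on memorability, factor loading, and their interaction; the interaction beta indexes how strongly the factor loading modulates the memorability–activity slope. The following Python sketch (hypothetical variable names; not the authors' MATLAB/R code) illustrates that model.

```python
import numpy as np

def moderation_beta(memorability, factor, activity):
    """Fit activity ~ memorability + factor + memorability*factor.

    Returns the interaction (moderation) coefficient: how much the
    memorability-activity slope changes per unit of factor loading.
    """
    X = np.column_stack([
        np.ones_like(memorability),
        memorability,
        factor,
        memorability * factor,  # the moderation term
    ])
    coef, *_ = np.linalg.lstsq(X, activity, rcond=None)
    return coef[3]
```

A significant interaction term, as in Table A1, means the memorability effect on a region's activity depends on how strongly the image loads on the semantic factor.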

Considering the results from both mediation analyses, those restricted to regions with memorability effects and those hypothesized to relate to the encoding of semantic information based on previous work, we can see several parallels that support our conclusions. In the case of encyclopedic factors, we saw that the factor characterized by utility was the only individual factor that had a significant indirect effect on its own. The utility factor showed a significant indirect effect not only for the right hippocampus and right rhinal cortex (Table 3) but also for the right fusiform, the left hippocampus, and left rhinal cortex (Appendix Table A2). In multilevel mediation, five encyclopedic factors characterized by utility, animacy, engagement, weaponry, and marine animacy showed a significant indirect effect in several regions, although utility may be the most important factor given that it was the only individual factor to show a significant indirect effect. It appears that particular features stand out as important mediators of the memorability effect and that several types of features act in concert, indexing the role of general semantics in the encoding of object images.

The analyses performed with visual factors and perceptual memorability pose an interesting contrast. First, they were not significant mediators of memorability-related activity in either our initial tests of the hippocampus and rhinal cortex or in post hoc tests of the full set of 10 ROIs. This result is consistent with other studies of memorability, which find that basic image properties (e.g., color, image complexity) are poor predictors of image memorability (Hovhannisyan et al., 2021; Isola et al., 2011, 2014). In fact, the only visual factor that produced a significant indirect effect (i.e., the influence of object memorability on brain activity occurs through a visual factor) was Factor 4, characterized by having the features of a tool, including "has a handle," "is made of wood and metal," and "has bristles." We found this effect in the bilateral fusiform, the left hippocampus, and right parahippocampal cortex (uncorrected p values; see Appendix Table A2). Although we found no significant total indirect effects (i.e., for all five factors), the fact that the tool-based factor stood out as the only significant individual factor suggests that utility (whether the general concept of usefulness or the visual properties of tools, which are useful for some purpose) is an important clue to which factors might have the strongest relationship to memorability and brain activity.

Our mediation analyses revealed that high-level semantic properties like utility explain substantial variance in trial-wise activity in several memory regions, which suggests that utility may be a feature common to highly memorable objects. The results concerning the total indirect effects (i.e., the portion of the relationship between memorability and brain activity mediated by semantics) of five encyclopedic factors were considerably stronger. It may be that a multilevel mediation model that includes several semantic factors can better index the semantic space of the image stimuli to mediate the relationship between memorability and neural activity. These results suggest a mechanistic basis for image memorability: Semantic features of highly memorable images more effectively engage memory processes during encoding, facilitating memory formation.

Considerations for Future Research

The treatment of memorability as a consistent phenomenon suited to scientific inquiry is still young, and any exploratory approach such as ours therefore has inherent limitations, on the basis of which we can suggest considerations for future work. We discuss four such considerations here.

First, there are increasingly many approaches to defining and quantifying semantic features. Our approach was to use explicit feature norms collected from online raters (Hovhannisyan et al., 2021), an approach that has several advantages, such as providing explicit access to the category of feature types (e.g., visual, taxonomic, encyclopedic; see Davis et al., 2021). Future studies might employ other data-driven methodologies, such as corpus or database modeling (Lindh-Knuutila & Honkela, 2015), relationship extraction methods (Perez-Arriaga, Estrada, & Abad-Mota, 2018), or word- and sentence-embedding methods (Devlin, Chang, Lee, & Toutanova, 2018; Pennington, Socher, & Manning, 2014; Mikolov, Chen, et al., 2013; Mikolov, Sutskever, et al., 2013). Several authors combine approaches: Bhatia and Richie (2024) used feature norms to refine a pretrained BERT model to improve its performance, and Derby, Miller, and Devereux (2019) used feature2vec to augment participant-derived feature norm data sets. A systematic comparison of the similarity in semantic feature spaces derived from these disparate techniques could be especially useful for exploring the extent of the relationship between semantics and memory, given the benefits and costs of each modeling approach.

Second, along these lines, more work is needed to understand the role of semantic features in memory, given the interrelatedness of episodic and semantic memory (Pandya, Nicholls, Krugliak, Davis, & Clarke, 2024; Naspi et al., 2021; Greenberg & Verfaellie, 2010). In particular, recognition memory only probes a portion of this complex relationship, and future work may explore the role of semantic factors in other forms of memory, such as autobiographical memory or related capacities like future imagination (Irish & Piguet, 2013). Although there have been neuroimaging studies that have investigated the interaction between episodic and semantic memory (Cabalo et al., 2024; Weidemann et al., 2019), future work may contextualize what is known about the cortical specialization for semantic processes with contemporary semantics modeling techniques (Needell & Bainbridge, 2022) or methods for modeling neural representations (Naspi, Stensholt, Karlsson, Monge, & Cabeza, 2023). Similarly, given recent evidence that visual memory for scenes is driven more by semantic richness than by perceptual vividness (Morales-Torres et al., 2024; as well as similar evidence from Hovhannisyan et al. [2021] and Kramer et al. [2022]), the relative contribution of semantic and visual representations in visual processing and memory warrants further attention.

Third, task-specific conditions may affect the reliability of object-level memorability effects seen in studies employing an object-wise approach (Kramer et al., 2022; Hovhannisyan et al., 2021). Activity estimates in this study were derived from a simple object encoding task, during which participants were presented with a single letter and a color image depicting an everyday object and pressed a button only when they noticed a discrepancy between the letter and the name of the presented object. Table 1 outlines effects limited to subsequently remembered trials, a step meant to exclude trials during which participants may not have been focused on the stimuli (or there was some other interference that led to subsequent forgetting of that stimulus). Such an approach is useful for modeling subsequent memory at an object-wise level, but it may be informative for future studies to explore subsequent forgetting effects as well.

Finally, the use of object stimuli has its own limitations. Subsequent memory effects in bilateral hippocampus and fusiform gyrus are greater when participants view complex visual stimuli compared with word stimuli (Kim, 2011), likely due to greater exploratory viewing, which facilitates deeper encoding through enhanced perceptual processing and feature binding (Voss, Bridge, Cohen, & Walker, 2017). We did not find a significant relationship between perceptual memorability and any of our memory-related ROIs after multiple comparisons correction, which may be due to the use of object images rather than more complex images like scenes (Voss et al., 2017). Indeed, previous work has shown that higher-level features are more useful for predicting the memorability of scenes than lower-level features (Bainbridge et al., 2017) and image similarity/discriminability is predictive of subsequent memory as well (Koch, Akpan, & Coutanche, 2020; Xie et al., 2020). Future work may probe the relationship between visual features, perceptual memorability, and brain activity by employing a different stimulus set.

Conclusion

Our analyses suggest a neural mechanism that underlies image memorability. By applying NMF to feature norms and performing mediation analysis, we demonstrate that semantic factors mediate a substantial proportion of the relationship between normative memorability (derived from a separate population) and brain activity during image viewing. The identification of specific semantic features like utility that predict activity in rhinal cortex and hippocampus provides a mechanistic framework for understanding memorability. Image memorability appears to emerge from how effectively the semantic features of an image engage memory-related neural processes during encoding. This work offers a potential neural basis for the phenomenon of memorability, advancing our understanding of how and why certain images are consistently better remembered than others.

APPENDIX

Methodological Considerations in Applying NMF

Thresholding

To determine the appropriate number of features, we used a two-step process. First, a thresholding procedure was applied to each feature matrix to remove feature columns below a cutoff. Using the skewness function in MATLAB, the moment coefficient of skewness was calculated for each column in each feature matrix and averaged to give a mean skewness per matrix: 21.44 for the entire matrix, 21.41 for the encyclopedic subset, and 20.59 for the visual subset. On the basis of the mean skewness, the baseline cutoff of the 50th percentile was dynamically adjusted upward to determine the threshold below which feature columns were removed, ensuring that the most relevant features were retained. For the entire matrix, the adjusted cutoff was the 70.45th percentile, corresponding to the removal of features with fewer than 10 mentions in the feature norm data set; for the encyclopedic subset, it was the 70.41st percentile (threshold of 9 mentions), and for the visual subset, the 69.59th percentile (threshold of 12 mentions).
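
The thresholding step can be sketched as below. This Python illustration (the original analysis used MATLAB) computes the moment coefficient of skewness per column and drops sparse columns below a percentile cutoff; the additive rule for adjusting the baseline percentile by the mean skewness is an assumption for illustration, since the paper reports the resulting percentiles but not the exact adjustment formula.

```python
import numpy as np

def threshold_features(F, base_pct=50.0):
    """Drop sparse feature columns via a skewness-adjusted percentile cutoff.

    F: objects x features matrix of feature-mention counts.
    """
    # Moment coefficient of skewness per column (as MATLAB's skewness()).
    m = F.mean(axis=0)
    s = F.std(axis=0)
    col_skew = (((F - m) / s) ** 3).mean(axis=0)
    # Assumed adjustment rule: shift the baseline percentile by mean skewness.
    pct = min(base_pct + col_skew.mean(), 99.0)
    col_sums = F.sum(axis=0)
    cutoff = np.percentile(col_sums, pct)
    keep = col_sums >= cutoff  # retain only sufficiently mentioned features
    return F[:, keep], keep
```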

Determining Optimal Rank

To determine the optimal rank of NMF for our data, we used two methods. The first was knee point detection, in which NMF is applied with iteratively more factors, the original matrix is reconstructed from the component W and H matrices, and the error between the original and reconstructed matrices is computed. Adding factors reduces this error, and the knee point is defined as the number of factors at which the decrease in residuals falls below a threshold.
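
Knee point detection can be sketched as follows. This Python version uses scikit-learn's NMF rather than the authors' MATLAB implementation, and the relative-drop threshold is an assumption, since the paper does not report its exact value.

```python
import numpy as np
from sklearn.decomposition import NMF

def knee_point_rank(X, max_rank=10, tol=0.01):
    """Return the rank at which adding a factor stops reducing error.

    Fits NMF at ranks 1..max_rank, records Frobenius reconstruction error,
    and returns the first rank where the error drop (relative to the
    rank-1 error) falls below tol.
    """
    errors = []
    for r in range(1, max_rank + 1):
        model = NMF(n_components=r, init="nndsvda", max_iter=500,
                    random_state=0)
        W = model.fit_transform(X)
        errors.append(np.linalg.norm(X - W @ model.components_, "fro"))
    drops = -np.diff(errors) / errors[0]  # normalized error decrease
    for i, d in enumerate(drops):
        if d < tol:
            return i + 1  # rank just before diminishing returns
    return max_rank
```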

The second method used cross-validation to evaluate model performance and prevent overfitting. Elements of the data were randomly masked to split the original matrix into training and test sets. NMF was fit on the training set with an iteratively increasing number of factors, and the reconstruction error was calculated on the held-out test data; the rank with the minimum reconstruction error was taken as optimal. Given the agreement between these two approaches, we considered the determined rank to be valid.
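
The masked cross-validation procedure can be sketched as below. Mean imputation of the masked cells and scikit-learn's NMF are simplifying assumptions here; the exact masking and fitting details of the original MATLAB analysis are not reported.

```python
import numpy as np
from sklearn.decomposition import NMF

def cv_rank(X, max_rank=8, holdout_frac=0.2, seed=0):
    """Choose NMF rank by masked (hold-out) cross-validation.

    Randomly masks entries, fills them with the grand mean for training,
    and scores reconstruction error on the held-out cells; the rank with
    the minimum held-out error wins.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < holdout_frac  # held-out test cells
    X_train = X.copy()
    X_train[mask] = X.mean()  # simple imputation of masked entries
    errs = []
    for r in range(1, max_rank + 1):
        model = NMF(n_components=r, init="nndsvda", max_iter=500,
                    random_state=0)
        W = model.fit_transform(X_train)
        R = W @ model.components_
        errs.append(np.sqrt(((X[mask] - R[mask]) ** 2).mean()))
    return int(np.argmin(errs)) + 1
```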

Factor Selection

With the number of factors determined, we performed additional factor-selection procedures to determine the relative importance of each factor. Frobenius norm reconstruction error, used as in the optimal rank procedure, can indicate the relative importance of each factor: all but one factor is used to reconstruct the original matrix, and the higher the error between the original and reconstructed matrices, the more important the left-out factor. We additionally applied elastic net regularization (a technique that combines the L1 penalty of LASSO, the least absolute shrinkage and selection operator, with the L2 penalty of ridge regression) to determine factor importance by performing variable selection and shrinking the coefficients of less important factors toward zero. Finally, we applied a procedure that modeled all factors as predictors and iteratively refit the model with each factor excluded in turn. The R2 for each "one factor excluded" model was compared with the R2 for the "all factors" model to estimate the importance of the excluded factor, and factors were sorted from most to least important by this R2 difference. The factors were then added one at a time to a model of the form memorability ∼ factors, and the adjusted R2 was calculated. We then determined where the "knee" in the cumulative adjusted R2 occurred: using a sliding window of 3, we identified where the successive differences fell below a threshold of .0005, that is, where additional factors yielded diminishing returns in model fit. Betas were then calculated for each factor, and significance was assessed at a threshold of .05.
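
The leave-one-factor-out R2 step can be sketched as follows, in Python with scikit-learn (the variable names are hypothetical and this is an illustration of the procedure, not the original code).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def factor_importance(factors, memorability):
    """Leave-one-factor-out importance: R2(all) - R2(all minus factor k).

    factors: objects x k matrix of NMF factor loadings.
    memorability: per-object memorability scores.
    Returns an array of R2 differences, one per factor (larger = more
    important); these would then be sorted for the cumulative-R2 knee step.
    """
    full = LinearRegression().fit(factors, memorability)
    r2_full = full.score(factors, memorability)
    importance = []
    for k in range(factors.shape[1]):
        reduced = np.delete(factors, k, axis=1)  # drop factor k
        r2_red = LinearRegression().fit(reduced, memorability).score(
            reduced, memorability)
        importance.append(r2_full - r2_red)
    return np.array(importance)
```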

Figure A3. Top 10 features for all 20 factors.

Acknowledgments

We thank Lamont Conyers and Jennifer Graves for their extensive MRI support, all the participants for their participation, and two anonymous reviewers for their insightful comments.

Corresponding author: Simon W. Davis, Department of Psychological & Brain Sciences, Indiana University, Bloomington, IN 47401, e-mail: swd4@iu.edu.

Data Availability Statement

All de-identified imaging and behavioral data will be shared upon request via e-mail to the corresponding author, Simon Davis.

Author Contributions

Matthew A. Slayton: Conceptualization; Formal analysis; Methodology; Writing—Original draft. Cortney M. Howard: Conceptualization; Formal analysis; Methodology. Shenyang Huang: Formal analysis; Methodology. Mariam Hovhannisyan: Data curation; Methodology. Roberto Cabeza: Funding acquisition; Supervision; Writing—Review & editing. Simon W. Davis: Conceptualization; Formal analysis; Funding acquisition; Methodology; Supervision; Writing—Review & editing.

Funding Information

This study was supported by the National Institutes of Health (https://dx.doi.org/10.13039/100000049), grant numbers R01-AG066901 and K01-AG053539.

Diversity in Citation Practices

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this paper report its proportions of citations by gender category to be: M/M = .447; W/M = .202; M/W = .234; W/W = .117.

REFERENCES

  1. Alexander, A. S., Place, R., Starrett, M. J., Chrastil, E. R., & Nitz, D. A. (2023). Rethinking retrosplenial cortex: Perspectives and predictions. Neuron, 111, 150–175. 10.1016/j.neuron.2022.11.006, [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bainbridge, W. A. (2020). The resiliency of image memorability: A predictor of memory separate from attention and priming. Neuropsychologia, 141, 107408. 10.1016/j.neuropsychologia.2020.107408, [DOI] [PubMed] [Google Scholar]
  3. Bainbridge, W. A. (2022). Memorability: Reconceptualizing memory as a visual attribute. In Visual memory (pp. 173–187). Routledge. 10.4324/9781003158134-11 [DOI] [Google Scholar]
  4. Bainbridge, W. A., Berron, D., Schütze, H., Cardenas-Blanco, A., Metzger, C., Dobisch, L., et al. (2019). Memorability of photographs in subjective cognitive decline and mild cognitive impairment: Implications for cognitive assessment. Alzheimer’s & Dementia, 11, 610–618. 10.1016/j.dadm.2019.07.005, [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bainbridge, W. A., Dilks, D. D., & Oliva, A. (2017). Memorability: A stimulus-driven perceptual neural signature distinctive from memory. Neuroimage, 149, 141–152. 10.1016/j.neuroimage.2017.01.063, [DOI] [PubMed] [Google Scholar]
  6. Bainbridge, W. A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face photographs. Journal of Experimental Psychology: General, 142, 1323–1334. 10.1037/a0033872, [DOI] [PubMed] [Google Scholar]
  7. Bainbridge, W. A., & Rissman, J. (2018). Dissociating neural markers of stimulus memorability and subjective recognition during episodic retrieval. Scientific Reports, 8, 8679. 10.1038/s41598-018-26467-5, [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Barense, M. D., Bussey, T. J., Lee, A. C. H., Rogers, T. T., Davies, R. R., Saksida, L. M., et al. (2005). Functional specialization in the human medial temporal lobe. Journal of Neuroscience, 25, 10239–10246. 10.1523/JNEUROSCI.2704-05.2005, [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182. 10.1037/0022-3514.51.6.1173, [DOI] [PubMed] [Google Scholar]
  10. Barvas, E., Mattavelli, G., Meli, C., Guttmann, S., & Papagno, C. (2022). Standardization and normative data for a new test of visual long-term recognition memory. Neurological Sciences, 43, 2491–2497. 10.1007/s10072-021-05642-z, [DOI] [PubMed] [Google Scholar]
  11. Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67, 1–48. 10.18637/jss.v067.i01 [DOI] [Google Scholar]
  12. Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57, 289–300. 10.1111/j.2517-6161.1995.tb02031.x [DOI] [Google Scholar]
  13. Bennett, C. M., Wolford, G. L., & Miller, M. B. (2009). The principled control of false positives in neuroimaging. Social Cognitive and Affective Neuroscience, 4, 417–422. 10.1093/scan/nsp053, [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Bhatia, S., & Richie, R. (2024). Transformer networks of human conceptual knowledge. Psychological Review, 131, 271–306. 10.1037/rev0000319, [DOI] [PubMed] [Google Scholar]
  15. Bird, C. M., & Burgess, N. (2008). The hippocampus and memory: Insights from spatial processing. Nature Reviews Neuroscience, 9, 182–194. 10.1038/nrn2335, [DOI] [PubMed] [Google Scholar]
  16. Borovsky, A., Peters, R. E., Cox, J. I., & McRae, K. (2024). Feats: A database of semantic features for early produced noun concepts. Behavior Research Methods, 56, 3259–3279. 10.3758/s13428-023-02242-x, [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Bylinskii, Z., Isola, P., Bainbridge, C., Torralba, A., & Oliva, A. (2015). Intrinsic and extrinsic effects on image memorability. Vision Research, 116, 165–178. 10.1016/j.visres.2015.03.005, [DOI] [PubMed] [Google Scholar]
  18. Cabalo, D. G., DeKraker, J., Royer, J., Xie, K., Tavakol, S., Rodríguez-Cruces, R., et al. (2024). Differential reorganization of episodic and semantic memory systems in epilepsy-related mesiotemporal pathology. Brain, 147, 3918–3932. 10.1093/brain/awae197, [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Calvo, R. A., & Mac Kim, S. (2013). Emotions in text: Dimensional and categorical models. Computational Intelligence, 29, 527–543. 10.1111/j.1467-8640.2012.00456.x [DOI] [Google Scholar]
  20. Chalfonte, B. L., & Johnson, M. K. (1996). Feature memory and binding in young and older adults. Memory & Cognition, 24, 403–416. 10.3758/BF03200930, [DOI] [PubMed] [Google Scholar]
  21. Clarke, A., & Tyler, L. K. (2014). Object-specific semantic coding in human perirhinal cortex. Journal of Neuroscience, 34, 4766–4775. 10.1523/JNEUROSCI.2828-13.2014, [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Contier, O., Baker, C. I., & Hebart, M. N. (2023). Distributed representations of behaviorally relevant object dimensions in the human visual system. bioRxiv. 10.1101/2023.08.23.553812 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684. 10.1016/S0022-5371(72)80001-X [DOI] [Google Scholar]
  24. Dalton, M. A., Zeidman, P., McCormick, C., & Maguire, E. A. (2018). Differentiable processing of objects, associations, and scenes within the hippocampus. Journal of Neuroscience, 38, 8146–8159. 10.1523/JNEUROSCI.0263-18.2018, [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Davies, M. (2008). English-Corpora: COCA. https://www.english-corpora.org/coca/.
  26. Davis, S. W., Geib, B. R., Wing, E. A., Wang, W.-C., Hovhannisyan, M., Monge, Z. A., et al. (2021). Visual and semantic representations predict subsequent memory in perceptual and conceptual memory tests. Cerebral Cortex, 31, 974–992. 10.1093/cercor/bhaa269, [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Deng, W., Beck, D. M., & Federmeier, K. D. (2024). Image memorability is linked to facilitated perceptual and semantic processing. Imaging Neuroscience, 2, 1–13. 10.1162/imag_a_00281 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Derby, S., Miller, P., & Devereux, B. (2019). Feature2Vec: Distributional semantic modelling of human property knowledge. arXiv. 10.48550/arXiv.1908.11439 [DOI] [Google Scholar]
  29. Devereux, B. J., Clarke, A., Marouchos, A., & Tyler, L. K. (2013). Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. Journal of Neuroscience, 33, 18906–18916. 10.1523/JNEUROSCI.3809-13.2013, [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Devereux, B. J., Tyler, L. K., Geertzen, J., & Randall, B. (2014). The Centre for Speech, Language and the Brain (CSLB) concept property norms. Behavior Research Methods, 46, 1119–1127. 10.3758/s13428-013-0420-4, [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv.Org. https://arxiv.org/abs/1810.04805v2. [Google Scholar]
  32. Dubey, R., Peterson, J., Khosla, A., Yang, M.-H., & Ghanem, B. (2015). What makes an object memorable? In 2015 IEEE International Conference on Computer Vision (ICCV) (pp.1089–1097). 10.1109/ICCV.2015.130 [DOI] [Google Scholar]
  33. Eichenbaum, H., & Lipton, P. A. (2008). Towards a functional organization of the medial temporal lobe memory system: Role of the parahippocampal and medial entorhinal cortical areas. Hippocampus, 18, 1314–1324. 10.1002/hipo.20500, [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Eichenbaum, H., Yonelinas, A. P., & Ranganath, C. (2007). The medial temporal lobe and recognition memory. Annual Review of Neuroscience, 30, 123–152. 10.1146/annurev.neuro.30.051606.094328, [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Fan, L., Li, H., Zhuo, J., Zhang, Y., Wang, J., Chen, L., et al. (2016). The human brainnetome atlas: A new brain atlas based on connectional architecture. Cerebral Cortex, 26, 3508–3526. 10.1093/cercor/bhw157, [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Fernández, G., Effern, A., Grunwald, T., Pezer, N., Lehnertz, K., Dümpelmann, M., et al. (1999). Real-time tracking of memory formation in the human rhinal cortex and hippocampus. Science, 285, 1582–1585. 10.1126/science.285.5433.1582, [DOI] [PubMed] [Google Scholar]
  37. Fox, J., Nie, Z., Byrnes, J., Culbertson, M., DebRoy, S., Friendly, M., et al. (2022). sem: Structural equation models (Version 3.1–15) [Computer software]. https://cran.r-project.org/web/packages/sem/index.html.
  38. Frisby, S. L., Halai, A. D., Cox, C. R., Ralph, M. A. L., & Rogers, T. T. (2023). Decoding semantic representations in mind and brain. Trends in Cognitive Sciences, 27, 258–281. 10.1016/j.tics.2022.12.006, [DOI] [PubMed] [Google Scholar]
  39. Frithsen, A., & Miller, M. B. (2014). The posterior parietal cortex: Comparing remember/know and source memory tests of recollection and familiarity. Neuropsychologia, 61, 31–44. 10.1016/j.neuropsychologia.2014.06.011, [DOI] [PubMed] [Google Scholar]
  40. Goodwill, A. M., Campbell, S., Henderson, V. W., Gorelik, A., Dennerstein, L., McClung, M., et al. (2019). Robust norms for neuropsychological tests of verbal episodic memory in Australian women. Neuropsychology, 33, 581–595. 10.1037/neu0000522, [DOI] [PubMed] [Google Scholar]
  41. Grady, C. L., Rieck, J. R., Nichol, D., Rodrigue, K. M., & Kennedy, K. M. (2021). Influence of sample size and analytic approach on stability and interpretation of brain–behavior correlations in task-related fMRI data. Human Brain Mapping, 42, 204–219. 10.1002/hbm.25217, [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Greenberg, D. L., & Verfaellie, M. (2010). Interdependence of episodic and semantic memory: Evidence from neuropsychology. Journal of the International Neuropsychological Society, 16, 748–753. 10.1017/S1355617710000676, [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Hainmueller, T., & Bartos, M. (2020). Dentate gyrus circuits for encoding, retrieval and discrimination of episodic memories. Nature Reviews Neuroscience, 21, 153–168. 10.1038/s41583-019-0260-z, [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Hales, J. B., Schlesiger, M. I., Leutgeb, J. K., Squire, L. R., Leutgeb, S., & Clark, R. E. (2014). Medial entorhinal cortex lesions only partially disrupt hippocampal place cells and hippocampus-dependent place memory. Cell Reports, 9, 893–901. 10.1016/j.celrep.2014.10.009, [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Hargreaves, I. S., Pexman, P. M., Johnson, J. S., & Zdrazilova, L. (2012). Richer concepts are better remembered: Number of features effects in free recall. Frontiers in Human Neuroscience, 6, 73. 10.3389/fnhum.2012.00073, [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Hassani, A., Iranmanesh, A., & Mansouri, N. (2021). Text mining using nonnegative matrix factorization and latent semantic analysis. Neural Computing and Applications, 33, 13745–13766. 10.1007/s00521-021-06014-6 [DOI] [Google Scholar]
  47. Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76, 408–420. 10.1080/03637750903310360 [DOI] [Google Scholar]
  48. Hebart, M. N., Zheng, C. Y., Pereira, F., & Baker, C. I. (2020). Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. Nature Human Behaviour, 4, 1173–1185. 10.1038/s41562-020-00951-3, [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Henson, R. N., Eckstein, D., Waszak, F., Frings, C., & Horner, A. J. (2014). Stimulus-response bindings in priming. Trends in Cognitive Sciences, 18, 376–384. 10.1016/j.tics.2014.03.004, [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Hovhannisyan, M., Clarke, A., Geib, B. R., Cicchinelli, R., Monge, Z., Worth, T., et al. (2021). The visual and semantic features that predict object memory: Concept property norms for 1,000 object images. Memory & Cognition, 49, 712–731. 10.3758/s13421-020-01130-5, [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Howard, C. M., Huang, S., Hovhannisyan, M., Cabeza, R., & Davis, S. W. (2024). Differential mnemonic contributions of cortical representations during encoding and retrieval. Journal of Cognitive Neuroscience, 36, 2137–2165. 10.1162/jocn_a_02227, [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Huang, S., Howard, C. M., Hovhannisyan, M., Ritchey, M., Cabeza, R., & Davis, S. W. (2024). Hippocampal functions modulate transfer-appropriate cortical representations supporting subsequent memory. Journal of Neuroscience, 44, e1135232023. 10.1523/JNEUROSCI.1135-23.2023, [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Irish, M., & Piguet, O. (2013). The pivotal role of semantic memory in remembering the past and imagining the future. Frontiers in Behavioral Neuroscience, 7, 27. 10.3389/fnbeh.2013.00027, [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Isola, P., Parikh, D., Torralba, A., & Oliva, A. (2011). Understanding the intrinsic memorability of images. Advances in Neural Information Processing Systems, 24. https://proceedings.neurips.cc/paper/2011/hash/286674e3082feb7e5afb92777e48821f-Abstract.html. [Google Scholar]
  55. Isola, P., Xiao, J., Parikh, D., Torralba, A., & Oliva, A. (2014). What makes a photograph memorable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 1469–1482. 10.1109/TPAMI.2013.200, [DOI] [PubMed] [Google Scholar]
  56. Jaegle, A., Mehrpour, V., Mohsenzadeh, Y., Meyer, T., Oliva, A., & Rust, N. (2019). Population response magnitude variation in inferotemporal cortex predicts image memorability. eLife, 8, e47596. 10.7554/eLife.47596, [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Jin, J., & Maren, S. (2015). Prefrontal–hippocampal interactions in memory and emotion. Frontiers in Systems Neuroscience, 9, 170. 10.3389/fnsys.2015.00170, [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Khosla, A., Raju, A. S., Torralba, A., & Oliva, A. (2015). Understanding and predicting image memorability at a large scale (pp. 2390–2398). https://openaccess.thecvf.com/content_iccv_2015/html/Khosla_Understanding_and_Predicting_ICCV_2015_paper.html.
  59. Kim, H. (2011). Neural activity that predicts subsequent memory and forgetting: A meta-analysis of 74 fMRI studies. Neuroimage, 54, 2446–2461. 10.1016/j.neuroimage.2010.09.045
  60. Koch, G. E., Akpan, E., & Coutanche, M. N. (2020). Image memorability is predicted by discriminability and similarity in different stages of a convolutional neural network. Learning & Memory, 27, 503–509. 10.1101/lm.051649.120
  61. Konkel, A., & Cohen, N. J. (2009). Relational memory and the hippocampus: Representations and methods. Frontiers in Neuroscience, 3, 166–174. 10.3389/neuro.01.023.2009
  62. Kramer, M. A., Hebart, M. N., Baker, C. I., & Bainbridge, W. A. (2022). Semantics, not atypicality reflect memorability across concrete objects. Journal of Vision, 22, 3634. 10.1167/jov.22.14.3634
  63. Kramer, M. A., Hebart, M. N., Baker, C. I., & Bainbridge, W. A. (2023). The features underlying the memorability of objects. Science Advances, 9, eadd2981. 10.1126/sciadv.add2981
  64. Kriegeskorte, N., & Kievit, R. A. (2013). Representational geometry: Integrating cognition, computation, and the brain. Trends in Cognitive Sciences, 17, 401–412. 10.1016/j.tics.2013.06.007
  65. Light, L. L., Kayra-Stuart, F., & Hollander, S. (1979). Recognition memory for typical and unusual faces. Journal of Experimental Psychology: Human Learning and Memory, 5, 212–228.
  66. Lindh-Knuutila, T., & Honkela, T. (2015). Exploratory analysis of semantic categories: Comparing data-driven and human similarity judgments. Computational Cognitive Science, 1, 2. 10.1186/s40469-015-0001-1
  67. Liu, E. S., Hou, M., Koen, J. D., & Rugg, M. D. (2022). Effects of age on the neural correlates of encoding source and item information: An fMRI study. Neuropsychologia, 177, 108415. 10.1016/j.neuropsychologia.2022.108415
  68. Mangin, O., Filliat, D., Ten Bosch, L., & Oudeyer, P.-Y. (2015). MCA-NMF: Multimodal concept acquisition with non-negative matrix factorization. PLoS One, 10, e0140732. 10.1371/journal.pone.0140732
  69. McRae, K., Cree, G. S., Seidenberg, M. S., & McNorgan, C. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37, 547–559. 10.3758/BF03192726
  70. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv. 10.48550/arXiv.1301.3781
  71. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. arXiv. 10.48550/arXiv.1310.4546
  72. Milczarek, M. M., & Vann, S. D. (2020). The retrosplenial cortex and long-term spatial memory: From the cell to the network. Current Opinion in Behavioral Sciences, 32, 50–56. 10.1016/j.cobeha.2020.01.014
  73. Miyashita, Y. (2019). Perirhinal circuits for memory processing. Nature Reviews Neuroscience, 20, 577–592. 10.1038/s41583-019-0213-6
  74. Morales-Torres, R., Wing, E. A., Deng, L., Davis, S. W., & Cabeza, R. (2024). Visual recognition memory of scenes is driven by categorical, not sensory, visual representations. Journal of Neuroscience, 44, e1479232024. 10.1523/JNEUROSCI.1479-23.2024
  75. Morton, N. W., Zippi, E. L., Noh, S. M., & Preston, A. R. (2021). Semantic knowledge of famous people and places is represented in hippocampus and distinct cortical networks. Journal of Neuroscience, 41, 2762–2779. 10.1523/JNEUROSCI.2034-19.2021
  76. Mumford, J. A., Davis, T., & Poldrack, R. A. (2014). The impact of study design on pattern estimation for single-trial multivariate pattern analysis. Neuroimage, 103, 130–138. 10.1016/j.neuroimage.2014.09.026
  77. Mumford, J. A., Turner, B. O., Ashby, F. G., & Poldrack, R. A. (2012). Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. Neuroimage, 59, 2636–2643. 10.1016/j.neuroimage.2011.08.076
  78. Muttenthaler, L., & Hebart, M. N. (2022). Interpretable object dimensions in deep neural networks and their similarities to human representations. Journal of Vision, 22, 4516. 10.1167/jov.22.14.4516
  79. Naspi, L., Hoffman, P., Devereux, B., Thejll-Madsen, T., Doumas, L. A. A., & Morcom, A. (2021). Multiple dimensions of semantic and perceptual similarity contribute to mnemonic discrimination for pictures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47, 1903–1923. 10.1037/xlm0001032
  80. Naspi, L., Stensholt, C., Karlsson, A. E., Monge, Z. A., & Cabeza, R. (2023). Effects of aging on successful object encoding: Enhanced semantic representations compensate for impaired visual representations. Journal of Neuroscience, 43, 7337–7350. 10.1523/JNEUROSCI.2265-22.2023
  81. Needell, C. D., & Bainbridge, W. A. (2022). Embracing new techniques in deep learning for estimating image memorability. Computational Brain & Behavior, 5, 168–184. 10.1007/s42113-022-00126-5
  82. Paatero, P. (1997). Least squares formulation of robust non-negative factor analysis. Chemometrics and Intelligent Laboratory Systems, 37, 23–35. 10.1016/S0169-7439(96)00044-5
  83. Paatero, P., & Tapper, U. (1994). Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. Environmetrics, 5, 111–126. 10.1002/env.3170050203
  84. Pandya, S., Nicholls, V. I., Krugliak, A., Davis, S. W., & Clarke, A. (2024). Context and semantic object properties interact to support recognition memory. Quarterly Journal of Experimental Psychology. 10.1177/17470218241283028
  85. Patel, R., Steele, C. J., Chen, A. G. X., Patel, S., Devenyi, G. A., Germann, J., et al. (2020). Investigating microstructural variation in the human hippocampus using non-negative matrix factorization. Neuroimage, 207, 116348. 10.1016/j.neuroimage.2019.116348
  86. Peelen, M. V., & Caramazza, A. (2012). Conceptual object representations in human anterior temporal cortex. Journal of Neuroscience, 32, 15728–15736. 10.1523/JNEUROSCI.1953-12.2012
  87. Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532–1543). 10.3115/v1/D14-1162
  88. Perez-Arriaga, M. O., Estrada, T., & Abad-Mota, S. (2018). Construction of semantic data models. In Filipe J., Bernardino J., & Quix C. (Eds.), Data management technologies and applications (pp. 46–66). Springer International Publishing. 10.1007/978-3-319-94809-6_3
  89. Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879–891. 10.3758/BRM.40.3.879
  90. Rissman, J., Gazzaley, A., & D’Esposito, M. (2004). Measuring functional connectivity during distinct stages of a cognitive task. Neuroimage, 23, 752–763. 10.1016/j.neuroimage.2004.06.035
  91. Ritchey, M., Wang, S.-F., Yonelinas, A. P., & Ranganath, C. (2019). Dissociable medial temporal pathways for encoding emotional item and context information. Neuropsychologia, 124, 66–78. 10.1016/j.neuropsychologia.2018.12.015
  92. Roads, B. D., & Love, B. C. (2024). The dimensions of dimensionality. Trends in Cognitive Sciences, 28, 1118–1131. 10.1016/j.tics.2024.07.005
  93. Roesler, R., & McGaugh, J. L. (2022). The entorhinal cortex as a gateway for amygdala influences on memory consolidation. Neuroscience, 497, 86–96. 10.1016/j.neuroscience.2022.01.023
  94. Rosseel, Y., Jorgensen, T. D., Wilde, L. D., Oberski, D., Byrnes, J., Vanbrabant, L., et al. (2024). lavaan: Latent variable analysis (Version 0.6-18) [Computer software]. https://cran.r-project.org/web/packages/lavaan/index.html
  95. Ryals, A. J., Wang, J. X., Polnaszek, K. L., & Voss, J. L. (2015). Hippocampal contribution to implicit configuration memory expressed via eye movements during scene exploration. Hippocampus, 25, 1028–1041. 10.1002/hipo.22425
  96. Schneider, S., Coll, S. Y., Schnider, A., & Ptak, R. (2024). Electrophysiological analysis of signal detection outcomes emphasizes the role of decisional factors in recognition memory. Frontiers in Human Neuroscience, 18, 1358298. 10.3389/fnhum.2024.1358298
  97. Seamon, J. G., & Murray, P. (1976). Depth of processing in recall and recognition memory: Differential effects of stimulus meaningfulness and serial position. Journal of Experimental Psychology: Human Learning and Memory, 2, 680–687. 10.1037/0278-7393.2.6.680
  98. Shallice, T., Fletcher, P., Frith, C. D., Grasby, P., Frackowiak, R. S. J., & Dolan, R. J. (1994). Brain regions associated with acquisition and retrieval of verbal episodic memory. Nature, 368, 633–635. 10.1038/368633a0
  99. Sotiras, A., Resnick, S. M., & Davatzikos, C. (2015). Finding imaging patterns of structural covariance via non-negative matrix factorization. Neuroimage, 108, 1–16. 10.1016/j.neuroimage.2014.11.045
  100. Staresina, B. P., & Davachi, L. (2006). Differential encoding mechanisms for subsequent associative recognition and free recall. Journal of Neuroscience, 26, 9162–9172. 10.1523/JNEUROSCI.2877-06.2006
  101. Staresina, B. P., & Davachi, L. (2008). Selective and shared contributions of the hippocampus and perirhinal cortex to episodic item and associative encoding. Journal of Cognitive Neuroscience, 20, 1478–1489. 10.1162/jocn.2008.20104
  102. Stoinski, L. M., Perkuhn, J., & Hebart, M. N. (2024). THINGSplus: New norms and metadata for the THINGS database of 1854 object concepts and 26,107 natural object images. Behavior Research Methods, 56, 1583–1603. 10.3758/s13428-023-02110-8
  103. Turner, B. O., Paul, E. J., Miller, M. B., & Barbey, A. K. (2018). Small sample sizes reduce the replicability of task-based fMRI studies. Communications Biology, 1, 62. 10.1038/s42003-018-0073-z
  104. Tyler, L. K., Chiu, S., Zhuang, J., Randall, B., Devereux, B. J., Wright, P., et al. (2013). Objects and categories: Feature statistics and object processing in the ventral stream. Journal of Cognitive Neuroscience, 25, 1723–1735. 10.1162/jocn_a_00419
  105. Ueno, T., Allen, R. J., Baddeley, A. D., Hitch, G. J., & Saito, S. (2011). Disruption of visual feature binding in working memory. Memory & Cognition, 39, 12–23. 10.3758/s13421-010-0013-8
  106. Van Overschelde, J. P., Rawson, K. A., & Dunlosky, J. (2004). Category norms: An updated and expanded version of the Battig and Montague (1969) norms. Journal of Memory and Language, 50, 289–335. 10.1016/j.jml.2003.10.003
  107. Vinson, D. P., & Vigliocco, G. (2008). Semantic feature production norms for a large set of objects and events. Behavior Research Methods, 40, 183–190. 10.3758/brm.40.1.183
  108. Voss, J. L., Bridge, D. J., Cohen, N. J., & Walker, J. A. (2017). A closer look at the hippocampus and memory. Trends in Cognitive Sciences, 21, 577–588. 10.1016/j.tics.2017.05.008
  109. Wakeland-Hart, C. D., Cao, S. A., deBettencourt, M. T., Bainbridge, W. A., & Rosenberg, M. D. (2022). Predicting visual memory across images and within individuals. Cognition, 227, 105201. 10.1016/j.cognition.2022.105201
  110. Walsh, C. R., & Rissman, J. (2023). Behavioral representational similarity analysis reveals how episodic learning is influenced by and reshapes semantic memory. Nature Communications, 14, 7548. 10.1038/s41467-023-42770-w
  111. Weidemann, C. T., Kragel, J. E., Lega, B. C., Worrell, G. A., Sperling, M. R., Sharan, A. D., et al. (2019). Neural activity reveals interactions between episodic and semantic memory systems during retrieval. Journal of Experimental Psychology: General, 148, 1–12. 10.1037/xge0000480
  112. Xie, J., Douglas, P. K., Wu, Y. N., Brody, A. L., & Anderson, A. E. (2017). Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms. Journal of Neuroscience Methods, 282, 81–94. 10.1016/j.jneumeth.2017.03.008
  113. Xie, W., Bainbridge, W. A., Inati, S. K., Baker, C. I., & Zaghloul, K. A. (2020). Memorability of words in arbitrary verbal associations modulates memory retrieval in the anterior temporal lobe. Nature Human Behaviour, 4, 937–948. 10.1038/s41562-020-0901-2
  114. Yu, C., Huang, S., Howard, C. M., Hovhannisyan, M., Clarke, A., Cabeza, R., et al. (2025). Subsequent memory effects in cortical pattern similarity differ by semantic class. Journal of Cognitive Neuroscience, 37, 155–166. 10.1162/jocn_a_02238
  115. Zeidman, P., Mullally, S. L., & Maguire, E. A. (2015). Constructing, perceiving, and maintaining scenes: Hippocampal activity and connectivity. Cerebral Cortex, 25, 3836–3855. 10.1093/cercor/bhu266
  116. Zhao, X., Lynch, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37, 197–206. 10.1086/651257

Associated Data


Data Availability Statement

All de-identified imaging and behavioral data will be shared upon request via e-mail to the corresponding author, Simon Davis.


Articles from Journal of Cognitive Neuroscience are provided here courtesy of MIT Press
