. Author manuscript; available in PMC: 2022 Jun 18.
Published in final edited form as: Psychophysiology. 2020 Dec 19;58(3):e13750. doi: 10.1111/psyp.13750

Inferior parietal lobule is sensitive to different semantic similarity relations for concrete and abstract words

Maria Montefinese 1,2, Paola Pinti 3,4,#, Ettore Ambrosini 2,5,6,#, Ilias Tachtsidis 3, David Vinson 1
PMCID: PMC7612868  EMSID: EMS145882  PMID: 33340124

Abstract

Similarity measures, the extent to which two concepts have similar meanings, are key to understanding how concepts are represented, with different theoretical perspectives relying on very different sources of data from which similarity can be calculated. While there is some commonality among similarity measures, the extent of their correlation is limited. Previous studies also suggested that the relative performance of different similarity measures may vary depending on concept concreteness and that the inferior parietal lobule (IPL) may be involved in the integration of conceptual features in a multimodal system for semantic categorization. Here, we tested for the first time whether theory-based similarity measures predict the pattern of brain activity in the IPL differently for abstract and concrete concepts. English speakers performed a semantic decision task while we recorded their brain activity in the IPL through fNIRS. Using representational similarity analysis, results indicated that the neural representational similarity in the IPL conformed to the lexical co-occurrence among concrete concepts (regardless of hemisphere) and to the affective similarity among abstract concepts in the left hemisphere only, implying that semantic representations of abstract and concrete concepts are characterized along different organizational principles in the IPL. We observed null results for the decoding accuracy. Our study suggests that representational similarity analysis, used as a complement to decoding accuracy, is a promising tool to reveal similarity patterns between theoretical models and brain activity recorded through fNIRS.

1. Introduction

Similarity/relatedness measures, the extent to which two concepts have similar meaning, can reveal the nature of semantic representation: the knowledge we have of the world (Montefinese, 2019; Vigliocco et al., 2009). Numerous efforts have been made to understand the nature of semantic representation, assessing the extent to which a given theoretically based similarity measure predicts patterns in data. Some perspectives focus upon similarity in our affective and sensorimotor experience as inferred from verbal features (e.g., featural similarity; Montefinese, Vinson, et al., 2018; Montefinese et al., 2015; Vigliocco et al., 2004) or property ratings (e.g., affective content; Fairfield et al., 2017; Montefinese et al., 2014; Warriner et al., 2013), others upon regularities in spoken and written language (e.g., lexical co-occurrence; Andrews et al., 2009; Griffiths et al., 2007; Landauer & Dumais, 1997; Lund & Burgess, 1996). The similarity relation between two concepts may also be modeled as a measure of associative strength that reflects the probability that one concept evokes another in a free word association task (De Deyne et al., 2019; Nelson et al., 2004). Although these similarity measures are correlated with each other to some extent in characterizing semantic representation, they are not entirely overlapping and seem to target different aspects of word meaning (Montefinese & Vinson, 2017). This allows us to investigate whether the different similarity measures also relate differently to the patterns of brain activity, and to identify which theoretical approach relates most strongly to observed effects of meaning similarity.

Relative effects of different theory-based similarity measures in predicting participants’ performance in different semantic tasks (e.g., lexical decision; Brunellière et al., 2017; Heyman et al., 2015; Montefinese, Buchanan, et al., 2018; Vigliocco et al., 2004) may also vary depending on concept concreteness: the degree to which a concept denoted by a word refers to an entity that can be perceived through the senses (Brysbaert et al., 2014). This dimension is usually assessed by participants on Likert scales, in which concrete concepts lie on one side of the scale, referring to single, bounded, identifiable referents that can be perceived through the senses (Borghi et al., 2017). Abstract concepts lie on the opposite side of the scale, lack clearly perceivable referents (even if they might evoke scenes and emotional experiences), and are more strongly reliant on interoception (i.e., sensations inside the body; Connell et al., 2018; Montefinese et al., 2020). Indeed, compared to concrete concepts, abstract concepts are acquired later, mostly through language and social interaction (see the recent review by Dove et al., in press). By contrast, concrete concepts are more imageable (Paivio, 1990) and have greater availability of contextual information (Schwanenflugel et al., 1992).

Different organizational principles have been argued to govern semantic representations of concrete and abstract concepts: concrete concepts are predominantly organized by featural similarity and abstract concepts by associative relations (Crutch & Warrington, 2005; Hill et al., 2013). It has been proposed that abstract conceptual representation could be based more on linguistic information arising through patterns of co-occurrence and syntactic information (Gleitman & Papafragou, 2005; Vigliocco et al., 2013); however, studies in the literature suggest that lexical co-occurrence-based models either behave similarly for concrete and abstract concepts (Rotaru et al., 2018) or account better for concrete than for abstract concepts (Hill et al., 2013).

Concrete concepts can also be assigned to definite domains, such as natural kinds versus artifacts, and they are organized into hierarchical categories, while abstract concepts are considerably more variable and not organized into well-defined categories (Borghi et al., 2017). Moreover, participants agree more when they produce properties and associations for concrete words like “dog” compared with abstract words like “justice” or “freedom” (De Mornay Davies & Funnell, 2000; Tyler et al., 2002). Together, these studies indirectly suggest that concepts’ concreteness should be considered when comparing theory-based similarity measures.

In terms of the neural circuitry underlying the semantic system, two meta-analyses of neuroimaging studies showed a left-lateralized brain network (Binder et al., 2009; Wang et al., 2010). Wang et al.’s meta-analysis identified neural differences in abstract and concrete concept representations in two main systems, with abstract concepts relying on the verbal-language system and concrete concepts relying on the imagery and perceptual systems (Wang et al., 2010). In addition, Binder et al.’s meta-analysis showed that the semantic system includes seven brain regions: (1) lateral temporal cortex, (2) ventral temporal cortex, (3) dorsomedial prefrontal cortex, (4) inferior frontal gyrus, (5) ventromedial prefrontal cortex, (6) posterior cingulate gyrus and (7) inferior parietal lobule (IPL) (i.e., the angular gyrus and the adjacent supramarginal gyrus; Pulvermüller, 2013). The common denominator of these regions is their role in high-level integrative processes. Indeed, they are known to receive extensively processed, multimodal and supramodal inputs. In particular, the IPL is assumed to have the role of a convergence hub sustaining integration in a multimodal system (Binder & Desai, 2011). It has been suggested indeed that IPL may integrate features for semantic categorization (Koenig et al., 2005) and enable increasingly abstract, supramodal representations of perceptual experience (Binder & Desai, 2011). In other words, this high-level convergence zone binds representations from two or more modalities, and the resulting supramodal representations capture similarity structures that define categories (Binder & Desai, 2011). 
The information for encoding abstract versus concrete concept representations in this region may thus reflect the abstract/concrete distinction on a semantic comparison level, as a consequence of the differences in either sensorimotor information from mental imagery or associated verbal contexts (Binder et al., 2005; Dhond et al., 2007; Pexman et al., 2007; Sabsevitz et al., 2005; Wang et al., 2013, 2010). For these reasons, here, we aim to further investigate which kind of information IPL integrates depending on the concept concreteness.

In recent years, the advancement in analytical approaches for fMRI data, like the development of multivariate pattern analysis (MVPA) techniques, has allowed a more direct investigation of abstract and concrete conceptual representations by examining, for example, whether the pattern of functional brain responses (e.g., fMRI voxels or functional Near Infrared Spectroscopy (fNIRS) channels) can discriminate between two stimulus conditions. Being a data-driven analysis, MVPA does not require any hypothesis and exhibits a higher sensitivity than traditional univariate statistical methods (Liu et al., 2013). Recently, this analytical approach has been successfully extended to the analysis of fNIRS data (Emberson et al., 2017; Zinszer et al., 2017). In particular, MVPA has been used in fNIRS studies for different purposes, such as to discriminate children with attention-deficit/hyperactivity disorder from healthy controls during a working memory task (Gu et al., 2018), to decode visual and auditory stimuli in infants (Emberson et al., 2017), or to classify activation patterns associated with spoken and signed language in monolinguals (Mercure et al., 2020). To date, Zinszer and colleagues’ study (2017) is the only one that investigated whether semantic representations are encoded in fNIRS neuroimaging data in a semantic task. The authors examined whether participants’ neural responses to spoken words and their corresponding pictures predicted the pattern of similarity between concepts computed as a lexical co-occurrence measure. Results showed that the neural activity pattern in the occipital cortex was predicted by lexical co-occurrence across concepts.
This work represents a substantial step forward in the investigation of semantic representation with fNIRS, moving from the conventional univariate analyses, typically used to localize which brain region is involved in semantic tasks by analyzing one measurement location or channel at a time (e.g., Amiri et al., 2014), to multivariate methods, which consider the full spatial pattern of brain activity and channels simultaneously (Haynes & Rees, 2006).

Within this framework, representational similarity analysis (RSA; Kriegeskorte et al., 2008) may represent a promising tool for fNIRS-based decoding of brain activity, enabling the comparison between data from different sources. For instance, the semantic similarity structure of a word set (which can be modeled in various ways) can be correlated with the similarity structure of the regional hemodynamic response pattern elicited by the same words. RSA has proved particularly fruitful in fMRI research for investigating the relation between the functional response pattern and higher-level semantic representations between concepts, with a heavy focus on concrete knowledge (Devereux et al., 2013; Fairhall & Caramazza, 2013; Kriegeskorte et al., 2008; but see, e.g., Wang et al., 2017 for abstract concepts). Here, we aim to further expand and show the potential of MVPA-based methods to decode brain activity patterns from fNIRS data.

While there are limitations to fNIRS, such as the lack of anatomical information, a lower spatial resolution, and the ability to record only from the surface of the cortex, it presents some advantages that allow novel cognitive neuroscience investigations (Emberson et al., 2017; Pinti et al., 2019; Zinszer et al., 2017). For example, unlike fMRI, fNIRS does not require participants to lie in a confined scanner environment, and it is more robust to movements, making it suitable for a wide range of participant populations and tasks (Pinti et al., 2019). However, being a relatively recent technology, fNIRS still lacks standardized analysis procedures (Hocke et al., 2018; Pinti et al., 2019) and sophisticated analytical approaches comparable to those used for fMRI.

The specific question that we seek to answer is whether theory-based similarity measures predict the pattern of brain activity in IPL, and whether this differs for abstract and concrete words as recorded by fNIRS. Taking as a starting point the finding from Zinszer et al. (2017) that fNIRS can be used successfully to predict neural patterns from concept similarity, we address this question, advancing beyond previous work in several ways: (1) using a larger sample of stimuli, (2) directly comparing abstract and concrete concepts and alternative semantic models, and (3) employing RSA, which is particularly appropriate for comparing similarity patterns between data sets from very different sources.

We hypothesize that MVPA can be used to classify brain activation in response to abstract and concrete words. We also predict that the IPL encodes different kinds of similarity measures depending on word concreteness. Moreover, we hypothesize that information in the left hemisphere will be more critical than that in the right hemisphere to this classification and similarity pattern.

2. Method

2.1. Participants

Thirteen healthy adults (four females; mean age = 26.7 years; range: 20–40 years) took part in the fNIRS study, which included two experimental sessions. A sensitivity power analysis (G*Power 3 software; Faul et al., 2007) revealed that our sample size was large enough to have a statistical power of 0.80 to detect the significant (α = 0.05, two-tailed test) within-subject differences of interest (i.e., a main effect of hemisphere, word category, or their interaction; see Section 2.5.2) in either representation similarities or decoding accuracies with a medium effect size (Cohen's f = 0.3), assuming a correlation between repeated measures of 0.75.

All participants were right-handed, healthy, native English speakers with no history of neurological or psychiatric disorders and normal or corrected-to-normal vision. They were compensated for their participation and gave written informed consent to the experimental protocol approved by the University College London local research ethics committee. Due to technical issues, one participant completed 4 out of 6 runs (1 run in session 1, 3 runs in session 2) and another participant completed 3 out of 6 runs (3 runs in session 1).

2.2. Stimuli

The experimental stimuli consisted of 160 English words denoting 80 abstract and 80 concrete concepts derived from the English semantic feature norms (Buchanan et al., 2013, 2019; McRae et al., 2005; Vinson & Vigliocco, 2008). The words were assigned to the abstract/concrete type based on a cut-off value estimated from the distribution of the 5-point concreteness ratings for English (Brysbaert et al., 2014) for all words in the English semantic norms (Buchanan et al., 2013). In particular, visual inspection of the distribution of concreteness ratings (estimated using a Gaussian kernel smoothing function with an optimized bandwidth) indicated bimodality, which was confirmed by Hartigan's dip test of unimodality (p < .001), suggesting the existence of (at least) two sub-distributions reflecting concreteness ratings for abstract and concrete words. The distribution was thus submitted to a Gaussian mixture analysis, which confirmed that two sub-distributions were required to fit the data (R2 = 0.97; a model including a third sub-distribution did not significantly increase the fit, p > .05). The cut-off value was then estimated as the concreteness value corresponding to the intersection of these sub-distributions.
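The cut-off estimation described above can be sketched as follows: given two already-fitted Gaussian mixture components, find the concreteness value between their means at which the weighted densities intersect. This is a minimal illustration; the component weights, means, and SDs below are hypothetical, not the values fitted to the norms.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def concreteness_cutoff(w1, mu1, s1, w2, mu2, s2, lo=1.0, hi=5.0, n=10_000):
    """Find the point between the two component means where the weighted
    densities of the 'abstract' and 'concrete' components intersect."""
    x = np.linspace(lo, hi, n)
    d = w1 * gaussian_pdf(x, mu1, s1) - w2 * gaussian_pdf(x, mu2, s2)
    # Restrict the search to the interval between the two means,
    # where the sign change of d marks the intersection.
    mask = (x > min(mu1, mu2)) & (x < max(mu1, mu2))
    idx = np.where(np.diff(np.sign(d[mask])))[0][0]
    return float(x[mask][idx])

# Hypothetical component parameters (weight, mean, SD), for illustration only
cutoff = concreteness_cutoff(0.5, 2.5, 0.6, 0.5, 4.3, 0.5)
```

In practice the components would come from a fitted mixture model (e.g., via expectation-maximization); the grid search here simply locates the density crossing numerically.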

The selection of experimental stimuli was restricted to nouns for which all of the following English norms and measures, necessary to compute the similarity measures for the RSA (see below), were available: semantic norms (Buchanan et al., 2013), lexical co-occurrence (British National Corpus; Leech et al., 1994), association norms (De Deyne et al., 2019), and affective norms (Warriner et al., 2013). In order to obtain a more representative set of abstract and concrete concepts while maximizing their variability (and, thus, RSA efficiency; see, e.g., Meersmans et al., 2020), abstract and concrete concepts were selected based on a series of clustering analyses performed on the measures mentioned above, so that they could be grouped into four categories each, composed of a variable number of words (for the abstract concepts: social constructs, social attributes, cognitive events/states, and other abstract constructs; for the concrete concepts: professions, animals, vehicles, and buildings). The small number of words in each category, and their uneven number across categories, prevented us from further investigating between-category differences. Moreover, the experimental stimuli were selected so as to maximize the range in semantic similarity within abstract and concrete stimuli while keeping correlations among the semantic, lexical, and association similarities as low as possible (see below). A complete list of the selected stimuli is provided in the Supplementary Material.

A two-tailed independent t test confirmed the significant difference between abstract and concrete words for the concreteness measure (abstract: M = 2.72, SD = 0.73, concrete: M = 4.61, SD = 0.35; t(78) = 26.305, p < .0001, Cohen's d = 3.296). Abstract and concrete stimuli were naturally balanced for word length, word frequency, valence, and dominance (Brysbaert et al., 2014; Warriner et al., 2013), as shown by two-tailed independent t test comparisons (t(78) = 1.524, p > .132, Cohen’s d < 0.241). However, we refrained from balancing them on further semantic-lexical variables. Indeed, as already noted, it is important to select variable stimuli for RSA (e.g., Meersmans et al., 2020), and matching abstract and concrete concepts on all semantic-lexical variables would have resulted in a stimulus set including highly specific concepts, which would have lowered RSA efficiency.

We obtained five types of representational dissimilarity matrices (RDMs) for abstract and concrete words, one for each of the similarity measures considered here, that is, featural similarity, association strength, lexical co-occurrence, affective ratings, and orthographic similarity (as a control model). Each RDM is a symmetric n × n matrix, where n is the number of experimental conditions (i.e., n = 160 words in this study) and each off-diagonal element indicates the distance for each pair of words in a given measure. We maximized the range in semantic similarity within the concrete and abstract stimulus group as we assumed that a wider range of semantic similarities might increase the sensitivity of the representational similarity analysis of the fNIRS patterns (e.g., Meersmans et al., 2020).

The five similarity measures were computed as follows. The featural similarity measure was derived from English semantic norms, in which participants were asked to list the properties of each word, such as its physical, functional, and categorical properties (Buchanan et al., 2013); it was calculated as the cosine of the angle between the feature vectors for each pair of words (Buchanan et al., 2013). The association strength values were gathered from a large-scale continuous word association task for English, in which participants produced multiple associations for each cue word (De Deyne et al., 2019). Association strength was computed as the proportion of participants who gave the target response to a given cue word. The lexical co-occurrence was computed as the natural logarithm of the co-occurrence frequency plus one for each word pair in a symmetrical 5-word window in the British National Corpus (Leech et al., 1994). Since the results could be severely compromised if these measures of similarity were too highly correlated, we tried to keep these correlations as low as possible (the within-domain correlations between the three measures of similarity were all lower than 0.27); we also selected materials that, while still representing high levels of similarity overall, varied substantially within each domain (i.e., abstract, concrete) in relative similarity according to the three measures. The affective similarities were calculated as the Euclidean distance in the three-dimensional space characterized by valence (the way an individual judges a situation, from pleasant to unpleasant), arousal (the degree of activation an individual feels toward a given stimulus, from calm to exciting), and dominance (the degree of control an individual feels over a specific stimulus, from out of control to in control), each measured on a 9-point scale (Warriner et al., 2013).
Finally, the orthographic distance values were calculated as the Levenshtein distance (Levenshtein, 1966) between each pair of words, reflecting the number of deletions, insertions, and substitutions necessary to turn one word of the pair into the other (for example, the orthographic distance between “career” and “deer” is three, reflecting two deletions (“c” and “a”) and a substitution (from “r” to “d”)).
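The similarity and distance measures described in this section can be sketched as follows. This is a minimal illustration with toy vectors and ratings; all numeric values are hypothetical, not taken from the norms.

```python
import numpy as np

def cosine_similarity(u, v):
    """Featural similarity: cosine of the angle between feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def log_cooccurrence(count):
    """Lexical co-occurrence: natural log of co-occurrence frequency plus one."""
    return float(np.log(count + 1))

def affective_distance(vad1, vad2):
    """Affective similarity: Euclidean distance in the valence-arousal-dominance
    space (each dimension rated on a 9-point scale)."""
    return float(np.linalg.norm(np.asarray(vad1) - np.asarray(vad2)))

def levenshtein(a: str, b: str) -> int:
    """Orthographic distance: minimum number of deletions, insertions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

# Toy feature vectors and VAD ratings, for illustration only
dog, wolf = np.array([1.0, 3.0, 0.0, 2.0]), np.array([1.0, 2.0, 1.0, 2.0])
sim = cosine_similarity(dog, wolf)
d_aff = affective_distance((7.0, 4.0, 6.0), (4.0, 6.0, 4.0))
d_orth = levenshtein("career", "deer")  # 3, as in the example above
```

Each pairwise value would populate one off-diagonal cell of the corresponding 160 × 160 RDM.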

2.3. Procedure

The experimental task consisted of a semantic decision task, in which the participants were asked to assess the concreteness (abstract vs. concrete) of the word stimuli. The words were presented at the center of the screen in black Helvetica font against a grey background.

Each subject participated in six 12-min fNIRS runs divided into two sessions, performed on separate days one week apart. In each run, all 160 experimental stimuli (word trials) were presented exactly once, along with 40 interspersed null trials (consisting of the presentation of a fixation cross); the trial order was pseudorandomized so that there were no more than three concepts in a row from the same category (abstract, concrete) and no null event repetitions. By repeating experimental stimuli six times, but in different runs, we were able to maximize design efficiency (and the precision of the estimates) while avoiding temporal clustering, which is undesirable in this kind of design (Kriegeskorte et al., 2008). Different trial orders were used per subject.
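The pseudorandomization constraints described above can be sketched as a simple rejection-sampling procedure: reshuffle the trial list until no category runs longer than three words and no two null trials are adjacent. This is a naive illustration, not the authors' actual randomization code; for the full 200-trial design a constraint-aware generator would be far more efficient than blind reshuffling.

```python
import random

def make_run_order(abstract, concrete, n_null=40, max_streak=3, seed=None):
    """Build one run's trial order: every word appears exactly once, plus
    interspersed null trials, with no more than `max_streak` words in a row
    from the same category and no two null trials back to back."""
    rng = random.Random(seed)
    trials = ([("abstract", w) for w in abstract]
              + [("concrete", w) for w in concrete]
              + [("null", None)] * n_null)

    def ok(seq):
        streak, prev = 0, None
        for cat, _ in seq:
            if cat == "null":
                if prev == "null":        # no null repetitions
                    return False
                streak = 0                # a null trial breaks the streak
            else:
                streak = streak + 1 if cat == prev else 1
                if streak > max_streak:   # no more than 3 same-category words
                    return False
            prev = cat
        return True

    while True:                           # rejection sampling: reshuffle until valid
        rng.shuffle(trials)
        if ok(trials):
            return trials
```

A different seed per subject and run yields the per-subject trial orders mentioned in the text.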

The trial duration was 3 s, during which a fixation cross (200 ms) preceded the stimulus (i.e., a word or a null event, presented for 500 ms), which, in turn, was followed by the blank screen (ITI: 2,300 ms) in which participants could make their response by pressing the right or left arrow keys on a computer keyboard. The input key mapping was counterbalanced across participants. Participants were required to perform a semantic decision task, in which they judged whether the word denoted an abstract or concrete concept.

The fact that each word was repeated six times during the experiment introduces a possible source of noise because of novelty effects on the first presentation. We therefore asked participants to perform a paper-and-pencil version of the semantic decision task on the same stimuli as the computerized version at the beginning of the first experimental session, to reduce any novelty signals during the fNIRS session.

2.4. fNIRS data acquisition

fNIRS, like fMRI, is a neuroimaging technique based on neurovascular coupling that measures the changes in brain hemodynamics (oxyhemoglobin, HbO2, and deoxyhemoglobin, HbR) following neuronal activations using near-infrared light (Pinti et al., 2019). Brain hemodynamic and oxygenation changes were recorded over the inferior parietal cortex bilaterally using a wearable fNIRS device (LIGHTNIRS, Shimadzu Corp., Kyoto, Japan). The fNIRS system is equipped with eight light sources, emitting light at 780, 805, and 830 nm, and eight light detectors that were split and arranged into two 4 × 2 arrays as shown in Figure 1. The source-detector separation was set at 3 cm. Raw intensity signals were sampled at 13.33 Hz from 20 measurement channels, 10 per hemisphere.

Figure 1. Schematic representation of the fNIRS channel configuration.

Figure 1

Note: Sources (red circles) and detectors (blue circles) are arranged in a 4 × 2 configuration on each hemisphere, creating 20 measurement channels (white circles) in total. The channels of interest, covering the left and right IPL, are indicated in green.

For each participant, the coordinates of the fNIRS optodes and of the 10–20 standard anatomical landmarks (Nasion, Inion, right and left preauricular points, Cz) were recorded using a 3D magnetic digitizer (Liberty, Polhemus, Vermont). These were used to co-register the fNIRS channel locations onto a standard brain template using the SPM for fNIRS toolbox (https://www.nitrc.org/projects/spm_fnirs). The MNI coordinates and the anatomical locations of each channel were then estimated. The median group MNI coordinates, the corresponding anatomical locations, and the atlas-based probabilities are listed in Supplementary Table 1.

2.5. fNIRS data preprocessing and analysis

The Homer2 software package (Huppert et al., 2009) was used to preprocess the fNIRS signals. Raw intensity data were first visually inspected, and the presence of the heartbeat frequency (~1 Hz) in the signal power spectral density was assessed in order to identify channels showing poor signal-to-noise ratio due to detector saturation or poor optical coupling (Pinti et al., 2019). No channels were excluded from further analysis due to poor signal quality. Raw intensity signals were converted into optical density changes (Homer2 function: hmrIntensity2OD) and motion artifacts were corrected using the wavelet-based approach (Homer2 function: hmrMotionCorrectWavelet; iqr = 1.5; Molavi & Dumont, 2012). In order to reduce high-frequency (e.g., heart rate) and very low frequency noise, a band-pass filter was applied (Homer2 function: hmrBandpassFilt; order: 3rd; band-pass frequency range [0.01 0.6] Hz) and the concentration changes of HbO2 and HbR were then calculated with the modified Beer-Lambert law (Homer2 function: hmrOD2Conc; DPF = 6). The correlation-based signal improvement (CBSI; Cui et al., 2010) was used to combine HbO2 and HbR into the so-called “activation signal” (Scholkmann et al., 2014). This was done to infer functional brain activity from a single signal that includes both HbO2 and HbR, which can help reduce false positives at the statistical analysis stage (Tachtsidis & Scholkmann, 2016).

2.5.1. Contrast effects analysis

A General Linear Model (GLM) approach (Friston et al., 1994) was adopted to estimate the first-level (or single-subject) β-values on the CBSI signals. This was done using the SPM for fNIRS toolbox on the fNIRS activation signals, downsampled to 3 Hz, for each of the six runs individually. Specifically, our design matrix included 160 regressors, one for each word, computed through the convolution of the event timelines (modeled as stick functions) with the canonical hemodynamic response function. The regressors were fit to the fNIRS activation signals and the single-subject β-values were estimated. Then, for each word, we computed the t statistic on the β estimates, testing the hypothesis that the word was significantly related to the CBSI signal. This was carried out on the channels covering the IPL bilaterally (green-filled circles in Figure 1).
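The single-subject GLM step can be sketched as follows: each word's stick function is convolved with a hemodynamic response function to build one design-matrix column, and the β-values are estimated by least squares. This is a simplified illustration assuming a basic double-gamma HRF shape (peak ~5 s, undershoot ~15 s), not SPM's exact canonical HRF.

```python
import numpy as np

def canonical_hrf(fs, duration=30.0):
    """Simplified double-gamma HRF sampled at fs Hz: a positive gamma-shaped
    peak near 5 s minus a scaled undershoot near 15 s (a sketch, not SPM's)."""
    t = np.arange(0, duration, 1.0 / fs)
    peak = t ** 5 * np.exp(-t)
    under = t ** 15 * np.exp(-t)
    hrf = peak / peak.max() - 0.35 * under / under.max()
    return hrf / hrf.sum()

def fit_glm(signal, onsets_per_word, fs):
    """Build one regressor per word by convolving its onset stick function
    (onsets in seconds) with the HRF, then estimate betas by least squares."""
    n = len(signal)
    hrf = canonical_hrf(fs)
    X = np.zeros((n, len(onsets_per_word)))
    for j, onsets in enumerate(onsets_per_word):
        sticks = np.zeros(n)
        sticks[(np.asarray(onsets) * fs).astype(int)] = 1.0
        X[:, j] = np.convolve(sticks, hrf)[:n]   # truncate to signal length
    betas, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return betas, X
```

With a noiseless simulated signal built from the same design matrix, the least-squares fit recovers the generating β-values exactly, which is a useful sanity check before fitting real data.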

2.5.2. Representational similarity analysis and multivariate classification analysis

We performed a Representational Similarity (RS) analysis based on Spearman partial correlations computed between, on one side, a similarity matrix based on the neural activity patterns (brain) and, on the other, the similarity measures across abstract and concrete concepts (models). The brain similarity matrix was computed as the correlation between the neural activations for each pair of words. The five theoretical models were based on featural similarity, word association, lexical co-occurrence, affective content, and orthographic similarity (control model), as detailed in the Stimuli section. We also performed a leave-one-out item-level multivariate classification analysis, carried out using the procedure in Emberson et al. (2017; see also Anderson et al., 2016), to decode the concreteness category of single words (trial-level decoding).
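The brain-model RS computation can be sketched as follows: vectorize the lower triangles of the brain and model RDMs, rank-transform them, regress the control model out of both, and correlate the residuals. This is a minimal illustration of a Spearman partial correlation with a single nuisance model; average-rank tie handling is omitted for brevity.

```python
import numpy as np

def lower_triangle(rdm):
    """Vectorize the lower triangle (excluding the diagonal) of a symmetric RDM."""
    i, j = np.tril_indices(rdm.shape[0], k=-1)
    return rdm[i, j]

def rankdata(x):
    """Simple ranking (average-rank handling of ties omitted for brevity)."""
    return np.argsort(np.argsort(x)).astype(float)

def partial_spearman(brain_rdm, model_rdm, control_rdm):
    """Spearman partial correlation between brain and model dissimilarities,
    controlling for a third model (e.g., the orthographic control): rank the
    RDM vectors, regress the control out of both, correlate the residuals."""
    b, m, c = (rankdata(lower_triangle(r))
               for r in (brain_rdm, model_rdm, control_rdm))

    def residualize(y, x):
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    rb, rm = residualize(b, c), residualize(m, c)
    return float(np.corrcoef(rb, rm)[0, 1])
```

The resulting subject-wise RS values are what enter the hemisphere-by-concreteness GLM described below.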

Subject-wise RSs and mean trial-level decoding accuracies were compared between hemispheres and abstract and concrete categories with within-subjects GLM analysis. Post hoc one-tailed one-sample t tests were performed to test for simple effects (i.e., RSs and decoding accuracies significantly higher than 0 and 0.5, respectively). The same analysis was carried out on HbO2 and HbR separately and results are included in the Supplementary Material.

3. Results

3.1. Representational similarity analysis

We found different brain-model RSs for co-occurrence between abstract and concrete words, regardless of the hemisphere [F(1,12) = 16.00, p = .002, Cohen's f = 0.55; Figure 2]. This was due to a greater (and significant) RS for concrete [t(12) = 2.13, p = .027] as compared to abstract words [t(12) = −2.76, p = .955]. The hemisphere by concreteness interaction was not significant [F(1,12) = 0.20, p = .662, f = 0.06].

Figure 2. RSA results.

Figure 2

Note: The plots in the upper row show the mean representational similarity (RS, Spearman partial correlations) as a function of word concreteness (Abs, abstract; Conc, concrete) and hemisphere (left, in red; right, in blue) for each of the five theoretical models. The plots in the lower row show the corresponding differences between RSs in the left and right hemisphere as a function of word concreteness. Error bars represent within-subjects standard errors of the mean (Morey, 2008)

We also found different brain-model RSs for affective content between hemispheres and word concreteness [hemisphere by concreteness interaction: F(1,12) = 6.62, p = .024, f = 0.36]. This interaction was explained by a significantly greater (and significant) RS for abstract [t(12) = 2.02, p = .033] as compared to concrete [t(12) = −1.20, p = .873] words for the left hemisphere; the RSs for the abstract [t(12) = −1.44, p = .913] and concrete [t(12) = 0.83, p = .210] words were not significant for the right hemisphere.

No significant concreteness effect or hemisphere by concreteness interaction was found for the brain-model RSs for featural similarity [respectively, F(1,12) = 1.00 and 0.19, p = .337 and .670, f = 0.14 and 0.06], word association [respectively, F(1,12) = 1.81 and 0.07, p = .203 and .792, f = 0.19 and 0.04], and orthographic similarity [respectively, F(1,12) = 1.51 and 2.31, p = .241 and .155, f = 0.17 and 0.21].

3.2. Concreteness decoding analysis

No significant concreteness effect or hemisphere by concreteness interaction was found for the decoding accuracies for featural similarity [respectively, F(1,12) = 4.09 and 0.01, p = .066 and .958, f = 0.28 and 0.01], co-occurrence [respectively, F(1,12) = 0.14 and 1.02, p = .717 and .333, f = 0.05 and 0.14], and orthographic similarity [respectively, F(1,12) = 3.95 and 0.92, p = .070 and .356, f = 0.28 and 0.13]. A significant concreteness effect was found for word association [F(1,12) = 17.40, p = .001, f = 0.58], but the decoding accuracy was not significantly above chance level for either abstract or concrete words (all ps > .621). Moreover, a significant hemisphere by concreteness interaction was found for affective content [F(1,12) = 6.14, p = .029, f = 0.34], but again the decoding accuracy was not significantly above chance level in any case (all ps > .104).

4. Discussion

In this study, we tested for the first time whether theory-based similarity measures predicted the pattern of brain activity in the IPL differently for abstract and concrete words. To this end, we asked native English speakers to perform a semantic decision task requiring explicit coding of the concreteness of the words, while we recorded their brain activity in IPL through fNIRS. Using RSA on the fNIRS data, we found that the neural representational similarity in IPL conformed to the lexical co-occurrence among concrete concepts (regardless of hemisphere) and to the affective similarity among abstract concepts in the left hemisphere only. This concordance between neural and semantic-affective relationships within the supramarginal and angular gyri suggests that these regions encode semantic-affective information depending on word concreteness.

The role of the IPL in semantic processing has been supported by convergent evidence from human functional imaging studies (Binder & Desai, 2011; Binder et al., 2009; Wang et al., 2010), implicating this region in representational aspects of semantics. Although this evidence would suggest that the IPL may encode feature similarity among concepts, our results do not confirm this assumption: we did not find a significant correspondence between the featural similarity of concepts and the neural activation pattern in IPL. Rather, we found a correspondence in this region with the lexical co-occurrence of concrete concepts, in both hemispheres. These results appear consistent with a meta-analysis of 120 fMRI and PET studies on semantic processing that identified the bilateral angular gyrus as consistently more engaged in concrete than in abstract concept processing (Binder et al., 2009), but this does not explain why we found a reliable similarity pattern only for lexical co-occurrence and not for feature similarity. The IPL also responds to the statistical regularity of meaningful events (word/picture sequences) (Hoenig & Scheef, 2009; Kuperberg et al., 2003; Tinaz et al., 2006, 2008) and is part of a context-related processing network (Bar et al., 2008; Fornito et al., 2012). As lexical co-occurrence captures statistical regularities in language and reflects the co-occurrence of words in similar contexts, the IPL may be sensitive to regularities in written and spoken language as well as to the spatial and temporal regularities of events.
Support for this view comes from a body of evidence revealing that text-based models predict activity in a distributed network extending to the bilateral inferior parietal lobule during language and semantic processing (Anderson et al., 2015; Huth et al., 2016). Indeed, as in our study, Huth and colleagues (2016) found that the semantic representation of concepts, modeled as a corpus-based space, recruits both hemispheres during comprehension of natural speech. However, this does not explain why we observed a reliable similarity pattern only for concrete, and not also for abstract, words. This result was unexpected and at odds with the view that sensorimotor properties are more relevant in IPL for concrete concepts, whereas linguistic (or at least relational) properties are more relevant for abstract ones (Crutch & Warrington, 2005). If confirmed in future studies, this result would highlight the importance of linguistic and contextual information for concrete concepts as well.

It is also worth noting that single-word processing in a semantic task (as in our study) elicits more context-related information (Price, 2010) than processing of a word embedded in a sentence. Concrete concepts are strongly associated with a limited number of contexts compared to abstract concepts (i.e., abstract concepts have greater contextual diversity; see Brysbaert et al., 2014; Schwanenflugel, 1991) and are thus processed more efficiently, particularly when little or no context accompanies the concept's presentation. This characteristic is also reflected in our set of stimuli (a two-tailed independent t test on contextual diversity values shows a difference between abstract and concrete concepts: t(78) = 3.80, p < .001, d = 0.589). On this view, the greater availability of contextual information for concrete concepts conveyed by lexical co-occurrence may have driven the correspondence between the corpus-based space and the IPL activity pattern for concrete concepts only. It remains to be seen whether this reflects a genuine representational difference between concrete and abstract concepts overall, or is a product of this characteristic of our stimulus set.
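The stimulus check reported above can be reproduced schematically as below. The contextual diversity values here are simulated placeholders, not the study's stimuli, and the group size of 40 words per condition is an assumption consistent with the reported degrees of freedom (t(78) implies 80 items in two equal-variance groups).

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical contextual diversity values for illustration only.
rng = np.random.default_rng(1)
cd_abstract = rng.normal(2.6, 0.4, 40)
cd_concrete = rng.normal(2.3, 0.4, 40)

# Two-tailed independent t test (equal variances assumed, df = 78).
t, p = stats.ttest_ind(cd_abstract, cd_concrete)
d = cohens_d(cd_abstract, cd_concrete)
```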

We also found a correspondence between the affective content-based model and the activity pattern in IPL for abstract concepts. While we did not have a strong hypothesis about the IPL’s role in abstract representation, this pattern of results was not surprising: whereas concrete concepts usually have direct sensory referents (Crutch & Warrington, 2005; Montefinese, Ambrosini, Fairfield, & Mammarella, 2013a, 2013b), abstract concepts tend to be more emotionally valenced (Crutch et al., 2013; Kousta et al., 2011; Montefinese et al., 2020; Vigliocco et al., 2014) and have low sensorimotor grounding (for a concise review, see Montefinese, 2019). Indeed, several authors have emphasized the particular role of affective and social experience in the semantic representation of abstract concepts (Binder et al., 2016; Borghi et al., 2011; Wiemer-Hastings & Xu, 2005; Kousta et al., 2011; Vigliocco et al., 2009; Zdrazilova & Pexman, 2013). Brain activations in the left (or bilateral) IPL have been observed in response to the emotional salience of words (Kensinger & Corkin, 2004; Kensinger & Schacter, 2006; Skipper & Olson, 2014), and activations in the left IPL have also been shown to be modulated by both the valence and arousal dimensions of a word (Kensinger & Corkin, 2004).

Together, the results of our study indicate that the IPL represents distributional and affective information depending on word concreteness, implying that semantic representations of abstract and concrete concepts are organized along different principles in the IPL. Although some fMRI studies found a correspondence between the word association-based model and other brain regions (Liuzzi et al., 2019; Meersmans et al., 2020), we did not find a significant correspondence between this model and the activity pattern in the IPL. This suggests that association strength for written words may capture similarity between concepts less well than the other measures do, at least as reflected in IPL activity. However, future research is needed to confirm this conclusion.

We also observed unexpected null results for the decoding accuracy: that is, decoding accuracy was not significantly above chance level for any semantic model. This could be related to intrinsic limitations of fNIRS technology and to the low number of encoding channels we used. fNIRS instruments are typically equipped with a limited number of channels and, in general, brain activity is sampled at sparse, discrete locations of the cortex, with optodes placed 2–3 cm apart. fNIRS also has a spatial resolution of 2–3 cm, so recordings are much less finely grained than fMRI (Emberson et al., 2017). Therefore, MVPA is applied to fewer encoding channels than in fMRI, and each channel pools information from a larger brain volume than an fMRI voxel, which can impair the performance of the decoding procedure. Nonetheless, our study showed that the use of RSA as a complementary analysis to decoding accuracy is a promising tool to reveal similarity patterns between theoretical models and brain activity recorded through fNIRS. Future technological advances can help improve the performance of MVPA-based analyses on fNIRS data and fully establish multivariate analyses as a solid method for fNIRS-derived brain decoding. For instance, the development of whole-head diffuse optical tomography systems can increase the data quality, depth, and spatial resolution of fNIRS recordings by using dense grids of sources and detectors with different separations and overlapping measurements at different depths (Zhao & Cooper, 2017).

Supplementary Material

Additional supporting information may be found online in the Supporting Information section.

supinfo

Funding Information

This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 702655 to MM. This work was also supported by the Department of General Psychology, University of Padua (Italy), under research grant type B awarded to MM. IT and PP were supported by the Wellcome Trust (104580/z/14/z). This work was also supported by the “Department of Excellence 2018-2022” initiative of the Italian Ministry of Education (MIUR), awarded to the Department of Neuroscience, University of Padua.


Footnotes

Author contribution

Maria Montefinese: Conceptualization; Funding acquisition; Investigation; Methodology; Writing-original draft; Writing-review & editing. Paola Pinti: Data curation; Investigation; Methodology; Resources; Software; Supervision; Visualization; Writing-original draft; Writing-review & editing. Ettore Ambrosini: Data curation; Formal analysis; Funding acquisition; Methodology; Resources; Visualization; Writing-original draft; Writing-review & editing. Ilias Tachtsidis: Funding acquisition; Methodology; Project administration; Resources; Supervision; Writing-review & editing. David Vinson: Conceptualization; Data curation; Funding acquisition; Methodology; Project administration; Supervision; Writing-review & editing.

Conflict Of Interest

All authors declare that they have no financial or other conflicts of interest.

References

  1. Amiri M, Pouliot P, Bonnéry C, Leclerc PO, Desjardins M, Lesage F, Joanette Y. An exploration of the effect of hemodynamic changes due to normal aging on the fNIRS response to semantic processing of words. Frontiers in Neurology. 2014;5:249. doi: 10.3389/fneur.2014.00249. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Anderson AJ, Bruni E, Lopopolo A, Poesio M, Baroni M. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text. NeuroImage. 2015;120:309–322. doi: 10.1016/j.neuroimage.2015.06.093. [DOI] [PubMed] [Google Scholar]
  3. Anderson AJ, Zinszer BD, Raizada RDS. Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities. NeuroImage. 2016;128:44–53. doi: 10.1016/j.neuroimage.2015.12.035. [DOI] [PubMed] [Google Scholar]
  4. Andrews M, Vigliocco G, Vinson D. Integrating experiential and distributional data to learn semantic representations. Psychological Review. 2009;116(3):463–498. doi: 10.1037/a0016261. [DOI] [PubMed] [Google Scholar]
  5. Bar M, Aminoff E, Schacter DL. Scenes unseen: The parahippocampal cortex intrinsically subserves contextual associations, not scenes or places per se. Journal of Neuroscience. 2008;28(34):8539–8544. doi: 10.1523/JNEUROSCI.0987-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Binder JR, Conant LL, Humphries CJ, Fernandino L, Simons SB, Aguilar M, Desai RH. Toward a brain-based componential semantic representation. Cognitive Neuropsychology. 2016;33(3-4):130–174. doi: 10.1080/02643294.2016.1147426. [DOI] [PubMed] [Google Scholar]
  7. Binder JR, Desai RH. The neurobiology of semantic memory. Trends in Cognitive Sciences. 2011;15(11):527–536. doi: 10.1016/j.tics.2011.10.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Binder JR, Desai RH, Graves WW, Conant LL. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex. 2009 December;19:2767–2796. doi: 10.1093/cercor/bhp055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Binder JR, Westbury CF, McKiernan KA, Possing ET, Medler DA. Distinct brain systems for processing concrete and abstract concepts. Journal of Cognitive Neuroscience. 2005;17(6):905–917. doi: 10.1162/0898929054021102. [DOI] [PubMed] [Google Scholar]
  10. Borghi AM, Binkofski F, Castelfranchi C, Cimatti F, Scorolli C, Tummolini L. The challenge of abstract concepts. Psychological Bulletin. 2017;143(3):263–292. doi: 10.1037/bul0000089. [DOI] [PubMed] [Google Scholar]
  11. Borghi AM, Flumini A, Cimatti F, Marocco D, Scorolli C. Manipulating objects and telling words: A study on concrete and abstract words acquisition. Frontiers in Psychology. 2011 February;2:15. doi: 10.3389/fpsyg.2011.00015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Brunellière A, Perre L, Tran T, Bonnotte I. Cooccurrence frequency evaluated with large language corpora boosts semantic priming effects. Quarterly Journal of Experimental Psychology. 2017;70(9):1922–1934. doi: 10.1080/17470218.2016.1215479. [DOI] [PubMed] [Google Scholar]
  13. Brysbaert M, Warriner AB, Kuperman V. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods. 2014;46(3):904–911. doi: 10.3758/s13428-013-0403-5. [DOI] [PubMed] [Google Scholar]
  14. Buchanan EM, Holmes JL, Teasley ML, Hutchison KA. English semantic word-pair norms and a searchable Web portal for experimental stimulus creation. Behavior Research Methods. 2013;45(3):746–757. doi: 10.3758/s13428-012-0284-z. [DOI] [PubMed] [Google Scholar]
  15. Buchanan EM, Valentine KD, Maxwell NP. English semantic feature production norms: An extended database of 4436 concepts. Behavior Research Methods. 2019;51(4):1849–1863. doi: 10.3758/s13428-019-01243-z. [DOI] [PubMed] [Google Scholar]
  16. Connell L, Lynott D, Banks B. Interoception: The forgotten modality in perceptual grounding of abstract and concrete concepts. Philosophical Transactions of the Royal Society B: Biological Sciences. 2018;373(1752):20170143. doi: 10.1098/rstb.2017.0143. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Crutch SJ, Troche J, Reilly J, Ridgway GR. Abstract conceptual feature ratings: The role of emotion, magnitude, and other cognitive domains in the organization of abstract conceptual knowledge. Frontiers in Human Neuroscience. 2013 May;7:1–14. doi: 10.3389/fnhum.2013.00186. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Crutch SJ, Warrington EK. Abstract and concrete concepts have structurally different representational frameworks. Brain. 2005;128(3):615–627. doi: 10.1093/brain/awh349. [DOI] [PubMed] [Google Scholar]
  19. Cui X, Bray S, Reiss AL. Functional near infrared spectroscopy (NIRS) signal improvement based on negative correlation between oxygenated and deoxygenated hemoglobin dynamics. NeuroImage. 2010;49(4):3039–3046. doi: 10.1016/j.neuroimage.2009.11.050. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. De Deyne S, Navarro DJ, Perfors A, Brysbaert M, Storms G. The “Small World of Words” English word association norms for over 12,000 cue words. Behavior Research Methods. 2019;51(3):987–1006. doi: 10.3758/s13428-018-1115-7. [DOI] [PubMed] [Google Scholar]
  21. De Mornay Davies P, Funnell E. Semantic representation and ease of predication. Brain and Language. 2000;73(1):92–119. doi: 10.1006/brln.2000.2299. [DOI] [PubMed] [Google Scholar]
  22. Devereux BJ, Clarke A, Marouchos A, Tyler LK. Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience. 2013;33(48):18906–18916. doi: 10.1523/JNEUROSCI.3809-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Dhond RP, Witzel T, Dale AM, Halgren E. Spatiotemporal cortical dynamics underlying abstract and concrete word reading. Human Brain Mapping. 2007;28(4):355–362. doi: 10.1002/hbm.20282. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Dove G, Barca L, Tummolini L, Borghi AM. Words have a weight: Language as a source of inner grounding and flexibility in abstract concepts. Psychological Research, in press. Preprint: PsyArXiv. doi: 10.31234/osf.io/j6xhe. [DOI] [PubMed] [Google Scholar]
  25. Emberson LL, Zinszer BD, Raizada RDS, Aslin RN. Decoding the infant mind: Multivariate pattern analysis (MVPA) using fNIRS. PLoS One. 2017;12(4):e0172500. doi: 10.1371/journal.pone.0172500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Fairfield B, Ambrosini E, Mammarella N, Montefinese M. Affective norms for Italian words in older adults: Age differences in ratings of valence, arousal and dominance. PLoS One. 2017;12(1):e0169472. doi: 10.1371/journal.pone.0169472. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Fairhall SL, Caramazza A. Brain regions that represent Amodal conceptual knowledge. Journal of Neuroscience. 2013;33(25):10552–10558. doi: 10.1523/JNEUROSCI.0051-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods. 2007;39(2):175–191. doi: 10.3758/bf03193146. [DOI] [PubMed] [Google Scholar]
  29. Fornito A, Harrison BJ, Zalesky A, Simons JS. Competitive and cooperative dynamics of large-scale brain functional networks supporting recollection. Proceedings of the National Academy of Sciences of the United States of America. 2012;109(31):12788–12793. doi: 10.1073/pnas.1204185109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Friston KJ, Holmes AP, Worsley KJ, Poline J-P, Frith CD, Frackowiak RSJ. Statistical parametric maps in functional imaging: A general linear approach. Human Brain Mapping. 1994;2(4):189–210. doi: 10.1002/hbm.460020402. [DOI] [Google Scholar]
  31. Gleitman L, Papafragou A. In: The Cambridge handbook of thinking and reasoning. Holyoak KJ, Morrison RG, editors. Cambridge University Press; 2005. Language and thought; pp. 633–661. [Google Scholar]
  32. Griffiths TL, Steyvers M, Tenenbaum JB. Topics in semantic representation. Psychological Review. 2007;114(2):211–244. doi: 10.1037/0033-295X.114.2.211. [DOI] [PubMed] [Google Scholar]
  33. Gu Y, Miao S, Han J, Liang Z, Ouyang G, Yang J, Li X. Identifying ADHD children using hemodynamic responses during a working memory task measured by functional near-infrared spectroscopy. Journal of Neural Engineering. 2018;15(3):035005. doi: 10.1088/1741-2552/aa9ee9. [DOI] [PubMed] [Google Scholar]
  34. Haynes JD, Rees G. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience. 2006;7(7):523–534. doi: 10.1038/nrn1931. [DOI] [PubMed] [Google Scholar]
  35. Heyman T, Van Rensbergen B, Storms G, Hutchison KA, De Deyne S. The influence of working memory load on semantic priming. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2015;41(3):911–920. doi: 10.1037/xlm0000050. [DOI] [PubMed] [Google Scholar]
  36. Hill F, Kiela D, Korhonen A. Concreteness and corpora: A theoretical and practical analysis. CMCL 2013; Sofia: 2013. p. 75. [Google Scholar]
  37. Hocke L, Oni I, Duszynski C, Corrigan A, Frederick B, Dunn J. Automated processing of fNIRS data—A visual guide to the pitfalls and consequences. Algorithms. 2018;11(5):67. doi: 10.3390/a11050067. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Hoenig K, Scheef L. Neural correlates of semantic ambiguity processing during context verification. NeuroImage. 2009;45(3):1009–1019. doi: 10.1016/j.neuroimage.2008.12.044. [DOI] [PubMed] [Google Scholar]
  39. Huppert TJ, Diamond SG, Franceschini MA, Boas DA. HomER: A review of time-series analysis methods for near-infrared spectroscopy of the brain. Applied Optics. 2009;48(10):D280–D298. doi: 10.1364/AO.48.00D280. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Huth AG, de Heer WA, Griffiths TL, Theunissen FE, Gallant JL. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature. 2016;532(7600):453–458. doi: 10.1038/nature17637. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Wiemer-Hastings K, Xu X. Content differences for abstract and concrete concepts. Cognitive Science. 2005;29(5):719–736. doi: 10.1207/s15516709cog0000_33. [DOI] [PubMed] [Google Scholar]
  42. Kensinger EA, Corkin S. Two routes to emotional memory: Distinct neural processes for valence and arousal. Proceedings of the National Academy of Sciences. 2004;101(9):3310–3315. doi: 10.1073/pnas.0306408101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Kensinger EA, Schacter DL. Processing emotional pictures and words: Effects of valence and arousal. Cognitive, Affective and Behavioral Neuroscience. 2006;6(2):110–126. doi: 10.3758/CABN.6.2.110. [DOI] [PubMed] [Google Scholar]
  44. Koenig P, Smith EE, Glosser G, DeVita C, Moore P, McMillan C, Gee J, Grossman M. The neural basis for novel semantic categorization. NeuroImage. 2005;24:369–383. doi: 10.1016/j.neuroimage.2004.08.045. [DOI] [PubMed] [Google Scholar]
  45. Kousta S-T, Vigliocco G, Vinson DP, Andrews M, Del Campo E. The representation of abstract words: Why emotion matters. Journal of Experimental Psychology: General. 2011;140(1):14–34. doi: 10.1037/a0021446. [DOI] [PubMed] [Google Scholar]
  46. Kriegeskorte N, Mur M, Bandettini PA. Representational similarity analysis—Connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience. 2008;2:4. doi: 10.3389/neuro.06.004.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Kuperberg GR, Holcomb PJ, Sitnikova T, Greve D, Dale AM, Caplan D. Distinct patterns of neural modulation during the processing of conceptual and syntactic anomalies. Journal of Cognitive Neuroscience. 2003;15(2):272–293. doi: 10.1162/089892903321208204. [DOI] [PubMed] [Google Scholar]
  48. Landauer TK, Dumais ST. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review. 1997;104(2):211–240. doi: 10.1037/0033-295X.104.2.211. [DOI] [Google Scholar]
  49. Leech G, Garside R, Bryant M. CLAWS4; Proceedings of the 15th conference on Computational linguistics; Morristown, NJ. 1994. p. 622. [DOI] [Google Scholar]
  50. Levenshtein VI. Binary codes capable of correcting deletions, insertions, and reversals. Cybernetics and Control Theory. 1966;10(8):707–710. [Google Scholar]
  51. Liu P, Qin W, Wang J, Zeng F, Zhou G, Wen H, von Deneen KM, Liang F, Gong Q, Tian J. Identifying neural patterns of functional dyspepsia using multivariate pattern analysis: A resting-state fMRI study. PLoS One. 2013;8(7):e68205. doi: 10.1371/journal.pone.0068205. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Liuzzi AG, Dupont P, Peeters R, Bruffaerts R, De Deyne S, Storms G, Vandenberghe R. Left perirhinal cortex codes for semantic similarity between written words defined from cued word association. NeuroImage. 2019;191:127–139. doi: 10.1016/j.neuroimage.2019.02.011. [DOI] [PubMed] [Google Scholar]
  53. Lund K, Burgess C. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers. 1996;28(2):203–208. doi: 10.3758/BF03204766. [DOI] [Google Scholar]
  54. McRae K, Cree GS, Seidenberg MS, McNorgan C. Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods. 2005;37(4):547–559. doi: 10.3758/BF03192726. [DOI] [PubMed] [Google Scholar]
  55. Meersmans K, Bruffaerts R, Jamoulle T, Liuzzi AG, De Deyne S, Storms G, Dupont P, Vandenberghe R. Representation of associative and affective semantic similarity of abstract words in the lateral temporal perisylvian language regions. Neuroimage. 2020;217:116892. doi: 10.1016/j.neuroimage.2020.116892. [DOI] [PubMed] [Google Scholar]
  56. Mercure E, Evans S, Pirazzoli L, Goldberg L, Bowden-Howl H, Coulson-Thaker K, Beedie I, Lloyd-Fox S, Johnson MH, MacSweeney M. Language experience impacts brain activation for spoken and signed language in infancy: Insights from unimodal and bimodal bilinguals. Neurobiology of Language. 2020;1(1):9–32. doi: 10.1162/nol_a_00001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Molavi B, Dumont GA. Wavelet-based motion artifact removal for functional near-infrared spectroscopy. Physiological Measurement. 2012;33(2):259–270. doi: 10.1088/0967-3334/33/2/259. [DOI] [PubMed] [Google Scholar]
  58. Montefinese M. Semantic representation of abstract and concrete words: A minireview of neural evidence. Journal of Neurophysiology. 2019;121(5):1585–1587. doi: 10.1152/jn.00065.2019. [DOI] [PubMed] [Google Scholar]
  59. Montefinese M, Ambrosini E, Fairfield B, Mammarella N. Semantic memory: A feature-based analysis and new norms for Italian. Behavior Research Methods. 2013a;45(2):440–461. doi: 10.3758/s13428-012-0263-4. [DOI] [PubMed] [Google Scholar]
  60. Montefinese M, Ambrosini E, Fairfield B, Mammarella N. The “subjective” pupil old/new effect: Is the truth plain to see? International Journal of Psychophysiology. 2013b;89(1):48–56. doi: 10.1016/j.ijpsycho.2013.05.001. [DOI] [PubMed] [Google Scholar]
  61. Montefinese M, Ambrosini E, Fairfield B, Mammarella N. The adaptation of the Affective Norms for English Words (ANEW) for Italian. Behavior Research Methods. 2014;46(3):887–903. doi: 10.3758/s13428-013-0405-3. [DOI] [PubMed] [Google Scholar]
  62. Montefinese M, Ambrosini E, Visalli A, Vinson D. Catching the intangible: a role for emotion? Behavioral and Brain Sciences. 2020;45:e138. doi: 10.1017/s0140525x19002978. [DOI] [PubMed] [Google Scholar]
  63. Montefinese M, Buchanan EM, Vinson D. How well do similarity measures predict priming in abstract and concrete concepts? 2018. Jun 20, [DOI] [Google Scholar]
  64. Montefinese M, Vinson D. Resemblance among similarity measures in semantic representation. 2017. https://cogsci.mindmodeling.org/2017/papers/0809/
  65. Montefinese M, Vinson D, Ambrosini E. Recognition memory and featural similarity between concepts: The pupil’s point of view. Biological Psychology. 2018;155:159–169. doi: 10.1016/j.biopsycho.2018.04.004. [DOI] [PubMed] [Google Scholar]
  66. Montefinese M, Zannino GD, Ambrosini E. Semantic similarity between old and new items produces false alarms in recognition memory. Psychological Research Psychologische Forschung. 2015;79(5):785–794. doi: 10.1007/s00426-014-0615-z. [DOI] [PubMed] [Google Scholar]
  67. Morey RD. Confidence intervals from normalized data: A correction to Cousineau (2005) Tutorial in Quantitative Methods for Psychology. 2008;4:61–64. doi: 10.20982/tqmp.04.2.p061. [DOI] [Google Scholar]
  68. Nelson DL, McEvoy CL, Schreiber TA. The University of South Florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers. 2004;36(3):402–407. doi: 10.3758/BF03195588. [DOI] [PubMed] [Google Scholar]
  69. Paivio A. Mental representations: A dual coding approach. Oxford University Press; New York, NY: 1990. [DOI] [Google Scholar]
  70. Pexman PM, Hargreaves IS, Edwards JD, Henry LC, Goodyear BG. Neural correlates of concreteness in semantic categorization. Journal of Cognitive Neuroscience. 2007;19(8):1407–1419. doi: 10.1162/jocn.2007.19.8.1407. [DOI] [PubMed] [Google Scholar]
  71. Pinti P, Scholkmann F, Hamilton A, Burgess P, Tachtsidis I. Current status and issues regarding pre-processing of fNIRS neuroimaging data: An investigation of diverse signal filtering methods within a general linear model framework. Frontiers in Human Neuroscience. 2019;12:505. doi: 10.3389/fnhum.2018.00505. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Price CJ. The anatomy of language: A review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences. 2010;1191(1):62–88. doi: 10.1111/j.1749-6632.2010.05444.x. [DOI] [PubMed] [Google Scholar]
  73. Pulvermüller F. How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends in Cognitive Sciences. 2013 September 1;17(9):458–470. doi: 10.1016/j.tics.2013.06.004. [DOI] [PubMed] [Google Scholar]
  74. Rotaru AS, Vigliocco G, Frank SL. Modeling the structure and dynamics of semantic processing. Cognitive Science. 2018;42(8):2890–2917. doi: 10.1111/cogs.12690. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Sabsevitz DS, Medler DA, Seidenberg M, Binder JR. Modulation of the semantic system by word imageability. NeuroImage. 2005;27(1):188–200. doi: 10.1016/j.neuroimage.2005.04.012. [DOI] [PubMed] [Google Scholar]
  76. Scholkmann F, Kleiser S, Metz AJ, Zimmermann R, Mata Pavia J, Wolf U, Wolf M. A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. NeuroImage. 2014;85:6–27. doi: 10.1016/j.neuroimage.2013.05.004. [DOI] [PubMed] [Google Scholar]
  77. Schwanenflugel PJ. Chapter 2: Contextual constraint and lexical processing. Advances in Psychology. 1991;77(C):23–45. doi: 10.1016/S0166-4115(08)61528-9. [DOI] [Google Scholar]
  78. Schwanenflugel PJ, Akin C, Luh WM. Context availability and the recall of abstract and concrete words. Memory & Cognition. 1992;20(1):96–104. doi: 10.3758/bf03208259. [DOI] [PubMed] [Google Scholar]
  79. Skipper LM, Olson IR. Semantic memory: Distinct neural representations for abstractness and valence. Brain and Language. 2014;150:1–10. doi: 10.1016/j.bandl.2014.01.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Tachtsidis I, Scholkmann F. False positives and false negatives in functional near-infrared spectroscopy: Issues, challenges, and the way forward. Neurophotonics. 2016;3(3):031405. doi: 10.1117/1.nph.3.3.031405. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Tinaz S, Schendan HE, Schon K, Stern CE. Evidence for the importance of basal ganglia output nuclei in semantic event sequencing: An fMRI study. Brain Research. 2006;1067(1):239–249. doi: 10.1016/j.brainres.2005.10.057. [DOI] [PubMed] [Google Scholar]
  82. Tinaz S, Schendan HE, Stern CE. Fronto-striatal deficit in Parkinson’s disease during semantic event sequencing. Neurobiology of Aging. 2008;29(3):397–407. doi: 10.1016/j.neurobiolaging.2006.10.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Tyler LK, Moss HE, Galpin A, Voice JK. Activating meaning in time: The role of imageability and form-class. Language and Cognitive Processes. 2002;17(5):471–502. doi: 10.1080/01690960143000290. [DOI] [Google Scholar]
  84. Vigliocco G, Kousta S-T, Della Rosa PA, Vinson DP, Tettamanti M, Devlin JT, Cappa SF. The neural representation of abstract words: the role of emotion. Cerebral Cortex. 2014;24(7):1767–1777. doi: 10.1093/cercor/bht025. [DOI] [PubMed] [Google Scholar]
  85. Vigliocco G, Kousta S, Vinson D, Andrews M, Del Campo E. The representation of abstract words: What matters? Reply to Paivio’s (2013) comment on Kousta et al. (2011) Journal of Experimental Psychology: General. 2013;142(1):288–291. doi: 10.1037/a0028749. [DOI] [PubMed] [Google Scholar]
  86. Vigliocco G, Meteyard L, Andrews M, Kousta S. Toward a theory of semantic representation. Language and Cognition. 2009;1(02):219–247. doi: 10.1515/LANGCOG.2009.011. [DOI] [Google Scholar]
  87. Vigliocco G, Vinson DP, Lewis W, Garrett MF. Representing the meanings of object and action words: The featural and unitary semantic space hypothesis. Cognitive Psychology. 2004;48(4):422–488. doi: 10.1016/j.cogpsych.2003.09.001. [DOI] [PubMed] [Google Scholar]
  88. Vinson DP, Vigliocco G. Semantic feature production norms for a large set of objects and events. Behavior Research Methods. 2008;40(1):183–190. doi: 10.3758/BRM.40.1.183. [DOI] [PubMed] [Google Scholar]
  89. Wang J, Baucom LB, Shinkareva SV. Decoding abstract and concrete concept representations based on single-trial fMRI data. Human Brain Mapping. 2013;34(5):1133–1147. doi: 10.1002/hbm.21498. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Wang J, Conder JA, Blitzer DN, Shinkareva SV. Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Human Brain Mapping. 2010;31(10):1459–1468. doi: 10.1002/hbm.20950. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Wang X, Wu W, Ling Z, Xu Y, Fang Y, Wang X, Binder JR, Men W, Gao J-H, Bi Y. Organizational principles of abstract words in the human brain. Cerebral Cortex. 2017;28(12):1–14. doi: 10.1093/cercor/bhx283. [DOI] [PubMed] [Google Scholar]
  92. Warriner AB, Kuperman V, Brysbaert M. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods. 2013;45(4):1191–1207. doi: 10.3758/s13428-012-0314-x. [DOI] [PubMed] [Google Scholar]
  93. Zdrazilova L, Pexman PM. Grasping the invisible: Semantic processing of abstract words. Psychonomic Bulletin & Review. 2013;20(6):1312–1318. doi: 10.3758/s13423-013-0452-x. [DOI] [PubMed] [Google Scholar]
  94. Zhao H, Cooper RJ. Review of recent progress toward a fiberless, whole-scalp diffuse optical tomography system. Neurophotonics. 2017;5(01):1. doi: 10.1117/1.nph.5.1.011012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Zinszer BD, Bayet L, Emberson LL, Raizada RDS, Aslin RN. Decoding semantic representations from functional near-infrared spectroscopy signals. Neurophotonics. 2017;5(01):1. doi: 10.1117/1.NPh.5.1.011003. [DOI] [PMC free article] [PubMed] [Google Scholar]
