Quarterly Journal of Experimental Psychology. 2023 Dec 21;77(10):2137–2150. doi: 10.1177/17470218231216304

Noun imageability and the processing of sensory-based information

Johannes Gerwien 1, Maroš Filip 2, Filip Smolík 2
PMCID: PMC11445977  PMID: 37953293

Abstract

The aim of this study was to test whether the availability of internal imagery elicited by words is related to ratings of word imageability. Participants are presented with target words and, after a delay allowing for processing of the word, answer questions regarding the size or weight of the word referents. Target words differ with respect to imageability. Results show faster responses to questions for high imageability words than for low imageability words. The type of question (size/weight) modulates reaction times suggesting a dominance of the visual domain over the physical-experience domain in concept representation. Results hold across two different languages (Czech/German). These findings provide further insights into the representations underlying word meaning and the role of word imageability in language acquisition and processing.

Keywords: Imageability, mental imagery, semantic representation, word processing

Introduction

Some words easily elicit sensory images of their referents in the minds of readers or listeners. For example, it is easy to form an internal image upon hearing the word “apple”: we can quickly imagine not only its visual appearance but also its taste, smell, or the tactile impression of touching it. Other common words are much weaker in this respect: the word “shop,” for example, can elicit various mental images, but the clarity and speed with which this happens are much reduced. The ability of words to elicit mental images is labelled imageability, and it has been shown to affect the acquisition and processing of words. However, the relation between imageability and sensory or perceptual processes is not fully understood. This article aims to address this topic by testing sensory-based decisions about referents of words with different levels of imageability.

Imageability and words

The effects of word imageability, that is, how easily a word can evoke a mental image of the respective word referent, were first discovered in research on paired-associate word learning (Paivio, 1967, 1969). Later research suggested that highly imageable words are not only remembered better but also recognised and possibly named faster (Cortese et al., 2015; de Groot, 1989; Khanna & Cortese, 2021; Kroll & Merves, 1986; Strain et al., 1995), and acquired earlier (Łuniewska et al., 2019; Masterson et al., 2008; Morrison et al., 1997; Smolík, 2019). Its effects are not limited to the processing of word stems: imageability also facilitates morphological operations and acquisition (Butler et al., 2012; Prado & Ullman, 2009; Smolík, 2014; Smolík & Kříž, 2015). Imageability has also been linked to conceptual accessibility during online language production. Concepts that are linked to words with high imageability can be accessed faster, which, for example, has consequences for the generation of sentences: more accessible concepts tend to appear earlier in a sentence and/or receive grammatical roles higher in the syntactic hierarchy (subject role/position over object role/position; Bock & Warren, 1985; MacDonald, 2013). However, a recent study by Khanna and Cortese (2021) found that while imageability significantly predicts recognition memory, it explains little additional variance in lexical decision performance and reading-aloud accuracy, suggesting that imageability does not interact strongly with the lexical processing system in all tasks, especially when other factors such as age of acquisition (AoA) are controlled for.

Although the relation between imageability and linguistic representation and language processing is well established overall, there is no generally accepted explanation for it. Imageability has often been treated as a nuisance variable to control for, rather than as a word characteristic that is important in lexical representations and processing. In part, this may be because the connection between imageability and the internal representation of words and concepts has not been sufficiently addressed. When the nature of imageability has been addressed directly, it has often been in the context of research on conceptual representations rather than the lexicon and language processing. We will use “imageability” as a term referring to a property of both words and concepts.

Imageability is a property of words that is based on subjective assessment. It cannot be measured directly but is estimated from average ratings across people who are asked to judge how quickly and easily a word elicits a mental image. Although the intuition of internal imagery is involved in generating the ratings, there is little direct evidence about the source of these images. It is difficult to judge whether different people use similar criteria in their ratings, and which parts of the cognitive system contribute to them. In particular, the rating process provides no direct evidence that the intuitions used in the ratings are as perception-like as suggested by previous studies (Kosslyn et al., 2006) and by introspection. Without such evidence, it is difficult to understand the exact role imageability plays in the representation of words or concepts, and thus to explain its effects on language processing.

Probably the strongest source of validation for imageability ratings comes from studies that use neuroimaging or patient data to find brain correlates of processing high- and low-imageability words or concepts. In this line of research, imageability often comes into play via the notion of concreteness/abstractness, with words referring to concrete entities typically being rated as more imageable than words referring to abstract entities. Representation and processing of abstract and concrete words can be differentially affected by damage to various brain regions (Loiselle et al., 2012), for example, the left prefrontal cortex and the anterior temporal lobe (Bonner et al., 2009; Sirigu et al., 1991). In addition, patients with some types of brain damage may show a reversal of concreteness effects, exhibiting better performance with less concrete words (Bonner et al., 2009; Breedin et al., 1994; Crutch & Warrington, 2005), indicating that the facilitatory effects of imageability may be lost under some circumstances. A recent meta-analysis of functional imaging studies (Bucur & Papagno, 2021) indicated that the processing of concrete, highly imageable words is associated with activation in various parietal areas, while abstract words specifically activate inferior frontal and temporal regions. These findings indicate that the dimension of imageability or concreteness has measurable correlates independent of individual ratings or judgements, providing external validity to the characteristic.

As indicated in the previous paragraph, the theoretical concepts of concreteness and imageability are often used interchangeably, and the major theoretical accounts of imageability effects (see below) are formulated so that they address both imageability and concreteness (Paivio et al., 1994; Schwanenflugel, 1991; Schwanenflugel & Shoben, 1983). However, there are arguments for separating the two. For example, some words score high on imageability but low on concreteness (e.g., love) and vice versa. Typically, imageability focuses on how easily a word elicits an internal image, while concreteness focuses on the availability of the referent as a physical object that can be interacted with (Kousta et al., 2011). One consequence of this is that concreteness ratings are often distributed bimodally, while imageability tends to have a unimodal distribution (Brysbaert et al., 2014; Della Rosa et al., 2010). Despite the differentiation between imageability and concreteness, this study is concerned with how sensory-based processing relates to imageability. However, given the high correlation between ratings of imageability and concreteness for many words, we expect that the results could be very similar for concreteness.

Theoretical accounts of imageability

Two major theoretical accounts exist for the effects of imageability. The dual coding theory (Paivio, 1971, 2013) is based on the idea that there are two modes of mental representation, image-based and verbal. While abstract, low-imageability words are represented exclusively via the verbal representational system, words with higher imageability also recruit the image-based system, which means that more imageable words have richer underlying representations. The second major approach is based on context availability (Schwanenflugel & Shoben, 1983), assuming that it is easier to find situations or contexts in which highly imageable words can occur. In addition to these traditional theories, the issue of imageability can be addressed using recent general models of semantic representation that often build upon neuroscience findings, including the lesion studies discussed above. Models such as the hub-and-spoke model (Hoffman & Lambon Ralph, 2011; Patterson et al., 2007; Pobric et al., 2010) or the dynamic reactivation framework (Reilly et al., 2016; Reilly & Peelle, 2008) share the idea that concepts linked with words are represented in concentric structures, perhaps with multiple levels. The centre, or “hub,” of such a representation integrates information from several external and internal sources, including sensory information. More abstract concepts are represented in the hubs, or in higher-level hubs, without necessarily activating peripheral sensory information. The spokes, however, include connections between the hub and sensory systems, which are active in representing concrete and imageable objects. The hub-and-spoke models can thus represent various levels of abstractness: words and concepts with more imageable, sensory content activate the spokes, that is, the connections between hubs and sensory systems, while abstract words and concepts are represented by hubs with no or only limited connections to sensory systems. The relative role of each representational component may account for the continuous scales of imageability or abstractness. One of the key questions is whether word imageability indeed recruits, in part, the same resources as perception and sensation.

The present study

Although there is solid neuropsychological evidence confirming the differences between high- and low-imageability words, it is difficult to say whether these differences are indeed due to richer or faster internal imagery. Since imageability ratings are subjective, it could be that words are rated as highly imageable because of some other property that results in faster processing as well as in high imageability ratings. To establish links between imageability and internal imagery, the relationship should be tested more directly.

This study was inspired by approaches that have previously been used to argue for a link between a verbal and a sensory-based mode of representation, such as the mental scanning task, in which participants are asked to make sensory-based decisions about internally generated images elicited by words (Kosslyn, 1973, 1975; Kosslyn et al., 1978; Pinker et al., 1984; Reed et al., 1983). However, we modified and amended previous approaches with the goal of obtaining a more comprehensive understanding of how imageability relates to the time course of sensory-based processing, the representation of information from different modalities, and the type of resources that are mobilised during image generation and processing.

Here, we present a noun and ask participants to imagine the corresponding referent and then to answer one of two questions about it: Is it smaller than a car? or Does it weigh a lot? For reasons described in more detail below, we assumed that this approach would allow us to determine when participants begin sensory-based processing, namely upon presentation of the question, which is where we start to measure. If internally generated images rely on representations that overlap with those generated by sensory processes, stronger activation of internal images should make the sensory content more easily available. Stronger activation of sensory representations for high-imageability words relative to low-imageability words should thus facilitate answers to questions that require access to these representations. This way, we can test how reaction times (RTs) relate to imageability ratings.

The modifications we implemented were the following. First, we addressed the possibility that faster responses to questions about words are due to easier lexical processing, that is, to factors such as word frequency or length. If lexical processing proceeds faster for some words, decisions about the associated referent could be made faster regardless of imagery. To account for this problem, we adopted a new approach: after participants are presented with the target word, the question about the associated concept is not shown immediately but after a delay long enough for lexical processing to be considered finished. The lexical processing of the stimulus words should be fully completed 1000 ms after word onset (Holcomb & Grainger, 2006; Laszlo & Federmeier, 2014), and any additional effects of imageability are likely due to the processes we are interested in here. Second, methodological and theoretical considerations led us to present participants with two different questions that alternate randomly within the experiment. If participants do not know in advance which question they will receive, they cannot start to prepare an answer before question onset—a strategy one might expect if there were only one question. Also, the different contents of the two questions—one targeting visual properties (size), the other targeting weight—may allow us to infer potential differences between modalities. Third, we systematically manipulated the time between word offset and the question, using three different durations (stimulus-onset asynchrony (SOA) manipulation). This way, we aimed to tap into the time course of internal imagery. One of the assumptions about word imageability is that highly imageable words elicit internal images faster than less imageable words. We thus expected that the effects of imageability could differ according to the time people are given for generating the images before question onset.

Next, we motivate an additional manipulation that, to the best of our knowledge, has not been reported before. Previous research has shown that mental imagery and perception may interact in several ways. However, findings are mixed. There is evidence that imagery during perception may decrease (Perky, 1910; Segal & Fusella, 1970) or increase (Dijkstra & Fleming, 2021; Moseley et al., 2016) the likelihood of detecting an input signal, which may depend on the similarity between the imagined and the perceived input. One possible way in which perception and mental imagery interact was illustrated in an ERP study by Villena-González et al. (2016), who showed that the P1 amplitude, related to a sensory response, was significantly attenuated when participants engaged in mental imagery while being exposed to external visual stimulation. This finding suggests that attention is shared between “outward-directed” and “inward-directed” visual processing. In addition, it has been shown that the strength of the interaction may depend on the extent to which participants can engage in mental imagery (Bergmann et al., 2016; Keogh & Pearson, 2014). To test the interaction between mental imagery elicited by words and the processing of external input, we included a condition in which participants see a flickering checkerboard while they engage in internal imagery. If internal imagery uses partly the same resources as sensory and perceptual processes, this should have detrimental effects on mental imagery. Thus, we investigate the shared-resources phenomenon from the opposite direction compared with the studies mentioned above, asking whether external stimulation interferes with ongoing internal imagery, and whether word imageability plays a role in this.

Finally, we conducted the experiment with native speakers of Czech and German to be better able to generalise potential findings. We used translation equivalents of the target words between languages (see “Method” section for details). Thus, overall, the specific conceptual representations targeted in this study are very similar between the two languages. If effects of imageability are observed in both languages, they are less likely to be due to surface properties of individual words because these will differ across languages.

The trials in our study start with the presentation of a target word. Participants are instructed to form a vivid internal image of the word referent and, when prompted by a cue, to answer one of two questions about the word referent by pressing one of two buttons (yes/no). The structure of trials is shown in Figure 1.

Figure 1. Time course of experimental trials.

Imageability of target words varies within subjects as a continuous predictor. Question varies within subjects. The SOA manipulation has three possible values (1, 1.5, and 2 s) and also varies within subjects. Checkerboard (presence/absence) varies between subjects: during the period between word offset and question-cue onset, one half of the participants see a flickering checkerboard, while the other half see a blank screen. All target words appear with both questions, in all SOA conditions, and with and without the checkerboard. Combining and counterbalancing all variables resulted in twelve experimental lists per language group.

Questions and hypotheses

The general goal of our study was to test whether imageability ratings are linked to the processing time for internal imagery. We preregistered seven hypotheses (OSF preregistration: https://doi.org/10.17605/OSF.IO/CG45A). (1) RTs for answers to our questions will be longer for words with low imageability ratings than for words with high imageability ratings. One caveat with respect to this hypothesis is the case of purely abstract words with very low imageability ratings, which could result in faster responses due to the clear irrelevance of the sensory questions. (2) RTs will be longer in the short SOA condition than in the long SOA condition, due to the time required to develop internal images. (3) With increasing SOA, RT differences between words with high and low imageability ratings will decrease. If inward- and outward-oriented attention share in part the same cognitive resources, we expect the following: (4) RTs for answers will be longer when the checkerboard is present than when it is absent. (5) RTs for answers to high-imageability words will be longer in the presence of the checkerboard relative to its absence. (6) RTs for answers to low-imageability words will not differ between absence and presence of the checkerboard. (7) From points 5 and 6, it follows that there should be a three-way interaction of SOA, checkerboard, and imageability.

Method

Participants

A total of 168 participants took part in the experiment: 84 Czech speakers (mean age = 21.69 years, range = 19–32) and 84 German speakers (mean age = 27.39 years, range = 19–70). For the Czech part of the experiment, participants were recruited from the pool of the LABELS lab (Laboratory of Behavioural and Linguistic Studies) via the online recruitment system ORSEE. The database contains mostly students participating for course credit and a few volunteers. German participants were recruited through the “Studienportal” database at Heidelberg University and received course credit for their participation. The research was approved by the Ethics Committee of the Institute of Psychology of the Czech Academy of Sciences.

Stimuli

A total of 120 words varying in imageability served as stimuli. These words, along with their ratings, were chosen from a previous study (Grandy et al., 2020) in which imageability ratings were collected for more than 2,500 German words. Since that study collected two imageability values, one from a younger population (20–31 years old) and one from an older population (70–86 years old), we used the average of the two in the current study. This seemed justified because, for the words we selected, the values from the two sets were very highly correlated, Pearson’s r(118) = .92, p < .001. The choice of words for our final set faced several challenges. Target words had to be translatable into Czech as one-word equivalents, had to be of similar length in both languages, and had to cover the entire spectrum of imageability ratings. To achieve this, we created a list of 120 words divided into six blocks of 20 words each. Words in each block had the same or similar length in letters. Each block covered a different range of imageability ratings, and words within each block had the same or similar imageability values. The only exception was the two blocks with the lowest imageability scores, which covered a wider range of values. Since imageability ratings are not evenly distributed across the range of the scale, with more words having high than low imageability ratings, we mimicked this trend in our selection. The German list was then translated into Czech, and imageability ratings for Czech were collected in a separate experiment with 24 participants, who each rated all 120 words, presented in random order. Mean ratings were calculated for each item and then compared with the German set.

In the German set, the imageability ratings were collected using a visual analogue scale with values from 0 to 100 (Grandy et al., 2020); the mean rating for the words selected for our experiment is 73.19 (SD = 21.83, range = 21.59 to 97.69), and the mean word length in letters is 6.2 (SD = 1.84, range = 3 to 10). The Czech imageability ratings were collected using a 7-point Likert-type scale, with 1 indicating low and 7 indicating high imageability. This scale was chosen for consistency with previous projects collecting Czech imageability ratings (e.g., Smolík, 2019; Smolík & Kříž, 2015). For better comparability, the Czech ratings were transformed to match the scale used in German. The mean overall imageability rating is 75.16 (SD = 15.31, range = 19.30 to 96.49), and the mean word length in letters is 5.88 (SD = 1.89, range = 3 to 12). After bringing the norms from both languages to the same scale, we calculated Pearson’s correlation coefficient, r(118) = .55, p < .001. The mean difference between the Czech and German ratings (subtracting the Czech rating of each word from the German rating) was −1.97 with an SD of 18.54; the smallest absolute difference was 0.29 (“trunk”) and the largest −59.73 (“bareness”). Supplementary Material includes a chart that compares the means and SDs for individual words in German and Czech and provides further details.
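For illustration, the following R sketch shows one plausible linear mapping of the Czech 1–7 rating means onto the German 0–100 scale before computing the cross-language correlation. The exact transformation and the vector names (czech_1to7, german_0to100) are our assumptions, not the authors’ script.

    # Hypothetical rescaling of 1-7 Likert means onto a 0-100 scale;
    # the paper does not spell out the exact mapping used.
    rescale_1to7 <- function(x) (x - 1) / 6 * 100

    czech_0to100 <- rescale_1to7(czech_1to7)   # czech_1to7: mean rating per word
    cor.test(german_0to100, czech_0to100)      # reported: r(118) = .55, p < .001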

To tap into semantic processing, we used two questions: Is it smaller than a car? and Does it weigh a lot? These two questions were chosen because for most target words the answers are opposite (e.g., a rose is smaller than a car but does not weigh a lot). To reduce the impact of language processing, each question was represented by a one-word cue (small? or heavy?). During a practice block, participants learned which cue word represented which of the two questions.

Procedure

The experiment was designed using PsychoPy (Peirce et al., 2019) and administered online using the Pavlovia web platform (pavlovia.org). Participants first provided informed consent. Then they were introduced to the task during an instruction phase, which was followed by a practice block (30 trials). The main part of the experiment consisted of two blocks with one break after the first half of the trials. Participants were asked not to leave the room or close the experiment window during the break but were allowed to rest as long as they needed (M = 101 s).

In every trial, participants were asked to create an internal image of the target word referent and then to answer a question about that word as quickly as possible (Figure 1). The word was presented for 500 ms in the centre of the screen (white font on a grey background). Half of the participants saw a distractor stimulus in the form of a flashing checkerboard pattern between the target word and the question (checkerboard condition); the other half were presented with a blank screen (non-checkerboard condition). The interval between target word offset and question onset, during which the checkerboard or the blank screen was displayed, was varied (SOA = 1000, 1500, or 2000 ms). The RT for pressing the response keys on the computer keyboard (yes = A, no = L) was measured. Each participant saw each target word once. The order of the target words, the alternation of the SOA durations, and the alternation between the questions were randomised.

Results

Data pre-processing

First, we calculated the response consistency (“consistency” from here on) to describe how prevalent the predominant response for each question was. To achieve this, we divided the sum of the number of “yes” responses by the total number of responses per item. The same was done for “no” responses. Whichever response yielded the larger value was taken as the predominant response; values ranged between 0.5 (equal number of yes-/no-responses) and 1.0 (no exceptions to the dominant response). Thus, response consistency is a variable resulting from aggregation over the responses of all participants. We take it to reflect to some extent the mean difficulty of deciding for either a yes- or no-response, with high values indicating small difficulty and low values indicating high difficulty. As RTs often decrease during the time course of an experiment, we decided to include “trial ID” as a control variable. This variable captures the ordinal position of a specific trial in the time course of the experiment.
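A minimal R sketch of this aggregation, assuming a trial-level data frame responses with columns item and answer (the names are illustrative, not the authors’ code):

    library(dplyr)

    # Predominant-response proportion per item: 0.5 = yes/no split evenly,
    # 1.0 = all participants gave the same answer.
    consistency_by_item <- responses %>%
      group_by(item) %>%
      summarise(p_yes       = mean(answer == "yes"),
                consistency = pmax(p_yes, 1 - p_yes))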

Next, we inspected the distribution of the RT data. Values ranged from 1 ms to 89,382 ms and showed a right-skewed distribution. We used a data-driven approach (grand mean ± 3 SD) to determine which values were unrealistic given the experimental method and should be excluded from the statistical analysis. The lower cut-off point was 221 ms and the upper cut-off point 8,604 ms, both of which appeared to be plausible bounds on response times given the stimuli and the task. After trimming, the Box-Cox procedure was used to transform the RT values in order to meet model assumptions (Box & Cox, 1964).
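The trimming and transformation steps could look as follows in R (a sketch with assumed object and column names, not the authors’ code):

    library(MASS)

    # Trim unrealistic RTs: grand mean +/- 3 SD
    m <- mean(dat$rt); s <- sd(dat$rt)
    dat_trim <- subset(dat, rt > m - 3 * s & rt < m + 3 * s)

    # Box-Cox: estimate lambda, then power-transform the trimmed RTs
    bc     <- boxcox(rt ~ 1, data = dat_trim, plotit = FALSE)
    lambda <- bc$x[which.max(bc$y)]
    dat_trim$rt_bc <- (dat_trim$rt^lambda - 1) / lambda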

Next, we coded our predictor variables. After scaling and centring the imageability ratings (M = 0, SD = 1), the resulting values were used as a continuous variable in the statistical analyses. For the presence/absence of the checkerboard, type of question, and language group, simple coding was applied (board present = −0.5, absent = +0.5; question “heavy?” = −0.5, question “small?” = +0.5; language Czech = −0.5, German = +0.5). SOA was coded as a numeric predictor (SOA 1 s = −0.5, SOA 1.5 s = 0, SOA 2 s = +0.5). Trial ID, word length, and response consistency entered the analyses as continuous variables (scaled and centred).
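In R, this coding scheme might be implemented as follows, continuing the sketch above (column names are again illustrative assumptions):

    # Continuous predictors: z-scored
    dat_trim$imgblty <- as.numeric(scale(dat_trim$imageability))
    dat_trim$trial_z <- as.numeric(scale(dat_trim$trial_id))
    dat_trim$cons_z  <- as.numeric(scale(dat_trim$consistency))
    dat_trim$wlen_z  <- as.numeric(scale(dat_trim$word_length))

    # Simple coding for two-level factors
    dat_trim$board <- ifelse(dat_trim$checkerboard == "present", -0.5, 0.5)
    dat_trim$quest <- ifelse(dat_trim$question     == "heavy",   -0.5, 0.5)
    dat_trim$lang  <- ifelse(dat_trim$language     == "czech",   -0.5, 0.5)

    # Numeric SOA: 1 s = -0.5, 1.5 s = 0, 2 s = +0.5
    dat_trim$soa <- (dat_trim$soa_ms - 1500) / 1000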

Statistical analysis

We performed linear mixed model analyses using R (R Core Team, 2020; version 4.0.3) and the packages lme4 (Bates et al., 2015; version 1.1-23), MASS (Venables & Ripley, 2002; version 7.3-53), plyr (Wickham, 2011; version 1.8.6), ggplot2 (Wickham, 2016; version 3.3.5), cowplot (Wilke, 2020; version 1.1.1), and car (Fox & Weisberg, 2019; version 3.0-10). The main analysis focused on evaluating the predictions specified in the preregistration. In addition, we performed several exploratory analyses.

Main analysis

The power-transformed RT values served as the dependent variable. We were interested in potential main effects of the predictors imageability, SOA, and board, as well as any potential interactions between them. Language, question, word length, consistency, and trial ID were included as control variables. Interactions among the control variables or between control variables and the focal variables were not included in the main model (but see “Exploratory analyses” below). We used a backward model selection approach in which we started with the most complex random-effects structure justified by the experimental design and reduced the complexity until the model converged (Barr et al., 2013). For the random variable “participant,” the final model included random intercepts and random slopes for imageability, trial ID, and response consistency. For the random variable “item,” only random intercepts were specified. Model fit was evaluated by plotting the distribution of the residuals (to check for normality) and by plotting the residuals against the observed values (to check for heteroscedasticity). Visual inspection of the plots did not suggest problems with the distribution of the residuals or any systematic change in the spread of the residuals over the range of measured values. For better readability, we report here only the portion of the model output that relates to the fixed effects (Table 1). The full model output is available in Supplementary Material. Figure 2 depicts the main findings visually.
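A hypothetical lme4 call consistent with the final structure described above; the formula and variable names follow the sketches in the pre-processing section, not the authors’ code, and lmerTest is one way to obtain degrees of freedom and p-values of the kind reported in Table 1:

    library(lme4)
    library(lmerTest)  # adds df and p-values to lmer summaries

    m_main <- lmer(
      rt_bc ~ quest + lang + wlen_z + trial_z + cons_z +         # control variables
              imgblty * soa * board +                            # focal effects
              (1 + imgblty + trial_z + cons_z | participant) +   # by-participant slopes
              (1 | item),                                        # by-item intercepts only
      data = dat_trim)
    summary(m_main)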

Table 1.

Main analysis: main effects and interactions.

Predictor            Estimate   95% CI               t-value    df          p-value
(Intercept)          3.815      [3.805, 3.825]       747.752    201.126     <.001***
trial.id             –0.013     [–0.015, –0.010]     –10.151    166.937     <.001***
resp.cons            –0.017     [–0.020, –0.014]     –11.828    406.536     <.001***
w.length             –0.002     [–0.004, 0.000]      –1.568     1314.991    .117
imgblty              –0.005     [–0.008, –0.001]     –2.672     334.396     .008**
SOA                  –0.013     [–0.017, –0.009]     –5.810     166.323     <.001***
board                0.007      [–0.012, 0.026]      0.686      167.166     .494
quest                0.029      [0.026, 0.032]       18.474     18480.777   <.001***
lang                 –0.030     [–0.048, –0.011]     –3.172     168.074     .002**
imgblty:SOA          0.005      [0.000, 0.009]       2.081      152.259     .039*
imgblty:board        0.000      [–0.005, 0.006]      0.118      161.077     .906
SOA:board            0.001      [–0.008, 0.010]      0.247      166.440     .805
imgblty:SOA:board    0.003      [–0.006, 0.012]      0.590      153.637     .556

CI: confidence interval; SOA: stimulus-onset asynchrony.

*p < 0.05; **p < 0.01; ***p < 0.001.

Figure 2. Effects plots from the main analysis. (a) Imageability and SOA. (b) Imageability and board.

Our analysis revealed main effects of imageability and SOA, as well as an interaction between the two: higher imageability ratings led to faster responses; RTs were shorter with longer SOAs; and the longer the SOA, the less pronounced the impact of the imageability ratings (see Figure 2). Board, our third focal predictor, showed no effect, nor did it interact with any of the other factors (see Table 1).

As expected, the time course of the experiment (captured by the variable trial ID) affected response times significantly: participants became faster over the course of the experiment. Response consistency also contributed significantly to explaining the variance: the lower the consistency value, the longer the RTs. Word length did not show a significant effect (see Table 1).

Surprisingly, the variables “language” and “question” (heavy/small) both showed main effects: Czech participants responded more slowly than German participants, and responses were faster for the “heavy?” question than for the “small?” question.

Exploratory analyses

To further explore our findings and examine their validity, we performed several additional analyses. First, we tested whether extreme values affected the results by refitting the main model after excluding values with a standardised residual more than 2.5 standard deviations from zero. This excluded 2% of the data. The model detected exactly the same effects as the main model. Second, we wanted to evaluate whether the results of the main analysis could be due to the low response consistency we found for some items. While a high response consistency value indicates low variability in participants’ choice between “yes” and “no,” a lower value indicates higher variability. To test whether response consistency had an effect, we repeated the main analysis on a subset of the original data from which items with a response consistency value lower than .75 were removed. The subset included 68% of the original data. The respective model detected the same effects (see Supplementary Material for details of the statistical analyses). Thus, our main findings depend neither on extreme values nor on low response consistency for some items.
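Both robustness checks can be expressed compactly against the model sketched above (again an illustration under the same assumed names, not the authors’ script):

    # 1. Refit after excluding observations with |standardised residual| > 2.5
    keep    <- abs(as.vector(scale(resid(m_main)))) < 2.5  # retains ~98% of the data
    m_resid <- update(m_main, data = dat_trim[keep, ])

    # 2. Refit on high-consistency items only (raw per-item consistency >= .75)
    m_cons <- update(m_main, data = subset(dat_trim, consistency >= 0.75))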

In a third step, we explored the language and question effects further. We were interested in main effects and interactions of imageability, SOA, language group, and question. Response consistency and trial number were again included as control variables. Board and word length were not included because they did not show significant effects in the main analysis (see Table 1). Table 2 shows the portion of the model output related to the fixed effects. The full model output is available in Supplementary Material. Figure 3 depicts the main results visually.

Table 2.

Exploratory analysis: main effects and interactions.

Predictor                 Estimate   95% CI               t-value    df          p-value
(Intercept)               3.815      [3.805, 3.825]       745.597    201.800     <.001***
trial.id                  –0.013     [–0.015, –0.010]     –10.109    167.048     <.001***
resp.cons                 –0.017     [–0.020, –0.014]     –11.595    411.138     <.001***
imgblty                   –0.005     [–0.008, –0.001]     –2.665     374.390     .008**
SOA                       –0.013     [–0.017, –0.009]     –5.889     167.051     <.001***
quest                     0.029      [0.026, 0.032]       18.291     18550.809   <.001***
lang                      –0.031     [–0.050, –0.012]     –3.228     167.255     .001**
imgblty:SOA               0.005      [0.001, 0.010]       2.311      176.911     .022*
imgblty:quest             –0.013     [–0.016, –0.010]     –7.815     19123.700   <.001***
SOA:quest                 0.004      [–0.004, 0.011]      0.948      19282.298   .343
imgblty:lang              0.001      [–0.005, 0.007]      0.324      206.942     .746
SOA:lang                  –0.002     [–0.010, 0.007]      –0.355     167.266     .723
quest:lang                –0.016     [–0.022, –0.010]     –5.347     18902.081   <.001***
imgblty:SOA:quest         –0.003     [–0.011, 0.005]      –0.724     19131.285   .469
imgblty:SOA:lang          –0.006     [–0.015, 0.003]      –1.207     178.165     .229
imgblty:quest:lang        –0.011     [–0.018, –0.005]     –3.454     19123.672   .001**
SOA:quest:lang            –0.009     [–0.024, 0.005]      –1.245     19286.971   .213
imgblty:SOA:quest:lang    0.004      [–0.011, 0.020]      0.538      19131.421   .591

CI: confidence interval; SOA: stimulus-onset asynchrony.

*p < 0.05; **p < 0.01; ***p < 0.001.

Figure 3. Effects plots from the exploratory analysis. (a) Imageability and language. (b) Imageability, SOA, and language. (c) Imageability and question. (d) Imageability, question, and language.

All fixed factors included in this model showed the same main effects as in the main analysis reported above. In addition, there were significant two-way interactions of imageability and SOA (as in the main model, see Table 1), of imageability and question (see Figure 3c), and of language and question, the latter suggesting that the RT difference between questions was not the same in the two languages (see Figure 3d). There was no significant interaction of SOA and language, suggesting that the effect of the SOA manipulation was similar in both languages; no interaction of SOA and question, suggesting that the effect of SOA did not differ between the two questions; and no interaction of imageability and language, suggesting that the imageability effect was similar in both languages. We also found one significant three-way interaction, namely between imageability, language, and question, suggesting that the combined effect of question and imageability was not the same in both languages. No other three-way interactions were significant. Most notably, there was no three-way interaction between imageability, SOA, and language, suggesting that the general effect of SOA and imageability was very similar across both languages. Finally, there was no four-way interaction of our four main predictors (see Table 2).

General discussion

Our general goal in this study was to test whether the availability of internal imagery elicited by words is related to ratings of word imageability. We tested this by examining whether decision times about sensory properties of noun referents were related to imageability ratings for these nouns collected in two different languages—Czech and German—and to the presence of a concurrently presented, potentially interfering, visual stimulus. Overall, our findings confirm that word imageability is related to internal imagery, and that the time course of internal imagery is predicted by imageability ratings. This provides new empirical support for the notion that word imageability reflects some aspects of the image-based or sensory mode of internal representation. Our design included several further variables, which allows us to make a number of suggestions about how sensory-based imagery participates in semantic representation.

The first hypothesis we tested was that nouns with high imageability ratings would elicit faster responses to questions about their corresponding referents. This was confirmed: there was a robust effect of imageability on response times. We had also hypothesised, as a possibility, that this effect could be U-shaped or that it would level off, with the lowest imageability words showing no effect or an inverse effect on response times. The data do not support this. Low-imageability words were processed more slowly than medium- and high-imageability words, with imageability showing a linear trend in its effect on response times (see Figure 2). Note, however, that the distribution of imageability values was not equal across the full range in our study, with more high-imageability words than medium- and low-imageability words. Only a replication study with a more balanced distribution may ultimately be able to rule out our alternative hypothesis.

Our next preregistered hypothesis concerned the SOA manipulation, which we implemented by varying the interval between target word offset and test question onset, ranging from 1 to 2 s in three steps. Our hypotheses were confirmed. Responses were faster in trials with longer SOAs. This indicates that participants were indeed engaged in mental imagery at least during the interval between word offset and question onset, and, more importantly, that mental imagery takes time. In addition to this overall effect of SOA, we also hypothesised that the effects of SOA would be smaller for high-imageability words. There was indeed a significant interaction between SOA and imageability confirming this (see Figure 2a). This interaction also indicates that the effect of imageability was stronger with an SOA of 1 s than with an SOA of 2 s, providing an estimate of the time course of word imageability effects on mental imagery. The larger impact of imageability in the shortest SOA condition (measurement starts 1.5 s after word onset; see Figure 1) and the much-reduced impact in the two longer SOA conditions (measurement starts after 2 or 2.5 s, respectively) suggest that most of the imagery happens within an interval shorter than 2 s after word onset. Alternatively, it may be that word imageability only influences early stages of the image generation process. In principle, “finding” a mental image and “unfolding” it for further mental processing may be two separate processes, and word imageability may tap into the first, the second, or both. Perhaps the former is an aspect of lexical access that connects the word with the embodied or sensory components of its meaning, while the latter is unrelated to language and primarily due to the imagery system. More research is required to investigate this further.

All further hypotheses concerned the potential interference effect of the checkerboard that was shown to half of the participants during the mental imagery phase. Contrary to our expectations, no overall slowing due to the checkerboard presentation was observed. There were also no interactions of the checkerboard manipulation with imageability or SOA. Internal imagery seems to proceed in a similar manner regardless of the concurrent visual input during the process. Thus, although there is evidence that word imageability affects internal imagery, concurrent external sensory stimulation does not seem to interfere with it in this study. We will discuss this further after summarising results from our exploratory analyses.

Language and question effects

In addition to the preregistered questions, we examined effects of language and question, for which we had no specific hypotheses and, in fact, did not expect any effects. Our exploratory analyses revealed significant main effects of the language in which the experiment was run and of the type of question presented, with faster responses overall in German and for the question regarding weight. There were also significant interactions between imageability and question type and between language and question, as well as a three-way interaction between imageability, language, and question. The three-way interaction indicates that the imageability effects are strongest for the size question in German participants (see Figure 3d). In Czech, the effect of imageability is also stronger for the size question than for the weight question. The asymmetry between questions may provide further insights into the role of imageability in sensory-based processing, and we discuss our findings in more detail below.

Question asymmetry

The standard definitions of imageability refer to the ability of words to elicit an internal image, regardless of a specific modality (Paivio, 2014). However, as the term itself suggests, there seems to be a strong tendency to treat the visual modality as special (Connell & Lynott, 2012; Vigliocco et al., 2009). This can explain why the imageability effects were limited to the visual, or rather visuospatial, question (the “size” question). Past sensory experience of weight, that is, the experience of lifting a specific object, may in general not be represented strongly enough to affect imageability ratings as collected in offline judgement tasks. In that case, we would not expect these ratings to predict RTs for our weight question. However, it is also possible that the lack of weight effects is due less to the association between imageability ratings and the internal representation of weight experience than to the instructions in our experiment. Our general instruction explicitly told people to form a visual image of each word’s referent and to use this as the basis for answering both questions. Had the instruction been more tuned to the body-sense domain, specifically for the weight question (e.g., “imagine you are lifting the object”), effects of imageability ratings might have been observed in weight question trials to the same extent as in size question trials. This possibility requires further research.

It is important to note that responses to the weight question, in addition to showing no imageability effect, were generally faster than responses to the size question. This may suggest that the mechanism for answering the weight question circumvented the imagery process altogether, with participants instead relying on non-sensory or propositional knowledge about the word referents. Note, however, that at the time participants were exposed to the target word and engaged in mental imagery, they had no way of knowing which question they would encounter, implying that the imagery process should be the same for both question types, at least up to question onset. Perhaps participants avoided using the mental image in their decisions regarding weight. In other words, they might have stopped the image generation process upon receiving the weight question and relied on non-sensory information to generate a response. Alternatively, the findings from the weight question may be explained by the fact that the weight question lacked a specific comparison element in our study: people were asked “Does it weigh a lot?,” not “Does it weigh more than . . .?,” which was different from the size question. However, this does not necessarily mean that participants generated an answer to the weight question without any sensory-based processing, that is, by relying solely on non-sensory or propositional knowledge. Judging whether an object is heavy most likely involves imagining lifting it, which after all is a form of mental simulation, with one’s own strength as a reference point. Such a strategy, however, still does not require representing two objects simultaneously for comparison, as the size question does, which may well explain the differences between the two question types. Although we cannot fully explain the asymmetry between the size and weight questions regarding the imageability ratings, the fact that the SOA manipulation was equally effective for both questions, with shorter SOAs leading to longer RTs than longer SOAs, suggests that sensory-based processing did in fact take place in both cases. Thus, our findings at least to some extent provide evidence for modality-specific effects in sensory-based processing given the task we employed in this study.

It should be noted that imageability ratings are normally elicited as sensory ratings, rather than visual, and that words that score highest in imageability usually have a multimodal nature (visual, tactile, olfactory, . . .; Connell & Lynott, 2012), for example, names of foods (Scott et al., 2019). One would thus expect some relation of the imageability ratings to the non-visual aspects of the imagined objects. However, Connell and Lynott (2012) showed that there is a strong bias for the visual modality in the imageability ratings, regardless of the elicitation instructions. Under specific interpretations of the asymmetry between the results for the two questions in our study, our data support their findings.

Language effects

The effects of imageability for the visual question were stronger in German and weaker in Czech. One possible explanation for this effect could be differences in the variability of the imageability ratings. The mean of the imageability ratings was 73.19 with SD 21.83 in German and 75.16 with SD 15.31 in Czech, giving an SD/mean ratio (“coefficient of variation”) of 0.299 in German and 0.204 in Czech. There was thus a somewhat smaller spread in the Czech ratings. There are at least two possible explanations for how this difference came about. While the German list was compiled from existing norms (Grandy et al., 2020) to include a wide range of imageability values, the Czech list was created by choosing translation equivalents of the German words under the initial assumption that the Czech imageability ratings would be similar. In this process, many German words had multiple translation candidates; in these cases, the Czech words that most closely resembled the dominant meaning in German, or were most specific, were chosen. This strategy might have changed the distribution of imageability ratings originally present in the German set. Another possibility is that the variability differences are due to the different scales used for collecting the imageability ratings in the two languages. The German norms are based on a visual analogue scale ranging from 0 to 100 (Grandy et al., 2020), while the Czech norms are based on a 7-point Likert-type scale, and a more fine-grained scale likely captures more variation. In addition, the observed language effects regarding the ratings could reflect interpersonal mental characteristics or cross-linguistic/cross-cultural differences. Further research is required to investigate potential systematic differences in imageability ratings between speakers of different languages. However, despite the cross-language differences in the ratings, the RT effects we found were strikingly similar between the two languages, which, we believe, supports the validity of our approach.

Lack of distractor effects

Our assumption that a visual distractor stimulus would interfere with the imagery process was not borne out in this study. One possible explanation is the mismatch between the exact modality of our question, instruction and the imageability ratings. Kosslyn (1975) pointed out the difference between spatial and purely visual imagery. Our question was primarily spatial: the size judgement can be made visually but it is about spatial extent, not about the visual properties of the object such as colour or texture. Maybe our visual distractor manipulation would have interfered with answers to questions about colours or surfaces. An alternative explanation is that visual imagery does not interfere with sensory input because it is to some extent abstract and does not use all the resources visual perception does. We will return to this in more detail below.

Findings from the perspective of the hub-and-spoke view

Our results may be interpreted from the hub-and-spoke view of knowledge representation (Patterson et al., 2007; Reilly et al., 2016). From this perspective, concepts form structures that cluster around amodal “hub” representations, which in turn are connected via “spokes” to the different modalities related to that concept, for example, visual, spatial, tactile, or auditory (Patterson et al., 2007; Reilly et al., 2016; Reilly & Peelle, 2008). Each modality-specific sensory representation may be partially shared between perceptual and imagery processes. Lexical nodes are thought to be connected to the main hub. From this perspective, a target word in our study activates the hub via its link to it, and from there, activation flows to the visual modality representations. Our findings suggest that this activation process takes time, and that it takes longer for words with low imageability ratings than for words with high ones. While the visual modality representation is being activated, or shortly thereafter, participants are prompted to engage in the decision-making process to answer the question. With a short SOA, the visual modality representation may not yet be activated to an extent that makes the mental image useful for deciding on an appropriate response; especially with low-imageability words, the presentation of the question may fall within the same time interval as the ongoing activation process. With a longer SOA, our results suggest, the mental image has already unfolded, and decision times are therefore shorter.

The checkerboard image we used bears no conceptual information, just low-level sensory data. Thus, during the perception of the checkerboard there is not much activation “above” the low-level feature domain, colour and shape in our case. If we assume that in our task the information flows from the verbal code to the main hub and from there to the visual-modality representation, it is possible that low-level features of the target (such as colour and geometrical shape) are not automatically activated during this process, which might explain the absence of an interference effect induced by the simultaneous presentation of the flickering checkerboard. Alternatively, RT measurements might not be sensitive enough to detect an effect of shared domain-specific attention, as was shown in previous EEG studies (Villena-González et al., 2016). The hub-and-spoke architecture might also be used to describe the distinction between general visual appearance (Gestalt) and more specific spatial information, as it can explain the triggering of concepts by different sensory data. For example, the concept of bread can be activated by visual or tactile input. There is evidence that eliciting a concept does not activate all connections to modality-specific representations, only those that participate in the current activation event (Reilly et al., 2016). This could explain why our visual imagery instruction did not affect the judgement of weight, and why the visual distractor did not interfere with the spatial size judgement.

Conclusion

This RT study provides evidence for a link between average subjective imageability ratings for words and the processing time associated with mental imagery. Judging from the speed of responses to questions about sensory properties, words with high imageability ratings produce internal images more readily than words with lower imageability ratings. The observed effect holds across languages, although differences emerged in our study that are likely due to the imageability measures we employed, suggesting a rather high sensitivity of imageability ratings and of their relation to processing time. Furthermore, the findings may be viewed as highlighting the predominant role of the visual modality in mental imagery. Finally, no interference between the resources required for visual imagery (inward-directed attention) and visual perception (outward-directed attention) was observed in the present paradigm. It is a topic for further research to test whether such interference could be detected using more sensitive measures.

Footnotes

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The study was supported, in part, by the 4EU+ mini-grant no. 4EU+/21/F2/16 from Charles University, Prague, and the Czech Science Foundation (GA ČR) grant 19-155768.

ORCID iD: Johannes Gerwien https://orcid.org/0000-0002-6207-8167

Supplemental material: The supplementary material is available at qjep.sagepub.com. The raw data and the complete documentation of the statistical analysis can be found here: https://osf.io/5qf7e/

References

Barr D. J., Levy R., Scheepers C., Tily H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278. https://doi.org/10.1016/j.jml.2012.11.001

Bates D., Mächler M., Bolker B., Walker S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01

Bergmann J., Genç E., Kohler A., Singer W., Pearson J. (2016). Smaller primary visual cortex is associated with stronger, but less precise mental imagery. Cerebral Cortex, 26(9), 3838–3850. https://doi.org/10.1093/cercor/bhv186

Bock K. J., Warren R. K. (1985). Conceptual accessibility and syntactic structure in sentence formulation. Cognition, 21(1), 47–67. https://doi.org/10.1016/0010-0277(85)90023-X

Bonner M. F., Vesely L., Price C., Anderson C., Richmond L., Farag C., Avants B., Grossman M. (2009). Reversal of the concreteness effect in semantic dementia. Cognitive Neuropsychology, 26(6), 568–579.

Box G. E. P., Cox D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological), 26(2), 211–243. https://doi.org/10.1111/j.2517-6161.1964.tb00553.x

Breedin S. D., Saffran E. M., Coslett H. B. (1994). Reversal of the concreteness effect in a patient with semantic dementia. Cognitive Neuropsychology, 11(6), 617–660.

Brysbaert M., Warriner A. B., Kuperman V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3), 904–911. https://doi.org/10.3758/s13428-013-0403-5

Bucur M., Papagno C. (2021). An ALE meta-analytical review of the neural correlates of abstract and concrete words. Scientific Reports, 11(1), 1–24. https://doi.org/10.1038/s41598-021-94506-9

Butler R., Patterson K., Woollams A. M. (2012). In search of meaning: Semantic effects on past-tense inflection. Quarterly Journal of Experimental Psychology, 65(8), 1633–1656.

Connell L., Lynott D. (2012). Strength of perceptual experience predicts word processing performance better than concreteness or imageability. Cognition, 125(3), 452–465.

Cortese M. J., McCarty D. P., Schock J. (2015). A mega recognition memory study of 2897 disyllabic words. Quarterly Journal of Experimental Psychology, 68(8), 1489–1501. https://doi.org/10.1080/17470218.2014.945096

Crutch S. J., Warrington E. K. (2005). Abstract and concrete concepts have structurally different representational frameworks. Brain, 128(3), 615–627.

de Groot A. M. (1989). Representational aspects of word imageability and word frequency as assessed through word association. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(5), 824–845.

Della Rosa P. A., Catricalà E., Vigliocco G., Cappa S. F. (2010). Beyond the abstract–concrete dichotomy: Mode of acquisition, concreteness, imageability, familiarity, age of acquisition, context availability, and abstractness norms for a set of 417 Italian words. Behavior Research Methods, 42(4), 1042–1048. https://doi.org/10.3758/BRM.42.4.1042

Dijkstra N., Fleming S. (2021). Fundamental constraints on distinguishing reality from imagination. PsyArXiv. https://doi.org/10.31234/osf.io/bw872

Fox J., Weisberg S. (2019). An R companion to applied regression (3rd ed.). SAGE.

Grandy T. H., Lindenberger U., Schmiedek F. (2020). Vampires and nurses are rated differently by younger and older adults: Age-comparative norms of imageability and emotionality for about 2500 German nouns. Behavior Research Methods, 52(3), 980–989.

Hoffman P., Lambon Ralph M. A. (2011). Reverse concreteness effects are not a typical feature of semantic dementia: Evidence for the hub-and-spoke model of conceptual representation. Cerebral Cortex, 21(9), 2103–2112.

Holcomb P. J., Grainger J. (2006). On the time course of visual word recognition: An event-related potential investigation using masked repetition priming. Journal of Cognitive Neuroscience, 18(10), 1631–1643. https://doi.org/10.1162/jocn.2006.18.10.1631

Keogh R., Pearson J. (2014). The sensory strength of voluntary visual imagery predicts visual working memory capacity. Journal of Vision, 14(12), Article 7. https://doi.org/10.1167/14.12.7

Khanna M. M., Cortese M. J. (2021). How well imageability, concreteness, perceptual strength, and action strength predict recognition memory, lexical decision, and reading aloud performance. Memory, 29(5), 622–636. https://doi.org/10.1080/09658211.2021.1924789

Kosslyn S. M. (1973). Scanning visual images: Some structural implications. Perception & Psychophysics, 14(1), 90–94.

Kosslyn S. M. (1975). Information representation in visual images. Cognitive Psychology, 7(3), 341–370.

Kosslyn S. M., Ball T. M., Reiser B. J. (1978). Visual images preserve metric spatial information: Evidence from studies of image scanning. Journal of Experimental Psychology: Human Perception and Performance, 4(1), 47–60.

Kosslyn S. M., Thompson W. L., Ganis G. (2006). The case for mental imagery. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195179088.001.0001

Kousta S.-T., Vigliocco G., Vinson D. P., Andrews M., Del Campo E. (2011). The representation of abstract words: Why emotion matters. Journal of Experimental Psychology: General, 140(1), 14–34.

Kroll J. F., Merves J. S. (1986). Lexical access for concrete and abstract words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12(1), 92–107.

Laszlo S., Federmeier K. D. (2014). Never seem to find the time: Evaluating the physiological time course of visual word recognition with regression analysis of single-item event-related potentials. Language, Cognition and Neuroscience, 29(5), 642–661. https://doi.org/10.1080/01690965.2013.866259

Loiselle M., Rouleau I., Nguyen D. K., Dubeau F., Macoir J., Whatmough C., Lepore F., Joubert S. (2012). Comprehension of concrete and abstract words in patients with selective anterior temporal lobe resection and in patients with selective amygdalo-hippocampectomy. Neuropsychologia, 50(5), 630–639.

Łuniewska M., Wodniecka Z., Miller C. A., Smolík F., Butcher M., Chondrogianni V., Hreich E. K., Messarra C. A., Razak R., Treffers-Daller J., Yap N. T., Abboud L., Talebi A., Gureghian M., Tuller L., Haman E. (2019). Age of acquisition of 299 words in seven languages: American English, Czech, Gaelic, Lebanese Arabic, Malay, Persian and Western Armenian. PLOS ONE, 14(8), Article e0220611.

MacDonald M. C. (2013). How language production shapes language form and comprehension. Frontiers in Psychology, 4, Article 226. https://doi.org/10.3389/fpsyg.2013.00226

Masterson J., Druks J., Gallienne D. (2008). Object and action picture naming in three- and five-year-old children. Journal of Child Language, 35(2), 373–402.

Morrison C. M., Chappell T. D., Ellis A. W. (1997). Age of acquisition norms for a large set of object names and their relation to adult estimates and other variables. The Quarterly Journal of Experimental Psychology Section A, 50(3), 528–559.

Moseley P., Smailes D., Ellison A., Fernyhough C. (2016). The effect of auditory verbal imagery on signal detection in hallucination-prone individuals. Cognition, 146, 206–216. https://doi.org/10.1016/j.cognition.2015.09.015

Paivio A. (1967). Paired-associate learning and free recall of nouns as a function of concreteness, specificity, imagery, and meaningfulness. Psychological Reports, 20(1), 239–245.

Paivio A. (1969). Mental imagery in associative learning and memory. Psychological Review, 76(3), 241–263.

Paivio A. (1971). Imagery and language. In Segal S. J. (Ed.), Imagery: Current cognitive approaches (pp. 7–32). Elsevier.

Paivio A. (2013). Dual coding theory, word abstractness, and emotion: A critical review of Kousta et al. (2011). Journal of Experimental Psychology: General, 142(1), 282–287.

Paivio A. (2014). Mind and its evolution: A dual coding theoretical approach. Psychology Press.

Paivio A., Walsh M., Bons T. (1994). Concreteness effects on memory: When and why? Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(5), 1196–1204.

Patterson K., Nestor P. J., Rogers T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nature Reviews Neuroscience, 8(12), 976–987.

Peirce J., Gray J. R., Simpson S., MacAskill M., Höchenberger R., Sogo H., Kastman E., Lindeløv J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203.

Perky C. W. (1910). An experimental study of imagination. The American Journal of Psychology, 21(3), 422–452. https://doi.org/10.2307/1413350

Pinker S., Choate P. A., Finke R. A. (1984). Mental extrapolation in patterns constructed from memory. Memory & Cognition, 12(3), 207–218.

Pobric G., Jefferies E., Ralph M. A. L. (2010). Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation. Current Biology, 20(10), 964–968.

Prado E. L., Ullman M. T. (2009). Can imageability help us draw the line between storage and composition? Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(4), 849–866.

R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/

Reed S. K., Hock H. S., Lockhead G. R. (1983). Tacit knowledge and the effect of pattern configuration on mental scanning. Memory & Cognition, 11(2), 137–143.

Reilly J., Peelle J. E. (2008). Effects of semantic impairment on language processing in semantic dementia. Seminars in Speech and Language, 29(1), 32–43.

Reilly J., Peelle J. E., Garcia A., Crutch S. J. (2016). Linking somatic and symbolic representation in semantic memory: The dynamic multilevel reactivation framework. Psychonomic Bulletin & Review, 23(4), 1002–1014.

Schwanenflugel P. J. (1991). Why are abstract concepts hard to understand? In Schwanenflugel P. J. (Ed.), The psychology of word meanings (pp. 223–250). Lawrence Erlbaum.

Schwanenflugel P. J., Shoben E. J. (1983). Differential context effects in the comprehension of abstract and concrete verbal materials. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(1), 82–102.

Scott G. G., Keitel A., Becirspahic M., Yao B., Sereno S. C. (2019). The Glasgow Norms: Ratings of 5,500 words on nine scales. Behavior Research Methods, 51(3), 1258–1270.

Segal S. J., Fusella V. (1970). Influence of imaged pictures and sounds on detection of visual and auditory signals. Journal of Experimental Psychology, 83(3, Pt. 1), 458–464.

Sirigu A., Duhamel J.-R., Poncet M. (1991). The role of sensorimotor experience in object recognition: A case of multimodal agnosia. Brain, 114(6), 2555–2573.

Smolík F. (2014). Noun imageability facilitates the acquisition of plurals: Survival analysis of plural emergence in children. Journal of Psycholinguistic Research, 43(4), 335–350.

Smolík F. (2019). Imageability and neighborhood density facilitate the age of word acquisition in Czech. Journal of Speech, Language, and Hearing Research, 62(5), 1403–1415. https://doi.org/10.1044/2018_JSLHR-L-18-0242

Smolík F., Kříž A. (2015). The power of imageability: How the acquisition of inflected forms is facilitated in highly imageable verbs and nouns in Czech children. First Language, 35(6), 446–465.

Strain E., Patterson K., Seidenberg M. S. (1995). Semantic effects in single-word naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(5), 1140–1154. https://doi.org/10.1037/0278-7393.21.5.1140

Venables W. N., Ripley B. D. (2002). Modern applied statistics with S (4th ed.). Springer.

Vigliocco G., Meteyard L., Andrews M., Kousta S. (2009). Toward a theory of semantic representation. Language and Cognition, 1(2), 219–247.

Villena-González M., López V., Rodríguez E. (2016). Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli. NeuroImage, 132, 71–78. https://doi.org/10.1016/j.neuroimage.2016.02.013

Wickham H. (2011). The split-apply-combine strategy for data analysis. Journal of Statistical Software, 40(1), 1–29.

Wickham H. (2016). ggplot2: Elegant graphics for data analysis. Springer.

Wilke C. O. (2020). cowplot: Streamlined plot theme and plot annotations for "ggplot2". https://cran.r-project.org/web/packages/cowplot/index.html
