Editorial
Front. Psychol. 2015 Jan 8; 5:1553. doi: 10.3389/fpsyg.2014.01553

What are memory-perception interactions for? Implications for action

Loïc P. Heurley, Laurent P. Ferrier
PMCID: PMC4287056  PMID: 25620945

Currently, a growing body of studies demonstrates memory-perception interactions (see Barsalou, 2008; Heurley et al., 2012; Lobel, 2014, for reviews). Even though such interactions are highly relevant both to support embodied approaches to cognition and to better understand memory and perception (e.g., Zwaan, 2008; Versace et al., 2009; Landau et al., 2010; Kiefer and Barsalou, 2013), their functional role remains unclear: why would perception integrate memory and knowledge when it seems highly efficient without such influences? To understand the functional relevance of these interactions, we assume that it is necessary to take into account two important conditions under which our cognitive systems have evolved during phylogenesis and continue to evolve during ontogenesis. More precisely, we develop a view in which memory-perception interactions are highly relevant for planning and controlling actions when we interact with well-known objects under non-optimal perceptual conditions.

It is widely accepted that to properly parameterize action components and to control them during the course of action, it is necessary to perceptually process some of an object's features (Hommel and Elsner, 2009). As claimed by Glover (2004), these "action-relevant perceptual features" (ARPF) can be spatial (e.g., shape, orientation) as well as non-spatial (e.g., fragility, weight). Among spatial ARPF, size is usually recognized as an important one because of its involvement in a great variety of actions. Jeannerod (1984), for example, demonstrated that the magnitude of the grip aperture, a component of the grasping movement, is a function of the visual size of objects (see also Ellis et al., 2007; Fagioli et al., 2007; Wykowska et al., 2009). Visual size processing also seems highly important for intercepting flying objects (Lee, 1976). Nevertheless, very frequently and for various reasons, the perceptual processing of ARPF is far from optimal, especially in "out-of-laboratory" conditions. For instance, some ARPF cannot be processed by the available perceptual channels. Indeed, when we want to grasp an object, we are often only able to perceive it visually, and we are therefore unable to directly perceive its fragility, weight, and temperature, even though these are extremely relevant for planning the force and velocity of the grasp (Glover, 2004). Even when the right channels are available, some environmental conditions can impair perception. For example, the occlusion of an object by other surfaces can limit our ability to perceive it visually and thus to process its shape, size, or distance (Tanaka et al., 2001). Furthermore, short- or long-term injuries to perceptual systems can also induce non-optimal conditions of perception: the eyes can be impaired in the long term by cell aging or in the short term by an intense flash of light, but in both cases our ability to process visual features is affected. Accordingly, how do we plan and control actions in conditions where the features suited to planning and controlling relevant action parameters often cannot be optimally perceived? First, it is important to note that non-optimal processing of ARPF does not necessarily induce object-recognition problems. Indeed, as mentioned in several models, object recognition can be accurately based on non-ARPF such as the color and/or texture of objects and the context (Tanaka et al., 2001; Bar, 2009). Therefore, even if some ARPF cannot be processed, objects can be accurately identified in many cases. Second, because in everyday life we mainly interact with well-known objects, a preserved ability to recognize object identity can automatically induce the retrieval of a myriad of knowledge associated with the recognized objects, including associated ARPF (e.g., shape, size). Thus, we claim that the recognition processes used to identify objects during the planning phase of action involve the retrieval of previously experienced ARPF that are automatically integrated into perception. We also claim that these retrieved ARPF compensate for non-optimally perceived ARPF and thus maintain a high level of action efficiency even under non-optimal conditions of perception. To sum up, we assume that the functional relevance of memory-perception interactions (i.e., of an embodied cognitive architecture) emerges when humans interact with well-known objects under degraded perceptual conditions. We discuss three sets of evidence in support of this view, all coming from studies focusing on size as an ARPF.
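
One way to make this compensation claim concrete is a minimal cue-combination sketch (our illustration, not a model proposed in the cited work; the weighting scheme, function names, and all numerical values are hypothetical): a noisy visual estimate of an object's size is fused with the known size retrieved from memory, each weighted by its reliability, and the fused estimate then parameterizes grip aperture.

```python
def fuse_size_estimates(visual_size, visual_var, known_size, known_var):
    """Reliability-weighted fusion of a noisy visual size estimate with
    the known size retrieved from memory (both in cm). Each cue is
    weighted by its inverse variance, as in standard cue combination."""
    w_vis, w_mem = 1.0 / visual_var, 1.0 / known_var
    return (w_vis * visual_size + w_mem * known_size) / (w_vis + w_mem)

def plan_grip_aperture(estimated_size, margin=1.2):
    """Toy planning rule: maximum grip aperture scales with the estimated
    object size plus a safety margin (Jeannerod-style covariation)."""
    return margin * estimated_size

# Degraded viewing: the visual estimate is unreliable (high variance),
# so the fused estimate is pulled toward the stored known size.
visual_size, visual_var = 9.0, 16.0  # blurred/occluded view of a ~6 cm object
known_size, known_var = 6.0, 1.0     # well-known object: reliable memory trace
fused = fuse_size_estimates(visual_size, visual_var, known_size, known_var)
print(f"fused size: {fused:.1f} cm; planned aperture: {plan_grip_aperture(fused):.1f} cm")
```

Under these assumed variances the fused estimate lands near the stored size (about 6.2 cm rather than 9 cm), which is the sense in which memory can keep grip planning accurate when vision is degraded.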

First, numerous experiments suggest that memory can store objects' perceptual features, and especially ARPF (see Barsalou, 2008, for a review). For instance, a great variety of studies support the idea that the size of objects is accurately stored in memory and closely matches their real-world size (Moyer, 1973; Holyoak, 1977; Holyoak et al., 1979; Shoben and Wilson, 1998; Bertamini et al., 2011; Konkle and Oliva, 2011; Linsen et al., 2011). More importantly, it seems that the known size of objects can be retrieved automatically even when objects are perceived only briefly, suggesting that ARPF may be retrieved automatically during fast real-world interactions with objects. Ferrier et al. (2007), for example, demonstrated that a target picture (e.g., an elephant) is more easily categorized as an animal when a briefly presented prime picture (150 ms) has a similar known size (e.g., a giraffe or a car) rather than a different one (e.g., a bee or a key), even though both pictures have the same visual size on the screen (see also Setti et al., 2009; Gabay et al., 2013). It is noteworthy that size is generally stable across items of a category as well as across experiences. Because all the ladybugs we have experienced have approximately the same small size, their size can easily be stored at a conceptual level (i.e., as general knowledge; Whittlesea, 1987). In some cases, however, ARPF may be stored in a more specific or short-term format. For example, because the size of your car is not shared by all exemplars of the "car" category, this feature is undoubtedly stored in a more autobiographical format. Furthermore, some ARPF are so variable that we can only store them for a short period of time, like the last position of your car in the supermarket car park or the distance of some objects on a table (see Borghi, 2013, for a related distinction). Thus, we claim that ARPF can be stored and automatically retrieved from memory, but perhaps in various ways according to the stability of the ARPF across experiences.
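
The three putative storage formats can be summarized in a small illustrative data structure (our sketch; only the labels "conceptual," "autobiographical," and "short-term" come from the text, while the field names and values are hypothetical):

```python
from dataclasses import dataclass
from typing import Literal

StorageLevel = Literal["conceptual", "autobiographical", "short-term"]

@dataclass
class StoredARPF:
    """One stored action-relevant perceptual feature (ARPF)."""
    feature: str         # e.g., "size", "position"
    value: float         # stored magnitude
    unit: str            # e.g., "cm", "parking-row index"
    level: StorageLevel  # determined by the feature's stability across experiences

# Stability across experiences determines the storage format:
ladybug_size = StoredARPF("size", 0.7, "cm", "conceptual")         # shared by all exemplars
my_car_size = StoredARPF("size", 420.0, "cm", "autobiographical")  # specific to one exemplar
my_car_spot = StoredARPF("position", 17, "parking-row index", "short-term")  # valid only briefly
```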

Moreover, several studies suggest that ARPF are not only stored but can also influence conscious perception. Among ARPF, the perception of size has been studied extensively. In a seminal study, Paivio (1975) demonstrated that comparing the known sizes of two objects is faster when these known sizes are congruent with the objects' visual sizes. In other words, it is easier to judge that, in general, an elephant is larger than a mouse when the picture of the elephant is presented larger than the picture of the mouse rather than smaller (see also Srinivas, 1996; Rubinsten and Henik, 2002; Konkle and Oliva, 2012, for similar results). The work of Riou et al. (2011) and Rey et al. (2014) goes further and suggests the automatic nature of this influence. Riou et al. (2011) demonstrated that the known size of objects can influence the detection of a visually odd-sized stimulus in a visual search task, even though this feature is entirely useless for completing the task. Other studies have demonstrated an influence of the known size of objects on judgments of distance, which are often derived from visual size, suggesting that stored size can automatically impact not only the perception of visual size but also the perception of other ARPF derived from it (Epstein, 1965; Predebon, 1992, 1994; Hershenson and Samuels, 1999; Distler et al., 2000). Beyond the known size of objects, the perception of visual size can also be affected by a more abstract kind of size representation: numbers. Henik and Tzelgov (1982) replicated the interaction between visual and stored size reported by Paivio (1975), but with numbers. In a classic bisection task requiring implicit length estimation, de Hevia and Spelke (2009) found a bisection bias toward the side of the line where the larger number was printed. In a reproduction task, Viarouge and de Hevia (2013) demonstrated that large numbers (e.g., 9) presented at each corner of a square induce a larger reproduction of this square than when smaller numbers (e.g., 2) are presented. Altogether, these studies support the possibility that size stored in memory (i.e., the known size of objects or numbers) can directly influence the perception of size or of size-related features (e.g., distance), consistent with the idea that stored ARPF can complete perception when some ARPF are missing or ambiguous (see Barsalou, 2009, for a similar idea).
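
The link between stored size and perceived distance mentioned above can be made explicit with the classic size-distance relation (a textbook geometric identity, not an analysis from the cited studies; the object sizes used below are approximate assumptions): for a fixed visual angle θ, the distance consistent with an assumed physical size S is D = S / (2 tan(θ/2)), so retrieving a larger stored size implies a greater distance for the same retinal image.

```python
import math

def distance_from_assumed_size(visual_angle_deg, assumed_size_m):
    """Size-distance invariance: for a fixed visual angle theta, the
    distance consistent with an assumed physical size S is
    D = S / (2 * tan(theta / 2))."""
    theta = math.radians(visual_angle_deg)
    return assumed_size_m / (2.0 * math.tan(theta / 2.0))

# The same 1-degree retinal image implies very different distances
# depending on which stored size memory supplies:
print(distance_from_assumed_size(1.0, 0.07))  # ~4.0 m for a tennis-ball-sized object
print(distance_from_assumed_size(1.0, 0.22))  # ~12.6 m for a football-sized object
```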

A further step has been taken by recent work demonstrating the influence of size stored in memory on more automatic perception-action links (rather than on conscious judgments). Indeed, some studies have shown an influence of the known size of objects on action parameters that depend on visual size. For instance, Hosking and Crassini (2010) conducted experiments in which participants had to make time-to-contact judgments on stimuli for which a linear or a parabolic approach trajectory was simulated. Such judgments are highly important for a great variety of interceptive actions and are mainly based on online processing of the visual size of the approaching stimulus. In their experiments, the stimuli had different known sizes (i.e., large: a football vs. small: a tennis ball). The results elegantly demonstrated that this stored feature of objects influences time-to-contact judgments, suggesting that it could interfere with our ability to intercept moving objects (see also DeLucia, 2005; Hosking and Crassini, 2011, for similar results). Another set of studies also suggests an influence of the known size of objects on another well-established perception-action link: our ability to adapt our grip aperture to the visual size of the to-be-grasped object (Jeannerod, 1984). Several studies demonstrate that participants are faster to carry out a precision grip on typically small objects (e.g., a cherry) and a power grip on typically large objects (e.g., an eggplant; Ellis and Tucker, 2000; Tucker and Ellis, 2004; Derbyshire et al., 2006; Girardi et al., 2010), even when visual size cannot interfere (Glover et al., 2004; Tucker and Ellis, 2004; Heurley et al., in revision). The same effect on grip aperture is obtained when size-related adjectives (e.g., SMALL/LARGE, LONG/SHORT) are processed concomitantly instead of known objects (Gentilucci and Gangitano, 1998; Gentilucci et al., 2000; Glover and Dixon, 2002). These results have also been replicated with numbers. More concretely, Moretto and di Pellegrino (2008) showed that processing large numbers facilitates power grips while processing small numbers facilitates precision grips (see also Andres et al., 2004; Lindemann et al., 2007). In addition, some results indicate that such interactions are highly automatic (Moretto and di Pellegrino, 2008; Namdar et al., 2014) and seem to be restricted to the planning phase of grasping (Glover and Dixon, 2002; Glover et al., 2004; Badets et al., 2007; Andres et al., 2008). Taken together, these works demonstrate that stored ARPF, such as size, can influence automatic perception-action links and not only conscious perception, supporting the possibility that perception can be completed by stored ARPF, which in turn influence the planning of some action components.
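
Why known size can interfere with interception can be sketched as follows (our illustration; the tau variable comes from Lee, 1976, but the "memory route" and all numerical values are hypothetical assumptions): the optical variable tau estimates time to contact without any size knowledge, whereas a route that derives distance from an assumed (stored) size inherits any error in that assumption.

```python
def tau_ttc(theta, theta_dot):
    """Lee's (1976) tau: optical angle over its rate of expansion gives a
    size- and distance-free time-to-contact estimate (small angles)."""
    return theta / theta_dot

def size_based_ttc(theta, assumed_size_m, closing_speed):
    """Hypothetical memory route: derive distance from an assumed size
    (small-angle approximation: theta ~ S / D), then divide by speed.
    Misjudging the size by a factor k misjudges TTC by the same factor."""
    distance = assumed_size_m / theta
    return distance / closing_speed

# A 0.22 m ball at 11 m, approaching at 5 m/s: true TTC = 2.2 s.
theta, theta_dot = 0.02, 0.02 / 2.2      # rad and rad/s at this instant
print(tau_ttc(theta, theta_dot))          # 2.2 s, independent of size
print(size_based_ttc(theta, 0.22, 5.0))   # 2.2 s with the correct stored size
print(size_based_ttc(theta, 0.07, 5.0))   # 0.7 s if a tennis-ball size is assumed
```

Because pure tau is size-free, the very fact that known size biases time-to-contact judgments is what suggests that memory enters the computation.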

This short review suggests that size, as an ARPF, can be stored in memory, automatically retrieved during object perception, and can influence the conscious perception of visual size (or of related features) as well as the planning of action components mainly based on visual size processing. We used this evidence to support the view that interactions between present perceptual features and absent (but simulated in memory) ones are important for action, especially in "out-of-laboratory" conditions, in which ARPF cannot be optimally perceived and in which interactions mainly occur with well-known objects. Of course, the evidence reported here is limited to size, but several studies have already demonstrated that other ARPF, such as distance, position, and weight, can be stored and automatically retrieved (Estes et al., 2008; Scorolli et al., 2009; Winter and Bergen, 2012). This strongly suggests that our view can be extended. Even if many questions remain open and much work is needed to fully support this view, it has the advantage of searching for the functional relevance of memory-perception interactions (i.e., of an embodied cognitive architecture) by taking into account two main constraints under which our cognitive systems have certainly evolved at phylogenetic and ontogenetic scales: interactions with (i) well-known objects under (ii) more or less degraded perceptual conditions.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We are grateful to Gabrielle Chesnoy-Servanin for her revision of the English text.

References

1. Andres M., Davare M., Pesenti M., Olivier E., Seron X. (2004). Number magnitude and grip aperture interaction. Neuroreport 15, 2773–2777.
2. Andres M., Ostry D. J., Nicol F., Paus T. (2008). Time course of number magnitude interference during grasping. Cortex 44, 414–419. doi: 10.1016/j.cortex.2007.08.007
3. Badets A., Andres M., Di Luca S., Pesenti M. (2007). Number magnitude potentiates action judgements. Exp. Brain Res. 180, 525–534. doi: 10.1007/s00221-007-0870-y
4. Bar M. (2009). The proactive brain: memory for predictions. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 1235–1243. doi: 10.1098/rstb.2008.0310
5. Barsalou L. W. (2008). Grounded cognition. Annu. Rev. Psychol. 59, 617–645. doi: 10.1146/annurev.psych.59.103006.093639
6. Barsalou L. W. (2009). Simulation, situated conceptualization, and prediction. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 1281–1289. doi: 10.1098/rstb.2008.0319
7. Bertamini M., Bennett K. M., Bode C. (2011). The anterior bias in visual art: the case of images of animals. Laterality 16, 673–689. doi: 10.1080/1357650X.2010.508219
8. Borghi A. (2013). Language comprehension: action, affordances and goals, in Language and Action in Cognitive Neuroscience, eds Coello Y., Bartolo A. (New York, NY: Psychology Press), 125–144.
9. de Hevia M.-A., Spelke E. S. (2009). Spontaneous mapping of number and space in adults and young children. Cognition 110, 198–207. doi: 10.1016/j.cognition.2008.11.003
10. DeLucia P. R. (2005). Does binocular disparity or familiar size override effects of relative size on judgments of time to contact? Q. J. Exp. Psychol. A 58, 865–886. doi: 10.1080/02724980443000377
11. Derbyshire N., Ellis R., Tucker M. (2006). The potentiation of two components of the reach-to-grasp action during object categorisation in visual memory. Acta Psychol. 122, 74–98. doi: 10.1016/j.actpsy.2005.10.004
12. Distler H. K., Gegenfurtner K. R., van Veen H. A. H. C., Hawken M. J. (2000). Velocity constancy in a virtual reality environment. Perception 29, 1423–1435. doi: 10.1068/p3115
13. Ellis R., Tucker M. (2000). Micro-affordance: the potentiation of components of action by seen objects. Br. J. Psychol. 91, 451–471. doi: 10.1348/000712600161934
14. Ellis R., Tucker M., Symes E., Vainio L. (2007). Does selecting one visual object from several require inhibition of the actions associated with nonselected objects? J. Exp. Psychol. Hum. Percept. Perform. 33, 670–691. doi: 10.1037/0096-1523.33.3.670
15. Epstein W. (1965). Nonrelational judgments of size and distance. Am. J. Psychol. 78, 120–123. doi: 10.2307/1421091
16. Estes Z., Verges M., Barsalou L. W. (2008). Head up, foot down: object words orient attention to the objects' typical location. Psychol. Sci. 19, 93–97. doi: 10.1111/j.1467-9280.2008.02051.x
17. Fagioli S., Hommel B., Schubotz R. I. (2007). Intentional control of attention: action planning primes action-related stimulus dimensions. Psychol. Res. 71, 22–29. doi: 10.1007/s00426-005-0033-3
18. Ferrier L., Staudt A., Reilhac G., Jiménez M., Brouillet D. (2007). L'influence de la taille typique des objets dans une tâche de catégorisation [The influence of the typical size of objects in a categorization task]. Can. J. Exp. Psychol. 61, 316–321. doi: 10.1037/cjep2007031
19. Gabay S., Leibovich T., Henik A., Gronau N. (2013). Size before numbers: conceptual size primes numerical value. Cognition 129, 18–23. doi: 10.1016/j.cognition.2013.06.001
20. Gentilucci M., Benuzzi F., Bertolani L., Daprati E., Gangitano M. (2000). Language and motor control. Exp. Brain Res. 133, 468–490. doi: 10.1007/s002210000431
21. Gentilucci M., Gangitano M. (1998). Influence of automatic word reading on motor control. Eur. J. Neurosci. 10, 752–756. doi: 10.1046/j.1460-9568.1998.00060.x
22. Girardi G., Lindemann O., Bekkering H. (2010). Context effects on the processing of action-relevant object features. J. Exp. Psychol. Hum. Percept. Perform. 36, 330–340. doi: 10.1037/a0017180
23. Glover S. (2004). Separate visual representations in the planning and control of action. Behav. Brain Sci. 27, 3–24. doi: 10.1017/S0140525X04000020
24. Glover S., Dixon P. (2002). Semantics affect the planning but not control of grasping. Exp. Brain Res. 146, 383–387. doi: 10.1007/s00221-002-1222-6
25. Glover S., Rosenbaum D. A., Graham J., Dixon P. (2004). Grasping the meaning of words. Exp. Brain Res. 154, 103–108. doi: 10.1007/s00221-003-1659-2
26. Henik A., Tzelgov J. (1982). Is three greater than five: the relation between physical and semantic size in comparison tasks. Mem. Cogn. 10, 389–395. doi: 10.3758/BF03202431
27. Hershenson M., Samuels S. M. (1999). An airplane illusion: apparent velocity determined by apparent distance. Perception 28, 433–436. doi: 10.1068/p2779
28. Heurley L. P., Milhau A., Chesnoy G., Ferrier L. P., Brouillet T., Brouillet D. (2012). Influence of language on color perception: a simulationist explanation. Biolinguistics 6, 354–382.
29. Holyoak K. J. (1977). The form of analog size information in memory. Cogn. Psychol. 9, 31–51. doi: 10.1016/0010-0285(77)90003-2
30. Holyoak K. J., Dumais S. T., Moyer R. S. (1979). Semantic association effects in a mental comparison task. Mem. Cogn. 7, 303–313. doi: 10.3758/BF03197604
31. Hommel B., Elsner B. (2009). Acquisition, representation and control of action, in Oxford Handbook of Human Action, eds Morsella E., Bargh J. A., Gollwitzer P. M. (Oxford: Oxford University Press), 371–398.
32. Hosking S. G., Crassini B. (2010). The effects of familiar size and object trajectories on time-to-contact judgements. Exp. Brain Res. 203, 541–552. doi: 10.1007/s00221-010-2258-7
33. Hosking S. G., Crassini B. (2011). The influence of optic expansion rates when judging the relative time to contact of familiar objects. J. Vis. 11, 1–13. doi: 10.1167/11.6.20
34. Jeannerod M. (1984). The timing of natural prehension movements. J. Mot. Behav. 16, 235–254. doi: 10.1080/00222895.1984.10735319
35. Kiefer M., Barsalou L. W. (2013). Grounding the human conceptual system in perception, action and internal states, in Action Science: Foundations of an Emerging Discipline, eds Prinz W., Beisert M., Herwig A. (Cambridge: MIT Press), 381–408. doi: 10.7551/mitpress/9780262018555.003.0015
36. Konkle T., Oliva A. (2011). Canonical visual size for real-world objects. J. Exp. Psychol. Hum. Percept. Perform. 37, 23–37. doi: 10.1037/a0020413
37. Konkle T., Oliva A. (2012). A familiar-size Stroop effect: real-world size is an automatic property of object representation. J. Exp. Psychol. Hum. Percept. Perform. 38, 561–569. doi: 10.1037/a0028294
38. Landau M. J., Meier B. P., Keefer L. A. (2010). A metaphor-enriched social cognition. Psychol. Bull. 136, 1045–1067. doi: 10.1037/a0020970
39. Lee D. N. (1976). A theory of visual control of braking based on information about time-to-collision. Perception 5, 437–459. doi: 10.1068/p050437
40. Lindemann O., Abolafia J. M., Girardi G., Bekkering H. (2007). Getting a grip on numbers: numerical magnitude priming in object grasping. J. Exp. Psychol. Hum. Percept. Perform. 33, 1400–1409. doi: 10.1037/0096-1523.33.6.1400
41. Linsen S., Leyssen M. H. R., Sammartino J., Palmer S. E. (2011). Aesthetic preferences in the size of images of real-world objects. Perception 40, 291–298. doi: 10.1167/10.7.1234
42. Lobel T. (2014). Sensation: The New Science of Physical Intelligence. New York, NY: Atria Books.
43. Moretto G., di Pellegrino G. (2008). Grasping numbers. Exp. Brain Res. 188, 505–515. doi: 10.1007/s00221-008-1386-9
44. Moyer R. S. (1973). Comparing objects in memory: evidence suggesting an internal psychophysics. Percept. Psychophys. 13, 180–184. doi: 10.3758/BF03214124
45. Namdar G., Tzelgov J., Algom D., Ganel T. (2014). Grasping numbers: evidence for automatic influence of numerical magnitude on grip aperture. Psychon. Bull. Rev. 21, 830–835. doi: 10.3758/s13423-013-0550-9
46. Paivio A. (1975). Perceptual comparisons through the mind's eye. Mem. Cogn. 3, 635–647. doi: 10.3758/BF03198229
47. Predebon J. (1992). The role of instructions and familiar size in absolute judgments of size and distance. Percept. Psychophys. 51, 344–354. doi: 10.3758/BF03211628
48. Predebon J. (1994). Perceived size of familiar objects and the theory of off-sized perceptions. Percept. Psychophys. 56, 238–247. doi: 10.3758/BF03213902
49. Rey A. E., Riou B., Versace R. (2014). Demonstration of an Ebbinghaus illusion at a memory level: manipulation of the memory size and not the perceptual size. Exp. Psychol. 61, 378–384. doi: 10.1027/1618-3169/a000258
50. Riou B., Lesourd M., Brunel L., Versace R. (2011). Visual memory and visual perception: when memory improves visual search. Mem. Cogn. 39, 1094–1102. doi: 10.3758/s13421-011-0075-2
51. Rubinsten O., Henik A. (2002). Is an ant larger than a lion? Acta Psychol. 111, 141–154. doi: 10.1016/S0001-6918(02)00047-1
52. Scorolli C., Borghi A. M., Glenberg A. M. (2009). Language-induced motor activity in bi-manual object lifting. Exp. Brain Res. 193, 43–53. doi: 10.1007/s00221-008-1593-4
53. Setti A., Caramelli N., Borghi A. M. (2009). Conceptual information about size of objects in nouns. Eur. J. Cogn. Psychol. 21, 1022–1044. doi: 10.1080/09541440802469499
54. Shoben E. J., Wilson T. L. (1998). Categorization in judgments of relative magnitude. J. Mem. Lang. 38, 94–111. doi: 10.1006/jmla.1997.2534
55. Srinivas K. (1996). Size and reflection effects in priming: a test of transfer-appropriate processing. Mem. Cogn. 24, 441–452. doi: 10.3758/BF03200933
56. Tanaka J. W., Weiskopf D., Williams P. (2001). The role of color in high-level vision. Trends Cogn. Sci. 5, 211–215. doi: 10.1016/S1364-6613(00)01626-0
57. Tucker M., Ellis R. (2004). Action priming by briefly presented objects. Acta Psychol. 116, 185–203. doi: 10.1016/j.actpsy.2004.01.004
58. Versace R., Labeye E., Badard G., Rose M. (2009). The contents of long-term memory and the emergence of knowledge. Eur. J. Cogn. Psychol. 21, 522–560. doi: 10.1080/09541440801951844
59. Viarouge A., de Hevia M.-A. (2013). The role of numerical magnitude and order in the illusory perception of size and brightness. Front. Psychol. 4:484. doi: 10.3389/fpsyg.2013.00484
60. Whittlesea B. W. A. (1987). Preservation of specific experiences in the representation of general knowledge. J. Exp. Psychol. Learn. Mem. Cogn. 13, 3–17. doi: 10.1037/0278-7393.13.1.3
61. Winter B., Bergen B. (2012). Language comprehenders represent object distance both visually and auditorily. Lang. Cogn. 4, 1–16. doi: 10.1515/langcog-2012-0001
62. Wykowska A., Schubö A., Hommel B. (2009). How you move is what you see: action planning biases selection in visual search. J. Exp. Psychol. Hum. Percept. Perform. 35, 1755–1769. doi: 10.1037/a0016798
63. Zwaan R. A. (2008). Experiential traces and mental simulations in language comprehension, in Symbols and Embodiment: Debates on Meaning and Cognition, eds de Vega M., Glenberg A. M., Graesser A. C. (Oxford: Oxford University Press), 165–180. doi: 10.1093/acprof:oso/9780199217274.003.0009
