Communicative & Integrative Biology logoLink to Communicative & Integrative Biology
. 2009 Nov-Dec;2(6):479–481. doi: 10.4161/cib.2.6.9344

What can crossmodal aftereffects reveal about neural representation and dynamics?

Talia Konkle 1,2,*, Christopher I Moore 1,2
PMCID: PMC3398893  PMID: 22811763

Abstract

The brain continuously adapts to incoming sensory stimuli, which can lead to perceptual illusions in the form of aftereffects. Recently we demonstrated that motion aftereffects transfer between vision and touch.1 Here, the adapted brain state induced by one modality has consequences for processing in another modality, implying that somewhere in the processing stream, visual and tactile motion have shared underlying neural representations. We propose the adaptive processing hypothesis: any area that processes a stimulus adapts to the features of the stimulus it represents, and this adaptation has consequences for perception. On this view there is no single locus of an aftereffect. Rather, aftereffects emerge when the test stimulus used to probe the effect of adaptation requires processing of a given type, and the illusion will reflect the properties of the brain area(s) that support that specific level of representation. We further suggest that many cortical areas are more process-dependent than modality-dependent, with crossmodal interactions reflecting shared processing demands even in 'early' sensory cortices.

Keywords: Adaptation, brain state, multisensory, motion, visual, tactile, aftereffect

Introduction

Aftereffects are a powerful behavioral paradigm for inferring how information is represented in the brain and how neural populations and circuits change over time – their neural dynamics. In a motion aftereffect paradigm, for example, an observer stares at visual motion, such as a drifting grating (the adapting stimulus), for a period of seconds. When this stimulus is suddenly replaced by a static grating (the test stimulus), the observer briefly sees the stationary stimulus as if it were moving in the direction opposite the original motion.2,3 This simple behavioral paradigm yields critical insights into both underlying neural representation and neural dynamics. First, visual motion perception relies on competing representations tuned to opponent directions. Second, extended processing of a stimulus leads to changes in the brain, which we refer to as the adapted brain state. Where in the brain are circuits changing during adaptation? In other words, what is the site of these neural dynamics?
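To make the opponent-representation inference concrete, consider a toy model: two direction-tuned channels whose difference is read out as perceived motion, with adaptation modeled as a gain reduction in the stimulated channel. This is a minimal sketch for illustration only; the channel structure, exponential gain decay, and all parameter values are assumptions, not quantities measured in the studies cited here.

```python
# Minimal sketch (assumed model, arbitrary numbers): an opponent-channel
# account of the motion aftereffect. Two channels encode rightward and
# leftward motion; perceived direction is their difference; adaptation is
# modeled as an exponential gain decay in the stimulated channel.
import math

def perceived_motion(right_drive, left_drive, gains):
    """Opponent read-out: positive = rightward percept, negative = leftward."""
    return gains["right"] * right_drive - gains["left"] * left_drive

gains = {"right": 1.0, "left": 1.0}

# Before adaptation, a static test drives both channels weakly and equally,
# and the opponent read-out is balanced: no perceived motion.
print(perceived_motion(0.2, 0.2, gains))  # 0.0

# Adapt to rightward motion for 10 s: the rightward channel's gain decays
# exponentially toward a floor (tau and floor are assumed values).
adapt_seconds, tau, floor = 10.0, 5.0, 0.6
gains["right"] = floor + (1.0 - floor) * math.exp(-adapt_seconds / tau)

# The same static test now yields a negative read-out: illusory motion
# opposite the adapted direction, i.e., the motion aftereffect.
print(perceived_motion(0.2, 0.2, gains))  # < 0 (leftward)
```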

One intuitive answer is that visual motion aftereffects arise from local dynamics in visual cortex. Motion aftereffects also exist in the auditory4 and tactile domains,5 suggesting that such neural dynamics are a general property of cortico-cortical or thalamo-cortical circuits.6 However, we recently demonstrated that adaptation to tactile motion can lead to visual motion aftereffects, and vice versa.1 Further, visual motion adaptation leads to auditory motion aftereffects.7,8 As such, these crossmodal aftereffects challenge the simple explanation that motion aftereffects arise from unisensory cortex alone. What properties of adaptation and representation are needed to explain how crossmodal aftereffects occur?

Adaptive Processing Hypothesis

Aftereffects reveal that extended processing of incoming sensory information changes the brain, and that this change has measurable consequences for subsequent perception. What is the site of these neural dynamics? A naïve view is that dynamics are expressed only at a final integration stage, for example, a single 'higher-order' cortical area where modulatory flexibility is inherent to its function. In this classic view, the explicit computational goal of that area is 'bimodal' integration, and crossmodal aftereffects would emerge as the product of dynamics at this convergent center.

In contrast, we propose that any area or circuit that processes a stimulus is changed by that stimulus, and that these dynamics are a functional property of areas throughout the system; we call this the adaptive processing hypothesis. For example, motion-responsive neurons are found not only in the 'motion processing area' MT but also at earlier stages, including V1, V2, and V3, and in parietal areas. Thus, motion aftereffects likely originate not from adaptation in a single area or circuit but from many stages of processing, in early sensory areas as well as higher-level areas.3

A corollary of the adaptive processing hypothesis is that at each level of processing, different aspects of the incoming stimulus adapt, reflecting the underlying dimensions represented by those neural populations. For example, V1 responses reflect orientation, scale, and motion properties at a specific location, with receptive field sizes and tuning properties becoming progressively larger and more complex in V2, V3, and MT. This view implies that different aftereffects might be observed across retinotopic locations, depending on the relative contributions of earlier and later areas to processing the subsequent test stimulus.

Thus, in adaptation paradigms, the subsequent test stimulus can be thought of as a probe of the adapted state. For example, following 10 sec of visual motion adaptation, presenting a static grating leads to retinotopic aftereffects of short duration and low illusory velocity. Following the same adaptation, presenting a dynamic grating instead leads to aftereffects at more spatial locations, with faster illusory velocity and longer duration.9 Importantly, the same adapted brain state can give rise to several different perceptual aftereffects. Similarly, following adaptation to a face, observers show stronger aftereffects when tested on upright versus inverted faces, but also show aftereffects when tested with simple T-shaped stimuli.10,11 The critical insight is that the adapted brain state will have consequences for a subsequently presented test stimulus to the extent that the test stimulus depends on processing in the adapted areas. This framework helps explain the well-known fact that aftereffects depend on the relationship between the adapting and test stimuli.3
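The probe metaphor can be sketched in the same spirit: suppose adaptation leaves a directional imbalance at each processing stage, and a test stimulus expresses that imbalance only in proportion to how strongly it engages each stage. The stage labels, engagement weights, and numbers below are purely assumed, chosen only to mirror the static-versus-dynamic test contrast described above.

```python
# Sketch of "test stimulus as probe" (assumed stages and weights): the same
# adapted state is read out differently by different test stimuli.

# Directional imbalance left by rightward-motion adaptation at each stage
# (arbitrary units; larger = more adapted).
adapted_state = {"early_retinotopic": 0.3, "later_motion": 0.5}

# Assumed engagement of each stage by each test stimulus.
test_engagement = {
    "static_grating":  {"early_retinotopic": 0.9, "later_motion": 0.2},
    "dynamic_grating": {"early_retinotopic": 0.4, "later_motion": 0.9},
}

def aftereffect_strength(test):
    """Aftereffect = stage imbalances weighted by how much the test engages them."""
    return sum(adapted_state[stage] * weight
               for stage, weight in test_engagement[test].items())

for test in test_engagement:
    print(test, round(aftereffect_strength(test), 2))
# The dynamic test reads out more of the later-stage adaptation, consistent
# with its stronger and longer-lasting aftereffect.
```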

In the case of crossmodal aftereffects, these paradigms simply use one modality to probe the adapted state induced by extended processing in another modality. For example, we recently demonstrated that visual and tactile motion adaptation lead to aftereffects in the other modality.1 Based on the framework outlined above, processing tactile motion depends on circuits that were previously adapted by visual motion processing. Similarly, the processing of visual motion depends on circuits adapted by tactile motion. Crossmodal motion aftereffects reveal that visual and tactile motion perception rely on partially shared neural substrates.
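The same kind of sketch extends to the crossmodal case: if each modality engages a modality-specific circuit plus a shared motion circuit, and adaptation accrues wherever processing occurs, then crossmodal transfer falls out of the overlap. The architecture and all weights below are hypothetical, chosen only to illustrate the logic.

```python
# Hypothetical shared-substrate sketch: visual and tactile motion each engage
# a modality-specific circuit plus one shared circuit; adaptation accrues
# wherever processing occurs, so a test in the other modality inherits only
# the shared portion. All weights are assumed for illustration.

circuits = {"visual_only": 0.0, "shared": 0.0, "tactile_only": 0.0}

engagement = {
    "vision": {"visual_only": 0.6, "shared": 0.4},
    "touch":  {"tactile_only": 0.6, "shared": 0.4},
}

def adapt(modality, strength=1.0):
    """Accrue adaptation in every circuit the adapting modality engages."""
    for circuit, weight in engagement[modality].items():
        circuits[circuit] += strength * weight

def aftereffect_strength(test_modality):
    """Read out adaptation through the circuits the test modality engages."""
    return sum(circuits[c] * w for c, w in engagement[test_modality].items())

adapt("vision")
print("visual test: ", aftereffect_strength("vision"))  # 0.52, within-modality
print("tactile test:", aftereffect_strength("touch"))   # 0.16, via shared circuit
```

In this toy scheme the ratio of the crossmodal to the within-modality aftereffect indexes the degree of shared processing, which is the empirical logic of the transfer paradigm.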

Process-Selective Cortical Circuits

One reason there might be a site of shared processing between visual and tactile motion comes from an argument for processing efficiency. If a neural circuit is specialized to extract motion trajectories from spatio-temporal patterns of spiking input, it may be efficient to route information that requires that computation through that circuit. Indeed, visual motion and tactile motion appear to be processed in overlapping (or at least adjacent) areas.12,13

However, this logic does not extend to auditory motion, which does not activate area MT14 (reviewed in ref. 15). One possible account of this discrepancy is that visual motion and tactile motion share a similar input format, in which a grid of sensors in the retina or skin receives spatial information over time. Auditory motion information, in contrast, arrives not via a grid of spatial sensors but via interaural time differences, suggesting that these stimuli access other brain areas organized to perform this different computation more efficiently.

Several other examples exist of specialized processing circuits being recruited across modalities. For example, TMS studies have shown that fine spatial orientation judgments recruit V1 for visual as well as tactile stimuli.16 Further, fMRI evidence has shown that fine-scale orientation judgments in the visual and tactile modalities both activate early visual cortex,17 that visual and haptic shape processing activate lateral occipital areas,18 and that haptic exploration of faces may even activate the face-selective fusiform face area (FFA).19 If these areas fundamentally contribute to the perception of orientation, shape, and faces in both modalities, then we would predict that crossmodal aftereffects will be found. More generally, these data support process-selective, rather than stimulus-selective, cortical circuits. Indeed, emerging evidence that the neocortex is more multisensory than previously believed20,21 also suggests that defining areas by sensory modality may not accurately describe the underlying representations.

Conclusions

Aftereffects reveal that any incoming sensory information leads to changes in neural dynamics. Typically, when we sense the world, we sample the continuous stream of input with rapid exploratory patterns. The eyes saccade about three times per second, and maintaining steady fixation eventually causes the world to fade to flat gray. Similarly, the skin sweeps over surfaces, and without changing stimulation we cease to notice contact, e.g., with clothing. While active sensing rapidly samples different aspects of the physical world, adaptation paradigms force extended processing of a single aspect of it (for adaptation with brief durations, see ref. 22). In a sense, the extended processing during adaptation may accentuate neural mechanisms and perceptual consequences that are continually operating at a more rapid timescale.

Crossmodal aftereffects provide several insights into these adaptive mechanisms. Specifically, we suggest that adaptation occurs at all neural sites involved in processing the stimulus, e.g., by renormalizing competing representations to reflect the incoming sensory information. The test stimulus can be thought of as a probe of this adapted state – the extent of shared processing substrates determines what aftereffect properties will be observed. Areas may be process-dependent rather than stimulus-dependent, with crossmodal interactions following automatically in cases of shared processing demands.


References

1. Konkle T, Wang Q, Hayward V, Moore CI. Motion aftereffects transfer between touch and vision. Curr Biol. 2009;19:1–6. doi: 10.1016/j.cub.2009.03.035.
2. Wolgemuth A. On the aftereffect of seen movement. Br J Psychol. 1911;1:1–117.
3. Mather G, Pavan A, Campana G, Casco C. The motion aftereffect reloaded. Trends Cogn Sci. 2008;12:481–7. doi: 10.1016/j.tics.2008.09.002.
4. Grantham DW, Wightman FL. Auditory motion aftereffects. Percept Psychophys. 1979;26:403–8. doi: 10.3758/BF03204166.
5. Watanabe J, Hayashi S, Kajimoto H, Tachi S, Nishida S. Tactile motion aftereffects produced by appropriate presentation for mechanoreceptors. Exp Brain Res. 2007;180:577–82. doi: 10.1007/s00221-007-0979-z.
6. Moore CI. Frequency-dependent processing in the vibrissa sensory system. J Neurophysiol. 2004;91:2390–9. doi: 10.1152/jn.00925.2003.
7. Kitagawa N, Ichihara S. Hearing visual motion in depth. Nature. 2002;416:172–4. doi: 10.1038/416172a.
8. Jain A, Sally SL, Papathomas TV. Audiovisual short-term influences and aftereffects in motion: examination across three sets of directional pairings. J Vis. 2008;8(15):7.1–13. doi: 10.1167/8.15.7.
9. Nishida S, Sato T. Motion aftereffect with flickering test patterns reveals higher stages of motion processing. Vision Res. 1995;35:477–90. doi: 10.1016/0042-6989(94)00144-B.
10. Fang F, Ijichi K, He S. Transfer of the face viewpoint aftereffect from adaptation to different and inverted faces. J Vis. 2007;7(13):6.1–9. doi: 10.1167/7.13.6.
11. Susilo T, McKone E, Edwards M. Solving the upside-down puzzle: inverted face aftereffects derive from shape-generic rather than face-specific mechanisms. J Vis. 2009. doi: 10.1167/10.13.1. In press.
12. Hagen MC, Franzén O, McGlone F, Essick G, Dancer C, Pardo JV. Tactile motion activates the human middle temporal/V5 (MT/V5) complex. Eur J Neurosci. 2002;16:957–64. doi: 10.1046/j.1460-9568.2002.02139.x.
13. Beauchamp MS, Yasar NE, Kishan N, Ro T. Human MST but not MT responds to tactile stimulation. J Neurosci. 2007;27:8261–7. doi: 10.1523/JNEUROSCI.0754-07.2007.
14. Lewis JW, Beauchamp MS, DeYoe EA. A comparison of visual and auditory motion processing in human cerebral cortex. Cereb Cortex. 2000;10:873–88. doi: 10.1093/cercor/10.9.873.
15. Poirier C, Collignon O, Devolder AG, Renier L, Vanlierde A, Tranduy D, et al. Specific activation of the V5 brain area by auditory motion processing: an fMRI study. Brain Res Cogn Brain Res. 2005;25:650–8. doi: 10.1016/j.cogbrainres.2005.08.015.
16. Zangaladze A, Epstein CM, Grafton ST, Sathian K. Involvement of visual cortex in tactile discrimination of orientation. Nature. 1999;401:587–90. doi: 10.1038/44139.
17. Sathian K, Zangaladze A, Hoffman JM, Grafton ST. Feeling with the mind's eye. Neuroreport. 1997;8:3877–81. doi: 10.1097/00001756-199712220-00008.
18. Amedi A, Malach R, Hendler T, Peled S, Zohary E. Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci. 2001;4:324–30. doi: 10.1038/85201.
19. Kilgour AR, Kitada R, Servos P, James TW, Lederman SJ. Haptic face identification activates ventral occipital and temporal areas: an fMRI study. Brain Cogn. 2005;59:246–57. doi: 10.1016/j.bandc.2005.07.004.
20. Ghazanfar AA, Schroeder CE. Is neocortex essentially multisensory? Trends Cogn Sci. 2006;10:278–85. doi: 10.1016/j.tics.2006.04.008.
21. Driver J, Noesselt T. Multisensory interplay reveals crossmodal influences on 'sensory-specific' brain regions, neural responses, and judgments. Neuron. 2008;57:11–23. doi: 10.1016/j.neuron.2007.12.013.
22. Suzuki S, Cavanagh P. A shape-contrast effect for briefly presented stimuli. J Exp Psychol Hum Percept Perform. 1998;24:1315–41. doi: 10.1037/0096-1523.24.5.1315.
