Journal of Neurophysiology. 2015 Mar 5;113(9):3069–3075. doi: 10.1152/jn.00836.2014

The speed of object recognition from a haptic glance: event-related potential evidence

Ane Gurtubay-Antolin 1,2, Borja Rodriguez-Herreros 1,2, Antoni Rodríguez-Fornells 1,2,3
PMCID: PMC4455565  PMID: 25744887

Abstract

Recognition of an object usually involves a wide range of sensory inputs. Accumulating evidence shows that the first brain responses associated with the visual discrimination of objects emerge around 150 ms, but fewer studies have been devoted to measuring the first neural signature of haptic recognition. To investigate the speed of haptic processing, we recorded event-related potentials (ERPs) during a shape discrimination task without visual information. After a restricted exploratory procedure, participants (n = 27) were instructed to judge whether the touched object corresponded to an expected object whose name had previously been presented on a screen. We found that any incongruence between the presented word and the shape of the object evoked a frontocentral negativity starting at ∼175 ms. With the use of source analysis and L2 minimum-norm estimation, the neural sources of this differential activity were located in higher level somatosensory areas and prefrontal regions involved in error monitoring and cognitive control. Our findings reveal that the somatosensory system is able to complete an amount of haptic processing substantial enough to trigger conflict-related responses in medial and prefrontal cortices in <200 ms. The present results show that our haptic system is a fast recognition device closely interlinked with error- and conflict-monitoring processes.

Keywords: haptics, haptic object recognition, tactile object recognition, event-related potentials, error monitoring


daily actions such as switching off an alarm placed on the bedside table challenge us to discriminate objects by touch without the help of vision. Although the use of haptic information in identifying objects cannot rival vision in terms of speed, there has been intriguing debate about whether touch has a fast and viable object recognition system independent from that of vision (Amedi et al. 2001; Klatzky et al. 1987). The visual system acquires multiple types of information through parallel channels, and the first brain responses associated with the visual discrimination of images arise around 150 ms (Allison et al. 1999; Thorpe et al. 1996). Somatosensory information, however, is transmitted through a network of hierarchical connections (Hyvärinen and Poranen 1978b; Iwamura and Tanaka 1978; James et al. 2007), beginning with thalamocortical circuits relaying information to the primary somatosensory cortex (SI), located in the postcentral gyrus of the parietal lobe (DiCarlo et al. 1998; Huffman and Krubitzer 2001). SI neurons project to higher order somatosensory areas, such as the secondary somatosensory cortex (SII) and the superior parietal lobule (SPL), which compute more complex representations of shape (Roland et al. 1998). Well-defined parietal lesions have been associated with a selective disturbance in the recollection of geometrical shape, a phenomenon termed tactile agnosia (Caselli 1991; Reed et al. 1996). To gather all haptic inputs together to permit object recognition, these areas are connected with the insular cortex, remote prefrontal regions, and the anterior cingulate cortex (ACC) through a frontoparietal network engaged in complex haptic object recognition (HOR) (Binkofski et al. 1999; Stoeckel et al. 2003). Consequently, although touch can rapidly provide information regarding an object's shape, it may require serial analysis as opposed to the parallel analysis of visual object recognition, resulting in longer response times.
However, previous studies have found shared visuotactile representations of shape between both modalities (Amedi et al. 2010; Lacey et al. 2009). According to these findings, a visuotactile region in the lateral occipital complex stores shape representations that are recruited in both tactile and visual tasks. Although this was initially interpreted as reflecting visual imagery and its interference in tactile shape discrimination tasks, the effect is not restricted to sighted individuals, since activation in that region has also been found in congenitally blind adults.

Another controversial question is to what extent a single grasp is sufficient for recognizing an object (Bodegård et al. 2001; Roland and Mortensen 1987; Seitz et al. 1991). To identify an object, cutaneous inputs from the skin have to be integrated with kinesthetic information from muscles and joints. A single grasp is thought to limit the amount of kinesthetic information so severely that recognition is precluded. However, in an influential study, Klatzky and Lederman (1995) demonstrated that a brief haptic exposure without active exploration (a "haptic glance") was enough to identify an encountered object. This finding was observed only in conditions where expectancies about the object that was going to be presented were created, thus providing hypotheses about candidate objects. The haptic glance in this experiment consisted of allowing the participant's fingertips (as many as necessary) to make contact with the object for only 200 ms, avoiding movement of the fingers on the object's surface. This temporal constraint provides an initial estimate of the amount of exposure time required to recognize an object. Along the same lines, behavioral measures demonstrated that in the absence of sight, people needed <2 s to correctly name common objects during a free tactile exploration task (Klatzky et al. 1985). Although these studies provided an ecological benchmark of how fast and accurate our haptic system is, they did not isolate the time required for neural haptic processing alone: the former focused exclusively on the time of exposure, and the latter also included the time required to emit the response. Previous work in the tactile domain has analyzed the discrimination of somatosensory information using the comparison between two subsequent stimuli (Mountcastle et al. 1990; Romo and Salinas 2003).
Although that research focused on flutter vibrations delivered to the hand (passive recognition), the procedure of the present experiment has strong similarities to the methodology used in those reports. In our study, objects were recognized via discrepancy detection between expected and actual haptic input. It is noteworthy that the somatosensory information was associated with an object and that active exploration was necessary to extract haptic features. The present study aimed to investigate, for the first time to our knowledge, the exact time course of haptic recognition using fine-grained electrophysiological measures. We recorded event-related potentials (ERPs) during a HOR task in which participants were instructed to distinguish the shape of a familiar object without visual information. We found that any incongruence between the expected and the touched object elicited a frontocentral negativity starting 175 ms after contact time. The temporal dynamics of HOR-related selective brain signatures point to a fast, specialized tactile recognition system closely linked to neural networks implicated in conflict-monitoring processes.

MATERIALS AND METHODS

Participants.

Twenty-seven right-handed naive volunteers [11 women, mean (±SD) age = 25.7 ± 7.09 yr] participated in the experiment. The experiment was undertaken with the understanding and written consent of each participant, who reported no neurological or neuromuscular disorders. The study was approved by the local ethics committee in accordance with the Declaration of Helsinki. One subject was excluded from the analysis due to excessive eye movement artifacts.

Haptic stimuli and procedure.

Participants sat in a comfortable chair with the arm extended on a table surface that was 35 cm (vertical distance) below the eyes. Nine two-dimensional (2-D) wooden geometrical objects were manufactured: racket, circle, square, triangle, arrow, flower, crown, heart, and lightning (Fig. 1). The objects were chosen to be highly salient so that they would be easily identifiable with only three contact points and at a single grasp. After participants completed a pilot study with 12 shapes, 3 shapes were discarded due to difficulties in their identification. Participants were not allowed to see the objects during the entire experimental study; we obstructed vision by placing an opaque screen between the subject and the object. Before the experiment, participants underwent a learning-training phase to become familiar with the objects' shapes. All the objects were presented for free haptic exploration until participants could discriminate them correctly through touch. The experimental task consisted of a constrained exploration by touching the object at three specific locations (contact points) after sliding three fingers (thumb, index, and middle) through three rails that were attached to the table. Contact points were the same for each object, preventing the use of their location to discriminate objects (Fig. 2B). Thus the three fingers always had the same initial and final positions.

Fig. 1.


Each stimulus (racket, circle, square, triangle, arrow, flower, crown, heart, and lightning) is shown with its 3 contact points (black circles).

Fig. 2.


A: time course of a trial procedure, including the duration of each element. B: schematic illustration (top) of the contact points in 2 objects (circle and triangle). Solid lines correspond to the 3 rails that were used as guides to reach the object. The initial position of the fingers is indicated by the red squares and the final position (contact points) by the black circles. Bottom schematic illustrates the overlap of both shapes, confirming that the objects were chosen to share the same contact points. CP, contact point.

At the beginning of each trial, subjects placed the right thumb, middle, and index fingers in the outer edges of the rails. The name of one of the objects was displayed on a screen for 1 s, followed by a fixation cross for another 1 s. An auditory cue was then presented, indicating that the three-finger movement toward the figure could begin. The participants slid their fingers through the rails and reached the contact points (Fig. 2). In 50% of the trials the word displayed on the screen corresponded to the object touched (congruent), and in the other 50% of trials the object did not match the name (incongruent). For each of the nine objects, eight repetitions of their congruent trial and eight incongruent trials were presented. The incongruent trials were a combination of each shape with the remaining shapes (e.g., in the case "circle," the 8 incongruent trials were circle-square, circle-lightning, circle-arrow, circle-triangle, circle-crown, circle-flower, circle-heart, and circle-racket, with the first object being the one displayed on the screen and the second being the touched one). To obtain a high number of trials per condition for the ERP analysis, 2 additional series of 12 trials were included, maintaining the 50% congruent and 50% incongruent proportion. In these two series as well as in the training phase, a random sample from the entire set of possibilities was used. Three seconds after the auditory cue, a response prompt appeared on the screen asking participants to press "Yes" or "No" after considering whether the object that they were touching corresponded to the name of the object previously displayed on the screen. The yes-no choice was made by pressing one of two keyboard buttons with the left hand. The experimental session consisted of 168 trials performed in 4 different blocks, interleaved with resting periods. Two of these blocks consisted of 3 series of 12 trials, and the other 2 blocks consisted of 4 series of 12 trials.
Each series consisted of six congruent and six incongruent trials randomly presented. The total duration of the experiment was 80 min. Finger movements were recorded using an infrared motion capture system (CMS-30P; Zebris, Isny, Germany) with a sampling frequency of 83 Hz and a spatial resolution of 0.1 mm. Three sensors were attached to the three fingers of the subject's right hand to measure the time at which subjects reached the object (contact time). Data recording began 100 ms before the auditory cue and ended 3 s afterward.
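For concreteness, the main trial set described above (eight congruent repetitions per object plus one incongruent pairing with each of the remaining eight shapes) can be sketched as follows. This is a hypothetical reconstruction; the function and variable names are ours, not from the authors' materials.

```python
import random

SHAPES = ["racket", "circle", "square", "triangle", "arrow",
          "flower", "crown", "heart", "lightning"]

def build_trials(shapes, n_congruent=8):
    """Build the main congruent/incongruent trial set.

    Each trial is a (displayed_word, touched_object) pair: n_congruent
    congruent repetitions per shape, plus one incongruent pairing with
    every remaining shape, then shuffled.
    """
    trials = []
    for word in shapes:
        trials += [(word, word)] * n_congruent                     # congruent
        trials += [(word, obj) for obj in shapes if obj != word]   # incongruent
    random.shuffle(trials)
    return trials

trials = build_trials(SHAPES)
print(len(trials))  # 144 main trials (72 congruent, 72 incongruent)
```

With the 2 additional series of 12 trials mentioned in the text, this yields the reported 168-trial session at a 50/50 congruency proportion.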

Behavioral analysis.

To assess whether the chosen set of shapes was appropriate for the task, differences in the objects' discriminability were analyzed with an ANOVA on the discriminability index (d′) with the presented shape as a factor. The discriminability index is defined here as the probability of a hit minus the probability of a false alarm (hit: responding yes when the correct response is yes; false alarm: responding yes when the correct response is no). The higher the d′, the easier the object was to recognize. If discriminability differed depending on the shape, Bonferroni-corrected pairwise t-tests were used to determine which pairs of objects showed discriminability differences. Contact time was defined as the time when the absolute velocity of the last finger reaching the object dropped below 5% of its peak velocity. Other criteria for defining contact time were considered but rejected: a fixed velocity threshold was discarded due to the high intertrial variability of velocity, and a varying threshold corresponding to 5% of the average peak velocity across fingers was rejected due to interfinger speed variability. Response time was defined as the time needed to answer the yes-no question. Of note, the response time in the present study is not truly informative about the speed of the tactile discrimination process, because we asked participants to delay their motor response for 3 s to avoid contamination of the electroencephalographic (EEG) signal by motor-related ERP components. Trials with an incorrect yes-no response (incorrect trials) or with a response time longer than 2 s were removed from the analysis. The rejection rate was 11% and did not differ between congruency conditions (P = 0.57).
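The two behavioral measures can be made concrete with a short sketch. This is an illustration under stated assumptions, not the authors' code: the velocity trace is taken to be uniformly sampled, and since the text does not specify the search direction, the 5%-of-peak threshold is looked for forward from the velocity peak.

```python
import numpy as np

def discriminability(hits, misses, false_alarms, correct_rejections):
    """Discriminability index as defined in the text:
    P(hit) - P(false alarm)."""
    p_hit = hits / (hits + misses)
    p_fa = false_alarms / (false_alarms + correct_rejections)
    return p_hit - p_fa

def contact_time(t, speed, fraction=0.05):
    """First time, after the trial's velocity peak, at which the
    absolute finger speed drops below `fraction` (5%) of that peak.
    Returns None if the speed never falls below the threshold."""
    peak = int(np.argmax(speed))
    below = np.flatnonzero(speed[peak:] < fraction * speed[peak])
    return t[peak + below[0]] if below.size else None
```

For example, `discriminability(9, 1, 1, 9)` gives 0.8: a high hit rate with few false alarms, i.e., an easily recognized shape.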

EEG recordings and analysis.

The EEG was recorded from 29 electrodes in an Electro-Cap (Electro-Cap International) using BrainVision Recorder software (version 1.3; Brain Products, Munich, Germany). Electrode positions were based on the standard 10/20 positions (Jasper 1958): Fp1/2, Fz, F7/8, F3/4, Fcz, Fc1/2, Fc5/6, Cz, C3/4, T7/8, Cp1/2, Cp5/6, Pz, P3/4, P7/8, PO1/2, Oz. Eye movements and blinks were monitored by electrodes placed on the external canthus and the infraorbital ridge of the right eye. All scalp electrodes were referenced offline to the average of the reference electrodes, placed at the right and left mastoids. Electrode impedances were kept below 5 kΩ. The EEG signal was sampled at 250 Hz and filtered with a bandpass of 0.01–70 Hz (half-amplitude cutoffs). Trials with a base-to-peak electrooculogram (EOG) amplitude of more than 75 μV, amplifier saturation, or a baseline shift exceeding 200 μV/s were automatically rejected (Cunillera et al. 2008).
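The two automatic rejection criteria stated above can be sketched as follows. This is a hedged illustration: the paper does not say how the baseline shift was estimated, so a linear fit over the epoch is assumed here, and the amplifier-saturation check is omitted.

```python
import numpy as np

def reject_trial(eog, eeg, fs=250.0, eog_limit=75.0, drift_limit=200.0):
    """Return True if a trial meets either rejection criterion:
    base-to-peak EOG amplitude > 75 uV, or a baseline shift in any
    EEG channel exceeding 200 uV/s (slope of a linear fit; this
    drift estimator is our assumption, not stated in the text).

    eog: 1-D EOG trace (uV); eeg: (n_channels, n_samples) array (uV).
    """
    if np.ptp(eog) > eog_limit:          # base-to-peak EOG amplitude
        return True
    samples = np.atleast_2d(eeg)
    t = np.arange(samples.shape[1]) / fs
    slopes = np.polyfit(t, samples.T, 1)[0]   # uV/s, one per channel
    return bool(np.any(np.abs(slopes) > drift_limit))
```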

Contact-locked ERPs for artifact-free trials were averaged over epochs of 900 ms, including a 100-ms prestimulus baseline. To obtain reliable averages, we required each condition to have a minimum of 60 trials per participant. We submitted amplitude values to a repeated-measures ANOVA that included two within-subject factors: congruency (congruent vs. incongruent) and electrode (29 levels). If an effect of electrode or a significant congruency × electrode interaction was found, we decomposed it further by selecting 15 electrodes for a topographical analysis according to 2 factors (Cunillera et al. 2006): laterality [3 levels: left (F3, Fc1, C3, P3, PO1), central (Fz, Fcz, Cz, Pz, Oz), and right (F4, Fc2, C4, P4, PO2)] and anterior-posterior [5 levels: frontal (F3, Fz, F4), frontocentral (Fc1, Fcz, Fc2), central (C3, Cz, C4), parietal (P3, Pz, P4), and parieto-occipital (PO1, Pz, PO2)]. This analysis was carried out on data corrected using the vector normalization procedure (McCarthy and Wood 1985). Post hoc analyses were performed when appropriate. The onset and offset latencies of the congruency effect for each electrode were determined via a stepwise series of one-tailed serial t-tests (step size = 4 ms) and defined as the point at which 10 consecutive t-tests showed a significant difference from zero (t = 2.056) (Rodriguez-Fornells et al. 2002). The Greenhouse-Geisser epsilon correction was applied when the assumption of sphericity was not met (Jennings and Wood 1976).
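The onset-detection procedure (one-sample t-tests every 4 ms, i.e., every sample at 250 Hz, with onset defined by 10 consecutive t-values above 2.056) can be sketched as below. The sign convention is our choice: the difference waveform is assumed to be coded so that the effect of interest is positive.

```python
import numpy as np

def effect_onset(diff, times, t_crit=2.056, n_consec=10):
    """Onset latency of a condition effect, following the serial
    t-test procedure described in the text: the first time point at
    which n_consec consecutive one-sample t-tests across subjects
    exceed t_crit.

    diff: (n_subjects, n_times) difference waveforms (effect coded
    positive); times: (n_times,) latencies in ms.
    """
    n = diff.shape[0]
    tvals = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n))
    run = 0
    for i, significant in enumerate(tvals > t_crit):
        run = run + 1 if significant else 0
        if run == n_consec:
            return times[i - n_consec + 1]   # first sample of the run
    return None
```

The offset latency can be obtained analogously by scanning the significance mask backward from the end of the epoch.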

To derive a topographical visualization of the somatosensory processing voltage sources, we transformed all the contact-locked averaged ERP waveforms into reference-free current source density (CSD) estimates (in μV/cm2, head radius = 10 cm) (Perrin et al. 1989).

Source localization analysis.

Brain Electrical Source Analysis (BESA 2000, version 5.3; Scherg 1990) was used to estimate the cortical areas involved in haptic processing. The L2 minimum-norm estimation (Hämäläinen and Ilmoniemi 1994) was applied for source reconstruction. We used an idealized three-shell spherical head model (radius = 85 mm). The minimum norm was applied to the data across the latency interval in which the difference between congruent and incongruent trials was statistically significant (172–456 ms). Spatiotemporal weighting was applied to the data using the signal subspace correlation method of Mosher and Leahy (1998). The BESA algorithm also computed the location and the orientation of multiple equivalent dipolar sources by calculating the voltage scalp distribution that would be produced for a given dipole model (forward solution) and comparing it with the original scalp distribution. The congruent-incongruent difference waveform was analyzed. Following previous solutions to the neural sources of somatosensory processing (Reed et al. 2004), two single dipoles were fitted in the ACC and SII, respectively, and two symmetrical dipoles were subsequently fitted near the posterior bank of the inferior frontal gyrus (IFG). Source analysis was performed for the interval in which the difference between congruent and incongruent trials was statistically significant (172–456 ms). The final locations of each dipole were projected on mean structural T1 MRI images of 24 individuals and converted into Talairach coordinates (Talairach and Tournoux 1988). The latencies of major peaks in the dipole source waveforms were taken as indexes of neural response timing. Each symmetric dipole pair was constrained to be mirror images in location only.

RESULTS

Behavioral analysis.

The results on the discriminability index suggest that the difficulty of recognizing the objects differed across shapes [F(5,125) = 2.5, P = 0.03]. The flower shape seemed to be the most difficult object to discriminate, since the only pairs (from all 72 possible pairs) that differed on the discriminability index were flower-arrow [t(25) = −4.0, P = 0.04] and flower-circle [t(25) = −3.6, P = 0.04]. The mean percentage of correct responses was 94.4% (SD = 3.6), denoting remarkably good performance. The mean overall contact time (i.e., the duration of the movement: the time between the onset of the movement toward the object and contact with it) was 551.9 ± 176 ms. The mean response time was 496 ± 102 ms.

ERP results.

We inspected the grand-average ERP waves of all scalp electrodes from 100 ms prestimulus to 800 ms poststimulus for congruent and incongruent conditions. The incongruence between the touched object and the observed word elicited a prominent negativity [F(1,26) = 18.5, P < 0.001] arising around 170–180 ms and lasting until 450 ms (Fig. 3A). The analysis showed a significant congruency × electrode interaction [F(28,728) = 8.3, P < 0.001], indicating differences in the topographical distribution of the congruency effect. Decomposition of this interaction revealed that the congruency effect was modulated as a function of laterality [congruency × laterality, F(2,52) = 19.8, P < 0.001], a modulation that was different across the sagittal axis [congruency × laterality × anterior-posterior, F(28,208) = 7.5, P < 0.001]. Post hoc contrasts indicated that the congruency effect was maximal in midline central and midline frontocentral sites [C3, t(26) = 3.2, P = 0.004; Cz, t(26) = 5.2, P < 0.001; C4, t(26) = 5.0, P < 0.001; Fc1, t(26) = 3.8, P = 0.001; Fcz, t(26) = 4.4, P < 0.001; Fc2, t(26) = 4.6, P < 0.001]. Figure 3B, left, shows the time course of activation in the midline central site that was found to contribute most to the congruency effect (Cz location). We observed that the incongruence between the touched object and the observed name elicited a negativity starting at 172 ms. One-tailed serial t-tests revealed that the effect peaked at 300 ms and lasted until 456 ms (Fig. 3B, right). The 3-D isovoltage topographical mapping illustrates the distribution of the congruency effect (Fig. 4A). CSD maps precisely localized the voltage source locations, which revealed a differential morphology in central and frontal sites (Fig. 4B).
Post hoc comparisons between congruent and incongruent CSD estimates revealed higher activity in Cz [t(26) = 4.7, P < 0.001], F7 [t(26) = −3.0, P = 0.005], and Fcz [t(26) = 2.9, P = 0.007] when the word and the touched object did not match.

Fig. 3.


A: grand-average (n = 27) contact-locked event-related potential (ERP) waveforms for congruent and incongruent conditions (dashed and dotted lines, respectively) and their respective difference waveform (Incong − Cong; solid line). The gray area indicates the time interval where the 2 conditions differed statistically. B: Cz difference waveform (left) and t-value evolution (right). Gray dotted line indicates the latency (172 ms) at which significance is reached (t value = 2.056).

Fig. 4.


Three-dimensional isovoltage topographical mapping of temporal evolution of the congruency effect. A: spatial distribution of differential voltage (incongruent − congruent) every 50 ms in the 200- to 400-ms time window. B: scalp distribution of current source density (CSD) difference waveform. C: minimum-norm estimates.

Source analysis.

Source reconstruction using minimum-norm maps suggested a frontal and central pattern for differential processing between congruency conditions (Fig. 4C). The strongest activation was limited to a 250- to 350-ms time window over frontal and prefrontal regions. The dipole solution identified the possible neural generators of the differential neural activity in the incongruent condition. The frontocentral negativity was explained by a four-source model with a single ACC dipole, a single dipole in the contralateral SII, and two symmetrical sources situated in the left/right IFG (Fig. 5). This four-source model explained up to 90% of the variance (residual variance = 9.13%) within the time interval in which congruent and incongruent ERP waveforms statistically differed (172–456 ms). Source waveforms of the SII dipole showed an earlier peak latency at 240 ms, whereas the ACC and IFG peaked around 300 and 350 ms, respectively.
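For reference, the goodness-of-fit figure reported here (residual variance) is conventionally the fraction of squared scalp voltage left unexplained by the dipole model's forward solution; a minimal sketch (function names are ours, and this is the standard definition rather than BESA's internal code):

```python
import numpy as np

def residual_variance(data, model):
    """Fraction of the measured scalp data (channels x time, uV)
    not explained by the forward-modeled data, as commonly reported
    by dipole source analyses."""
    return np.sum((data - model) ** 2) / np.sum(data ** 2)

def explained_variance(data, model):
    """Complement of the residual variance (the 'up to 90%' figure)."""
    return 1.0 - residual_variance(data, model)
```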

Fig. 5.


Dipole model for the neural sources of the ERP difference waveform (incongruent − congruent). Both the left secondary somatosensory cortex (SII) dipole (pink; x = −45.6, y = −20.9, z = 19.8) and the anterior cingulate cortex (ACC) dipole (red; x = −0.7, y = 4.3, z = 49.5), together with symmetrical dipoles at the inferior frontal gyrus (IFG; green and blue; x = ±44.2, y = 23.2, z = 5.3), were fit over the 172- to 456-ms interval. The time course of each computed dipole is represented in the source waveform. Images at right show the anatomic location of each dipole. L, left; R, right.

DISCUSSION

The present study sought to examine the time course of the neural correlates underlying haptic recognition of objects. We demonstrated that shape identification of an unexpected object elicited an increase of neural activity in frontocentral regions around 175 ms after contact time. Moreover, SII and the IFG, in conjunction with the ACC, seem to be the neural generators of this differential activity, supporting their crucial role in the fast detection of conflicting stimuli. Our results reveal a substantial amount of haptic processing accomplished before 200 ms, including the neural processes required to trigger neural networks involved in error monitoring and conflict detection.

Our study provides further evidence that object identification is possible at a single grasp (similar to the report by Klatzky and Lederman 1995) when expectancies about the object to be touched are created. Specifically, our results indicate that a restricted exploratory procedure that prevents the extraction of global volumetric attributes of an object (the most fundamental information needed for identification) is sufficient to enable the recognition of objects that differ only in their shape. Of note, the chosen set of shapes showed no differences in their discriminability, except for 2 of the 72 possible pairs. To create a sensory expectancy, participants first had to associate the displayed name with an object (categorization) and then make use of working memory mechanisms to retrieve and maintain the representation of the object or concept until reaching the presented shape. The haptic processing of the presented shape could then begin. Haptic inputs travel in a hierarchical fashion from the periphery to different subdivisions of SI [including Brodmann's areas (BA) 1, 2, 3a, and 3b (Bodegård et al. 2001; Iwamura 1998)]. However, BA1 and BA2 also receive input from neurons located in BA3a and BA3b, suggesting that BA1 and BA2 may actually represent a higher stage in haptic processing (Hyvärinen and Poranen 1978a; Iwamura and Tanaka 1978). Information is then projected to SII and other interconnected areas beyond the somatosensory cortex (such as the SPL or the intraparietal sulcus), which integrate low-level haptic inputs as well as somatosensory and motor information and compute more complex representations (Bodegård et al. 2000; Roland et al. 1998). In our task, once the object representation had been completed, the discrimination process could start: the working memory representation had to be compared with the actual perception.
Lastly, the decision outcome, presumably involving cognitive control and error-monitoring processes, had to be kept in working memory for nearly 2 s before the decision was executed (yes-no choice). As a result, a type of conflict-related negativity was elicited following trials with a mismatch between the expected and the actual input. Interestingly, our source reconstruction results showed that contralateral SII was more active in retrieving shape information from incongruent objects, corroborating its importance in the fast coding of complex macrogeometrical attributes during shape discrimination. This concurs with previous studies reporting an increase in SII activity during complex object manipulation (Binkofski et al. 1999) and points to SII as a key player in integrating somatosensory inputs to generate a coherent representation of an object, rather than in the distinction of specific object features (Sinclair and Burton 1993). In agreement with this view, seminal studies in monkeys and humans reported that SII lesions produced severe deficits in tactile object recognition tasks concerning the retention of shapes (Caselli 1991), without loss of simple tactile sensation or motor control (Ridley and Ettlinger 1976).

Whereas the above-mentioned evidence speaks to the importance of somatosensory cortices for haptic processing, further areas related to short-term information storage, retrieval, comparison, and decision making are necessary for fast sequential haptic discrimination (Stoeckel et al. 2003). A particularly compelling example is the existence of a frontoparietal network involved in object manipulation: tract-tracing studies showed corticocortical connections of the prefrontal cortex with the somatosensory cortex (Preuss and Goldman-Rakic 1989), which contribute to the gating of haptic information (Yamaguchi and Knight 1990). In addition, cortical projections from SII to motor-related areas may indicate that SII provides somatosensory feedback, gained from exploration, about the manipulatory movements necessary to extract salient object information (Friedman et al. 1986; Reed et al. 2004). These neural networks might provide potential pathways for top-down modulation of somatosensory areas (Gogulski et al. 2015). In this regard, we found that unexpected geometrical properties were able to trigger a network of neural generators that has previously been associated with conflict monitoring and further cognitive control processes (Botvinick et al. 2001; Marco-Pallarés et al. 2008). Therefore, our results support fast somatosensory projections to frontal and prefrontal regions, sending shape representations to supramodal cognitive processes. Specifically, shape processing of an incongruent object elicited an increase of neural activity 300 ms after touching the object in the ACC, which has largely been proposed to play a prominent role in conflict monitoring (Botvinick et al. 2004; Dehaene et al. 1994; Ridderinkhof et al. 2004). This finding suggests that incongruence-related activity is consistent with error-related components sensitive to erroneous somatosensory information, such as the error-related negativity (ERN) (Holroyd et al. 1998) or conflict-monitoring brain signatures (Rodriguez-Fornells et al. 2002). Importantly, the ACC was also found to be the neural generator in an ERP study following incorrect tactile feedback (Miltner et al. 1997). Subsequently, our data revealed a possible influence of the IFG ∼350 ms after contact time. Our results concur with previous neuroimaging studies that implicated the IFG in complex haptic discrimination (Binkofski et al. 1999; Stoeckel et al. 2003). This source reconstruction is also in accordance with prior studies that posited the IFG and surrounding insular cortex as neural sources of cognitive control, with these regions being highly interconnected with the error- and conflict-monitoring system (Doñamayor et al. 2012; Gehring and Knight 2000).

The present insights into the temporal dynamics of the neural correlates of haptic object recognition have a number of important implications. First, they outline a fast stream of the haptic system capable of generating neural signatures of efficient haptic discrimination in less than 200 ms. Second, they link the wealth of literature about complex haptic processing in higher order somatosensory areas with error- and conflict-monitoring networks. In particular, they support the view that medial frontal and lateral prefrontal areas inspect the reliability of sensory information from different modalities through top-down modulation. That being said, further evidence from connectivity approaches will be needed to make strong claims about the specificity of the processes taking place at each step of the modeled network and their directionality.

GRANTS

This work was supported by an Agency for Management of University and Research Grant (AGAUR B.E.) from the Catalan government (to B. Rodriguez-Herreros) and Spanish Government Grant PSI2012-29219 (to A. Rodríguez-Fornells).

DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the authors.

AUTHOR CONTRIBUTIONS

A.G.-A. performed experiments; A.G.-A. and B.R.-H. analyzed data; A.G.-A. and B.R.-H. interpreted results of experiments; A.G.-A. and B.R.-H. prepared figures; A.G.-A. and B.R.-H. drafted manuscript; A.R.-F. conception and design of research; A.R.-F. edited and revised manuscript; A.R.-F. approved final version of manuscript.

ACKNOWLEDGMENTS

We thank J. Marco-Pallarés for helpful discussions and advice.

Present address of B. Rodriguez-Herreros: LREN, Dept. of Clinical Neurosciences, Vaudois Hospital Univ. Center, Univ. of Lausanne, Lausanne, Switzerland.

REFERENCES

1. Allison T, Puce A, Spencer DD, McCarthy G. Electrophysiological studies of human face perception. I. Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb Cortex 9: 415–430, 1999.
2. Amedi A, Malach R, Hendler T, Peled S, Zohary E. Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci 4: 324–330, 2001.
3. Amedi A, Raz N, Azulay H, Malach R, Zohary E. Cortical activity during tactile exploration of objects in blind and sighted humans. Restor Neurol Neurosci 28: 143–156, 2010.
4. Binkofski F, Buccino G, Posse S, Seitz RJ, Rizzolatti G, Freund H. A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur J Neurosci 11: 3276–3286, 1999.
5. Bodegård A, Geyer S, Grefkes C, Zilles K, Roland PE. Hierarchical processing of tactile shape in the human brain. Neuron 31: 317–328, 2001.
6. Bodegård A, Ledberg A, Geyer S, Naito E, Zilles K, Roland PE. Object shape differences reflected by somatosensory cortical activation. J Neurosci 20: RC51, 2000.
7. Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. Conflict monitoring and cognitive control. Psychol Rev 108: 624–652, 2001.
8. Botvinick MM, Cohen JD, Carter CS. Conflict monitoring and anterior cingulate cortex: an update. Trends Cogn Sci 8: 539–546, 2004.
9. Caselli RJ. Rediscovering tactile agnosia. Mayo Clin Proc 66: 129–142, 1991.
10. Cunillera T, Gomila A, Rodríguez-Fornells A. Beneficial effects of word final stress in segmenting a new language: evidence from ERPs. BMC Neurosci 9: 23, 2008.
11. Dehaene S, Posner MI, Tucker DM. Localization of a neural system for error detection and compensation. Psychol Sci 5: 303–305, 1994.
12. DiCarlo JJ, Johnson KO, Hsiao SS. Structure of receptive fields in area 3b of primary somatosensory cortex in the alert monkey. J Neurosci 18: 2626–2645, 1998.
13. Doñamayor N, Heilbronner U, Münte TF. Coupling electrophysiological and hemodynamic responses to errors. Hum Brain Mapp 33: 1621–1633, 2012.
14. Friedman DP, Murray EA, O'Neill JB, Mishkin M. Cortical connections of the somatosensory fields of the lateral sulcus of macaques: evidence for a corticolimbic pathway for touch. J Comp Neurol 252: 323–347, 1986.
15. Gehring WJ, Knight RT. Prefrontal-cingulate interactions in action monitoring. Nat Neurosci 3: 516–520, 2000.
16. Gogulski J, Boldt R, Savolainen P, Guzmán-López J, Carlson S, Pertovaara A. A segregated neural pathway for prefrontal top-down control of tactile discrimination. Cereb Cortex 25: 161–166, 2015.
17. Hämäläinen MS, Ilmoniemi RJ. Interpreting magnetic fields of the brain: minimum norm estimates. Med Biol Eng Comput 32: 35–42, 1994.
18. Holroyd CB, Dien J, Coles MG. Error-related scalp potentials elicited by hand and foot movements: evidence for an output-independent error-processing system in humans. Neurosci Lett 242: 65–68, 1998.
19. Huffman KJ, Krubitzer L. Area 3a: topographic organization and cortical connections in marmoset monkeys. Cereb Cortex 11: 849–867, 2001.
20. Hyvärinen J, Poranen A. Movement-sensitive and direction and orientation-selective cutaneous receptive fields in the hand area of the post-central gyrus in monkeys. J Physiol 283: 523–537, 1978a.
21. Hyvärinen J, Poranen A. Receptive field integration and submodality convergence in the hand area of the post-central gyrus of the alert monkey. J Physiol 283: 539–556, 1978b.
22. Iwamura Y. Hierarchical somatosensory processing. Curr Opin Neurobiol 8: 522–528, 1998.
23. Iwamura Y, Tanaka M. Postcentral neurons in hand region of area 2: their possible role in the form discrimination of tactile objects. Brain Res 150: 662–666, 1978.
24. James TW, Kim S, Fisher JS. The neural basis of haptic object processing. Can J Exp Psychol 61: 219–229, 2007.
25. Jasper HH. The ten twenty electrode system of the international federation. Electroencephalogr Clin Neurophysiol 10: 371–375, 1958.
26. Jennings JR, Wood CC. Letter: The epsilon-adjustment procedure for repeated-measures analyses of variance. Psychophysiology 13: 277–278, 1976.
27. Klatzky RL, Lederman SJ. Identifying objects from a haptic glance. Percept Psychophys 57: 1111–1123, 1995.
28. Klatzky RL, Lederman SJ, Metzger VA. Identifying objects by touch: an "expert system". Percept Psychophys 37: 299–302, 1985.
29. Klatzky RL, McCloskey B, Doherty S, Pellegrino J, Smith T. Knowledge about hand shaping and knowledge about objects. J Mot Behav 19: 187–213, 1987.
30. Lacey S, Tal N, Amedi A, Sathian K. A putative model of multisensory object representation. Brain Topogr 21: 269–274, 2009.
31. Marco-Pallarés J, Camara E, Münte TF, Rodríguez-Fornells A. Neural mechanisms underlying adaptive actions after slips. J Cogn Neurosci 20: 1595–1610, 2008.
32. McCarthy G, Wood CC. Scalp distributions of event-related potentials: an ambiguity associated with analysis of variance models. Electroencephalogr Clin Neurophysiol 62: 203–208, 1985.
33. Miltner WH, Braun CH, Coles MG. Event-related brain potentials following incorrect feedback in a time-estimation task: evidence for a "generic" neural system for error detection. J Cogn Neurosci 9: 788–798, 1997.
34. Mosher JC, Leahy RM. Recursive MUSIC: a framework for EEG and MEG source localization. IEEE Trans Biomed Eng 45: 1342–1354, 1998.
35. Mountcastle VB, Steinmetz MA, Romo R. Frequency discrimination in the sense of flutter: psychophysical measurements correlated with postcentral events in behaving monkeys. J Neurosci 10: 3032–3044, 1990.
36. Perrin F, Pernier J, Bertrand O, Echallier JF. Spherical splines for scalp potential and current density mapping. Electroencephalogr Clin Neurophysiol 72: 184–187, 1989.
37. Preuss TM, Goldman-Rakic PS. Connections of the ventral granular frontal cortex of macaques with perisylvian premotor and somatosensory areas: anatomical evidence for somatic representation in primate frontal association cortex. J Comp Neurol 282: 293–316, 1989.
38. Reed CL, Caselli RJ, Farah MJ. Tactile agnosia. Underlying impairment and implications for normal tactile object recognition. Brain 119: 875–888, 1996.
39. Reed CL, Shoham S, Halgren E. Neural substrates of tactile object recognition: an fMRI study. Hum Brain Mapp 21: 236–246, 2004.
40. Ridderinkhof KR, Ullsperger M, Crone EA, Nieuwenhuis S. The role of the medial frontal cortex in cognitive control. Science 306: 443–447, 2004.
41. Ridley RM, Ettlinger G. Impaired tactile learning and retention after removals of the second somatic sensory projection cortex (SII) in the monkey. Brain Res 109: 656–660, 1976.
42. Rodriguez-Fornells A, Kurzbuch AR, Münte TF. Time course of error detection and correction in humans: neurophysiological evidence. J Neurosci 22: 9990–9996, 2002.
43. Roland PE, Mortensen E. Somatosensory detection of microgeometry, macrogeometry and kinesthesia in man. Brain Res 434: 1–42, 1987.
44. Roland PE, O'Sullivan B, Kawashima R. Shape and roughness activate different somatosensory areas in the human brain. Proc Natl Acad Sci USA 95: 3295–3300, 1998.
45. Romo R, Salinas E. Flutter discrimination: neural codes, perception, memory and decision making. Nat Rev Neurosci 4: 203–218, 2003.
46. Scherg M. Fundamentals of dipole source potential analysis. Auditory evoked magnetic fields and electric potentials. Adv Audiol 6: 40–69, 1990.
47. Seitz RJ, Roland PE, Bohm C, Greitz T, Stone-Elander S. Somatosensory discrimination of shape: tactile exploration and cerebral activation. Eur J Neurosci 3: 481–492, 1991.
48. Sinclair RJ, Burton H. Neuronal activity in the second somatosensory cortex of monkeys (Macaca mulatta) during active touch of gratings. J Neurophysiol 70: 331–350, 1993.
49. Stoeckel MC, Weder B, Binkofski F, Buccino G, Shah NJ, Seitz RJ. A fronto-parietal circuit for tactile object discrimination: an event-related fMRI study. Neuroimage 19: 1103–1114, 2003.
50. Talairach J, Tournoux P. Co-Planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. New York: Thieme Medical, 1988.
51. Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature 381: 520–522, 1996.
52. Yamaguchi S, Knight RT. Gating of somatosensory input by human prefrontal cortex. Brain Res 521: 281–288, 1990.
