Author manuscript; available in PMC: 2011 May 1.
Published in final edited form as: Eur J Neurosci. 2010 May;31(10):1882–1888. doi: 10.1111/j.1460-9568.2010.07204.x

ARE SURFACE PROPERTIES INTEGRATED INTO VISUO-HAPTIC OBJECT REPRESENTATIONS?

Simon Lacey 1, Jenelle Hall 1, K Sathian 1,2,3,4
PMCID: PMC3066147  NIHMSID: NIHMS278083  PMID: 20584193

Abstract

Object recognition studies have almost exclusively involved vision, focusing on shape rather than surface properties such as color. Visual object representations are thought to integrate shape and color information because changing the color of studied objects impairs their subsequent recognition. However, little is known about integration of surface properties into visuo-haptic multisensory representations. Here, participants studied objects with distinct patterns of surface properties (color in Experiment 1, texture in Experiments 2 & 3) and had to discriminate between object shapes when color/texture schemes were altered in within-modal (visual and haptic) and cross-modal (visual study/haptic test and vice versa) conditions. In Experiment 1, color changes impaired within-modal visual recognition but had no effect on cross-modal recognition, suggesting that the multisensory representation is not influenced by modality-specific surface properties. In Experiment 2, texture changes impaired recognition in all conditions, suggesting that both unisensory and multisensory representations integrate modality-independent surface properties. However, the cross-modal impairment might have reflected either the texture change or a failure to form the multisensory representation. Experiment 3 attempted to distinguish between these possibilities by combining changes in texture with changes in orientation, taking advantage of the known view-independence of the multisensory representation, but the results were not conclusive owing to the overwhelming effect of texture change. The simplest account is that the multisensory representation integrates shape and modality-independent surface properties. However, more work is required to investigate this and the conditions under which multisensory integration of structural and surface properties occurs.

Keywords: cross-modal, multisensory, texture, vision, touch

Introduction

Object recognition studies have overwhelmingly concentrated on vision, typically focusing on the structural property of shape and, where surface properties are considered, the surface property of color. These studies have shown that, although visual shape, color and texture are processed in separate visual cortical areas (Cant & Goodale, 2007; Cant, Arnott & Goodale, 2009), shape and color information are integrated in visual object representations: participants instructed to remember the shape of objects showed longer response times when the color of an object or its part-color combinations were changed or reversed between study and test; these effects could be isolated to the object representation because changing the color of the background against which objects were presented had no effect (Nicholson & Humphrey, 2003). Such integration of shape and color may assist in object representation and recognition, e.g., if color information helps segment an object into its constituent parts (Sanocki, Bowyer, Heath & Sarkar, 1998). It should be noted that, in a speeded classification task with no explicit memory load, participants instructed to attend to either structural or surface properties were able to ignore shape when making color and texture judgments and vice versa, and to ignore color when making texture judgments and vice versa (Cant, Large, McCall & Goodale, 2008). This suggests that integration may depend on the requirement to remember an object rather than merely classify it.

Compared to visual object representations, less is known about surface properties in haptic representations and nothing is known about such properties in visuo-haptic cross-modal object recognition. For haptics, structural and surface properties are perceived using different exploratory procedures that are optimal for apprehending particular object attributes (Lederman & Klatzky, 1987). Haptic integration of surface properties with shape depends on whether the appropriate exploratory procedures are motorically or regionally compatible (Lederman, Klatzky & Reed, 1993), i.e., whether the exploratory procedures for shape (contour-following or enclosure) and texture (lateral motion) can be executed simultaneously, or whether information about two properties can be extracted from the same region of an object. Motoric and regional compatibility of exploratory procedures may depend on whether the object is planar or non-planar (Lederman et al., 1993). For planar objects (2-D shapes), neither texture nor hardness is integrated with shape because the relevant exploratory procedures cannot be performed together and because the relevant information cannot be obtained from the same region of the object – texture and hardness information could only be obtained by exploring the central planar surface whilst shape information could only be obtained from the outer edges (Klatzky, Lederman & Reed, 1989). When these motoric and regional incompatibilities were removed by creating non-planar objects (edgeless 3-D ellipsoids), shape and texture information could be extracted using a single exploratory procedure from a single region: in these circumstances, shape and texture were integrated (Lederman et al., 1993). However, these studies used a classification task in which participants were given exemplars of texture and/or shape categories and subsequently sorted a series of objects into these categories. These tasks involve little or no memory load and do not require forming explicit representations of objects. Instead, they address whether properties can be integrated at the perceptual level, in the sense that two or more properties can be processed at the same time. As pointed out in the previous paragraph, requiring subjects to remember specific objects is a better test of the underlying object representation, and is the approach taken here.

From the foregoing it can be seen that vision and touch operate under different constraints in the conjoint processing of structural and surface properties. In addition, while structural shape can be processed in both vision and touch, some surface properties tend to be exclusive, or at least highly salient, to one or other modality. Color is obviously processed exclusively in vision while texture is highly salient to touch (Klatzky, Lederman & Reed, 1987; Heller, 1989) although some visual cortical areas respond to both visual and haptic texture (Stilla & Sathian, 2008). Furthermore, texture processing may be qualitatively different in vision and touch, each modality focusing on different aspects of texture information for different purposes. This likely results in different texture representations so that, despite the potential for co-localized cortical processing, visual and haptic texture might still be processed in largely independent systems (reviewed in Whitaker, Simões-Franklin & Newell, 2008).

Visuo-haptic cross-modal object recognition is hypothesized to rely on a modality-independent representation of shape (Lacey, Campbell & Sathian, 2007a). Previous studies from our laboratory provided empirical support for this view, and showed that the underlying modality-independent representation is also view-independent (Lacey, Peters & Sathian, 2007b; Lacey, Pappas, Kreps, Lee & Sathian, 2009). The potential differences between vision and touch in how surface properties are represented might lead one to suppose that cross-modal object recognition would be better served by a shape representation that abstracted away from unisensory (color) or bisensory (texture) perceptions of surface properties. This would be consistent with the idea of a relatively high-level shape representation that is independent of modality and viewpoint. However, if color information helps segment an object into its constituent parts (Sanocki et al., 1998), this might enhance shape encoding and thus influence cross-modal recognition. Here, our main aim was to clarify the extent to which integration of structural and surface properties occurs in the multisensory object representation underlying cross-modal recognition. In contrast to the classification tasks used in previous studies that only required participants to allocate objects to a category with no memory load, we used a congruent/incongruent study/test shape discrimination task that required participants to rely on object representations. Following Nicholson & Humphrey (2003), if changing the surface properties of an object between study and test impaired shape discrimination, we could infer that the representation included information about that property as well as shape.

Experiment 1

Participants

16 people took part in this experiment (8 male, 8 female; mean age 22 years, 1 month) and were remunerated for their time. All gave informed written consent and all procedures were approved by the Emory University Institutional Review Board.

Materials

We used a total of 48 objects, each made from six smooth wooden blocks measuring 1.6×3.6×2.2 cm: the resulting objects were 9.5 cm high with the other dimensions free to vary according to the arrangement of the component blocks (Figure 1). Each of the component blocks was painted red, green, blue or yellow but no two adjacent blocks had the same color. Each object had a small (<1 mm) grey pencil dot on the bottom facet that cued the experimenter to present the object in the same orientation consistently. Debriefing showed that participants were never aware of these small dots. The 48 objects were divided into 4 sets of 12, one for each modality condition. Each set was then divided into 6 pairs (Object 1 and Object 2) and a set of copies was made for each pair. These copy pairs were then painted so that the copy of Object 1 had the color scheme of the original Object 2 and vice versa (Figure 1). As in previous studies (Lacey et al., 2007b, 2009), we used difference matrices based on the number of differences in the position and orientation of component blocks to calculate the mean difference in object shape within the 4 sets. Paired t-tests showed no significant differences between sets (all p values > .05) and the sets were therefore considered equally discriminable.
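To make the discriminability check concrete, the following is a minimal sketch in Python (not the authors' code) of how pairwise shape-difference scores and the between-set comparisons could be computed; the object encoding and the helper functions are hypothetical.

```python
# Hypothetical sketch of the set-discriminability check. Each object is encoded
# as a tuple of (position, orientation) entries for its six component blocks;
# a difference score counts the blocks that differ between two objects, and
# paired t-tests compare the within-set difference distributions of the 4 sets.
from itertools import combinations
from scipy.stats import ttest_rel

def difference_score(obj_a, obj_b):
    """Number of component blocks differing in position or orientation (hypothetical encoding)."""
    return sum(a != b for a, b in zip(obj_a, obj_b))

def within_set_differences(object_set):
    """All pairwise difference scores within one 12-object set."""
    return [difference_score(a, b) for a, b in combinations(object_set, 2)]

def compare_sets(sets):
    """Paired t-tests between within-set difference scores for each pair of the four sets."""
    diffs = [within_set_differences(s) for s in sets]
    results = {}
    for i, j in combinations(range(len(diffs)), 2):
        results[(i, j)] = ttest_rel(diffs[i], diffs[j])
    return results  # all p > .05 would indicate the sets are equally discriminable
```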

Figure 1. Example objects used in Experiment 1: on the left, Objects 1 and 2 are shown in their original color schemes; on the right, Object 1 now has the color scheme of Object 2 and Object 2 now has the color scheme of Object 1.

Procedure

Participants performed six trials of a 2-alternative-forced-choice shape discrimination task in each of four modality conditions, visual and haptic within-modal discrimination and visual-haptic and haptic-visual cross-modal discrimination (i.e. visual study followed by haptic test and vice versa). The four modality conditions were counterbalanced across participants. In each trial, participants sequentially studied two unfamiliar objects (Object 1 and Object 2) with different color schemes and, as in Nicholson & Humphrey (2003), were instructed to remember the shape of each. These instructions drew attention away from the main experimental manipulation of color. They were then sequentially presented, in random order, with the original objects and with two new objects. As described above, these new objects were the same shape as the originals but the color schemes were exchanged, i.e. one had the same shape as original Object 1 but the color scheme of original Object 2 and vice versa. The task was to decide whether each of the four objects was the same shape as Object 1 or Object 2. In each modality condition there were therefore 12 same shape/same color and 12 same shape/color change discriminations. Objects were presented for 30 seconds for haptic study and 15 seconds for visual study; the 2:1 haptic:visual exploration time ratio is consistent with prior studies (Newell, Ernst, Tjan & Bülthoff, 2001; Lacey & Campbell, 2006; Lacey et al., 2007b, 2009; and see Freides, 1974). Response times during the test phase were unrestricted.
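As an illustration of the design arithmetic (six trials per condition, each yielding four test presentations, i.e. 12 same-color and 12 changed-color discriminations per condition), here is a minimal sketch with hypothetical object labels; it is not the software actually used to run the experiment.

```python
# Hypothetical sketch of one modality condition in Experiment 1: six trials, each
# with two study objects and four test objects (the two originals plus two copies
# with exchanged color schemes), presented in random order at test.
import random

def build_condition(pairs):
    """pairs: six (object1, object2) labels for one modality condition."""
    trials = []
    for obj1, obj2 in pairs:
        test_items = [
            {"shape": obj1, "colors": obj1},  # original Object 1
            {"shape": obj2, "colors": obj2},  # original Object 2
            {"shape": obj1, "colors": obj2},  # copy: shape of Object 1, colors of Object 2
            {"shape": obj2, "colors": obj1},  # copy: shape of Object 2, colors of Object 1
        ]
        random.shuffle(test_items)
        trials.append({"study": [obj1, obj2], "test": test_items})
    return trials

# 6 trials x 4 test objects = 24 discriminations per condition:
# 12 same shape/same color and 12 same shape/color change.
condition = build_condition([(f"O{i}a", f"O{i}b") for i in range(1, 7)])
```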

Participants sat facing the experimenter at a table on which objects were placed for both visual and haptic exploration. The table was 86 cm high so that, for visual exploration, the viewing distance was 30–40 cm and the viewing angle as the participants looked down at the objects was approximately 35–45° from the vertical. For visual presentations, the objects were placed on the table oriented along their vertical axes as in Figure 1. Participants were free to move the head and eyes when looking at the objects but were not allowed to get up and move around them. For haptic exploration, participants felt the objects behind an opaque cloth screen. Each object was placed into the participant’s hands, oriented along its vertical axis as in Figure 1: participants were free to move their hands over the object but were not allowed to rotate, manipulate, or otherwise move it out of its original orientation.

Results

Figure 2 shows that color changes impaired within-modal visual shape discrimination, but not the cross-modal conditions or, unsurprisingly, within-modal haptic shape discrimination. Two-way repeated-measures analysis of variance (RM-ANOVA) (modality, color) showed no main effect of either modality (F3,45 = 2.1, p = .1) or color (F1,15 = 1.7, p = .2), but there was a significant modality × color interaction (F3,45 = 3.49, p = .02). In this interaction, within-modal visual shape discrimination was reduced when the color scheme changed (t15 = 1.97, p = .03, one-tailed), but neither the cross-modal conditions nor within-modal haptic shape discrimination was affected. (We used one-tailed t-tests because, following Nicholson & Humphrey (2003), we had a directional hypothesis that shape discrimination would be impaired, not improved, by manipulating surface properties; see Note 1.)
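For readers who wish to reproduce this style of analysis, the following is a minimal sketch in Python (not the authors' code) of the two-way RM-ANOVA and the follow-up one-tailed paired t-test; the packages used (pandas, pingouin, scipy) and the long-format column names (subject, modality, color, acc) are assumptions.

```python
# Hypothetical analysis sketch for Experiment 1: a 4 (modality) x 2 (color)
# repeated-measures ANOVA on accuracy, followed by a one-tailed paired t-test
# comparing original vs. changed color schemes within the visual condition.
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_rel

# df: one row per subject x modality x color cell, with mean accuracy in 'acc'
df = pd.read_csv("exp1_accuracy_long.csv")  # assumed file layout

anova = pg.rm_anova(data=df, dv="acc", within=["modality", "color"],
                    subject="subject", detailed=True)
print(anova)  # main effects of modality and color, and their interaction

vis = df[df["modality"] == "visual_within"]
same = vis[vis["color"] == "original"].sort_values("subject")["acc"]
changed = vis[vis["color"] == "changed"].sort_values("subject")["acc"]
# One-tailed test of the directional hypothesis that a color change impairs accuracy
t, p = ttest_rel(same, changed, alternative="greater")
print(t, p)
```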

Figure 2. Mean recognition accuracy in each modality condition for objects with original and changed color schemes. Error bars = SEM.

Discussion

Changing the color scheme of an object between study and test impaired within-modal visual object recognition, replicating Nicholson & Humphrey (2003) and indicating that color and shape are integrated in visual representations. The cross-modal conditions were unaffected by color changes, suggesting that color information does not influence cross-modal recognition by segmenting the object into its constituent parts and thereby enhancing the shape representation. A more direct test of this would be to compare multi-colored objects with objects of a uniform color. Hampstead et al. (in press) found no significant difference in visual recognition performance between objects in which each block was a different color and objects in which every block was the same uniform gray. Together with the present results, this suggests that, while surface property information may not provide any particular benefit, recognition is nonetheless sensitive to changes in such information. Unsurprisingly, there was no effect of color changes in the within-modal haptic condition, which serves to show that there was no particular difficulty in encoding the shapes themselves that could have affected the cross-modal conditions. However, this experiment shows only that the multisensory object representation is uninfluenced by surface properties specific to the visual modality. In the next experiment, we addressed the potential role of haptic surface properties.

Experiment 2

Participants

16 people took part in this experiment (8 male, 8 female; mean age 22 years, 2 months) and were remunerated for their time. All gave informed written consent and all procedures were approved by the Emory University Institutional Review Board.

Materials and procedure

The materials, procedure, and instructions to participants were identical to those of Experiment 1 except that the objects now had four-part texture schemes: each component block was covered with sandpaper (20 grit), standard Braille paper bearing Braille characters, or velvet cloth, or was left untreated, i.e. retained its original smooth surface; no two adjacent blocks had the same texture. All these textures were rendered in matt black. Figure 3a shows a schematic representation of object shapes and texture changes. Additionally, in an attempt to restrict texture perception to the haptic modality, participants wore Solarettes® post-mydriatic dark glasses (Solar Shield, San Luis Obispo, California). These allowed participants to visually inspect the shape of each object but were intended to attenuate their visual perception of surface texture.

Figure 3. Schematic representation of objects used in Experiments 2 and 3. Panel A: on the left, Objects 1 and 2 are shown in their original texture schemes; on the right, Object 1 now has the texture scheme of Object 2 and Object 2 now has the texture scheme of Object 1. Panel B: the horizontal axis along which objects were presented in Experiment 3.

Results

Figure 4 shows that texture changes impaired shape discrimination in all conditions. Two-way RM-ANOVA (modality, texture) showed main effects of both modality (F3,45 = 4.51, p = .008) and texture (F1,15 = 14.95, p = .002), but there was no interaction (F3,45 = 1.45, p = .2). Shape discrimination was significantly impaired in all conditions when the texture scheme changed (within-modal vision t15 = 2.26, p = .02; within-modal haptic t15 = 3.51, p = .001; cross-modal visual-haptic t15 = 1.96, p = .03; cross-modal haptic-visual t15 = 2.72, p = .008 – all one-tailed) (Figure 4). Post-hoc tests (Bonferroni-corrected) showed that within-modal visual performance was significantly better than haptic-visual cross-modal performance (p = .03), but there were no other differences between the four modality conditions.

Figure 4. Mean recognition accuracy in each modality condition for objects with original and changed texture schemes. Error bars = SEM.

Both visual and haptic within- and cross-modal shape discrimination were impaired by changes in the texture scheme. Clearly, the dark glasses did not prevent visual texture perception, and so we treat texture as a modality-independent property. On this basis, we conducted a three-way ANOVA (learning [visual, haptic], test [within-, cross-modal], texture [unchanged, changed]). This enabled us to contrast within- and cross-modal discrimination (note that the same analysis would not have been meaningful in Experiment 1, where it would have been impossible for the color manipulation to affect within-modal haptic discrimination). This ANOVA showed that shape discrimination was better when objects were learned visually than when they were learned haptically (F1,15 = 9.12, p = .009); that, at test, within-modal discrimination was better than cross-modal discrimination (F1,15 = 6.28, p = .02); and that discrimination was worse if the texture changed than if it did not (F1,15 = 14.95, p = .002). There was a significant learning × texture interaction (F1,15 = 5.68, p = .03): visual and haptic learning resulted in equivalent discrimination performance when the texture did not change, but the decrement in performance when the texture changed was greater for haptically learned objects than for visually learned objects. The test × texture interaction was not significant (F1,15 = .7, p = .42), indicating that texture changes affected within- and cross-modal discrimination equally.
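The recoding underlying this three-way ANOVA can be sketched as follows; this is a hypothetical illustration, assuming accuracy data in long format with columns subject, modality, texture, and acc, and using statsmodels' AnovaRM (which accepts three within-subject factors).

```python
# Hypothetical sketch of the three-way RM-ANOVA in Experiment 2: the four
# modality conditions are recoded as a learning (study modality) factor crossed
# with a within-/cross-modal test factor, then analysed with texture as a third
# within-subject factor.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

recode = {
    "visual_within": ("visual", "within"),
    "haptic_within": ("haptic", "within"),
    "visual_haptic": ("visual", "cross"),   # visual study, haptic test
    "haptic_visual": ("haptic", "cross"),   # haptic study, visual test
}

df = pd.read_csv("exp2_accuracy_long.csv")  # assumed columns: subject, modality, texture, acc
df[["learning", "test"]] = df["modality"].map(recode).apply(pd.Series)

res = AnovaRM(df, depvar="acc", subject="subject",
              within=["learning", "test", "texture"]).fit()
print(res)  # main effects of learning, test, texture and their interactions
```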

Discussion

These results extend the work of Nicholson & Humphrey (2003) by showing that visual object representations contain information about less visually salient properties like texture as well as specifically visual properties like color. In addition, we show that haptic representations include information about surface properties. Earlier it was found that structural and surface information could be integrated haptically at the perceptual level due to conjoint processing (Lederman et al., 1993) but there was no requirement for explicit object representations to be formed in that study. Here, we extend these earlier findings by showing that integration of haptic surface properties also occurs at the representational level. In this respect, it is interesting to compare the objects used here with planar objects for which no perceptual integration was observed due to motoric and regional incompatibilities (Klatzky et al., 1989) and edgeless non-planar objects for which perceptual integration did occur because these incompatibilities were absent (Lederman et al., 1993). Since each of the components of the current objects was the same shape, global shape had to be computed from the configuration of the component blocks. Where two or more blocks are aligned along a particular facet, a planar surface is created that may be as informative about configuration, and therefore shape, as edge information. Since no two adjacent blocks had the same texture, the texture sequence across such a planar surface might have provided some limited information about configuration. The planar surface and the texture sequence across it could be perceived with a single exploratory procedure (lateral motion) that is both motorically and regionally compatible. Other information about global shape could be obtained by contour-following for edges and enclosure for configuration which are motorically and/or regionally incompatible with each other and with lateral motion.

Texture changes also impaired shape discrimination in the cross-modal conditions. This suggests that the modality-independent multisensory representation encodes surface properties if these are also modality-independent. However, it is possible that the cross-modal impairment arose because the multisensory representation was not properly formed to start with (see below), rather than because the texture schemes changed. To attempt to distinguish between these possibilities, we conducted a further experiment in which we tested texture changes against orientation changes, view-independence being a feature of cross-modal object recognition that is not shared with within-modal recognition (Lacey et al., 2007b).

Cross-modal view-independence arises from direct integration of unisensory, view-dependent representations into a multisensory, view-independent representation (Lacey et al., 2009). Without such successful integration, the multisensory representation would not be formed and cross-modal view-independence would be disrupted. The integration of the unisensory representations into a multisensory representation might break down in the face of a texture change between, say, visual encoding and haptic recognition, because although the shape information is the same in each unisensory representation, the texture information is not. In this case, we would expect an additive effect in that cross-modal recognition would be impaired by texture changes and further impaired by the combination of texture and orientation changes. Another possibility is that there may be differences between visual and haptic perception of the same texture (Whitaker et al., 2008) and that such differences prevent integration of unisensory representations containing texture information. In this case, however, we would expect that performance would be disrupted even if the texture did not change. We assume, therefore, that if the texture information in the unisensory representations is integrated into the multisensory representation, then any such differences between visual and haptic texture perception have been resolved. Thus, we should observe a simple effect in which cross-modal recognition is impaired by texture changes, because the perceived texture information conflicts with that in the representation, but remains view-independent because the perceived shape information can be matched with that in the representation across changes in orientation.

Experiment 3

Participants

24 people took part in this experiment (12 male, 12 female; mean age 21 years, 11 months) and were remunerated for their time. All gave informed written consent and all procedures were approved by the Emory University Institutional Review Board.

Materials and procedure

The materials, procedure, and instructions to participants were identical to those of Experiment 2 with the exceptions that objects were placed into the participants’ hands so that the main axis was horizontal (Figure 3b) and that, during the test phase, each object was presented twice: once in the same orientation as during the study phase and once rotated 180° about its y-axis, i.e. in depth. Additionally, we dispensed with the dark glasses during visual presentations. In this experiment, there were no within-modal conditions because it was already established that within-modal representations encode surface properties (Nicholson & Humphrey, 2003; present study, Experiment 2) and are view-dependent (Newell et al., 2001; Lacey et al., 2007b). The two cross-modal conditions were counterbalanced across participants.

Results

Figure 5 shows that cross-modal shape discrimination was unimpaired by a change in orientation when there was no change in texture. However, discrimination was reduced to chance levels when texture changed, whether orientation changed or not. Three-way RM-ANOVA (modality, texture, orientation) showed no main effect of either modality (F1,23 = .001, p = .9) or orientation (F1,23 = .3, p = .6), but there was a main effect of texture (F1,23 = 22.4, p < .001). There were no significant interactions.
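A corresponding sketch for Experiment 3 is given below (again hypothetical, with assumed column names); the one-sample t-test against 0.5 is one way of checking the claim that accuracy in the texture-changed cells did not differ from the 2AFC chance level.

```python
# Hypothetical sketch for Experiment 3: three-way RM-ANOVA on accuracy with
# modality, texture and orientation as within-subject factors, plus a check of
# the texture-changed cells against the 2AFC chance level of 0.5.
import pandas as pd
from scipy.stats import ttest_1samp
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("exp3_accuracy_long.csv")  # assumed: subject, modality, texture, orientation, acc

res = AnovaRM(df, depvar="acc", subject="subject",
              within=["modality", "texture", "orientation"]).fit()
print(res)

changed = df[df["texture"] == "changed"].groupby("subject")["acc"].mean()
print(ttest_1samp(changed, 0.5))  # a non-significant result is consistent with chance-level accuracy
```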

Figure 5. Mean recognition accuracy in each modality condition for rotated and unrotated objects with original and changed texture schemes. Error bars = SEM.

Discussion

When the texture scheme of the object was unchanged, cross-modal shape discrimination was view-independent, replicating Lacey et al. (2007b). Thus we can be sure that in this experiment participants accessed the modality-independent, view-independent representation hypothesized to support cross-modal object recognition (Lacey et al., 2007a,b, 2009). However, when the texture scheme of the object was changed, shape discrimination was reduced to chance, whether the orientation of the object changed or not. This was a very strong effect: we were unable to achieve performance above chance with various other paradigms that were tested. Consequently, we could not determine whether there was a simple effect of texture alone (indicating integration of texture and shape in the multisensory representation) or whether the effects of texture and orientation change were additive (indicating a failure in constructing the multisensory representation). It is worth noting, however, that when the texture scheme remained the same, shape discrimination was not impaired by perceiving the same textures in a different sequence owing to a change in orientation, suggesting that texture information may be integrated into the multisensory representation. If construction of the multisensory representation was a problem, the high performance levels when there was no texture change rule out differences between visual and haptic texture perception as an explanation. On balance, then, we conclude that the multisensory representation probably includes information about modality-independent surface properties.

General discussion

Cross-modal object recognition is hypothesized to rely on a modality-independent representation that is separate from unisensory visual and haptic representations (Lacey et al., 2007a). Since the structural property of shape can be perceived by both vision and touch but surface properties, such as color or weight, may be specific to one sense or the other, it seems reasonable to suppose that the modality-independent representation might encode only shape information, abstracting away from surface properties. Here, we investigated whether this was so by altering the surface properties of objects between study and test: impaired performance following such changes would suggest that the surface property is also encoded in the representation (Nicholson & Humphrey, 2003). In Experiment 1, we showed that cross-modal shape discrimination was unaffected by changing the color scheme of an object. This suggests that the representation underlying cross-modal recognition does not include color information and that it abstracts away from such modality-specific surface properties. However, Experiments 2 and 3 showed that changing the texture scheme of an object overwhelmingly impaired cross-modal shape discrimination, indicating that the relevant multisensory representation includes information about modality-independent surface properties. Most likely, this reflects integration of texture information into the unisensory representations from which the multisensory representation is derived. The experimental task was to discriminate global shape – color and texture were relatively uninformative about this since each color and texture was confined to a single component block (and each of these blocks was the same shape and size). Global shape had to be computed from the configuration of these components and it was unclear, a priori, how useful surface property information was in this regard – a possible strategy may have been to use this information to reinforce configuration (for example, one could note that the red block points left or that the smooth block is on top of the rough block). Nonetheless, the decrement in cross-modal performance when texture schemes changed shows that surface properties were indeed encoded at some level.

These findings raise several questions about the limits of multisensory integration of structural and surface properties. In the present study, the component parts of each object were identical, while in the study of Nicholson & Humphrey (2003) the components were more diverse and distinctive and thus potentially more informative about global shape. However, both studies show that surface property information was included in the object representation, perhaps because, in both studies, changes in the surface properties of color and texture were congruent with the boundaries of the structural components and thus directly related to the structure of the object. For example, as shown in Figure 1, the colors were restricted to separate component blocks; they did not vary within a block nor spill over into neighboring blocks. One question is whether integration depends on changes in surface properties being congruent with the boundaries of structural components as opposed to changing irregularly in ways that are unrelated to the structure of the object. The colors and textures used here were chosen to be very different from one another; thus, a further condition for integration may be that the variation in surface properties has to reach a threshold. If the color schemes were merely different shades of the same color, or the texture schemes simply different grades of sandpaper, surface property information might be less helpful in differentiating component parts in order to compute global shape and thus might be filtered out in forming the object representation. A related question is how potentially different visual and haptic estimates of the same texture (Whitaker et al., 2008) are resolved in the multisensory representation. Although the visual and haptic estimates of a particular texture may differ, the perceived difference between that texture and another may be the same in vision and touch. Thus discrepancies in visual and haptic texture perception might be resolved by recording only the relative differences between textures rather than the textures themselves. This could be investigated by manipulating the weighting given to any one texture by vision and touch. These issues potentially have practical implications, for example, in the design of equipment for use in low-visibility environments or in esthetics (see Jansson-Boyd & Marlow, 2007). Further studies are also required to examine the circumstances under which surface property information provides a benefit to object recognition. At the moment, it seems that object recognition is sensitive to surface property information but receives no particular benefit from it (unless the property is diagnostic, as discussed below). However, work with Alzheimer’s patients has shown that color information can improve visual recognition, at least in the short-term (Cernin, Keller & Stoner, 2003), suggesting that benefits may accrue in development and aging. Whether information about haptic surface properties has the same effect is a topic for future research.

In this study we used objects constructed from identical component blocks, each object differing from the next only in the spatial arrangement of those blocks. The rationale for this, here and elsewhere (Lacey et al., 2007b; Lacey et al., 2009), is that these complex objects are difficult to name and highly similar to one another, and that participants have no prior visual or haptic knowledge of them; thus, they allow us to ‘test the system at its limits’. However, encoding the component parts of an object and encoding the spatial relations between them are dissociable mechanisms (Behrmann, Peterson, Moscovitch & Suzuki, 2006), raising the question of whether simpler objects that cannot be decomposed into component parts would also be sensitive to changes in surface properties. One potential explanation for the dramatic drop in performance in Experiment 3 is that, for these complex objects, a change in surface properties coupled with a change in orientation overloads the system and, in effect, tests it to destruction. Further work will be necessary to investigate the effect of structural complexity on encoding of surface properties (see Phillips, Egan & Perry, 2009, for a recent study of the effects of structural complexity alone on cross-modal performance). A related issue is that perception of these unfamiliar objects might be mediated by different mechanisms or strategies than perception of familiar objects. For some familiar objects, surface properties are diagnostic, e.g., the color of a banana or the texture of a golf ball (Lederman & Klatzky, 1990; Tanaka & Presnell, 1999; Therriault, Yaxley & Zwaan, 2009). Classification and naming of color-diagnostic objects is slower when these are presented in non-diagnostic colors (Tanaka & Presnell, 1999; Therriault et al., 2009), but the effect has not been investigated in haptic or cross-modal tasks.

The cortical localization of visuo-haptic multisensory integration is an important question for further research. This can be seen as a special instance of the binding problem: not only must object properties be bound together to form a coherent unitary percept but information derived from different source modalities must also be combined. One issue is the need to distinguish between integration of information about the same property from different source modalities and binding of different properties of the same object. The lateral occipital complex (LOC) is a well-known visuo-haptic convergence site for both 3-D (Amedi et al., 2001, 2002; Zhang et al., 2004; Stilla & Sathian, 2008) and 2-D (Stoesz et al., 2003; Prather et al., 2004) shape, while visual and haptic texture responses overlap in the right medial occipital cortex at the V1/V2 boundary (Stilla & Sathian, 2008). Thus, multisensory integration of different properties involves cortically distinct regions; but we are not aware of any imaging study that has crossed structural and surface properties in the visuo-haptic domain, so it is unclear how multisensory integration and property binding relate to each other in cortical terms. That said, the left LOC is active during imagery of both structural and surface properties of objects (Newman, Klatzky, Lederman & Just, 2005) so that the LOC may be involved in binding both across modalities and across properties.

We conclude that the multisensory view-independent object representation underlying visuo-haptic object recognition integrates both structural and surface properties. The limits and conditions under which this is so and the neural basis for this remain to be examined.

Acknowledgments

This study was supported by grants to KS from the National Eye Institute, National Science Foundation and the Veterans Administration. JH was supported by the Summer Undergraduate Research Program at Emory University. We thank two anonymous reviewers for their helpful comments on earlier versions of this paper.

Footnotes

1

In this paper, we have employed one-tailed tests of significance since we have a directional hypothesis that changing the surface properties of an object will impair, and not improve, its recognition. Our position on one-tailed testing is that it is not sufficient to specify the directional hypothesis in advance of data collection (though this is, of course, necessary). Following Abelson (1995), two further conditions must be satisfied: (i) there must be a strong rationale for adopting a directional hypothesis such that (ii) a result in the other direction, or ‘wrong’ tail, would be uninterpretable to the extent that it could be disregarded. In the present case, firstly, it has already been shown that a change in the surface properties of an object impairs its subsequent recognition (Nicholson & Humphrey, 2003) and it is therefore reasonable to expect that this result is both replicable (Experiment 1 for color and vision) and generalizable to other surface properties and modalities (Experiment 2 for texture and both vision and touch). There can be few better reasons for adopting a directional hypothesis than previous demonstrations of a directional effect. Secondly, if surface properties really are bound to shape, it is difficult to find a convincing explanation why a change in those properties would significantly improve rather than impair recognition. Hence, such a finding in the ‘wrong’ tail would not be meaningful.

References

1. Abelson RP. Statistics As Principled Argument. Psychology Press; Hove, UK: 1995.
2. Amedi A, Jacobson G, Hendler T, Malach R, Zohary E. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cereb Cortex. 2002;12:1202–1212. doi: 10.1093/cercor/12.11.1202.
3. Amedi A, Malach R, Hendler T, Peled S, Zohary E. Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci. 2001;4:324–330. doi: 10.1038/85201.
4. Behrmann M, Peterson MA, Moscovitch M, Suzuki S. Independent representation of parts and the relations between them: evidence from integrative agnosia. J Exp Psychol Human. 2006;32:1169–1184. doi: 10.1037/0096-1523.32.5.1169.
5. Cant JS, Arnott SR, Goodale MA. fMR-adaptation reveals separate processing regions for the perception of form and texture in the human ventral stream. Exp Brain Res. 2009;192:391–405. doi: 10.1007/s00221-008-1573-8.
6. Cant JS, Goodale MA. Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cereb Cortex. 2007;17:713–731. doi: 10.1093/cercor/bhk022.
7. Cant JS, Large ME, McCall L, Goodale MA. Independent processing of form, colour, and texture in object perception. Perception. 2008;37:57–78. doi: 10.1068/p5727.
8. Cernin PA, Keller BK, Stoner JA. Color vision in Alzheimer’s patients: can we improve object recognition with color cues? Aging Neuropsychol C. 2003;10:255–267.
9. Freides D. Human information processing and sensory modality: cross-modal functions, information complexity, memory and deficit. Psychol Bull. 1974;81:284–310. doi: 10.1037/h0036331.
10. Hampstead BM, Lacey S, Ali S, Phillips PA, Stringer AY, Sathian K. Use of complex three dimensional objects to assess visuospatial memory in healthy individuals and patients with unilateral amygdalohippocampectomy. Epilepsy Behav. in press. doi: 10.1016/j.yebeh.2010.02.021.
11. Heller MA. Texture perception in sighted and blind observers. Percept Psychophys. 1989;45:49–54. doi: 10.3758/bf03208032.
12. Jansson-Boyd C, Marlow N. Not only in the eye of the beholder: Tactile information can affect aesthetic evaluation. Psychology of Aesthetics, Creativity and the Arts. 2007;1:170–173.
13. Klatzky RL, Lederman S, Reed C. There’s more to touch than meets the eye: The salience of object attributes for haptics with and without vision. J Exp Psychol Gen. 1987;116:356–369.
14. Klatzky RL, Lederman S, Reed C. Haptic integration of object properties: Texture, hardness, and planar contour. J Exp Psychol Human. 1989;15:45–57. doi: 10.1037//0096-1523.15.1.45.
15. Lacey S, Campbell C. Mental representation in visual/haptic crossmodal memory: evidence from interference effects. Q J Exp Psychol. 2006;59:361–376. doi: 10.1080/17470210500173232.
16. Lacey S, Campbell C, Sathian K. Vision and touch: multiple or multisensory representations of objects? Perception. 2007a;36:1513–1521. doi: 10.1068/p5850.
17. Lacey S, Peters A, Sathian K. Cross-modal object recognition is viewpoint-independent. PLoS ONE. 2007b;2:e890. doi: 10.1371/journal.pone.0000890.
18. Lacey S, Pappas M, Kreps A, Lee K, Sathian K. Perceptual learning of view-independence in visuo-haptic object representations. Exp Brain Res. 2009;198:329–337. doi: 10.1007/s00221-009-1856-8.
19. Lederman SJ, Klatzky RL. Hand movements: A window into haptic object recognition. Cognitive Psychol. 1987;19:342–368. doi: 10.1016/0010-0285(87)90008-9.
20. Lederman SJ, Klatzky RL. Haptic classification of common objects: knowledge-driven exploration. Cognitive Psychol. 1990;22:421–459. doi: 10.1016/0010-0285(90)90009-s.
21. Lederman SJ, Klatzky RL, Reed CL. Constraints on haptic integration of spatially shared object dimensions. Perception. 1993;22:723–743. doi: 10.1068/p220723.
22. Newell FN, Ernst MO, Tjan BS, Bülthoff HH. View dependence in visual and haptic object recognition. Psychol Sci. 2001;12:37–42. doi: 10.1111/1467-9280.00307.
23. Newman SD, Klatzky RL, Lederman SJ, Just MA. Imagining material versus geometric properties of objects: an fMRI study. Cognit Brain Res. 2005;23:235–246. doi: 10.1016/j.cogbrainres.2004.10.020.
24. Nicholson KG, Humphrey GK. The effect of colour congruency on shape discriminations of novel objects. Perception. 2003;32:339–353. doi: 10.1068/p5136.
25. Phillips F, Egan EJL, Perry BN. Perceptual equivalence between vision and touch is complexity dependent. Acta Psychologica. 2009;132:259–266. doi: 10.1016/j.actpsy.2009.07.010.
26. Prather SC, Votaw JR, Sathian K. Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia. 2004;42:1079–1087. doi: 10.1016/j.neuropsychologia.2003.12.013.
27. Sanocki T, Bowyer KW, Heath MD, Sarkar S. Are edges sufficient for object recognition? J Exp Psychol Human. 1998;24:340–349.
28. Stilla R, Sathian K. Selective visuo-haptic processing of shape and texture. Hum Brain Mapp. 2008;29:1123–1138. doi: 10.1002/hbm.20456.
29. Stoesz MR, Zhang M, Weisser VD, Prather SC, Mao H, Sathian K. Neural networks active during tactile form perception: common and differential activity during macrospatial and microspatial tasks. Int J Psychophysiol. 2003;50:41–49. doi: 10.1016/s0167-8760(03)00123-5.
30. Tanaka JW, Presnell LM. Color diagnosticity in object recognition. Percept Psychophys. 1999;61:1140–1153. doi: 10.3758/bf03207619.
31. Therriault DJ, Yaxley RH, Zwaan RA. The role of color diagnosticity in object recognition and representation. Cogn Process. 2009;10:335–342. doi: 10.1007/s10339-009-0260-4.
32. Whitaker TA, Simões-Franklin C, Newell FN. Vision and touch: Independent or integrated systems for the perception of texture? Brain Res. 2008;1242:59–72. doi: 10.1016/j.brainres.2008.05.037.
33. Zhang M, Weisser VD, Stilla R, Prather SC, Sathian K. Multisensory cortical processing of object shape and its relation to mental imagery. Cogn Affect Behav Ne. 2004;4:251–259. doi: 10.3758/cabn.4.2.251.
