Abstract
Object affordance refers to the possibilities for interaction that objects in our environment offer, such as grasping. Previous research shows that objects affording an action activate the motor system and attract attention; for example, they elicit an enhanced frontal negativity and posterior P1 in the event-related potential. Whether the posterior N1 is also modulated is still under debate. However, previous findings might have resulted from physical differences between affording and non-affording stimuli rather than from affordance per se. Here we replicated the frontal negativity and posterior P1 effects and further explored the posterior N1 in affordance processing under constant visual input. An ambiguous target was primed with either an affording (pencils) or a non-affording (trees) context. Although physically always identical, the target elicited an enhanced frontal negativity and posterior P1 in the pencil prime condition. The posterior N1 was reduced, and grip aperture in a grasping task was smaller, in the affording context. Source localization revealed stronger activation in occipital and parietal regions for targets in pencil versus tree prime trials. Thus, we show that an ambiguous object primed with an affording context is processed differently than the same object primed with a non-affording context. This could be related to the ambiguous object acquiring a potential for action through priming.
Keywords: Object affordance, Priming, Event-related potentials, Grasping, Grip aperture, Source localization
1. Introduction
Many objects in our environment come with certain functions. For example, we can grasp a pen and write with it, or we can grasp our keys to lock or unlock a door. Thus, grasping is a frequent activity in our daily lives. Gibson (1979) coined the term "affordance" to describe the possibilities to interact with our environment. Grasp affordance, in turn, is a specific type of affordance that refers to graspable objects. According to Gibson (1979), an object is graspable if its size allows taking it with a single hand, or if it is furnished with a handle. Furthermore, grasp affordance is directly perceived when viewing a graspable object (Gibson, 1979; see also Thomas and Riley, 2014). The present study set out to investigate whether an object ambiguous in size and function may obtain grasp affordance depending on the previous context established by priming.
Neuroscientific evidence shows that affording objects, compared to non-affording ones, activate motor areas in the brain. For example, in a study in which tool processing was compared with the processing of non-graspable stimuli, the left premotor and posterior parietal cortices were significantly activated by tools (Chao and Martin, 2000). Other neuroimaging studies have yielded comparable results (Grafton et al., 1997; Grèzes and Decety, 2002). Another study measured motor-evoked potentials (MEPs) at the right hand, triggered by pulses delivered to the left primary motor cortex using transcranial magnetic stimulation (Cardellicchio et al., 2011). MEPs were higher in the presence of a graspable object (a cup) than of a non-graspable one (a large cube), indicating that the presentation of a graspable stimulus increases motor cortex excitability. Remarkably, in these studies, differences between affording and non-affording stimuli emerged despite the stimuli being pictures on a screen rather than real objects, and in the absence of any task associated with these stimuli. Thus, merely viewing pictures of affording objects may trigger motor activations in the brain, even though there is no possibility or intention to interact with them.
The brain quickly detects object affordance. In an EEG study the presentation of tool pictures—as compared to pictures of non-tools—elicited an enhanced left-frontal negativity in the event-related potential (ERP), which started about 210 ms after stimulus onset (Proverbio et al., 2011). Source localization revealed that this negativity could be traced back to the bilateral premotor cortex and left postcentral gyrus, supporting the idea that in the presence of affording objects, the brain rapidly prepares for action. Rowe et al. (2017) recently replicated this frontal negativity for graspable objects as opposed to an empty desk and showed that it was especially pronounced when the dominant (right) hand was positioned close to the object. In sum, converging evidence from neuroimaging and electrophysiology indicates that merely viewing affording objects primes the motor system for action.
One problem of the aforementioned studies is that physical differences between affording and non-affording objects could account for the reported differences in brain activation. Many affording objects, especially those that afford grasping actions, have a handle or other parts that non-affording objects do not have (Handy et al., 2003). Therefore, in the present study, we wished to replicate this frontal ERP negativity using the same object in all trials, which acquired (or not) its grasp affordance through conceptual priming. Research shows that context influences object processing. In a classical study, Palmer (1975) showed that the context previously established by a visual scene (e.g., a kitchen) improved the identification of a briefly presented congruent target object (e.g., a loaf of bread), while identification rates were much lower for incongruent objects (e.g., a mailbox). If an object is presented within a certain context, object-related actions that are congruent with this specific context are potentiated (Kalénine et al., 2014). Moreover, the attentional blink, a decline in the detection of a target picture that follows shortly after a previously detected target, can be reduced if the two targets are functionally related objects (Adamo and Ferber, 2009). Other studies have shown improved object recognition after priming of an object-compatible action (Helbig et al., 2010) or after priming with an object which requires the same grip as the target object (McNair and Harris, 2012). These results confirm that the processing of an object is influenced by the previous context.
In addition to motor system activation by objects that afford grasping, there is ample evidence of preferential attention allocation to these objects (Craighero et al., 1999; Garrido-Vásquez and Schubö, 2014; Handy et al., 2003; Handy and Tipper, 2007; Matheson et al., 2014). In a seminal study, Handy et al. (2003) showed that the posterior P1 component of the ERP was enhanced for targets that were superimposed on graspable, rather than non-graspable objects, which indicates that the former attracted more attention than the latter. A related study showed P1 enhancement for target dots that were placed next to an object's handle location rather than next to its functional end (e.g., next to a handsaw's handle rather than next to its blade; Matheson et al., 2014). In our previous study (Garrido-Vásquez and Schubö, 2014) participants detected luminance changes on a graspable object more quickly than on a non-graspable one. While these studies measured target detection responses, Wykowska and Schubö (2012) embedded a visual search task into an action plan. When participants were planning to execute a pointing movement, the detection of luminance targets in the search array was facilitated, compared to when a grasping movement was prepared, in which case the detection of size targets was improved. Thus, participants became particularly sensitive to the information which was most relevant to their current action plan. Importantly, in the ERPs elicited by the search array, these differences started to emerge as early as in the P1 component (Wykowska and Schubö, 2012). Thus, in the present study, we were also interested in visual attention allocation to objects that afford grasping actions. Previous ERP studies that have reported the frontal negativity in response to affording objects (Proverbio et al., 2011; Rowe et al., 2017) have not related this component to early attention allocation as measured by the posterior P1. Therefore, we combined these two measures.
Visual P1 generators have been identified in occipital regions, including the posterior fusiform gyrus (Key et al., 2005; Natale et al., 2006; Perri et al., 2019), and further evidence suggests that parietal areas could be involved in generating the P1 (Key et al., 2005; Novitskiy et al., 2011).
The posterior P1 is followed by the posterior N1 component, which shows an amplitude reduction for affording objects (Righi et al., 2014), potentially related to action suppression (Debruille et al., 2019). However, Rowe et al. (2017) failed to replicate an N1 modulation by object affordance when comparing ERPs elicited by graspable objects versus an empty desk. Thus, a further aim of the present study was to seek more evidence on how the posterior N1 relates to affordance processing.
Visual N1 sources have been identified in inferior occipital and occipitotemporal regions, extending into the inferior temporal lobe (Key et al., 2005). Further evidence suggests parietal involvement in the N1 (Fu et al., 2008; Natale et al., 2006).
Even though the mere presentation of pictures of affording objects triggers motor-related responses in the brain, these are magnified when the action link is more salient, for example when a presented object is oriented towards the dominant hand and when real objects are present and interacted with during the experiment (Gallivan et al., 2009; Garrido-Vásquez and Schubö, 2014). We therefore decided to include action trials, to establish an actual action context and to be able to measure motor behavior as well.
To sum up, in the present study we aimed to replicate the frontal negativity associated with affordance processing (Proverbio et al., 2011; Rowe et al., 2017) and concurrently measure the posterior P1 as an early index of visual attention allocation while ruling out physical stimulus differences which may have influenced the results of previous studies. To this end, we used prime pictures displaying either pencils (graspable) or trees (non-graspable), followed by the picture of a target stimulus that was always identical (a wooden stick, originally a clave). We predicted an enhanced frontal negativity in the ERPs starting at a latency of approximately 210 ms after target onset in pencil prime trials (Proverbio et al., 2011; Rowe et al., 2017). We also hypothesized increased visual attention allocation to the target in pencil prime trials (Handy et al., 2003; Handy and Tipper, 2007; Mangun and Hillyard, 1991), reflected in an enhancement of the posterior P1 component evoked by the target. Furthermore, we wished to further examine the role of the posterior N1 in affordance processing, since the literature on the N1 in object affordance is still inconclusive. For each of the three components of interest, we furthermore identified the underlying neural sources using EEG source reconstruction.
We also included action trials, in which participants were prompted to grasp a rod in front of them. This procedure ensured that the experiment was embedded in an action context (Cavina-Pratesi et al., 2010; Gallivan et al., 2009; Garrido-Vásquez and Schubö, 2014), and we were further able to determine whether our priming procedure also influenced motor behavior. We predicted that participants would initiate grasping movements faster in pencil than in tree prime trials, based on previous reaction time evidence (Garrido-Vásquez and Schubö, 2014; Vingerhoets et al., 2009). Moreover, since pencils require small grip apertures and grip aperture is tightly related to object size (Castiello, 2005), we expected that participants would open their hands less in grasp trials preceded by pencil rather than tree primes. This would be in line with evidence from number magnitude priming, which shows that if large numbers are primed, grip aperture is larger than in the context of smaller numbers, even though the to-be-grasped object stays the same (Lindemann et al., 2007). If the presentation of a rod is preceded by a small number, people overestimate its graspability (i.e., the possibility that the rod would fit between the thumb and index finger), while they underestimate it with large numbers (Badets et al., 2007). Semantic priming has comparable effects: when names of large objects are primed, grip aperture is larger than when names of small objects are primed (Glover et al., 2004). Therefore, here we investigated how priming with graspable versus non-graspable objects influences grip aperture.
2. Methods
2.1. Participants
Twenty-two students from Philipps University Marburg participated in the experiment for course credit. One participant was discarded from the sample due to being left-handed. The remaining 21 participants (16 females) had a mean age of 21.2 years (SD = 2.5) and were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971). All showed normal or corrected-to-normal vision as determined by self-report and a screening test. Color vision according to the Ishihara Test (Ishihara, 1917) was also intact. The study was approved by the local ethics committee at the department of psychology at Philipps University Marburg. Written informed consent was obtained from all participants before testing.
2.2. Stimuli and apparatus
Stimulus presentation was realized on a 22-inch computer screen with a resolution of 1680 × 1050 pixels and a refresh rate of 100 Hz. The screen was positioned at a distance of approximately 85 cm from the participant. Placement of the screen on a pedestal ensured that the cylinder, placed in front of the participants for the grasp trials, would not cover the display. To ensure a constant starting position for right-hand movements, participants enclosed the right arm of a small plastic cross with their right thumb and index finger. The cross was placed centrally in front of the participants. It was attached to a large key (home key) which reacted to presses and releases and served as hand rest. Additionally, for the keypress responses, a small button box was placed below the participant's left index finger. This hand assignment (right hand for grasping, left hand for button press) ensured that people executed the grasping action with their dominant hand. The circular cylinder on which participants executed the grasping movements was sitting on a marked central position in front of the participant, at a distance of approximately 30 cm from the starting position of the right hand. It was flexibly mounted on a stick and had a diameter of 6 cm and a height of 8 cm. Stimulus delivery and experimental timing were controlled with Presentation Software (Neurobehavioral Systems, Albany, CA).
A Polhemus Liberty electromagnetic motion tracker was used to record hand movements with a total of three motion sensors. One sensor was attached to the right wrist, one to the thumb, and one to the index finger of the right hand, using adhesive tape. The tracker operated at a sampling rate of 240 Hz, measuring the location of each sensor in space in x, y, and z coordinates. Motion data recording was controlled with custom Matlab scripts (Mathworks, Natick, MA) and interfaced with Presentation using the Matlab Workspace Extension.
The prime stimuli consisted of a total of 104 color pictures, half of which depicted trees or tree trunks and the other half pencils (for examples, see Table 1). These were equally distributed across four orientations of their central element, which was later to be replaced by the target: 0° (vertical), -45° (tilted to the left), 45° (tilted to the right), and 90° (horizontal). These different orientations were used to introduce more variation into our stimuli and to explore whether orientation influenced affordance processing of the pencil stimuli. The processing of grasp affordance is especially pronounced when objects appear "ready to hand" (Garrido-Vásquez and Schubö, 2014), an aspect that could vary with object orientation. All pictures were real photographs; no artificial rotation was used, to keep them as natural and realistic as possible.
Table 1.
Stimulus examples.
| | 0° | 45° | 90° | -45° |
|---|---|---|---|---|
| Tree primes | [example photographs] | [example photographs] | [example photographs] | [example photographs] |
| Pencil primes | [example photographs] | [example photographs] | [example photographs] | [example photographs] |
The target stimulus consisted of a centered wooden stick on a blurred-looking background, and it was identical for the tree and pencil prime conditions. To allow for gradual prime-target transitions, three blurred versions were created for each prime picture using a Gaussian filter with a blurring radius of 20, 40, and 60 pixels, respectively. The central area at which the target appeared later was not blurred, i.e., blurring was limited to the surroundings of the target area. Prime and target orientation always matched. The picture size was 1680 × 1050 pixels.
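The blurred-surround manipulation can be illustrated with a minimal NumPy sketch: a separable Gaussian blur applied to a grayscale image, after which the unblurred central target region is pasted back. The image-processing tool the authors actually used is not specified, and the kernel parameters and function names here are illustrative only.

```python
import numpy as np

def gaussian_kernel(radius, sigma=None):
    """1D normalized Gaussian kernel; sigma defaults to radius/2."""
    sigma = sigma if sigma is not None else radius / 2.0
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_surround(img, radius, box):
    """Blur a 2D grayscale image, then restore the unblurred central
    target region given by box = (row0, row1, col0, col1)."""
    k = gaussian_kernel(radius)
    # separable blur: convolve columns first, then rows
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blurred)
    r0, r1, c0, c1 = box
    blurred[r0:r1, c0:c1] = img[r0:r1, c0:c1]  # keep target area sharp
    return blurred
```

Applying this with increasing radii (in the study: 20, 40, and 60 pixels) yields the gradual prime-to-target transition while the target area stays sharp throughout.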
2.3. Procedure
Trials started with fixation for 1000 ms, consisting of a central black cross on a gray background. Fixation was followed by the original prime picture for 1500 ms, which was then gradually blurred in three steps (see stimulus description). The first two blurring stages were presented for 100 ms each, while the third and last blurring stage remained on screen for 1000 ms and was then replaced by the target stimulus in the orientation corresponding to the prime (0, -45, 45, or 90°). The target stimulus was visible for 200 ms, and 120 ms after target onset it was surrounded by a red or blue margin, which remained visible until target offset, i.e., for 80 ms. This timing was based on previous studies (Bub and Masson, 2010; Garrido-Vásquez and Schubö, 2014). The margin served as task cue, and its color indicated which task had to be executed in a specific trial. The task was either to press a button with the left index finger as quickly as possible or to grasp the cylinder with the right hand, with no emphasis on speed. The assignment of red and blue colors to the two tasks was counterbalanced among participants. A new trial started after an inter-trial interval of random duration between 1000 and 1500 ms after the left-hand button press was registered or, in the case of grasp trials, after the right hand had returned to the home key. See also Figure 1 for a graphical display.
Figure 1.
Time course of an experimental trial.
Of the total of 624 trials in the experiment, grasp trials comprised one third (208 trials) and button press trials two thirds (416 trials). Grasp trials were included to establish and constantly refresh an action context. The 33% proportion was based on previous studies (e.g., Gallivan et al., 2009; Gallivan et al., 2011, with 40% and 33%, respectively) and on practical considerations, because grasp trials take more time than button press trials. Both trial types were paired equally often with the two prime conditions, such that prime presentation did not allow for any prediction of which task would follow in a specific trial. The experiment was divided into 12 blocks of 52 trials each, with breaks in between. The order of trials during the experiment was pseudo-randomized for each participant individually.
2.4. Behavioral data analysis
Mean target reaction times in left-hand button press trials were computed separately for the eight conditions (2 prime types: trees/pencils x 4 orientations: 0°/-45°/45°/90°). Trials with incorrect or missing responses were discarded. Furthermore, intra-individual reaction time outliers were identified using the mean reaction times for each participant and condition. Trials in which reaction times exceeded ±3 standard deviations from the intra-individual mean for the respective condition were excluded. Taken together, these exclusion criteria affected 3.79% (SD = 1.63) of all trials.
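The ±3 SD exclusion rule can be sketched as follows (a minimal Python/NumPy illustration rather than the original analysis code; the function name is ours):

```python
import numpy as np

def exclude_rt_outliers(rts, n_sd=3.0):
    """Drop reaction times deviating more than n_sd standard deviations
    from the mean of one participant-by-condition cell."""
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)
    keep = np.abs(rts - mean) <= n_sd * sd
    return rts[keep]
```

In practice this would be applied separately to each participant x condition cell before condition means are computed.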
Motion tracking data, recorded during right-hand grasp trials, were processed with custom Matlab scripts, based on procedures reported previously (Hesse et al., 2012, 2016). Grasp reaction time (movement onset) was quantified as the latency from task cue onset (colored frame around the target) until the wrist, index, or thumb sensor exceeded a velocity of 5 cm/s (whichever came first). Movement time was measured from the reaction time until the touch of the cylinder, which was defined as the point in time at which the wrist sensor velocity dropped below 5 cm/s and the index sensor had a minimum distance of 30 cm (in 3D coordinates) from its starting position. We also calculated grip aperture, defined as the distance in 3D between thumb and index sensors. Mean grip aperture during movement time, maximum grip aperture, and the point in time at which maximum grip aperture occurred during the movement towards the cylinder were determined.
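The onset and aperture measures can be illustrated with a small Python/NumPy sketch (the actual analysis used custom Matlab scripts; function names and the synthetic input format here are our assumptions):

```python
import numpy as np

FS = 240.0  # Polhemus sampling rate in Hz

def sensor_speed(pos):
    """Instantaneous speed (cm/s) from an (n, 3) position trace in cm."""
    d = np.diff(pos, axis=0)               # frame-to-frame displacement
    return np.linalg.norm(d, axis=1) * FS  # cm per second

def movement_onset(wrist, index, thumb, threshold=5.0):
    """First sample at which any sensor exceeds the velocity threshold
    (whichever sensor crosses first), or None if none does."""
    speeds = np.stack([sensor_speed(s) for s in (wrist, index, thumb)])
    above = np.any(speeds > threshold, axis=0)
    idx = int(np.argmax(above))
    return idx if above[idx] else None

def grip_aperture(thumb, index):
    """3D Euclidean thumb-index distance at each sample."""
    return np.linalg.norm(thumb - index, axis=1)
```

Mean and maximum aperture then follow directly from the `grip_aperture` trace between movement onset and object contact.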
All behavioral variables were analyzed with 2 (prime) x 4 (orientation) repeated-measures ANOVAs. Greenhouse-Geisser adjusted values were used when the sphericity assumption was violated. Of interest were the main effects for the factor prime (tree vs. pencil) as well as prime x orientation interactions.
2.5. EEG data recording and analysis
The electroencephalogram was acquired with 64 active Ag/AgCl electrodes mounted in an elastic cap according to the extended International 10-10 system. The horizontal electrooculogram (EOG) was measured at electrode locations F9 and F10, and the vertical EOG was acquired from Fp1 and an additional electrode placed below the left eye. Two additional electrodes were attached to the left and right mastoids. The electrode on the left mastoid bone served as an online reference, and data were re-referenced offline to the average of left and right mastoids. Electrode impedances were kept below 5 kΩ. The sampling rate during data acquisition was 500 Hz.
We used FieldTrip (Oostenveld et al., 2011) running on Matlab (The Mathworks, Natick, USA) to further process the EEG data offline. The signal was segmented into 800 ms epochs time-locked to target onset, including a 100 ms pre-stimulus baseline. Primes were presented for a relatively long time (2700 ms in total), and the last prime blurring stage was on screen for a full second, of which only the last 100 ms served as baseline. Therefore, we did not expect any systematic baseline differences as a function of prime, even though the baseline overlapped with prime presentation.
Only button press trials with correct responses were analyzed. We first manually rejected all trials containing atypical artifacts. Then, an independent component analysis (ICA) was applied to the data to identify components associated with blinks, eye movements, or other artifacts (electrocardiographic artifacts or noisy electrodes). These components were removed from the data, and then the ICA-corrected data were inspected manually again to reject any trials that still contained artifacts. On average, 6.5% (SD = 4.2%) of all trials were rejected for each participant.
Time windows and electrodes for ERP calculations were defined based on the collapsed localizers procedure (Luck and Gaspelin, 2017), in which data are collapsed into only one waveform irrespective of experimental condition, to define time windows and electrodes at which the components of interest are most pronounced. Posterior P1 and N1 components were measured at electrodes PO7, PO3, POz, PO4, PO8, O1, Oz, and O2 in time windows from 100 to 140 ms and 140–180 ms post target onset, respectively. The frontal negativity was measured from 210 to 260 ms post target onset at electrodes AF3, AF4, F3, Fz, and F4. Mean voltage was computed for these electrodes and time windows. The analysis followed the same procedure as for the behavioral data. All statistical computations were executed on unfiltered data.
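Computing mean voltage over predefined electrodes and time windows can be sketched as follows (illustrative Python/NumPy; the actual pipeline used FieldTrip, and the array layout assumed here is ours):

```python
import numpy as np

def mean_component_amplitude(epochs, times, ch_names, window, electrodes):
    """Mean voltage over a set of electrodes and a time window.

    epochs: (n_trials, n_channels, n_samples) array in microvolts
    times: (n_samples,) array of epoch time points in seconds
    window: (t_start, t_end) in seconds, inclusive
    """
    ch_idx = [ch_names.index(ch) for ch in electrodes]
    t_mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, ch_idx][:, :, t_mask].mean()
```

For the P1, for example, one would pass the posterior electrode set and `window=(0.100, 0.140)` relative to target onset.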
2.6. Source reconstruction
Source reconstruction was used to identify any differences between the two prime conditions regarding the underlying neural generators of ERP effects. These analyses were realized in SPM12 (http://www.fil.ion.ucl.ac.uk/spm/software/spm12/). Electrode locations were co-registered with SPM's standard template head model in MNI space with a cortical mesh of 8,196 vertices, and the forward model was constructed using the Boundary Elements Method in SPM. We inverted the data at the group level, using the minimum norm estimation algorithm (IID). The results were smoothed with a Gaussian kernel full-width half-maximum (FWHM) of 12 mm. Inversions were obtained for the three time-windows corresponding to the ERP components of interest. Paired-samples t-tests were calculated to compare both prime conditions, using the t contrasts tree > pencil and pencil > tree.
We restricted our solutions to previously defined anatomical regions, based on prior evidence. P1 and N1 analyses were restricted to occipital and parietal sites, and the frontal negativity analysis was restricted to the precentral and postcentral gyri. This restriction was applied using small volume correction in SPM, with an initial threshold at p < .005 (uncorrected), and subsequently the significance level was set to p < .05 family-wise error corrected.
3. Results
3.1. Behavioral data
The mean reaction time for left-hand button press responses was 447.03 ms (SD = 73.51). There were no significant main effects or interactions for the factors prime or orientation (ps > .2).
3.1.1. Motion tracking
In grasp trials, participants initiated their movements (reaction time) on average 568.56 ms (SD = 155.28) after the onset of the color cue and touched the object (movement time) 981.94 ms (SD = 187.71) later. Neither reaction time nor movement time varied systematically as a function of prime or orientation (ps > .13). In contrast, mean grip aperture was influenced by prime condition: participants opened their hands less in pencil prime trials (M = 6.32 cm, SD = 0.55) than in tree prime trials (M = 6.42 cm, SD = 0.60), F(1, 20) = 11.307, p = .003, η2p = .361. These differences between conditions were also reflected in the maximum grip aperture data, with lower maxima in pencil prime trials (M = 10.20 cm, SD = 0.81) than in tree prime trials (M = 10.31 cm, SD = 0.83), F(1, 20) = 11.815, p = .003, η2p = .317. Maximum grip aperture was reached on average after 70.31% (SD = 5.35) of the total movement time, with no significant difference between the two prime conditions (p > .78). Prime orientation did not affect grip aperture (all ps > .30).
3.1.2. Time course of grip aperture
To analyze how priming affected the time course of grip aperture during movement, we adopted a procedure previously employed in studies on number magnitude priming (Andres et al., 2008; Chiou et al., 2012). The total movement duration was divided into five temporal units of equal length, i.e., we calculated mean grip aperture during the first 20% of the movement and so forth. These values were submitted to a 2 (prime) x 5 (timing) x 4 (orientation) within-subjects ANOVA. Both the main effect of prime, F(1, 20) = 11.330, p = .003, η2p = .362, with larger grip apertures in tree than in pencil prime trials, and the main effect of timing, F(1.560, 31.209) = 397.630, p < .001, η2p = .952, were significant. Grip aperture increased significantly from the first until the fourth movement unit and decreased during the last 20% of movement execution (all ts > 3.37, all ps < .013; Bonferroni-Holm corrected). The prime x timing interaction was only marginally significant (p = .069, η2p = .122). An exploratory post-hoc analysis of this effect revealed that for the third and fourth temporal units (i.e., from 40–60% and from 60–80% of the total movement time), grip aperture was significantly smaller for pencil prime than for tree prime trials (t(20) = 2.764, p = .024, and t(20) = 4.837, p < .001, respectively).
There were no significant effects involving the factor orientation (ps > .64).
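The five-bin time-course analysis described above can be sketched in a few lines (illustrative Python/NumPy rather than the original Matlab code):

```python
import numpy as np

def binned_aperture(aperture, n_bins=5):
    """Mean grip aperture within n_bins equal-length segments of the
    movement (first 20% of samples, next 20%, and so forth)."""
    segments = np.array_split(np.asarray(aperture, dtype=float), n_bins)
    return np.array([seg.mean() for seg in segments])
```

The resulting five values per trial (averaged within conditions) form the "timing" factor of the 2 x 5 x 4 ANOVA.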
3.2. ERP data
Posterior P1 amplitude for the target stimulus was larger when it had been primed by the pencil context (M = 3.27 μV, SD = 2.67) compared to the tree context (M = 2.91 μV, SD = 2.79). This was reflected in a significant main effect of prime, F(1, 20) = 4.578, p = .045, η2p = .186. The main effect of orientation was also significant, F(3, 60) = 8.681, p < .001, η2p = .303, but the interaction between prime and orientation was not (p = .055, η2p = .118).
The N1 response to the target was less negative in the pencil prime condition (M = 1.03 μV, SD = 2.60) than in the case of tree primes (M = 0.34 μV, SD = 2.36), F(1, 20) = 11.713, p = .003, η2p = .369. The main effect of orientation was also significant, F(3, 60) = 3.154, p = .031, η2p = .136, but there was again no significant interaction of this factor with prime (p = .303, η2p = .058). Panel A of Figure 2 graphically displays the ERPs for the tree and pencil prime conditions at posterior electrodes.
Figure 2.
ERP results for the pencil and tree prime conditions, time-locked to target onset. Note. Time windows for ERP calculations are shaded in gray. Panel A: Posterior ERP components (P1 and N1). Panel B: Frontal negativity. Panel C: scalp potential maps for the three time windows of interest, showing the tree minus pencil difference.
Finally, in line with our hypothesis, the frontal negativity elicited by the target was more negative in pencil (M = −0.97 μV, SD = 4.23) than in tree prime trials (M = −0.14 μV, SD = 4.08), F(1, 20) = 9.365, p = .006, η2p = .319. There was no significant main effect of orientation (p = .145, η2p = .085), but a significant prime x orientation interaction, F(3, 60) = 4.470, p = .007, η2p = .183, whose descriptive statistics are presented in Table 2. Post-hoc analyses revealed a significant main effect of orientation for tree prime trials, F(3, 60) = 3.750, p = .015, η2p = .158, but not for pencil trials (p = .13, η2p = .089). Paired-samples t-tests (Bonferroni-Holm corrected) revealed that when the trees had been primed in an upright orientation (0°), the target ERP response was significantly more negative than for the 45° or -45° orientation, t(20) = 3.381, p = .003 and t(20) = 2.680, p = .028, respectively. All other comparisons were not significant (ps > .3). For a graphical display of the frontal negativity, please refer to Figure 2, panel B.
Table 2.
Mean amplitudes (standard deviations) in μV for the frontal negativity in all eight conditions.
| | 0° | -45° | 45° | 90° |
|---|---|---|---|---|
| Tree | -0.89 (4.04) | 0.30 (4.31) | 0.42 (4.66) | -0.41 (4.04) |
| Pencil | -0.91 (4.36) | -0.44 (4.61) | -1.56 (4.63) | -0.97 (4.10) |
3.3. Source localization
For the P1 component, several occipital and parietal sites in both hemispheres were more strongly activated for targets in the pencil prime than in the tree prime condition (see Table 3). In the N1 time window, stronger activations for the pencil than the tree prime condition were mainly observed in bihemispheric parietal regions. The frontal negativity was associated with stronger postcentral gyrus activation in the pencil as compared to the tree prime condition. The tree > pencil comparison yielded no significant results for any component of interest. See Figure 3 for a display of the source reconstruction results.
Table 3.
Significant activation peaks from the source localization analysis in the pencil > tree comparison.
| Hemisphere | Region | MNI peak coordinates | Z | p |
|---|---|---|---|---|
| *P1 (100–140 ms)* | | | | |
| L | Visual-associative | -26 -88 22 | 2.88 | .027 |
| | | -16 -84 40 | 2.78 | .036 |
| | | -20 -86 20 | 2.74 | .039 |
| | | -28 -92 16 | 2.77 | .040 |
| | | -20 -92 26 | 2.76 | .041 |
| R | Visual-associative | 54 -64 14 | 3.01 | .010 |
| | | 44 -72 2 | 2.66 | .026 |
| | | 20 -98 12 | 2.78 | .039 |
| | | 18 -82 42 | 2.72 | .040 |
| | | 26 -84 24 | 2.67 | .045 |
| L | Visuo-motor coordination | -18 -68 48 | 3.00 | .020 |
| | | -16 -54 68 | 2.62 | .018 |
| | | -34 -46 58 | 2.58 | .019 |
| R | Visuo-motor coordination | 20 -72 52 | 3.24 | .010 |
| | | 30 -66 56 | 3.21 | .011 |
| | | 24 -48 68 | 2.65 | .011 |
| L | Angular gyrus | -54 -60 22 | 2.84 | .013 |
| | | -54 -64 10 | 2.70 | .023 |
| | | -50 -66 20 | 2.85 | .030 |
| | | -28 -68 36 | 2.66 | .047 |
| | | -42 -66 34 | 2.65 | .048 |
| R | Angular gyrus | 50 -56 34 | 2.83 | .014 |
| | | 56 -56 22 | 2.69 | .019 |
| | | 58 -52 30 | 2.61 | .024 |
| | | 50 -52 30 | 2.59 | .025 |
| | | 34 -78 26 | 3.15 | .013 |
| | | 32 -66 30 | 2.65 | .047 |
| R | Superior parietal | 28 -48 62 | 2.75 | .009 |
| | | 26 -46 58 | 2.58 | .013 |
| *N1 (140–180 ms)* | | | | |
| L | Visual-associative | -26 -88 22 | 2.66 | .047 |
| R | Visual-associative | 28 -80 34 | 2.66 | .046 |
| L | Visuo-motor coordination | -18 -68 48 | 3.00 | .020 |
| | | -26 -88 22 | 2.66 | .047 |
| R | Visuo-motor coordination | 20 -72 52 | 3.37 | .007 |
| R | Angular gyrus | 50 -56 34 | 2.61 | .024 |
| | | 30 -62 46 | 3.41 | .006 |
| R | Superior parietal | 28 -48 62 | 2.58 | .013 |
| *Frontal negativity (210–260 ms)* | | | | |
| R | Postcentral gyrus | 38 -36 64 | 2.74 | .042 |
Note. p-values are family-wise error corrected.
Figure 3.
Results from the source localization analysis (contrast pencil > tree) projected onto glass-brain displays. Note. Images are thresholded at p < .005 (uncorrected) and k = 5.
3.4. Prime-target similarity
To assess whether there were any significant differences in low-level features between the two prime categories, each prime stimulus at the last blurring stage was compared to the target stimulus using the feature similarity index by Zhang et al. (2011). We computed both the FSIM index, which is based on luminance information, and the FSIMc index, which also considers chromatic information. Tree prime stimuli were more similar to the target (FSIM: M = .864, SD = .030; FSIMc: M = .842, SD = .030) than pencil primes (FSIM: M = .855, SD = .018; FSIMc: M = .826, SD = .024). This difference was marginally significant for the FSIM index (p = .067) and significant for FSIMc (p = .002). When correlating the eight FSIM and FSIMc values with their respective mean ERP amplitudes, there were no significant correlations for the P1 and N1 (ps > .2). The correlation with the frontal negativity amplitudes was marginally significant (rs = .793 and .758, ps = .057 and .087, respectively, Bonferroni-corrected).
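The logic of this analysis — compute a per-image similarity score against the target, then compare the two prime categories — can be sketched with a simplified, gradient-based stand-in for FSIM. This is not the published FSIM algorithm of Zhang et al. (2011), which additionally uses phase congruency as a feature and weighting map (and chromatic channels for FSIMc); the sketch only illustrates the similarity-map-then-pool structure on synthetic images:

```python
import numpy as np

def gradient_similarity(img_a, img_b, c=0.01):
    """Crude stand-in for a feature similarity index: compares the
    gradient-magnitude maps of two equally sized grayscale images.
    NOT the FSIM of Zhang et al. (2011); illustration only.
    """
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))   # derivatives along rows, cols
        return np.hypot(gx, gy)

    ga, gb = grad_mag(img_a), grad_mag(img_b)
    # per-pixel similarity in (0, 1], stabilized by the constant c,
    # then pooled by averaging over the whole image
    sim_map = (2 * ga * gb + c) / (ga**2 + gb**2 + c)
    return sim_map.mean()

# Synthetic demonstration: a target compared to itself and to a noisy copy
rng = np.random.default_rng(0)
target = rng.random((64, 64))
identical = gradient_similarity(target, target)                       # exactly 1.0
noisy = gradient_similarity(target, target + 0.5 * rng.random((64, 64)))
assert identical > noisy
```

In the study's setup, one such score per prime image would then feed a between-category comparison (tree vs. pencil similarity to the target), analogous to the FSIM/FSIMc tests reported above.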
4. Discussion
In the present study, we showed that an elongated wooden object, ambiguous in size and function, is processed differently depending on the context previously established by conceptual priming. When the pencil context, and thus a context of graspable objects, was primed, the target triggered larger posterior P1 amplitudes, indicative of increased visual attention, as well as a reduced posterior N1 and an enhanced frontal negativity, which has been previously related to object affordance (Proverbio et al., 2011; Rowe et al., 2017). Source localization additionally revealed stronger occipital and parietal activation in the pencil prime condition. Moreover, grip aperture in the grasping task was influenced by priming, with participants opening their hands less in pencil prime than in tree prime trials. Thus, our results indicate that even though the target was physically identical across the two prime conditions, it triggered different processes depending on the context previously established by priming.
The enhanced frontal negativity in the ERPs starting at about 210 ms after target onset in the pencil compared to the tree context supports the idea that the pencil primes were successful at making the ambiguous target trigger processes previously related to affordances (Proverbio et al., 2011; Rowe et al., 2017). Thus, we replicated and extended previous findings on the frontal negativity, by ruling out that previously reported ERP effects were due to physical differences between affording and non-affording stimuli. These results are unlikely to reflect post-decision processes, since the task cue appeared 120 ms after target onset and button press reaction times were 450 ms on average. Moreover, the absence of differences in button press and grasp reaction times between the two prime conditions and the nonsignificant correlations between frontal negativity amplitudes and reaction times support this view.
Stimulus orientation affected the frontal negativity elicited by the target stimulus only when trees had been primed, but not in the case of pencils. This seems somewhat surprising since the literature shows that affording objects are perceived as even more affording when positioned "ready to hand", e.g., with their handle oriented towards the dominant hand (Garrido-Vásquez and Schubö, 2014). In the present study, the dominant hand was relatively far away from the objects presented on screen, while in other studies the hand was placed closer or the body was oriented towards the object displays (e.g., Rowe et al., 2017). Moreover, much of the evidence comes from handled objects, for which orientation is presumably more important than for pencils.
In addition to the modulation of the frontal negativity, we observed increased allocation of attention to the target when preceded by pencil primes: the posterior P1 was enhanced in this condition as compared to the tree prime condition. This result is in line with previous findings from related studies with graspable objects (Handy et al., 2003; Handy and Tipper, 2007; Matheson et al., 2014). The present study extends this evidence by ruling out the influence of physical stimulus differences between conditions. Research has shown that physically identical stimuli may elicit different P1 amplitudes depending on the amount of attention that is drawn to them. Enhanced P1 amplitudes for physically identical stimuli have been shown when participants were subjectively on-task rather than off-task (distracted) (Kam et al., 2011), at validly versus invalidly cued stimulus locations (Mangun and Hillyard, 1991), for visual stimuli in the attended as compared to the unattended hemifield (Rugg et al., 1987), and for targets in a visual search array that were compatible versus incompatible with a current action plan (Wykowska and Schubö, 2012). Therefore, our results suggest enhanced target processing when graspable objects were primed, in line with the idea of graspable objects drawing visuospatial attention (Craighero et al., 1999; Handy et al., 2003).
The present study also further explored the role of the posterior N1 component for the processing of graspable versus non-graspable objects. We revealed a smaller N1 amplitude for targets primed with pencil rather than tree primes. This finding is in line with Righi et al. (2014), who showed that highly affording objects elicited smaller posterior N1 amplitudes than objects low in affordance, and with Goslin et al. (2012), who reported N1 reductions when object orientations were compatible with the response hand in a current trial. Thus, the N1 reduction to targets primed with a pencil context could be due to the target acquiring a potential for action. Furthermore, Debruille et al. (2019) have recently associated posterior N1 amplitude reductions with suppressed motor reactions, which might also be an explanation for the current data.
Contrary to these lines of evidence, Rowe et al. (2017) failed to find amplitude differences in the posterior N1 when comparing graspable objects to an empty desk. However, in contrast to other studies and the present research, these authors used a passive viewing task. Even though motor-related activity has been reported for affording stimuli during passive viewing (e.g., Proverbio et al., 2011), the posterior N1 component could be related to the action context in which the experiment is embedded, which would explain these contrasting findings and be in line with the suppressed motor reactions explanation (Debruille et al., 2019).
We also found a direct influence of priming on motor behavior, with smaller mean and maximum grip apertures in the pencil than in the tree prime condition. This difference in grip aperture cannot be explained by the size of the to-be-grasped cylinder, which was always the same, nor can it be attributed to target or task cue presentation, which was identical across conditions. Thus, we conclude that the prime context affected motor behavior in the present experiment. This fits with findings from number magnitude priming, in which the priming of low or high numbers decreases or increases grip aperture, respectively, while the object that is to be grasped remains the same (Andres et al., 2008; Chiou et al., 2012; Lindemann et al., 2007; Namdar et al., 2014). The same is true for pictures of small versus pictures of large objects that are shown while grasping the same object (Girardi et al., 2010), and for priming with words that describe small objects (e.g., grape or pencil) compared to larger objects (e.g., jar or apple; Glover et al., 2004). Hence, in accordance with and extending this previous evidence, the present results show that grip aperture can be modulated by prime context. Tentative evidence from the time course analysis furthermore indicates that this difference was most pronounced from 40–80% of the total movement duration.
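A time course analysis of this kind presupposes that each trial's grip-aperture profile is resampled onto a common 0–100% movement-time axis, so that windows such as 40–80% are comparable across trials of different durations. A minimal sketch with hypothetical trajectories (the resampling scheme is our illustration, not necessarily the authors' exact procedure):

```python
import numpy as np

def normalize_trajectory(aperture, n_points=101):
    """Resample one trial's grip-aperture samples onto a common
    0-100% movement-time axis via linear interpolation."""
    t_orig = np.linspace(0.0, 1.0, len(aperture))
    t_norm = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_norm, t_orig, aperture)

# Hypothetical trials with different raw durations (sample counts):
# a linear opening from 20 mm to 80 mm, recorded at two lengths
trial_a = np.linspace(20, 80, 90)    # 90 samples
trial_b = np.linspace(20, 80, 150)   # 150 samples
norm = np.vstack([normalize_trajectory(trial_a),
                  normalize_trajectory(trial_b)])

# After normalization both trials share the same time base, so the
# 40-80% window is simply columns 40..80 of the resampled matrix
window_mean = norm[:, 40:81].mean(axis=1)
```

Condition means over such a window (pencil vs. tree prime trials) can then be compared directly, since every trial contributes the same number of time points regardless of its raw movement duration.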
In contrast to previous research (Garrido-Vásquez and Schubö, 2014; Vingerhoets et al., 2009), and contrary to our initial predictions, we did not observe a reaction time advantage for targets primed with graspable versus non-graspable contexts. This held for both the right-hand grasp movements and the left-hand button presses. It could be due to the target no longer being visible on screen when participants executed their responses. Previous research has identified object visibility as a prerequisite for object affordance effects on reaction times (Tucker and Ellis, 2001); yet other studies have shown that object affordance can influence reaction times even when the objects are not visible (Derbyshire et al., 2006; Tucker and Ellis, 2004). Thus, it is unclear whether object visibility influenced the present results. Additionally, in the present experiment, the target object was not responded to immediately when it appeared on screen; rather, participants had to wait 120 ms for the colored frame to indicate the task in each trial, which means that the behavioral response to the target was delayed. Thus, while in related studies the task was clear from the beginning (e.g., Vingerhoets et al., 2009), in our experiment it was indicated during target processing. This may have led to additional reaction time costs because participants first had to determine which reaction was required in a given trial. Yet another critical aspect could be the time course of affordance processing, as shown in a study by Makris et al. (2011), who presented either a power or precision grip object on the screen. After a variable delay, the background color of the display changed, prompting participants to either perform a power or precision grip on a special device.
Reliable effects in line with the concept of micro-affordances (Ellis and Tucker, 2000) were only found at the short (400 ms) delay, but not when the color change took more time (800 or 1200 ms), which was confirmed in a TMS experiment measuring motor-evoked potentials at the right hand (Makris et al., 2011). In the present experiment, it was clear before target onset where the target would appear and what it would look like, since it was always identical. Thus, the affording character of the target might already have partly faded by its onset, precluding observable reaction time differences. On the other hand, priming context modulated the ERPs during target processing, and we observed an influence of prime context on grasping behavior.
4.1. Limitations of the study
Even though the present study shows that priming context influences how an ambiguous and always identical object is processed, it is at this point impossible to determine what exactly was primed: function, graspability (which is related to size), or a combination of both. Pencils are small and graspable, and at the same time they come with a predetermined function (writing), while trees or tree trunks are too big to grasp and lack a determined functional association. All these properties have been associated with affordance, with size being a factor already described by Gibson (1979) as a prerequisite for graspability, even though, despite an adequate size for single-handed grasping, some objects may be more graspable than others (Vingerhoets et al., 2009). Additionally, functional associations may play a role, as revealed in a study that showed differences in processing graspable shapes as opposed to tools (Creem-Regehr and Lee, 2005). However, if unfamiliar but highly graspable tools are used instead of shapes, this difference may disappear, at least at the behavioral level (Vingerhoets et al., 2009), while other research shows a decrease in the visuomotor response due to familiarity with the functional role of certain objects (Handy et al., 2006). Moreover, it seems that processes related to object graspability and function are mediated by different neural mechanisms (Valyear et al., 2007). They may also operate at different time scales, with early grasp-related motor activations observed simultaneously with the frontal affordance-related negativity, while activations in areas supposed to play a role in processing the functional significance of tools (e.g., the supramarginal gyrus) appear to operate later (Proverbio et al., 2013).
Thus, since the two prime categories in the present experiment differed in many respects, future studies should manipulate the number of dimensions on which the stimuli can be differentiated and include more stimulus categories.
One might argue that having to perform a motor task in each trial (either a grasping movement or a button press) involved some form of minimal motor preparation, which makes us unable to tell apart “pure” attentional and perceptual mechanisms from motor-related processes. Our motivation for including grasp trials was to create an action context for the experiment, based on previous evidence (Garrido-Vásquez and Schubö, 2014; Wykowska and Schubö, 2012). We think that the effects on posterior P1 and N1 might have been facilitated by the grasping context, but nevertheless they probably would have been present without it (Goslin et al., 2012; Handy et al., 2003; Righi et al., 2014). The same holds for the frontal negativity (Proverbio et al., 2011). Since our effects are quite comparable to other studies, in which passive viewing or delayed responses were used (Proverbio et al., 2011; Righi et al., 2014; Rowe et al., 2017), we believe that motor preparation for the tasks in the present experiment is not a major cause for the effects we report.
Even though a strength of our experiment is that all effects were measured on physically identical input, the similarity between the primes and the target stimulus was more pronounced for tree primes, as revealed by the additional picture similarity analysis. Thus, the prime-target transition might have been smoother in the tree compared to the pencil prime condition, which might affect early ERP components that are strongly modulated by sensory qualities of stimuli, such as the posterior P1. However, there were no significant correlations of the similarity indices with P1 or N1 amplitudes, which makes this possibility rather unlikely. Furthermore, similar results have been observed with physically very different types of stimuli before (Proverbio et al., 2011; Righi et al., 2014; Rowe et al., 2017).
5. Conclusion
In the present study, we showed that a physically identical target stimulus, namely an elongated wooden object, triggers an enhanced frontal negativity previously related to affordance processing, stronger allocation of early visual attention, and more activation in occipital and parietal cortex when primed with a context of graspable (pencils) rather than non-graspable (trees) objects. Furthermore, the posterior N1 amplitude was reduced for targets that had been primed with pencils, which we interpret as the target acquiring potential for action when primed with graspable stimuli. We were also able to relate the ERP results to overt motor behavior, showing that prime condition affected grip aperture during the grasping task. Since all effects in the present study were measured on identical visual input, we can rule out that physical stimulus differences might account for our findings, a problem inherent to previous studies in the field.
Declarations
Author contribution statement
P. Garrido-Vásquez: Conceived and designed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
E. Wengemuth: Conceived and designed the experiments; Performed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.
A. Schubö: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Funding statement
This work was supported by the Deutsche Forschungsgemeinschaft (German Research Foundation; SFB/TRR 135, TP B03).
Data availability statement
Data associated with this study have been deposited at the Open Science Framework: https://osf.io/y25n4/.
Declaration of interests statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
References
- Adamo M., Ferber S. A picture says more than a thousand words: behavioural and ERP evidence for attentional enhancements due to action affordances. Neuropsychologia. 2009;47(6):1600–1608. doi: 10.1016/j.neuropsychologia.2008.07.009.
- Andres M., Ostry D.J., Nicol F., Paus T. Time course of number magnitude interference during grasping. Cortex. 2008;44(4):414–419. doi: 10.1016/j.cortex.2007.08.007.
- Badets A., Andres M., Di Luca S., Pesenti M. Number magnitude potentiates action judgements. Exp. Brain Res. 2007;180(3):525–534. doi: 10.1007/s00221-007-0870-y.
- Bub D.N., Masson M.E. Grasping beer mugs: on the dynamics of alignment effects induced by handled objects. J. Exp. Psychol. Hum. Percept. Perform. 2010;36(2):341. doi: 10.1037/a0017606.
- Cardellicchio P., Sinigaglia C., Costantini M. The space of affordances: a TMS study. Neuropsychologia. 2011;49(5):1369–1372. doi: 10.1016/j.neuropsychologia.2011.01.021.
- Castiello U. The neuroscience of grasping. Nat. Rev. Neurosci. 2005;6(9):726–736. doi: 10.1038/nrn1744.
- Cavina-Pratesi C., Monaco S., Fattori P., Galletti C., McAdam T.D., Quinlan D.J.…Culham J.C. Functional magnetic resonance imaging reveals the neural substrates of arm transport and grip formation in reach-to-grasp actions in humans. J. Neurosci. 2010;30(31):10306–10323. doi: 10.1523/JNEUROSCI.2023-10.2010.
- Chao L.L., Martin A. Representation of manipulable man-made objects in the dorsal stream. Neuroimage. 2000;12(4):478–484. doi: 10.1006/nimg.2000.0635.
- Chiou R.Y.-C., Wu D.H., Tzeng O.J.-L., Hung D.L., Chang E.C. Relative size of numerical magnitude induces a size-contrast effect on the grip scaling of reach-to-grasp movements. Cortex. 2012;48(8):1043–1051. doi: 10.1016/j.cortex.2011.08.001.
- Craighero L., Fadiga L., Rizzolatti G., Umiltà C. Action for perception: a motor-visual attentional effect. J. Exp. Psychol. Hum. Percept. Perform. 1999;25(6):1673. doi: 10.1037//0096-1523.25.6.1673.
- Creem-Regehr S.H., Lee J.N. Neural representations of graspable objects: are tools special? Cognit. Brain Res. 2005;22(3):457–469. doi: 10.1016/j.cogbrainres.2004.10.006.
- Debruille J.B., Touzel M., Segal J., Snidal C., Renoult L. A central component of the N1 event-related brain potential could index the early and automatic inhibition of the actions systematically activated by objects. Front. Behav. Neurosci. 2019;13:95. doi: 10.3389/fnbeh.2019.00095.
- Derbyshire N., Ellis R., Tucker M. The potentiation of two components of the reach-to-grasp action during object categorisation in visual memory. Acta Psychol. 2006;122(1):74–98. doi: 10.1016/j.actpsy.2005.10.004.
- Ellis R., Tucker M. Micro-affordance: the potentiation of components of action by seen objects. Br. J. Psychol. 2000;91(Pt 4):451–471. doi: 10.1348/000712600161934.
- Fu S., Zinni M., Squire P.N., Kumar R., Caggiano D.M., Parasuraman R. When and where perceptual load interacts with voluntary visuospatial attention: an event-related potential and dipole modeling study. Neuroimage. 2008;39(3):1345–1355. doi: 10.1016/j.neuroimage.2007.09.068.
- Gallivan J.P., Cavina-Pratesi C., Culham J.C. Is that within reach? fMRI reveals that the human superior parieto-occipital cortex encodes objects reachable by the hand. J. Neurosci. 2009;29(14):4381–4391. doi: 10.1523/JNEUROSCI.0377-09.2009.
- Gallivan J., McLean A., Culham J. Neuroimaging reveals enhanced activation in a reach-selective brain area for objects located within participants’ typical hand workspaces. Neuropsychologia. 2011;49(13):3710–3721. doi: 10.1016/j.neuropsychologia.2011.09.027.
- Garrido-Vásquez P., Schubö A. Modulation of visual attention by object affordance. Front. Psychol. 2014;5:59. doi: 10.3389/fpsyg.2014.00059.
- Gibson J.J. The theory of affordances. In: Gibson J.J., editor. The Ecological Approach to Visual Perception. Lawrence Erlbaum Associates; Hillsdale, NJ: 1979. pp. 127–143.
- Girardi G., Lindemann O., Bekkering H. Context effects on the processing of action-relevant object features. J. Exp. Psychol. Hum. Percept. Perform. 2010;36(2):330. doi: 10.1037/a0017180.
- Glover S., Rosenbaum D.A., Graham J., Dixon P. Grasping the meaning of words. Exp. Brain Res. 2004;154(1):103–108. doi: 10.1007/s00221-003-1659-2.
- Goslin J., Dixon T., Fischer M.H., Cangelosi A., Ellis R. Electrophysiological examination of embodiment in vision and action. Psychol. Sci. 2012;23(2):152–157. doi: 10.1177/0956797611429578.
- Grafton S.T., Fadiga L., Arbib M.A., Rizzolatti G. Premotor cortex activation during observation and naming of familiar tools. Neuroimage. 1997;6(4):231–236. doi: 10.1006/nimg.1997.0293.
- Grèzes J., Decety J. Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia. 2002;40(2):212–222. doi: 10.1016/s0028-3932(01)00089-6.
- Handy T.C., Grafton S.T., Shroff N.M., Ketay S., Gazzaniga M.S. Graspable objects grab attention when the potential for action is recognized. Nat. Neurosci. 2003;6(4):421–427. doi: 10.1038/nn1031.
- Handy T.C., Tipper C.M. Attentional orienting to graspable objects: what triggers the response? Neuroreport. 2007;18(9):941–944. doi: 10.1097/WNR.0b013e3281332674.
- Handy T.C., Tipper C.M., Borg J.S., Grafton S.T., Gazzaniga M.S. Motor experience with graspable objects reduces their implicit analysis in visual- and motor-related cortex. Brain Res. 2006;1097(1):156–166. doi: 10.1016/j.brainres.2006.04.059.
- Helbig H.B., Steinwender J., Graf M., Kiefer M. Action observation can prime visual object recognition. Exp. Brain Res. 2010;200(3-4):251–258. doi: 10.1007/s00221-009-1953-8.
- Hesse C., Miller L., Buckingham G. Visual information about object size and object position are retained differently in the visual brain: evidence from grasping studies. Neuropsychologia. 2016;91:531–543. doi: 10.1016/j.neuropsychologia.2016.09.016.
- Hesse C., Schenk T., Deubel H. Attention is needed for action control: further evidence from grasping. Vis. Res. 2012;71:37–43. doi: 10.1016/j.visres.2012.08.014.
- Ishihara S. Test for Colour Blindness. Kanehara Trading Inc; 1917.
- Kalénine S., Shapiro A.D., Flumini A., Borghi A.M., Buxbaum L.J. Visual context modulates potentiation of grasp types during semantic object categorization. Psychon. Bull. Rev. 2014;21(3):645–651. doi: 10.3758/s13423-013-0536-7.
- Kam J.W.Y., Dao E., Farley J., Fitzpatrick K., Smallwood J., Schooler J.W. Slow fluctuations in attentional control of sensory cortex. J. Cognit. Neurosci. 2011;23(2):460–470. doi: 10.1162/jocn.2010.21443.
- Key A.P.F., Dove G.O., Maguire M.J. Linking brainwaves to the brain: an ERP primer. Dev. Neuropsychol. 2005;27(2):183–215. doi: 10.1207/s15326942dn2702_1.
- Lindemann O., Abolafia J.M., Girardi G., Bekkering H. Getting a grip on numbers: numerical magnitude priming in object grasping. J. Exp. Psychol. Hum. Percept. Perform. 2007;33(6):1400–1409. doi: 10.1037/0096-1523.33.6.1400.
- Luck S.J., Gaspelin N. How to get statistically significant effects in any ERP experiment (and why you shouldn’t). Psychophysiology. 2017;54:146–157. doi: 10.1111/psyp.12639.
- Makris S., Hadar A.A., Yarrow K. Viewing objects and planning actions: on the potentiation of grasping behaviours by visual objects. Brain Cognit. 2011;77(2):257–264. doi: 10.1016/j.bandc.2011.08.002.
- Mangun G.R., Hillyard S.A. Modulations of sensory-evoked brain potentials indicate changes in perceptual processing during visual-spatial priming. J. Exp. Psychol. Hum. Percept. Perform. 1991;17(4):1057–1074. doi: 10.1037//0096-1523.17.4.1057.
- Matheson H., Newman A.J., Satel J., McMullen P. Handles of manipulable objects attract covert visual attention: ERP evidence. Brain Cognit. 2014;86:17–23. doi: 10.1016/j.bandc.2014.01.013.
- McNair N.A., Harris I.M. Disentangling the contributions of grasp and action representations in the recognition of manipulable objects. Exp. Brain Res. 2012;220(1):71–77. doi: 10.1007/s00221-012-3116-6.
- Namdar G., Tzelgov J., Algom D., Ganel T. Grasping numbers: evidence for automatic influence of numerical magnitude on grip aperture. Psychon. Bull. Rev. 2014;21(3):830–835. doi: 10.3758/s13423-013-0550-9.
- Natale E., Marzi C.A., Girelli M., Pavone E.F., Pollmann S. 2006. ERP and fMRI Correlates of Endogenous and Exogenous Focusing of Visual-spatial Attention.
- Novitskiy N., Ramautar J.R., Vanderperren K., De Vos M., Mennes M., Mijovic B.…Sunaert S. The BOLD correlates of the visual P1 and N1 in single-trial analysis of simultaneous EEG-fMRI recordings during a spatial detection task. Neuroimage. 2011;54(2):824–835. doi: 10.1016/j.neuroimage.2010.09.041.
- Oldfield R.C. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9(1):97–113. doi: 10.1016/0028-3932(71)90067-4.
- Oostenveld R., Fries P., Maris E., Schoffelen J.-M. FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011:156869. doi: 10.1155/2011/156869.
- Palmer S.E. The effects of contextual scenes on the identification of objects. Mem. Cognit. 1975;3(5):519–526. doi: 10.3758/BF03197524.
- Perri R.L., Berchicci M., Bianco V., Quinzi F., Spinelli D., Di Russo F. Perceptual load in decision making: the role of anterior insula and visual areas. An ERP study. Neuropsychologia. 2019;129:65–71. doi: 10.1016/j.neuropsychologia.2019.03.009.
- Proverbio A.M., Adorni R., D’Aniello G.E. 250 ms to code for action affordance during observation of manipulable objects. Neuropsychologia. 2011;49(9):2711–2717. doi: 10.1016/j.neuropsychologia.2011.05.019.
- Proverbio A.M., Azzari R., Adorni R. Is there a left hemispheric asymmetry for tool affordance processing? Neuropsychologia. 2013;51(13):2690–2701. doi: 10.1016/j.neuropsychologia.2013.09.023.
- Righi S., Orlando V., Marzi T. Attractiveness and affordance shape tools neural coding: insight from ERPs. Int. J. Psychophysiol. 2014;91(3):240–253. doi: 10.1016/j.ijpsycho.2014.01.003.
- Rowe P.J., Haenschel C., Kosilo M., Yarrow K. Objects rapidly prime the motor system when located near the dominant hand. Brain Cognit. 2017;113:102–108. doi: 10.1016/j.bandc.2016.11.005.
- Rugg M.D., Milner A.D., Lines C.R., Phalp R. Modulation of visual event-related potentials by spatial and non-spatial visual selective attention. Neuropsychologia. 1987;25(1A):85–96. doi: 10.1016/0028-3932(87)90045-5.
- Thomas B.J., Riley M.A. Remembered affordances reflect the fundamentally action-relevant, context-specific nature of visual perception. J. Exp. Psychol. Hum. Percept. Perform. 2014;40(6):2361. doi: 10.1037/xhp0000015.
- Tucker M., Ellis R. The potentiation of grasp types during visual object categorization. Vis. Cognit. 2001;8(6):769–800.
- Tucker M., Ellis R. Action priming by briefly presented objects. Acta Psychol. 2004;116(2):185–203. doi: 10.1016/j.actpsy.2004.01.004.
- Valyear K.F., Cavina-Pratesi C., Stiglick A.J., Culham J.C. Does tool-related fMRI activity within the intraparietal sulcus reflect the plan to grasp? Neuroimage. 2007;36:T94–T108. doi: 10.1016/j.neuroimage.2007.03.031.
- Vingerhoets G., Vandamme K., Vercammen A. Conceptual and physical object qualities contribute differently to motor affordances. Brain Cognit. 2009;69(3):481–489. doi: 10.1016/j.bandc.2008.10.003.
- Wykowska A., Schubö A. Action intentions modulate allocation of visual attention: electrophysiological evidence. Front. Psychol. 2012;3:379. doi: 10.3389/fpsyg.2012.00379.
- Zhang L., Zhang L., Mou X., Zhang D. FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011;20(8):2378–2386. doi: 10.1109/TIP.2011.2109730.