Scientific Reports. 2022 Jun 5;12:9042. doi: 10.1038/s41598-022-12174-9

The role of the anterior temporal cortex in action: evidence from fMRI multivariate searchlight analysis during real object grasping

Ethan Knights 1, Fraser W Smith 1, Stéphanie Rossit 1
PMCID: PMC9167815  PMID: 35662252

Abstract

Intelligent manipulation of handheld tools marks a major discontinuity between humans and our closest ancestors. Here we identified neural representations of how tools are typically manipulated within left anterior temporal cortex, by shifting a searchlight classifier through whole-brain real action fMRI data collected while participants grasped 3D-printed tools in ways considered typical for use (i.e., by their handle). These neural representations were automatically evoked, as task performance did not require semantic processing. In fact, findings from a behavioural motion-capture experiment confirmed that actions with tools (relative to non-tools) incurred additional processing costs, as would be expected if semantic areas are automatically engaged. These results substantiate theories of semantic cognition that claim the anterior temporal cortex combines sensorimotor and semantic content to support advanced behaviours like tool manipulation.

Subject terms: Neuroscience, Cognitive neuroscience, Motor control, Sensorimotor processing, Sensory processing, Visual system

Introduction

The human ability to use tools (like using a knife for cutting) symbolises a great step in our evolutionary lineage1, but the brain mechanisms underpinning this behaviour remain debated. Over the past decades, theoretical models2–4 and neuroimaging evidence have converged to propose that intelligent tool-use is the result of functionally interacting neural systems (for recent summaries see5,6). One such neural system is a posterior parietal sensorimotor circuit that cognitive embodiment theories propose performs conceptual processing of objects during sensing and handling7–9. The classic dual visual stream theory10 further incorporates ventrally located visual brain areas (e.g., Lateral Occipital Temporal Cortex) for perceiving tool properties (e.g., visual form, shape11). Additional dual stream models describe the Inferior Parietal Lobule (IPL) and posterior Middle Temporal Gyrus (MTG) as neural sites that integrate information from sensorimotor and perceptual brain regions into a visuo-kinesthetic format relevant for tool manipulation2,4,12. Most recently, focus has shifted toward the role of the anterior temporal cortex in tool-use (e.g.,13), based on claims from semantic models that this area constitutes an amodal hub which weaves together abstract conceptual representations14,15.

Each of these ‘tool-use’ brain regions was first identified by seminal picture-viewing neuroimaging studies (e.g.,16). The involvement of posterior/inferior parietal and lateral occipital cortices in coding tool-related information, initially suggested by picture-viewing studies, has since been replicated by a small number of functional MRI (fMRI) experiments involving real tool manipulation17–21 (see Valyear et al.22 for a review). The anterior temporal cortex, however, has yet to be identified with real action tasks in which participants are asked to manipulate tools with their hands. This is at odds with traditional neuropsychological evidence showing that anterior temporal lobe degeneration in semantic dementia patients causes the loss of conceptual knowledge about everyday objects, despite retained shape processing and praxis23,24. In fact, converging neuroimaging evidence shows that the anterior temporal cortex represents conceptual information about tools, like the usual locations or functions associated with a tool, but these findings are restricted to high-level cognitive tasks thought to rely on mechanisms distinct from real hand-tool manipulation25,26, such as picture recognition, language or pantomime22–31,90.

To test which regions across the whole brain are sensitive to learned tool-use knowledge, we applied whole-brain searchlights to an fMRI dataset in which participants performed real hand actions with 3D-printed tools. Participants grasped the 3D tools in ways considered typical for use (e.g., grasping a spoon by the handle) or not (by the tool-head; Fig. 2A; re-analysis of Knights et al.20). To ensure that decoding differences between typical vs. atypical actions were independent of kinematic differences, we also included biomechanically matched actions with control non-tools (grasping right vs. left). In a control behavioural experiment, we then tested whether these tool and non-tool actions were appropriately matched for biomechanics by recording hand kinematics with high-resolution motion-capture during the same paradigm outside the MR environment (Fig. 1B).

Figure 2.


(A) Whole-brain searchlight classification. For each participant, brain activation patterns were extracted from a mask (single blue cube) that was shifted through the entire fMRI volume. Decoding accuracy was measured with independent linear pattern classifiers for tool (top row) and non-tool actions (bottom row), trained to map between brain-activity patterns and the type of grasp being performed with the tools (typical vs. atypical) or non-tools (right vs. left). Typicality difference maps were produced by subtracting the decoding accuracy maps for tools and non-tools, as well as chance-level accuracy (50%). (B) Searchlight results. The group typicality difference map revealed clusters in the left anterior temporal cortex, as well as right medial parietal and fusiform areas, where decoding accuracies were significantly higher for actions with tools (typical vs. atypical grasps) than non-tools (biomechanically equivalent right vs. left grasps). Acronyms: ATC: Anterior Temporal Cortex; FG: Fusiform Gyrus; SPOC: Superior Parieto-Occipital Cortex.

Figure 1.


(A) fMRI Experiment. Participants lay under a custom-built MR-compatible turntable on which 3D-printed tool and non-tool stimuli were presented within reaching distance in a block design. (B) Motion-Capture Experiment. As a behavioural control experiment, participants performed the same paradigm in a motion-capture laboratory so that kinematics could be measured with infrared-reflective (IRED) markers affixed to the hand. (A and B) During the experiments, the rooms were completely dark and objects were visible only when illuminated; all actions were performed with the right hand only and participants were naïve to the study goals (i.e., they were asked to grasp the right or left side of objects, with no mention that we were investigating tools or a typicality manipulation).

Results

Real action fMRI experiment

Whole-brain searchlight Multivoxel Pattern Analysis (MVPA) (Fig. 2A)32,33 was used to identify the brain regions that represented how to appropriately grasp tools for use (i.e., by handle rather than tool-head). Specifically, a stringent typicality difference map (Fig. 2) was generated using a searchlight subtraction analysis that controlled for low-level hand kinematics: the multivariate decoding map of right versus left grasps of control non-tools was subtracted from the decoding map of typical (right) versus atypical (left) grasps of tools (see Methods). This difference map thus reveals which brain areas contain information about how to grasp tools correctly for subsequent use, independently of low-level differences between right versus leftward grasping movements.
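For concreteness, the logic of this subtraction can be sketched in a few lines of Python (a minimal illustration, not the authors' BrainVoyager/Matlab pipeline; the array names and shapes are hypothetical):

```python
import numpy as np

def typicality_difference_map(tool_acc, nontool_acc):
    """Voxelwise tool (typical vs. atypical) minus non-tool (right vs.
    left) decoding accuracy for one participant. Chance level (50%)
    is common to both maps, so it cancels in the subtraction."""
    return tool_acc - nontool_acc

# Hypothetical usage with placeholder 3D accuracy maps.
tool_acc = np.random.rand(40, 48, 40)
nontool_acc = np.random.rand(40, 48, 40)
diff_map = typicality_difference_map(tool_acc, nontool_acc)
```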

As presented in Fig. 2, significantly higher decoding accuracy for tools than non-tools was observed in a large cluster (see Table 1 for cluster sizes) comprising an anterior portion of the left Superior and Middle Temporal Gyri (STG; MTG) that extended into the Parahippocampal Gyrus (PHG). Other clusters surviving correction for multiple comparisons included those within the right Fusiform Gyrus (FG) and anterior Superior Parieto-Occipital Cortex (aSPOC). No cluster showed the reverse effect, that is, higher decoding accuracy for non-tools than tools.

Table 1.

Searchlight result cluster sizes (voxels), peak coordinates (Talairach) and statistics. Dashes mark sub-peaks within the L-MTG cluster.

Region    Cluster size    X      Y      Z      t-statistic    p
L-MTG     1674            −39    −16    −11    5.6            < .001
L-STG     —               −45    −7     −5     5.0            < .001
L-PHG     —               −27    −19    −23    4.8            < .001
R-FG      1410            30     −73    −5     4.8            < .001
R-SPOC    278             15     −67    31     4.64           < .001

Behavioural motion-capture experiment

To better understand action processing speed for tools vs. non-tools, we measured hand kinematics with high-resolution motion-capture while participants performed the same task outside the MRI. As presented in Fig. 3, analyses of reaction time (RT) and movement time (MT) both revealed a significant main effect of object category (RT: F(1,21) = 15, p = 0.001, ηp2 = 0.42; MT: F(1,21) = 5.74, p = 0.026, ηp2 = 0.22), with slower grasping for tools than non-tools (RT mean difference [standard error] = 9.7 ms [2.5 ms]; MT mean difference = 6 ms [2.5 ms]). Overall effects of reach direction (i.e., slower across-body reaches) were also observed: leftward (relative to rightward) actions had longer MTs (F(1,21) = 8.9, p = 0.007, ηp2 = 0.3) and decreased peak velocity (PV) (F(1,21) = 11.48, p = 0.003, ηp2 = 0.35) (MT mean difference = 14.8 ms [5 ms]; PV mean difference = 34.4 mm/s [10.2 mm/s]) (Fig. S1). No other significant main effects, nor any interaction between object category and typicality, were found (all p’s > 0.15). Importantly, this lack of interaction indicates that timing did not differ specifically for grasping tools typically vs. atypically when compared to the matched movements with control non-tools.

Figure 3.


Behavioural Results. Hand kinematics differed between object categories: participants’ RTs and MTs were slower when grasping tools relative to non-tools. Error bars represent the standard error of the mean.

Discussion

Our real action searchlight analysis presents the first fMRI evidence that left anterior temporal cortex is sensitive to action information about tool movements during real 3D object manipulation (Fig. 2B). These results are in line with recent tool-use models (e.g.,13) that incorporate claims from semantic cognition theories about the role of anterior temporal cortex in constructing abstract object representations14,15,34. According to these leading models, the anterior temporal cortex processes conceptual knowledge that is feature invariant (i.e., generalises across exemplar identities), like the typical way tools are handled for use (i.e., grasping a tool by its handle), as demonstrated here.

Anatomically, the neural peaks reported here lie near anterior temporal lobe clusters known to code semantics during tool pantomimes27 or tool manufacturing35. Their location is more posterior than that reported during object knowledge association tasks (e.g.,31), although standardised neuropsychological tests of associative knowledge have yielded effects at comparable locations (e.g.,36). Implementing specialised distortion correction37 or an increased field of view38 will be useful for future fMRI studies to address whether areas further along the temporal pole also code information about tool manipulation. As for the left lateralisation of this effect, it resembles a popular model of the left-hemisphere tool-processing network3 and is in line with the fact that all movements in our study were performed with the right hand. Moreover, left-lateralised anterior temporal responses have been reported for semantic language processing38 which, when considered alongside our results, resonates with the prevalent view across philosophy39, and more recently neuroscience (e.g.,40,41), that language and motor skills are tightly linked.

Remarkably, this tool-related semantic content was detectable even though task performance was independent of tool conceptual processing. That is, unlike prior tasks that asked participants to explicitly attend to different tool associations (e.g., pantomiming actions related to scissors vs. pliers27 or recalling whether a tool is typically found in the kitchen vs. garage31), our participants were simply instructed to grasp the ‘left’ or ‘right’ side of the stimuli and, throughout all aspects of experimentation (see Methods), the stimuli were purposefully referred to as ‘objects’ (rather than ‘tools’). Since participants were not required to form intentions of using these tools, or even to process their identities, our results demonstrate that tool representations are automatically triggered. Like similar findings (e.g.,17,42,43), this automaticity supports influential affordance theories2,44–46 which predict that merely viewing objects potentiates action. Our results provide evidence of this phenomenon in humans at a fine spatial resolution (e.g., compared to the Bereitschaftspotential47) and during realistic object manipulation (i.e., tool-use movements that are directly viewed without the use of mirrors).

Representations about actions with tools also extended into the fusiform and medial parieto-occipital cortex (Fig. 2B), consistent with previous views that these areas code semantics, either by showing crossmodal responses (e.g., to reading tool words and viewing tool pictures48–50) or by representing learnt object associations51. In fact, our results are in line with the hub-and-spoke theory52, suggesting that these two domain-general systems (e.g., for perception or action10) may act as spokes to a left anterior temporal cortex ‘hub’ when learnt tool movements are automatically processed. Indeed, the fusiform cortex is well known for processing perceptual information about object form (e.g.,53), and SPOC, along with corresponding regions across the medial wall (V6Av, V6Ad), is known to be involved in planning hand actions (e.g., pointing, reaching and grasping) in both the macaque54 and human brain (e.g.,55–58). Both the fusiform and SPOC have previously been shown to be sensitive to prior experience, such as when processing typical action routines59,60 or object functions61. Alternatively, these regions could be implicated in networks supporting inference about object properties and their relationship to the laws of physics (e.g.,4,62,63), though this account does not necessarily preclude a role of the anterior temporal cortex in the semantic aspects of tool-use.

Consistent with the neural differences observed by contrasting actions with tool and non-tool objects (Fig. 2B), our behavioural motion-capture results similarly demonstrated slower overall responses for grasping tools than non-tools (Fig. 3). From an experimental perspective, the finding of a general object category main effect independent of reach direction indicates that the biomechanics for actions involving the handle and head of the tools were appropriately matched. In other words, basic kinematic differences between actions cannot simply explain the tool-specific decoding. Considered theoretically, the faster non-tool responses are consistent with many accounts describing how tool-related actions are achieved via psychological (e.g.,64–66) and neural (e.g.,2,10,11,67–70) mechanisms that are distinct from those used for basic motor control. Similar slowing for tools has been observed in simple button-press RT experiments comparing pictures of tools with simple shapes71 or other object categories (e.g., natural objects72). As with our findings, these simple RT effects are thought to be caused by interference from the additional processing of competing (yet task-irrelevant) functional associations that are automatically triggered by viewing tools (e.g.,73,74).

By virtue of the grasping paradigm used here, our results cannot capture which brain regions represent real tool-use (like scooping with a spoon). Our grasping paradigm ensured that the biomechanical properties of the movements were tightly controlled across conditions (e.g., by specifying grip points), but ongoing work in our laboratory is extending these paradigms to real tool-use with more variable degrees of freedom. Further, functional connectivity approaches utilising Dynamic Causal Modelling (DCM; e.g.,75) would be well suited to deepening our understanding of the relationship between the anterior temporal cortex and other systems proposed to support tool-use. For example, DCM could be used to determine whether, as predicted by hub-and-spoke theory15, left anterior temporal cortex influences ventral visual stream activity in a bidirectional manner. As a final consideration, we expect that the lack of decoding effects in LOTC and IPS (i.e., regions that we identified as sensitive to typical tool grasping in a previous analysis of this dataset) is due to our searchlight approach. Searchlight decoding relies on the assumption that information is contained within local clusters of voxels at a specific resolution (i.e., that of the sphere or cube76), and this resolution is reduced by averaging maps across participants, particularly for regions that are anatomically variable (like the IPS). In Knights et al.20 we instead used a Region of Interest (ROI) approach to identify the LOTC and IPS at the subject level, and used a functional localiser to select functionally relevant voxels (e.g., hand-selective), which likely boosts decoding sensitivity. In fact, examining an uncorrected version of the typicality difference map does reveal a cluster in the left IPS consistent with20 (see Fig. S2).

Altogether, neural representations were detected for the first time in anterior temporal areas that leading theories of semantic cognition claim build rich amodal representations of objects and their uses. By observing the automaticity of these task-irrelevant effects across both behaviour and the brain, our results begin to uncover which, as well as how, specific brain regions have evolved to support efficient tool-use, a defining feature of our species.

Methods

fMRI

Participants

Nineteen healthy participants (10 male; mean age = 23 ± 4.2 years; age range 18–34 years; described in Knights et al.20) performed the fMRI real action experiment, each providing written consent in line with procedures approved by the School of Psychology Ethics Committee at the University of East Anglia.

Ethics

The research was carried out according to the Declaration of Helsinki and approved by the Ethics Committee of the School of Psychology at the University of East Anglia. All participants gave informed consent prior to participation.

Apparatus and stimuli

The 3D-printed kitchen tool and biomechanically matched non-tool bar objects were adapted from Brandi et al.19 (Fig. 1A). As in Knights et al.20, the dimensions of each non-tool were matched to one of the tools, such that variability was minimized and kinematic requirements were as similar as possible between different grasps (i.e., left vs. right and small vs. large), including controlling for low-level shape features that can confound tool-effects, like elongation77.

An MR-compatible turntable apparatus was custom-built for presenting the 3D objects within reachable space (Fig. 1; also see20). Specifically, objects were placed on the turntable, which was located above the participant’s pelvis, and were only visible when illuminated by a bright white light-emitting diode (LED). To control for eye movements, participants were instructed to fixate a small red LED positioned above the objects. Right eye and arm movements were monitored online and recorded using two MR-compatible infrared-sensitive cameras (MRC Systems; Fig. 1A) to verify that participants maintained fixation and performed the correct grasping movements. Participants lay in the scanner with the head tilted (~20°) to allow direct viewing of the workspace and 3D stimuli without the use of mirrors. An upper-arm restraint and industry-standard cushioning were used to minimise motion artefacts by ensuring that movements were performed by flexion around the elbow only. Auditory instructions were delivered via earphones (Sensimetrics MRI-Compatible Insert Earphones, Model S14).

Experimental design

A block-design fMRI paradigm20 maximised the contrast-to-noise ratio to generate reliable estimates of average voxel response patterns, while also improving the detection of blood oxygenation level-dependent (BOLD) signal changes without significant interference from artefacts during overt movement78. Briefly, a block began with an auditory instruction (‘Left’ or ‘Right’) and, during the 10 s ON-block in which the object was briefly illuminated, participants grasped the object using a right-handed precision grip (i.e., index finger and thumb) along the vertical axis. Throughout experimentation (i.e., consent materials, training, instructions) the stimuli were referred to as ‘objects’, such that participants were naïve to the study’s purpose of examining typical versus atypical tool actions.

Acquisition

The BOLD fMRI measurements were acquired using a 3 T wide-bore GE-750 Discovery MR scanner. To achieve a good signal-to-noise ratio during the real action fMRI experiment, the posterior half of a 21-channel receive-only coil was tilted and a 16-channel receive-only flex coil was suspended over the anterior-superior part of the skull (see Fig. 1A). A T1-weighted (T1w) anatomical image was acquired with a BRAVO sequence, followed by T2*-weighted single-shot gradient Echo-Planar Imaging (EPI) sequences for each block of the real action experiment, using standard parameters for whole-brain coverage (see20).

Data preprocessing

Preprocessing of the raw functional datasets and ROI definitions were performed using BrainVoyager QX [version 2.8.2] (Brain Innovation, Maastricht, The Netherlands). Anatomical data were transformed to Talairach space and fMRI time series were pre-processed using standard parameters (no smoothing) before being coaligned to an anatomical dataset (see20). For each run independently, the time series were subjected to a general linear model with one predictor per block of interest (6 tool and 6 non-tool blocks per run), so as to estimate single-block activity patterns for searchlight MVPA. A small number of runs with movement or eye errors were removed from further analysis (see20).
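The block-wise beta estimation can be illustrated with an ordinary-least-squares sketch (the study used BrainVoyager's GLM; the design-matrix construction and variable names here are assumptions):

```python
import numpy as np

def estimate_block_betas(timeseries, design):
    """Estimate one beta per block via ordinary least squares.

    timeseries : (n_timepoints, n_voxels) BOLD data for one run.
    design     : (n_timepoints, n_blocks) HRF-convolved predictors,
                 one column per block (e.g., 6 tool + 6 non-tool).
    Returns    : (n_blocks, n_voxels) beta estimates for MVPA.
    """
    X = np.column_stack([design, np.ones(len(design))])  # add intercept
    betas, *_ = np.linalg.lstsq(X, timeseries, rcond=None)
    return betas[:design.shape[1]]  # drop the intercept row
```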

Searchlight pattern classification

Searchlight MVPA32 was performed independently per participant for tool and non-tool trial types, using separate linear pattern classifiers (linear support vector machines) trained to learn the mapping between a set of brain-activity patterns (β values computed from single blocks of activity) and the type of grasp being performed with the tools (typical vs. atypical) or non-tools (right vs. left). A cube mask (5 × 5 × 5 voxels, i.e., 125 voxels) was shifted through the entire brain volume, applying the classification procedure at each centre voxel33 to measure how accurately the local activity patterns discriminated between the different tool, or non-tool, actions.
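A minimal sketch of this cube-shifting step, assuming single-block betas stored as a 4D NumPy array and a boolean brain mask whose centre voxels lie at least two voxels from the volume border (the original analysis used in-house Matlab code with the SearchMight toolbox):

```python
import numpy as np

def iter_searchlight_cubes(betas, brain_mask, half=2):
    """Yield (centre, patterns) for each searchlight position.

    betas      : (n_blocks, x, y, z) single-block beta maps.
    brain_mask : boolean (x, y, z) array of valid centre voxels.
    half=2 gives a 5 x 5 x 5 cube, i.e., 125 voxels per pattern.
    """
    xs, ys, zs = np.nonzero(brain_mask)
    for x, y, z in zip(xs, ys, zs):
        cube = betas[:, x - half:x + half + 1,
                        y - half:y + half + 1,
                        z - half:z + half + 1]
        yield (x, y, z), cube.reshape(len(betas), -1)
```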

To test the performance of our classifiers, decoding accuracy was assessed using an n-fold leave-one-run-out cross-validation procedure: models were built from n − 1 runs and tested on the independent nth run, repeated for the n possible partitions of runs in this scheme33,79,80, before averaging across the n iterations to produce a representative decoding accuracy measure per participant and per voxel. The searchlight analysis space was restricted to a common group mask within Talairach space, defined by voxels with a mean BOLD signal > 100 for every participant’s fMRI runs, to ensure that all voxels included in searchlight MVPA contained suitable activation. Beta estimates for each voxel were normalized (separately for training and test data) within a range of −1 to 1 before input to the SVM81, and the linear SVM algorithm was implemented using the default parameters provided in the LibSVM toolbox (C = 1). Pattern classification was performed with a combination of in-house scripts (Smith and Muckli, 2010;33) implemented in Matlab using the SearchMight toolbox82.
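For one cube, the cross-validation and normalisation steps might look as follows (a scikit-learn sketch standing in for the LibSVM/Matlab implementation; inputs are NumPy arrays and the function names are hypothetical):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

def scale_minus1_to_1(A):
    """Rescale each voxel (column) to the range [-1, 1]."""
    lo, hi = A.min(axis=0), A.max(axis=0)
    rng = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    return 2 * (A - lo) / rng - 1

def cube_decoding_accuracy(X, labels, runs):
    """Leave-one-run-out decoding accuracy for one searchlight cube.

    X : (n_blocks, 125) patterns; labels : grasp type per block;
    runs : run index per block, defining the cross-validation folds.
    """
    accs = []
    for train, test in LeaveOneGroupOut().split(X, labels, groups=runs):
        # Normalise training and test partitions separately, as
        # described above, then fit a linear SVM with C = 1.
        clf = SVC(kernel="linear", C=1)
        clf.fit(scale_minus1_to_1(X[train]), labels[train])
        accs.append(clf.score(scale_minus1_to_1(X[test]), labels[test]))
    return float(np.mean(accs))
```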

Statistical analysis

Voxel accuracies from searchlight MVPA for each participant were converted to unsmoothed statistical maps. To assess where in the brain information about typicality was coded, we used a paired-samples t-test approach: non-tool accuracy maps were subtracted from tool accuracy maps, producing single-participant typicality difference maps (i.e., tool > non-tool), which were tested at the group level for whether the difference in decoding accuracies for tools versus non-tools was greater than zero at each voxel. The BrainVoyager cluster-level statistical threshold estimator83,84 was used for cluster correction (the voxelwise threshold was set to p = 0.01 and the cluster-wise threshold to p < 0.05 using a Monte Carlo simulation of 1000 iterations), before projecting results onto a standard surface85.
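The voxelwise group test can be sketched as a one-sample t-test on the stacked difference maps (cluster-level correction, performed in BrainVoyager in the study, is omitted from this sketch):

```python
import numpy as np
from scipy.stats import ttest_1samp

def group_typicality_tmap(tool_maps, nontool_maps):
    """Test, per voxel, whether tool decoding exceeds non-tool decoding.

    tool_maps, nontool_maps : (n_subjects, x, y, z) accuracy maps.
    Returns voxelwise t- and one-tailed p-maps (difference > 0).
    """
    diff = tool_maps - nontool_maps  # per-subject difference maps
    t, p = ttest_1samp(diff, popmean=0.0, axis=0, alternative="greater")
    return t, p
```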

Behavioural control experiment

Participants

Twenty-two right-handed (Edinburgh Handedness Questionnaire86) healthy volunteers completed the motion-capture experiment (6 males; 19–29 years of age; mean age = 22.3, SD = 2.4). Ten participants had completed the previous fMRI experiment. All had normal or corrected-to-normal vision and no history of motor, psychiatric or neurological disorders.

Ethics

The research was performed in line with the Declaration of Helsinki and approved by the School of Psychology ethics committee at the University of East Anglia. All participants gave informed consent prior to taking part.

Apparatus and stimuli

Stimuli were the same 3D-printed objects used in the fMRI experiment. A Qualisys Oqus system (Qualisys AB, Gothenburg, Sweden), sampling at 179 Hz, measured the position of small passive markers affixed to the participants’ right wrist and the nails of the right index finger and thumb (Fig. 1B). The MR-compatible turntable apparatus was set up in the motion-capture laboratory identically to the fMRI experiment. This included using the same distance between the resting hand and object centre (43 cm) and the same centrally aligned red fixation LED (subtending a mean visual angle of ~20° from the centre of the stimuli), as well as requiring a comparable head tilt (~20°). There were two minor differences between the MR and motion-capture environments: first, for motion-capture there was no arm strap or eye-monitoring cameras (though participants completed the same pre-experiment training and received verbal reminders between experimental blocks to maintain fixation and to minimise upper-arm movements); second, noise-cancelling headphones (Bose Corporation, USA) were used to ensure that the sound of stimulus placement did not provide cues about an upcoming trial.

Experimental design

Experimental designs were almost identical across the fMRI and behavioural control experiments. The first difference was that elements needed only for modelling the haemodynamic response during fMRI (baseline periods between trials) were omitted in this behavioural experiment. Second, an additional block was collected to offset the risk of trials being excluded due to marker occlusion. On average, participants completed seven runs (minimum six, maximum seven), totalling 84 experimental trials and 21 repetitions per condition per participant.

Data preprocessing

Kinematic data were obtained by localising the x, y and z positions of the markers attached to the index finger, thumb and wrist of the participants’ right hand (Fig. 1B). These 3D positions were filtered using a low-pass Butterworth filter (10 Hz cut-off, 2nd order). The wrist marker position determined movement onset and offset (velocity-based criterion = 50 mm/s) and, in cases where this value was never exceeded, the local minimum of the velocity trace was used as the offset of the outward reach87.
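A sketch of this filtering and thresholding step in Python (SciPy standing in for the original analysis code; the fallback to the local velocity minimum for missing offsets is not implemented here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 179.0  # motion-capture sampling rate (Hz)

def movement_onset_offset(wrist_xyz, threshold=50.0):
    """Movement on-/offset samples from wrist positions (n_samples, 3), mm.

    Positions are low-pass filtered (2nd-order Butterworth, 10 Hz
    cut-off; filtfilt gives zero phase lag), converted to speed, and
    thresholded at 50 mm/s.
    """
    b, a = butter(2, 10.0 / (FS / 2))           # 10 Hz low-pass filter
    smooth = filtfilt(b, a, wrist_xyz, axis=0)  # zero-phase filtering
    speed = np.linalg.norm(np.diff(smooth, axis=0), axis=1) * FS
    moving = np.nonzero(speed > threshold)[0]
    return moving[0], moving[-1]                # onset, offset samples
```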

Trial-level reach kinematic dependent variables (Reaction Time, RT; Movement Time, MT; Peak Velocity, PV; time to Peak Velocity, tPV) were computed for the five grasping repetitions and subsequently collapsed. The grand mean per participant for each of the four conditions was retained after removing problematic trials (2.62%) for the following reasons: marker occlusion (2.09%), incorrect object presentation (0.04%) and participant responses that were extremely slow (0.11%; i.e., > 1000 ms) or in the wrong direction (0.38%).
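Given the onset/offset samples above, the four dependent variables might be derived per trial as follows (a sketch; `go_sample`, marking stimulus illumination, is an assumed input):

```python
import numpy as np

def reach_dvs(speed, onset, offset, go_sample, fs=179.0):
    """Compute RT, MT, PV and tPV for one trial.

    speed : per-sample wrist speed (mm/s); onset/offset : movement
    samples; go_sample : sample index of the go cue. Times in ms.
    """
    to_ms = 1000.0 / fs
    rt = (onset - go_sample) * to_ms              # reaction time
    mt = (offset - onset) * to_ms                 # movement time
    pv = speed[onset:offset].max()                # peak velocity (mm/s)
    tpv = np.argmax(speed[onset:offset]) * to_ms  # time to PV from onset
    return rt, mt, pv, tpv
```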

Statistical analysis

Repeated measures ANOVAs were used to compare behavioural performance across conditions in a 2 (object category: tools vs. non-tools) × 2 (typicality: typical vs. atypical) factorial design.
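In Python, an equivalent analysis could use statsmodels' AnovaRM on a long-format table of condition means (illustrative fake data; the analysis software used in the study is not specified in this section):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one mean RT per participant and
# condition (22 participants x 2 categories x 2 typicality levels).
rng = np.random.default_rng(0)
rows = []
for subj in range(1, 23):
    for category in ("tool", "nontool"):
        for typicality in ("typical", "atypical"):
            rt = 400 + (10 if category == "tool" else 0) + rng.normal(0, 15)
            rows.append({"subject": subj, "category": category,
                         "typicality": typicality, "rt": rt})
df = pd.DataFrame(rows)

# 2 (object category) x 2 (typicality) repeated-measures ANOVA on RT;
# the same call would be repeated for MT, PV and tPV.
print(AnovaRM(df, depvar="rt", subject="subject",
              within=["category", "typicality"]).fit())
```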


Acknowledgements

We thank Courtney Mansfield, Diana Tonin, Janak Saada, Jenna Green, Richard Greenwood, Holly Weaver, Iwona Szymura and Emmeline Mottram for support in data collection and Derek Quinlan for building the real action set-up. This work was funded by grant (184/14) from the BIAL Foundation awarded to S. Rossit & F.W. Smith.

Author contributions

S.R., E.K. and F.W.S. conceptualised the study; E.K. and S.R. collected data; E.K., F.W.S. and S.R. analysed data; E.K., F.W.S. and S.R. wrote the manuscript; S.R. and F.W.S. acquired funding.

Data availability

The full raw fMRI dataset is accessible from OpenNeuro (https://openneuro.org/datasets/ds003342/versions/1.0.0). The motion-capture datasets are accessible from the Open Science Framework (https://osf.io/uy3qa/).

Code availability

Computer code for running the experiments and analysing the fMRI (https://osf.io/zxnpv) and behavioural datasets is accessible from the Open Science Framework (https://osf.io/uy3qa/).

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

The online version contains supplementary material available at 10.1038/s41598-022-12174-9.

References

  • 1.Ambrose SH. Paleolithic technology and human evolution. Science. 2001;291(5509):1748–1753. doi: 10.1126/science.1059487. [DOI] [PubMed] [Google Scholar]
  • 2.Buxbaum, L. J. Learning, remembering, and predicting how to use tools: Distributed neurocognitive mechanisms: Comment on Osiurak and Badets. Psychol. Rev.124, 346–360 (2016). [DOI] [PMC free article] [PubMed]
  • 3.Lewis JW. Cortical networks related to human use of tools. Neuroscientist. 2006;12(3):211–231. doi: 10.1177/1073858406288327. [DOI] [PubMed] [Google Scholar]
  • 4.Osiurak F, Badets A. Tool use and affordance: Manipulation-based versus reasoning-based approaches. Psychol. Rev. 2016;123(5):534. doi: 10.1037/rev0000027. [DOI] [PubMed] [Google Scholar]
  • 5.Garcea FE, Mahon BZ. Parcellation of left parietal tool representations by functional connectivity. Neuropsychologia. 2014;60:131–143. doi: 10.1016/j.neuropsychologia.2014.05.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Reynaud E, Lesourd M, Navarro J, Osiurak F. On the neurocognitive origins of human tool use: A critical review of neuroimaging data. Neurosci. Biobehav. Rev. 2016;64:421–437. doi: 10.1016/j.neubiorev.2016.03.009. [DOI] [PubMed] [Google Scholar]
  • 7.Allport, D. A. Distributed memory, modular subsystems and dysphasia. In Newman, S. K., Epstein, R. (Eds.), Current perspectives in dysphasia (pp. 32–60). New York, NY: Churchill Livingstone (1985).
  • 8.Mahon BZ, Caramazza A. A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. J. Physiol. Paris. 2008;102(1–3):59–70. doi: 10.1016/j.jphysparis.2008.03.004. [DOI] [PubMed] [Google Scholar]
  • 9.Martin A. GRAPES—Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain. Psychon. Bull. Rev. 2016;23(4):979–990. doi: 10.3758/s13423-015-0842-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Milner AD, Goodale MA. The Visual Brain in Action. 2. Oxford: Oxford University Press; 1995. [Google Scholar]
  • 11.Lingnau, A., & Downing, P. E. The lateral occipitotemporal cortex in action. Trends Cogn. Sci., 19(5), 268–277 (2015). [DOI] [PubMed]
  • 12.Rizzolatti, G., & Matelli, M. Two different streams form the dorsal visual system: anatomy and functions. Experimental brain research, 153(2), 146–157 (2003). [DOI] [PubMed]
  • 13.Lesourd M, Servant M, Baumard J, Reynaud E, Ecochard C, Medjaoui FT, Osiurak F. Semantic and action tool knowledge in the brain: Identifying common and distinct networks. Neuropsychologia. 2021;159:107918. doi: 10.1016/j.neuropsychologia.2021.107918. [DOI] [PubMed] [Google Scholar]
  • 14.Jefferies E, Thompson H, Cornelissen P, Smallwood J. The neurocognitive basis of knowledge about object identity and events: Dissociations reflect opposing effects of semantic coherence and control. Philos. Trans. R. Soc. B. 2020;375(1791):20190300. doi: 10.1098/rstb.2019.0300. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Lambon Ralph MA, Jefferies E, Patterson K, Rogers TT. The neural and computational bases of semantic cognition. Nat. Rev. Neurosci. 2017;18(1):42–55. doi: 10.1038/nrn.2016.150. [DOI] [PubMed] [Google Scholar]
  • 16.Chao LL, Haxby JV, Martin A. Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nat. Neurosci. 1999;2(10):913–919. doi: 10.1038/13217. [DOI] [PubMed] [Google Scholar]
  • 17.Valyear KF, Gallivan JP, McLean DA, Culham JC. fMRI repetition suppression for familiar but not arbitrary actions with tools. J. Neurosci. 2012;32(12):4247–4259. doi: 10.1523/JNEUROSCI.5270-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Gallivan, J. P., McLean, D. A., Valyear, K. F., & Culham, J. C. Decoding the neural mechanisms of human tool use. elife, 2, e00425 (2013). [DOI] [PMC free article] [PubMed]
  • 19.Brandi ML, Wohlschläger A, Sorg C, Hermsdörfer J. The neural correlates of planning and executing actual tool use. J. Neurosci. 2014;34(39):13183–13194. doi: 10.1523/JNEUROSCI.0597-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Knights E, Mansfield C, Tonin D, Saada J, Smith FW, Rossit S. Hand-selective visual regions represent how to grasp 3D tools: Brain decoding during real actions. J. Neurosci. 2021;41(24):5263–5273. doi: 10.1523/JNEUROSCI.0083-21.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Styrkowiec PP, Nowik AM, Króliczak G. The neural underpinnings of haptically guided functional grasping of tools: An fMRI study. Neuroimage. 2019;194:149–162. doi: 10.1016/j.neuroimage.2019.03.043. [DOI] [PubMed] [Google Scholar]
  • 22.Valyear K. Evolution of Nervous Systems. 2. New York: Academic Press; 2016. The neuroscience of human tool use; pp. 341–353. [Google Scholar]
  • 23.Hodges JR, Patterson K, Oxbury S, Funnell E. Semantic dementia: Progressive fluent aphasia with temporal lobe atrophy. Brain. 1992;115(6):1783–1806. doi: 10.1093/brain/115.6.1783. [DOI] [PubMed] [Google Scholar]
  • 24.Mummery CJ, Patterson K, Price CJ, Ashburner J, Frackowiak RS, Hodges JR. A voxel-based morphometry study of semantic dementia: Relationship between temporal lobe atrophy and semantic memory. Ann. Neurol. 2000;47(1):36–45. doi: 10.1002/1531-8249(200001)47:1<36::AID-ANA8>3.0.CO;2-L. [DOI] [PubMed] [Google Scholar]
  • 25.Goldenberg G. Facets of pantomime. J. Int. Neuropsychol. Soc. 2017;23(2):121–127. doi: 10.1017/S1355617716000989. [DOI] [PubMed] [Google Scholar]
  • 26.Snow JC, Culham JC. The treachery of images: How realism influences brain and behavior. Trends Cogn. Sci. 2021;25(6):506–519. doi: 10.1016/j.tics.2021.02.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Chen Q, Garcea FE, Mahon BZ. The representation of object-directed action and function knowledge in the human brain. Cereb. Cortex. 2016;26(4):1609–1618. doi: 10.1093/cercor/bhu328. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Ishibashi R, Lambon Ralph MA, Saito S, Pobric G. Different roles of lateral anterior temporal lobe and inferior parietal lobule in coding function and manipulation tool knowledge: Evidence from an rTMS study. Neuropsychologia. 2011;49(5):1128–1135. doi: 10.1016/j.neuropsychologia.2011.01.004. [DOI] [PubMed] [Google Scholar]
  • 29.Ishibashi R, Mima T, Fukuyama H, Pobric G. Facilitation of function and manipulation knowledge of tools using transcranial direct current stimulation (tDCS) Front. Integr. Neurosci. 2018;11:37. doi: 10.3389/fnint.2017.00037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Marstaller L, Fynes-Clinton S, Burianová H, Reutens DC. Evidence for a functional specialization of ventral anterior temporal lobe for language. Neuroimage. 2018;183:800–810. doi: 10.1016/j.neuroimage.2018.08.062. [DOI] [PubMed] [Google Scholar]
  • 31.Peelen MV, Caramazza A. Conceptual object representations in human anterior temporal cortex. J. Neurosci. 2012;32(45):15728–15736. doi: 10.1523/JNEUROSCI.1953-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Kriegeskorte N, Goebel R, Bandettini P. Information-based functional brain mapping. Proc. Natl. Acad. Sci. 2006;103(10):3863–3868. doi: 10.1073/pnas.0600244103. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Smith FW, Goodale MA. Decoding visual object categories in early somatosensory cortex. Cereb. Cortex. 2015;25(4):1020–1031. doi: 10.1093/cercor/bht292. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Schwartz MF, Kimberg DY, Walker GM, Brecher A, Faseyitan OK, Dell GS, Coslett HB. Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proc. Natl. Acad. Sci. 2011;108(20):8520–8524. doi: 10.1073/pnas.1014935108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Putt SS, Wijeakumar S, Franciscus RG, Spencer JP. The functional brain networks that underlie Early Stone Age tool manufacture. Nat. Hum. Behav. 2017;1(6):1–8. doi: 10.1038/s41562-017-0102. [DOI] [Google Scholar]
  • 36.Visser, M., Jefferies, E., Embleton, K. V., & Lambon Ralph, M. A. Both the middle temporal gyrus and the ventral anterior temporal area are crucial for multimodal semantic processing: distortion-corrected fMRI evidence for a double gradient of information convergence in the temporal lobes. J. Cogn. Neurosci.24(8), 1766–1778 (2012). [DOI] [PubMed]
  • 37.Embleton, K. V., Haroon, H. A., Morris, D. M., Ralph, M. A. L., & Parker, G. J. Distortion correction for diffusion‐weighted MRI tractography and fMRI in the temporal lobes. Hum. Brain Mapp, 31(10), 1570–1587 (2010). [DOI] [PMC free article] [PubMed]
  • 38.Visser, M., & Lambon Ralph, M. A. Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. J. Cogn. Neurosci. 23(10), 3121–3131 (2011). [DOI] [PubMed]
  • 39.Montagu A. Toolmaking, hunting, and the origin of language. Ann. N. Y. Acad. Sci. 1976;280(1):266–274. doi: 10.1111/j.1749-6632.1976.tb25493.x. [DOI] [Google Scholar]
  • 40.Stout D, Chaminade T. Stone tools, language and the brain in human evolution. Philos. Trans. R. Soc. B Biol. Sci. 2012;367(1585):75–87. doi: 10.1098/rstb.2011.0099. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Thibault S, Py R, Gervasi AM, Salemme R, Koun E, Lövden M, Brozzoli C. Tool use and language share syntactic processes and neural patterns in the basal ganglia. Science. 2021;374(6569):eabe0874. doi: 10.1126/science.abe0874. [DOI] [PubMed] [Google Scholar]
  • 42.Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., & Matelli, M. Functional organization of inferior area 6 in the macaque monkey. Exp. Brain Res., 71(3), 491–507 (1988). [DOI] [PubMed]
  • 43.Tucker M, Ellis R. On the relations between seen objects and components of potential actions. J. Exp. Psychol. Hum. Percept. Perform. 1998;24(3):830. doi: 10.1037/0096-1523.24.3.830. [DOI] [PubMed] [Google Scholar]
  • 44.Bach P, Nicholson T, Hudson M. The affordance-matching hypothesis: How objects guide action understanding and prediction. Front. Hum. Neurosci. 2014;8:254. doi: 10.3389/fnhum.2014.00254. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Cisek P. Cortical mechanisms of action selection: The affordance competition hypothesis. Philos. Trans. R. Soc. B Biol. Sci. 2007;362(1485):1585–1599. doi: 10.1098/rstb.2007.2054. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Gibson JJ. The Ecological Approach to Visual Perception. Boston: Houghton-Mifflin; 1979. [Google Scholar]
  • 47.Shibasaki H, Hallett M. What is the Bereitschaftspotential? Clin. Neurophysiol. 2006;117(11):2341–2356. doi: 10.1016/j.clinph.2006.04.025. [DOI] [PubMed] [Google Scholar]
  • 48.Binder JR, Desai RH, Graves WW, Conant LL. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex. 2009;19(12):2767–2796. doi: 10.1093/cercor/bhp055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Devlin JT, Rushworth MF, Matthews PM. Category-related activation for written words in the posterior fusiform is task specific. Neuropsychologia. 2005;43(1):69–74. doi: 10.1016/j.neuropsychologia.2004.06.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Fairhall SL, Caramazza A. Brain regions that represent amodal conceptual knowledge. J. Neurosci. 2013;33(25):10552–10558. doi: 10.1523/JNEUROSCI.0051-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Liuzzi AG, Aglinskas A, Fairhall SL. General and feature-based semantic representations in the semantic network. Sci. Rep. 2020;10(1):1–12. doi: 10.1038/s41598-020-65906-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Patterson K, Nestor PJ, Rogers TT. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 2007;8(12):976–987. doi: 10.1038/nrn2277. [DOI] [PubMed] [Google Scholar]
  • 53.Kourtzi Z, Betts LR, Sarkheil P, Welchman AE. Distributed neural plasticity for shape learning in the human visual cortex. PLoS Biol. 2005;3(7):e204. doi: 10.1371/journal.pbio.0030204. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Gamberini M, Passarelli L, Fattori P, Galletti C. Structural connectivity and functional properties of the macaque superior parietal lobule. Brain Struct Funct, 225, 1349–1367 (2020). [DOI] [PubMed]
  • 55.Prado J, Clavagnier S, Otzenberger H, Scheiber C, Kennedy H, Perenin MT. Two cortical systems for reaching in central and peripheral vision. Neuron. 2005;48(5):849–858. doi: 10.1016/j.neuron.2005.10.010. [DOI] [PubMed] [Google Scholar]
  • 56.Pitzalis, S., Sereno, M. I., Committeri, G., Fattori, P., Galati, G., Tosoni, A., & Galletti, C. The human homologue of macaque area V6A. Neuroimage, 82, 517–530 (2013). [DOI] [PMC free article] [PubMed]
  • 57.Tosoni, A., Pitzalis, S., Committeri, G., Fattori, P., Galletti, C., & Galati, G. Resting-state connectivity and functional specialization in human medial parieto-occipital cortex. Brain Structure and Function, 220(6), 3307–3321 (2015). [DOI] [PubMed]
  • 58.Sulpizio, V., Neri, A., Fattori, P., Galletti, C., Pitzalis, S., & Galati, G. Real and imagined grasping movements differently activate the human dorsomedial parietal cortex. Neuroscience, 434, 22–34 (2020). [DOI] [PubMed]
  • 59.Rossit S, McAdam T, Mclean DA, Goodale MA, Culham JC. fMRI reveals a lower visual field preference for hand actions in human superior parieto-occipital cortex (SPOC) and precuneus. Cortex. 2013;49(9):2525–2541. doi: 10.1016/j.cortex.2012.12.014. [DOI] [PubMed] [Google Scholar]
  • 60.Scholz J, Klein MC, Behrens TE, Johansen-Berg H. Training induces changes in white-matter architecture. Nat. Neurosci. 2009;12(11):1370–1371. doi: 10.1038/nn.2412. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Weisberg J, Van Turennout M, Martin A. A neural system for learning about object function. Cereb. Cortex. 2006;17(3):513–521. doi: 10.1093/cercor/bhj176. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Fischer J, Mikhael JG, Tenenbaum JB, Kanwisher N. Functional neuroanatomy of intuitive physical inference. Proc. Natl. Acad. Sci. 2016;113(34):E5072–E5081. doi: 10.1073/pnas.1610344113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Schwettmann S, Tenenbaum JB, Kanwisher N. Invariant representations of mass in the human brain. Elife. 2019;8:e46619. doi: 10.7554/eLife.46619. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Arbib, M. A. Perceptual structures and distributed motor control. In: Brooks V. B. (Ed.) Handbook of physiology – The nervous system II, Motor control, Part 1. American Physiological Society, Bethesda, Md., pp. 1449–1480 (1981).
  • 65.Christensen W, Sutton J, Bicknell K. Memory systems and the control of skilled action. Philos. Psychol. 2019;32(5):692–718. doi: 10.1080/09515089.2019.1607279. [DOI] [Google Scholar]
  • 66.Rumiati RI, Humphreys GW. Recognition by action: Dissociating visual and semantic routes to action in normal observers. J. Exp. Psychol. Hum. Percept. Perform. 1998;24(2):631. doi: 10.1037/0096-1523.24.2.631. [DOI] [PubMed] [Google Scholar]
  • 67.Jeannerod M, Arbib MA, Rizzolatti G, Sakata H. Grasping objects: The cortical mechanisms of visuomotor transformation. Trends Neurosci. 1995;18(7):314–320. doi: 10.1016/0166-2236(95)93921-J. [DOI] [PubMed] [Google Scholar]
  • 68.Johnson-Frey SH. What's so special about human tool use? Neuron. 2003;39(2):201–204. doi: 10.1016/S0896-6273(03)00424-0. [DOI] [PubMed] [Google Scholar]
  • 69.Young G. Are different affordances subserved by different neural pathways? Brain Cogn. 2006;62(2):134–142. doi: 10.1016/j.bandc.2006.04.002. [DOI] [PubMed] [Google Scholar]
  • 70.Bub, D. N., Masson, M. E., & Cree, G. S. Evocation of functional and volumetric gestural knowledge by objects and words. Cognition, 106(1), 27–58 (2008). [DOI] [PubMed]
  • 71.Vingerhoets, G., Vandamme, K., & Vercammen, A. Conceptual and physical object qualities contribute differently to motor affordances. Brain and Cognition, 69(3), 481–489 (2009). [DOI] [PubMed]
  • 72.Borghi, A. M., Bonfiglioli, C., Lugli, L., Ricciardelli, P., Rubichi, S., & Nicoletti, R. Are visual stimuli sufficient to evoke motor information?: Studies with hand primes. Neuroscience Lett., 411(1), 17–21 (2007). [DOI] [PubMed]
  • 73.Cisek P, Kalaska JF. Neural mechanisms for interacting with a world full of action choices. Annu. Rev. Neurosci. 2010;33:269–298. doi: 10.1146/annurev.neuro.051508.135409. [DOI] [PubMed] [Google Scholar]
  • 74.Jax SA, Buxbaum LJ. Response interference between functional and structural actions linked to the same familiar object. Cognition. 2010;115(2):350–355. doi: 10.1016/j.cognition.2010.01.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Tak YW, Knights E, Henson R, Zeidman P. Ageing and the ipsilateral M1 BOLD response: A connectivity study. Brain Sci. 2021;11(9):1130. doi: 10.3390/brainsci11091130. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Haynes JD. A primer on pattern-based approaches to fMRI: Principles, pitfalls, and perspectives. Neuron. 2015;87(2):257–270. doi: 10.1016/j.neuron.2015.05.025. [DOI] [PubMed] [Google Scholar]
  • 77.Sakuraba, S., Sakai, S., Yamanaka, M., Yokosawa, K., & Hirayama, K. Does the human dorsal stream really process a category for tools? J. Neurosci. 32(11), 3949–3953 (2012). [DOI] [PMC free article] [PubMed]
  • 78.Birn RM, Cox RW, Bandettini PA. Experimental designs and processing strategies for fMRI studies involving overt verbal responses. Neuroimage. 2004;23(3):1046–1058. doi: 10.1016/j.neuroimage.2004.07.039. [DOI] [PubMed] [Google Scholar]
  • 79.Duda, R. O., Hart, P. E., & Stork, D. G. Pattern classification. Int. J. Comput. Intell. Appl.1, 335–339 (2001).
  • 80.Gallivan, J. P., Johnsrude, I. S., & Randall Flanagan, J. (2016). Planning ahead: object-directed sequential actions decoded from human frontoparietal and occipitotemporal networks. Cereb. Cortex, 26(2), 708–730. [DOI] [PMC free article] [PubMed]
  • 81.Chang, C. C., & Lin, C. J. LIBSVM: a library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3), 1–27 (2011).
  • 82.Pereira, F., & Botvinick, M. Information mapping with pattern classifiers: a comparative study. Neuroimage, 56(2), 476–496. (2011). [DOI] [PMC free article] [PubMed]
  • 83.Goebel, R., Esposito, F., & Formisano, E. Analysis of functional image analysis contest (FIAC) data with brainvoyager QX: From single‐subject to cortically aligned group general linear model analysis and self‐organizing group independent component analysis. Hum. Brain Mapp, 27(5), 392–401 (2006). [DOI] [PMC free article] [PubMed]
  • 84.Forman, S. D., Cohen, J. D., Fitzgerald, M., Eddy, W. F., Mintun, M. A., & Noll, D. C. Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): Use of a cluster‐size threshold. Magn. Reson. Med., 33(5), 636–647 (1995). [DOI] [PubMed]
  • 85.Xia M, Wang J, He Y. BrainNet viewer: A network visualization tool for human brain connectomics. PLoS ONE. 2013;8:e68910. doi: 10.1371/journal.pone.0068910. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Oldfield RC. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia. 1971;9(1):97–113. doi: 10.1016/0028-3932(71)90067-4. [DOI] [PubMed] [Google Scholar]
  • 87.Quinlan, D. J., & Culham, J. C. Direct comparisons of hand and mouth kinematics during grasping, feeding and fork-feeding actions. Front. Hum. Neurosci., 9, 580 (2015). [DOI] [PMC free article] [PubMed]
  • 88.Anzellotti S, Fairhall SL, Caramazza A. Decoding representations of face identity that are tolerant to rotation. Cereb. Cortex. 2014;24(8):1988–1995. doi: 10.1093/cercor/bht046. [DOI] [PubMed] [Google Scholar]
  • 89.Freud E, Macdonald SN, Chen J, Quinlan DJ, Goodale MA, Culham JC. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations. Cortex. 2018;98:34–48. doi: 10.1016/j.cortex.2017.02.020. [DOI] [PubMed] [Google Scholar]
  • 90.Kalénine S, Peyrin C, Pichat C, Segebarth C, Bonthoux F, Baciu M. The sensory-motor specificity of taxonomic and thematic conceptual relations: A behavioral and fMRI study. Neuroimage. 2009;44(3):1152–1162. doi: 10.1016/j.neuroimage.2008.09.043. [DOI] [PubMed] [Google Scholar]
  • 91.Snow JC, Pettypiece CE, McAdam TD, McLean AD, Stroman PW, Goodale MA, Culham JC. Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects. Sci. Rep. 2011;1(1):1–10. doi: 10.1038/srep00130. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Zagha E, Erlich JC, Lee S, Lur G, O’Connor DH, Steinmetz NA, Yang H. The importance of accounting for movement when relating neuronal activity to sensory and cognitive processes. J. Neurosci. 2022 doi: 10.1523/JNEUROSCI.1919-21.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.Visser, M., Jefferies, E., & Lambon Ralph, M. A. Semantic processing in the anterior temporal lobes: a meta-analysis of the functional neuroimaging literature. J. Cogn. Neurosci. 22(6), 1083–1094 (2010). [DOI] [PubMed]
