iScience. 2023 Sep 1;26(10):107808. doi: 10.1016/j.isci.2023.107808

Decoding hand kinetics and kinematics using somatosensory cortex activity in active and passive movement

Alavie Mirfathollahi 1,2, Mohammad Taghi Ghodrati 2, Vahid Shalchyan 2, Mohammad Reza Zarrindast 1,3, Mohammad Reza Daliri 1,2,4
PMCID: PMC10509302  PMID: 37736040

Summary

Area 2 of the primary somatosensory cortex (S1) encodes proprioceptive information from the limbs. Several studies have investigated the encoding of movement parameters in this area. However, single-trial decoding of these parameters, which can reveal how much information sub-regions of this area carry about instantaneous limb movement, has not been well investigated. We decoded kinematic and kinetic parameters of active and passive hand movement during a center-out task using conventional and state-based decoders. Our results show that this area can be used to accurately decode the position, velocity, force, moment, and joint angles of the hand. Kinematics were decoded more accurately than kinetics, and active trials were decoded more accurately than passive trials. Although the state-based decoder outperformed the conventional decoder in the active task, the opposite held in the passive task. These results can inform intracortical micro-stimulation procedures that provide proprioceptive feedback to BCI subjects.

Subject areas: Kinematics, Behavioral neuroscience, Sensory neuroscience

Graphical abstract


Highlights

  • Accurate decoding of hand kinematics and kinetics using area 2 of S1 SUA

  • Active and passive movements decoded with high accuracy using area 2 of S1 SUA

  • State-based decoders perform better than conventional methods in active movements



Introduction

Neurological diseases and spinal cord injuries can disconnect the spinal cord from the brain, while the brain networks that generate and control movements remain undamaged and functional. Brain-computer interfaces (BCIs) are increasingly being developed to bypass this disconnection in the motor pathway of people with movement disabilities. Motor rehabilitation BCIs aim to decode user commands from neural sources, such as electroencephalography (EEG), electrocorticography (ECoG), and intracortical signals, and translate them into commands for assistive devices. Some patients with disabilities have stated that hand and arm function is essential for their recovery.1 In these people, neural interfaces can tap brain activity directly and establish a communication path between neural activity and external devices.2 Building on the current understanding of cortical motor activity during natural forelimb movements and on advances in decoding algorithms, human and non-human subjects have been able to control computer cursors,3,4 reach and grasp using robotic limbs,5,6,7 and move their own reanimated limbs.8,9 In these studies, neural signals recorded from motor areas, including the primary motor cortex (M1), premotor cortex (PM), and posterior parietal cortex (PPC), are translated into commands that allow people with disabilities to interact with their environment.

The advancement of BCIs has, in turn, improved our understanding of neural phenomena10 and of neuroprosthetics, by decoding kinetic and kinematic parameters from intracortical signals during limb movements and converting them into command signals for an external device or prosthetic limb.5,6 Most studies have addressed only one group of parameters: some have decoded kinematic movement parameters,11,12,13,14,15 others have decoded kinetic movement parameters,16,17,18,19 and only a limited number have investigated decoding accuracy for both categories using intracortical neural signals from the same area.20,21,22 Investigating movement parameters in both groups, kinetic and kinematic, allows us to compare the information available about each parameter in the brain areas, and these findings can inform the development of BCI systems.

Most decoding studies have so far focused on the motor cortex; however, a critical complement to movement control is sensory feedback about the consequences of the movement.23 Muscles and joints continuously send proprioceptive feedback, including body position in space and muscular force, to the somatosensory cortex,24 and impairment of the proprioceptive system can lead to imprecise and failed movements.25 The primary somatosensory cortex (S1), and mainly area 2 of this region, contains neurons that respond to proprioceptive feedback received from joints and muscles during active and passive movements.26,27 The activity of S1 has already been characterized during hand movements in passive28 and active tasks.29 Even though the encoding of hand movements by neural activity in area 2 of S1 has been investigated,30 the capability of this area to provide feedback to BCI systems requires further investigation. One good strategy to address this question is to evaluate our capacity to decode kinematic/kinetic parameters from the neural activity of this area. As somatosensory neurons process detailed hand movement information, these neural representations can be used to convey artificial proprioceptive feedback31 and artificial tactile feedback using intracortical microstimulation.32 As online brain control may cause functional plasticity,33 proprioceptive feedback may also cause changes in neuronal networks.

Therefore, one of the objectives of this study is to investigate the extent to which movement parameters can be decoded from area 2 of S1 neural activity. For a better comparison, we tried to consider both categories of movement parameters, kinematics/kinetics, and compare the decoding accuracy of these two categories to see which type can be decoded more accurately. Also, we compared the decoding accuracy in active and passive tasks to investigate the amount of available information in this area regarding each task. These results can help us evaluate the utility of area 2 of S1 neural information in providing proprioceptive feedback to BCI systems.

In goal-directed movements, the modulation of neural activity changes during different stages of movement, including the preparatory, pre-movement, and execution periods.34 Hence, cortical activity has been used to predict the time intervals between movements.35 Some studies have investigated how a state-based decoder could improve movement decoding using neural ensemble activity recorded from the PM,12,36,37 the parietal cortex,38 and M1.12,39,40 As mentioned, despite the importance of S1 for developing accurate BCI systems that can auto-correct movements and provide sensory feedback, decoding of movement parameters from the neural activity of this area has been investigated in only a limited number of studies.

The goal of this study is to investigate whether hand kinematics and kinetics can be decoded accurately from the population responses of area 2 of S1 neurons. First, we show that neural signals in area 2 of S1 carry representations of the hand precise enough to classify movement direction and to reconstruct time-varying hand trajectories, joint angles, force, and moment. Second, we show that these kinetic and kinematic parameters can be decoded in the passive task as well as the active task; still, area 2 of S1 carries different amounts of information about the hand state in these two tasks. Finally, we use a state-based decoding algorithm to decode movement parameters in both active and passive tasks and compare the results. We find an optimized decoding approach for each task, suggesting possible differences in cortical encoding between active and passive tasks. Our results emphasize the promise of using somatosensory signals to achieve better control of the hand and demonstrate that neural activity in area 2 of S1 accurately represents forelimb configuration, which can be used to restore proprioception through intracortical micro-stimulation.

Results

The neural data used in this study were recorded by Chowdhury et al., and the details are described in Chowdhury et al.30 Briefly, they recorded neural signals from two Rhesus macaques (Monkeys C and H) from the arm representation of Brodmann’s area 2 of S1 during a center-out task (COT), and the sensory receptive field of each neuron was mapped in two modalities: (a) deep or cutaneous, and (b) the location of each field. In this study, the feasibility of discriminating neural activity in area 2 of S1 according to kinetic and kinematic parameters of hand movement was investigated. In total, 2764 trials of data (52.17% active task and 47.83% passive task) were analyzed while the monkeys performed the COT (see STAR methods). In the results sections, we first investigate neural activity in two modes of hand movement, active and passive, and compare these two modes. We then present the results of integrating the discrete state classifier into the continuous decoder and compare the performance of the state-based continuous decoder with the conventional continuous decoder in terms of R and R2 decoding performance in the active and passive tasks. We also compare the results of the state-based continuous decoder employing both partial least squares (PLS) and multiple linear regression (MLR) methods in these two movement modes. The procedures for state-based continuous decoding of movement parameters in the training and test phases are depicted in Figure 1.

Figure 1.

Figure 1

Schematic of the proposed method

The schematic representation of the proposed state-based continuous movement parameters decoder.
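The state-based pipeline sketched in Figure 1, classify the discrete movement state first, then apply a state-specific continuous regressor, can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the `classify` callable, the use of ordinary least squares in place of PLS, and all variable names are assumptions made for self-containment.

```python
import numpy as np

class StateBasedDecoder:
    """Sketch of a state-based continuous decoder: a discrete state
    (e.g., movement direction) is classified first, then a separate
    linear regressor is applied for each state."""

    def __init__(self, classify):
        self.classify = classify  # callable: feature vector -> state id
        self.weights = {}         # one linear regressor per state

    def fit(self, X, y, states):
        # Fit one least-squares regressor per state
        # (the paper uses PLS; OLS keeps the sketch self-contained).
        for s in np.unique(states):
            m = states == s
            Xs = np.hstack([X[m], np.ones((m.sum(), 1))])  # add bias term
            self.weights[s], *_ = np.linalg.lstsq(Xs, y[m], rcond=None)
        return self

    def predict(self, X):
        out = np.empty(X.shape[0])
        for i, x in enumerate(X):
            s = self.classify(x)                  # discrete state first
            out[i] = np.append(x, 1.0) @ self.weights[s]
        return out

# toy check: two states governed by different linear maps
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
states = (X[:, 0] > 0).astype(int)
y = np.where(states == 1, -3 * X[:, 1], 2 * X[:, 1])
dec = StateBasedDecoder(lambda x: int(x[0] > 0)).fit(X, y, states)
```

In the test phase, the classifier output routes each sample to the matching regressor, which is the essential difference from a single conventional decoder fit to all samples at once.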

Neuronal population activity patterns are different during active and passive movements

Figure 2 presents the activity of neurons recorded in the second session of monkey H during execution of the COT task in the active and passive modes. First, we ordered the neurons by the onset of their activity over time in each direction; the active and passive task results are plotted in Figures 2A and 2C, respectively. As can be seen, in the active task the neurons are activated hierarchically, whereas in the passive task the neurons are activated collectively at the beginning of the movement. In Figure 2E, the neurons in each direction of the passive task were arranged in the order of the neurons in the active task for the same direction, to reveal how the neurons that were gradually activated in the active task behave in the passive task. The results show that in the passive task, most of the neural activity occurs in the bins after the start of the movement. Figures 2B and 2D show the peri-event time histograms (PETHs) for each direction in the active and passive tasks, respectively. In the active task, the activity of the neurons increases over time during task execution and continues until the end of the movement. In contrast, in the passive task, the activity of the neurons increases shortly after the start of the movement and quickly decreases to its initial level. Furthermore, when the neurons in the passive task are arranged according to the activation pattern of the active task, the neurons that were activated late in the active task are activated in the passive task in the first bins of the movement (primary interval), after which their activity decreases. This difference in neuronal behavior between the two tasks is most likely due to the cerebral cortex encoding these two types of movement differently.
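The ordering and histogram analyses above can be reproduced with a few lines of numpy. This is a hedged sketch: the 50%-of-peak onset criterion and the toy firing-rate matrix are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def sort_by_onset(rates, threshold_frac=0.5):
    """Order neurons by the first time bin in which their trial-averaged
    firing rate crosses a fraction of their peak (an onset proxy)."""
    peaks = rates.max(axis=1, keepdims=True)
    above = rates >= threshold_frac * np.maximum(peaks, 1e-12)
    onsets = above.argmax(axis=1)  # index of first bin above threshold
    return np.argsort(onsets, kind="stable"), onsets

def peth(rates):
    """Peri-event time histogram: mean rate per bin across neurons."""
    return rates.mean(axis=0)

# toy example: three neurons with staggered activation
rates = np.array([[0., 0., 1., 2.],   # late neuron
                  [2., 2., 1., 0.],   # early neuron
                  [0., 1., 2., 2.]])  # intermediate neuron
order, onsets = sort_by_onset(rates)
```

Plotting the rows of `rates[order]` yields the staircase pattern seen in the active task (Figure 2A); a flat ordering, by contrast, corresponds to the collective onset seen in the passive task.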

Figure 2.

Figure 2

Firing patterns during hand movement in different directions

The neural activity of neurons recorded in the second session of monkey H during the execution of the COT task in active and passive modes.

(A) In the active task, the neurons are arranged based on the onset of their activity over time in each direction.

(B) PETH of neurons in the active task.

(C) In the passive task, the neurons are arranged based on the onset of their activity over time in each direction.

(D) PETH of neurons in the passive task.

(E) In the passive task, the neurons are arranged based on the onset of their activity in the active task.

(F) PETH of (E).

Movement direction classification

By examining the firing rates of neurons in different directions, we found that this signal likely contains valuable information about the direction of movement. We used the median of the kernelled firing rate of each neuron in two 100 ms bins after movement onset as discriminative features between the four classes of interest. Adding a feature selection step ensures that the most discriminative features are selected for classification. The output signal of the test fold was classified into the desired direction class, showing that this feature extraction and selection technique can strongly discriminate between direction classes. The four direction states were identified with average accuracies of 98.98% in the active task and 99.13% in the passive task (Table 1). For the classification of the four hand movement directions using neuronal information from area 2 of S1, no significant difference was observed between the active and passive tasks (P>0.005).
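A direction classifier of this form, median firing-rate features followed by linear discriminant analysis (LDA, the classifier named in Figure 3C), can be sketched as below. The feature-extraction helper, the bin length, and the pooled-covariance LDA implementation are illustrative assumptions; the authors' exact pipeline (kernelled rates, feature selection, fold structure) is described in the STAR methods.

```python
import numpy as np

def direction_features(rates, onset, bin_len=100):
    """Median firing rate of each neuron in two bins after movement
    onset (hypothetical helper; rates is neurons x time samples)."""
    b1 = rates[:, onset:onset + bin_len]
    b2 = rates[:, onset + bin_len:onset + 2 * bin_len]
    return np.concatenate([np.median(b1, axis=1), np.median(b2, axis=1)])

class PooledLDA:
    """Linear discriminant analysis with a shared (pooled) covariance
    matrix and equal class priors."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        centered = np.vstack([X[y == c] - m
                              for c, m in zip(self.classes_, self.means_)])
        cov = centered.T @ centered / len(X)
        self.prec_ = np.linalg.pinv(cov + 1e-6 * np.eye(X.shape[1]))
        return self

    def predict(self, X):
        # linear discriminant score per class (equal priors)
        quad = np.einsum('ij,jk,ik->i', self.means_, self.prec_, self.means_)
        scores = X @ self.prec_ @ self.means_.T - 0.5 * quad
        return self.classes_[scores.argmax(axis=1)]

# toy check: four well-separated "direction" clusters
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [6, 0], [0, 6], [6, 6]], float)
labels = np.repeat(np.arange(4), 25)
feats = centers[labels] + 0.2 * rng.normal(size=(100, 2))
clf = PooledLDA().fit(feats, labels)
```

With well-separated class means, as Table 1 suggests is the case for the four reach directions, such a linear rule is sufficient for near-perfect classification.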

Table 1.

Summary of hand direction state classification based on the firing rate of neurons

Task      Monkey 1                  Monkey 2                  Average
          Session 1    Session 2    Session 1    Session 2
Active    98.5 ± 1.0   99.1 ± 0.7   99.1 ± 1.7   99.2 ± 1.7   98.98%
Passive   99.0 ± 1.0   98.6 ± 1.0   99.2 ± 1.9   99.7 ± 1.0   99.13%

Figure 3 shows the overall normalized confusion matrices for classifying hand movement direction in both tasks. As shown in Figure 3A, in the active task, the average accuracies for the four direction states (0°, 90°, 180°, 270°) were approximately 99.27%, 98.43%, 98.93%, and 98.83%, respectively. In this task, the overall recognition rate for 0° is higher than for the other directions. In the passive task (Figure 3B), the average accuracies were 99.50%, 97.29%, 99.46%, and 99.50% for the directions (0°, 90°, 180°, 270°), respectively. In the passive task, the overall recognition rate for 90° is lower than for the other directions.

Figure 3.

Figure 3

State classification

(A) The overall normalized confusion matrices of classifying hand movement direction ((1) 0°, (2) 90°, (3) 180°, (4) 270°) in active task across all sessions of all subjects.

(B) The overall normalized confusion matrices of classifying hand movement direction in the passive task.

(C) The mean classification accuracies across sessions of all subjects using LDA classifier and different window time lengths in the active and passive tasks (mean ± SD).

(D) Average classification accuracies across sessions of all subjects using LDA classifier and different numbers of sorted features (using MI method) in active and passive tasks.

As previously stated, in both the active and passive tasks, firing rate data in the 200 ms interval after the start of the movement were used for hand direction classification. The firing rate is divided into two 100 ms windows, and the median firing rate of each window in each neuron was used as a feature. Different time lengths of classification data were tested and compared. Figure 3C depicts the classification accuracy for different data lengths. As can be seen, classification accuracy increases with window length; this rise continues up to a 200 ms window, after which the increase becomes insignificant (P>0.005). The purple color represents the active task and the orange color the passive task.

Furthermore, the effect of feature dimensionality on classification accuracy was also investigated to determine the sensitivity of hand direction state classification to the dimensionality of features and whether, despite the reduction in feature dimensions, the direction state can still be decoded with acceptable accuracy. In Figure 3D, the mean and standard deviation of classification accuracy are plotted as a function of feature count. The active task is represented by purple, while the passive task is represented by orange.
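The feature ranking underlying Figure 3D uses mutual information (MI) between each feature and the direction label. A simple histogram-based MI estimator can be sketched as follows; the bin count and toy data are assumptions, and any MI estimator scoring each feature against the label would serve the same purpose.

```python
import numpy as np

def mi_score(feature, labels, n_bins=8):
    """Histogram estimate of mutual information (bits) between one
    continuous feature and a discrete class label."""
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    f = np.digitize(feature, edges[1:-1])          # bin index 0..n_bins-1
    classes = np.unique(labels)
    joint = np.array([[np.mean((f == b) & (labels == c))
                       for c in classes] for b in range(n_bins)])
    pf = joint.sum(axis=1, keepdims=True)          # marginal over bins
    pc = joint.sum(axis=0, keepdims=True)          # marginal over classes
    with np.errstate(divide='ignore', invalid='ignore'):
        return float(np.nansum(joint * np.log2(joint / (pf * pc))))

def rank_features(X, y):
    """Sort feature columns by decreasing MI with the labels."""
    scores = np.array([mi_score(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1]

# toy check: column 0 is informative about the 4-class label, column 1 is noise
rng = np.random.default_rng(0)
y = rng.integers(0, 4, 400)
X = np.column_stack([y + 0.1 * rng.normal(size=400), rng.normal(size=400)])
order = rank_features(X, y)
```

Selecting the top-ranked columns and re-running the classifier reproduces the accuracy-versus-feature-count curves of Figure 3D.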

Continuous movement decoding using state-based decoder and conventional method

In this study, multiple parameters of hand movement were decoded using linear decoders, which express the parameters as weighted sums of neuronal firing rates. We decoded the position of the handle along the x and y axes, the velocity of the handle along the x and y axes, the interaction forces between the monkey’s hand and the handle along the three axes (x, y, and z), the moment of the hand along the three axes (x, y, and z), and the joint parameters, including shoulder adduction, shoulder rotation, shoulder flexion, elbow flexion, radial pronation, wrist flexion, and wrist abduction.

Two decoding approaches were used to decode these movement parameters: state-based and conventional decoders. To evaluate the performance of the two decoders, the correlation coefficient and the coefficient of determination between the actual and decoded parameters were computed. Figure 4 illustrates the continuous movement parameter decoding results, averaged over all sessions and subjects, using a PLS state-based decoder and a PLS conventional decoder. Figure 4A shows that the average correlation coefficients of the state-based decoder in the active task for each parameter group, namely positions, velocities, forces, moments, and joint movement parameters, were 0.98, 0.9, 0.85, 0.86, and 0.88, respectively (ten repetitions of 5-fold cross-validation).
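A conventional decoder of this kind, each parameter expressed as a weighted sum of (lagged) firing rates, together with the two evaluation metrics, can be sketched as follows. The lag count and the ordinary least-squares fit are illustrative assumptions standing in for the PLS regression used in the study.

```python
import numpy as np

def build_design(rates, n_lags=3):
    """Stack the current and lagged firing-rate bins (plus a bias) so
    each movement parameter is a weighted sum of recent activity."""
    lagged = np.hstack([np.roll(rates, lag, axis=0) for lag in range(n_lags)])
    X = lagged[n_lags - 1:]                      # drop wrapped-around rows
    return np.hstack([X, np.ones((X.shape[0], 1))])

def fit_linear_decoder(X, y):
    """Least-squares weights mapping population activity to a parameter."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def r_and_r2(actual, decoded):
    """Correlation coefficient and coefficient of determination."""
    r = np.corrcoef(actual, decoded)[0, 1]
    ss_res = np.sum((actual - decoded) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return r, 1.0 - ss_res / ss_tot

# toy check: a parameter that really is a weighted sum of firing rates
rng = np.random.default_rng(2)
rates = rng.normal(size=(200, 5))
y = rates @ rng.normal(size=5) + 1.5
X = build_design(rates)
w = fit_linear_decoder(X, y[2:])     # align target with lagged design
r, r2 = r_and_r2(y[2:], X @ w)
```

On held-out folds, `r` and `r2` between the actual and decoded traces give the two accuracy figures reported throughout the Results.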

Figure 4.

Figure 4

Performance of state-based decoder vs. conventional method

(A) The results were displayed by correlation coefficient (R) averaged in all sessions in the active task.

(B) The results were displayed by the coefficient of determination (R2) averaged in all sessions in the active task.

(C) The results were displayed by correlation coefficient (R) averaged in all sessions in the passive task.

(D) The results were displayed by the coefficient of determination (R2) averaged in all sessions in the passive task (mean ± SEM). Asterisks indicate a statistically significant difference (P<0.05). The color of the asterisk shows the method with higher mean accuracy.

For all parameters except the shoulder adduction joint angle, the state-based decoder led to significantly better results in terms of the correlation coefficient (P<0.005); decoding accuracies were not significantly different for shoulder adduction (P>0.005). Figure 4B shows the average coefficient of determination, averaged over all sessions and subjects, using the state-based and conventional decoders in the active task. Using the state-based decoder, the average coefficients of determination for positions, velocities, forces, moments, and joint movement parameters were 0.97, 0.8, 0.71, 0.71, and 0.76, respectively (ten repetitions of 5-fold cross-validation). As with the correlation coefficient, the state-based method outperforms the conventional method in terms of the coefficient of determination for all parameters except the shoulder adduction joint angle, demonstrating that the state-based method is more efficient than the conventional method (P<0.005). Figure 4C illustrates the average correlation coefficients of the PLS state-based and PLS conventional decoders for positions, velocities, forces, moments, and joint movement parameters in the passive task, which were 0.91, 0.7, 0.68, 0.69, and 0.78, respectively (ten repetitions of 5-fold cross-validation). Figure 4D shows the average coefficient of determination, averaged over all sessions and subjects, using the state-based and conventional decoders in the passive task. The average coefficients of determination using the state-based decoder for positions, velocities, forces, moments, and joint parameters were 0.82, 0.4, 0.36, 0.38, and 0.56, respectively (ten repetitions of 5-fold cross-validation). In the passive task, comparing coefficients of determination, positions, the moments in the x and z dimensions, and all joint parameters except elbow flexion were decoded more efficiently using the conventional decoder than the state-based decoder (P<0.005).

Figure 5 shows examples of a 6-s time segment of the output signal decoded from neural activity in the active task using the proposed state-based PLS method. As can be seen, the state-based decoder produced accurate output signals for all parameters. In this figure, red represents the predicted and blue the actual signal. Examples of a 6-s time segment of the output signal decoded with the proposed state-based PLS method from neural activity in the passive task are shown in Figure 6; red and blue represent the predicted and actual output parameters, respectively.

Figure 5.

Figure 5

Sample decoding results for hand movement parameters during active movement

Segments of actual (blue) and neural state-based regression-estimated (red) kinematics/kinetics during the COT active task. Here, R2avg stands for the average coefficient of determination of each kinematic parameter over all trials of all recorded sessions.

Figure 6.

Figure 6

Sample decoding results for hand movement parameters during passive movement

Segments of actual (blue) and neural state-based regression-estimated (red) kinematics/kinetics during the COT passive task. Here, R2avg stands for the average coefficient of determination of each kinematic parameter over all trials of all recorded sessions.

Decoding kinematic and kinetic parameters in active and passive tasks

To compare the decoding of kinematic versus kinetic parameters in the active and passive tasks, the mean and standard error of the mean of the correlation coefficients of position, velocity, force, and moment in two dimensions (x/y axes) are shown in Figure 7. The x axis parameters are shown in Figures 7A and 7B for the active and passive tasks, respectively. Figures 7C and 7D show the correlation coefficients along the y axis in the active and passive tasks, respectively. The correlation coefficient of position is higher than that of the other parameters in both tasks (P<0.005). Along the x axis, position, velocity, force, and moment were decoded best, in that order. In contrast, velocity has the lowest decoding accuracy along the y axis, where the order of the results is position, moment, force, and velocity.

Figure 7.

Figure 7

Comparison of decoding accuracies

(A and B) Comparison of kinematic versus kinetic parameter decoding. The correlation coefficients of position, velocity, force, and moment along the x axis in the (A) active and (B) passive tasks.

(C and D) Comparison of kinematic versus kinetic parameter decoding along the y axis in the (C) active and (D) passive tasks.

(E and F) Decoding accuracy broken down by movement parameter group for the state-based versus conventional decoder in the (E) active and (F) passive tasks.

(G) The result of decoding using the state-based decoder in active vs. passive tasks.

(H) The result of decoding using the conventional decoder in active vs. passive tasks.

In the previous analyses, we showed that movement-related parameters can be decoded directly from the neuronal activity of area 2 of S1 and that decoding performance is better in the active task. With this in mind, we assessed whether neuronal responses in area 2 of S1 preferentially encode kinetic or kinematic parameters during COT hand movement. To this end, we reconstructed five groups of movement parameters from area 2 of S1 responses and compared the results in the active and passive tasks. In Figure 7E, position, velocity, force, moment, and joint parameters are shown in red, green, blue, purple, and yellow, respectively. Each circle shows one coefficient of determination result (5-fold, ten-repetition cross-validation for the four sessions). Figure 7E illustrates the averaged decoding performance of the state-based versus the conventional decoder in the active task. In the active task, all parameters except the moment and joint parameters were decoded accurately in all sessions using the state-based decoder. In the passive task, the conventional decoder decoded movement parameters better than the state-based decoder (Figure 7F). Using the state-based decoder, movement-related parameters were decoded significantly better in the active task (Figure 7G). Using the conventional decoder, the decoding performance for movement-related parameters is almost similar in both tasks (Figure 7H).

State-based decoder using MLR regression and PLS regression

We also used a different regression method to evaluate the effect of incorporating a discrete-state classifier into a continuous variable decoder. In this section, the same classification methods were used, but MLR was used instead of PLS regression. Table 2 demonstrates that the PLS method outperforms MLR in both active and passive tasks and can lead to a significant improvement in decoding performance (P<0.001).
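The difference between the two regressors can be illustrated with a minimal single-response PLS (PLS1, NIPALS-style) next to plain least squares. This is a generic textbook sketch, not the exact PLS configuration used in the study; the component count and toy data are assumptions.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1: extract components that maximize covariance with y,
    then form regression coefficients B (y ~ (X - xm) @ B + ym)."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)       # weight vector for this component
        t = Xk @ w                   # score vector
        tt = t @ t
        p = Xk.T @ t / tt            # X loading
        qk = (yk @ t) / tt           # y loading
        Xk = Xk - np.outer(t, p)     # deflate X
        yk = yk - qk * t             # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.pinv(P.T @ W) @ q
    return B, xm, ym

def mlr_fit(X, y):
    """Plain multiple linear regression via least squares."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

# toy check: with all components retained, PLS1 matches MLR
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.3]) + 0.01 * rng.normal(size=100)
B, xm, ym = pls1_fit(X, y, 4)
pred_pls = (X - xm) @ B + ym
pred_mlr = np.hstack([X, np.ones((100, 1))]) @ mlr_fit(X, y)
```

With fewer components than features, PLS projects onto directions that covary with the target, which acts as a regularizer on noisy, correlated neural features; this is one plausible reason for its advantage over MLR reported in Table 2.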

Table 2.

Decoding coefficient of determination (R2) obtained in different scenarios for state-based PLS and state-based MLR decoders in active and passive tasks

Parameters name              Active            Passive
                             PLS      MLR      PLS      MLR
Position X                   0.96     0.84     0.83     0.72
Position Y                   0.97     −0.09    0.8      −0.75
Average Position             0.965    0.38     0.82     −0.06
Velocity X                   0.84     0.70     0.60     0.47
Velocity Y                   0.76     0.63     0.20     −0.27
Average Velocity             0.80     0.67     0.40     0.10
Force X                      0.73     0.46     0.46     0.12
Force Y                      0.65     0.35     0.16     −0.30
Force Z                      0.75     0.46     0.47     0.06
Moment X                     0.64     0.38     0.35     0.05
Moment Y                     0.78     0.55     0.34     −0.05
Moment Z                     0.72     0.58     0.45     0.20
Average Force and moments    0.71     0.46     0.37     0.01
Shoulder adduction           0.75     0.56     0.62     0.38
Shoulder rotation            0.94     0.72     0.78     0.60
Shoulder flexion             0.93     0.91     0.77     0.64
Elbow flexion                0.94     0.19     0.77     −0.45
Radial pronation             0.66     0.21     0.37     −0.91
Wrist flexion                0.73     0.14     0.54     −0.24
Wrist abduction              0.38     −0.30    0.14     −0.70
Average Angles               0.76     0.35     0.57     −0.097

Comparing the obtained results, we found that decoding the kinematic and kinetic movement parameters from area 2 of S1 neuronal information yields different results for the state-based and conventional decoders in the active and passive tasks. For example, in the active task, decoding with the state-based method led to better results for most kinematic and kinetic parameters, whereas this was not the case in the passive task. Therefore, in the next step, we investigated the results of the state-based decoder when the number of states is smaller than the number of actual movement directions. To this end, only the main directions of movement, along the x axis and y axis, were considered. The results of the active task using the state-based decoder are again better than those of the conventional decoder, whereas the conventional decoder performs better for most parameters in the passive task. Also, for most parameters, decoding in the active task was better than in the passive task.

Discussion

In the current study, we showed that the kinematic and kinetic parameters of hand movement can be continuously decoded from neural activity in area 2 of S1 with high accuracy. To our knowledge, this is the first study on the continuous decoding of forelimb kinetic and kinematic movement parameters using area 2 of S1 intracortical signals during active and passive movement. This area is considered to mediate hand-reach-related proprioception. Proprioception is critical for coordinating movements by providing information about body position and movement.30 Previously, Chowdhury et al.30 studied how proprioceptive information is represented in area 2 of S1 during both active and passive hand movement tasks. They compared two categories of models, namely hand-only models and whole-arm models, and found that the whole-arm model explained the neural activity in area 2 of S1 better than the hand-only model. This may suggest that area 2 encodes the overall state of the entire arm during movement. Interestingly, they also noted that area 2 of S1 encodes passive movements, but in a different manner than active movements. However, their study did not investigate the decoding of these parameters. To the best of our knowledge, the possibility of decoding both kinetic and kinematic movement-related parameters from neuronal activity in this area has not been comprehensively studied, and the difference in decoding capacity between active and passive movements has not been well investigated. Decoding proprioceptive sensory information from area 2 of S1 signals can indicate the amount of information about these parameters received by this area. This can also clarify the relationship between neural activity and movement-related parameters, which in turn can be used to develop effective stimulation paradigms to restore proprioceptive sensory information.

Some studies have shown that somatosensory cortical neural activity can be used in BCI applications. Investigations of S1 neural activity hold the potential to advance the field of BCIs in three ways. First, these studies provide valuable insights for designing effective stimulation strategies aimed at mimicking sensations, encompassing tactile perception and proprioceptive awareness.30,41,42 These systems can be used alongside BCIs and can provide meaningful proprioceptive and tactile feedback to subjects. Second, according to some studies, S1 is activated during the observation of touch43 and encodes imagined hand movement in the absence of sensory feedback.44 Thus, BCIs can use S1 neural activity as control signals in tasks such as tactile or movement imagery, in which subjects imagine experiencing a touch or moving their limbs, respectively. These paradigms have been shown to be useful for developing BCIs.45,46 Third, the activation of S1 and its encoding properties in the absence of movement and afferent sensory information suggest that S1 neural activity can also be used as a feedback source to periodically tune the BCI system and correct decoding errors. However, further studies are required to determine the applicability of S1 signals as feedback sources in the absence of limb movement. With these reasons in mind, decoding S1 neural signals can be beneficial in all of the aforementioned ways: decoding analysis can provide further insight into the amount of movement information available in S1, analysis of neuron contributions to decoding can inform the development of stimulation strategies, and more accurate S1 decoders can provide the basis for BCIs that use sensory feedback to refine their output.

In this study, we investigated (1) whether kinematic and kinetic movement parameters can be decoded using area 2 of S1 signals, (2) which types of parameters (position, velocity, joint angle, force, and moment) can be decoded more accurately, (3) whether decoding accuracy is affected by the type of movement (passive/active), (4) how neurons in area 2 of S1 contribute to decoding these movement-related parameters, (5) what the relationship is between the receptive fields of neurons and their contribution to decoding discrete and continuous movement parameters, and (6) whether state-based decoders outperform conventional methods in decoding movement parameters from area 2 of S1 neural activity. Chowdhury et al.30 showed that area 2 of S1 encodes whole-arm movements and that the encoding differs between active and passive movement; however, decoding of kinetic or kinematic movement parameters was not investigated in that study. As mentioned before, to the best of our knowledge, these items have not been investigated in previous studies, despite their potential application in BCIs and their contribution to our knowledge of S1. Our results show that the investigated movement parameters can be decoded with very high accuracy in both active and passive tasks. However, decoding accuracy is significantly higher in the active task than in the passive task. Additionally, using state-based decoders can significantly increase decoding accuracy for active movement, whereas for passive movement the conventional decoder predicts most of the investigated parameters with significantly higher accuracy.

In this study, we investigated the decoding of these parameters during passive and active reaching movements in single-trial mode using area 2 of S1 neural activity. The results show that neural activity in this area contains accurate information about the position, velocity, joint angles, forces, and moments of the hand at each instant of time during active and passive execution of the COT. The accuracy of continuous decoding of several kinematic and kinetic parameters, in terms of the correlation coefficient and coefficient of determination, is shown in Table 3. Position, velocity, joint angles, and forces could be decoded with mean R2 values of 0.94, 0.86, 0.82, and 0.78 in the active task, respectively, indicating that the proprioceptive sensory information in this area can accurately represent both high-level and low-level parameters of active hand movement. Although the decoding accuracies during the passive task are significantly lower, the mean accuracy of position decoding is still high enough (R2 = 0.74) to accurately reconstruct the hand endpoint. The significant difference between decoding accuracy in the passive and active tasks may originate from the different encoding of hand movements in area 2 of S1 for these two movement modes.

Table 3.

The correlation coefficient (R), coefficient of determination (R2), and root-mean-square error (RMSE) of the XY-mode state-based PLS decoder in the active and passive tasks

Parameter                  R               R2              RMSE
                           Active  Passive Active  Passive Active  Passive
Position X                 0.93    0.88    0.87    0.76    1.62    2.07
Position Y                 0.95    0.86    0.90    0.72    1.91    2.02
Average position           0.94    0.87    0.89    0.74    1.77    2.05
Velocity X                 0.88    0.76    0.75    0.53    7.56    9.66
Velocity Y                 0.84    0.50    0.67    0.01    9.96    10.34
Average velocity           0.86    0.63    0.71    0.27    8.76    10.00
Force X                    0.78    0.59    0.57    0.19    0.34    0.46
Force Y                    0.75    0.54    0.50    0.08    0.41    0.58
Force Z                    0.81    0.69    0.61    0.39    1.81    1.82
Moment X                   0.74    0.65    0.48    0.29    0.06    0.05
Moment Y                   0.80    0.59    0.60    0.18    0.05    0.05
Moment Z                   0.82    0.68    0.63    0.37    0.02    0.02
Average force and moment   0.78    0.62    0.56    0.25    0.45    0.50
Shoulder adduction         0.77    0.77    0.55    0.53    2.64    2.32
Shoulder rotation          0.92    0.84    0.84    0.67    3.83    4.80
Shoulder flexion           0.93    0.84    0.86    0.67    5.95    5.94
Elbow flexion              0.93    0.84    0.85    0.67    6.42    6.38
Radial pronation           0.71    0.62    0.42    0.23    5.88    5.43
Wrist flexion              0.81    0.73    0.61    0.46    8.85    7.80
Wrist abduction            0.65    0.54    0.29    0.07    3.90    3.20
Average angles             0.82    0.74    0.63    0.47    5.35    5.12

The analysis of peri-event time histograms (PETHs) and the activation patterns of individual neurons also indicates that neuronal activity in this area differs between active and passive tasks as well as across hand-movement directions. Figure 2 shows the activity of different neurons during the COT in different directions in the passive and active tasks. Neurons are sorted by the time of their peak firing rate in the active (Figure 2A) and passive (Figure 2C) tasks. The pattern of neuronal activation during task execution differs between the two tasks. In the active task, neurons' firing rates peak at different times, and simultaneous peaks across neurons are not observed, except in the 180° direction, where several neurons' firing rates peak almost simultaneously near the end of movement execution (Figure 2A). In passive trials, by contrast, the ensemble firing rate peaks within 140 ms of movement onset (Figure 2C). These peaks are also visible in the PETH diagrams of the active and passive tasks (Figures 2B and 2D) and may reflect the response of neurons in this area to the unintentional movement of the hand. Furthermore, the ordering of neurons differs between the two modes, emphasizing the distinct encoding of movement in each. Several studies have shown that S1 receives information about movements from the motor system.47,48,49,50 Previous studies have also shown that the neural circuits responsible for generating movements can influence sensory processing in S1.47,51,52 The difference in neural activity and the higher decoding accuracy in the active task are consistent with these findings, suggesting that, because S1 receives movement-related information from motor circuits during active movement, a greater amount of information is available for decoding proprioception.

We also compared the capabilities of different decoding strategies. First, we compared multiple linear regression (MLR) with PLS regression. MLR is a straightforward method widely used to decode movement-related parameters.53 PLS, on the other hand, has been shown to be very effective in neural decoding, as it can handle high-dimensional inputs and linearly decode the target variables with high accuracy. Our results show that the PLS method significantly outperforms MLR in both active and passive tasks. The accuracies of decoding kinematic and kinetic parameters with these two methods, in terms of R2, are summarized in Table 2. Because PLS transforms the neural and movement-related parameters and performs the estimation on latent variables, the higher accuracy obtained by this model may indicate that studies focusing on the encoding of these parameters in S1 could develop more accurate models by exploiting such latent variables.

In addition, we developed a state-based method in which the direction of movement is first detected by a classifier, and continuous decoding is then performed by a decoder trained on data from the detected direction; its results were compared with those of the conventional method, in which a single decoder is trained on data from all conditions. In the state-based method, four specialized decoders, one per direction, are trained for each movement-related parameter. As expected, this method significantly outperformed the conventional method in active mode for almost all parameters except shoulder adduction (see Figure 4B). However, the accuracy obtained by the conventional method was significantly higher in passive mode for most of the parameters (see Figure 4D). As direction-classification accuracy does not differ between the two modes, this performance difference may be due solely to the distinct encoding of movement in different directions in active mode, which the decoding model could capture. This may mean that a portion of the information conveyed by the motor circuits during the active task describes the details of movement in each direction in a particular and distinct manner. Furthermore, the continuous decoder in active mode reaches its maximum performance with far fewer PLS components than in passive mode, indicating that the PLS latent variables more effectively explain the neural activity related to movement parameters during active movement.

Figures 8A and 8C show the classification and regression accuracies obtained by using only one neuron as the predictor, and Figure 8B shows the contribution of each neuron to classification. In monkey H, classification accuracy decreases from the anterior-medial side of the array to the posterior-lateral side; this pattern was not observed in monkey C, possibly because of the lower number of detected single units. The same pattern is observable in the contribution of neurons to maximizing classification accuracy (Figure 8B). By contrast, analyzing the regression accuracy of each neuron in terms of the correlation coefficient (Figure 8C) revealed no relationship between neuron location and regression accuracy. We further investigated the relationship between neurons' receptive fields and their ability to decode hand movements, either continuously (Figure 8H) or discretely (Figures 8D and 8F). In the active task, neurons with shoulder, elbow, and hand receptive fields had the highest continuous decoding accuracies, in that order; in the passive task, shoulder, elbow, and torso neurons were the most accurate. Hand, shoulder, and humerus neurons had the highest classification accuracies and contributions in both active and passive tasks. In addition, neurons with deep receptive fields, which respond to joint movement or muscle palpation, were more informative than cutaneous ones, which respond to being brushed or stretched, as expected (Figure 8E). These results show that neurons with proximal deep receptive fields provide more information for decoding than distal ones, except for hand neurons in discrete decoding, which may be surprising: we had expected neurons with hand, wrist, and forearm receptive fields to contribute more than torso and shoulder neurons. These results may indicate that the activity of neurons in area 2 of S1 provides more proprioceptive information about the configuration of the whole arm than about any specific part of it, consistent with the results of Chowdhury et al.30 It is also worth noting that the number of electrodes with each receptive field may affect these results (Figure 8J).

Figure 8.

Figure 8

Contribution of neurons in decoding

(A) Single unit accuracy in discrete and continuous decoding. The classification accuracy was obtained by using only one neuron to classify the movement direction. Best classification accuracy is demonstrated in cases where more than one single unit was detected from one recording channel. Squares with dot patterns demonstrate non-recording sites.

(B) The contribution of neurons of each electrode in the selected features (MI feature selection method) for classification. In the color map, one means that the features of a neuron in the respective electrode were present in selected features in all folds/runs and zero indicates that features from the respective electrode were not selected in any folds/runs.

(C) The correlation coefficient of continuously decoding x-position using only one neuron.

(D) The mean classification accuracies obtained by single neurons in each receptive field are shown for each subject, session, and task. The red dotted line indicates the chance level.

(E) The performance of neurons with the modality of each receptive field. The mean classification accuracies obtained by single neurons in each modality field (deep or cutaneous). The red dotted line indicates the chance level.

(F) The contribution of neurons of each receptive field in maximizing the classification accuracy.

(G) The contribution of neurons of each modality field in maximizing the classification accuracy.

(H) The mean correlation coefficient of continuously decoding x-position using each neuron in each receptive field.

(I) The mean correlation coefficient of continuously decoding x-position using each neuron in each modality field.

(J) Number of channels in each receptive field and modality field.

In this study, we focused on decoding movement-related parameters using MLR, as a baseline method, and PLS, as a high-performance method. PLS regression is well known for its ability to avoid overfitting and handle high-dimensional features, and it has been used in BCI studies with promising results for both offline and online decoding.16,54,55,56 In addition, we compared our PLS results with those of other decoding methods, namely the Kalman filter, iteratively reweighted least squares (IRLS) regression, support vector regression (SVR), and decision-tree regression, all of which are widely used in machine learning and decoding studies. PLS significantly outperformed the other methods in the active task. However, in the passive task for monkey C, the Kalman filter, IRLS, and SVR performed better than, or not significantly differently from, PLS in the different parameter categories (position and velocity, force and moment, and joint angle). The difference between the decoding accuracies obtained by PLS and the other methods is significantly larger for the monkey H data, which contains more detected single units, highlighting the capability of PLS regression in working with high-dimensional predictors.

Decoding kinematic parameters of hand movement using somatosensory cortex signals has been investigated in several studies. Weber et al.41 decoded hand position, velocity, and acceleration with a Wiener filter using area 2 of S1 neural activity, with relatively high accuracies in terms of R2. Glaser et al.57 compared the accuracy of several decoding methods using motor cortex, area 2 of S1, and hippocampus data; the best accuracy on the area 2 of S1 dataset, from which they decoded hand velocity, was obtained by a long short-term memory (LSTM) neural network and an ensemble decoder combining the predictions of eight decoders, with a peak accuracy of 0.86 and mean accuracies below 0.8 (in terms of R2). Gallego et al.58 investigated the robustness of cortical population dynamics and used latent variables to decode movement parameters from M1, PMd, and area 2 of S1 neural activity; they decoded hand velocity using latent variables and a Wiener filter and obtained high decoding accuracies on the area 2 of S1 dataset. Keshtkaran et al.59 also decoded hand-movement parameters from M1, area 2 of S1, and dorsomedial frontal cortex signals using latent variables, likewise with high accuracy on the area 2 of S1 dataset. Our decoding accuracies are higher than those obtained in these studies, which may demonstrate the effectiveness of state-based decoders and PLS latent variables in decoding studies. It is worth mentioning that decoding movement parameters was not the main focus of the studies above, except for Glaser et al.,57 and that the decoding of kinetic and kinematic parameters of hand movement in active and passive tasks was not investigated in previous studies.

Previous studies have investigated the decoding of hand movements using neural activity recorded from motor areas. Wessberg et al.60 decoded three-dimensional hand position using PMd, M1, PPC, and ipsilateral PMd, M1, and PPC (in one monkey) with mean correlation coefficients as high as 0.76. Our results in the active task also outperform those of Flint et al.,61 in which hand position in the COT was decoded from PMd and M1 neural activity with mean R2 values of 0.80 and 0.62 using spikes and local field potentials, respectively; note that the COT in that study was 8-directional, whereas ours was 4-directional. Carmena et al.22 compared the kinematic and kinetic information available in different cortical areas, namely PMd, the supplementary motor area (SMA), M1, S1, and PPC, by decoding hand position, velocity, and grip force, and concluded that M1 spikes were the best predictors of all three; their results show that S1 neural activity is far less informative about these parameters than M1 in terms of decoding accuracy. Barra et al.62 studied the decoding of hand-movement parameters during a reach-and-grasp task, using the Kalman filter to analyze M1 and S1 neural activity. They decoded continuous arm and hand kinematics with mean R2 values of 0.83 and 0.65 from M1 and S1, respectively, in one monkey, and 0.68 and 0.67, respectively, in the other, so M1 outperformed S1 in decoding these parameters. Our results on decoding kinematic and kinetic parameters using area 2 of S1 neural activity are substantially higher than those obtained in these studies and show that both kinematic and kinetic parameters of hand movement can be decoded from neural activity in this area.
Overall, the results of our study may highlight the richness of the information provided by neurons in area 2 of S1, the effectiveness of state-based decoders, and the capacity of the PLS method in decoding studies.

Limitations of the study

This study has some limitations. The number of isolated single units varied considerably across recording sessions, which makes it difficult to draw general and definitive conclusions about the information available in sub-regions of area 2 of S1. In addition, because the number of neurons with similar receptive fields differed substantially, it was challenging to accurately infer the relationship between receptive field and contribution to decoding. Several factors may explain the differences between our results and those of other studies of decoding from S1 neural activity, namely the implantation site, the number of electrodes, differences in the task, and the decoding methods; these factors may limit the generalizability and extensibility of the results. We did not compare our methods and results with data recorded from other areas, which could have been informative but was not possible with the dataset used in this study. Furthermore, because we used previously recorded data, we could not evaluate our conclusions about stimulation in S1 stimulation experiments, which can be addressed in future studies.

Conclusion

In this study, the decoding of kinematic and kinetic parameters of active and passive hand movement during the COT from area 2 of S1 neural activity was investigated using conventional and state-based decoders. The results show that neuronal activity in area 2 of S1 can be used to decode the position, velocity, forces, moments, and joint angles of the hand with very high accuracy. Decoding accuracy was higher for kinematic parameters, and hand position was decoded more accurately than the other parameters. In general, active trials were decoded better than passive ones, and most of the assessed parameters were decoded more accurately by the state-based decoder in active trials. In passive trials, however, the conventional decoder outperformed the state-based decoder for most parameters.

STAR★Methods

Key resources table

REAGENT or RESOURCE SOURCE IDENTIFIER
Software and algorithms

MATLAB (2019a) MathWorks RRID:SCR_001622
GraphPad Prism http://www.graphpad.com/ RRID:SCR_002798
Custom software code This paper GitHub: https://github.com/AlavieMirfathollahi/S1-COT-Decoding

Deposited data

Data from: Area 2 of primary somatosensory cortex encodes kinematics of the whole arm Chowdhury et al.30 Dryad: https://doi.org/10.5061/dryad.nk98sf7q7

Resource availability

Lead contact

Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Mohammad Reza Daliri (daliri@iust.ac.ir).

Materials availability

This study did not generate new materials.

Method details

Data description

This public dataset was introduced by Chowdhury et al., and the procedures for intracortical array implantation, the behavioral task, and neural signal recording are explained in detail in Chowdhury et al.30 Here, we present the details relevant to the present study. First, we describe the behavioral task and neural signal recording. We then explain the methods used for decoding the behavioral states (hand-movement directions) and, finally, the approach for integrating the discrete state decoder into the continuous parameter decoder.

Three rhesus macaque monkeys were used in the experiments and neural recordings, but only two performed the COT investigated in this study. All surgical and experimental procedures were conducted in accordance with the Guide for the Care and Use of Laboratory Animals and approved by the Institutional Animal Care and Use Committee of Northwestern University under protocol #IS00000367.30 Micro-electrode arrays (100 electrodes, Blackrock Microsystems) implanted in the arm representation of area 2 of S1 in the two monkeys were used for neural recording. During surgery, the implantation site was confirmed by recording from the cortical surface while the hand and arm were manipulated. Neural data were recorded with a Cerebus recording system (96 channels, 30 kHz, Blackrock). In the recording sessions, after detecting spikes using a threshold (−5× signal RMS) and sorting them using features such as waveform shape and inter-spike interval (Plexon Offline Sorter), the firing rate of each neuron in 10 ms bins was used as input for the decoding algorithms.

Behavioral task

Neural activity was recorded from area 2 of S1 of two monkeys while they used a manipulandum to reach targets presented on a screen workspace (20 cm × 20 cm). The monkeys performed a center-out reaching task in two conditions, active and passive. In the active task, the monkey held a target at the center of the workspace, after which one of four targets (0°, 90°, 180°, and 270°) was presented. When the monkey reached the target correctly, the trial was completed successfully; otherwise, the trial was considered invalid. In passive trials, while the monkey held the target at the center of the workspace, the manipulandum delivered a 2 N perturbation to the monkey’s hand in the direction of one of the targets; after each passive trial, the monkey returned to the center. Only successful trials were considered in this study. Movement onset in the passive task was determined by searching for a peak in handle acceleration after the motor pulse; in the active task, it was determined by starting 200 ms after the go cue and sweeping backward in time until the acceleration fell below 10% of its peak. After each successful passive or active trial, water or juice was given as a reward. The position of the handle (x and y axes), the velocity of the handle (x and y axes), the interaction forces between the monkey’s hand and the handle (x, y, and z axes), and the locations of ten markers of four different colors painted on the outside of the monkey’s arm, used to track joint parameters (shoulder adduction, shoulder rotation, shoulder flexion, elbow flexion, radial pronation, wrist flexion, wrist abduction), were recorded.

Movement direction states classification

As noted in the data description, intracortical signals were recorded from area 2 of S1. The procedures for state-based continuous decoding of movement parameters in the training and test phases are depicted in Figure 1. After recording, the spikes were identified and sorted, and the firing rate of each neuron was calculated in 10 ms bins; these firing-rate signals were the data used in this study. In the preprocessing step, the firing-rate signal was smoothed with a Gaussian kernel:

W(t) = \exp\left[-\frac{1}{2}\left(\frac{\alpha t}{(L-1)/2}\right)^{2}\right]

where W is the Gaussian kernel, L is the window length (L = 11), and α is the width factor (here, α = 2.5); these parameters were determined by cross-validation. In the classification step, a 200 ms window of firing-rate data (starting at movement onset) was extracted, and the median firing rate of each neuron in two 10 ms windows was calculated as features. Linear discriminant analysis (LDA) of the pseudo-linear discriminant type was used to classify the movement direction. The large number of features extracted from the firing-rate information poses a challenge for classification algorithms, because it lengthens training time and can cause overfitting, which reduces classification accuracy. This issue can be addressed with feature selection. The mutual information (MI) method, a criterion expressing the degree to which two variables depend on each other,63 was employed to rank the features in this study. The mutual information was obtained from Equation 2:

MI = \sum_{x \in X} \sum_{l \in L} p(x, l)\, \log\left(\frac{p(x, l)}{p(x)\, p(l)}\right)

where x represents a feature and l a class label. During feature selection, the mutual information between each feature and the class labels was calculated; large MI values indicate a high degree of dependency between a feature and the class labels, and thus greater usefulness for classification. Features were ranked by their MI with the class labels, and the optimal number of top-ranked features was determined by 5-fold cross-validation on the training data. The selected, most discriminative features were then fed into the LDA classifier. Classification accuracy was calculated using 5-fold cross-validation repeated ten times after shuffling the order of trials.
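The classification pipeline above (Gaussian smoothing, MI-based feature ranking, LDA with 5-fold cross-validation) can be sketched on synthetic firing rates. This is a minimal illustration, not the study's code: scikit-learn's `LinearDiscriminantAnalysis` and `mutual_info_classif` stand in for the pseudo-linear LDA and MI ranking described, and the trial counts, neuron counts, and tuning model are invented.

```python
import numpy as np
from scipy.ndimage import convolve1d
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Gaussian kernel from the text: window length L = 11, width factor alpha = 2.5
L, alpha = 11, 2.5
t = np.arange(L) - (L - 1) / 2
kernel = np.exp(-0.5 * (alpha * t / ((L - 1) / 2)) ** 2)
kernel /= kernel.sum()

# Synthetic data: 200 trials x 60 neurons x 20 bins of 10 ms (a 200 ms window)
n_trials, n_neurons, n_bins = 200, 60, 20
labels = rng.integers(0, 4, n_trials)                  # 4 movement directions
rates = rng.poisson(2.0, (n_trials, n_neurons, n_bins)).astype(float)
rates += labels[:, None, None] * rng.random(n_neurons)[None, :, None]  # toy tuning

smoothed = convolve1d(rates, kernel, axis=-1)

# Median firing rate of each neuron in two half-windows -> 2N features per trial
feats = np.concatenate([np.median(smoothed[..., :n_bins // 2], axis=-1),
                        np.median(smoothed[..., n_bins // 2:], axis=-1)], axis=1)

# Rank features by mutual information with the class labels, keep the top 40
mi = mutual_info_classif(feats, labels, random_state=0)
top = np.argsort(mi)[::-1][:40]

# 5-fold cross-validated direction classification with LDA
acc = cross_val_score(LinearDiscriminantAnalysis(), feats[:, top], labels, cv=5).mean()
print(f"direction classification accuracy: {acc:.2f}")
```

In practice, the number of retained features would itself be chosen by nested cross-validation on the training folds, as described above.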

State-based continuous movement decoding

In this phase, seventeen continuous movement parameters, comprising hand position (x and y axes), hand velocity (x and y axes), force (x, y, and z axes), moment (x, y, and z axes), and seven joint angles (shoulder adduction, shoulder rotation, shoulder flexion, elbow flexion, radial pronation, wrist flexion, and wrist abduction), were decoded continuously from the firing-rate signals using the partial least squares (PLS) regression algorithm64 and multiple linear regression (MLR).65 For continuous decoding, the firing-rate signal was smoothed with a gamma kernel in the preprocessing step:

R(t) = \begin{cases} (t - t_s)^{\alpha - 1}\, \beta^{\alpha}\, \exp(-\beta (t - t_s)) / \Gamma(\alpha) & \text{if } t \ge t_s \\ 0 & \text{if } t < t_s \end{cases}

The shape (α = 1.5) and rate (β = 11) parameters were chosen to achieve a small delay. This procedure ensured that the resulting firing rate was smooth, continuous, and causal, meaning that the value at any time point was influenced only by spikes that occurred before that point in time.66 The gamma kernel was convolved with the firing rate of each neuron to produce the regression features, yielding N features for the N neurons recorded in each session. Two categories of features were then removed to reduce the computational load: 1) zero-variance features (some neurons fired very sparsely, and such features carry no information), and 2) redundant features; to remove redundancy, a correlation test was used, and features with a correlation greater than 0.98 were eliminated. The remaining features were fed into the PLS model. PLS is an appropriate algorithm for high-dimensional regression problems: by maximizing the covariance between the projected input and output data, both are projected into a new low-dimensional subspace, so PLS captures the input and output components that maximize covariance while ignoring non-output-related components due to noise. During training, the features corresponding to each movement direction were concatenated, and a PLS model was fit between the input neural features and the output parameters (seventeen outputs) for that direction; a separate PLS model was obtained for each class in the same way. During testing, the continuous output signal was decoded by selecting the PLS model corresponding to the class (one of four hand-movement directions) identified by the LDA.
This decoder is called a state-based decoder, and it demonstrates how a state decoder can be combined with PLS regression:

Y_i(t) = \sum_{j=1}^{N} \beta_i(j)\, S_i(j)

where Y_i(t) represents the output signal associated with state i. The PLS algorithm's optimization was used to determine the regression coefficients β_i for each state.64 The input feature matrix S_i is multiplied by the corresponding β_i for each neuron j (j = 1, 2, …, N).

The MLR method, in addition to the PLS algorithm, was considered to investigate the effect of combining state detectors with continuous regression strategies.

\hat{Y} = b_0 + b_1 X_1 + b_2 X_2 + \cdots + b_N X_N

where Ŷ is the decoded output (the expected value of the dependent variable), X_1 through X_N are N distinct independent (predictor) variables, b_0 is the value of Y when all of the independent variables are zero, and b_1 through b_N are the estimated regression coefficients. Each regression coefficient represents the change in Y for a one-unit change in the respective independent variable, holding all other independent variables constant; statistical tests can be performed to assess whether each coefficient differs significantly from zero.
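The MLR model above is ordinary least squares with an intercept term; a minimal numpy sketch on made-up predictors (the coefficient values and noise level are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic predictors (firing-rate features) and a known linear target
N, T = 5, 200
X = rng.normal(size=(T, N))
b_true = np.array([0.5, -1.0, 2.0, 0.0, 1.5])
y = 3.0 + X @ b_true + 0.05 * rng.normal(size=T)

# Fit Y = b0 + b1*X1 + ... + bN*XN by ordinary least squares
A = np.column_stack([np.ones(T), X])            # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b = coef[0], coef[1:]
```

With low noise, the estimates recover b_0 ≈ 3.0 and the true coefficients, which is the sense in which each b_i captures the change in Y per unit change in X_i with the other predictors held fixed.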

Pearson’s correlation coefficients (R) and the coefficient of determination (R2) between the actual and decoded parameters were calculated to evaluate the performance of the decoder:

R = \frac{\sum_{t=1}^{T} (Y(t) - \bar{Y})(\hat{Y}(t) - \bar{\hat{Y}})}{\sqrt{\sum_{t=1}^{T} (Y(t) - \bar{Y})^2}\, \sqrt{\sum_{t=1}^{T} (\hat{Y}(t) - \bar{\hat{Y}})^2}}
R^2 = 1 - \frac{\sum_{t=1}^{T} (Y(t) - \hat{Y}(t))^2}{\sum_{t=1}^{T} (Y(t) - \bar{Y})^2}
RMSE = \sqrt{\frac{\sum_{t=1}^{T} (Y(t) - \hat{Y}(t))^2}{T}}

where Y(t) and Ŷ(t) are the actual and decoded outputs at time sample t, respectively, and \bar{Y} and \bar{\hat{Y}} are the averages of the actual and decoded output signals over a test fold with T time samples. We used a 5-fold cross-validation method with ten repetitions, shuffling the order of trials, to evaluate our decoder under different data combinations.
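The three evaluation metrics follow directly from the definitions above; a small sketch (the helper name `decoding_metrics` is ours, not from the paper):

```python
import numpy as np

def decoding_metrics(y_true, y_pred):
    """Pearson's R, coefficient of determination R^2, and RMSE, per the definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    r = np.corrcoef(y_true, y_pred)[0, 1]                 # Pearson's R
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                            # coefficient of determination
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return r, r2, rmse

# Perfect decoding gives R = 1, R^2 = 1, RMSE = 0
y = np.array([1.0, 2.0, 3.0, 4.0])
print(decoding_metrics(y, y))
```

Note that a constant offset in the prediction leaves R at 1 while lowering R^2 and raising RMSE, which is why all three metrics are reported.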

Quantification and statistical analysis

Statistical analysis was performed with GraphPad Prism 9 (GraphPad Software Inc., San Diego, CA) on the outputs of the 5-fold cross-validation repeated ten times. The Wilcoxon signed-rank test was used to compare the predictions of the two decoding approaches, the conventional and state-based decoders, for each movement parameter. The false discovery rate (FDR) method was used for multiple-comparisons correction. The significance level was set at P = 0.005.
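The statistical procedure can be sketched with SciPy: one Wilcoxon signed-rank test per movement parameter on paired cross-validation accuracies, followed by a hand-rolled Benjamini-Hochberg FDR correction. The accuracy values below are simulated, not the study's results, and the Prism implementation may differ in detail.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)

# Simulated paired accuracies: 10x repeated 5-fold CV (50 values) per parameter
n_params, n_folds = 17, 50
conventional = rng.normal(0.80, 0.02, (n_params, n_folds))
state_based = conventional + 0.03 + rng.normal(0.0, 0.01, (n_params, n_folds))

# One Wilcoxon signed-rank test per movement parameter (paired samples)
pvals = np.array([wilcoxon(state_based[i], conventional[i]).pvalue
                  for i in range(n_params)])

# Benjamini-Hochberg FDR correction at level q
q = 0.005
order = np.argsort(pvals)
thresh = q * np.arange(1, n_params + 1) / n_params
passed = pvals[order] <= thresh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
significant = np.zeros(n_params, dtype=bool)
significant[order[:k]] = True
print(f"{significant.sum()} of {n_params} parameters significant after FDR")
```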

Acknowledgments

The authors thank Abed Khorasani, Raeed H Chowdhury, Joshua I Glaser for their assistance and for providing additional information about the dataset, and Ataollah Mirfathollahi for his assistance in interpreting the results.

Author contributions

A.M., M.T.G., V.S., M.R.Z., and M.R.D. conceptualized the study; A.M. and M.T.G. analyzed the data; A.M. and M.T.G. wrote the manuscript; all authors interpreted the results, reviewed, edited, and approved the final version of the manuscript.

Declaration of interests

The authors declare no competing interests.

Published: September 1, 2023

Data and code availability

  • This paper analyzes existing, publicly available data. The accession number for the dataset is listed in the key resources table.

  • All original code has been deposited at GitHub and is publicly available as of the date of publication. The repository link is listed in the key resources table.

  • Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

References

  • 1.Snoek G.J., IJzerman M.J., Hermens H.J., Maxwell D., Biering-Sorensen F. Survey of the needs of patients with spinal cord injury: impact and priority for improvement in hand function in tetraplegics. Spinal Cord. 2004;42:526–532. doi: 10.1038/sj.sc.3101638.
  • 2.Hatsopoulos N.G., Donoghue J.P. The Science of Neural Interface Systems. Annu. Rev. Neurosci. 2009;32:249–266. doi: 10.1146/annurev.neuro.051508.135241.
  • 3.Taylor D.M., Tillery S.I.H., Schwartz A.B. Direct Cortical Control of 3D Neuroprosthetic Devices. Science. 2002;296:1829–1832. doi: 10.1126/science.1070291.
  • 4.Willett F.R., Young D.R., Murphy B.A., Memberg W.D., Blabe C.H., Pandarinath C., Stavisky S.D., Rezaii P., Saab J., Walter B.L., et al. Principled BCI Decoder Design and Parameter Selection Using a Feedback Control Model. Sci. Rep. 2019;9:8881. doi: 10.1038/s41598-019-44166-7.
  • 5.Hochberg L.R., Bacher D., Jarosiewicz B., Masse N.Y., Simeral J.D., Vogel J., Haddadin S., Liu J., Cash S.S., van der Smagt P., Donoghue J.P. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012;485:372–375. doi: 10.1038/nature11076.
  • 6.Collinger J.L., Wodlinger B., Downey J.E., Wang W., Tyler-Kabara E.C., Weber D.J., McMorland A.J.C., Velliste M., Boninger M.L., Schwartz A.B. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet. 2013;381:557–564. doi: 10.1016/S0140-6736(12)61816-9.
  • 7.Wodlinger B., Downey J.E., Tyler-Kabara E.C., Schwartz A.B., Boninger M.L., Collinger J.L. Ten-dimensional anthropomorphic arm control in a human brain−machine interface: difficulties, solutions, and limitations. J. Neural. Eng. 2015;12 doi: 10.1088/1741-2560/12/1/016011.
  • 8.Bouton C.E., Shaikhouni A., Annetta N.V., Bockbrader M.A., Friedenberg D.A., Nielson D.M., Sharma G., Sederberg P.B., Glenn B.C., Mysiw W.J., et al. Restoring cortical control of functional movement in a human with quadriplegia. Nature. 2016;533:247–250. doi: 10.1038/nature17435.
  • 9.Ajiboye A.B., Willett F.R., Young D.R., Memberg W.D., Murphy B.A., Miller J.P., Walter B.L., Sweet J.A., Hoyen H.A., Keith M.W., et al. Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration. Lancet. 2017;389:1821–1830. doi: 10.1016/S0140-6736(17)30601-3.
  • 10.Golub M.D., Chase S.M., Batista A.P., Yu B.M. Brain–computer interfaces for dissecting cognitive processes underlying sensorimotor control. Curr. Opin. Neurobiol. 2016;37:53–58. doi: 10.1016/j.conb.2015.12.005.
  • 11.Zhuang J., Truccolo W., Vargas-Irwin C., Donoghue J.P. Decoding 3-D Reach and Grasp Kinematics From High-Frequency Local Field Potentials in Primate Primary Motor Cortex. IEEE Trans. Biomed. Eng. 2010;57:1774–1784. doi: 10.1109/TBME.2010.2047015.
  • 11.Zhuang J., Truccolo W., Vargas-Irwin C., Donoghue J.P., Zhuang J., Truccolo W., Vargas-Irwin C., Donoghue J.P. Decoding 3-D Reach and Grasp Kinematics From High-Frequency Local Field Potentials in Primate Primary Motor Cortex. IEEE Trans. Biomed. Eng. 2010;57:1774–1784. doi: 10.1109/TBME.2010.2047015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Aggarwal V., Mollazadeh M., Davidson A.G., Schieber M.H., Thakor N.V. State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements. J. Neurophysiol. 2013;109:3067–3081. doi: 10.1152/jn.01038.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Okorokova E.V., Goodman J.M., Hatsopoulos N.G., Bensmaia S.J. Decoding hand kinematics from population responses in sensorimotor cortex during grasping. J. Neural. Eng. 2020;17:046035. doi: 10.1088/1741-2552/ab95ea. [DOI] [PubMed] [Google Scholar]
  • 14.Bansal A.K., Truccolo W., Vargas-Irwin C.E., Donoghue J.P. Decoding 3D reach and grasp from hybrid signals in motor and premotor cortices: spikes, multiunit activity, and local field potentials. J. Neurophysiol. 2012;107:1337–1355. doi: 10.1152/jn.00781.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Mirfathollahi A., Ghodrati M.T., Shalchyan V., Daliri M.R. Decoding locomotion speed and slope from local field potentials of rat motor cortex. Comput. Methods Programs Biomed. 2022;223 doi: 10.1016/j.cmpb.2022.106961. [DOI] [PubMed] [Google Scholar]
  • 16.Khorasani A., Heydari Beni N., Shalchyan V., Daliri M.R. Continuous Force Decoding from Local Field Potentials of the Primary Motor Cortex in Freely Moving Rats. Sci. Rep. 2016;6 doi: 10.1038/srep35238. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Fagg A.H., Ojakangas G.W., Miller L.E., Hatsopoulos N.G. Kinetic Trajectory Decoding Using Motor Cortical Ensembles. IEEE Trans. Neural Syst. Rehabil. Eng. 2009;17:487–496. doi: 10.1109/TNSRE.2009.2029313. [DOI] [PubMed] [Google Scholar]
  • 18.Flint R.D., Wang P.T., Wright Z.A., King C.E., Krucoff M.O., Schuele S.U., Rosenow J.M., Hsu F.P.K., Liu C.Y., Lin J.J., et al. Extracting kinetic information from human motor cortical signals. Neuroimage. 2014;101:695–703. doi: 10.1016/j.neuroimage.2014.07.049. [DOI] [PubMed] [Google Scholar]
  • 19.Gupta R., Ashe J. Offline decoding of end-point forces using neural ensembles: Application to a brain machine interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2009;17:254–262. doi: 10.1109/TNSRE.2009.2023290. [DOI] [PubMed] [Google Scholar]
  • 20.Suminski A.J., Willett F.R., Fagg A.H., Bodenhamer M., Hatsopoulos N.G., Willett F.R., Bodenhamer M., Hatsopoulos N.G., Fagg A.H., Bodenhamer M., et al. 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2011. Continuous decoding of intended movements with a hybrid kinetic and kinematic brain machine interface; pp. 5802–5806. [DOI] [PubMed] [Google Scholar]
  • 21.Suminski A.J., Fagg A.H., Willett F.R., Bodenhamer M., Hatsopoulos N.G. 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) IEEE; 2013. Online adaptive decoding of intended movements with a hybrid kinetic and kinematic brain machine interface; pp. 1583–1586. [DOI] [PubMed] [Google Scholar]
  • 22.Carmena J.M., Lebedev M.A., Crist R.E., O’Doherty J.E., Santucci D.M., Dimitrov D.F., Patil P.G., Henriquez C.S., Nicolelis M.A.L. Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol. 2003;1:e42. doi: 10.1371/journal.pbio.0000042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Scott S.H. Optimal feedback control and the neural basis of volitional motor control. Nat. Rev. Neurosci. 2004;5:532–546. doi: 10.1038/nrn1427. [DOI] [PubMed] [Google Scholar]
  • 24.Soechting J.F., Flanders M. Errors in pointing are due to approximations in sensorimotor transformations. J. Neurophysiol. 1989;62:595–608. doi: 10.1152/jn.1989.62.2.595. [DOI] [PubMed] [Google Scholar]
  • 25.Ghez C., Sainburg R. Proprioceptive control of interjoint coordination. Can. J. Physiol. Pharmacol. 1995;73:273–284. doi: 10.1139/y95-038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.London B.M., Miller L.E. Responses of somatosensory area 2 neurons to actively and passively generated limb movements. J. Neurophysiol. 2013;109:1505–1513. doi: 10.1152/jn.00372.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Prud’homme M.J., Kalaska J.F. Proprioceptive activity in primate primary somatosensory cortex during active arm reaching movements. J. Neurophysiol. 1994;72:2280–2301. doi: 10.1152/jn.1994.72.5.2280. [DOI] [PubMed] [Google Scholar]
  • 28.Gardner E.P., Costanzo R.M. Properties of kinesthetic neurons in somatosensory cortex of awake monkeys. Brain Res. 1981;214:301–319. doi: 10.1016/0006-8993(81)91196-3. [DOI] [PubMed] [Google Scholar]
  • 29.Goodman J.M., Tabot G.A., Lee A.S., Suresh A.K., Rajan A.T., Hatsopoulos N.G., Bensmaia S. Postural Representations of the Hand in the Primate Sensorimotor Cortex. Neuron. 2019;104:1000–1009.e7. doi: 10.1016/j.neuron.2019.09.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Chowdhury R.H., Glaser J.I., Miller L.E. Area 2 of primary somatosensory cortex encodes kinematics of the whole arm. Elife. 2020;9 doi: 10.7554/eLife.48198. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Armenta Salas M., Bashford L., Kellis S., Jafari M., Jo H., Kramer D., Shanfield K., Pejsa K., Lee B., Liu C.Y., Andersen R.A. Proprioceptive and cutaneous sensations in humans elicited by intracortical microstimulation. Elife. 2018;7 doi: 10.7554/eLife.32904. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Flesher S.N., Collinger J.L., Foldes S.T., Weiss J.M., Downey J.E., Tyler-Kabara E.C., Bensmaia S.J., Schwartz A.B., Boninger M.L., Gaunt R.A. Intracortical microstimulation of human somatosensory cortex. Sci. Transl. Med. 2016;8:361ra141. doi: 10.1126/scitranslmed.aaf8083. [DOI] [PubMed] [Google Scholar]
  • 33.Ghodrati M.T., Mirfathollahi A., Shalchyan V., Daliri M.R. Intracortical Hindlimb Brain–Computer Interface Systems: A Systematic Review. IEEE Access. 2023;11:28119–28139. doi: 10.1109/ACCESS.2023.3258969. [DOI] [Google Scholar]
  • 34.Crammond D.J., Kalaska J.F. Prior Information in Motor and Premotor Cortex: Activity During the Delay Period and Effect on Pre-Movement Activity. J. Neurophysiol. 2000;84:986–1005. doi: 10.1152/jn.2000.84.2.986. [DOI] [PubMed] [Google Scholar]
  • 35.Lebedev M.A., O’Doherty J.E., Nicolelis M.A.L. Decoding of Temporal Intervals From Cortical Ensemble Activity. J. Neurophysiol. 2008;99:166–186. doi: 10.1152/jn.00734.2007. [DOI] [PubMed] [Google Scholar]
  • 36.Achtman N., Afshar A., Santhanam G., Yu B.M., Ryu S.I., Shenoy K.V. Free-paced high-performance brain–computer interfaces. J. Neural. Eng. 2007;4:336–347. doi: 10.1088/1741-2560/4/3/018. [DOI] [PubMed] [Google Scholar]
  • 37.Kemere C., Santhanam G., Yu B.M., Afshar A., Ryu S.I., Meng T.H., Shenoy K.V. Detecting Neural-State Transitions Using Hidden Markov Models for Motor Cortical Prostheses. J. Neurophysiol. 2008;100:2441–2452. doi: 10.1152/jn.00924.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Hwang E.J., Andersen R.A. Brain Control of Movement Execution Onset Using Local Field Potentials in Posterior Parietal Cortex. J. Neurosci. 2009;29:14363–14370. doi: 10.1523/JNEUROSCI.2081-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Ahmadi A., Khorasani A., Shalchyan V., Daliri M.R. State-Based Decoding of Force Signals From Multi-Channel Local Field Potentials. IEEE Access. 2020;8:159089–159099. doi: 10.1109/ACCESS.2020.3019267. [DOI] [Google Scholar]
  • 40.Kao J.C., Nuyujukian P., Ryu S.I., Shenoy K.V. A High-Performance Neural Prosthesis Incorporating Discrete State Selection With Hidden Markov Models. IEEE Trans. Biomed. Eng. 2017;64:935–945. doi: 10.1109/TBME.2016.2582691. [DOI] [PubMed] [Google Scholar]
  • 41.Weber D.J., London B.M., Hokanson J.A., Ayers C.A., Gaunt R.A., Torres R.R., Zaaimi B., Miller L.E. Limb-State Information Encoded by Peripheral and Central Somatosensory Neurons: Implications for an Afferent Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2011;19:501–513. doi: 10.1109/TNSRE.2011.2163145. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Sombeck J.T., Miller L.E. Short reaction times in response to multi-electrode intracortical microstimulation may provide a basis for rapid movement-related feedback. J. Neural. Eng. 2019;17 doi: 10.1088/1741-2552/ab5cf3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Kuehn E., Mueller K., Turner R., Schütz-Bosbach S. The functional architecture of S1 during touch observation described with 7 T fMRI. Brain Struct. Funct. 2014;219:119–140. doi: 10.1007/s00429-012-0489-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Jafari M., Aflalo T., Chivukula S., Kellis S.S., Salas M.A., Norman S.L., Pejsa K., Liu C.Y., Andersen R.A. The human primary somatosensory cortex encodes imagined movement in the absence of sensory information. Commun. Biol. 2020;3:757. doi: 10.1038/s42003-020-01484-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Yakovlev L., Syrov N., Miroshnikov A., Lebedev M., Kaplan A. Event-Related Desynchronization Induced by Tactile Imagery: an EEG Study. eneuro. 2023;10 doi: 10.1523/ENEURO.0455-22.2023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Yao L., Sheng X., Mrachacz-Kersting N., Zhu X., Farina D., Jiang N. Decoding Covert Somatosensory Attention by a BCI System Calibrated With Tactile Sensation. IEEE Trans. Biomed. Eng. 2018;65:1689–1695. doi: 10.1109/TBME.2017.2762461. [DOI] [PubMed] [Google Scholar]
  • 47.Umeda T., Isa T., Nishimura Y. The somatosensory cortex receives information about motor output. Sci. Adv. 2019;5:eaaw5388. doi: 10.1126/sciadv.aaw5388. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Jiang W., Chapman C.E., Lamarre Y. Modulation of somatosensory evoked responses in the primary somatosensory cortex produced by intracortical microstimulation of the motor cortex in the monkey. Exp. Brain Res. 1990;80:333–344. doi: 10.1007/BF00228160. [DOI] [PubMed] [Google Scholar]
  • 49.Khateb M., Schiller J., Schiller Y. Feedforward motor information enhances somatosensory responses and sharpens angular tuning of rat S1 barrel cortex neurons. Elife. 2017;6 doi: 10.7554/eLife.21843. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Zagha E., Casale A.E., Sachdev R.N.S., McGinley M.J., McCormick D.A. Motor Cortex Feedback Influences Sensory Processing by Modulating Network State. Neuron. 2013;79:567–578. doi: 10.1016/j.neuron.2013.06.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Seki K., Fetz E.E. Gating of Sensory Input at Spinal and Cortical Levels during Preparation and Execution of Voluntary Movement. J. Neurosci. 2012;32:890–902. doi: 10.1523/JNEUROSCI.4958-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Starr A., Cohen L.G. ‘Gating’ of somatosensory evoked potentials begins before the onset of voluntary movement in man. Brain Res. 1985;348:183–186. doi: 10.1016/0006-8993(85)90377-4. [DOI] [PubMed] [Google Scholar]
  • 53.Bradberry T.J., Gentili R.J., Contreras-Vidal J.L. Reconstructing Three-Dimensional Hand Movements from Noninvasive Electroencephalographic Signals. J. Neurosci. 2010;30:3432–3437. doi: 10.1523/JNEUROSCI.6107-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Khorasani A., Foodeh R., Shalchyan V., Daliri M.R. Brain Control of an External Device by Extracting the Highest Force-Related Contents of Local Field Potentials in Freely Moving Rats. IEEE Trans. Neural Syst. Rehabil. Eng. 2018;26:18–25. doi: 10.1109/TNSRE.2017.2751579. [DOI] [PubMed] [Google Scholar]
  • 55.Bundy D.T., Pahwa M., Szrama N., Leuthardt E.C. Decoding three-dimensional reaching movements using electrocorticographic signals in humans. J. Neural. Eng. 2016;13 doi: 10.1088/1741-2560/13/2/026021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Lorach H., Galvez A., Spagnolo V., Martel F., Karakas S., Intering N., Vat M., Faivre O., Harte C., Komi S., et al. Walking naturally after spinal cord injury using a brain–spine interface. Nature. 2023;618:126–133. doi: 10.1038/s41586-023-06094-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Glaser J.I., Benjamin A.S., Chowdhury R.H., Perich M.G., Miller L.E., Kording K.P. Machine learning for neural decoding. eNeuro. 2020;7 doi: 10.1523/ENEURO.0506-19.2020. ENEURO.0506–19.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Gallego J.A., Perich M.G., Chowdhury R.H., Solla S.A., Miller L.E. Long-term stability of cortical population dynamics underlying consistent behavior. Nat. Neurosci. 2020;23:260–270. doi: 10.1038/s41593-019-0555-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Keshtkaran M.R., Sedler A.R., Chowdhury R.H., Tandon R., Basrai D., Nguyen S.L., Sohn H., Jazayeri M., Miller L.E., Pandarinath C. A large-scale neural network training framework for generalized estimation of single-trial population dynamics. Nat. Methods. 2022;19:1572–1577. doi: 10.1038/s41592-022-01675-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Wessberg J., Stambaugh C.R., Kralik J.D., Beck P.D., Laubach M., Chapin J.K., Kim J., Biggs S.J., Srinivasan M.A., Nicolelis M.A. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature. 2000;408:361–365. doi: 10.1038/35042582. [DOI] [PubMed] [Google Scholar]
  • 61.Flint R.D., Lindberg E.W., Jordan L.R., Miller L.E., Slutzky M.W. Accurate decoding of reaching movements from field potentials in the absence of spikes. J. Neural. Eng. 2012;9 doi: 10.1088/1741-2560/9/4/046006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Barra B., Badi M., Perich M.G., Conti S., Mirrazavi Salehian S.S., Moreillon F., Bogaard A., Wurth S., Kaeser M., Passeraub P., et al. A versatile robotic platform for the design of natural, three-dimensional reaching and grasping tasks in monkeys. J. Neural. Eng. 2019;17 doi: 10.1088/1741-2552/ab4c77. [DOI] [PubMed] [Google Scholar]
  • 63.Pohjalainen J., Räsänen O., Kadioglu S. Feature selection methods and their combinations in high-dimensional classification of speaker likability, intelligibility and personality traits. Comput. Speech Lang. 2015;29:145–171. doi: 10.1016/j.csl.2013.11.004. [DOI] [Google Scholar]
  • 64.Geladi P., Kowalski B.R. Partial least-squares regression: a tutorial. Anal. Chim. Acta X. 1986;185:1–17. doi: 10.1016/0003-2670(86)80028-9. [DOI] [Google Scholar]
  • 65.Chatterjee S., Hadi A.S. Influential Observations, High Leverage Points, and Outliers in Linear Regression. Stat. Sci. 1986;1 doi: 10.1214/ss/1177013622. [DOI] [Google Scholar]
  • 66.Baumann M.A., Fluet M.-C., Scherberger H. Context-Specific Grasp Movement Representation in the Macaque Anterior Intraparietal Area. J. Neurosci. 2009;29:6436–6448. doi: 10.1523/JNEUROSCI.5479-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
