eLife. 2023 May 31;12:e81499. doi: 10.7554/eLife.81499

Figure 5. Analysis of single unit tuning properties for spatial-temporal models.

(A) Polar scatter plots showing unit activation (radius r) as a function of end-effector direction (angle θ) for directionally tuned units in different layers of the top-performing spatial-temporal model trained on the action recognition task; direction is that of the end-effector while tracing characters in the model workspace. Activation strengths are shown for one (velocity-dependent) muscle spindle and one unit each in layers 3, 5, and 8. (B) Similar to (A), except that the radius now describes velocity and color represents activation strength. Contours are obtained by linear interpolation, with gaps filled by nearest-neighbor interpolation and smoothed with a Gaussian filter. Examples are shown for one muscle spindle and one unit each in layers 3, 5, and 8. (C) For each layer of one trained instantiation, units are classified into types based on their tuning. A unit was classified as belonging to a particular type if its tuning had a test R² > 0.2. Tested features were direction tuning, speed tuning, velocity tuning, Cartesian and polar position tuning, acceleration tuning, and label tuning (18/5446 scores excluded for the action recognition task [ART]-trained model, 430/5446 for the trajectory decoding task [TDT]-trained model; see Methods). (D) The same plot but for the spatial-temporal model of the same architecture trained on the trajectory decoding task. (E) For an example instantiation, the distributions of test R² scores for both the ART- and TDT-trained models are shown as vertical histograms (split violins) for five kinds of kinematic tuning for each layer: direction tuning, speed tuning, Cartesian position tuning, polar position tuning, and label specificity, indicated by different shades and arranged left to right for each layer, including spindles. Tuning scores were excluded if they were equal to 1, indicating a constant neuron, or less than −0.1, indicating an improper fit (12/3890 scores excluded for ART, 285/3890 for TDT; see Methods). (F) The means of 90% quantiles over all five model instantiations of models trained on ART and TDT are shown for direction tuning (dark) and position tuning (light). 95% confidence intervals are shown over instantiations (N=5).
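The classification and exclusion rules described for panels (C–E) can be sketched in a few lines. This is an illustrative reading of the caption, not the authors' code: the feature names, the `scores` layout, and the function name are assumptions.

```python
# Hypothetical sketch of the unit-type classification in panel (C):
# a unit is assigned every tuning type whose fit achieves test R^2 > 0.2,
# after discarding degenerate scores (R^2 == 1 for a constant neuron,
# R^2 < -0.1 for an improper fit), as stated in the caption.

R2_THRESHOLD = 0.2       # minimum test R^2 for a tuning type to count
R2_IMPROPER_FIT = -0.1   # scores below this are excluded as improper fits

def classify_unit(scores):
    """scores: dict mapping tuning feature -> test R^2 for one unit.
    Returns the set of tuning types the unit is classified as."""
    kept = {feat: r2 for feat, r2 in scores.items()
            if r2 != 1.0 and r2 >= R2_IMPROPER_FIT}        # exclusion rule
    return {feat for feat, r2 in kept.items() if r2 > R2_THRESHOLD}

unit = {"direction": 0.45, "speed": 0.15, "velocity": 0.31,
        "cartesian_position": -0.5,  # improper fit, excluded
        "label": 1.0}                # constant neuron, excluded
print(sorted(classify_unit(unit)))  # -> ['direction', 'velocity']
```

Note that a unit can belong to several types at once under this rule; the counts of excluded scores quoted in the caption (e.g. 18/5446 for ART) correspond to the scores removed by the exclusion step.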


Figure 5—figure supplement 1. Extended kinematic tuning of single neurons.


(A) For an example spatial-temporal model instantiation, the distributions of test R² scores for both the action recognition task (ART)- and trajectory decoding task (TDT)-trained models are shown for direction, speed, velocity, acceleration, and labels (12/3890 scores excluded over all layers for ART-trained, 290/3890 for TDT-trained; see Methods). (B) The individual traces (faint) as well as the means (dark) of 90% quantiles over all five model instantiations of models trained on action recognition and trajectory decoding are shown for direction tuning (solid line) and acceleration tuning (dashed line). (C, D) Same as (A, B) but for the spatiotemporal model (0/2330 scores excluded for ART-trained, 138/2330 for TDT-trained; see Methods). (E, F) Same as (A, B) but for the long short-term memory (LSTM) model (10/6530 scores excluded for ART-trained, 1052/6530 for TDT-trained; see Methods).
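The quantile summaries plotted in the (B)-style panels, the mean of per-instantiation 90% quantiles with a 95% confidence interval over N=5 instantiations, can be sketched as follows. The normal approximation (1.96 × SEM) for the interval is an assumption; the paper's exact CI procedure may differ.

```python
import numpy as np

def quantile_summary(scores_per_instantiation, q=0.90):
    """scores_per_instantiation: list of 1-D arrays of tuning scores,
    one array per model instantiation (here N=5).
    Returns (mean of q-quantiles, (CI lower, CI upper))."""
    qs = np.array([np.quantile(s, q) for s in scores_per_instantiation])
    mean = qs.mean()
    # 95% CI over instantiations via normal approximation (an assumption)
    sem = qs.std(ddof=1) / np.sqrt(len(qs))
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

# Toy data: five instantiations with synthetic R^2 scores for one layer
rng = np.random.default_rng(0)
layer_scores = [rng.uniform(0.0, 1.0, size=400) for _ in range(5)]
mean, (lo, hi) = quantile_summary(layer_scores)
print(f"mean 90% quantile: {mean:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```

Repeating this per layer and per tuning type yields the curves with confidence bands shown in the figure panels.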
Figure 5—figure supplement 2. Analysis of single unit tuning properties for action recognition task (ART)-trained and untrained models.


(A) For an example instantiation of the top-performing spatial-temporal model, the distributions of test R² scores for both the trained and untrained models are shown for five kinds of kinematic tuning for each layer: direction tuning, speed tuning, Cartesian position tuning, polar position tuning, and label specificity. The solid line connects the 90% quantiles of two of the tuning types, direction tuning (dark) and position tuning (light) (12/3890 scores excluded summed over all layers for ART-trained, 351/3890 for untrained; see Methods). (B) The means of 90% quantiles over all five model instantiations of the ART-trained and untrained models are shown for direction tuning (dark) and position tuning (light). 95% confidence intervals are shown over instantiations (N=5). (C) The same plot as in (A) but for the top-performing spatiotemporal model (0/2330 scores excluded for ART-trained, 60/2330 for untrained; see Methods). (D) The same plot as (B), for the spatiotemporal model. (E) The same plot as in (A) but for the top-performing long short-term memory (LSTM) model (10/6530 scores excluded for ART-trained, 395/6530 for untrained; see Methods). (F) The same plot as (B), for the LSTM model.
Figure 5—figure supplement 3. Analysis of single unit tuning properties for trajectory decoding task (TDT)-trained and untrained models.


(A) For an example instantiation of the top-performing spatial-temporal model, the distributions of test R² scores for both the trained and untrained models are shown for five kinds of kinematic tuning for each layer: direction tuning, speed tuning, Cartesian position tuning, polar position tuning, and label specificity. The solid line connects the 90% quantiles of two of the tuning types, direction tuning (dark) and position tuning (light) (285/3890 scores excluded summed over all layers for TDT-trained, 351/3890 for untrained; see Methods). (B) The means of 90% quantiles over all five model instantiations of the TDT-trained and untrained models are shown for direction tuning (dark) and position tuning (light). 95% confidence intervals are shown over instantiations (N=5). (C) The same plot as in (A) but for the top-performing spatiotemporal model (138/2330 scores excluded for TDT-trained, 60/2330 for untrained; see Methods). (D) The same plot as (B), for the spatiotemporal model. (E) The same plot as in (A) but for the top-performing long short-term memory (LSTM) model (1052/6530 scores excluded for TDT-trained, 395/6530 for untrained; see Methods). (F) The same plot as (B), for the LSTM model.
Figure 5—figure supplement 4. Analysis of single unit tuning properties for spatiotemporal and long short-term memory (LSTM) models.


(A) For an example instantiation of the top-performing spatiotemporal model, the distributions of test R² scores for both the action recognition task (ART)- and trajectory decoding task (TDT)-trained models are shown for five kinds of kinematic tuning for each layer: direction tuning, speed tuning, Cartesian position tuning, polar position tuning, and label specificity. The solid line connects the 90% quantiles of two of the tuning types, direction tuning (dark) and position tuning (light) (0/2330 scores excluded summed over all layers for ART-trained, 138/2330 for TDT-trained; see Methods). (B) The means of 90% quantiles over all five model instantiations of models trained on action recognition and trajectory decoding are shown for direction tuning (dark) and position tuning (light). 95% confidence intervals are shown over instantiations (N=5). (C) The same plot as in (A) but for the top-performing LSTM model (10/6530 scores excluded for ART-trained, 1052/6530 for TDT-trained; see Methods). (D) The same plot as (B), for the LSTM model.