Abstract
The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shift up to and including out‐of‐distribution (OoD) samples. Here, we formulate a new OoD benchmark based on the Human3.6M and Carnegie Mellon University (CMU) motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures to OoD failure by augmenting them with a generative model. When applied to current state‐of‐the‐art discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in‐distribution performance, and can theoretically facilitate model interpretability. We suggest human motion predictors ought to be constructed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures to extreme distributional shift. The code is available at: https://github.com/bouracha/OoDMotion.
Keywords: deep learning, generative models, human motion prediction, variational autoencoders
1. INTRODUCTION
Human motion is naturally intelligible as a time‐varying graph of connected joints constrained by locomotor anatomy and physiology. Its prediction allows the anticipation of actions with applications across healthcare, 1 , 2 physical rehabilitation and training, 3 , 4 robotics, 5 , 6 , 7 navigation, 8 , 9 , 10 , 11 manufacture, 12 entertainment, 13 , 14 , 15 and security. 16 , 17 , 18
The favoured approach to predicting movements over time has been purely inductive, relying on the history of a specific class of movement to predict its future. For example, state‐space models 19 enjoyed early success for simple, common, or cyclic motions. 20 , 21 , 22 The range, diversity and complexity of human motion has encouraged a shift to more expressive, deep neural network architectures, 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 but still within a simple inductive framework.
This approach would be adequate were actions both sharply distinct and highly stereotyped. But their complex, compositional nature means that within one category of action the kinematics may vary substantially, while between two categories they may barely differ. Moreover, few real‐world tasks restrict the plausible repertoire to a small number of classes—distinct or otherwise—that could be explicitly learnt. Rather, any action may be drawn from a great diversity of possibilities—both kinematic and teleological—that shape the characteristics of the underlying movements. This has two crucial implications. First, any modelling approach that lacks awareness of the full space of motion possibilities will be vulnerable to poor generalisation and brittle performance in the face of kinematic anomalies. Second, the very notion of in‐distribution (ID) testing becomes moot, for the relations between different actions and their kinematic signatures are plausibly determinable only across the entire domain of action. A test here arguably needs to be out‐of‐distribution (OoD) if it is to be considered a robust test at all.
These considerations are amplified by the nature of real‐world applications of kinematic modelling, such as anticipating arbitrary deviations from expected motor behaviour early enough for an automatic intervention to mitigate them. Most urgent in the domain of autonomous driving, 9 , 11 such safety concerns are of the highest importance, and are best addressed within the fundamental modelling framework. Indeed, Amodei et al 31 cite the ability to recognise our own ignorance as a safety mechanism that must be a core component of safe AI. Nonetheless, to our knowledge, current predictive models of human kinematics neither quantify OoD performance nor are designed with it in mind. There is therefore a need for two frameworks, applicable across the domain of action modelling: one for hardening a predictive model to anomalous cases, and another for quantifying OoD performance with established benchmark datasets. General frameworks are here desirable in preference to new models, for the field is evolving so rapidly that greater impact can be achieved by introducing mechanisms that can be applied to a breadth of candidate architectures, even if they are demonstrated in only a subset. Our approach is founded on combining a latent variable generative model with a standard predictive model, illustrated with the current state‐of‐the‐art discriminative architecture, 29 , 32 a strategy that has produced state‐of‐the‐art results in the medical imaging domain. 33 Our aim is to achieve robust performance within a realistic, low‐volume, high‐heterogeneity data regime by providing a general mechanism for enhancing a discriminative architecture with a generative model.
In short, our contributions to the problem of achieving robustness to distributional shift in human motion prediction are as follows:
1. We provide a framework to benchmark OoD performance on the most widely used open‐source motion capture datasets: Human3.6M, 34 and Carnegie Mellon University (CMU)‐Mocap (http://mocap.cs.cmu.edu/) and evaluate state‐of‐the‐art models on it.
2. We present a framework for hardening deep feed‐forward models to OoD samples. We show that the hardened models are fast to train, and exhibit substantially improved OoD performance with minimal impact on ID performance.
We begin Section 2 with a brief review of human motion prediction with deep neural networks, and of OoD generalisation using generative models. In Section 3, we define a framework for benchmarking OoD performance using open‐source multi‐action datasets. We introduce in Section 4 the discriminative models that we harden using a generative branch to achieve a state‐of‐the‐art (SOTA) OoD benchmark. We then turn in Section 5 to the architecture of the generative model and the overall objective function. Section 6 presents our experiments and results. We conclude in Section 7 with a summary of our results, current limitations and caveats, and future directions for developing robust and reliable OoD performance and a quantifiable awareness of unfamiliar behaviour.
2. RELATED WORK
2.1. Deep‐network‐based human motion prediction
Historically, sequence‐to‐sequence prediction using recurrent neural networks (RNNs) has been the de facto standard for human motion prediction. 26 , 28 , 30 , 35 , 36 , 37 , 38 , 39 Currently, the SOTA is dominated by feed‐forward models, 24 , 27 , 29 , 32 which are inherently faster and easier to train than RNNs. The jury is still out, however, on the optimal way to handle temporality for human motion prediction. Meanwhile, recent trends have overwhelmingly shown that graph‐based approaches are an effective means to encode the spatial dependencies between joints, 29 , 32 or sets of joints. 28 In this study, we consider the SOTA models that combine graph‐based approaches with a feed‐forward mechanism, as presented in Reference [29], and the subsequent extension which leverages motion attention. 32 Further attention‐based approaches may indicate an upcoming trend. 40 We show that these may be augmented to improve robustness to OoD samples.
2.2. Generative models for out‐of‐distribution prediction and detection
Despite the power of deep neural networks for prediction in complex domains, 41 they face several challenges that limit their suitability for safety‐critical applications. Amodei et al 31 list robustness to distributional shift as one of the five major challenges to AI safety. Deep generative models have been used extensively for the detection of OoD inputs and have been shown to generalise well in such scenarios. 42 , 43 , 44 While recent work has shown some failures in simple OoD detection using density estimates from deep generative models, 45 , 46 they remain a prime candidate for anomaly detection. 45 , 47 , 48
Myronenko 33 uses a variational autoencoder (VAE) 49 to regularise an encoder‐decoder architecture with the specific aim of better generalisation. By simultaneously using the encoder as the recognition model of the VAE, the model is encouraged to base its segmentations on a complete picture of the data, rather than on a reductive representation that is more likely to be overfitted to the training data. Furthermore, the original loss and the VAE's loss are combined as a weighted sum such that the discriminator's objective still dominates. Further work may also reveal useful interpretability of behaviour (via visualisation of the latent space as in Reference [50]), generation of novel motion, 51 or reconstruction of missing joints as in Reference [52].
3. QUANTIFYING OUT‐OF‐DISTRIBUTION PERFORMANCE OF HUMAN MOTION PREDICTORS
Even a very compact representation of the human body, such as OpenPose's 17‐joint parameterisation, 53 explodes to unmanageable complexity when a temporal dimension is introduced at the scale and granularity necessary to distinguish between different kinds of action: typically many seconds, sampled at hundredths of a second. Moreover, though there are anatomical and physiological constraints on the space of licit joint configurations and their trajectories, the repertoire of possibility remains vast, and the kinematic demarcations of teleologically different actions remain indistinct. Thus, no practically obtainable dataset can realistically represent the possible distance between instances. To simulate OoD data, we first need ID data that can be varied in its quantity and heterogeneity, closely replicating cases where a particular kinematic morphology may be rare, and therefore undersampled, and cases where kinematic morphologies are both highly variable within a defined class and similar across classes. Such replication needs to accentuate the challenging aspects of each scenario.
We therefore propose to evaluate OoD performance where only a single action, drawn from a single action distribution, is available for training and hyperparameter search, and testing is carried out on the remaining classes. To determine which actions can be clearly separated from the others, we train a classifier of action category based on the motion inputs. We select the action “walking” from H3.6M and “basketball” from CMU: the classifier identifies these actions with a precision and recall of 0.95 and 0.81, respectively, for walking in H3.6M, and of 1.0 and 1.0 for basketball in CMU. This is discussed further in Appendix A.
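As a concrete illustration, the sketch below builds such a split from a dataset indexed by action label: the ID action supplies the train and validation sets, and every other action is held out as an OoD test set. The function name and the fraction-based validation split are illustrative only; in our experiments the splits follow the actor-based protocol described in Section 6.1.

```python
# Illustrative sketch of the proposed OoD protocol: train and validate on a
# single action, test on all remaining actions. Names are hypothetical.
def make_ood_split(sequences_by_action, id_action="walking"):
    """sequences_by_action: dict mapping action label -> list of motion sequences."""
    id_sequences = sequences_by_action[id_action]            # in-distribution data
    n_val = max(1, len(id_sequences) // 10)                  # stand-in for the actor-based split
    train = id_sequences[:-n_val]                            # ID training set
    val = id_sequences[-n_val:]                              # ID validation (hyperparameter search)
    test = {a: s for a, s in sequences_by_action.items() if a != id_action}  # OoD test sets
    return train, val, test
```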
4. BACKGROUND
Here, we describe the current SOTA model proposed by Mao et al 29 (graph convolutional network [GCN]). We then describe the extension by Mao et al 32 (attention‐GCN) which antecedes the GCN prediction model with motion attention.
4.1. Problem formulation
We are given a motion sequence $\mathbf{X}_{1:N} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N]$ consisting of $N$ consecutive human poses, where $\mathbf{x}_n \in \mathbb{R}^K$, with $K$ the number of parameters describing each pose. The goal is to predict the poses $\mathbf{X}_{N+1:N+T}$ for the subsequent $T$ time steps.
4.2. Discrete cosine transformations‐based temporal encoding
The input is transformed using discrete cosine transformations (DCT). In this way, each resulting coefficient encodes information of the entire sequence at a particular temporal frequency. Furthermore, the option to remove high or low frequencies is provided. Given a joint $k$, its position over $N$ time steps is given by the trajectory vector $\mathbf{x}_k = (x_{k,1}, \ldots, x_{k,N})$, which we convert to a DCT vector $\mathbf{C}_k = (C_{k,1}, \ldots, C_{k,N})$, where $C_{k,l}$ represents the $l$th DCT coefficient. For $l \in \{1, \ldots, N\}$, these coefficients may be computed as

$$C_{k,l} = \sqrt{\frac{2}{N}} \sum_{n=1}^{N} x_{k,n} \frac{1}{\sqrt{1 + \delta_{l1}}} \cos\left(\frac{\pi}{2N}(2n - 1)(l - 1)\right), \qquad (1)$$

where $\delta_{l1}$ is the Kronecker delta.
If no frequencies are cropped, the DCT is invertible via the inverse discrete cosine transform (IDCT):
$$x_{k,n} = \sqrt{\frac{2}{N}} \sum_{l=1}^{N} C_{k,l} \frac{1}{\sqrt{1 + \delta_{l1}}} \cos\left(\frac{\pi}{2N}(2n - 1)(l - 1)\right). \qquad (2)$$
Mao et al 29 use the DCT transform with a GCN architecture to predict the output sequence. This is achieved by having an equal‐length input‐output sequence, where the input is the DCT transformation of $\mathbf{X}_{1:N+T}$; here $\mathbf{X}_{1:N}$ is the observed sequence and $\mathbf{X}_{N+1:N+T}$ are replicas of $\mathbf{x}_N$ (ie, $\mathbf{x}_{N+j} = \mathbf{x}_N$ for $j = 1, \ldots, T$). The target is now simply the ground truth $\mathbf{X}_{1:N+T}$.
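To make Equations (1) and (2) and the padded input concrete, the sketch below implements the transform pair for a single-joint trajectory in NumPy, together with the replicate-and-transform step described above. The function names are ours, and the loops favour readability over speed.

```python
import numpy as np

def dct_coefficients(x):
    """DCT of a single-joint trajectory of length N, following Equation (1) as
    reconstructed above (an orthonormal DCT-II); the text uses 1-based indices,
    the code 0-based ones."""
    N = len(x)
    n = np.arange(1, N + 1)                       # time index n = 1..N
    C = np.empty(N)
    for l in range(1, N + 1):                     # frequency index l = 1..N
        norm = np.sqrt(2.0 / N) / np.sqrt(1.0 + (l == 1))
        C[l - 1] = norm * np.sum(x * np.cos(np.pi / (2 * N) * (2 * n - 1) * (l - 1)))
    return C

def idct_coefficients(C):
    """Inverse transform, Equation (2); exact when no frequencies are cropped."""
    N = len(C)
    x = np.empty(N)
    for n in range(1, N + 1):
        terms = [np.sqrt(2.0 / N) / np.sqrt(1.0 + (l == 1))
                 * C[l - 1] * np.cos(np.pi / (2 * N) * (2 * n - 1) * (l - 1))
                 for l in range(1, N + 1)]
        x[n - 1] = np.sum(terms)
    return x

def padded_dct_input(x_obs, T):
    """Build the GCN input: replicate the last observed pose T times, then take the DCT."""
    x_padded = np.concatenate([x_obs, np.repeat(x_obs[-1:], T)])   # length N + T
    return dct_coefficients(x_padded)
```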
4.3. Graph convolutional network
Suppose the motion representation is defined on a graph with $K$ nodes and $F$ feature dimensions; we define a GCN to respect this structure. First, we define a graph convolutional layer (GCL) that takes as input the activation of the previous layer, $\mathbf{H}^{(p-1)} \in \mathbb{R}^{K \times F_{p-1}}$, where $p$ is the current layer:

$$\mathbf{H}^{(p)} = \sigma\left(\mathbf{A}^{(p)} \mathbf{H}^{(p-1)} \mathbf{W}^{(p)} + \mathbf{b}^{(p)}\right), \qquad (3)$$

where $\sigma(\cdot)$ is the activation function, $\mathbf{A}^{(p)} \in \mathbb{R}^{K \times K}$ is a layer‐specific learnable normalised graph Laplacian that represents connections between joints, $\mathbf{W}^{(p)} \in \mathbb{R}^{F_{p-1} \times F_p}$ are the learnable inter‐layer weightings, and $\mathbf{b}^{(p)} \in \mathbb{R}^{F_p}$ are the learnable biases, where $F_p$ is the number of hidden units in layer $p$.
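A minimal PyTorch sketch of such a layer is given below, assuming the shapes introduced in Equation (3). The class name and initialisation are ours, and the activation and batch normalisation are applied outside the layer, as described in the next subsection.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """Sketch of a graph convolutional layer matching Equation (3): a learnable,
    layer-specific graph Laplacian A mixes the K joint nodes, and W mixes
    features. Illustrative only, not the authors' released implementation."""
    def __init__(self, n_nodes, in_features, out_features):
        super().__init__()
        self.A = nn.Parameter(torch.eye(n_nodes))                    # K x K learnable graph Laplacian
        self.W = nn.Parameter(torch.empty(in_features, out_features))
        self.b = nn.Parameter(torch.zeros(out_features))
        nn.init.xavier_uniform_(self.W)

    def forward(self, H):
        # H: (batch, K, F_in) -> (batch, K, F_out), computing A H W + b.
        return torch.einsum("kj,bjf,fg->bkg", self.A, H, self.W) + self.b
```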
4.4. Network structure and loss
The network consists of 12 graph convolutional blocks (GCBs), each containing two GCLs with skip (or residual) connections; see Figures A6 and A7. In addition, there is one GCL at the beginning of the network and one at the end, and the number of hidden units, $F_p$, is the same for every hidden layer $p$. There is one final skip connection from the DCT inputs to the DCT outputs, which greatly reduces training time. The model has around 2.6M parameters. Hyperbolic tangent functions are used as the activation function, and batch normalisation is applied before each activation.
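The block structure can be sketched as follows, reusing the GraphConvLayer above; this is an illustrative rendering of the description in this subsection, not the released code.

```python
class GraphConvBlock(nn.Module):
    """Sketch of a graph convolutional block (GCB): two GCLs, each followed by
    batch normalisation and a tanh activation, wrapped in a residual connection."""
    def __init__(self, n_nodes, n_features):
        super().__init__()
        self.gcl1 = GraphConvLayer(n_nodes, n_features, n_features)
        self.gcl2 = GraphConvLayer(n_nodes, n_features, n_features)
        self.bn1 = nn.BatchNorm1d(n_nodes * n_features)   # BN over flattened node features
        self.bn2 = nn.BatchNorm1d(n_nodes * n_features)

    def forward(self, H):
        b, k, f = H.shape
        y = torch.tanh(self.bn1(self.gcl1(H).reshape(b, -1)).reshape(b, k, f))
        y = torch.tanh(self.bn2(self.gcl2(y).reshape(b, -1)).reshape(b, k, f))
        return H + y                                      # residual (skip) connection
```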
The outputs are converted back to their original coordinate system using the IDCT (Equation (2)) to be compared to the ground truth. The loss used for joint angles is the average absolute distance between the ground‐truth joint angles and the predicted ones. Thus, the joint angle loss is

$$\ell_a = \frac{1}{K(N + T)} \sum_{n=1}^{N+T} \sum_{k=1}^{K} \left| \hat{x}_{k,n} - x_{k,n} \right|, \qquad (4)$$

where $\hat{x}_{k,n}$ is the predicted $k$th joint angle at timestep $n$ and $x_{k,n}$ is the corresponding ground truth.
The model is separately trained for three‐dimensional (3D) joint coordinate prediction using the mean per joint position error (MPJPE), as proposed in Reference [34] and used in References [29, 32]. This is defined, for each training example, as
$$\ell_m = \frac{1}{J(N + T)} \sum_{n=1}^{N+T} \sum_{j=1}^{J} \left\| \hat{\mathbf{p}}_{j,n} - \mathbf{p}_{j,n} \right\|_2, \qquad (5)$$

where $\hat{\mathbf{p}}_{j,n}$ denotes the predicted $j$th joint position in frame $n$, $\mathbf{p}_{j,n}$ is the corresponding ground truth, and $J$ is the number of joints in the skeleton.
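Both losses reduce to a few lines of PyTorch. The sketch below assumes batched tensors with the shapes noted in the comments and the Euclidean (non-squared) form of the MPJPE written in Equation (5).

```python
import torch

def joint_angle_loss(pred, target):
    """Equation (4): average absolute error over all joint angles and frames.
    pred, target: tensors of shape (batch, N + T, K)."""
    return (pred - target).abs().mean()

def mpjpe_loss(pred_3d, target_3d):
    """Equation (5): mean per joint position error (Euclidean form).
    pred_3d, target_3d: tensors of shape (batch, N + T, J, 3)."""
    return (pred_3d - target_3d).norm(dim=-1).mean()
```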
4.5. Motion attention extension
Mao et al. 32 extend this model by summing multiple DCT transformations from different sections of the motion history with weightings learned via an attention mechanism. For this extension, the above model (the GCN) along with the anteceding motion attention is trained end‐to‐end. We refer to this as the attention‐GCN.
5. OUR APPROACH
Myronenko 33 augments an encoder‐decoder discriminative model by using the encoder as a recognition model for a VAE, 49 , 54 and shows this to be a very effective regulariser. Here, we also use a VAE, but for conjugacy with the discriminator, we use graph convolutional layers in the decoder. This can be compared to the variational graph autoencoder (VGAE) proposed by Kipf and Welling. 55 However, Kipf and Welling's application is a link prediction task in citation networks, and thus it is desired to model only connectivity in the latent space. Here we model connectivity, position and temporal frequency. To reflect this distinction, the layers immediately before and after the latent space are fully connected, creating a homogeneous latent space.
The generative model gives precedence to information that can be modelled causally, while leaving elements of the discriminative machinery, such as skip connections, to capture correlations that remain useful for prediction but are not necessarily pursuant to the objective of the generative model. In addition to acting as a regulariser in general, we show that we gain robustness to distributional shift across similar, but different, actions that are likely to share generative properties. The architecture may be considered with the visual aid in Figure 1.
5.1. VAE branch and loss
Here we define the first six GCBs as our VAE recognition model, with a latent variable $\mathbf{z} \in \mathbb{R}^{n_z}$, where $n_z = 8$ or $32$ depending on training stability.
The KL divergence between the latent space distribution and a spherical Gaussian is given by:
$$\mathcal{L}_{\mathrm{KL}} = \mathrm{KL}\left( q(\mathbf{z} \mid \mathbf{X}) \,\|\, \mathcal{N}(\mathbf{0}, \mathbf{I}) \right) = -\frac{1}{2} \sum_{i=1}^{n_z} \left( 1 + \log \sigma_i^2 - \mu_i^2 - \sigma_i^2 \right), \qquad (6)$$

where $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are the mean and standard deviation output by the recognition model.
The decoder part of the VAE has the same structure as the discriminative branch: six GCBs. We parametrise the output neurons as means $\hat{\mu}_{k,l}$ and log‐variances $\log \hat{\sigma}_{k,l}^2$. We can now model the reconstruction of the inputs under a Gaussian likelihood, whose negative log‐likelihood constitutes the second term of the negative variational lower bound (VLB) of the VAE:
$$\mathcal{L}_{\mathrm{rec}} = -\log p(\mathbf{X} \mid \mathbf{z}) = \sum_{k,l} \left[ \frac{\left( C_{k,l} - \hat{\mu}_{k,l} \right)^2}{2 \hat{\sigma}_{k,l}^2} + \frac{1}{2} \log\left( 2\pi \hat{\sigma}_{k,l}^2 \right) \right], \qquad (7)$$

where $C_{k,l}$ are the DCT coefficients of the ground truth.
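A minimal sketch of the VAE branch's terms, assuming a diagonal Gaussian posterior and decoder output with the shapes used above; the helper names are ours.

```python
import math
import torch

def reparameterise(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the reparameterisation trick)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def kl_to_standard_normal(mu, logvar):
    """Equation (6): KL(q(z|X) || N(0, I)), summed over the n_z latent dimensions
    and averaged over the batch. mu, logvar: (batch, n_z)."""
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()

def gaussian_nll(dct_target, mu_hat, logvar_hat):
    """Equation (7): negative log-likelihood of the ground-truth DCT coefficients
    under the decoder's diagonal Gaussian. Tensors of shape (batch, K, n_dct)."""
    nll = 0.5 * ((dct_target - mu_hat).pow(2) / logvar_hat.exp()) \
        + 0.5 * (math.log(2 * math.pi) + logvar_hat)
    return nll.sum(dim=(-2, -1)).mean()
```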
5.2. Training
We train the entire network end‐to‐end, adding the negative VLB to the discriminative loss:

$$\mathcal{L} = \ell + \lambda \left( \mathcal{L}_{\mathrm{rec}} + \mathcal{L}_{\mathrm{KL}} \right), \qquad (8)$$

where $\ell$ is the discriminative prediction loss (Equation (4) or (5)) and $\lambda$ is a hyperparameter of the model. The number of parameters in the overall network varies slightly with the number of joints, $K$, since this is reflected in the size of the graph in each layer (the dimensionality differs between H3.6M, CMU joint angles, and CMU Cartesian coordinates; see Section 6.1). Furthermore, once trained, the generative model is not required for prediction, and hence for this purpose the hardened model is as compact as the original.
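Putting the pieces together, one training-step loss might be computed as below, reusing the loss helpers sketched above; the model interface is hypothetical and stands in for whichever discriminative architecture is being hardened.

```python
def training_loss(model, dct_in, dct_target, target_angles, lam):
    """One evaluation of Equation (8). `model` is assumed to return the
    (IDCT-converted) prediction together with the recognition outputs (mu,
    logvar) and the decoder outputs (mu_hat, logvar_hat) of the VAE branch."""
    pred_angles, mu, logvar, mu_hat, logvar_hat = model(dct_in)
    vlb = gaussian_nll(dct_target, mu_hat, logvar_hat) + kl_to_standard_normal(mu, logvar)
    return joint_angle_loss(pred_angles, target_angles) + lam * vlb
```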
6. EXPERIMENTS
6.1. Datasets and experimental setup
6.1.1. Human3.6M (H3.6M)
The H3.6M dataset, 34 , 56 so called as it contains a selection of 3.6 million 3D human poses and corresponding images, consists of seven actors each performing 15 actions, such as walking, eating, discussion, sitting and talking on the phone. Li et al, 28 Mao et al, 29 and Martinez et al 30 all follow the same training and evaluation procedure: training their motion prediction model on six of the actors (five for training and one for cross‐validation), for each action, and evaluating metrics on the final actor, subject 5. For easy comparison to these ID baselines, we maintain the same train, cross‐validation, and test splits. However, we use the single, most well‐defined action (see Appendix A), walking, for training and cross‐validation, and we report test error on all the remaining actions from subject 5. In this way, we conduct all parameter selections based on ID performance.
6.1.2. CMU motion capture
The CMU motion capture (CMU‐mocap) dataset consists of five general classes of actions. Similar to References [27, 29, 57], we use eight detailed actions from these classes: “basketball,” “basketball signal,” “directing traffic,” “jumping,” “running,” “soccer,” “walking,” and “window washing.” We use two representations: a 64‐dimensional vector that gives an exponential map representation 58 of the joint angles, and a 75‐dimensional vector that gives the 3D Cartesian coordinates of 25 joints. We do not tune any hyperparameters on this dataset and use only a train and test set with the same split as is common in the literature. 29 , 30
6.1.3. Model configuration
We implemented the model in PyTorch 59 using the Adam optimiser. 60 The learning rate was fixed for all experiments: unlike Mao et al, 29 , 32 we did not decay the learning rate, as it was hypothesised that the dynamic relationship between the discriminative and generative losses would make this redundant. The batch size was 16. For numerical stability, gradients were clipped to a maximum $\ell_2$‐norm, and the $\mu$ and $\log \sigma^2$ values of the generative branch were clamped between −20 and 3. Code for all experiments is available at: https://github.com/bouracha/OoDMotion
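The training loop implied by these settings can be sketched as follows; the learning rate and clipping norm shown are placeholders rather than the paper's values, and model, loader, and lam are assumed to be defined as in the earlier sketches.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # fixed rate, no decay schedule (placeholder value)

for dct_in, dct_target, target_angles in loader:            # batches of size 16
    optimizer.zero_grad()
    loss = training_loss(model, dct_in, dct_target, target_angles, lam)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # placeholder norm
    optimizer.step()
# Inside the model, the VAE's mu and log-variance outputs would be clamped to
# [-20, 3] for numerical stability, as described above.
```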
6.1.4. Baseline comparison
Both Mao et al 29 (GCN), and Mao et al 32 (attention‐GCN) use this same GCN architecture with DCT inputs. In particular, Mao et al 32 increase the amount of history accounted for by the GCN by adding a motion attention mechanism to weight the DCT coefficients from different sections of the history prior to being input to the GCN. We compare against both of these baselines on OoD actions. For attention‐GCN, we leave the attention mechanism preceding the GCN unchanged such that the generative branch of the model is reconstructing the weighted DCT inputs to the GCN, and the whole network is end‐to‐end differentiable.
6.1.5. Hyperparameter search
Since a new term has been introduced to the loss function, it was necessary to determine a sensible weighting between the discriminative and generative models. In Reference [33], this weighting was set arbitrarily. It is natural that the optimum value here will relate to the other regularisation parameters in the model. Thus, we conducted a random hyperparameter search over the dropout probability, $p$, on a linear scale and over $\lambda$ on a logarithmic scale. For fair comparison, we also conducted a hyperparameter search on GCN for values of the dropout probability between 0.1 and 0.9. For each model, 25 experiments were run and the optimum values were selected on the lowest ID validation error. The hyperparameter search was conducted only for the GCN model on short‐term predictions for the H3.6M dataset and then used for all subsequent experiments, hence demonstrating generalisability of the architecture.
6.2. Results
Consistent with the literature, we report short‐term (up to 400 ms) and long‐term (beyond 400 ms) predictions. In comparison to GCN, we take short‐term history into account (10 frames, 400 ms) for both datasets to predict both short‐ and long‐term motion. In comparison to attention‐GCN, we take long‐term history (50 frames, 2 seconds) to predict the next 10 frames, and predict further into the future by recursively applying the predictions as input to the model, as in Reference [32]. In this way, a single short‐term prediction model may produce long‐term predictions.
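The recursive scheme amounts to a simple loop, sketched below; model here is a hypothetical callable that maps a pose history to its predicted continuation.

```python
def predict_long_term(model, history, n_blocks, frames_per_block=10):
    """Recursive long-term prediction: each call of the short-term predictor
    yields the next `frames_per_block` frames, which are appended to the
    history and fed back in as input."""
    poses = list(history)
    for _ in range(n_blocks):
        future = model(poses)                    # predicts the next block of frames
        poses.extend(future[:frames_per_block])
    return poses[len(history):]                  # the generated long-term prediction
```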
We use the Euclidean distance between the predicted and ground‐truth joint angles for the Euler angle representation. For the 3D joint coordinate representation, we use the MPJPE as used for training (Equation (5)). Table 1 reports the joint angle error for the short‐term predictions on the H3.6M dataset. Here, we found the optimum hyperparameters from the search described above for GCN and for our augmentation of GCN; the latter settings were used for all subsequent experiments, except that for our augmentation of attention‐GCN we removed dropout altogether. On average, our model performs convincingly better both ID and OoD. Here, the generative branch acts both as a regulariser for small datasets and as a source of robustness to distributional shift. We see similar and consistent results for long‐term predictions in Table 2.
TABLE 1.
Walking (ID) | Eating (OoD) | Smoking (OoD) | Average (of 14 for OoD) | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 160 | 320 | 400 | 160 | 320 | 400 | 160 | 320 | 400 | 160 | 320 | 400 |
GCN (OoD) | 0.37 | 0.60 | 0.65 | 0.38 | 0.65 | 0.79 | 0.55 | 1.08 | 1.10 | 0.69 | 1.09 | 1.27 |
SD | 0.008 | 0.008 | 0.01 | 0.01 | 0.03 | 0.04 | 0.01 | 0.02 | 0.02 | 0.02 | 0.04 | 0.04 |
Ours (OoD) | 0.37 | 0.59 | 0.64 | 0.37 | 0.59 | 0.72 | 0.54 | 1.01 | 0.99 | 0.68 | 1.07 | 1.21 |
SD | 0.004 | 0.03 | 0.03 | 0.01 | 0.03 | 0.04 | 0.01 | 0.01 | 0.02 | 0.01 | 0.01 | 0.02 |
Note: Each experiment conducted three times. We report the mean and SD. Note that we have lower variance in our results. Full table is given in Table A1. Bold values correspond to the best score for the respective simulation across the different models.
Abbreviations: GCN, graph convolutional network; OoD, out‐of‐distribution.
TABLE 2.
Walking | Eating | Smoking | Discussion | Average | ||||||
---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 560 | 1000 | 560 | 1000 | 560 | 1000 | 560 | 1000 | 560 | 1000 |
GCN (OoD) | 0.80 | 0.80 | 0.89 | 1.20 | 1.26 | 1.85 | 1.45 | 1.88 | 1.10 | 1.43 |
Ours (OoD) | 0.66 | 0.72 | 0.90 | 1.19 | 1.17 | 1.78 | 1.44 | 1.90 | 1.04 | 1.40 |
Note: Bold correspond to lowest values.
Abbreviations: GCN, graph convolutional network; OoD, out‐of‐distribution.
From Tables 3 and 4, we can see that the superior OoD performance generalises to the CMU dataset with the same hyperparameter settings, which were used for each of these experiments, and with a similar trend of the difference being larger for longer predictions for both joint angles and 3D joint coordinates.
TABLE 3.
Basketball (ID) | Basketball signal (OoD) | Average (of 7 for OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 0.40 | 0.67 | 1.11 | 1.25 | 1.63 | 0.27 | 0.55 | 1.14 | 1.42 | 2.18 | 0.36 | 0.65 | 1.41 | 1.49 | 2.17 |
Ours | 0.40 | 0.66 | 1.12 | 1.29 | 1.76 | 0.28 | 0.57 | 1.15 | 1.43 | 2.07 | 0.34 | 0.62 | 1.35 | 1.41 | 2.10 |
Note: Full table is given in Table A2. Bold values correspond to the best score for the respective simulation across the different models.
Abbreviations: GCN, graph convolutional network; ID, in‐distribution; OoD, out‐of‐distribution.
TABLE 4.
Basketball | Basketball signal | Average (of 7 for OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN (OoD) | 15.7 | 28.9 | 54.1 | 65.4 | 108.4 | 14.4 | 30.4 | 63.5 | 78.7 | 114.8 | 20.0 | 43.8 | 86.3 | 105.8 | 169.2 |
Ours (OoD) | 16.0 | 30.0 | 54.5 | 65.5 | 98.1 | 12.8 | 26.0 | 53.7 | 67.6 | 103.2 | 21.6 | 42.3 | 84.2 | 103.8 | 164.3 |
Note: Full table is given in Table A3.
Abbreviations: GCN, graph convolutional network; OoD, out‐of‐distribution.
Table 5 shows that the effectiveness of the generative branch generalises to the very recent motion attention architecture. For attention‐GCN we used the same settings, with dropout removed as noted above. Interestingly, short‐term predictions are poorer here, but long‐term predictions are consistently better. This supports our assertion that information relevant to generative mechanisms is more intrinsic to the causal model; thus, when the predicted output is recursively used as input, more useful information is available for future predictions.
TABLE 5.
Walking (ID) | Eating (OoD) | Smoking (OoD) | Average (of 14 for OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 |
att‐GCN (OoD) | 55.4 | 60.5 | 65.2 | 68.7 | 87.6 | 103.6 | 113.2 | 120.3 | 81.7 | 93.7 | 102.9 | 108.7 | 112.1 | 129.6 | 140.3 | 147.8 |
Ours (OoD) | 58.7 | 60.6 | 65.5 | 69.1 | 81.7 | 94.4 | 102.7 | 109.3 | 80.6 | 89.9 | 99.2 | 104.1 | 113.1 | 127.7 | 137.9 | 145.3 |
Note: Here ours is also trained with the attention‐GCN model. Full table is given in Table A4. Bold values correspond to the best score for the respective simulation across the different models.
Abbreviations: GCN, graph convolutional network; ID, in‐distribution; OoD, out‐of‐distribution.
7. CONCLUSION
We draw attention to the need for robustness to distributional shifts in predicting human motion, and propose a framework for its evaluation based on major open‐source datasets. We demonstrate that state‐of‐the‐art discriminative architectures can be hardened to extreme distributional shifts by augmentation with a generative model, combining low in‐distribution predictive error with maximal generalisability. Our investigation argues for wider use of generative models in behavioural modelling, and shows this can be achieved with minimal or no performance penalty, within hybrid architectures of potentially diverse constitution. Further work could examine the surveyability of the latent space introduced by the VAE.
ACKNOWLEDGEMENTS
Anthony Bourached is funded by the UKRI UCL Centre for Doctoral Training in AI‐enabled Healthcare Systems. Robert Gray, Ashwani Jha and Parashkev Nachev are funded by the Wellcome Trust (213038) and the NIHR UCL Biomedical Research Centre.
APPENDIX A.
The appendix consists of four parts. We provide a brief summary of each section below.
Appendix A: We provide results from our experimentation to determine the optimum way of defining separable distributions on the H3.6M, and the CMU datasets.
Appendix B: We provide the full results of tables which are shown in part in the main text.
Appendix C: We inspect the generative model by examining its latent space and use it to consider the role that the generative model plays in learning as well as possible directions of future work.
Appendix D: We provide larger diagrams of the architecture of the augmented GCN.
A.1. Appendix A: Discussion of the definition of out‐of‐distribution
Here, we describe in more detail the empirical motivation for our definition of out‐of‐distribution (OoD) on the H3.6M and CMU datasets.
Figure A1 shows the distribution of actions for the H3.6M and CMU datasets. We want our ID data to be small in quantity, and narrow in domain. Since the datasets are labelled by action, we are provided with a natural choice of distribution, namely one of these actions. Moreover, it is desirable that the action be quantifiably distinct from the other actions.
To determine which action supports these properties, we train a simple classifier to determine which action is most easily distinguished from the others based on the DCT inputs. We make no assumption on the architecture that would be optimal for determining the separation, and so use a simple fully connected model with four layers, where the final layer uses a softmax over the action classes to predict the class label (15 classes for H3.6M, or 8 for CMU). Cross entropy is used as a loss function on these logits during training. We used ReLU activations with a dropout probability of 0.5.
We trained this model using the last 10 historic frames with their DCT coefficients for both the H3.6M and CMU datasets, and additionally using 50 historic frames for H3.6M (here we select only the 20 lowest‐frequency DCT coefficients). We trained each model for 10 epochs with a fixed batch size and learning rate. The confusion matrices for the H3.6M dataset are shown in Figures A2 and A3, respectively. Here, we use the same train set as outlined in Section 6.1. However, we report results on subject 11, which for motion prediction was used as the validation set. We did this because the number of instances is much greater than for subject 5, and no hyperparameter tuning was necessary. For the CMU dataset, we used the same train and test split as for all other experiments.
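A sketch of such a classifier is below; the hidden-layer widths are hypothetical stand-ins, as the original sizes are not recoverable from this text, and input_dim corresponds to the flattened DCT input.

```python
import torch.nn as nn

def make_action_classifier(input_dim, n_classes, hidden=(256, 128, 64)):
    """Four-layer fully connected classifier with ReLU activations and dropout 0.5.
    n_classes is 15 for H3.6M and 8 for CMU; the hidden widths are illustrative."""
    return nn.Sequential(
        nn.Linear(input_dim, hidden[0]), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(hidden[0], hidden[1]), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(hidden[1], hidden[2]), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(hidden[2], n_classes),   # logits; softmax applied inside the cross-entropy loss
    )
```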
In both cases, for the H3.6M dataset, the classifier achieves the highest precision score (0.91 and 0.95, respectively) for the action “walking,” as well as a recall score of 0.83 and 0.81, respectively. Furthermore, in both cases the action “walking together” dominates the false negatives for walking (50% and 44%, respectively) as well as the false positives (33% in each case).
The general increase in distinguishability that can be seen in Figure A3 strengthens the demand for robust handling of distributional shifts: the distributions of values that represent different actions only become more distinct as the time scale is increased. This is true even with the naïve DCT transformation used to capture longer time scales without increasing the vector size.
As we can see from the confusion matrix in Figure A4, the actions in the CMU dataset are even more easily separable. In particular, our selected ID action in the paper, Basketball, can be identified with 100% precision and recall on the test set.
A.2. Appendix B: Full results
TABLE A1.
Walking (ID) | Eating (OoD) | Smoking (OoD) | Discussion (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 |
GCN (OoD) | 0.22 | 0.37 | 0.60 | 0.65 | 0.22 | 0.38 | 0.65 | 0.79 | 0.28 | 0.55 | 1.08 | 1.10 | 0.29 | 0.65 | 0.98 | 1.08 |
SD | 0.001 | 0.008 | 0.008 | 0.01 | 0.003 | 0.01 | 0.03 | 0.04 | 0.01 | 0.01 | 0.02 | 0.02 | 0.004 | 0.01 | 0.04 | 0.04 |
Ours (OoD) | 0.23 | 0.37 | 0.59 | 0.64 | 0.21 | 0.37 | 0.59 | 0.72 | 0.28 | 0.54 | 1.01 | 0.99 | 0.31 | 0.65 | 0.97 | 1.07 |
SD | 0.003 | 0.004 | 0.03 | 0.03 | 0.008 | 0.01 | 0.03 | 0.04 | 0.005 | 0.01 | 0.01 | 0.02 | 0.005 | 0.009 | 0.02 | 0.01 |
Directions (OoD) | Greeting (OoD) | Phoning (OoD) | Posing (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 |
GCN (OoD) | 0.38 | 0.59 | 0.82 | 0.92 | 0.48 | 0.81 | 1.25 | 1.44 | 0.58 | 1.12 | 1.52 | 1.61 | 0.27 | 0.59 | 1.26 | 1.53 |
SD | 0.01 | 0.03 | 0.05 | 0.06 | 0.006 | 0.01 | 0.02 | 0.02 | 0.006 | 0.01 | 0.01 | 0.01 | 0.01 | 0.05 | 0.1 | 0.1 |
Ours (OoD) | 0.38 | 0.58 | 0.79 | 0.90 | 0.49 | 0.81 | 1.24 | 1.43 | 0.57 | 1.10 | 1.52 | 1.61 | 0.33 | 0.68 | 1.25 | 1.51 |
SD | 0.007 | 0.02 | 0.0 | 0.05 | 0.006 | 0.005 | 0.02 | 0.02 | 0.004 | 0.003 | 0.01 | 0.01 | 0.02 | 0.05 | 0.03 | 0.03 |
Purchases (OoD) | Sitting (OoD) | Sitting down (OoD) | Taking photo (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 |
GCN (OoD) | 0.62 | 0.90 | 1.34 | 1.42 | 0.40 | 0.66 | 1.15 | 1.33 | 0.46 | 0.94 | 1.52 | 1.69 | 0.26 | 0.53 | 0.82 | 0.93 |
SD | 0.001 | 0.001 | 0.02 | 0.03 | 0.003 | 0.007 | 0.02 | 0.03 | 0.01 | 0.03 | 0.04 | 0.05 | 0.005 | 0.01 | 0.01 | 0.02 |
Ours (OoD) | 0.62 | 0.89 | 1.23 | 1.31 | 0.39 | 0.63 | 1.05 | 1.20 | 0.40 | 0.79 | 1.19 | 1.33 | 0.26 | 0.52 | 0.81 | 0.95 |
SD | 0.001 | 0.002 | 0.005 | 0.01 | 0.001 | 0.001 | 0.004 | 0.005 | 0.007 | 0.009 | 0.01 | 0.02 | 0.005 | 0.01 | 0.01 | 0.01 |
Waiting (OoD) | Walking dog (OoD) | Walking together (OoD) | Average (of 14 for OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 | 80 | 160 | 320 | 400 |
GCN (OoD) | 0.29 | 0.59 | 1.06 | 1.30 | 0.52 | 0.86 | 1.18 | 1.33 | 0.21 | 0.44 | 0.67 | 0.72 | 0.38 | 0.69 | 1.09 | 1.27 |
SD | 0.01 | 0.03 | 0.05 | 0.05 | 0.01 | 0.02 | 0.02 | 0.03 | 0.005 | 0.02 | 0.03 | 0.03 | 0.007 | 0.02 | 0.04 | 0.04 |
Ours (OoD) | 0.29 | 0.58 | 1.06 | 1.29 | 0.52 | 0.88 | 1.17 | 1.34 | 0.21 | 0.44 | 0.66 | 0.74 | 0.38 | 0.68 | 1.07 | 1.21 |
SD | 0.0007 | 0.003 | 0.001 | 0.006 | 0.006 | 0.01 | 0.008 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.006 | 0.01 | 0.01 | 0.02 |
Note: Each experiment conducted three times. We report the mean and standard deviation. Note that we have lower variance in our results.
Abbreviations: GCN, graph convolutional network; ID, in‐distribution; OoD, out‐of‐distribution.
TABLE A2.
Basketball (ID) | Basketball signal (OoD) | Directing traffic (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 0.40 | 0.67 | 1.11 | 1.25 | 1.63 | 0.27 | 0.55 | 1.14 | 1.42 | 2.18 | 0.31 | 0.62 | 1.05 | 1.24 | 2.49 |
Ours | 0.40 | 0.66 | 1.12 | 1.29 | 1.76 | 0.28 | 0.57 | 1.15 | 1.43 | 2.07 | 0.28 | 0.56 | 0.96 | 1.10 | 2.33 |
Jumping (OoD) | Running (OoD) | Soccer (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 0.42 | 0.73 | 1.72 | 1.98 | 2.66 | 0.46 | 0.84 | 1.50 | 1.72 | 1.57 | 0.29 | 0.54 | 1.15 | 1.41 | 2.14 |
Ours | 0.38 | 0.72 | 1.74 | 2.03 | 2.70 | 0.46 | 0.81 | 1.36 | 1.53 | 2.09 | 0.28 | 0.53 | 1.07 | 1.27 | 1.99 |
Walking (OoD) | Washing window (OoD) | Average (of 7 for OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 0.40 | 0.61 | 0.97 | 1.18 | 1.85 | 0.36 | 0.65 | 1.23 | 1.51 | 2.31 | 0.36 | 0.65 | 1.41 | 1.49 | 2.17 |
Ours | 0.38 | 0.54 | 0.82 | 0.99 | 1.27 | 0.35 | 0.63 | 1.20 | 1.51 | 2.26 | 0.34 | 0.62 | 1.35 | 1.41 | 2.10 |
Abbreviations: GCN, graph convolutional network; ID, in‐distribution; OoD, out‐of‐distribution.
TABLE A3.
Basketball (ID) | Basketball signal (OoD) | Directing traffic (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 15.7 | 28.9 | 54.1 | 65.4 | 108.4 | 14.4 | 30.4 | 63.5 | 78.7 | 114.8 | 18.5 | 37.4 | 75.6 | 93.6 | 210.7 |
Ours | 16.0 | 30.0 | 54.5 | 65.5 | 98.1 | 12.8 | 26.0 | 53.7 | 67.6 | 103.2 | 18.3 | 37.2 | 75.7 | 93.8 | 199.6 |
Jumping (OoD) | Running (OoD) | Soccer (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 24.6 | 51.2 | 111.4 | 139.6 | 219.7 | 32.3 | 54.8 | 85.9 | 99.3 | 99.9 | 22.6 | 46.6 | 92.8 | 114.3 | 192.5 |
Ours | 25.0 | 52.0 | 110.3 | 136.8 | 200.2 | 29.8 | 50.2 | 83.5 | 98.7 | 107.3 | 21.1 | 44.2 | 90.4 | 112.1 | 202.0 |
Walking (OoD) | Washing window (OoD) | Average of 7 for (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 | 80 | 160 | 320 | 400 | 1000 |
GCN | 10.8 | 20.7 | 42.9 | 53.4 | 86.5 | 17.1 | 36.4 | 77.6 | 96.0 | 151.6 | 20.0 | 43.8 | 86.3 | 105.8 | 169.2 |
Ours | 10.5 | 18.9 | 39.2 | 48.6 | 72.2 | 17.6 | 37.3 | 82.0 | 103.4 | 167.5 | 21.6 | 42.3 | 84.2 | 103.8 | 164.3 |
Abbreviations: GCN, graph convolutional network; ID, in‐distribution; OoD, out‐of‐distribution.
TABLE A4.
Walking (ID) | Eating (OoD) | Smoking (OoD) | Discussion (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
milliseconds | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 |
Attention‐GCN (OoD) | 55.4 | 60.5 | 65.2 | 68.7 | 87.6 | 103.6 | 113.2 | 120.3 | 81.7 | 93.7 | 102.9 | 108.7 | 114.6 | 130.0 | 133.5 | 136.3 |
Ours (OoD) | 58.7 | 60.6 | 65.5 | 69.1 | 81.7 | 94.4 | 102.7 | 109.3 | 80.6 | 89.9 | 99.2 | 104.1 | 115.4 | 129.0 | 134.5 | 139.4 |
Directions (OoD) | Greeting (OoD) | Phoning (OoD) | Posing (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 |
Attention‐GCN (OoD) | 107.0 | 123.6 | 132.7 | 138.4 | 127.4 | 142.0 | 153.4 | 158.6 | 98.7 | 117.3 | 129.9 | 138.4 | 151.0 | 176.0 | 189.4 | 199.6 |
Ours (OoD) | 107.1 | 120.6 | 129.2 | 136.6 | 128.0 | 140.3 | 150.8 | 155.7 | 95.8 | 111.0 | 122.7 | 131.4 | 158.7 | 181.3 | 194.4 | 203.4 |
Purchases (OoD) | Sitting (OoD) | Sitting down (OoD) | Taking photo (OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 |
Attention‐GCN (OoD) | 126.6 | 144.0 | 154.3 | 162.1 | 118.3 | 141.1 | 154.6 | 164.0 | 136.8 | 162.3 | 177.7 | 189.9 | 113.7 | 137.2 | 149.7 | 159.9 |
Ours (OoD) | 128.0 | 143.2 | 154.7 | 164.3 | 118.4 | 137.7 | 149.7 | 157.5 | 136.8 | 157.6 | 170.8 | 180.4 | 116.3 | 134.5 | 145.6 | 155.4 |
Waiting (OoD) | Walking Dog (OoD) | Walking together (OoD) | Average (of 14 for OoD) | |||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Milliseconds | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 | 560 | 720 | 880 | 1000 |
Attention‐GCN (OoD) | 109.9 | 125.1 | 135.3 | 141.2 | 131.3 | 146.9 | 161.1 | 171.4 | 64.5 | 71.1 | 76.8 | 80.8 | 112.1 | 129.6 | 140.3 | 147.8 |
Ours (OoD) | 110.4 | 124.5 | 133.9 | 140.3 | 138.3 | 151.2 | 165.0 | 175.5 | 67.7 | 71.9 | 77.1 | 80.8 | 113.1 | 127.7 | 137.9 | 145.3 |
Note: Here ours is also trained with the attention‐GCN model.
Abbreviations: GCN, graph convolutional network; ID, in‐distribution; OoD, out‐of‐distribution.
A.3. Appendix C: Latent space of the VAE
One of the advantages of having a generative model involved is that we have a latent variable which represents a distribution over deterministic encodings of the data. We considered the question of whether or not the VAE was learning anything interpretable with its latent variable as was the case in Reference [55].
The purpose of this investigation was 2‐fold. First, to determine whether the generative model was learning a comprehensive internal state, or just a nonlinear average state, as is common to see in the training of VAE‐like architectures; the result should suggest a key direction of future work. Second, an interpretable latent space may be of paramount usefulness for future applications of human motion prediction. Namely, if dimensionality reduction of the latent space to an inspectable number of dimensions yields actions, or behaviours, that are close together when kinematically or teleologically similar, as in Reference [50], then human experts may find unbounded potential application for an interpretation that is both quantifiable and qualitatively comparable to all other classes within their domain of interest. For example, a medical doctor may consider a patient to have unusual symptoms for a condition, say, A. It may be useful to know that the patient's deviation from a classic case of A is in the direction of another condition, say, B.
We trained the augmented GCN model discussed in the main text with all actions, for both datasets. We use Uniform Manifold Approximation and Projection (UMAP) 61 to project the latent space of the trained GCN models onto two dimensions for all samples, for each dataset independently. From Figure A5, we can see that for both models the 2D projection relatively closely resembles a spherical Gaussian. Furthermore, we can see from Figure A5B that the action walking does not occupy a discernible domain of the latent space. This result is further verified by using the same classifier as used in Appendix A, which achieved no better than chance when using the latent variables as input rather than the raw data input.
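For completeness, the projection step amounts to a couple of lines with the umap-learn package; latent_mu and the file name below are illustrative placeholders for the collected latent means.

```python
import numpy as np
import umap  # umap-learn

# latent_mu is assumed to be an (n_samples, n_z) array of latent means gathered
# from the trained recognition model across the dataset.
latent_mu = np.load("latent_mu.npy")                              # hypothetical dump of latent means
embedding = umap.UMAP(n_components=2).fit_transform(latent_mu)    # (n_samples, 2) projection
```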
This result implies that the benefit of using the generative model observed in the main text is significant even if the generative model itself performs poorly; in this case we can be sure that the reconstructions are at least not good enough to distinguish between actions. It is hence natural for future work to investigate whether the improvement in OoD performance is greater if the model is trained in such a way as to ensure that the generative model performs well. There are multiple avenues through which such an objective might be achieved, pre‐training the generative model being one of the salient candidates.
A.4. Appendix D: Architecture diagrams
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available, instructions at https://github.com/bouracha/OoDMotion.
REFERENCES
- 1. Geertsema EE, Thijs RD, Gutter T, et al. Automated video‐based detection of nocturnal convulsive seizures in a residential care setting. Epilepsia. 2018;59:53‐60.
- 2. Kakar M, Nyström H, Aarup LR, Nøttrup TJ, Olsen DR. Respiratory motion prediction by using the adaptive neuro fuzzy inference system (anfis). Phys Med Biol. 2005;50(19):4721‐4728.
- 3. Chang C‐Y, Lange B, Zhang M, et al. Towards pervasive physical rehabilitation using microsoft kinect. 2012 6th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops. IEEE; 2012:159‐162. https://ieeexplore.ieee.org/abstract/document/6240377
- 4. Webster D, Celik O. Systematic review of kinect applications in elderly care and stroke rehabilitation. J Neuroeng Rehabil. 2014;11(1):108.
- 5. Gui L‐Y, Zhang K, Wang Y‐X, Liang X, Moura JM, Veloso M. Teaching robots to predict human motion. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2018:562‐567. https://ieeexplore.ieee.org/abstract/document/8594452
- 6. Koppula H, Saxena A. Learning spatio‐temporal structure from RGB‐D videos for human activity detection and anticipation. International Conference on Machine Learning; 2013:792‐800. https://proceedings.mlr.press/v28/koppula13.html
- 7. Koppula HS, Saxena A. Anticipating human activities for reactive robotic response. Tokyo: IROS; 2013:2071.
- 8. Alahi A, Goel K, Ramanathan V, Robicquet A, Fei‐Fei L, Savarese S. Social lstm: human trajectory prediction in crowded spaces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016:961‐971. https://openaccess.thecvf.com/content_cvpr_2016/html/Alahi_Social_LSTM_Human_CVPR_2016_paper.html
- 9. Bhattacharyya A, Fritz M, Schiele B. Long‐term on‐board prediction of people in traffic scenes under uncertainty. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018:4194‐4202. https://openaccess.thecvf.com/content_cvpr_2018/html/Bhattacharyya_Long-Term_On-Board_Prediction_CVPR_2018_paper.html
- 10. Paden B, Čáp M, Yong SZ, Yershov D, Frazzoli E. A survey of motion planning and control techniques for self‐driving urban vehicles. IEEE Trans Intell Veh. 2016;1(1):33‐55. https://ieeexplore.ieee.org/abstract/document/7490340
- 11. Wang Y, Liu Z, Zuo Z, Li Z, Wang L, Luo X. Trajectory planning and safety assessment of autonomous vehicles based on motion prediction and model predictive control. IEEE Trans Veh Technol. 2019;68(9):8546‐8556.
- 12. Švec P, Thakur A, Raboin E, Shah BC, Gupta SK. Target following with motion prediction for unmanned surface vehicle operating in cluttered environments. Autonomous Robots. 2014;36(4):383‐405.
- 13. Lau RW, Chan A. Motion prediction for online gaming. International Workshop on Motion in Games. Berlin/Heidelberg, Germany: Springer; 2008:104‐114.
- 14. Rofougaran AR, Rofougaran M, Seshadri N, Ibrahim BB, Walley J, Karaoguz J. Game console and gaming object with motion prediction modeling and methods for use therewith. 2018. US Patent 9,943,760.
- 15. Shirai A, Geslin E, Richir S. Wiimedia: motion analysis methods and applications using a consumer video game controller. Proceedings of the 2007 ACM SIGGRAPH Symposium on Video Games. New York, NY: Association for Computing Machinery; 2007:133‐140.
- 16. Grant J, Boukouvalas A, Griffiths R‐R, Leslie D, Vakili S, De Cote EM. Adaptive sensor placement for continuous spaces. International Conference on Machine Learning. PMLR; 2019:2385‐2393. https://proceedings.mlr.press/v97/grant19a.html
- 17. Kim D, Paik J. Gait recognition using active shape model and motion prediction. IET Comput Vis. 2010;4(1):25‐36.
- 18. Ma Z, Wang X, Ma R, Wang Z, Ma J. Integrating gaze tracking and head‐motion prediction for mobile device authentication: a proof of concept. Sensors. 2018;18(9):2894.
- 19. Koller D, Friedman N. Probabilistic Graphical Models: Principles and Techniques. Cambridge, MA: MIT Press; 2009.
- 20. Lehrmann AM, Gehler PV, Nowozin S. Efficient nonlinear markov models for human motion. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2014:1314‐1321. https://openaccess.thecvf.com/content_cvpr_2014/html/Lehrmann_Efficient_Nonlinear_Markov_2014_CVPR_paper.html
- 21. Sutskever I, Hinton GE, Taylor GW. The recurrent temporal restricted boltzmann machine. Advances in Neural Information Processing Systems; 2009:1601‐1608. https://www.cs.utoronto.ca/~hinton/absps/rtrbm.pdf
- 22. Taylor GW, Hinton GE, Roweis ST. Modeling human motion using binary latent variables. Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press; 2007:1345‐1352. https://proceedings.neurips.cc/paper/2006/file/1091660f3dff84fd648efe31391c5524‐Paper.pdf
- 23. Aksan E, Kaufmann M, Hilliges O. Structured prediction helps 3d human motion modelling. Proceedings of the IEEE International Conference on Computer Vision; 2019:7144‐7153.
- 24. Butepage J, Black MJ, Kragic D, Kjellstrom H. Deep representation learning for human motion prediction and classification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017:6158‐6166.
- 25. Cai Y, Huang L, Wang Y, et al. Learning progressive joint propagation for human motion prediction. Proceedings of the European Conference on Computer Vision (ECCV); 2020.
- 26. Fragkiadaki K, Levine S, Felsen P, Malik J. Recurrent network models for human dynamics. Proceedings of the IEEE International Conference on Computer Vision; 2015:4346‐4354.
- 27. Li C, Zhang Z, Sun Lee W, Hee Lee G. Convolutional sequence to sequence model for human dynamics. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018:5226‐5234.
- 28. Li M, Chen S, Zhao Y, Zhang Y, Wang Y, Tian Q. Dynamic multiscale graph neural networks for 3D skeleton based human motion prediction. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2020:214‐223.
- 29. Mao W, Liu M, Salzmann M, Li H. Learning trajectory dependencies for human motion prediction. Proceedings of the IEEE International Conference on Computer Vision; 2019:9489‐9497.
- 30. Martinez J, Black MJ, Romero J. On human motion prediction using recurrent neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017:2891‐2900.
- 31. Amodei D, Olah C, Steinhardt J, Christiano P, Schulman J, Mané D. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565. 2016.
- 32. Mao W, Liu M, Salzmann M. History repeats itself: human motion prediction via motion attention. ECCV. 2020.
- 33. Myronenko A. 3D MRI brain tumor segmentation using autoencoder regularization. International MICCAI Brainlesion Workshop. Cham, Switzerland: Springer; 2018:311‐320.
- 34. Ionescu C, Papava D, Olaru V, Sminchisescu C. Human3.6m: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans Pattern Anal Mach Intell. 2013;36(7):1325‐1339.
- 35. Gopalakrishnan A, Mali A, Kifer D, Giles L, Ororbia AG. A neural temporal model for human motion prediction. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2019:12116‐12125.
- 36. Gui L‐Y, Wang Y‐X, Liang X, Moura JM. Adversarial geometry‐aware human motion prediction. Proceedings of the European Conference on Computer Vision (ECCV); 2018:786‐803.
- 37. Guo X, Choi J. Human motion prediction via learning local structure representations and temporal dependencies. Proc AAAI Conf Artif Intel. 2019;33:2580‐2587.
- 38. Jain A, Zamir AR, Savarese S, Saxena A. Structural‐rnn: deep learning on spatio‐temporal graphs. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016:5308‐5317.
- 39. Pavllo D, Grangier D, Auli M. Quaternet: a quaternion‐based recurrent model for human motion. arXiv preprint arXiv:1805.06485. 2018.
- 40. Gossner O, Steiner J, Stewart C. Attention please! Econometrica. 2021;89(4):1717‐1751.
- 41. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436‐444.
- 42. Hendrycks D, Gimpel K. A baseline for detecting misclassified and out‐of‐distribution examples in neural networks. arXiv preprint arXiv:1610.02136. 2016.
- 43. Hendrycks D, Mazeika M, Dietterich T. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606. 2018.
- 44. Liang S, Li Y, Srikant R. Enhancing the reliability of out‐of‐distribution image detection in neural networks. arXiv preprint arXiv:1706.02690. 2017.
- 45. Daxberger E, Hernández‐Lobato JM. Bayesian variational autoencoders for unsupervised out‐of‐distribution detection. arXiv preprint arXiv:1912.05651. 2019.
- 46. Nalisnick E, Matsukawa A, Teh YW, Gorur D, Lakshminarayanan B. Do deep generative models know what they don't know? arXiv preprint arXiv:1810.09136. 2018.
- 47. Grathwohl W, Wang K‐C, Jacobsen J‐H, Duvenaud D, Norouzi M, Swersky K. Your classifier is secretly an energy based model and you should treat it like one. arXiv preprint arXiv:1912.03263. 2019.
- 48. Kendall A, Gal Y. What uncertainties do we need in bayesian deep learning for computer vision? Advances in Neural Information Processing Systems; 2017:5574‐5584.
- 49. Kingma DP, Welling M. Auto‐encoding variational bayes. arXiv preprint arXiv:1312.6114. 2013.
- 50. Bourached A, Nachev P. Unsupervised videographic analysis of rodent behaviour. arXiv preprint arXiv:1910.11065. 2019.
- 51. Motegi Y, Hijioka Y, Murakami M. Human motion generative model using variational autoencoder. Int J Model Optim. 2018;8(1):8‐12.
- 52. Chen N, Bayer J, Urban S, Van Der Smagt P. Efficient movement representation by embedding dynamic movement primitives in deep autoencoders. 2015 IEEE‐RAS 15th International Conference on Humanoid Robots (Humanoids). IEEE; 2015:434‐440.
- 53. Cao Z, Hidalgo G, Simon T, Wei S‐E, Sheikh Y. Openpose: realtime multi‐person 2D pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008. 2018.
- 54. Rezende DJ, Mohamed S, Wierstra D. Stochastic backpropagation and approximate inference in deep generative models. International Conference on Machine Learning; 2014:1278‐1286.
- 55. Kipf TN, Welling M. Variational graph auto‐encoders. arXiv preprint arXiv:1611.07308. 2016.
- 56. Ionescu C, Li F, Sminchisescu C. Latent structured models for human pose estimation. 2011 International Conference on Computer Vision. IEEE; 2011:2220‐2227.
- 57. Li D, Rodriguez C, Yu X, Li H. Word‐level deep sign language recognition from video: a new large‐scale dataset and methods comparison. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV); 2020.
- 58. Grassia FS. Practical parameterization of rotations using the exponential map. J Graph Tools. 1998;3(3):29‐48.
- 59. Paszke A, Gross S, Chintala S, et al. Automatic differentiation in PyTorch. 2017. https://openreview.net/forum?id=BJJsrmfCZ
- 60. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
- 61. McInnes L, Healy J, Melville J. Umap: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. 2018.