Deep belief networks [38,39,51]
Layers have direct connections to the layer below, and features are extracted hierarchically from the data.
Uses a feedback mechanism to extract relevant features through unsupervised adaptation.
High computational complexity due to the large number of parameters that must be initialized.
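The layer-wise unsupervised adaptation described above can be sketched as a stack of restricted Boltzmann machines, each trained with one step of contrastive divergence (CD-1) on the features of the layer below. This is a minimal NumPy sketch, not a tuned implementation; the layer sizes, learning rate, and random binary data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine layer of a DBN (illustrative sketch)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a batch v0."""
        h0 = self.hidden_probs(v0)
        # One Gibbs step: sample hidden units, reconstruct visibles, re-infer hiddens
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # Gradient approximation: positive phase minus negative phase
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return float(((v0 - v1) ** 2).mean())  # reconstruction error

# Greedy layer-wise pretraining: each RBM is trained on the
# hidden activations of the layer below it (sizes are arbitrary).
data = (rng.random((64, 16)) > 0.5).astype(float)
layers = [RBM(16, 8), RBM(8, 4)]
inputs = data
for rbm in layers:
    for _ in range(50):
        err = rbm.cd1_step(inputs)
    inputs = rbm.hidden_probs(inputs)  # features passed up to the next layer
```

After pretraining, `inputs` holds the hierarchical features of the top layer; in a full DBN these would typically be fine-tuned with a supervised objective.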
Convolutional neural networks [8,14,37,40,42] |
Uses interconnected convolutional structures to extract features that are invariant to distortion.
Widely used for human activity recognition owing to its ability to model time-dependent data; relatively invariant to changes in the data distribution.
Requires a large amount of training data to learn discriminative features, and involves optimizing a large number of hyper-parameters.
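The distortion invariance noted above can be illustrated with a tiny 1-D convolution plus max pooling, the operations a CNN applies to time-series such as activity data. A minimal sketch under stated assumptions: the edge-detecting kernel and the synthetic sensor trace are hypothetical, and pooling over the whole window is what makes the feature independent of where the event occurs.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation) of signal x with a kernel."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def global_max_pool(x):
    """Keep only the strongest response, wherever it occurred."""
    return x.max()

# Hypothetical motif detector: responds to a rising edge in a sensor trace
kernel = np.array([-1.0, 0.0, 1.0])

signal = np.zeros(20)
signal[5:8] = [0.0, 0.5, 1.0]   # event early in the window
shifted = np.roll(signal, 7)    # the same event, later in the window

f1 = global_max_pool(relu(conv1d(signal, kernel)))
f2 = global_max_pool(relu(conv1d(shifted, kernel)))
# Pooling makes the extracted feature invariant to the event's position:
# f1 and f2 are equal even though the input was shifted in time.
```

A real CNN learns many such kernels from data, which is why it needs large training sets and careful hyper-parameter tuning, as the limitations column notes.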
Recurrent neural networks [9,43,44] |
Deep learning algorithm for modeling temporal changes in data.
Able to capture temporal dependencies and complex changes in sequential data.
Difficult to train because of unstable (exploding) parameter updates and vanishing gradients.
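A minimal Elman-style recurrent cell illustrates both points: the hidden state carries temporal context from step to step, and the gradient with respect to an early state is a product of per-step Jacobians whose norm typically shrinks with sequence length, which is the vanishing-gradient problem noted above. The dimensions, weight scales, and random input sequence are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Elman-style RNN cell (illustrative; dimensions are arbitrary)
n_in, n_hidden = 3, 5
W_x = rng.normal(0, 0.5, (n_hidden, n_in))
W_h = rng.normal(0, 0.3, (n_hidden, n_hidden))
b = np.zeros(n_hidden)

seq = rng.normal(0, 1, (30, n_in))   # a sequence of 30 input vectors
h = np.zeros(n_hidden)
states = []

# Vanishing-gradient illustration: d h_T / d h_0 is a product of the
# per-step Jacobians diag(tanh'(.)) @ W_h, whose norm typically decays.
grad = np.eye(n_hidden)
norms = []
for x in seq:
    h = np.tanh(W_h @ h + W_x @ x + b)  # new state mixes input and old state
    states.append(h)
    jac = np.diag(1.0 - h ** 2) @ W_h   # Jacobian d h_t / d h_{t-1}
    grad = jac @ grad
    norms.append(np.linalg.norm(grad))
```

With these weight scales, `norms` shrinks sharply over the 30 steps, so early inputs contribute almost nothing to the final gradient; gated cells such as LSTMs were introduced to counter exactly this effect.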
Deep autoencoder algorithms [46,47,49,50] |
Unsupervised deep learning model trained to reconstruct its own input at the output.
Reduces high-dimensional data to low-dimensional feature vectors, which helps to reduce computational complexity.
Lacks scalability to very high-dimensional data, and is difficult to train and optimize, especially as a single-layer autoencoder.
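The dimensionality-reduction role described above can be sketched with a single-hidden-layer linear autoencoder trained by gradient descent to reconstruct its input; the synthetic data, layer sizes, learning rate, and iteration count are illustrative assumptions, not values from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 50 points that actually live on a 2-D subspace of R^8
latent = rng.normal(0, 1, (50, 2))
mix = rng.normal(0, 1, (2, 8))
X = latent @ mix

# Single-hidden-layer linear autoencoder: encode to 2-D, decode back to 8-D
W_enc = rng.normal(0, 0.3, (8, 2))
W_dec = rng.normal(0, 0.3, (2, 8))
lr = 0.02

for _ in range(500):
    Z = X @ W_enc            # low-dimensional codes
    X_hat = Z @ W_dec        # reconstruction of the input
    err = X_hat - X
    # Gradients of the mean squared reconstruction error
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss = float((err ** 2).mean())
codes = X @ W_enc            # 2-D feature vectors replacing the 8-D input
```

Because the data are truly two-dimensional, the 2-D codes retain what is needed to reconstruct the input; downstream models can then work on `codes` instead of the raw data, which is the computational saving the advantages column refers to.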