PeerJ Comput Sci. 2024 Mar 25;10:e1903. doi: 10.7717/peerj-cs.1903

Table 2. Comparison of the most popular deep learning methods. (Illustrative code sketches for several of these architectures follow the table.)

Denoising autoencoder (Chi & Deng, 2020)
Description: Designed to handle data corrupted by noise.
Strengths: High ability to extract useful features and compress information.
Weaknesses: High computational cost and reliance on injected random noise; scalability issues in high-dimensional problems.

Sparse autoencoder (Munir et al., 2019)
Description: A model for data reconstruction in a highly sparse network in which only a fraction of the connections between neurons is active at a time.
Strengths: Readily produces linearly separable variables in the encoding layer.
Weaknesses: Requires long computational training time to process input data.

Restricted Boltzmann machine (RBM) (Sarker, 2021)
Description: A stochastic recurrent generative neural network; one of the first models capable of learning internal representations.
Strengths: Can complete patterns with missing data and solve complex combinatorial problems.
Weaknesses: The training process is complex and time-consuming due to the connectivity between neurons.

Deep Boltzmann machine (DBM) (Wang et al., 2019)
Description: A Boltzmann network with connectivity constraints between layers that facilitate inference and learning.
Strengths: Allows robust extraction of information, and the possibility of supervised training enables the use of a feedback mechanism.
Weaknesses: High training time for large datasets and difficult tuning of internal parameters.

Deep belief network (DBN) (Wang et al., 2018)
Description: Designed with undirected connections between the first two layers and directed links in the remaining layers.
Strengths: Able to extract global insights from data; performs well on dimensionality-reduction problems.
Weaknesses: Slow training process due to the number of connections.

Convolutional neural network (CNN) (Liu et al., 2019)
Description: A deep neural network structure inspired by the mechanisms of the biological visual cortex.
Strengths: Supports different training strategies, performs well on multidimensional data, and can learn abstract representations of raw data.
Weaknesses: Requires a large volume of data and extensive hyperparameter tuning to perform well.

Recurrent neural network (RNN) (Mirjebreili, Shalbaf & Shalbaf, 2023)
Description: A framework for modelling sequential time-series data through a recurrent time layer that learns complex variations in the data.
Strengths: The most widely used architecture for modelling time-series data.
Weaknesses: Training is complex and sometimes affected by vanishing gradients; the many parameters to update make learning time-consuming and difficult.
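To make the comparison concrete, the sketches below illustrate several of the architectures in Table 2. First, a minimal denoising autoencoder (PyTorch is an assumed library choice; layer sizes, noise level, and learning rate are illustrative, not from the article). The defining trait is that the network reconstructs the clean input from a deliberately corrupted copy.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(32, 784)                     # stand-in batch of clean inputs
noisy = clean + 0.3 * torch.randn_like(clean)   # corrupt with Gaussian noise

recon = model(noisy)            # reconstruct from the corrupted copy
loss = loss_fn(recon, clean)    # the target is the *clean* input
opt.zero_grad()
loss.backward()
opt.step()
```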
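A sparse autoencoder differs mainly in its loss: a sparsity penalty keeps most hidden units inactive at any time. The sketch below uses an L1 penalty on the hidden activations as a common simplification; the article does not prescribe a specific penalty, and a KL-divergence term on average activations is another standard choice.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=128):
        super().__init__()
        self.encode = nn.Linear(n_in, n_hidden)
        self.decode = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.relu(self.encode(x))   # hidden code, penalised for activity
        return self.decode(h), h

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                  # stand-in input batch

recon, h = model(x)
sparsity_weight = 1e-3                   # illustrative value
loss = nn.functional.mse_loss(recon, x) + sparsity_weight * h.abs().mean()
opt.zero_grad()
loss.backward()
opt.step()
```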
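An RBM is trained not by backpropagation but by approximating the gradient of the data log-likelihood; one-step contrastive divergence (CD-1) is the usual approximation. A minimal sketch for binary units (sizes and learning rate are illustrative):

```python
import torch

class RBM:
    def __init__(self, n_vis=784, n_hid=64, lr=0.1):
        self.W = torch.randn(n_vis, n_hid) * 0.01
        self.b_v = torch.zeros(n_vis)
        self.b_h = torch.zeros(n_hid)
        self.lr = lr

    def sample_h(self, v):
        p = torch.sigmoid(v @ self.W + self.b_h)
        return p, torch.bernoulli(p)

    def sample_v(self, h):
        p = torch.sigmoid(h @ self.W.t() + self.b_v)
        return p, torch.bernoulli(p)

    def cd1(self, v0):
        # one step of contrastive divergence: up, down, up again
        ph0, h0 = self.sample_h(v0)
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        # positive phase minus negative phase, averaged over the batch
        self.W += self.lr * (v0.t() @ ph0 - v1.t() @ ph1) / v0.shape[0]
        self.b_v += self.lr * (v0 - v1).mean(0)
        self.b_h += self.lr * (ph0 - ph1).mean(0)

rbm = RBM()
batch = torch.bernoulli(torch.rand(32, 784))  # stand-in binary data
rbm.cd1(batch)
```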
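A DBN can then be built by greedy layer-wise pretraining: each RBM learns to model the hidden activations of the one below it. (A DBM instead couples all layers undirectedly, which this simplified sketch does not capture.) The sketch assumes the RBM class from the previous example is in scope; layer sizes and epoch count are illustrative.

```python
import torch
# Assumes the RBM class from the previous sketch is defined in this scope.

def pretrain_dbn(data, layer_sizes, epochs=5):
    """Greedy layer-wise pretraining: each RBM is trained on the
    hidden activations produced by the RBM below it."""
    rbms = []
    v = data
    for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
        rbm = RBM(n_vis=n_vis, n_hid=n_hid)
        for _ in range(epochs):
            rbm.cd1(v)
        v, _ = rbm.sample_h(v)   # propagate activation probabilities upward
        rbms.append(rbm)
    return rbms

data = torch.bernoulli(torch.rand(32, 784))
dbn = pretrain_dbn(data, [784, 256, 64])
```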
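A CNN applies shared local filters followed by pooling, which is what makes it effective on multidimensional grid data such as images. A minimal classifier sketch (input shape, channel counts, and class count are illustrative):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local receptive fields
            nn.ReLU(),
            nn.MaxPool2d(2),                             # spatial downsampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        x = self.features(x)               # (batch, 32, 7, 7) for 28x28 input
        return self.classifier(x.flatten(1))

model = SmallCNN()
images = torch.rand(8, 1, 28, 28)   # stand-in grayscale batch
logits = model(images)              # shape: (8, 10)
```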
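Finally, an RNN processes a sequence one step at a time through a recurrent layer. The sketch below uses an LSTM, a standard recurrent variant whose gating mitigates (but does not eliminate) the vanishing-gradient problem noted in the table; the sequence length and sizes are illustrative.

```python
import torch
import torch.nn as nn

class SequenceModel(nn.Module):
    def __init__(self, n_features=1, n_hidden=32):
        super().__init__()
        # LSTM gating helps gradients survive across many time steps
        self.rnn = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        out, _ = self.rnn(x)           # out: (batch, time, hidden)
        return self.head(out[:, -1])   # predict from the last time step

model = SequenceModel()
series = torch.rand(16, 50, 1)   # 16 series, 50 time steps, 1 feature
pred = model(series)             # one-step-ahead prediction, shape (16, 1)
```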