Table 5.
Advantages and disadvantages of the compared DL techniques
DL method | Advantages | Disadvantages |
---|---|---|
DNN | 1. Its implementation is relatively simple; deep neural networks with multiple hidden layers automatically discover the features of complex objects such as images 2. ANNs can be parallelized and run fast, so they are well suited to online processing 3. There is no need to pre-select key criteria; a DNN can consider all criteria and then determine which ones are relevant 4. DNN implementations allow developers to add learning capabilities to their applications 5. Self-organization and usability on big data thanks to the training process | 1. Lack of a sufficient theoretical foundation 2. Computationally costly; it requires a long training time, and training a DNN on big data can take days or months 3. A large number of hyper-parameters need to be tuned, and with an increasing number of hidden layers and nodes the training algorithm is more likely to get trapped in a local optimum 4. A large amount of training data is required for the training process |
DBN | 1. The training of DBNs is divided into two phases, pre-training and fine-tuning: in pre-training, unsupervised training is performed for feature extraction, while in fine-tuning, a supervised algorithm further adjusts the network parameters (a layer-wise pre-training sketch follows this table) 2. DBN networks have a level of flexibility 3. DBN can be applied to unlabeled data; moreover, overfitting and underfitting errors can be avoided | 1. Time-consuming training (two-phase learning) 2. Local (spatial) information is lost as the network gets deeper |
CNN | 1. CNN was the first truly successful DL method, owing to the successful training of its hierarchical layers 2. CNN requires minimal pre-processing 3. It is suitable for feature extraction, image classification, image recognition, and prediction problems 4. CNN reduces the number of parameters by leveraging spatial relationships (see the parameter-count sketch after this table) 5. CNN fine-tunes all the layers of the network | 1. A large amount of training data is required for the training process 2. It requires a lot of time and computing resources |
RNN | 1. RNNs deal with sequential data 2. RNNs can capture longer context patterns 3. RNNs are used to learn temporal patterns in the data | 1. It requires a long training time 2. The training process is difficult 3. RNN performance degrades rapidly as sequences grow longer, e.g. because of vanishing or exploding gradients |
LSTM | 1. It allows information to flow both forwards and backwards within the network (see the bidirectional LSTM sketch after this table) 2. It is well suited to processing time-series data 3. It can learn its tasks without the need to predict the local sequence | 1. The training process is difficult 2. Complex network structure 3. It is computationally expensive |
DBM | 1. Able to learn internal representations 2. It is a fully connected neural network 3. DBM deals robustly with ambiguous inputs | 1. It requires a long training time 2. Difficult to train |
DAE | 1. It has the ability to extract useful features during propagation and to filter out useless data 2. DAE is an unsupervised DL architecture used for dimensionality reduction (see the autoencoder sketch after this table) | 1. The training process is difficult 2. DAE requires pre-training |