1
Three-layer feed-forward ANNs and two real-world problems are set as a benchmark to assess the performance of the Group Search Optimizer (GSO) [27].
GSOANN performs far better than a regular ANN.
—–
2
A hybrid model of the Differential Search Algorithm (DSA) and deep learning (DL) to strengthen the link between computer science and bioinformatics [28].
DSA combined with DL can help produce more xylitol for sugar-free gums.
Computational biologists and computer scientists can together produce a hybrid model using deep learning optimization algorithms (OA).
3 |
Experiments with autoencoders and with VGG-9 on CIFAR-10 were designed to study the properties of RMSProp and Adam against Nesterov's Accelerated Gradient (NAG) method [29].
With a very high value of β1 = 0.99, Adam achieves lower training and test losses, whereas with β1 = 0.9, NAG performs better.
The theory behind obtaining better results by pushing β1 close to 1 remains to be advanced.
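The contrast above hinges only on the momentum parameter β1. As a minimal sketch (not code from [29]), the three settings being compared would be configured in PyTorch roughly as follows; the toy model and every hyperparameter other than β1 are illustrative assumptions.

```python
# Minimal sketch: configuring the optimizer settings contrasted above.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Adam with a very high beta_1 = 0.99, the regime where it reportedly wins
adam_high_beta1 = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.99, 0.999))

# Adam with the default beta_1 = 0.9, the regime where NAG does better
adam_default = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

# Nesterov's Accelerated Gradient: SGD with momentum and nesterov=True in PyTorch
nag = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, nesterov=True)
```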
4 |
Different optimization algorithms are studied side by side with a CNN architecture [30].
Among seven optimizers evaluated on the LeNet architecture, Adam provides the smallest MSE, whereas SGD and Adagrad fail.
Analytical portable image devices can be built on this basis.
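A side-by-side benchmark of this kind can be set up as a simple loop over optimizer constructors. The sketch below is an assumption-laden illustration, not the protocol of [30]: the LeNet-style network, the synthetic batch, the optimizer list, and the training budget are all placeholders.

```python
# Sketch of a side-by-side optimizer comparison on a LeNet-style CNN with an MSE objective.
import torch
import torch.nn as nn

def make_lenet(num_outputs: int = 1) -> nn.Module:
    # A LeNet-5-style CNN for 32x32 single-channel inputs with a regression head
    return nn.Sequential(
        nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
        nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
        nn.Linear(120, 84), nn.Tanh(),
        nn.Linear(84, num_outputs),
    )

optimizers = {  # seven candidates; the actual set used in [30] may differ
    "SGD":      lambda p: torch.optim.SGD(p, lr=1e-2),
    "Adagrad":  lambda p: torch.optim.Adagrad(p, lr=1e-2),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
    "Adamax":   lambda p: torch.optim.Adamax(p, lr=1e-3),
    "Adadelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
    "NAdam":    lambda p: torch.optim.NAdam(p, lr=1e-3),
}

x = torch.randn(64, 1, 32, 32)   # stand-in batch; a real study would use image data
y = torch.randn(64, 1)
loss_fn = nn.MSELoss()

for name, make_opt in optimizers.items():
    net = make_lenet()
    opt = make_opt(net.parameters())
    for _ in range(100):          # short illustrative training budget
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    print(f"{name:9s} final MSE: {loss.item():.4f}")
```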
5 |
Constructed a few illustrative binary classification problems and examined the empirical generalization ability of adaptive methods against gradient descent (GD).
Solutions found by adaptive methods generalize worse than those found by SGD.
Adaptive methods should be reconsidered.
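A rough sketch of such an illustrative setup is given below, under the assumption of an overparameterized linear classifier (far more features than training points); the data generator, model, and step sizes are ours, and the generalization gap reported in the study is not guaranteed to reproduce on this toy data.

```python
# Sketch: compare test accuracy of SGD vs. Adam on an overparameterized binary problem.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_train, n_test, d = 50, 500, 1000        # many more features than training points
w_true = torch.randn(d)

def sample(n):
    X = torch.randn(n, d)
    y = (X @ w_true > 0).float()          # linearly separable labels
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

def train(opt_name: str) -> float:
    lin = nn.Linear(d, 1, bias=False)
    opt = (torch.optim.SGD(lin.parameters(), lr=0.1) if opt_name == "SGD"
           else torch.optim.Adam(lin.parameters(), lr=1e-3))
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(2000):
        opt.zero_grad()
        loss_fn(lin(X_tr).squeeze(1), y_tr).backward()
        opt.step()
    return ((lin(X_te).squeeze(1) > 0).float() == y_te).float().mean().item()

print("SGD  test accuracy:", train("SGD"))
print("Adam test accuracy:", train("Adam"))
```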
6 |
An Energy Index based Optimization Method (EIOM) that automatically adjusts the learning rate in backpropagation [31].
EIOM proves to be the best when compared with state-of-the-art optimization methods.
—– |
7 |
A non-asymptotic analysis of the convergence of two algorithms: SGD and simple averaging [32].
The analysis suggests a learning rate proportional to the inverse of the number of iterations.
Differentiable and non-differentiable stochastic problems.
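For concreteness, the step-size scheme and the averaged iterate referred to here can be written as below; the notation (η0, w_t, w̄_T) is ours, not necessarily that of [32].

```latex
% Sketch of the 1/t step size and the simple (Polyak-Ruppert-style) averaging of SGD iterates.
\[
  w_{t} = w_{t-1} - \eta_t \, \nabla_w \ell(w_{t-1}; x_t, y_t),
  \qquad
  \eta_t = \frac{\eta_0}{t},
\]
\[
  \bar{w}_T = \frac{1}{T} \sum_{t=1}^{T} w_t
  \quad \text{(simple averaging of the SGD iterates)}.
\]
```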
8 |
An adaptive learning rate and a Laplacian approach have been proposed for deep learning in MLPs [33].
Improved classification accuracy.
—– |
9 |
Proposed a fundamental approach for segmentation of anatomical structures, cellular structures, and tissue using a CNN operating on image patches measuring 13 × 13 voxels [34].
On different data sets, compared against six commonly used tools (i.e., ROBEX, HWA, BET, BEaST, BSE, and 3dSkullStrip), they achieved the highest average specificity.
The work can be extended using the most advanced tools and a real-time data set to obtain better results.
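The patch-wise formulation amounts to classifying the center voxel of each 13 × 13 neighborhood. The sketch below illustrates that idea only; the padding scheme, the small network, and the class count are assumptions rather than details from [34].

```python
# Sketch of patch-wise voxel classification with 13 x 13 in-plane patches.
import numpy as np
import torch
import torch.nn as nn

PATCH = 13  # neighborhood size around each voxel, per the summary above

def extract_patch(slice_2d: np.ndarray, row: int, col: int) -> np.ndarray:
    """Return the 13x13 neighborhood centered at (row, col) of one image slice."""
    half = PATCH // 2
    padded = np.pad(slice_2d, half, mode="reflect")
    return padded[row:row + PATCH, col:col + PATCH]

# A small CNN that predicts the tissue class of the center voxel from its patch
patch_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3), nn.ReLU(),   # 13 -> 11
    nn.Conv2d(16, 32, 3), nn.ReLU(),  # 11 -> 9
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 4),         # e.g. 4 tissue classes (assumption)
)

slice_2d = np.random.rand(128, 128).astype(np.float32)   # stand-in MRI slice
patch = extract_patch(slice_2d, 60, 60)
logits = patch_cnn(torch.from_numpy(patch)[None, None])  # shape (1, 4)
```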
10 |
Used a pretrained CNN model on augmented and original data for brain tumor classification [35].
They achieved 90.67% accuracy before and after data augmentation with the proposed method and compared it with the most advanced methods.
A lightweight CNN could be used to extend the work to fine-grained classification.
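A hedged sketch of the transfer-learning-with-augmentation recipe is shown below; the ResNet-18 backbone, the specific augmentations, and the three-class head are assumptions, not the exact configuration of [35].

```python
# Sketch: a pretrained backbone with a replaced classification head, plus simple augmentation.
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([          # training-time augmentation (illustrative choices)
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

# Requires torchvision >= 0.13 for the weights enum; older versions use pretrained=True
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # e.g. 3 tumor classes (assumption)
```

In practice the `augment` pipeline would be attached only to the training split, while the validation split keeps a plain `ToTensor` transform.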
11 |
A CapsNet for brain tumor classification and an investigation of the overfitting problem in CapsNet [36].
Over 10 epochs, they achieved 86.56% accuracy in a comparative analysis with a CNN.
In the future, investigations on the effect of additional layers on classification accuracy will be performed.
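At the core of any CapsNet is the squash non-linearity that turns a capsule's raw vector into one whose length behaves like a probability. The snippet below shows that building block only, not the full model of [36]; the tensor shapes are illustrative.

```python
# The squash non-linearity used in capsule networks.
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Scale capsule vectors so their length lies in [0, 1) while keeping direction."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

capsules = torch.randn(32, 10, 16)          # (batch, capsules, capsule dimension)
print(squash(capsules).norm(dim=-1).max())  # every capsule length is now < 1
```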
12 |
A review of deep learning techniques in the field of medical image classification [37].
They discussed in detail the deep learning approaches and their suitability for medical images.
Further research is required to apply these techniques to the modalities where they have not yet been applied.
13 |
GA-SVM and PSO-SVM methods used to classify heart disease [38].
Genetic algorithm (GA) and particle swarm optimization (PSO) combined with SVM achieved high accuracy.
—– |
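A minimal sketch of the PSO-SVM idea: particles move through (log10 C, log10 γ) space and their fitness is cross-validated accuracy. The dataset (a scikit-learn stand-in, since [38] uses a heart-disease data set), swarm size, and PSO constants are illustrative assumptions.

```python
# Minimal PSO over SVM hyperparameters, with cross-validated accuracy as fitness.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in for a heart-disease data set
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def fitness(pos: np.ndarray) -> float:
    C, gamma = 10.0 ** pos                 # positions encode log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, dims, iters = 10, 2, 15
pos = rng.uniform(-3, 3, (n_particles, dims))   # log10 search range (assumption)
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print("best (C, gamma):", 10.0 ** gbest, "CV accuracy:", pbest_fit.max())
```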
14 |
Applied a U-Net approach to the BraTS2017 data set for segmentation and prediction of patient survival [39].
89.6% accuracy was achieved with less computational time.
—– |
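To make the encoder-decoder-with-skip-connections idea concrete, here is a deliberately tiny U-Net sketch; the depth, channel counts, 2D (rather than 3D) convolutions, and label count are simplifying assumptions, not the architecture of [39].

```python
# Tiny 2-level U-Net: encoder, bottleneck, decoder with skip connections.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=4):   # 4 MRI modalities, 4 labels (assumptions)
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

print(TinyUNet()(torch.randn(1, 4, 128, 128)).shape)  # -> (1, 4, 128, 128)
```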
15 |
A two-pathway architecture based on CNNs for brain tumor segmentation on the BraTS 2013 and 2015 data sets [3].
The input-cascaded CNN achieved a high accuracy of 88.2% in a comparative analysis with other architectures.
The results could be further improved by increasing the number of architecture layers and the size of the data set.
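A simplified sketch of the two-pathway idea: a local path with small receptive fields and a global path with a larger one, fused before the classifier. Kernel sizes, channel counts, and the 33 × 33 patch size are assumptions for illustration, not necessarily the exact settings of [3].

```python
# Two-pathway CNN sketch: local detail path + global context path, fused before classification.
import torch
import torch.nn as nn

class TwoPathwayCNN(nn.Module):
    def __init__(self, in_ch=4, n_classes=5):     # 4 modalities, 5 labels (assumptions)
        super().__init__()
        self.local_path = nn.Sequential(           # fine detail around the center voxel
            nn.Conv2d(in_ch, 64, 7), nn.ReLU(), nn.MaxPool2d(4, stride=1),
            nn.Conv2d(64, 64, 3), nn.ReLU(), nn.MaxPool2d(2, stride=1),
        )
        self.global_path = nn.Sequential(          # larger spatial context in one layer
            nn.Conv2d(in_ch, 160, 13), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64 + 160, n_classes, 21)

    def forward(self, x):                          # x: (batch, 4, 33, 33) patches
        return self.classifier(torch.cat([self.local_path(x), self.global_path(x)], dim=1))

print(TwoPathwayCNN()(torch.randn(2, 4, 33, 33)).shape)  # -> (2, 5, 1, 1)
```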