Comput Ind Eng. 2022 Jan 29;166:107970. doi: 10.1016/j.cie.2022.107970

Table 1.

Summary of the literature review.

| Author | Previous work | Research gap | Our contributions |
| --- | --- | --- | --- |
| Qolomany et al., 2017 | Showed that the PSO technique holds great potential to optimize parameter settings and thus save valuable computational resources when tuning deep learning models. | Most existing studies focus on optimizing the hyperparameters of deep learning models evolutionarily, but little research explores improved evolutionary algorithms to enhance the performance of deep learning models. | Propose an IPSO-DNN model that optimizes the kernel hyperparameters of a DNN in a self-adaptive evolutionary way without degrading the DNN's prediction precision (see the first sketch after this table). |
| Young et al., 2015 | Presented multi-node evolutionary neural networks that automate network selection on computational clusters through hyperparameter optimization performed via a genetic algorithm. | | |
| Darwish et al., 2020 | Developed the orthogonal learning particle swarm optimization algorithm to find optimal values for the hyperparameters of convolutional neural networks. | | |
| Ye, 2017 | Introduced a new automatic hyperparameter selection approach that determines the optimal network configuration of a DNN using PSO combined with a steepest gradient descent algorithm. | | |
| Pornsing et al., 2016 | Used a self-adaptive method that encodes the parameters into the particles and optimizes them along with the positions at run time. | Most previous works aim to avoid premature convergence in order to improve the performance of PSO. However, it is very challenging to simultaneously accelerate convergence and avoid local optima in evolutionary algorithms. | Develop an IPSO algorithm that employs a self-adaptive strategy and generalized opposition-based learning in a micro-population setting to balance global exploration and local exploitation, improving the performance of the PSO algorithm (see the second sketch after this table). |
| Hop et al., 2021 | Proposed an adaptive particle swarm optimization (APSO) algorithm in which the inertia weight, cognitive coefficient, and social coefficient are all adjusted automatically to search for better solutions to scheduling problems. | | |
| Zhang et al., 2017 | Presented an immune particle swarm algorithm based on adaptive search that dynamically adjusts the subpopulation size and automatically adjusts the search range using the maximum particle concentration value. | | |
| Liang et al., 2006 | Introduced comprehensive-learning PSO (CLPSO), which focuses on avoiding local optima but brings slower convergence and higher computational cost. | | |
| Singh et al., 2020 | Applied the multi-objective differential evolution algorithm to tune the initial parameters of convolutional neural networks in order to identify COVID-19 patients from chest CT images. | Although many studies have explored deep learning techniques for COVID-19 infection detection, little research has measured the effect of social distancing on COVID-19 spread. | Estimate the effect of social distancing in terms of mobility metrics, then apply the proposed IPSO-DNN hybrid model to predict the effect of social distancing on the spread of COVID-19. |
| Dhayne et al., 2021 | Introduced deep learning techniques to link potential patients to suitable clinical trials. | | |
| Te Vrugt et al., 2020 | Developed an extended model of disease spread that combines an SIR model with dynamical density functional theory, in which social distancing is explicitly considered. | | |
| Greenstone & Nigam, 2020 | Developed and implemented a method to monetize the impact of moderate social distancing on deaths from COVID-19. | | |
| Fong et al., 2020 | Presented systematic reviews of the evidence base for the effectiveness of multiple mitigation measures, showing that more drastic social distancing measures might be reserved for severe pandemics. | | |
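
The first row group describes PSO-driven hyperparameter tuning of deep networks. The following is a minimal sketch of that generic idea, not the authors' IPSO-DNN: plain PSO searches a two-dimensional hyperparameter space (hidden width and learning rate) of a toy one-hidden-layer NumPy MLP on synthetic data. The objective, data, network size, and all PSO coefficients here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression task standing in for the paper's prediction problem.
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X).sum(axis=1, keepdims=True)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def fitness(particle):
    """Validation MSE of a one-hidden-layer MLP whose hidden width and
    learning rate are decoded from the particle position."""
    r = np.random.default_rng(42)                # fixed init: same hyperparams -> same fitness
    hidden = int(round(particle[0]))
    lr = 10.0 ** particle[1]                     # particle stores log10(learning rate)
    W1 = r.normal(0, 0.5, (3, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(200):                         # brief gradient-descent training
        h = np.tanh(Xtr @ W1 + b1)
        err = h @ W2 + b2 - ytr
        gW2 = h.T @ err / len(Xtr); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = Xtr.T @ dh / len(Xtr); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    hv = np.tanh(Xva @ W1 + b1)
    return float(np.mean((hv @ W2 + b2 - yva) ** 2))

# Plain PSO over (hidden units, log10 learning rate).
lo, hi = np.array([2.0, -3.0]), np.array([32.0, -0.3])
pop = 8
x = rng.uniform(lo, hi, (pop, 2)); v = np.zeros((pop, 2))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
g = pbest[np.argmin(pbest_f)].copy()
for t in range(15):
    r1, r2 = rng.random((pop, 2)), rng.random((pop, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    for i in range(pop):
        fi = fitness(x[i])
        if fi < pbest_f[i]:
            pbest[i], pbest_f[i] = x[i].copy(), fi
    g = pbest[np.argmin(pbest_f)].copy()
print("best hidden units:", int(round(g[0])), " best lr:", 10.0 ** g[1])
```

Each fitness evaluation retrains the small network from scratch, which is exactly the cost Qolomany et al. note PSO can keep manageable compared with exhaustive grid search.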
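The second row group motivates the IPSO ingredients: a micro-population, a self-adaptive (here, simply decaying) inertia weight, and generalized opposition-based learning (GOBL), where a candidate x in the population's current per-dimension bounds [a, b] is reflected to k(a + b) - x with random k in [0, 1]. The sketch below combines these ingredients on a toy objective under our own parameter choices; it is not the authors' exact IPSO update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                                   # toy objective to minimize
    return float(np.sum(x ** 2))

def ipso_style_search(f, dim=5, pop=6, iters=200, lo=-5.0, hi=5.0, p_gobl=0.3):
    """Micro-population PSO with a decaying inertia weight and GOBL jumps --
    a sketch of the named ingredients, not the paper's exact algorithm."""
    x = rng.uniform(lo, hi, (pop, dim))
    v = np.zeros((pop, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()

    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                # inertia shrinks as search matures
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)

        # GOBL: occasionally reflect each particle within the population's
        # current per-dimension bounds and keep whichever point is better.
        if rng.random() < p_gobl:
            a, b = x.min(axis=0), x.max(axis=0)
            k = rng.random((pop, dim))
            opp = np.clip(k * (a + b) - x, lo, hi)
            for i in range(pop):
                if f(opp[i]) < f(x[i]):
                    x[i] = opp[i]

        for i in range(pop):
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), fi
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

best, best_f = ipso_style_search(sphere)
print("best objective value:", best_f)
```

The opposition jumps give the tiny swarm a cheap escape route from local optima, while the shrinking inertia weight speeds late-stage convergence, which is the exploration/exploitation balance the contribution column claims.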