Table 1. Summary of DL-based DRP methods in the literature.
| Study | Model | Algorithm | Strengths | Limitations | Datasets | Results |
|---|---|---|---|---|---|---|
| Chang et al. (2018) | CDRscan | Cancer drug response profile scan, a novel deep learning model | High prediction accuracy | Low R² values for a few GDSC compounds | GDSC | R² of 0.84 and AUROC of 0.98 |
| Zhang, Chen & Li (2021) | ConsDeepSignaling | Deep learning model constrained by signaling pathways | Extracts meaningful features with less complexity | Over-fitting problem | GDSC | MSE of 0.008 and Pearson correlation of 0.85 |
| Liu et al. (2019) | tCNNS | Twin convolutional neural network for drugs in SMILES format (tCNNS) | Two convolutional networks extract features for cancer cell lines and drugs | Small training set and few features | GDSC | R² of 0.826 and Pearson correlation of 0.909 |
| Nguyen et al. (2022) | GraphDRP | Graph convolutional networks for drug response prediction | Deep representation of vital features | High complexity | GDSC | RMSE of 0.0362 and Pearson correlation of 0.8402 |
| Su et al. (2019) | Deep-Resp-Forest | Deep cascaded forest model, Deep-Resp-Forest | Multi-grained transformation of raw features | Does not provide the exact sensitivity values | GDSC, CCLE | 93% to 98% accuracy and reduced time consumption of 300 s |
| Zhang et al. (2018) | HNMDRP | Heterogeneous network-based method for drug response prediction | 2% to 25% improvement in AUC | Poor incorporation of the cell line, drug, and target similarity networks | GDSC | AUC of 0.69 to 0.86 |
| Preuer et al. (2018) | DeepSynergy | Deep learning for drug synergies model | Maximal efficacy for the combined representation of cell lines and drug synergies | Difficulties in generalizing the network with fewer drugs and cell lines | GDSC | Pearson correlation coefficient of 0.73 and AUC of 0.90 |
| Chen et al. (2018) | DBN and ontology fingerprints | Deep belief network and ontology fingerprints | High performance even when the data are unbalanced | Limited training capability | GDSC | Precision of 100%, recall of 85%, and F-measure of 92% |
| Matlock et al. (2018) | RF | Random forest | Automatically lowers the inherent bias | Stacking accounts only for linear bias and does not consider nonlinear bias | GDSC | AUC of 0.9, error of 0.4, and eigenvalues of 0.95 and 0.23 |
| Xia et al. (2018) | ReNN | Recurrent neural network | Explained 94% of the response variance | Requires hyper-parameter optimization for better tuning | GDSC | Pearson correlation of 0.972, Spearman's rank correlation of 0.965, and R² of 0.94 |
| Tan et al. (2019) | Ensemble learning | Novel ensemble learning method | Integrates gene expression signatures to improve prediction | Does not consider the cancer relationships from the sub-networks | GDSC, CCLE | MSE of 2.03 (GDSC) and 4.496 (CCLE) |
| Chiu et al. (2019) | DNN | Deep neural network | High accuracy due to pre-training with a large pan-cancer dataset | Limited interpretability. | GDSC | MSE 1.96 |
| Rampášek et al. (2019) | Dr.VAE | Drug response variational autoencoder | Improves the evidence of the training data | High complexity | GDSC | AUROC 0.71, Pearson correlation 0.89 and P-value 0.475 |
| Sharifi-Noghabi et al. (2019) | MOLI-DNN | Multi-omics late integration with deep neural networks | Optimizes the feature representation for each omics type | Class imbalance problem, data heterogeneity, and limited learning of combination data | GDSC | AUC of 0.8 |
| Kuenzi et al. (2020) | DrugCell using VNN | Visible neural network | High interpretability of cell response | Does not consider some vital mutations | GDSC | Spearman correlation of 0.8 and AUC of 0.83 |
| Snow et al. (2020) | DNN | Deep neural network | Omits drug docking to save time and generalize the model | Limited to androgen receptor mutants | GDSC | Precision of 80%, recall of 79%, and F1-score of 79%, with an MCC of 0.654 |
| Wang et al. (2020) | DL-based drug metabolite prediction | Deep learning | High accuracy and reduced time complexity | High false-positive rate | GDSC | Accuracy of 78% |
| Liu et al. (2020) | DeepCDR | Hybrid graph convolutional network | Automatically learns the latent representation | Higher memory usage for graph network construction | GDSC | Pearson correlation of 0.923, RMSE of 1.058, and Spearman correlation of 0.903 |
| Li et al. (2020) | DNN | Deep neural network | Large perturbation sample sets were used for training. | Validation requires large in vitro or in vivo experiments | GDSC, NSCLC | AUC of 0.89 |
| Kim et al. (2021) | DrugGCN | Graph convolutional network | High-accuracy feature learning using prior knowledge | High complexity for larger graph construction | GDSC | RMSE of 2.5, Pearson correlation of 0.45, and Spearman correlation of 0.45 |
| Emdadi & Eslahchi (2021) | Auto-HMM-LMF | Autoencoder and hidden Markov model | High accuracy | High randomness in the prediction process | GDSC, CCLE | GDSC: 70% accuracy, AUC of 0.78, and MCC of 0.39; CCLE: 79% accuracy, AUC of 0.83, and MCC of 0.53 |
| Malik, Kalakoti & Sundar (2021) | DL with NCA | Deep learning with neighborhood component analysis | High accuracy in both DRP and survival prediction | Additional complexity of clustering to obtain binary responses | GDSC | Survival prediction accuracy of 94%, regression value of 0.92, and MSE of 1.154 |
| Li et al. (2021) | DeepDSC | Deep neural network for drug sensitivity in cancer | Less complexity | Limitation due to training on a merged dataset | GDSC, CCLE | GDSC: RMSE of 0.52 and R² of 0.78; CCLE: RMSE of 0.23 and R² of 0.78 |
| Ma et al. (2021) | Few-shot learning | Few-shot learning framework that bridges large-sample drug screens (n-of-many) to the distinctive contexts of individual patients (n-of-one) | High versatility | Does not consider all vital features | GDSC | Pearson correlation of 0.54 and accuracy of 81% |
| Zhang et al. (2021) | AuDNNsynergy | Synergistic drug combination prediction by integrating multi-omics data in deep learning models | Accurate prediction of drug-combination responses for specific cancer cell lines | High processing cost | GDSC | 93% accuracy, 72% precision, AUC of 0.91, Kappa coefficient of 0.51, RMSE of 15.46, and Pearson correlation of 0.74 |
| She et al. (2022) | DeepMDS | Deep learning approach that integrates multi-omics data to predict novel synergistic multi-drug combinations | Low RMSE in the regression task and the best classification accuracy, sensitivity, and specificity | High complexity; over-fitting problem | GDSC | MSE of 2.50, RMSE of 1.58, accuracy of 0.94, sensitivity of 0.95, and specificity of 0.93 |
| Tahmouresi et al. (2022) | PGSA | Pyramid gravitational search algorithm (PGSA) | Hybrid method that cyclically reduces the number of genes to overcome the curse of dimensionality; ranked first in accuracy with 73 genes | Classification of high-dimensional microarray gene expression data remains challenging | GEO (NCBI) | High accuracy (84.5%) |
| Shaban (2023) | NHFSM | New hybrid feature selection method (NHFSM) | Combines the advantages of the bat algorithm and particle swarm optimization with a filter method to eliminate many drawbacks | Validation requires large in vitro or in vivo experiments | GDSC | Accuracy of 0.97, precision of 0.76, sensitivity/recall of 0.75, and F-measure of 0.716 |
| Alweshah et al. (2023) | BWO-IG | Two methodologies: the unaltered BWO variant and a hybrid BWO variant combined with the Iterated Greedy algorithm (BWO-IG) | The hybridized BWO-IG method performs local searches quickly and accurately | The datasets contain diverse, high-dimensional samples and genes, with a significant discrepancy between the numbers of samples and genes | GDSC | Average classification accuracy of 94.426, average fitness value of 0.061, and an average of 2933.767 selected genes |
| Zhao et al. (2023) | CMGS | Cell-membrane-based glucose sensor (CMGS) to track GLUT1 transmembrane transport in tumor cells and screen for GLUT1 inhibitors in traditional Chinese medicines (TCMs) | High selectivity and stability in the presence of interfering molecules | Lacks comprehensive kinetic monitoring of other membrane proteins and of receptor effects on cell membranes | TCMs | High selectivity and stability |
| He et al. (2023) | TOO | Cross-cohort computational framework to trace the tumor tissue-of-origin (TOO) | Uses RNA sequencing and logistic regression models to trace the tissue-of-origin of 32 cancer types with high accuracy | Complexity and limited learning of combination data | TCGA, ICGC | |
| Sahu & Dash (2023) | GWO-RNN and GWO-LSTM | Hybrid multifilter-ensemble machine-learning model using the Grey Wolf Optimizer, a recurrent neural network, and LSTM | MF-GWO-RNN achieves high accuracy on the leukemia and lung datasets from SRBCT | Difficulties in generalizing and limited training | SRBCT | MF-GWO-RNN accuracies of 97.11%, 95.92%, and 92.81%; MF-GWO-LSTM accuracies of 97.17%, 98.56%, and 96.38%, respectively |
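The Results column above mixes several regression metrics (MSE, RMSE, Pearson correlation, Spearman correlation, and R²). As a reading aid, the following is a minimal Python sketch, not drawn from any of the cited studies, of how these metrics are typically computed from observed and predicted drug responses; the function name `regression_metrics` and the synthetic response values are illustrative assumptions.

```python
# Illustrative sketch: computing the regression metrics reported in Table 1
# (MSE, RMSE, Pearson r, Spearman rho, R^2) for predicted vs. observed responses.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def regression_metrics(y_true, y_pred):
    """Return common DRP evaluation metrics for two 1-D arrays of responses."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)          # mean squared error
    rmse = np.sqrt(mse)                            # root mean squared error
    r_pearson, _ = pearsonr(y_true, y_pred)        # linear correlation
    rho_spearman, _ = spearmanr(y_true, y_pred)    # rank correlation
    # R^2: fraction of response variance explained by the predictions
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"MSE": mse, "RMSE": rmse, "Pearson": r_pearson,
            "Spearman": rho_spearman, "R2": r2}

# Synthetic values standing in for measured and predicted drug responses (e.g., IC50).
observed = [0.8, 1.2, 2.5, 3.1, 4.0]
predicted = [0.9, 1.1, 2.2, 3.4, 3.8]
print(regression_metrics(observed, predicted))
```

Note that studies reporting classification results (AUC, AUROC, precision, recall, F-measure, MCC) first binarize the responses into sensitive/resistant classes, so those figures are not directly comparable with the regression metrics above.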