Table 1. Summary of literature works on depression detection in online social networks.
| Study | Data Source | Methods | Results | Limitations |
|---|---|---|---|---|
| AlSagri & Ykhlef (2020) | Not reported | Naïve Bayes, Support Vector Machines, Decision Tree | Accuracy: 0.825; Precision: 0.739; Sensitivity: 0.850; F-measure: 0.791; AUC: 0.780 | Different SVM kernels not explored; overfitting not avoided; lack of interpretability and comprehensibility |
| Kim et al. (2020) | Not reported | XGBoost, Convolutional Neural Network | Accuracy: 0.751; Precision: 0.891; Sensitivity: 0.718; F-measure: 0.795 | Low success rate; lack of explainability; limited scope |
| Fatima et al. (2018) | LiveJournal | Random Forest | Accuracy: 0.918 | Small dataset; single machine-learning algorithm; no additional evaluation metrics; inefficient on unbalanced datasets |
| Orabi et al. (2018) | Not reported | Convolutional Neural Network, Bidirectional LSTM | Accuracy: 0.850 | Small dataset; lack of interpretability and comprehensibility; difficulty of training RNNs; vanishing-gradient problems |
| De Choudhury et al. (2013) | Not reported | Support Vector Machines | Accuracy: 0.700 | Low success rate; lack of explainability; single machine-learning algorithm; unsuitable for large datasets |
| Thorstad & Wolff (2019) | Not reported | Cluster Analysis, Logistic Regression | Accuracy: 0.390; F-measure: 0.380 | Low performance; lack of interpretability |
| Nadeem (2016) | Not reported | Support Vector Machines, Decision Tree, Naïve Bayes, Logistic Regression | Accuracy: 0.860; Sensitivity: 0.830; F-measure: 0.840 | Use of an outdated dataset; emphasis on user self-disclosure; lack of interpretability |
| Aldarwish & Ahmad (2017) | Facebook, Twitter, and LiveJournal | Support Vector Machines, Naïve Bayes | Accuracy: 0.633; Sensitivity: 0.570 | Use of an outdated Arabic dataset; limited phrases and sentences; low success rate |
| Gaikar et al. (2019) | Not reported | Support Vector Machines–Naïve Bayes hybrid model | Accuracy: 0.850 | High computational complexity when comparing long and short snippets; appropriate parameter values must be determined for the combined methods; lack of interpretability |
| Islam et al. (2018) | Not reported | Support Vector Machines and LIWC | Accuracy: 0.700 | Unsuitable for large datasets; lack of interpretability |
| Wang et al. (2018) | Not reported | Convolutional Neural Network | F-measure: 0.670 | Does not encode the position and order of entities; lack of interpretability |
| Burdisso, Errecalde & Montes-y-Gómez (2019) | Not reported | SS3 | F-measure: 0.610; Precision: 0.630; Sensitivity: 0.600 | Small dataset; low performance; lack of interpretability |
| Adarsh et al. (2023) | Not reported | One-shot decision, combined SVM and KNN | Accuracy: 0.981; Precision: 0.968; Sensitivity: 0.976; F-measure: 0.973; AUC: 0.979 | No handling of multiclass depression classification; too many parameters must be adjusted a priori |
| Gupta, Pokhriyal & Gola (2022) | Not reported | Combined Convolutional Neural Network and LSTM | Accuracy: 0.940; Precision: 0.942; Sensitivity: 0.937; F-measure: 0.940 | Many values must be adjusted a priori for the combined methods; lack of explainability; unsuitable for unbalanced datasets |
| Chen et al. (2023) | Not reported | Combined Convolutional Neural Network and SBERT | Accuracy: 0.860; Precision: 0.850; Sensitivity: 0.870; F-measure: 0.860 | High computational cost; lack of explainability; too many parameters |
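Every row in Table 1 reports some subset of the same evaluation metrics, so it may help to recall how they are derived from a binary confusion matrix. The sketch below is illustrative only; the counts are made up and do not come from any of the cited studies.

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute accuracy, precision, sensitivity (recall), and F-measure
    from the four cells of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # also called recall
    # F-measure is the harmonic mean of precision and sensitivity
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": round(accuracy, 3),
        "precision": round(precision, 3),
        "sensitivity": round(sensitivity, 3),
        "f-measure": round(f_measure, 3),
    }

# Hypothetical counts: 50 true positives, 10 false positives,
# 20 false negatives, 120 true negatives.
print(binary_metrics(50, 10, 20, 120))
# → {'accuracy': 0.85, 'precision': 0.833, 'sensitivity': 0.714, 'f-measure': 0.769}
```

Note that accuracy alone can be misleading on unbalanced datasets (a limitation flagged for several studies above), which is why rows reporting precision, sensitivity, and F-measure allow a more complete comparison.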