Table 3. Accuracy, precision, and recall of SparkText with three classifiers on the three datasets.
Dataset | Classifier | Accuracy | Precision | Recall |
---|---|---|---|---|
Abstracts | SVM | 94.63% | 93.11% | 94.81% |
Abstracts | Logistic Regression | 92.19% | 91.07% | 89.49% |
Abstracts | Naïve Bayes | 89.38% | 89.13% | 90.82% |
Full-text Articles I | SVM | 94.47% | 92.97% | 93.14% |
Full-text Articles I | Logistic Regression | 91.05% | 90.77% | 89.19% |
Full-text Articles I | Naïve Bayes | 88.02% | 89.01% | 90.68% |
Full-text Articles II | SVM | 93.81% | 91.88% | 92.27% |
Full-text Articles II | Logistic Regression | 90.57% | 90.28% | 91.59% |
Full-text Articles II | Naïve Bayes | 86.44% | 87.61% | 89.12% |
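For reference, the three reported metrics are defined as accuracy = (TP + TN) / total, precision = TP / (TP + FP), and recall = TP / (TP + FN). The sketch below illustrates how these values can be computed for a binary classifier's predictions on a held-out test split; it is an illustrative example only, not the SparkText implementation, and the label vectors `y_true` and `y_pred` are hypothetical placeholders rather than outputs of the classifiers in Table 3.

```python
# Illustrative sketch only: how accuracy, precision, and recall are computed.
# The label vectors below are hypothetical placeholders, not SparkText outputs.
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical binary labels (1 = positive class) for a held-out test set.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# accuracy  = (TP + TN) / total predictions
# precision = TP / (TP + FP)
# recall    = TP / (TP + FN)
print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2%}")
print(f"Precision: {precision_score(y_true, y_pred):.2%}")
print(f"Recall:    {recall_score(y_true, y_pred):.2%}")
```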