Abstract
Binary classification problems are common in the medical field, where sensitivity, specificity, accuracy, and negative and positive predictive values are the usual measures of a binary predictor's performance. In computer science, a classifier is typically evaluated with precision (positive predictive value) and recall (sensitivity). As a single summary measure of a classifier's performance, the F1 score, defined as the harmonic mean of precision and recall, is widely used in information retrieval and information extraction evaluation because it possesses favorable characteristics, especially when prevalence is low. Statistical methods for inference about the F1 score have been developed for binary classification problems; however, they have not been extended to multi-class classification. There are three types of multi-class F1 scores, and their statistical properties have hardly been discussed. We propose methods based on the large-sample multivariate central limit theorem for estimating these F1 scores with confidence intervals.
Keywords: Precision, Recall, Machine learning, F1 measures, Multi-class classification, Delta-method
1. Introduction
In the medical field, binary classification problems are common, and sensitivity, specificity, accuracy, and negative and positive predictive values are often used as measures of a binary predictor's performance. In computer science, a classifier is usually evaluated with precision and recall, which equal the positive predictive value and sensitivity, respectively. For measuring the performance of text classification in information retrieval and of classifiers in machine learning, the F score (F measure) has been widely used. In particular, the F1 score, defined as the harmonic mean of precision and recall, has been popular [1, 2]. The F1 score is rarely used in diagnostic studies in medicine despite its favorable characteristics. As a single performance measure, the F1 score may be preferred to specificity and accuracy, which can be artificially high even for a poor classifier with a high false negative probability when disease prevalence is low. The F1 score is especially useful when identification of true negatives is relatively unimportant, because the true negative rate enters the computation of neither precision nor recall.
To evaluate a multi-class classification, a single summary measure is often sought, and as extensions of the binary F1 score, two such measures exist: the micro-averaged F1 score and the macro-averaged F1 score [2]. The micro-averaged F1 score pools per-sample classifications across classes and then calculates the overall F1 score. In contrast, the macro-averaged F1 score is a simple average of the within-class F1 scores. Sokolova and Lapalme [3] gave an alternative definition of the macro-averaged F1 score as the harmonic mean of the simple averages of precision and recall over classes. Both micro-averaged and macro-averaged F1 scores have a simple interpretation as an average of precision and recall, differing in how the averages are computed. Moreover, as will be shown in Section 2, the micro-averaged F1 score has an additional interpretation as the total probability of true positive classifications.
For binary classification, some statistical methods for inference about the F1 score have been proposed (e.g., [4]); however, the methodology has not been extended to the multi-class F1 scores. To our knowledge, methods for computing variance estimates of the micro-averaged and macro-averaged F1 scores have not been reported. Confidence intervals for the multi-class F1 scores therefore cannot be computed, and inference about them is usually based solely on point estimates, which severely limits their practical utility. For example, Dong et al. [5] calculated point estimates of macro-averaged F1 scores for four classifiers and concluded that one classifier outperformed the others by comparing the point estimates without taking their uncertainty into account. Others have also used multi-class F1 scores but reported only point estimates without confidence intervals [6–16].
To address this knowledge gap, we herein provide methods for computing the variances of these multi-class F1 scores, making interval estimation of the micro-averaged and macro-averaged F1 scores possible in multi-class classification.
The rest of the manuscript is organized as follows: The definitions of the micro-averaged F1 score and macro-averaged F1 score are reviewed in Section 2. In Section 3, variance estimates and confidence intervals for the multi-class F1 scores are derived. A simulation study to investigate the coverage probabilities of the proposed confidence intervals is presented in Section 4. Then, our method is applied to a real study as an example in Section 5 followed by a brief discussion in Section 6.
2. Averaged F1 scores
This section introduces notation and the definitions of the multi-class F1 scores, namely the macro-averaged and micro-averaged F1 scores. Consider an r × r contingency table for a nominal categorical variable with r classes (r ≥ 2), with columns indicating the true conditions and rows the predicted conditions; such a table is also called a confusion matrix. The problem is called binary classification when r = 2 and multi-class classification when r > 2. We consider multi-class classification, i.e., r > 2, and denote the cell and marginal probabilities by $p_{ij}$, $p_{i\cdot}$, and $p_{\cdot j}$, respectively (i, j = 1, ⋯, r). For each class (i = 1, ⋯, r), the true positive rate ($TP_i$), the false positive rate ($FP_i$), and the false negative rate ($FN_i$) are defined as follows:

$$TP_i = p_{ii}, \qquad FP_i = \sum_{j \neq i} p_{ij}, \qquad FN_i = \sum_{j \neq i} p_{ji},$$

that is, $TP_i$ is the i-th diagonal element, $FP_i$ is the sum of the off-diagonal elements of the i-th row, and $FN_i$ is the sum of the off-diagonal elements of the i-th column. Note that $TP_i + FP_i = p_{i\cdot}$ and $TP_i + FN_i = p_{\cdot i}$.
In the current and following sections, we will use the simple 3-by-3 confusion matrix in Table 1 as an example to demonstrate various computations. Columns represent the true state, and rows represent the predicted classification. The total sample size is 100.
Table 1. Example confusion matrix (a: frequencies; b: proportions).

a: Frequencies

| Prediction | True Class 1 | True Class 2 | True Class 3 |
|---|---|---|---|
| Class 1 | 2 | 2 | 2 |
| Class 2 | 5 | 70 | 2 |
| Class 3 | 0 | 2 | 15 |

b: Proportions

| Prediction | True Class 1 | True Class 2 | True Class 3 | Total |
|---|---|---|---|---|
| Class 1 | 0.02 | 0.02 | 0.02 | 0.06 |
| Class 2 | 0.05 | 0.70 | 0.02 | 0.77 |
| Class 3 | 0.00 | 0.02 | 0.15 | 0.17 |
| Total | 0.07 | 0.74 | 0.19 | 1.00 |
The within-class probabilities are:

$$TP_1 = 0.02,\ FP_1 = 0.04,\ FN_1 = 0.05; \quad TP_2 = 0.70,\ FP_2 = 0.07,\ FN_2 = 0.04; \quad TP_3 = 0.15,\ FP_3 = 0.02,\ FN_3 = 0.04.$$
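For concreteness, these quantities can be reproduced in R from Table 1a; the following is a minimal sketch (the variable names are ours, for illustration only):

```r
## Table 1a frequencies: rows = predicted class, columns = true class
tab <- rbind(c(2,  2,  2),
             c(5, 70,  2),
             c(0,  2, 15))
p   <- tab / sum(tab)       ## cell proportions (Table 1b)
TPi <- diag(p)              ## 0.02 0.70 0.15
FPi <- rowSums(p) - TPi     ## 0.04 0.07 0.02
FNi <- colSums(p) - TPi     ## 0.05 0.04 0.04
```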
Micro-averaged F1 score
The micro-averaged precision (miP) and micro-averaged recall (miR) are defined as

$$\text{miP} = \frac{\sum_{i=1}^{r} TP_i}{\sum_{i=1}^{r} (TP_i + FP_i)}, \qquad \text{miR} = \frac{\sum_{i=1}^{r} TP_i}{\sum_{i=1}^{r} (TP_i + FN_i)}.$$

Note that for both miP and miR, the denominator is the sum of all the elements (diagonal and off-diagonal) of the confusion matrix, which is 1. Finally, the micro-averaged F1 score is defined as the harmonic mean of these quantities:

$$\text{miF1} = \frac{2\,\text{miP} \cdot \text{miR}}{\text{miP} + \text{miR}} = \sum_{i=1}^{r} p_{ii}. \tag{1}$$
This definition is commonly used (e.g., [6, 8–12, 14, 15]).
By definition, miP, miR, and miF1 are all equal to the sum of the diagonal elements, which in our example is 0.87.
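Continuing the sketch above, this is a single line:

```r
## miP = miR = miF1 = total true-positive probability
miF1 <- sum(TPi)            ## 0.87
```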
Macro-averaged F1 score
To define the macro-averaged F1 score (maF1), first consider the following precision ($P_i$) and recall ($R_i$) within each class, i = 1, ⋯, r:

$$P_i = \frac{TP_i}{TP_i + FP_i} = \frac{p_{ii}}{p_{i\cdot}}, \qquad R_i = \frac{TP_i}{TP_i + FN_i} = \frac{p_{ii}}{p_{\cdot i}}.$$

For our example, simple calculation shows:

$$P_1 = 0.333,\ P_2 = 0.909,\ P_3 = 0.882; \qquad R_1 = 0.286,\ R_2 = 0.946,\ R_3 = 0.789.$$

The F1 score within each class ($F1_i$) is defined as the harmonic mean of $P_i$ and $R_i$, that is,

$$F1_i = \frac{2 P_i R_i}{P_i + R_i} = \frac{2 p_{ii}}{p_{i\cdot} + p_{\cdot i}}.$$

The macro-averaged F1 score is defined as the simple arithmetic mean of the $F1_i$:

$$\text{maF1} = \frac{1}{r} \sum_{i=1}^{r} F1_i. \tag{2}$$
This score, like miF1, is frequently reported (e.g., [5–10, 13]). The $F1_i$ and maF1 in our example are:

$$F1_1 = 0.308,\ F1_2 = 0.927,\ F1_3 = 0.833; \qquad \text{maF1} = 0.689.$$
Alternative definition of Macro-averaged F1 score
Sokolova and Lapalme [3] gave an alternative definition of the macro-averaged F1 score (maF1*). First, the macro-averaged precision (maP) and macro-averaged recall (maR) are defined as simple arithmetic means of the within-class precision and within-class recall, respectively:

$$\text{maP} = \frac{1}{r} \sum_{i=1}^{r} P_i, \qquad \text{maR} = \frac{1}{r} \sum_{i=1}^{r} R_i.$$

Then maF1* is defined as the harmonic mean of these quantities:

$$\text{maF1*} = \frac{2\,\text{maP} \cdot \text{maR}}{\text{maP} + \text{maR}}. \tag{3}$$

This version of the macro-averaged F1 score is less frequently used (e.g., [11, 12, 16]). For our example,

$$\text{maP} = 0.708, \qquad \text{maR} = 0.674, \qquad \text{maF1*} = 0.691.$$
In this example, the micro-averaged F1 score is higher than the macro-averaged F1 scores because both within-class precision and recall are much lower for the first class than for the other two, and micro-averaging puts only a small weight on the first column because its sample size is relatively small. This numeric example shows a shortcoming of summarizing the performance of a multi-class classifier with a single number when within-class precision and recall vary substantially. Nevertheless, aggregate measures such as the micro-averaged and macro-averaged F1 scores are useful for quantifying the performance of a classifier as a whole.
3. Variance estimate and confidence interval
In this section, we derive the confidence intervals for miF1, maF1, and maF1*. We assume that the observed frequencies, $n_{ij}$, for 1 ≤ i ≤ r, 1 ≤ j ≤ r, have a multinomial distribution with sample size n and probability vector

$$\mathbf{p} = (p_{11}, \cdots, p_{1r}, p_{21}, \cdots, p_{2r}, \cdots, p_{r1}, \cdots, p_{rr})^T,$$

where "T" represents the transpose. The expectation, variance, and covariance, for i, j, k, l = 1, ⋯, r, are

$$E(n_{ij}) = n p_{ij}, \qquad Var(n_{ij}) = n p_{ij}(1 - p_{ij}), \qquad Cov(n_{ij}, n_{kl}) = -n\, p_{ij} p_{kl} \quad ((i,j) \neq (k,l)),$$

respectively, where $n = \sum_i \sum_j n_{ij}$ is the overall sample size. The maximum likelihood estimate (MLE) of $p_{ij}$ is $\hat{p}_{ij} = n_{ij}/n$. Using the multivariate central limit theorem, we have

$$\sqrt{n}\,(\hat{\mathbf{p}} - \mathbf{p}) \mathrel{⩪} N_{r^2}\!\left(\mathbf{0},\ \mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T\right),$$

where $\mathbf{0}$ is the r² × 1 vector whose elements are all 0, diag(p) is the r² × r² diagonal matrix whose diagonal elements are p, and "⩪" represents "approximately distributed as."
By the invariance property of MLEs, the maximum likelihood estimates of miF1, maF1, maF1*, and the other quantities in the previous section are obtained by substituting $\hat{p}_{ij}$ for $p_{ij}$. In the following subsections, we use the multivariate delta-method to derive the large-sample distributions of $\widehat{\text{miF1}}$, $\widehat{\text{maF1}}$, and $\widehat{\text{maF1*}}$.
3.1. Confidence interval for miF1
As shown in (1), $\text{miF1} = \sum_{i=1}^{r} p_{ii}$, and the maximum likelihood estimate (MLE) of miF1 is

$$\widehat{\text{miF1}} = \sum_{i=1}^{r} \hat{p}_{ii}.$$

Using the multivariate delta-method (Appendix A), we have

$$\sqrt{n}\,(\widehat{\text{miF1}} - \text{miF1}) \mathrel{⩪} N(0, \sigma^2_{\text{miF1}}),$$

where the variance of $\widehat{\text{miF1}}$ is

$$Var(\widehat{\text{miF1}}) = \frac{\sigma^2_{\text{miF1}}}{n} = \frac{\text{miF1}\,(1 - \text{miF1})}{n}. \tag{4}$$

A (1 − α) × 100% confidence interval of miF1 is

$$\widehat{\text{miF1}} \pm Z_{1-\alpha/2} \sqrt{\widehat{Var}(\widehat{\text{miF1}})},$$

where $\widehat{Var}(\widehat{\text{miF1}})$ is $Var(\widehat{\text{miF1}})$ with $\{p_{ii}\}$ replaced by $\{\hat{p}_{ii}\}$, and $Z_p$ denotes the 100p-th percentile of the standard normal distribution. Computation of $\widehat{Var}(\widehat{\text{miF1}})$ for our numeric example is straightforward using (4):

$$\widehat{\text{miF1}} = 0.87, \qquad \widehat{Var}(\widehat{\text{miF1}}) = \frac{0.87 \times 0.13}{100} = 0.00113,$$

and a 95% confidence interval for miF1 is

$$0.87 \pm 1.96 \times \sqrt{0.00113} = (0.804,\ 0.936).$$
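The interval is a one-liner in R; a minimal sketch continuing the example above:

```r
## 95% confidence interval for miF1 via (4)
n <- 100
miF1.v <- miF1 * (1 - miF1) / n                 ## 0.00113
miF1 + c(-1, 1) * qnorm(0.975) * sqrt(miF1.v)   ## (0.804, 0.936)
```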
3.2. Confidence interval for maF1
The MLE of maF1 is obtained by substituting $p_{ii}$, $p_{i\cdot}$, and $p_{\cdot i}$ by their MLEs in (2). Again by the multivariate delta-method (Appendix B), we have

$$\sqrt{n}\,(\widehat{\text{maF1}} - \text{maF1}) \mathrel{⩪} N(0, \sigma^2_{\text{maF1}}),$$

with the variance of $\widehat{\text{maF1}}$ given by

$$Var(\widehat{\text{maF1}}) = \frac{\sigma^2_{\text{maF1}}}{n} = \frac{2(a + b)}{n r^2},$$

where

$$a = \sum_{i=1}^{r} \frac{F1_i (1 - F1_i)(1 - F1_i/2)}{p_{i\cdot} + p_{\cdot i}}, \qquad b = \sum_{i=1}^{r} \sum_{j \neq i} \frac{p_{ij}\, F1_i\, F1_j}{(p_{i\cdot} + p_{\cdot i})(p_{j\cdot} + p_{\cdot j})}.$$

A (1 − α) × 100% confidence interval of maF1 is

$$\widehat{\text{maF1}} \pm Z_{1-\alpha/2} \sqrt{\widehat{Var}(\widehat{\text{maF1}})},$$

where $\widehat{Var}(\widehat{\text{maF1}})$ is $Var(\widehat{\text{maF1}})$ with all probabilities replaced by their MLEs. This computation is complex even for a small 3 × 3 table; the R code in Appendix D computes the variance estimate and a 95% confidence interval of maF1, which for our example are 0.00423 and (0.562, 0.817), respectively.
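For reference, a minimal R sketch of the same computation, applying the variance expression derived in Appendix B to the running example:

```r
## Delta-method variance and 95% CI for maF1 (Appendix B)
r <- 3
s <- rowSums(p) + colSums(p)
a <- sum(F1i * (1 - F1i) * (1 - F1i/2) / s)
b <- 0
for (i in 1:r) for (j in (1:r)[-i])
  b <- b + p[i, j] * F1i[i] * F1i[j] / (s[i] * s[j])
maF1.v <- 2 * (a + b) / (n * r^2)               ## 0.00423
maF1 + c(-1, 1) * qnorm(0.975) * sqrt(maF1.v)   ## (0.562, 0.817)
```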
3.3. Confidence interval for maF1*
To obtain the MLE of maF1*, we first substitute $p_{ii}$, $p_{i\cdot}$, and $p_{\cdot i}$ by their MLEs in maP and maR, and use these in (3):

$$\widehat{\text{maF1*}} = \frac{2\,\widehat{\text{maP}} \cdot \widehat{\text{maR}}}{\widehat{\text{maP}} + \widehat{\text{maR}}}.$$

As shown in Appendix C,

$$\sqrt{n}\,(\widehat{\text{maF1*}} - \text{maF1*}) \mathrel{⩪} N(0, \sigma^2_{\text{maF1*}}),$$

where

$$\sigma^2_{\text{maF1*}} = \frac{4\left(\text{maR}^4\, \sigma^2_{\text{maP}} + 2\,\text{maP}^2 \text{maR}^2\, \sigma_{\text{maP},\text{maR}} + \text{maP}^4\, \sigma^2_{\text{maR}}\right)}{(\text{maP} + \text{maR})^4},$$

with $\sigma^2_{\text{maP}}$, $\sigma^2_{\text{maR}}$, and $\sigma_{\text{maP},\text{maR}}$ given in Appendix C. A (1 − α) × 100% confidence interval of maF1* is

$$\widehat{\text{maF1*}} \pm Z_{1-\alpha/2} \sqrt{\widehat{Var}(\widehat{\text{maF1*}})}, \qquad \widehat{Var}(\widehat{\text{maF1*}}) = \frac{\hat\sigma^2_{\text{maF1*}}}{n}.$$

Again, to get $\widehat{Var}(\widehat{\text{maF1*}})$, all components of $Var(\widehat{\text{maF1*}})$ are replaced by their respective MLEs. Using the accompanying R code (Appendix D), the variance estimate and 95% confidence interval of maF1* for our example are 0.00422 and (0.563, 0.818), respectively.
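Equivalently, assuming the f1scores function of Appendix D has been sourced, all three intervals for Table 1 are obtained at once:

```r
## All three F1 scores with 95% CIs for Table 1 (requires f1scores, Appendix D)
f1scores(rbind(c(2,  2,  2),
               c(5, 70,  2),
               c(0,  2, 15)))
```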
4. Simulation
We performed a simulation study to assess the coverage probability of the confidence intervals proposed in Section 3. We set r = 3 (classes 1, 2, 3) and generated data according to multinomial distributions with the p summarized in Table 2. The total sample size, n, was set to 25, 50, 100, 500, 1,000, and 5,000. For each combination of true distribution and sample size, we generated 1,000,000 datasets, each time computing 95% confidence intervals for miF1, maF1, and maF1*.
Table 2. True cell probabilities p for the simulation scenarios (rows: predicted condition; columns: true condition).

Scenario 1

| Predicted condition | True 1 | True 2 | True 3 |
|---|---|---|---|
| 1 | 8/30 | 1/30 | 1/30 |
| 2 | 1/30 | 8/30 | 1/30 |
| 3 | 1/30 | 1/30 | 8/30 |

Scenario 2

| Predicted condition | True 1 | True 2 | True 3 |
|---|---|---|---|
| 1 | 64/100 | 3/100 | 3/100 |
| 2 | 8/100 | 4/100 | 3/100 |
| 3 | 8/100 | 3/100 | 4/100 |

Scenario 3

| Predicted condition | True 1 | True 2 | True 3 |
|---|---|---|---|
| 1 | 32/100 | 1/100 | 1/100 |
| 2 | 24/100 | 8/100 | 1/100 |
| 3 | 24/100 | 1/100 | 8/100 |
In scenario 1, the true conditions of classes 1, 2, and 3 have the same probability (1/3), and recall and precision are equal (80%). Thus miP = maP = 0.80, miR = maR = 0.80, and miF1 = maF1 = maF1* = 0.80.
In scenario 2, the true condition of class 1 has higher probability than the others (80% vs 10%), and the recall and precision of class 1 are also higher than those of the others (80% vs 40% and 91% vs 27%, respectively). miF1 gives equal weight to each per-sample classification decision, whereas maF1 gives equal weight to each class. Thus, large classes dominate small classes in computing miF1 [2], and miF1 is larger than maF1 in scenario 2 (miF1 = 0.72, maF1 = 0.50, maF1* = 0.51) because class 1 has higher probability and higher precision and recall.
In scenario 3, the true condition of class 1 has higher probability than the others (80% vs 10%). The precision of class 1 is higher than that of the others (94% vs 24%), and the recall of class 1 is lower (40% vs 80%). Compared to the other two scenarios, the diagonal entries are relatively small, which makes miF1 small (miF1 = 0.48, maF1 = 0.44, and maF1* = 0.55).
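For illustration, a minimal sketch of the coverage computation for scenario 1 (assuming the f1scores function of Appendix D has been sourced, and using far fewer replications than the 1,000,000 of the study):

```r
## Coverage of the 95% CI for miF1, scenario 1, n = 100 (requires f1scores)
set.seed(1)
p1 <- matrix(c(8, 1, 1, 1, 8, 1, 1, 1, 8) / 30, 3, 3)
true.miF1 <- sum(diag(p1))                      ## 0.80
n <- 100; B <- 10000                            ## sketch-sized replication count
cover <- replicate(B, {
  tab <- matrix(rmultinom(1, n, p1), 3, 3)
  ci  <- unlist(f1scores(tab)$Confidence.Interval["miF1", c("Lower", "Upper")])
  ci[1] <= true.miF1 && true.miF1 <= ci[2]
})
mean(cover)                                     ## close to 0.93 (Table 3)
```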
Table 3 shows the coverage probabilities of the proposed 95% confidence intervals for each scenario. The coverage probabilities for miF1, maF1, and maF1* are close to the nominal 95% when the sample size is large. When n is small, the coverage probabilities tend to fall below 95%, especially for maF1 and maF1*. Moreover, computing a confidence interval for maF1* is often impossible for small n, because maF1* is undefined when either $\hat{p}_{i\cdot} = 0$ or $\hat{p}_{\cdot j} = 0$ for any i or j. In typical applications where these F scores are computed, n is large, and the small-n problem is unlikely to occur.
Table 3. Coverage probabilities of the proposed 95% confidence intervals (S1 = Scenario 1, S2 = Scenario 2, S3 = Scenario 3).

| n | miF1 (S1) | maF1 (S1) | maF1* (S1) | miF1 (S2) | maF1 (S2) | maF1* (S2) | miF1 (S3) | maF1 (S3) | maF1* (S3) |
|---|---|---|---|---|---|---|---|---|---|
| 25 | 0.885 | 0.901 | 0.890 | 0.921 | 0.790 | 0.774 | 0.930 | 0.870 | 0.821 |
| 50 | 0.937 | 0.935 | 0.923 | 0.941 | 0.864 | 0.853 | 0.935 | 0.918 | 0.905 |
| 100 | 0.933 | 0.938 | 0.936 | 0.937 | 0.914 | 0.914 | 0.943 | 0.936 | 0.933 |
| 500 | 0.949 | 0.949 | 0.948 | 0.947 | 0.944 | 0.945 | 0.946 | 0.947 | 0.947 |
| 1000 | 0.946 | 0.948 | 0.948 | 0.947 | 0.947 | 0.947 | 0.947 | 0.949 | 0.947 |
| 5000 | 0.950 | 0.950 | 0.950 | 0.951 | 0.949 | 0.949 | 0.951 | 0.950 | 0.950 |
5. Example
As an example, we applied our method to the temporal sleep stage classification data provided by Dong et al. [5]. They proposed a new approach based on a Mixed Neural Network (MNN) to classify sleep into five stages: one awake stage (W), three sleep stages (N1, N2, N3), and one rapid eye movement stage (REM). In addition to the MNN, they evaluated three other classifiers: Support Vector Machine (SVM), Random Forest (RF), and Multilayer Perceptron (MLP). The data came from 62 healthy subjects, and classification by a single sleep expert was used as the gold standard. The staging is based on a 30-second window of physiological signals called an EEG (electroencephalography) epoch, so each subject contributes a large number of epochs to be classified; the total number of epochs depends on the classifier and is about 59,000. Performance of each classifier was evaluated using maF1 along with precision, recall, and overall accuracy, and the authors concluded that the MNN outperformed its competitors by comparing point estimates of maF1 and overall accuracy. We provide 95% confidence intervals for miF1, maF1, and maF1* for each of the four methods, as summarized in Table 4. Because n is large in this example, the confidence intervals are narrow, and the intervals for the MNN do not overlap with those for the other three methods, providing further evidence that the MNN is superior.
Table 4. Point estimates and 95% confidence intervals of miF1, maF1, and maF1* for the four classifiers of Dong et al. [5].

| Method | n | miF1 | 95% CI | maF1 | 95% CI | maF1* | 95% CI |
|---|---|---|---|---|---|---|---|
| MNN | 59,066 | 0.859 | (0.856, 0.862) | 0.805 | (0.801, 0.809) | 0.807 | (0.803, 0.811) |
| SVM | 59,255 | 0.797 | (0.794, 0.800) | 0.750 | (0.746, 0.754) | 0.756 | (0.752, 0.760) |
| RF | 59,193 | 0.817 | (0.814, 0.820) | 0.724 | (0.720, 0.729) | 0.746 | (0.741, 0.750) |
| MLP | 59,130 | 0.814 | (0.811, 0.817) | 0.772 | (0.768, 0.776) | 0.778 | (0.774, 0.782) |
6. Discussion
We derived large-sample variance estimates of miF1, maF1, and maF1* in terms of the observed cell probabilities and the sample size, which enabled us to derive large-sample confidence intervals.
Coverage probabilities of the proposed confidence intervals were assessed through the simulation study. When n was larger than 100, the coverage probability was close to the nominal level; for n < 100, the coverage probabilities tended to be smaller than the target. Moreover, with an extremely small sample size, maF1* could not be estimated, as its computation requires all margins to be non-zero. Zhang et al. [17] considered interval estimation of miF1 and maF1 and proposed the highest density interval within a Bayesian framework. In contrast, we have proposed confidence intervals for miF1, maF1, and maF1* within a frequentist framework using a large-sample approximation.
An inherent drawback of the multi-class F1 scores is that they do not summarize the data appropriately when large variability exists between classes. This was demonstrated by the numeric example in Section 2, for which the within-class F1 values are 0.308, 0.927, and 0.833, while miF1, maF1, and maF1* are 0.870, 0.689, and 0.691, respectively. Reporting the within-class F1 scores may be an option, as done in [18] and [19]; however, an aggregate measure is useful for evaluating the overall performance of a classifier across classes. Another limitation of F1 scores is that they do not take the true negative rate into consideration, so they may not be an appropriate measure when true negatives are important.
As future work, we are developing hypothesis testing procedures for miF1, maF1, and maF1* based on the variance estimates proposed in this article.
R code for computing confidence intervals for miF1, maF1, and maF1* is presented in Appendix D.
Funding
This research was partially supported by Grant-in-Aid for Young Scientists No. 18K17325 (Takahashi), Grant-in-Aid for Scientific Research (C) No. 18K11195 (Yamamoto), and P30 CA068485 Cancer Center Support Grant (Koyama).
Biographies
Kanae Takahashi is currently an Assistant Professor at Hyogo College of Medicine, Japan. She received the BS degree from Osaka University in 2008 and MPH degree from Kyoto University in 2010. Her research interests include design of clinical trials and diagnostic study.
Kouji Yamamoto is currently an Associate Professor at Yokohama City University, School of Medicine, Japan. He received the PhD in statistics from Tokyo University of Science in 2009. His research interests include design of clinical trials, diagnostic study, and categorical data analysis.
Aya Kuchiba is an Associate Professor at Graduate School of Health Innovation, Kanagawa University of Human Services, Japan. She received her PhD in Health Sciences (Biostatistics & Epidemiology) from University of Tokyo in 2008. Prior to joining Kanagawa University of Human Services, she was a Section Head of the Biostatistics Division at the National Cancer Center, Japan. Her research interest has focused on developing and applying statistical methods to cancer research in the areas of epidemiology with molecular and genetic data, diagnostic testing, and prevention, and in conducting clinical trials.
Tatsuki Koyama is an Associate Professor of Biostatistics at Vanderbilt University Medical Center. He received his PhD in statistics from University of Pittsburgh in 2003. His research interests are primarily centered on flexible experimental designs for clinical trials and inference from the data arising from such flexible and adaptive designs both in the Frequentist and Bayesian paradigms. His medical research interests include comparative effectiveness of treatments for localized prostate cancer, and association of ambient air pollution and acute lung injury.
Appendix A: Derivation of the distribution and variance of $\widehat{\text{miF1}}$
Let p be the ordered elements of the confusion matrix, $\mathbf{p} = (p_{11}, \cdots, p_{1r}, p_{21}, \cdots, p_{2r}, \cdots, p_{r1}, \cdots, p_{rr})^T$. Using the multivariate delta-method for $\text{miF1}(\mathbf{p}) = \sum_{i=1}^{r} p_{ii}$, we get

$$\sqrt{n}\,(\widehat{\text{miF1}} - \text{miF1}) \mathrel{⩪} N\!\left(0,\ \nabla^T \left(\mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T\right) \nabla\right), \tag{5}$$

where $\nabla = \partial\,\text{miF1}/\partial\mathbf{p}$. Because $\text{miF1} = \sum_{i=1}^{r} p_{ii}$, we have

$$\frac{\partial\,\text{miF1}}{\partial p_{kl}} = \begin{cases} 1 & (k = l) \\ 0 & (k \neq l). \end{cases}$$

Note that all the elements of ∇ corresponding to the diagonal entries ($p_{ii}$) of the confusion matrix are 1 and the rest are 0. To evaluate the variance in (5), further note that

$$\nabla^T \mathrm{diag}(\mathbf{p})\, \nabla = \sum_{i=1}^{r} p_{ii} = \text{miF1}, \qquad \nabla^T \mathbf{p}\mathbf{p}^T \nabla = \left(\sum_{i=1}^{r} p_{ii}\right)^2 = \text{miF1}^2.$$

Thus,

$$\sigma^2_{\text{miF1}} = \text{miF1}\,(1 - \text{miF1}).$$

Finally,

$$\widehat{\text{miF1}} \mathrel{⩪} N\!\left(\text{miF1},\ \frac{\text{miF1}(1 - \text{miF1})}{n}\right), \quad \text{and} \quad Var(\widehat{\text{miF1}}) = \frac{\text{miF1}(1 - \text{miF1})}{n}.$$
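As an informal numerical check of (5) (not part of the derivation above), the analytic variance can be compared with the empirical variance of $\widehat{\text{miF1}}$ across simulated tables; a minimal R sketch, taking Table 1's proportions as the true p:

```r
## Monte Carlo check of Var(miF1-hat) = miF1 (1 - miF1) / n
set.seed(1)
p <- rbind(c(2, 2, 2), c(5, 70, 2), c(0, 2, 15)) / 100
n <- 100
miF1.hat <- replicate(1e5, sum(diag(matrix(rmultinom(1, n, p), 3, 3))) / n)
var(miF1.hat)                           ## empirical, close to the analytic value
sum(diag(p)) * (1 - sum(diag(p))) / n   ## analytic: 0.001131
```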
Appendix B: Derivation of the distribution and variance of $\widehat{\text{maF1}}$
In a similar manner to Appendix A, using the multivariate delta-method, we get

$$\sqrt{n}\,(\widehat{\text{maF1}} - \text{maF1}) \mathrel{⩪} N\!\left(0,\ \nabla^T \left(\mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^T\right) \nabla\right), \qquad \nabla = \frac{\partial\,\text{maF1}}{\partial\mathbf{p}}.$$

Now we take the partial derivatives of (2) to get, writing $s_i = p_{i\cdot} + p_{\cdot i}$,

$$\frac{\partial\,\text{maF1}}{\partial p_{ii}} = \frac{2}{r} \cdot \frac{s_i - 2 p_{ii}}{s_i^2}, \qquad \frac{\partial\,\text{maF1}}{\partial p_{ij}} = -\frac{2}{r} \left(\frac{p_{ii}}{s_i^2} + \frac{p_{jj}}{s_j^2}\right) \quad (i \neq j).$$

Arranging these terms according to the order of the elements in p gives ∇. Next, we note

$$\mathbf{p}^T \nabla = \sum_{k} \sum_{l} p_{kl}\, \frac{\partial\,\text{maF1}}{\partial p_{kl}} = 0,$$

because

$$\sum_{i} p_{ii} \frac{s_i - 2 p_{ii}}{s_i^2} - \sum_{i} \frac{p_{ii}}{s_i^2}\,(p_{i\cdot} - p_{ii}) - \sum_{j} \frac{p_{jj}}{s_j^2}\,(p_{\cdot j} - p_{jj}) = \sum_{i} \frac{p_{ii}}{s_i^2}\,(s_i - 2p_{ii} - p_{i\cdot} + p_{ii} - p_{\cdot i} + p_{ii}) = 0.$$

Therefore,

$$\sigma^2_{\text{maF1}} = \nabla^T \mathrm{diag}(\mathbf{p})\, \nabla = \sum_{k} \sum_{l} p_{kl} \left(\frac{\partial\,\text{maF1}}{\partial p_{kl}}\right)^2,$$

which can be shown to equal $\frac{2}{r^2}(a + b)$. Putting all together, we have

$$\sqrt{n}\,(\widehat{\text{maF1}} - \text{maF1}) \mathrel{⩪} N\!\left(0,\ \frac{2(a + b)}{r^2}\right),$$

where

$$a = \sum_{i=1}^{r} \frac{F1_i (1 - F1_i)(1 - F1_i/2)}{s_i}, \qquad b = \sum_{i=1}^{r} \sum_{j \neq i} \frac{p_{ij}\, F1_i\, F1_j}{s_i s_j}.$$
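The same Monte Carlo device gives an informal check of $\sigma^2_{\text{maF1}}$; a minimal sketch under Table 1's proportions:

```r
## Monte Carlo check of the delta-method variance of maF1-hat
set.seed(1)
p <- rbind(c(2, 2, 2), c(5, 70, 2), c(0, 2, 15)) / 100
n <- 1000            ## larger n: sharper large-sample approximation
maF1.hat <- replicate(1e5, {
  ph <- matrix(rmultinom(1, n, p), 3, 3) / n
  mean(2 * diag(ph) / (rowSums(ph) + colSums(ph)))
})
var(maF1.hat)        ## empirical; analytic 2(a+b)/(n r^2) is 0.000423 here
```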
Appendix C: Derivation of the distribution and variance of $\widehat{\text{maF1*}}$
For the macro-averaged precision (maP) and macro-averaged recall (maR), let the vector m and its MLE be

$$\mathbf{m} = (\text{maP},\ \text{maR})^T, \qquad \hat{\mathbf{m}} = (\widehat{\text{maP}},\ \widehat{\text{maR}})^T,$$

respectively. Using the multivariate delta-method, we have

$$\sqrt{n}\,(\hat{\mathbf{m}} - \mathbf{m}) \mathrel{⩪} N_2(\mathbf{0},\ \Sigma),$$

where Σ is the 2 × 2 matrix

$$\Sigma = \begin{pmatrix} \sigma^2_{\text{maP}} & \sigma_{\text{maP},\text{maR}} \\ \sigma_{\text{maP},\text{maR}} & \sigma^2_{\text{maR}} \end{pmatrix},$$

with

$$\sigma^2_{\text{maP}} = \frac{1}{r^2} \sum_{i=1}^{r} \frac{p_{ii}(p_{i\cdot} - p_{ii})}{p_{i\cdot}^3}, \qquad \sigma^2_{\text{maR}} = \frac{1}{r^2} \sum_{i=1}^{r} \frac{p_{ii}(p_{\cdot i} - p_{ii})}{p_{\cdot i}^3},$$

$$\sigma_{\text{maP},\text{maR}} = \frac{1}{r^2} \left\{ \sum_{i=1}^{r} \frac{(p_{i\cdot} - p_{ii})\, p_{ii}\, (p_{\cdot i} - p_{ii})}{p_{i\cdot}^2\, p_{\cdot i}^2} + \sum_{i=1}^{r} \sum_{j \neq i} \frac{p_{ii}\, p_{ij}\, p_{jj}}{p_{i\cdot}^2\, p_{\cdot j}^2} \right\}.$$

Using the multivariate delta-method again with $\text{maF1*} = 2\,\text{maP}\cdot\text{maR}/(\text{maP} + \text{maR})$, we get

$$\sqrt{n}\,(\widehat{\text{maF1*}} - \text{maF1*}) \mathrel{⩪} N(0,\ \sigma^2_{\text{maF1*}}),$$

where

$$\frac{\partial\,\text{maF1*}}{\partial\,\text{maP}} = \frac{2\,\text{maR}^2}{(\text{maP} + \text{maR})^2}, \qquad \frac{\partial\,\text{maF1*}}{\partial\,\text{maR}} = \frac{2\,\text{maP}^2}{(\text{maP} + \text{maR})^2}.$$

Using these and Σ above, we obtain

$$\sigma^2_{\text{maF1*}} = \frac{4\left(\text{maR}^4\, \sigma^2_{\text{maP}} + 2\,\text{maP}^2 \text{maR}^2\, \sigma_{\text{maP},\text{maR}} + \text{maP}^4\, \sigma^2_{\text{maR}}\right)}{(\text{maP} + \text{maR})^4},$$

and finally,

$$Var(\widehat{\text{maF1*}}) = \frac{\sigma^2_{\text{maF1*}}}{n}.$$
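Likewise for $\sigma^2_{\text{maF1*}}$; a minimal sketch under Table 1's proportions (the analytic value for this table is about 0.00042 at n = 1000):

```r
## Monte Carlo check of the delta-method variance of maF1*-hat
set.seed(1)
p <- rbind(c(2, 2, 2), c(5, 70, 2), c(0, 2, 15)) / 100
n <- 1000
maF2.hat <- replicate(1e5, {
  ph  <- matrix(rmultinom(1, n, p), 3, 3) / n
  maP <- mean(diag(ph) / rowSums(ph))
  maR <- mean(diag(ph) / colSums(ph))
  2 * maP * maR / (maP + maR)
})
var(maF2.hat)        ## empirical; compare with the analytic value above
```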
Appendix D: R code
The following R code computes point estimates and confidence intervals for miF1, maF1, and maF1*.
```r
## Takahashi et al. ##
## Computation of F1 score and its confidence interval ##

f1scores <- function(mat, conf.level=0.95){
  ## This function computes point estimates and (conf.level*100%) confidence
  ## intervals for microF1, macroF1, and macroF1* scores.
  ## mat is an r by r matrix (confusion matrix).
  ## Rows indicate the predicted (fitted) conditions,
  ## and columns indicate the truth.
  ## miF1 is micro F1
  ## maF1 is macro F1
  ## maF2 is macro F1* (Sokolova and Lapalme)

  ## ###### ##
  ## Set up ##
  ## ###### ##
  r   <- ncol(mat)
  n   <- sum(mat)   ## Total sample size
  p   <- mat/n      ## probabilities
  pii <- diag(p)
  pi. <- rowSums(p)
  p.i <- colSums(p)

  ## ############### ##
  ## Point estimates ##
  ## ############### ##
  miP  <- miR <- sum(pii)         ## MICRO precision, recall
  miF1 <- miP                     ## MICRO F1
  F1i  <- 2*pii/(pi.+p.i)
  maF1 <- sum(F1i)/r              ## MACRO F1
  maP  <- sum(pii/pi.)/r          ## MACRO precision
  maR  <- sum(pii/p.i)/r          ## MACRO recall
  maF2 <- 2*(maP*maR)/(maP+maR)   ## MACRO F1*

  ## ################## ##
  ## Variance estimates ##
  ## ################## ##

  ## ----------------- ##
  ## MICRO F1 Variance ##
  ## ----------------- ##
  miF1.v <- sum(pii)*(1-sum(pii))/n
  miF1.s <- sqrt(miF1.v)

  ## ----------------- ##
  ## MACRO F1 Variance ##
  ## ----------------- ##
  ## a: diagonal term of the delta-method variance (Appendix B)
  a <- sum(F1i*(1-F1i)*(1-F1i/2)/(pi.+p.i))
  ## b: cross terms of the delta-method variance (Appendix B)
  b <- 0
  for(i in 1:r){
    jj <- (1:r)[-i]
    for(j in jj){
      b <- b + p[i,j]*F1i[i]*F1i[j]/((pi.[i]+p.i[i])*(pi.[j]+p.i[j]))
    }
  }
  maF1.v <- 2*(a+b)/(n*r^2)
  maF1.s <- sqrt(maF1.v)

  ## ------------------ ##
  ## MACRO F1* Variance ##
  ## ------------------ ##
  varmap  <- sum(pii*(pi.-pii)/pi.^3) / r^2 / n
  varmar  <- sum(pii*(p.i-pii)/p.i^3) / r^2 / n
  covmpr1 <- sum( ((pi.-pii) * pii * (p.i-pii)) / (pi.^2 * p.i^2) )
  covmpr2 <- 0
  for(i in 1:r){
    covmpr2 <- covmpr2 + sum(pii[i] * p[i,-i] * pii[-i] / pi.[i]^2 / p.i[-i]^2)
  }
  covmpr <- (covmpr1+covmpr2) / r^2 / n
  maF2.v <- 4 * (maR^4*varmap + 2*maP^2*maR^2*covmpr + maP^4*varmar) / (maP+maR)^4
  maF2.s <- sqrt(maF2.v)

  ## #################### ##
  ## Confidence intervals ##
  ## #################### ##
  z <- qnorm(1-(1-conf.level)/2)
  miF1.ci <- miF1 + c(-1,1)*z*miF1.s
  maF1.ci <- maF1 + c(-1,1)*z*maF1.s
  maF2.ci <- maF2 + c(-1,1)*z*maF2.s

  ## ################# ##
  ## Formatting output ##
  ## ################# ##
  pr <- data.frame(microPrecision=miP, microRecall=miR,
                   macroPrecision=maP, macroRecall=maR)
  fss <- data.frame(rbind(miF1      = c(miF1, miF1.s, miF1.ci),
                          maF1      = c(maF1, maF1.s, maF1.ci),
                          maF1.star = c(maF2, maF2.s, maF2.ci)))
  names(fss) <- c('PointEst','Sd','Lower','Upper')
  out <- list(pr, fss)
  names(out) <- c('Precision.and.Recall', 'Confidence.Interval')
  out
}

## Example ##
## Table V from Dong et al. (2017) PMID: 28767373
mnn <- cbind(c(5022,  577,   188,   19,  395),
             c( 407, 2468,   989,    4,  965),
             c( 130,  630, 27254, 1021,  763),
             c(  13,    0,  1236, 6399,    5),
             c( 103,  258,   609,    0, 9611))
f1scores(mnn)
## End ##
```
Footnotes
Code Availability The R code for computing point estimates and confidence intervals for miF1, maF1, and maF1* is available in Appendix D.
Conflict of Interests None.
References
1. van Rijsbergen CJ (1979) Information retrieval. Butterworths, Oxford
2. Manning CD, Raghavan P, Schütze H (2008) Introduction to information retrieval. Cambridge University Press, Cambridge
3. Sokolova M, Lapalme G (2009) A systematic analysis of performance measures for classification tasks. Inf Process Manage 45:427–437
4. Wang Y, Li J, Li Y, Wang R, Yang X (2015) Confidence interval for F1 measure of algorithm performance based on blocked 3 × 2 cross-validation. IEEE Trans Knowl Data Eng 27:651–659
5. Dong H, Supratak A, Pan W, Wu C, Matthews PM, Guo Y (2018) Mixed neural network approach for temporal sleep stage classification. IEEE Trans Neural Syst Rehabil Eng 26(2):324–333
6. Wang J, Zhang J, An Y, Lin H, Yang Z, Zhang Y, Sun Y (2016) Biomedical event trigger detection by dependency-based word embedding. BMC Med Genomics 9(Suppl 2):45
7. Socoró JC, Alías F, Alsina-Pagès RM (2017) An anomalous noise events detector for dynamic road traffic noise mapping in real-life urban and suburban environments. Sensors (Basel) 17(10)
8. Chowdhury S, Dong X, Qian L, Li X, Guan Y, Yang J, Yu Q (2018) A multitask bi-directional RNN model for named entity recognition on Chinese electronic medical records. BMC Bioinforma 19(Suppl 17):499
9. Troya-Galvis A, Gançarski P, Berti-Équille L (2018) Remote sensing image analysis by aggregation of segmentation-classification collaborative agents. Pattern Recognit 73:259–274
10. Hong N, Wen A, Stone DJ, Tsuji S, Kingsbury PR, Rasmussen LV, Pacheco JA, Adekkanattu P, Wang F, Luo Y, Pathak J, Liu H, Jiang G (2019) Developing a FHIR-based EHR phenotyping framework: A case study for identification of patients with obesity and multiple comorbidities from discharge summaries. J Biomed Inform 99:103310
11. Li L, Zhong B, Hutmacher C, Liang Y, Horrey WJ, Xu X (2020) Detection of driver manual distraction via image-based hand and ear recognition. Accid Anal Prev 137:105432
12. Zhou H, Ma Y, Li X (2020) Feature selection based on term frequency deviation rate for text classification. Appl Intell
13. Rashid MM, Kamruzzaman J, Hassan MM, Imam T, Gordon S (2020) Cyberattacks detection in IoT-based smart city applications using machine learning techniques. Int J Environ Res Public Health 17(24)
14. Wang SH, Nayak DR, Guttery DS, Zhang X, Zhang YD (2021) COVID-19 classification by CCSHNet with deep fusion using transfer learning and discriminant correlation analysis. Inf Fusion 68:131–148
15. Hao J, Yue K, Zhang B, Duan L, Fu X (2021) Transfer learning of Bayesian network for measuring QoS of virtual machines. Appl Intell
16. Li J, Lin M (2021) Ensemble learning with diversified base models for fault diagnosis in nuclear power plants. Ann Nucl Energy 158:108265
17. Zhang D, Wang J, Zhao X (2015) Estimating the uncertainty of average F1 scores. In: Proceedings of the 2015 International Conference on the Theory of Information Retrieval
18. Zhu F, Li X, Mcgonigle D, Tang H, He Z, Zhang C, Hung GU, Chiu PY, Zhou W (2020) Analyze informant-based questionnaire for the early diagnosis of senile dementia using deep learning. IEEE J Transl Eng Health Med 8:2200106
19. Bhalla S, Kaur H, Kaur R, Sharma S, Raghava GPS (2020) Expression based biomarkers and models to classify early and late-stage samples of papillary thyroid carcinoma. PLoS One 15(4):e0231629