Abstract
With the advance of liquid biopsy technology, there is increasing evidence that body fluids such as blood, urine, and saliva can harbor potential biomarkers associated with tumor origin. Traditional correlation analysis methods are no longer sufficient to capture the high-resolution, complex relationships between biomarkers and cancer subtype heterogeneity. To address this challenge, researchers have applied machine learning techniques to liquid biopsy data to explore the origins of tumors. In this survey, we review machine learning protocols and provide corresponding code demos for the approaches mentioned. We discuss the algorithmic principles and frameworks developed to reveal cancer mechanisms and consider future prospects in biomarker exploration and cancer diagnostics.
Keywords: machine learning, early cancer detection, liquid biopsy
1. Introduction
When cells mutate, they can divide uncontrollably and eventually form cancer [1]. According to the World Health Organization, cancer accounted for nearly 10 million deaths in 2020. Unfortunately, this burden is estimated to keep climbing in the following decades, reaching 27 million new cases in 2040 [2]. As the second leading cause of death, cancer accounts for one-sixth of deaths worldwide each year [3]. Therefore, fighting cancer is a huge challenge for global public health. Early detection, followed by tailored site-specific treatment, plays an important role in the front-line cure of cancer and can reduce the eventual mortality of cancer patients [4,5,6].
Cancer is associated with mutated genes, and genetic analysis is increasingly applied in cancer diagnosis [7]. The traditional method of genetic testing for cancer patients is to sample tumor tissue. However, tumor tissue biopsy is limited by several drawbacks such as invasive acquisition, clinical complications, sample preservation, and tumor heterogeneity [8,9,10].
Liquid biopsy [7,11], which surmounts the limitations of tissue biopsy, is regarded as a potential tool for early cancer detection and monitoring [12]. By sampling blood, stool, urine, saliva, and other fluids, liquid biopsy provides a non-invasive and feasible cancer detection service [13,14,15,16]. Compared with tissue biopsy, liquid biopsy also evaluates tumor heterogeneity more comprehensively, since tumor sites release aberrant signals into body fluids [17,18]. Researchers have therefore paid significant attention to the different components of liquid biopsies that are associated with cancers [19,20,21,22,23].
As the presence or severity of a tumor in the body is reflected in these liquid biopsy components, accurate cancer prediction based on their characteristics becomes a significant problem. The application of machine learning protocols has been widely studied in recent years, proving to be valuable in early cancer detection. Nevertheless, the expertise required to implement these methods is substantial, posing an obstacle to researchers looking to get started on liquid biopsy analysis and early cancer detection. Therefore, this review not only covers the published research on machine learning in early cancer detection but also demonstrates the entire implementation procedure.
The rest of this review is organized as follows. Section 2 introduces the procedures for implementing machine learning, including data preprocessing, model selection, model evaluation, and hypothesis testing. Section 3 summarizes the main liquid biopsy components associated with cancer. Section 4 is an overview of the most widely used machine learning algorithms and the relevant literature with corresponding datasets. Section 5 is the discussion on this topic. For all machine learning protocols and algorithms, we provide code demos as a tutorial available at https://github.com/ElaineLIU-920/Code-Deme-for-ML-procedures-and-algorithms (accessed on 10 June 2021).
2. Machine Learning Related Procedures
The datasets of cancer liquid biopsies are large and complex, making them difficult to analyze with traditional methods. Machine learning algorithms, as a potential tool, can automatically identify regularities in data and then predict future data based on the obtained experience. In machine learning terms, the detection of cancer is regarded as a supervised problem, specifically a classification task. In this section, we focus on supervised machine learning protocols and the preparatory work required before implementing them. The section is organized according to a typical workflow for supervised machine learning. First, we discuss techniques for data preprocessing. We then discuss model evaluation and selection methods, including the performance metrics for supervised learning. Finally, we introduce hypothesis testing to establish statistical significance.
2.1. Data Preprocessing
Data preprocessing is a fundamental step of machine learning implementation and has been shown to have a significant influence on the performance of machine learning models [24,25]. It consists of missing-value handling, normalization, dimension reduction, and feature construction. As future data are unknown in practice, we suggest fitting all data preprocessing methods on the training data only.
2.1.1. Missing Value
Missing values are unavoidable in real datasets and may create an obstacle for predictors. An inappropriate handling strategy will easily result in extracting poor knowledge and making wrong predictions [26].
The first option to deal with this problem is to delete samples with missing values [27,28,29], which may result in discarding a large number of samples and increasing prediction bias [30]. Alternatively, the missing value can be filled with the mean, the mode, or a random value [25]. Moreover, model-based methods can be employed to predict the missing value [30]. Model-based methods neither delete missing-value samples nor fill values by simple imputation; instead, they build a model for the missing feature based on inferences from the existing complete data.
Model-based methods consist of two steps: (1) Build a regression or classification model based on complete samples for the feature which is corresponding to the missing values; (2) Predict on the incomplete samples with its existed feature as input, and then the output is an estimate of missing value [31].
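As a minimal sketch of these two options (assuming scikit-learn and synthetic data, not the exact code of our repository), the example below contrasts simple mean imputation with model-based imputation, in which each incomplete feature is regressed on the others and the missing entries are predicted:

```python
# Simple vs. model-based imputation on synthetic data (illustrative only).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[rng.random(X.shape) < 0.1] = np.nan   # introduce ~10% missing values

# Simple imputation: replace each missing entry with the feature mean.
X_mean = SimpleImputer(strategy="mean").fit_transform(X)

# Model-based imputation: (1) regress each incomplete feature on the
# others using the complete entries, then (2) predict the missing values.
X_model = IterativeImputer(random_state=0).fit_transform(X)
```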
2.1.2. Normalization
The main advantage of normalization is that it prevents predictions in later stages from being dominated by relatively large or small values in the dataset. Normalization is also significant for ensuring comparability across different samples. In this section, we introduce three commonly used normalization methods, namely Z-score standardization, max-min normalization, and decimal scaling [32].
- Z-score standardization. In Formula (1), A is a feature (attribute), $v$ is the original value of feature A, $v'$ is the normalized value, $\mu_A$ is the mean of feature A, and $\sigma_A$ is the standard deviation of feature A.

$$v' = \frac{v - \mu_A}{\sigma_A} \quad (1)$$

- Max-min normalization. Max-min normalization, also called deviation standardization, is given by Formula (2), where $\min_A$ is the minimum of feature A and $\max_A$ is the maximum of feature A.

$$v' = \frac{v - \min_A}{\max_A - \min_A} \quad (2)$$

- Decimal scaling. This method moves the decimal point position according to the absolute maximum of feature A. In Formula (3), j is the smallest integer such that all $|v'|$ are less than 1.

$$v' = \frac{v}{10^{j}} \quad (3)$$
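The following sketch computes the three normalizations of Formulas (1)-(3) per feature (column) of a NumPy array; the data are illustrative:

```python
import numpy as np

X = np.array([[200.0, 0.5], [300.0, 0.1], [-100.0, 0.8]])

# (1) Z-score standardization.
z_score = (X - X.mean(axis=0)) / X.std(axis=0)

# (2) Max-min normalization to [0, 1].
max_min = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# (3) Decimal scaling: j is the smallest integer with all |v'| < 1.
j = np.ceil(np.log10(np.abs(X).max(axis=0)))
decimal = X / 10.0 ** j
```

In line with Section 2.1, these statistics (mean, standard deviation, minimum, maximum) should be computed on the training set only and then reused to transform the test set.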
2.1.3. Dimension Reduction
A feature is an observation of a sample and is synonymous with input variable or attribute. The dimension of a dataset is the number of variables measured on each sample, equal to the number of features. Owing to the development of detection technology, available samples have grown explosively in dimension. When machine learning algorithms are applied to such high-dimensional data [33,34], the curse of dimensionality becomes a crucial issue to resolve, which is especially severe in bioinformatics [35,36].
One problem with high-dimensional datasets is that some algorithms tend to perform poorly on them, as not all features are valuable for prediction. In many cases, a large proportion of the features are irrelevant or redundant for the learning task, resulting in overfitting [32]. In addition, high-dimensional data increase computation time and memory requirements. Moreover, if the dimension of the data is very high, visualization becomes quite difficult.
Feature extraction (also known as feature transformation, feature projection or dimension reduction specifically) and feature selection are two dimension reduction techniques [37] to solve these problems. The choice of feature extraction or feature selection depends on different data types and applications. We will next briefly introduce some typical approaches for dimension reduction.
A. Feature Extraction
Feature extraction develops a transformation from the original high-dimensional feature space into a new low-dimensional space. The essence of feature extraction is to learn a mapping function $f: X \to Y$, where X is the original data and Y is a low-dimensional vector representation after mapping. Linear and non-linear mappings are the two main types of feature extraction [38]. Linear mapping is mainly represented by principal component analysis (PCA) [39,40], linear discriminant analysis (LDA) [41], and non-negative matrix factorization (NMF) [42], while non-linear mapping is mainly represented by locally linear embedding (LLE) [43] and Isomap [44].
The advantage of feature extraction is that it decreases the feature dimension through data transformation, yielding a lower-dimensional feature space with little loss of information. However, precisely because the new space is obtained from a linear or non-linear transformation of the original space, the new features lose interpretability.
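A minimal sketch of linear feature extraction, assuming scikit-learn and synthetic data, maps 50-dimensional inputs onto their first two principal components:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

pca = PCA(n_components=2).fit(X)  # learn the mapping f on training data
Y = pca.transform(X)              # low-dimensional representation
print(Y.shape, pca.explained_variance_ratio_)
```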
B. Feature Selection
Different from feature extraction, feature selection directly selects a valuable subset of features and removes noisy, redundant, or irrelevant features from the original dataset, so that the subset contains only the information important for solving the problem [45,46,47]. Based on how the feature selection strategy is combined with machine learning models, feature selection techniques are categorized into three types: filter methods, wrapper methods, and embedded methods [48].
Filter methods, independent of any learning model, assess the importance of features based on the statistical and intrinsic properties of the original dataset. In this setup, importance ranking is adopted as the principal criterion for feature selection. By reserving high-scoring features and removing low-scoring features, a lower-dimensional subset of features is obtained. Many filter-type methods have been studied, including the Pearson correlation coefficient [49], F-statistic [50], chi-squared statistic [51], and mutual information [52].
Wrapper methods adopt different search algorithms to generate the subsets of features. Subsequently, a specific subset is evaluated by training and testing the performance of the classification model, which is wrapped in the search algorithm. The whole process works iteratively until the highest learning performance is achieved or the desired number of selected features is obtained. A wide range of search strategies can be used, including Sequential Selection Algorithms, Recursive Feature Elimination, and Meta-heuristic Algorithms (e.g., genetic algorithm) [53,54].
Embedded methods explore the optimal subset of features during the process of constructing a learning model. Similar to the wrapper methods, the embedded methods are specific to the adopted machine learning algorithm. Least absolute shrinkage and selection operator, Elastic net and Ridge regression are three typical regularization algorithms [55,56].
A detailed comparison of these three pathways to implementing feature selection is given in [48,57]. As feature selection merely explores a valuable subset of the original features, it retains the semantics of the original features, which gives it the advantage of interpretable analysis. However, some information may be lost when employing feature selection, as only a subset is reserved and some features are omitted.
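As a sketch (with scikit-learn on a synthetic task), one representative from each family above can be run as follows: a filter (mutual information ranking), a wrapper (recursive feature elimination), and an embedded method (L1 regularization):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# Filter: rank features by mutual information with the label, keep top 5.
X_filter = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)

# Wrapper: recursive feature elimination around a wrapped classifier.
X_wrap = RFE(LogisticRegression(max_iter=1000),
             n_features_to_select=5).fit_transform(X, y)

# Embedded: an L1-penalized model shrinks unhelpful coefficients to zero.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print(X_filter.shape, X_wrap.shape, int(np.sum(l1.coef_ != 0)))
```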
Feature extraction, as well as feature selection, has the ability to improve model performance, computational efficiency, utilization of memory storage, and data visualization. Therefore, both of these two methods are employed as effective dimension reduction techniques, used alone or in combination.
2.1.4. Feature Construction
Feature construction is also known as attribute generation. Different from dimension reduction, in some cases the features may be insufficient to describe the problem for learning models; feature construction is therefore adopted to enrich the data. According to the definition taken from Motoda and Liu [58], feature construction aims to discover hidden relationships among the original features by constructing new high-level features. Similar to feature selection, the process of constructing features can also be categorized into three classes: filter methods, wrapper methods, and embedded methods [59,60]. For numerical features, simple algebraic operators such as addition, subtraction, multiplication, and division are often used to compound features, as sketched below.
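A minimal sketch of algebraic feature construction with pandas; the marker column names are hypothetical placeholders:

```python
import pandas as pd

df = pd.DataFrame({"marker_a": [1.2, 3.4, 0.8],
                   "marker_b": [0.4, 1.1, 0.9]})

# Compound new high-level features from pairs of original features.
df["sum_ab"] = df["marker_a"] + df["marker_b"]
df["prod_ab"] = df["marker_a"] * df["marker_b"]
df["ratio_ab"] = df["marker_a"] / df["marker_b"]
print(df.head())
```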
2.2. Model Evaluation
Model evaluation is the process of assessing the performance of models on future data [61]. Put simply, it aims to evaluate how well the built model generalizes by estimating its error on unseen data. A good machine learning model should perform well not only on the training data but also on future data. Therefore, before deploying a model to production, we should be fairly sure that its performance will not decline when confronted with new data. For most practical applications, the true performance of the model cannot be calculated, as we do not have real future data. Hence, it is important to use new data for model evaluation to reduce the likelihood of overfitting to the training set. Holdout, bootstrap, and cross-validation are the most commonly used methods for model evaluation [62,63,64].
2.2.1. Holdout Method
The holdout method is the simplest model evaluation method: it directly splits the dataset into two portions, a training set and a test set. For example, we randomly choose 2/3 of the whole dataset as the training set and 1/3 as the test set. First, we use the training set to fit the model. Subsequently, we evaluate the built model on the test set by comparing the predicted labels with the ground truth. To some extent, the test set represents the new and unseen data encountered in practice. As the estimate obtained by applying the holdout method once is often not reliable, the splitting and evaluation are repeated several times, which is called the repeated holdout method. The average performance is reported as the final estimate. We usually use about 2/3 to 4/5 of the dataset for training and the rest for testing.
It should be noted that we must not train and evaluate the model on the same training dataset, a practice called resubstitution evaluation or resubstitution validation. Because resubstitution evaluation introduces an optimistic bias due to overfitting on the resubstitution samples, we cannot ascertain whether the model works because it memorized the training data or because it generalizes well to new data.
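A minimal sketch of the repeated holdout method (scikit-learn, synthetic data): 2/3 training, 1/3 test, averaged over ten random splits, always scoring on the held-out portion:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)

scores = []
for seed in range(10):  # repeated holdout: re-split and re-evaluate
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / 3,
                                              random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))  # never score on X_tr alone
print(np.mean(scores))
```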
2.2.2. Cross-Validation
The basic idea of cross-validation is to divide the data into different subsets: some subsets are used to train the model and the rest are used to test it, until all samples have been used for testing. The k-fold cross-validation strategy is the most commonly used in classification research [65]. With k-fold cross-validation, the dataset is partitioned into k disjoint subsets whose union is the whole dataset. A single subset is retained as the test data to evaluate the classifier, and the remaining subsets are used as training data. This process is repeated k times until every subset has been used as the test data exactly once. The performance results on the k test sets are averaged as the performance estimate for the classifier.
The step-by-step instructions for k-fold cross-validation are summarized below. Figure 1 is a diagram of k-fold cross-validation.
Step 1: Randomly split the original dataset into k equal folds.
Step 2: Select one of these folds as test set, and the remaining folds as training set to build model.
Step 3: Compute generalization performance of the built model on the test set.
Step 4: Repeat steps 2 and 3 k times until each fold has acted as the test set exactly once, with the remaining folds acting as the training set.
Step 5: Report the average of the generalization performance on all test sets as an estimate of the model performance.
Different values of k, usually five, ten, or the number of instances in the dataset, determine different subtypes of cross-validation. Assuming the dataset includes n samples, if $k = n$, we obtain a special case of cross-validation, namely leave-one-out cross-validation (LOOCV). Obviously, LOOCV is not affected by how the samples are partitioned, as there is only one way to divide n samples into n subsets, each containing a single sample. Although the evaluation results of LOOCV are often considered more accurate, LOOCV has an unbearable computational overhead when the dataset is relatively large. For example, LOOCV needs to build one thousand models if the dataset contains one thousand samples, whereas 5-fold and 10-fold cross-validation only need to build five and ten models, respectively. A minimal sketch follows.
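The sketch below runs 10-fold cross-validation (Steps 1-5) with scikit-learn on synthetic data; setting cv to the number of samples would instead give LOOCV:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
print(scores.mean())  # Step 5: average over the k test folds
```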
2.2.3. Bootstrapping
The bootstrap method, proposed by Bradley Efron in 1979 [66], is a re-sampling technique that repeatedly draws samples with replacement from the original dataset. The workflow of the bootstrap method is summarized as follows:
Step 1: The size of the original dataset is n. Randomly select one instance from the dataset and assign it to the bootstrap dataset; repeat this until the size of the bootstrap sample reaches n.
Step 2: Fit a model to the bootstrap dataset and compute its performance.
Step 3: Repeat Steps 1 and 2 b times. Calculate the model performance as the average over the b estimates. If accuracy is the performance metric, the bootstrap model performance is:

$$\mathrm{ACC}_{boot} = \frac{1}{b}\sum_{j=1}^{b}\mathrm{ACC}_{j} \quad (4)$$
In 1983, Bradley Efron described the 0.632 estimate [67] to address the bias of the bootstrap approach mentioned above. The bias in the conventional bootstrap method is due to the fact that each bootstrap sample contains only approximately 63.2% of the distinct samples from the whole dataset. For example, the probability that a specific sample from a dataset of size n is never selected is:

$$P(\text{not chosen}) = \left(1 - \frac{1}{n}\right)^{n} \quad (5)$$

The value of Equation (5) is asymptotically equivalent to $\frac{1}{e} \approx 0.368$ as $n \to \infty$. Therefore, the probability that a specific sample is chosen is:

$$P(\text{chosen}) = 1 - \left(1 - \frac{1}{n}\right)^{n} \approx 0.632 \quad (6)$$
Subsequently, to adjust for the bias caused by this sampling strategy, Bradley Efron introduced the 0.632 estimation method, computed by Formula (7):

$$\mathrm{ACC}_{boot} = \frac{1}{b}\sum_{j=1}^{b}\left(0.632 \cdot \mathrm{ACC}_{h,j} + 0.368 \cdot \mathrm{ACC}_{r,j}\right) \quad (7)$$

where $\mathrm{ACC}_{r,j}$ is the resubstitution accuracy, and $\mathrm{ACC}_{h,j}$ is the accuracy on the out-of-bag samples (samples not selected into the bootstrap sample). The 0.632 bootstrap addresses the pessimistic bias; however, an optimistic bias may occur. Therefore, the 0.632+ bootstrap was proposed [68].
$$\mathrm{ACC}_{boot} = \frac{1}{b}\sum_{j=1}^{b}\left(\omega \cdot \mathrm{ACC}_{h,j} + (1-\omega) \cdot \mathrm{ACC}_{r,j}\right) \quad (8)$$

Instead of using the fixed weight $\omega = 0.632$, the 0.632+ bootstrap computes the weight as

$$\omega = \frac{0.632}{1 - 0.368\,R} \quad (9)$$
where R is the relative overfitting rate:

$$R = \frac{-\left(\mathrm{ACC}_{h,j} - \mathrm{ACC}_{r,j}\right)}{\gamma - \left(1 - \mathrm{ACC}_{h,j}\right)} \quad (10)$$

where $\gamma$ is the no-information rate. We can calculate $\gamma$ by fitting a model to a dataset that contains all possible combinations between features $x_{i'}$ and target class labels $y_i$:

$$\gamma = \frac{1}{n^{2}}\sum_{i=1}^{n}\sum_{i'=1}^{n} L\big(y_{i}, f(x_{i'})\big) \quad (11)$$
Additionally, the no-information rate can be estimated as:

$$\gamma = \sum_{k} p_{k}\,(1 - q_{k}) \quad (12)$$

where $p_k$ is the proportion of examples belonging to class k observed in the dataset, and $q_k$ is the proportion of examples that the classifier predicts to belong to class k.
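A minimal sketch of the 0.632 bootstrap accuracy estimate of Formula (7), assuming scikit-learn and synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
rng = np.random.default_rng(0)
n, b, accs = len(y), 100, []

for _ in range(b):
    idx = rng.integers(0, n, size=n)        # draw n samples with replacement
    oob = np.setdiff1d(np.arange(n), idx)   # out-of-bag samples
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    acc_r = model.score(X[idx], y[idx])     # resubstitution accuracy
    acc_h = model.score(X[oob], y[oob])     # out-of-bag accuracy
    accs.append(0.632 * acc_h + 0.368 * acc_r)

print(np.mean(accs))
```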
2.2.4. Performance Evaluation Metrics
There are four types of possible outcomes for classification tasks: true positive, true negative, false positive, and false negative. The definitions of these four terms are listed in Table 1.
Table 1.
Term | Definition |
---|---|
True Positive (TP) | The prediction is positive and it is actually positive. |
False Positive (FP) | The prediction is positive but it is actually negative. |
True Negative (TN) | The prediction is negative and it is actually negative. |
False Negative (FN) | The prediction is negative but it is actually positive. |
These four outcomes are often arranged in a confusion matrix. The confusion matrix below (Table 2) illustrates the binary classification case.
Table 2.
Actual \ Predict | Yes | No
---|---|---
Yes | TP | FN
No | FP | TN
Next, we will introduce some model evaluation metrics.
Accuracy (also known as recognition rate) is defined as the fraction of correct predictions. It is calculated by dividing the number of correct predictions by the total number of predictions.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (13)$$
Precision (also known as positive predictive value, PPV) is defined as the fraction of correct positive predictions among all positive predictions.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (14)$$
Recall (also known as sensitivity or true positive rate, TPR) is defined as the ratio of true positive predictions to all examples that truly belong to the positive class.
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (15)$$
The $F_{\beta}$ score considers precision and recall together as a single evaluation index. The parameter $\beta$ controls the trade-off between precision and recall: $\beta < 1$ focuses more on precision, while $\beta > 1$ focuses more on recall. When $\beta = 1$, it is called the $F_1$ score.

$$F_{\beta} = \frac{(1 + \beta^{2}) \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\beta^{2} \cdot \mathrm{Precision} + \mathrm{Recall}} \quad (16)$$
The Brier score checks the goodness of a predicted probability, with values ranging between 0 and 1 (lower is better). For binary classification, the score is given by:

$$BS = \frac{1}{n}\sum_{i=1}^{n}\left(p_{i} - o_{i}\right)^{2} \quad (17)$$

where $p_i$ is the predicted probability, and $o_i$ is equal to 1 if the event occurred and 0 if not.
The receiver operating characteristic (ROC) curve plots the true positive rate against the false positive rate; Figure 2 shows an example. The area under the ROC curve (AUC) measures how well the classifier makes correct predictions across different decision thresholds.
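All of the metrics of this section can be computed with scikit-learn, as in the sketch below (the labels and probabilities are illustrative numbers):

```python
from sklearn.metrics import (accuracy_score, brier_score_loss, fbeta_score,
                             precision_score, recall_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                   # hard class labels
y_prob = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6, 0.3]   # predicted P(y = 1)

print(accuracy_score(y_true, y_pred))        # Formula (13)
print(precision_score(y_true, y_pred))       # Formula (14)
print(recall_score(y_true, y_pred))          # Formula (15)
print(fbeta_score(y_true, y_pred, beta=1))   # Formula (16), the F1 score
print(brier_score_loss(y_true, y_prob))      # Formula (17)
print(roc_auc_score(y_true, y_prob))         # area under the ROC curve
```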
2.3. Model Selection
With the development of machine learning, researchers proposed many efficient machine learning algorithms. For each algorithm, there are several hyperparameters that can be tuned to fit different datasets. Using different hyperparameters and algorithms to fit the training data sets results in different candidate models. As we are usually interested in obtaining the best-performing model from these candidate models, we need to find an approach to evaluate their respective performance in order to rank them. Model selection is the process of selecting the best machine learning model from the candidate models, which are built based on the training dataset. It involves the selection of different types of models (e.g., KNN, SVM, RF, etc.) and the selection of models with different hyperparameters for a certain type (e.g., different kernels for SVM).
As mentioned before, it is essential to evaluate our model with new data to prevent overfitting to the training set. However, in order to select the best model, we need to evaluate the candidate models while building them, and we cannot evaluate the candidate models on the test set; otherwise, we would obtain a model that performs best on the test set but may not generalize well in practice. To evaluate models as we build and tune them, we therefore create a third subset of the dataset, called the validation set. If we have plenty of data (thousands of samples or more), we can simply randomly split the full dataset into training, validation, and test sets. We then fit candidate models on the training set with different configurations of hyperparameters and algorithms. Subsequently, we evaluate the candidate models on the validation set and select the winning model that performs best (model evaluation and selection). With the hyperparameters of the best model, we retrain it on the training + validation set, and the generalization performance of the final model is evaluated on the test set (model evaluation). If the performance on the test set is similar to the performance on the validation set, there is reason to believe that the model will perform well on future data. Finally, we retrain the model on the full dataset (training, validation, and test sets) for production use.
However, we rarely have such a sufficiently large dataset in practice. For a limited dataset, there are two main approaches to model selection: re-sampling methods and analytical methods [69].
2.3.1. Re-Sample Methods
Re-sampling methods effectively expand the sample size by repeatedly re-splitting the training set at random and averaging the prediction error as the estimate. In general, we split the training dataset into a sub-training set and a validation set. The sub-training set is used to fit candidate models with different algorithms and hyperparameters. The validation set is used to evaluate these candidate models and select the best one. Model evaluation does not change: the test set is still used to estimate the performance of the final selected model.
We can adopt the aforementioned model evaluation methods (holdout, bootstrapping, and cross-validation) to split the training dataset again. By far the most widely used is cross-validation, which includes many subtypes. Here, the nested cross-validation method [70] is detailed as an example. We now have two tasks: the first is to select the best model across candidate algorithms and corresponding hyperparameters; the second is to estimate the generalization performance of the best model. Nested cross-validation includes an inner loop and an outer loop. In the inner loop, the target is to select the best model, whereas in the outer loop, the target is to estimate the generalization performance of the best model selected by the inner loop. Figure 3 illustrates the procedure of nested cross-validation. It works as follows:
Step 1: Randomly split the whole dataset into K equal folds (outer loop).
Step 2: Select one of them as the test set, and the remaining folds as the training set.
Step 3: Randomly split the training set into K′ equal sub-folds (inner loop).
Step 4: Select one of the sub-folds as the validation set and the remaining folds as the sub-training set. Then we train candidate models under different algorithms and hyperparameters with the sub-training set. Next, we evaluate the performance of candidate models on the current validation set.
Step 5: Repeat step 4 K′ times, so that each sub-fold has one and only one chance to act as the validation set, with the remaining sub-folds acting as the sub-training set.
Step 6: We then compute the average performance of candidate models on all validation sets and select the winning model with the best performance.
Step 7: With the hyperparameters of the best model from Step 6, we retrain it with the whole training set and then evaluate the generalization performance of the best model on the current test set.
Step 8: Repeat steps 2 to 7 K times, so that each fold has one and only one chance to act as the test set, with the remaining folds acting as the training set.
Step 9: Report the average of generalization performance on all test sets as an estimate of the model performance.
Lastly, we retrain the best model on the whole dataset for deployment. For brevity, nested CV with K outer folds and K′ inner folds is denoted as K × K′ nested CV; typical choices include 5 × 2 and 10 × 5.
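A minimal sketch of 5 × 2 nested cross-validation with scikit-learn on synthetic data, where GridSearchCV performs the inner loop (model selection) and cross_val_score the outer loop (generalization estimate):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

inner = GridSearchCV(SVC(),
                     {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},
                     cv=2)                      # inner loop: K' = 2
scores = cross_val_score(inner, X, y, cv=5)     # outer loop: K = 5
print(scores.mean())                            # generalization estimate

inner.fit(X, y)               # lastly, rerun the selection on all data
print(inner.best_params_)
```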
2.3.2. Analytical Measures
Compared with re-sampling methods, analytical methods not only evaluate model performance but also consider model complexity. In addition, as analytical methods approximate the test error from the training error, they do not require repeated runs, improving the efficiency of model selection. In this part, three typically used analytical criteria for model selection are introduced.
The Akaike information criterion (AIC) is a scoring criterion to measure the performance of statistical models, named after the Japanese statistician Hirotugu Akaike, who proposed it in 1973 [71].
$$\mathrm{AIC} = -2\ln(\hat{L}) + 2d \quad (18)$$
Formula (18) is a mathematical formulation of AIC, where $\ln(\hat{L})$ is the maximized log-likelihood and d is a measure of model complexity, such as the number of parameters for linear models. Note that the form of d for nonlinear and complex models differs and should be carefully derived. To use AIC for model selection, we simply choose the model with the smallest AIC over the set of models considered.
The Bayesian information criterion (BIC), also known as the Schwarz criterion, was derived from Bayesian probability by Schwarz [72]. Like AIC, it is applicable to models fitted by maximum likelihood. Using the same formalism as Formula (18), with n the number of training samples, the generic form of the Bayesian information criterion is:

$$\mathrm{BIC} = -2\ln(\hat{L}) + d\ln(n) \quad (19)$$
BIC has the same form as AIC, with the constant penalty factor 2 replaced by $\ln(n)$. Compared with AIC, BIC therefore penalizes models with more parameters and higher complexity more heavily (whenever $n > e^{2} \approx 7.4$). Although the two criteria look similar, BIC was not conceived like AIC but was obtained from a Bayesian perspective.
The minimum description length (MDL) principle, proposed by Rissanen [73], is motivated from an optimal coding viewpoint and recommends selecting the model from an information theory perspective. If we want to transmit our model and its predictions, a good solution from the coding view is to encode the message with the shortest length. According to Shannon's theorem [74], the length needed to describe our problem is:

$$L = -\ln P(y \mid \theta, M, X) - \ln P(\theta \mid M) \quad (20)$$
In Formula (20), M is our model with parameter vector $\theta$, and $P(y \mid \theta, M, X)$ is the conditional probability of the model output given the attributes X. The first term of Formula (20) represents the average code length for transmitting the difference between the output of the model and the ground truth, whereas the second term represents the average code length for transmitting the model parameter vector $\theta$.
One advantage of analytical measures for model selection is that they do not require a validation dataset: all of the data can be used to build the model, and the candidate models can be scored directly. However, analytical measures also have the limitation that they cannot form general statistics across different types of models. For a more detailed discussion of analytical measures, see [75].
2.3.3. Hyperparameter Tuning
The hyperparameters of machine learning algorithms enable the model to be tailored to different datasets. Therefore, hyperparameter tuning, which refers to the searching of an appropriate hyperparameter configuration, is an important process for the application of machine learning. Grid search, random search, Bayesian optimization, and meta-heuristic algorithms are most commonly used for hyperparameter tuning.
Grid search is an exhaustive strategy that explores a grid of evenly spaced hyperparameter values across the search space. Generally, grid search can find the global optimum by setting a large search range and a fine grid. With this strategy, we simply build a model for each potential combination of hyperparameters, evaluate it, and select the combination that achieves the best results. The downside is that the number of potential combinations grows exponentially with the number of hyperparameters; trying them all one by one is quite inefficient and can take days or even weeks, especially on a large dataset.
Different from grid search, random search simply draws some random samples instead of trying all hyperparameter settings. This strategy randomly samples model hyperparameters following a sampling distribution (e.g., uniform) for a number of iterations. For each iteration, we build the model under a hyperparameter combination, which is randomly sampled from the aforementioned distribution. Subsequently, we evaluate each chosen hyperparameter configuration and select the best one. On account of randomness, it is not guaranteed that random search always finds the optimal solution.
A meta-heuristic algorithm is a generic optimization framework that can address almost any optimization problem, as it is problem-independent. The iterative generation process of a meta-heuristic algorithm realizes a robust search mechanism by balancing exploration (diversification) and exploitation (intensification) under different intelligent concepts. It therefore makes the black-box optimization problem of hyperparameter tuning solvable with an optimal or near-optimal solution. Genetic algorithms [76], particle swarm optimization [77], simulated annealing [78], and tabu search [79] have already been introduced for hyperparameter tuning.
Bayesian optimization for machine learning hyperparameter tuning was proposed by J. Snoek (2012) [80]. It works under the assumption that the mapping between hyperparameter settings and generalization performance is sampled from a Gaussian process. We first construct the distribution from observations of hyperparameters and the corresponding generalization performance. Subsequently, an acquisition function determines the next hyperparameter point at which to evaluate the performance, and the new observation is added to update the distribution. We repeat these two steps iteratively until converging to an optimum. In this setup, information from previous hyperparameter settings is used to adjust the exploration process.
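As a sketch of random search (assuming scikit-learn and synthetic data), the example below samples 20 hyperparameter configurations from the given distributions and evaluates each with 5-fold cross-validation:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

search = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-3, 1e3), "kernel": ["linear", "rbf"]},
    n_iter=20, cv=5, random_state=0)  # 20 random configurations
search.fit(X, y)
print(search.best_params_, search.best_score_)
```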
2.4. Hypothesis Testing
Once we obtain the final model, we usually want to compare our method with state-of-the-art methods to show that it beats, or performs as well as, the advanced methods. With model evaluation methods and performance metrics, it seems possible to compare models by first measuring certain performance metrics and then comparing the values directly. However, the observed performance of models, and the differences between them, may be misleading because of sampling error rather than essential differences. To make performance comparisons statistically sound, we introduce hypothesis testing in this part.
Hypothesis testing is a statistical inference method to distinguish whether results are due to sampling error or to intrinsic differences. In hypothesis testing, we first compute a statistic from the samples and assume that it follows a certain distribution. If the probability of observing the statistic under this distribution is very low, we may reject the hypothesis; if not, we may accept it. For example, suppose we have a model with an observed average error rate $\hat{\epsilon}$. In hypothesis testing, we may assume that the true error rate is no greater than some value $\epsilon_0$. If the test result is consistent with the hypothesis, we accept it; otherwise, we should reject it, as the error rate has a high probability of being greater than $\epsilon_0$.
Let us take the error rate under Student's t-test [81] as an example. If we adopt k-fold cross-validation, the model evaluation process provides k error rates from the k splits. Denoting these error rates by $\hat{\epsilon}_1, \dots, \hat{\epsilon}_k$, the average error rate $\mu$ and standard deviation $\sigma$ are given by Formulas (21) and (22).

$$\mu = \frac{1}{k}\sum_{i=1}^{k}\hat{\epsilon}_i \quad (21)$$

$$\sigma = \sqrt{\frac{1}{k-1}\sum_{i=1}^{k}\left(\hat{\epsilon}_i - \mu\right)^{2}} \quad (22)$$

We can compute the t-statistic by Formula (23), which obeys a t-distribution with k − 1 degrees of freedom.

$$\tau_t = \frac{\sqrt{k}\,(\mu - \epsilon_0)}{\sigma} \quad (23)$$

For $H_0: \mu = \epsilon_0$, we fail to reject the hypothesis at significance level $\alpha$ if $|\tau_t| < t_{\alpha/2,\,k-1}$ or if the p-value is greater than $\alpha$.
2.4.1. Paired t-Test
The paired t-test [82] is a specific Student's t-test used when the two samples are matched or paired. It is a two-sided test of the null hypothesis that the two paired samples have identical average values.
When comparing machine learning models with a paired t-test, we first evaluate each model on the same k-fold cross-validation split of the dataset and compute a performance score for each split. For example, we use $\epsilon_i^{A}$ and $\epsilon_i^{B}$ to denote the test error rates of model A and model B on the same split i. Secondly, we calculate the difference of each pair, $d_i = \epsilon_i^{A} - \epsilon_i^{B}$. If the two models perform the same, the mean of the differences should be zero. Therefore, we compute the mean $\mu_d$ and the variance $\sigma_d^2$ of $d_i$ and obtain the t-statistic by Formula (24).

$$\tau_t = \left|\frac{\sqrt{k}\,\mu_d}{\sigma_d}\right| \quad (24)$$

For $H_0$: model A and model B perform the same vs. $H_1$: model A and model B perform differently, we fail to reject the null hypothesis at significance level $\alpha$ if $\tau_t$ is less than $t_{\alpha/2,\,k-1}$ or if the p-value is greater than $\alpha$. This means that there is no significant difference between model A and model B.
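A minimal sketch with SciPy, using illustrative per-fold error rates:

```python
from scipy.stats import ttest_rel

err_a = [0.12, 0.10, 0.15, 0.11, 0.13]   # model A on 5 CV splits
err_b = [0.14, 0.13, 0.16, 0.15, 0.14]   # model B on the same splits
t_stat, p_value = ttest_rel(err_a, err_b)
print(t_stat, p_value)   # fail to reject H0 if p_value > alpha
```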
Valid use of the paired t-test relies on the independence of each evaluation. In this case, however, the sub-training sets overlap with each other, so the evaluations are not independent. Sticking with this hypothesis test leads to overestimating the probability of rejecting the null hypothesis.
2.4.2. 5 × 2 Cross-Validation Paired t-Test
To address this problem, 5 × 2 cross-validation, which repeats 2-fold cross-validation five times, is adopted as the evaluation method. We randomly shuffle the dataset before each 2-fold cross-validation so that each observation occurs only once in the training or test set. Then, the evaluation results are tested with a paired t-test. For the 5 × 2 cross-validation paired t-test [82], the computation of the statistic differs slightly from the paired t-test. If $d_j^{(i)}$ denotes the difference in test error rate between model A and model B in the ith fold of the jth repetition, then the average in each repetition is $\bar{d}_j = (d_j^{(1)} + d_j^{(2)})/2$ and the variance is $s_j^{2} = (d_j^{(1)} - \bar{d}_j)^{2} + (d_j^{(2)} - \bar{d}_j)^{2}$. With this notation, the statistic of the 5 × 2 cross-validation paired t-test is defined in Formula (25).
$$\tau_t = \frac{d_1^{(1)}}{\sqrt{\dfrac{1}{5}\sum_{j=1}^{5}s_j^{2}}} \quad (25)$$
The statistic obeys a t-distribution with 5 degrees of freedom. As with the paired t-test, if the statistic is less than $t_{\alpha/2,\,5}$, model A and model B perform equivalently; otherwise, the performance of the two models is significantly different, and the one with the lower average error rate performs better.
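A minimal sketch of the statistic of Formula (25), given the ten per-fold error-rate differences d[j, i] (illustrative numbers):

```python
import numpy as np
from scipy.stats import t

d = np.array([[0.02, 0.01], [0.03, 0.00], [0.01, 0.02],
              [0.02, 0.02], [0.00, 0.03]])       # d[j, i], 5 reps x 2 folds

mean_j = d.mean(axis=1)                           # mean per repetition
s2_j = ((d - mean_j[:, None]) ** 2).sum(axis=1)   # variance per repetition
t_stat = d[0, 0] / np.sqrt(s2_j.mean())           # Formula (25)
p_value = 2 * t.sf(abs(t_stat), df=5)             # 5 degrees of freedom
print(t_stat, p_value)
```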
2.4.3. Wilcoxon Signed-Rank Test
The Wilcoxon signed-rank test [83] is a non-parametric hypothesis test proposed by Frank Wilcoxon in 1945. It applies to two related or paired samples to assess whether their populations have the same distribution. Specifically, it checks whether the differences between the paired observations come from a population with a median of zero. We can use the Wilcoxon signed-rank test to compare two models A and B at the statistical level with the following steps; notations not defined in this section are the same as for the paired t-test.
Step 1: Build the hypotheses. $H_0$: the performance distributions of model A and model B are equal; $H_1$: the performance distributions of model A and model B are not equal.
Step 2: Calculate the differences. For $i = 1, \dots, N$, calculate the difference of each pair, $d_i = \epsilon_i^{A} - \epsilon_i^{B}$.
Step 3: Rank the differences. Order the differences by absolute value from smallest to largest and let $R_i$ denote the rank, with the smallest $|d_i|$ ranked 1. Ties (pairs with equal $|d_i|$) receive the average of the ranks they span, and differences equal to zero are omitted from the ranking. For example (with illustrative values consistent with Step 4 below), if we have six difference values (1, −2, 0, 3, 4, 5), the zero is omitted and the remaining absolute values are 1, 2, 3, 4, 5; their ranks are therefore 1, 2, 3, 4, 5.
Step 4: Compute the statistic. $W^{+}$ is the sum of the ranks of the positive differences, and $W^{-}$ is the sum of the ranks of the negative differences. The test statistic of the Wilcoxon signed-rank test is $W = \min(W^{+}, W^{-})$, which is compared with the critical value table for the Wilcoxon signed-rank test in [84]; let $N_r$ denote the number of pairs included in the ranking. If W is no greater than the critical value, we reject the null hypothesis. For the instance in Step 3, $W^{+} = 1 + 3 + 4 + 5 = 13$ and $W^{-} = 2$, so $W = 2$. The sum of the positive-difference ranks ($W^{+} = 13$) is much larger than the sum of the negative-difference ranks ($W^{-} = 2$), indicating an advantage for model A; whether this advantage is statistically significant is decided by comparing W with the critical value for $N_r = 5$ at the chosen significance level.
Step 5: Compute the z-score. If $N_r \geq 25$, we can use a large-sample approximation. For the population of the statistic W, the mean $\mu_W$ and standard deviation $\sigma_W$ are given by Formulas (26) and (27) [84].

$$\mu_W = \frac{N_r(N_r+1)}{4} \quad (26)$$

$$\sigma_W = \sqrt{\frac{N_r(N_r+1)(2N_r+1)}{24}} \quad (27)$$

Therefore, the z-score is

$$z = \frac{W - \mu_W}{\sigma_W} \quad (28)$$

Then, we compare the obtained value with the critical value of the normal distribution or calculate the p-value. As a general rule, we set the level of risk to $\alpha = 0.05$. If the p-value is less than 0.05 or the absolute value of z is greater than 1.96, we reject the null hypothesis.
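A minimal sketch with SciPy on paired performance scores; the scores are illustrative and reproduce the differences of the Step 3 example, and zero differences are dropped via zero_method="wilcox":

```python
from scipy.stats import wilcoxon

score_a = [0.85, 0.83, 0.90, 0.87, 0.88, 0.84]
score_b = [0.84, 0.85, 0.90, 0.84, 0.84, 0.79]
w_stat, p_value = wilcoxon(score_a, score_b, zero_method="wilcox")
print(w_stat, p_value)   # reject H0 if p_value < alpha
```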
2.4.4. McNemar’s Test
For models that are very large and built on large datasets, training a single model can take days or weeks, so it is impractical or expensive to train multiple copies. McNemar's test [82,85,86] can compare models that are each executed only once. It is named after Quinn McNemar, who proposed it in 1947 [85].
To implement McNemar's test, we adopt the holdout method to train and test models A and B. For each model, we record the classification results on the test set and tabulate the outcomes in Table 3, a contingency table detailing the misclassifications of models A and B. For example, $n_{00}$ is the number of samples misclassified by both models A and B, and $n_{01}$ is the number of samples misclassified by model A but not by model B.
Table 3.

| | Model B misclassification | Model B correct classification |
|---|---|---|
| Model A misclassification | $n_{00}$ | $n_{01}$ |
| Model A correct classification | $n_{10}$ | $n_{11}$ |
As before, the null hypothesis is that the two models do not differ in performance; in other words, the error rates of the two models are the same, which implies $n_{01} = n_{10}$. We build the statistic as in Formula (29), which follows a $\chi^2$ distribution with 1 degree of freedom. At the significance level of 0.05, the critical $\chi^2$ value with 1 degree of freedom is 3.841 [87].
$$\chi^{2} = \frac{\left(|n_{01} - n_{10}| - 1\right)^{2}}{n_{01} + n_{10}} \quad (29)$$
Then, we compare the obtained $\chi^2$ value with the critical value of the chi-square distribution or calculate the p-value. For the classical approach, if $\chi^2 > 3.841$ at $\alpha = 0.05$, we reject the null hypothesis that models A and B perform equally. For the p-value approach, if the p-value is less than $\alpha$, we reject the null hypothesis and conclude that the performance of models A and B differs significantly. In this case, there is sufficient evidence at significance level $\alpha$ to conclude that the two models perform at different error rate levels.
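A minimal sketch computing Formula (29), with its continuity correction, from the contingency counts (illustrative numbers):

```python
from scipy.stats import chi2

n01, n10 = 18, 6    # illustrative misclassification counts
stat = (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
p_value = chi2.sf(stat, df=1)
print(stat, p_value)   # reject H0 if stat > 3.841 (alpha = 0.05)
```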
2.4.5. Friedman Test and Post-Hoc Test
If one model performs well on some data sets and poorly on others compared to other models, how can we tell if this model outperforms the others or not? Friedman test and the corresponding post hoc test are employed to explore the answer to this question.
The Friedman test is a non-parametric hypothesis test for comparing multiple models on different datasets, proposed by Milton Friedman [88,89]. The procedure involves ranking the models on each dataset separately and calculating the Friedman statistic to infer whether the models perform differently. Suppose we compare k models on N datasets, and let $r_i$ denote the average rank of the ith model over all datasets. For example, suppose we have three models, A, B, and C, and four datasets; the best-performing model on a dataset ranks 1, the second-best ranks 2, and the last ranks 3. In the case of ties, the average of the ranks they span is assigned. Table 4 is an example of the ranking results, where $r_A = 1.125$, $r_B = 2.250$, and $r_C = 2.625$.
Table 4.

| Dataset | Model A | Model B | Model C |
|---|---|---|---|
| Dataset 1 | 1.5 | 1.5 | 3 |
| Dataset 2 | 1 | 3 | 2 |
| Dataset 3 | 1 | 2.5 | 2.5 |
| Dataset 4 | 1 | 2 | 3 |
| Average rank | 1.125 | 2.250 | 2.625 |
The null hypothesis states that there is no difference among the models, which means the average ranks should be equivalent. Under the null hypothesis, the mean and variance of $r_i$ are $(k+1)/2$ and $(k^2-1)/12$, respectively. The variable $\tau_{\chi^2}$, defined by Formula (30), follows a $\chi^2$ distribution with $k-1$ degrees of freedom when k and N are large enough.

$$\tau_{\chi^{2}} = \frac{12N}{k(k+1)}\left(\sum_{i=1}^{k} r_i^{2} - \frac{k(k+1)^{2}}{4}\right) \quad (30)$$
However, the above statistic is overly conservative, and a new statistic, defined in Formula (31), is usually adopted.

$$\tau_F = \frac{(N-1)\,\tau_{\chi^{2}}}{N(k-1) - \tau_{\chi^{2}}} \quad (31)$$
$\tau_F$ follows an F-distribution with $k-1$ and $(k-1)(N-1)$ degrees of freedom. Next, we compare the obtained $\tau_F$ with the critical value or calculate the p-value. For the classical approach, if $\tau_F > F_{\alpha}\big(k-1, (k-1)(N-1)\big)$, we reject the null hypothesis. For the p-value approach, if the p-value is less than $\alpha$, we reject the null hypothesis and conclude that at least one model performs differently at significance level $\alpha$.
If the null hypothesis that all models perform the same is rejected, the performance of the models differs significantly, and we proceed with a post hoc test to distinguish which models perform better than others. There are several pathways to realize a post hoc test. If we want to compare all models with each other, the Nemenyi test [90,91] is a commonly used method. If we aim only to compare all models with a control model, such as comparing a proposed model with state-of-the-art models, the Bonferroni correction, step-up, or step-down procedures are appropriate [92,93,94,95]. Here, we take the Nemenyi test as the example of a post hoc test after the Friedman test; for a more detailed description of the other procedures, please refer to [96,97,98].
To implement the Nemenyi test, we first define a critical difference (CD) as in Formula (32):

$$CD = q_{\alpha}\sqrt{\frac{k(k+1)}{6N}} \quad (32)$$

where $q_{\alpha}$ (Table 5) is the critical value based on the Studentized range statistic divided by $\sqrt{2}$.
Table 5.

| k | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| $q_{0.05}$ | 1.960 | 2.343 | 2.569 | 2.728 | 2.850 | 2.949 | 3.031 | 3.102 | 3.164 |
| $q_{0.10}$ | 1.645 | 2.052 | 2.291 | 2.459 | 2.589 | 2.693 | 2.780 | 2.855 | 2.920 |
If the difference between the average ranks of two models exceeds the critical difference, the assumption that the two models perform the same is rejected with the corresponding confidence. For the example in Table 4 (k = 3, N = 4), $\tau_{\chi^2} = 4.875$ and $\tau_F = 4.68$. Since $\tau_F$ is below the critical value $F_{0.05}(2, 6) = 5.143$, no difference among the models can be established at the significance level of 0.05. At the significance level of 0.10, however, $\tau_F$ exceeds $F_{0.10}(2, 6) = 3.46$, and the Nemenyi critical difference is $CD = 2.052\sqrt{(3 \times 4)/(6 \times 4)} = 1.451$; model A performs better than model C, as the difference of their average ranks (2.625 − 1.125 = 1.5) exceeds 1.451.
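A minimal sketch of the Friedman test with SciPy, on illustrative per-dataset scores that reproduce the ranks of Table 4 (SciPy applies a tie correction, so the statistic can differ slightly from the hand computation above):

```python
from scipy.stats import friedmanchisquare

model_a = [0.90, 0.88, 0.92, 0.91]
model_b = [0.90, 0.82, 0.85, 0.86]
model_c = [0.84, 0.86, 0.85, 0.83]
stat, p_value = friedmanchisquare(model_a, model_b, model_c)
print(stat, p_value)   # reject H0 if p_value < alpha
```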
3. Liquid Biopsy Components
During the formation and growth of primary tumors, cells undergo active release, necrosis, or apoptosis [99,100]. In these processes, various components are released into body fluids, including circulating tumor cells, cell-free DNA, circulating tumor DNA, cell-free RNA, exosomes, and tumor-educated platelets (TEPs) [101].
3.1. Circulating Tumor Cells
The presence of circulating tumor cells (CTCs) was first identified by Ashworth (Australia) in 1869 [102], who, while performing an autopsy on a metastatic breast cancer patient, found cells in the blood similar to those of the primary tumor. CTCs are currently defined as tumor cells that shed or migrate actively into the vessels from the primary tumor or metastatic sites and then circulate in the bloodstream [103]. The tumor self-seeding hypothesis suggests that CTCs can recirculate back, creating the possibility of metastases, which are responsible for the majority of cancer-associated deaths [104,105]. As access to the peripheral blood circulation is a prerequisite for distant metastasis of tumors [106], detection of tumor cells in blood indicates the possibility of distant metastasis [107].
Although CTCs are extremely rare, they are still a potential alternative to invasive biopsies as a source of tumor material for the detection and monitoring of cancers [108,109,110,111]. CTCs can be enriched and detected via different technologies that take advantage of their physical and biological properties [112]. The technology to obtain these cells is an evolving field of research, challenged by the need to isolate CTCs in a condition that allows molecular analysis and propagation into CTC-derived xenografts [113].
CTCs are isolated from peripheral blood, which avoids invasive and complex biopsy procedures. Cultured tumor cell lines take a long time to establish and are homogeneous, so they cannot accurately reflect genetic diversity or the changing tumor microenvironment. In contrast, CTC-derived xenografts reflect the biological characteristics of cancer more accurately, providing a visual window for studying the dynamic evolution of cancer and allowing the longitudinal evolution of tumors to be monitored at the molecular level.
As a marker for early diagnosis, CTCs also have some limitations. A reasonable and effective enrichment method is the most important and urgent problem to be solved; the main challenge is to obtain a sufficient number of CTCs in a form suitable for further evaluation. Besides, techniques for assessing the molecular characteristics of CTCs are still evolving, and the standards for clinical practice should be unified.
3.2. Cell-Free DNA and Circulating Tumor DNA
In 1948, Mandel and Metais, researchers from France, first found nucleic acids circulating in human blood [114,115]. Circulating cfDNA refers to DNA released into the blood by necrotic or apoptotic cells or through active release [116,117]. For cancer patients, part of the cfDNA comes from tumor cells; this subpopulation of the cfDNA is ctDNA. In 1977, scientists first confirmed the presence of ctDNA in the blood of cancer patients [118]. ctDNA is single- or double-stranded [113] and comes from living or dying tumor cells or from CTCs [119,120,121]. The majority of cfDNA is released from normal cells; therefore, ctDNA occupies only a small proportion of the cfDNA [101].
The concentration of cfDNA in the blood can increase owing to events such as cancer, autoimmune disease, smoking, pregnancy, intense exercise, and tissue-damaging therapies [122,123,124,125,126,127,128]. Likewise, the fraction of ctDNA may vary due to various factors [129]. Although ctDNA analysis provides a viable option for early cancer diagnosis, existing techniques have not yet overcome the difficulties of sensitivity, and how to standardize the testing method remains an open problem.
3.3. Cell-Free RNA
In 1993, Lee first discovered miRNAs [130], which are intracellular non-coding RNA molecules of about 22 nucleotides. miRNAs play an important signaling role by mediating post-transcriptional silencing in various cellular activities [131]. Circulating or cell-free miRNA (cfRNA) refers to the miRNAs identified in biological fluids [131]. The high turnover rate of tumor cells requires high expression of specific genes, leading to the generation of large amounts of cfRNA [132]. Accordingly, researchers have identified corresponding alterations in the blood of cancer patients [101,133].
The limitations of miRNA are reflected in the inconsistent selection of internal or external reference genes for quantitative detection; miRNAs from different sources, such as plasma, serum, whole blood, and exosomes, differ in quality and quantity during the separation process; and some studies have small sample sizes, which may lead to unreliable results. Therefore, the isolation and quantification of miRNA and the methods used for data analysis still need further validation.
3.4. Exosomes
Exosomes were first discovered in sheep reticulocytes in 1983 and named by Johnstone in 1987 [134]. Exosomes are vesicles released by cells, containing abundant proteins, genetic information such as DNA and RNA, and other analytes [135]. With a diameter between 30 nm and 100 nm, they can be detected in plasma, saliva, urine, breast milk, hydrothorax, cerebrospinal fluid, semen, and other body fluids [136]. Furthermore, they are stable under extreme pH (pH = 1–13) and freeze-thaw cycles [137]. Since exosomes play a key role in tumor growth and metastasis, their complicated impact on cancer mechanisms needs further study. These properties support the potential of exosomes and their components for application in cancer detection [138].
The vesicle membrane of exosomes enhances the stability of the wrapped genetic components. The similarity between circulating exosomal miRNAs and tumor-derived miRNAs makes the former potentially useful for cancer screening tests. In addition, other genetic components inside exosomes will enrich research on tumor genetics. The preliminary results obtained so far are very promising. However, the technology for acquiring exosomes is still under development, which is the main factor limiting exosome-related research.
3.5. Tumor Educated Platelets
Platelets (also termed thrombocytes) are the second most abundant cell type in peripheral blood, existing as circulating anucleated cell fragments; the largest platelets are about 2–3 microns in diameter [139]. More recently, platelets have been implicated as playing a central role in the local and systemic responses to tumor growth [140,141]. The confrontation of platelets with tumor cells, through the transfer of tumor-associated biomolecules ('education'), is an emerging research field that has given rise to the term tumor-educated platelets (TEPs).
4. Machine Learning Algorithms and Clinical Application in Early Cancer Detection Based on Liquid Biopsy
Several machine learning algorithms are used to detect cancer based on characteristics extracted from liquid biopsies. An overview of all relevant papers is listed in the supplementary document (Table: Summary of related publications) with the direct URL of each dataset, if available. This section discusses and reviews publications on the most commonly used algorithms for early cancer detection in the past 10 years. As this systematic survey aims to report the breadth of studies on early cancer detection based on liquid biopsy incorporating machine learning algorithms, over 400 papers were retrieved using the following keywords: (liquid biopsy OR exosome OR circulating tumor cell OR circulating tumor DNA OR cell free DNA OR microRNA OR tumor educated platelet) AND (cancer OR carcinoma OR adenocarcinoma OR tumor OR malignancy OR malignant disease) AND (svm OR support vector machine). We searched for four extensively used machine learning algorithms by replacing the last keyword. For each algorithm, we checked the top 100 relevant publications of the past 10 years against the following four criteria. Figure 4 shows the workflow of publication selection.
The research is about liquid biopsy.
The research is about cancer detection.
The research utilized corresponding machine learning method.
When several models are compared in one study, we only consider the best-performing model.
4.1. Traditional Machine Learning Algorithms
For traditional machine learning algorithms, we reviewed linear models, support vector machines, and random forests.
4.1.1. Linear Models
Linear models are widely used for supervised learning because of their simplicity of implementation and their interpretability. Linear regression, logistic regression, and LASSO are examples of linear models.
A. Principle of Linear Models
Given input data $x = (x_1, \dots, x_n)$, let $\hat{y}$ denote the prediction made by a model for this input. Coefficients $\beta_i$ are the parameters that define the model, with one coefficient assigned to each input; the bias or intercept is provided by an additional coefficient $\beta_0$. The training data are used to estimate the coefficients of the logistic regression model with a learning procedure known as maximum-likelihood estimation, which assumes a data distribution and produces the coefficients that minimize the discrepancy between the probabilities predicted by the model and those observed in the data.
The logistic regression model can be described with a matrix for the input data X, a vector for the output , and a vector for the coefficients using linear algebra represented as the Formula (33).
(33) |
Since the above representation is identical to linear regression, which produces real values as outputs instead of class labels, a nonlinear function is used to ensure that the output of the weighted sum is a value between 0 and 1.
Logistic regression uses the logistic function, also known as the sigmoid function, to ensure class labels’ prediction. The sigmoid function is an S-shaped curve that maps a real-valued number x into a number between 0 and 1 using Equation (34).
(34) |
Therefore, for logistic regression, x in Equation (34) is replaced with the weighted sum given in Equation (35) to produce an output between 0 and 1 for two class labels 0 and 1.
(35) |
The output from the model can be interpreted as a probability from a Binomial probability distribution function.
Least Absolute Shrinkage and Selection Operator (LASSO), also known as L1-norm, adds a regularization term which is used to penalize the less important features in a data by making their respective coefficient () zero, thereby shrinking their weights to zero. The less important features in Equation (33) having are eliminated, thereby making LASSO useful for feature selection and the creation of simple models. It is beneficial for datasets with high dimensions and high correlation. L1-norm is given by Equation (36)
(36) |
where is the hyperparameter that controls the shrinkage. The bias of the model increases as increases while variance increases as decreases.
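As a minimal, hedged sketch of Equations (33)–(36), the following Python snippet fits a plain logistic regression and an L1-penalized (LASSO-style) logistic regression with scikit-learn; the synthetic data shape and the regularization strength are illustrative assumptions, not values from any cited study.

```python
# Sketch of Equations (33)-(36): logistic regression and its
# L1-penalized (LASSO) variant on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "biomarker" matrix: 200 samples x 50 features (illustrative).
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Plain logistic regression: sigmoid of the weighted sum, Equation (35).
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("LR test accuracy:", lr.score(X_test, y_test))

# L1 penalty (Equation (36)); scikit-learn uses C = 1/lambda, so a
# smaller C applies stronger shrinkage and zeroes more coefficients.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_train, y_train)
print("non-zero coefficients:", int(np.sum(lasso.coef_ != 0)))
```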
The Application of Linear Models in Early Cancer Detection
Linear models have been applied in many ways to detect several types of cancer, either recurrent or metastatic, in different parts of the body. Table 6 is an overview of relevant publications based on linear models.
Table 6.
| Reference | Method | Dataset Available | URL for Dataset | Cancer Type | Sample Type | Biomarker |
|---|---|---|---|---|---|---|
| [142] | LR | Y | https://www.ncbi.nlm.nih.gov/gds/?term=GSE31682, accessed on 29 June 2021 | Ovarian cancer | Blood | DNA methylation |
| [143] | LR | N | | Oral cancer | Plasma | cfDNA |
| [144] | LR | N | | Oral cancer | Blood | Exosomes |
| [145] | LASSO | N | | Non-small cell lung cancer | Plasma | cfDNA |
| [146] | LR | N | | Colorectal cancer | Blood | miRNA |
| [147] | LR, LASSO | Y | http://www.uni-koeln.de/med-fak/clcgp/, accessed on 29 June 2021 | Non-small cell lung carcinoma | Blood | cfDNA |
| [148] | LASSO | N | | Lung cancer | Plasma | Exosomes |
| [149] | LASSO | Y | https://identifiers.org/ncbi/insdc.sra:SRP302308, accessed on 29 June 2021 | Breast cancer | Blood | cfDNA |
Maltoni et al. [150] used a logistic regression model to evaluate the role of genes altered in breast cancer, such as HER2 and PIK3CA, in patient prognosis, given their possible correlation with cfDNA quantity. They collected serum samples from 58 non-relapsed and 21 relapsed patients and analyzed the samples for cfDNA integrity and the quantity of all oncogenes. To determine the ability of these genes to predict a relapse, logistic regression on a two-marker combination produced an area under the curve of 0.627 with a 95% confidence interval. Pending further clinical validation, the study speculates on the potential of cfDNA-based liquid biopsy in clinical practice.
Gene expression information of the original tissues is contained in the nucleosome footprint of cfDNA, and this information can be used to predict the response to chemotherapy. Yang et al. [149] utilized LASSO to evaluate the coverage of genes at transcription start site (TSS) regions. Based on cfDNA data of 85 healthy individuals and 85 breast cancer patients, the coverage at the TSS regions was utilized to classify individuals as cancerous or healthy. The LASSO model was repeated 100 times with 5-fold cross-validation using an R package to prevent bias. A test using plasma from 30 healthy donors and 60 patients was implemented to validate the model independently. The model recorded a median AUC of 0.863 for the training cohort and 0.834 for the validation cohort, avoiding overfitting as reflected in the recorded AUCs. With this analysis, the use of cfDNA nucleosome footprints to predict response to neoadjuvant chemotherapy was highlighted and verified with the LASSO model, supporting personalized decision-making for patients' treatment.
Because lung cancer is usually advanced by the time it is diagnosed, it is the deadliest cancer in the world [151]. El-Khoury et al. [148] used a bootstrap sampling method with LASSO penalization to deduce a suitable combination of proteins for predicting outcome, aiming to improve early detection and patients' survival (a sketch of this strategy follows). With data comprising 93 healthy donors and 128 lung cancer patients, the plasma levels of 351 proteins were quantified, and the optimal threshold for the biomarker panel was selected. The panel was validated on independent data of 49 healthy donors and 48 patients using logistic regression. With an AUC of 0.999, sensitivity of 0.992, specificity of 0.989, negative predictive value of 0.989, and positive predictive value of 0.992, lung cancer was detected irrespective of the cancer stage, making earlier detection and treatment possible.
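A hedged sketch of this bootstrap-plus-LASSO panel-selection idea is given below: resample the cohort with replacement, refit an L1-penalized model on each replicate, and retain the proteins selected most frequently. The cohort size mirrors the study only loosely, and the 80% retention threshold is an illustrative assumption, not a value from the original paper.

```python
# Sketch: bootstrap resampling + LASSO to build a stable biomarker panel.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

X, y = make_classification(n_samples=221, n_features=351, n_informative=10,
                           random_state=0)   # 221 subjects x 351 "proteins"

selection_counts = np.zeros(X.shape[1])
for b in range(100):                          # 100 bootstrap replicates
    Xb, yb = resample(X, y, random_state=b)   # sample with replacement
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    model.fit(Xb, yb)
    selection_counts += (model.coef_[0] != 0)

panel = np.where(selection_counts >= 80)[0]   # kept in >=80% of replicates
print("candidate panel (feature indices):", panel)
```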
For early and accurate decisions on treatment strategies, an accurate diagnosis must be made. It is therefore vital to distinguish small cell lung cancer (SCLC) from non-small cell lung cancer (NSCLC); NSCLC can be further categorized into squamous cell carcinoma and adenocarcinoma, inter alia. Raman et al. [147] collected public data containing 843 samples (small cell lung cancer = 68, squamous cell carcinoma = 351, and adenocarcinoma = 424), which were filtered based on histology, and cfDNA was further extracted from plasma. Five classifiers, including random forest, support vector machine, multinomial logistic regression with ridge regularization, multinomial logistic regression with elastic net regularization, and multinomial logistic regression with lasso regularization, were evaluated with the data using leave-one-out cross-validation. Because some classifiers cannot deal with class imbalance, the authors used random sampling to equalize all classes to 68 samples, giving 204 training samples (sketched below). Multinomial logistic regression with ridge regularization had the best performance based on an iterative one-vs.-all receiver operating curve, with a mean area under the curve of 0.936. The coefficients of the logistic regression model indicated that the prominent regions differentiating non-small cell lung cancer from small cell lung cancer are located at the chromosome-arm level, and that tumor fraction is a determinant of the prediction probability.
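The class-balancing step described above can be sketched as follows: each class is randomly downsampled to the size of the smallest class (68) before training. The feature count here is an arbitrary placeholder.

```python
# Sketch: downsample every class to the smallest class size (68) so a
# classifier sees a balanced training set of 204 samples.
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(843, 20))                   # 20 features: placeholder
y = np.array([0] * 68 + [1] * 351 + [2] * 424)   # SCLC, SCC, adenocarcinoma

n_min = np.bincount(y).min()                     # 68 samples per class
balanced = [resample(X[y == c], y[y == c], n_samples=n_min,
                     replace=False, random_state=0) for c in np.unique(y)]
X_bal = np.vstack([Xc for Xc, _ in balanced])
y_bal = np.concatenate([yc for _, yc in balanced])
print(X_bal.shape)                               # (204, 20)
```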
Cucchiara et al. [145], working with metastatic cases of EGFR-positive NSCLC, reported the possibility of combining liquid biopsy and radiomics to guide management of the disease by detecting new mutations early. Liquid biopsy is easy to perform, minimally invasive, and can be repeated to extract valuable information. cfDNA acquired from the plasma of seven metastatic patients was analyzed using digital droplet PCR, and radiomic analysis was performed on computed tomography images. The authors compared the EGFR mutation dynamics in cfDNA with the radiomic features. They used a logistic LASSO regression model to estimate the correlation between the variation in the radiomic features and the EGFR mutation status using 27-fold Monte Carlo cross-validation. The model implemented feature reduction, and maximum-likelihood estimation was performed for the remaining features. Based on these analyses, an early decision can be made on the treatment strategy. The authors found no significant relationship between mutational status and tumor volume, nor any association between clinical outcomes and the radiomic signatures.
Wei et al. [146] pointed out the need for less invasive strategies for the early prognosis and detection of colorectal cancer to avoid distant metastasis. The authors extracted extracellular vesicles (EVs) from plasma samples and used nanoparticle tracking analysis and transmission electron microscopy with western blotting to identify them. The samples comprised 37 colorectal cancer patients, 22 colorectal adenoma patients, and 42 non-cancerous control participants. Circulating EV-miR-193a-5p was found to efficiently distinguish the three classes: with an AUC of 0.752 it distinguished colorectal cancer patients from the two other classes, and with an AUC of 0.759 it distinguished colorectal cancer from non-cancer. This suggests that circulating EV-miR-193a-5p identifies colorectal cancer better than precancerous lesions. In addition, given the importance of age in colorectal cancer, a logistic regression model was implemented to integrate age (with a cutoff of 55 years) and circulating EV-miR-193a-5p. Integrating age increased the AUC from 0.752 to 0.775 and from 0.759 to 0.795 for distinguishing colorectal cancer patients from the two other classes and colorectal cancer from non-cancer, respectively. Integrating the age factor in this way can quickly identify colorectal cancer in high-risk individuals.
Oral cancer is one of the most frequent cancers in the world. Lin et al. [143] investigated the correlation between the progression of oral squamous cell carcinoma and cfDNA, as identifying such biomarkers is essential to improve diagnosis and treatment. Plasma was extracted from 121 oral cancer patients and 50 control individuals, while ensuring that the cfDNA size distribution was similar in patients and control donors. Analyses of the dataset revealed that the mean concentration of cfDNA in oral cancer patients was significantly higher than in the control group. Adjusted odds ratios were determined using binary logistic regression analysis with 95% confidence intervals. With statistical significance at p < 0.05, the study established the relationship between cfDNA and oral cancer.
Because of the role that serum exosomes play in the development of cancer, Li et al. [144] identified the protein content of serum exosomes based on 30 samples. The samples included oral cancer patients with lymph node metastasis, oral cancer patients without lymph node metastasis, and healthy controls; oral cancer patients have a high rate of lymph node metastasis [152]. A binary logistic regression analysis was carried out to compare the use of four biomarkers (ApoA1, CXCL7, PF4V1, F13A1) and their combinations based on the area under the curve. The study deduced that these four serum exosome biomarkers could help diagnose oral cancer lymph node metastasis.
Due to the lack of early detection and its resistance to chemotherapy, ovarian cancer is the most lethal cancer in gynecology [153,154]. Li et al. [142] performed a two-stage epigenome-wide association study to identify methylation biomarkers for epithelial ovarian cancer. The authors selected 24 cancer cases and 24 age-frequency-matched controls for genome-wide methylation profiling in the discovery stage, and 206 cancer cases with 205 age-frequency-matched controls for validation. Independent t-tests and χ² tests were used for the continuous and categorical variables, respectively. The correlation between blood cell counts and DNA methylation was estimated using Pearson correlation analysis. A logistic regression model was then built for the differentially methylated CpG sites in the validation stage and evaluated with receiver operating characteristic curves. The identified set of blood-derived DNA methylation signatures and their association with epithelial ovarian cancer may serve as a tool for the early detection of ovarian cancer.
Linear models have been successfully applied to different cancer types, including breast, colorectal, oral, and lung cancer. Ranging from classification to the selection of important features for prognosis, linear models remain important machine learning tools.
4.1.2. Support Vector Machine
Support Vector Machine (SVM) [155] is a supervised learning method for solving data mining problems, first proposed by Cortes and Vapnik in 1995. It builds a decision boundary, known as the hyperplane, to separate different classes. Among the positive and the negative samples, each class has a point closest to the hyperplane, and SVM separates the classes by maximizing the distance from these closest points to the hyperplane.
Principle of SVM
Suppose the data instances are $(x_i, y_i)$ for $i = 1, \dots, m$, where $x_i \in \mathbb{R}^{n}$ and $y_i \in \{-1, +1\}$. The two classes in the training data can be separated by a hyperplane $H$: $w^{T}x + b = 0$. Furthermore, there are two hyperplanes $H_1$: $w^{T}x + b = 1$ and $H_2$: $w^{T}x + b = -1$ parallel to $H$. The positive and negative samples closest to $H$ fall exactly on $H_1$ and $H_2$, respectively; such samples are the support vectors. The margin is defined as the distance between $H_1$ and $H_2$ in Formula (37):

$$\text{margin} = \frac{2}{\lVert w \rVert} \tag{37}$$

SVM aims to learn an optimal separating hyperplane $H$ that maximizes the margin (i.e., minimizes $\lVert w \rVert$) while keeping all the points correctly classified. This problem can be summarized as Formula (38):

$$\min_{w,\, b} \; \frac{1}{2}\lVert w \rVert^{2} \quad \text{s.t.} \quad y_i(w^{T}x_i + b) \geq 1, \; i = 1, \dots, m \tag{38}$$

For non-separable data, slack variables $\xi_i \geq 0$ are introduced to allow data samples to violate the margin or even be misclassified. In Formula (39), $C$ is the penalty parameter:

$$\min_{w,\, b,\, \xi} \; \frac{1}{2}\lVert w \rVert^{2} + C\sum_{i=1}^{m}\xi_i \quad \text{s.t.} \quad y_i(w^{T}x_i + b) \geq 1 - \xi_i, \; \xi_i \geq 0 \tag{39}$$

When the true model of the dataset is nonlinear, we can map the input data into a new high-dimensional space with a nonlinear mapping $\phi$. After mapping, the problem can be summarized as Formula (40):

$$\min_{w,\, b,\, \xi} \; \frac{1}{2}\lVert w \rVert^{2} + C\sum_{i=1}^{m}\xi_i \quad \text{s.t.} \quad y_i(w^{T}\phi(x_i) + b) \geq 1 - \xi_i, \; \xi_i \geq 0 \tag{40}$$

To solve this problem, we rewrite the primal problem into its dual form:

$$\max_{\alpha} \; \sum_{i=1}^{m}\alpha_i - \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_i \alpha_j y_i y_j \, \phi(x_i)^{T}\phi(x_j) \quad \text{s.t.} \quad \sum_{i=1}^{m}\alpha_i y_i = 0, \; 0 \leq \alpha_i \leq C \tag{41}$$

In Formula (41), $\alpha_i$ is the Lagrange multiplier. The SVM dual problem contains the inner product of $\phi(x_i)$ and $\phi(x_j)$, which are high-dimensional feature vectors. To simplify the calculation, a kernel function is defined to replace the inner product, as in Formula (42):

$$K(x_i, x_j) = \phi(x_i)^{T}\phi(x_j) \tag{42}$$
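The soft-margin kernel SVM of Formulas (39)–(42) can be sketched in a few lines of scikit-learn, where the RBF kernel stands in for the inner product $\phi(x_i)^{T}\phi(x_j)$ and C is the penalty parameter of Formula (39); the synthetic data are placeholders.

```python
# Sketch of the soft-margin kernel SVM, Formulas (39)-(42); the RBF
# kernel replaces the explicit mapping phi, and C penalizes slack.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=150, n_features=30, random_state=0)

# Feature scaling matters: the margin is defined in terms of distances.
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", C=1.0, gamma="scale"))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```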
The Application of SVM in Early Cancer Detection
As a traditional and popular machine learning method, SVM has been widely used for early cancer detection. An overview of the relevant references on SVM is provided in Table 7.
Table 7.
| Reference | Method | Dataset Available | URL for Dataset | Cancer Type | Sample Type | Biomarker |
|---|---|---|---|---|---|---|
| [156] | SVM | N | | Glioblastoma | Blood | miRNA |
| [141] | SVM | Y | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE68086, accessed on 29 June 2021 | 6 cancers | Blood | TEP-RNA |
| [157] | PSO + SVM | Y | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE89843, accessed on 29 June 2021 | Non-small cell lung cancer | Blood | TEP-RNA |
| [158] | SVM vs. PCA vs. LDA | N | | Oral cancer | Saliva | Exosomes |
| [159] | PSO + SVM | Y | https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE107868, accessed on 29 June 2021 | 2 cancers | Blood | TEP-RNA |
| [160] | SVM | N | | Prostate cancer | Blood | Extracellular vesicles |
| [161] | SVM vs. RF vs. LASSO | Y | https://bigd.big.ac.cn/search/?dbId=&q=PRJCA001138, accessed on 29 June 2021 | 3 cancers | Urine | cfDNA |
| [162] | SVM | N | | Esophageal cancer | Plasma | cfDNA |
| [163] | PSO + SVM | N | | Sarcoma | Blood | TEP-RNA |
| [164] | SVM | N | | Lung cancer | Serum | miRNA |
| [165] | SVM + SFLA vs. RF vs. KNN vs. GPC vs. GNB vs. GBM vs. SVM vs. LASSO vs. Elastic Net | Y | https://www.nature.com/articles/s41467-020-18965-w#data-availability, accessed on 29 June 2021 | 7 cancers | Plasma | cfDNA |
Patrick et al. [156] reported a work on glioblastoma detection utilizing SVM with a radial basis function kernel. In this study, 1158 miRNAs collected from blood were analyzed. They applied SVM with a filter-based feature selection method to determine a suitable subset of miRNA biomarkers and achieved their best result with 180 miRNAs: an accuracy of 81%, specificity of 79%, and sensitivity of 83%. Additionally, 52 miRNAs were significantly distinguished by an unpaired Student's t-test. On this basis, miR-128 and miR-342-3p stood out significantly with a p-value of 0.025 after correcting for multiple testing with the Benjamini–Hochberg adjustment. This work revealed the possibility of miR-128, miR-342-3p, and other important miRNAs as biomarkers to detect glioblastoma based on analyses of 20 patients and 20 healthy individuals. It is also an instance of the effectiveness of SVM on small, high-dimensional datasets.
In 2015, Thomas Wurdinger's team from the Netherlands published a study in Cancer Cell showing that mRNA from tumor-educated platelets (TEPs) has potential for the diagnosis of various cancers and the differentiation of cancer types [141]. This was the first time the term tumor-educated platelet was proposed. They identified 1453 mRNAs that increased and 793 mRNAs that decreased in TEPs compared with healthy platelets. Further analysis indicated that the increased TEP mRNAs were involved in biological processes such as vesicle-mediated transport and cytoskeletal protein binding, while the decreased mRNAs were involved in RNA processing and splicing. A pan-cancer classification based on SVM was implemented, distinguishing 228 patients across 6 cancers from 55 healthy individuals with 96% accuracy. TEP mRNA profiles were also demonstrated to be effective in distinguishing the specific tumor type. Besides, they found that the platelet samples of patients possess distinct therapy-guiding markers confirmed in matching tumor tissue. In a further study [157], this team combined particle-swarm optimization (PSO) and SVM to detect non-small-cell lung cancer based on TEPs. PSO was utilized to identify optimal biomarker panels from large amounts of liquid biosource data and to tune the parameters of the SVM. They termed this pipeline PSO-enhanced thromboSeq. In 2019, they reevaluated the publicly available dataset of [157] and further validated the performance on a new platelet RNA-sequencing dataset of healthy donor (HD) and lower-grade glioma (LGG) samples [159]. In that manuscript, the authors not only provided a new dataset but also released the code and described its operation step by step. Heinhuis et al. [163] generalized the PSO-enhanced thromboSeq pipeline to identify biomarkers for sarcoma on a dataset with 160 samples, achieving a diagnostic accuracy of 87% and an AUC of 0.93.
Cario et al. [166] diagnosed oral cancer based on the Fourier-transform infrared (FTIR) spectra of salivary exosomes. The dataset consists of whole saliva samples collected from 21 oral cancer patients and 13 healthy individuals. By analyzing the absorbance spectra, they found a number of differences between normal and cancer samples, including changes in the conformations of proteins, lipids, and nucleic acids. Based on these findings, the work adopted the spectral absorbance bands between 900 cm⁻¹ and 3700 cm⁻¹, as well as the ratios and the areas under the absorbance spectrum of three specific bands, as the input features of the classifiers. Principal component analysis–linear discriminant analysis (PCA–LDA) and SVM were included as the discrimination models. In terms of accuracy, SVM achieved a training accuracy of 100% and a cross-validation accuracy of 89%, while PCA–LDA showed an accuracy of 95%.
Sunkara et al. [160] presented a centrifugal device for the isolation of extracellular vesicles (EVs) from whole blood. SVM was utilized to analyze 8 biomarkers to distinguish 43 prostate-cancer patients from 30 healthy individuals. HSP90 achieved the highest sensitivity (86%), accuracy (88%), specificity (90%), and AUC (0.92) of all the tested markers.
Guangzhe et al. [161] applied SVM to detect urothelial carcinoma (UC) by analyzing copy number alterations (CNAs) of urinary cfDNA from 65 patients with urothelial carcinoma, 58 with kidney cancer, 45 with prostate cancer, and 95 normal individuals. In this work, a random forest was first utilized to select the top 50 features. After feature selection, RF, SVM, and LASSO were compared, and the SVM with a linear kernel outperformed the other two models. The authors defined UCdetector as the combination of the 50 CNA features selected by the RF and the linear-kernel SVM classifier. UCdetector achieved an AUC of 0.959 under 10 repeats of random splitting on this dataset. Further validation on an independent dataset comprising 24 normal samples and 28 UC patients was implemented, where UCdetector distinguished UC with an AUC of 0.888. To test the clinical sensitivity of the selected 50 CNA features, the authors applied UCdetector to 410 patients from TCGA and 90 patients from a Chinese UTUC WGS dataset; UCdetector accurately identified the upper tract urothelial cancers with an AUC of 0.996. Furthermore, urinary cfDNA was reported to be more sensitive than urinary sediment. This recent work recognized the top 50 important CNA features out of 5000 original features and achieved satisfying performance on different datasets, even on tissue samples from TCGA. It demonstrates the power of feature selection based on RF and the discriminative capacity of SVM.
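A hedged sketch of this two-stage design (random-forest importance ranking followed by a linear-kernel SVM) is shown below; the sample counts and parameters are illustrative, not those of the original study.

```python
# Sketch: rank features by RF importance, keep the top 50, then train a
# linear-kernel SVM on the reduced matrix (UCdetector-style pipeline).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=263, n_features=5000, n_informative=30,
                           random_state=0)       # 5000 CNA-like features

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top50 = np.argsort(rf.feature_importances_)[-50:]

svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
# Caveat: for an unbiased estimate, the RF selection step should be
# nested inside the cross-validation rather than run on the full data.
print("10-fold CV accuracy:",
      cross_val_score(svm, X[:, top50], y, cv=10).mean())
```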
Shicai et al. [162] combined SALP-seq and SVM as a pipeline to discover new cfDNA-based biomarkers for esophageal cancer. They studied the read density of all promoters and found, for 49 genes, high read density in normal samples and extremely low density in cancer samples; 34 of these genes are newly discovered biomarkers. The authors further validated the relationship between esophageal cancer and these biomarkers on a dataset with 163 esophageal cancer samples and 11 normal samples. Moreover, 88 important regions associated with esophageal cancer were screened out from the whole genome, and 54 of these, located in distal intergenic and proximal regulatory regions, were inferred to be potential diagnostic and prognostic markers for cancer. Additionally, 37 mutated genes unique to pre-operation patients were discovered among the large number of mutations in thousands of genes across pre- and post-operation esophageal cancer samples and normal samples. In total, 103 epigenetic markers and 37 genetic markers were discovered for esophageal cancer. Finally, SVM was adopted to detect cancer samples based on the 88 cancer-associated regions and achieved an AUC of 1.0.
Zhang et al. [164] designed a DNA molecular computation platform implementing an SVM to analyze miRNA profiles from serum samples. They validated its performance on clinical serum samples from 8 healthy individuals and 14 lung cancer patients, achieving an accuracy of 86.4%.
In our recently published work [165], we proposed an Adaptive Support Vector Machine (ASVM) method combining the Shuffled Frog Leaping Algorithm and SVM for pan-cancer and subsequent tumor origin analysis. The proposed method was first validated on a cell-free DNA dataset with 423 sample records, where the AUC improved from 0.832 for SVM to 0.938 for ASVM. The proposed ASVM was competitive with or outperformed six other machine learning models on both the original dataset and two additional datasets.
4.1.3. Random Forest
Random Forest (RF) is an ensemble machine learning approach consisting of randomly selected decision-tree subsets for classification and regression. Leo Breiman introduced the random forest algorithm, which uses bootstrapping for random tree selection, in the early 2000s [167]. It brought an enormous improvement in classification and regression accuracy. RF aggregates a bag of random tree classifiers into an ensemble and evaluates the overall classification for the given training and test datasets.
Principle of Random Forest
The basic principle of the RF algorithm is the bootstrap aggregation of randomly selected decision trees built from the given data observations. According to Breiman's RF algorithm [167], RF handles both classification and regression tasks. For general RF regression estimation, let $X \in \mathbb{R}^{p}$ be the random input vector. We aim to predict the response $Y$ by estimating the regression function in Equation (43):

$$m(x) = \mathbb{E}[Y \mid X = x] \tag{43}$$

The training sample $D_n = \{(X_1, Y_1), \dots, (X_n, Y_n)\}$ of independent input–target pairs is used to construct estimates of the function in Equation (43) with random trees. An RF consists of $M$ random regression trees. The predicted value $m_j(x)$ of the $j$-th tree at input $x$ is the average of the target values of the preselected training points that fall in the same leaf as $x$, as defined in Equation (44):

$$m_j(x) = \frac{\sum_{i \in D_n(\Theta_j)} Y_i \, \mathbb{1}\left[X_i \in A_j(x)\right]}{N_j(x)} \tag{44}$$

where $D_n(\Theta_j)$ is the set of data points preselected (by bootstrapping) for the construction of the $j$-th tree, $A_j(x)$ is the leaf cell containing $x$, and $N_j(x)$ is the number of preselected points that fall in $A_j(x)$. The final random forest estimate is the average over the trees, as in Equation (45):

$$m_M(x) = \frac{1}{M}\sum_{j=1}^{M} m_j(x) \tag{45}$$

For supervised classification, RF can classify both binary and multi-class datasets [168]. Let $X \in \mathbb{R}^{p}$ be the input vector and $Y$ a random class label with values 0 and 1. We can predict the label $Y$ from the input $X$ and the dataset. The RF binary classifier is obtained from the $M$ random classification trees by majority vote, as in Equation (46):

$$m_M(x) = \begin{cases} 1 & \text{if } \frac{1}{M}\sum_{j=1}^{M} m_j(x) > \frac{1}{2} \\ 0 & \text{otherwise} \end{cases} \tag{46}$$
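Equations (43)–(46) correspond directly to the bagged-tree estimators in scikit-learn: the regressor averages the per-tree predictions (Equation (45)) and the classifier aggregates per-tree votes (Equation (46), with scikit-learn averaging the trees' class probabilities). A minimal sketch on synthetic data:

```python
# Sketch of Equations (43)-(46) with scikit-learn's bagged trees.
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

Xr, yr = make_regression(n_samples=200, n_features=20, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xr, yr)
# reg.predict averages the M per-tree estimates, as in Equation (45).
print("regression estimates:", reg.predict(Xr[:2]))

Xc, yc = make_classification(n_samples=200, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xc, yc)
# clf.predict aggregates the per-tree votes (scikit-learn averages the
# trees' class probabilities), matching Equation (46) in spirit.
print("class predictions:", clf.predict(Xc[:2]))
```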
The Application of Random Forest in Early Cancer Detection
In recent years, several studies employed RF for early cancer detection from different liquid biopsy data. An overview of relevant references is provided in Table 8.
Table 8.
| Reference | Method | Dataset Available | URL for Dataset | Cancer Type | Sample Type | Biomarker |
|---|---|---|---|---|---|---|
| [169] | RF and Mclust | Y | http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE81314, accessed on 29 June 2021 | 7 cancers | Blood | cfDNA |
| [170] | LR and RF | Y | https://science.sciencemag.org/highwire/filestream/704651/field_highwire_adjunct_files/1/aar3247_Cohen_SM_Tables-S1-S11.xlsx, accessed on 29 June 2021 | 8 cancers | Blood | cfDNA and protein biomarkers |
| [171] | RF | Y | https://github.com/bergerm1/GenomeDerivedDiagnosis, accessed on 29 June 2021 | 22 cancers | Plasma | cfDNA |
| [172] | RF | Y | https://doi.org/10.5281/zenodo.3715312, accessed on 29 June 2021 | Intracranial tumors | Plasma | cfDNA |
| [173] | RF | N | | Lung cancer | Serum | miRNA |
| [174] | RF | Y | https://www.ebi.ac.uk/pride/archive?keyword=PXD018301, accessed on 29 June 2021 | 5 cancers | Plasma | EVP |
| [175] | RF | N | | Hepatocellular carcinoma | Blood | cfDNA |
| [176] | RF | N | | Gastrointestinal cancers | Plasma | cfDNA |
Song et al. [169] applied the RF algorithm to predict lung cancer, pancreatic cancer, and hepatocellular carcinoma (HCC) using the cfDNA 5-hydroxymethylcytosine (5hmC) mark in blood plasma. The study collected whole-genome cfDNA 5hmC signatures from 49 patients with seven cancer types and eight healthy individuals for sequence analysis using a 5hmC library. After sequence analysis, copy number variation (CNV) was estimated using the PopSV 1.0.0 R package. The RF algorithm and a Gaussian Mclust model were applied, using gene bodies and differentially hydroxymethylated regions (DhMRs) as features, to predict cancer type across different cancer stages in forty HCC, pancreatic, and lung cancer patients and healthy samples. The RF algorithm achieved the highest accuracies, 87.5% and 92%, for the two feature sets (gene bodies and DhMRs), while the Mclust prediction accuracies were 82.5% and 90%.
Cohen et al. [170] designed the CancerSEEK method for early cancer detection using circulating protein biomarkers and mutations in cfDNA from a multi-analyte blood test comprising 1817 blood plasma samples: 1005 patients with eight different cancer types (colorectum, liver, ovary, esophagus, pancreas, stomach, breast, and lung) and 812 healthy individuals. CancerSEEK performs both binary cancer detection and cancer type localization from the mentioned blood test. For binary cancer detection, a logistic regression (LR) classifier with 10-fold cross-validation was applied using the omega cfDNA score and eight protein biomarkers. For cancer type localization, CancerSEEK employed a random forest (RF) classifier with 10-fold cross-validation using the omega cfDNA score, 39 protein biomarkers, and patient gender. CancerSEEK achieved 70% average sensitivity for the eight cancer types at 99% specificity, with sensitivities for five cancer types ranging from 69% to 98%.
Later, Nassiri et al. [172] applied a binomial RF classifier to detect gliomas among other intracranial tumor types using cfDNA methylation profiles from plasma samples, achieving the highest sensitivity with an AUC of 0.990.
Penson et al. [171] used an RF classifier for cancer type detection on tissue biopsies and then validated it on two plasma ctDNA datasets. They achieved 73.8% accuracy with 5-fold cross-validation for 22 cancer types, including accuracies of 95%, 87%, and 85% for uveal melanoma, glioma, and colorectal cancer, respectively. The classifier also obtained 75% accuracy from plasma ctDNA genome analysis.
Wang et al. [176] used an RF model for gastrointestinal cancer detection using plasma cfDNA data. The gastrointestinal cancers include gall bladder, stomach, esophagus, colon, bile duct, pancreas, liver, and rectum cancers. The study also analyzed the cfDNA profiles of hepatocellular carcinoma, colorectal cancer, and pancreatic cancer patients and healthy individuals, obtaining AUCs of 0.960, 0.890, and 0.910, respectively, with the RF model under 10-fold cross-validation.
Zhang et al. [173] employed the RF algorithm for feature selection and classification of early-stage lung cancer using circulating miRNAs from liquid biopsy together with the SMOTE oversampling technique, as sketched below. They achieved the highest accuracy of 96.60% (AUC = 0.996) with a maximum of 13 miRNA features, and RF identified the top five circulating miRNA features for early lung cancer detection.
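A hedged sketch of this SMOTE-plus-RF protocol with the imbalanced-learn package is given below; the 13-feature panel and the class imbalance are mimicked with synthetic data, and oversampling is applied only to the training split so the evaluation stays honest.

```python
# Sketch: SMOTE oversampling of the minority class on the training
# split only, followed by an RF classifier (imbalanced-learn package).
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=13, weights=[0.85],
                           random_state=0)       # 13 miRNA-like features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_res, y_res)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```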
Peng et al. [177] applied an RF prediction model for early-stage pancreatic cancer detection in diabetic patients using blood-based plasma biomarkers. The RF model identified the best biomarkers for early-stage pancreatic cancer patients by the AUC measure under the leave-one-out cross-validation technique, obtaining AUC values of 0.850 and 0.810 with and without the CA19-9 biomarker, respectively.
Hoshino et al. [174] employed an RF classifier to identify biomarkers from extracellular vesicles and particles (EVPs) for cancer detection. The research shows that EVP proteins can serve as biomarkers for early cancer detection and tumor origin identification. The study used 426 human EVP profile samples for cancer detection and achieved over 90% sensitivity and 88% specificity on both the training and test sets.
4.2. Deep Learning
In cancer detection, traditional machine learning algorithms usually rely heavily on the representation of the selected information [178]. However, in most cases it is difficult to hand-craft an effective feature set, and manually designing features requires substantial manpower and time in complex tasks. Deep learning addresses this: when training the model, deep learning builds high-level features upon low-level features, that is, it constructs complex concepts by combining simple ones [179]. Since our survey focuses on commonly used algorithms based on the characteristics extracted from liquid biopsy, and the extracted features are essentially tabular data (i.e., a sample-by-feature matrix), we discuss here only the basic deep learning model, without introducing the spatial-aware or time-aware blocks used in computer vision or natural language processing. A classic deep learning model is the multilayer perceptron (MLP, also known as a neural network (NN)).
Principle of MLP
A multilayer perceptron is a function that maps a set of input values to output values, and this function is composed of many simpler functions [180]. Each constituent function can be considered to give a new representation of the input. Generally, an MLP consists of three different blocks: the input layer, the hidden layers, and the output layer. A 3-layer MLP architecture is shown in Figure 5. The input layer accepts the features, that is, the experimental results from liquid biopsy. Hidden layers sit between the input and output layers; each hidden node is a perceptron with its own set of weights. A hidden layer can extract a feature pattern from the previous layer and model more complex functions [181]; it is also called a fully-connected or dense layer. The output layer produces the final prediction (e.g., the binary label of sick or healthy).
Formally, one layer computes:

$$h = f(W^{T}z + b)$$

where $W$ is the weight matrix (one column for each node), $z$ is the input from the previous layer, $h$ is the output passed to the next layer, and $f$ is the activation function applied to each dimension. In most cases, the Rectified Linear Unit (ReLU) is utilized as the activation function in hidden layers since it is faster and easier to train with [182]. ReLU is an activation function defined as the positive part of its argument:

$$f(x) = \max(0, x)$$

where $x$ is the input to a node. ReLU can yield sparse representations since many nodes will output zero. The activation function for the output layer can be Softmax for both binary and multi-class classification. The Softmax function is defined as:

$$\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, \dots, K$$

where $z$ is the input vector. The denominator is the normalization term, which ensures that all the output values of the function sum to 1, thus constituting a valid probability distribution. During training, we can use the cross-entropy loss function to optimize the neural network, formulated as:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic})$$

where $M$ is the number of classes; $y_{ic}$ is the indicator variable (0 or 1), which is 1 if category $c$ is the true category of sample $i$ and 0 otherwise; and $p_{ic}$ is the predicted probability that sample $i$ belongs to category $c$.
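The MLP just described (ReLU hidden units, a softmax-style output, and cross-entropy training) can be sketched with scikit-learn's MLPClassifier, which minimizes log-loss internally; the 48-input, 9-hidden-unit shape echoes the ANN architecture reviewed below, but the data here are synthetic placeholders.

```python
# Sketch of a 3-layer MLP: 48 inputs, 9 ReLU hidden units, and a
# logistic/softmax output trained with cross-entropy (log-loss).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=48, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(9,), activation="relu",
                  max_iter=2000, random_state=0))
mlp.fit(X_tr, y_tr)
print("test accuracy:", mlp.score(X_te, y_te))
```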
The Application of MLP in Early Cancer Detection
In the past several years, the utilization of neural networks in cancer detection can be summarized into two categories: feature engineering and classification. For feature engineering, a neural network is usually employed to remove noise from the input and extract the most representative features that best describe the subjects' attributes; this step is also called feature extraction or dimension reduction [183]. Regarding neural networks for classification, the architectures used in cancer detection vary in depth (shallow and deep architectures), loss function, and other parameters [184]. An overview of relevant references is provided in Table 9.
Table 9.
| Reference | Method | Dataset Available | URL for Dataset | Cancer Type | Sample Type | Biomarker |
|---|---|---|---|---|---|---|
| [185] | ANN | N | | Lung cancer | Blood | Others |
| [186] | CNN | N | | CTC detection | Blood | CTCs |
| [187] | CNN | N | | Lung cancer | Blood | cfDNA |
| [188] | AODE, deep learning, decision tree, naive Bayes | Y | https://science.sciencemag.org/highwire/filestream/704651/field_highwire_adjunct_files/1/aar3247_Cohen_SM_Tables-S1-S11.xlsx, accessed on 29 June 2021 | 8 cancers | Blood | Multianalyte |
In 2014, Krzysztof et al. [185] introduced Artificial Neural Networks (ANNs) to early lung cancer detection. The dataset, provided by the Diagnostic and Monitoring of Tuberculosis and Illness of Lungs Ward in the Kuyavia and Pomerania Centre of Pulmonology (Bydgoszcz, Poland), includes 193 patients with 48 features (i.e., blood test results, age, sex, etc.). The training, validation, and test sets were randomly split with 97, 48, and 48 samples, respectively. Different ANNs were trained and analyzed to achieve the best performance. The optimal architecture was a 3-layer MLP (48 input neurons, 9 hidden neurons, 2 output neurons) with a learning rate of 0.1, 17 epochs, a linear function for the hidden layer, and a tangent function for the output layer. The obtained classification accuracy was 97.91% and the AUC was 0.9983. However, as the dataset is limited, we cannot ascertain whether the high performance reflects model generalization or the particular dataset split.
In 2016, Yunxiang et al. [186] developed a deep 6-layer Convolutional Neural Network (CNN) to detect circulating tumor cells from blood samples. A training methodology utilizing k-means clustering was adopted to find the most representative samples for building the classification boundary. The filter parameters, bias terms, and weights were automatically optimized by back-propagation with a learning rate of 0.1. The experimental results show that the proposed CNN is superior to SVM in terms of F-score. To validate the effectiveness of the proposed training strategy, a comparison experiment was implemented, indicating that the F-score of the CNN increased from 91.2% to 97% with the training strategy. For SVM, the performance only reached 75.4% without the training method and increased to 78.4% after adopting it.
In 2018, Kothen-Hill et al. [187] proposed a CNN-based framework, named Kittyhawk, to distinguish true cancer mutations from sequencing artifacts even at ultra-low variant allele frequencies (VAFs). Kittyhawk is an 8-layer CNN with a fully-connected output layer, a learning rate of 0.1, momentum of 0.9, and a minibatch size of 256. Kittyhawk introduced a read representation that combines the aligned genomic context, the quality scores, and the complete read sequence. The proposed method was first examined on 201,730 reads in the validation set, achieving an average F1-score of 0.961. Subsequently, its generalization capability was demonstrated on an independent lung cancer case, with an F1-score of 0.92 reported.
In 2019, Ka-Chun Wong et al. [188] collected blood test records from 1817 patients to build three deep learning models that detect cancers as a front-line detector in a binary manner (i.e., cancer or normal). Since their datasets have standard and well-crafted input features, they directly adopted deep feedforward neural networks with one, two, and three hidden layers (namely, DeepLearning1, DeepLearning2, and DeepLearning3, respectively) for model construction. The remaining training settings followed the defaults of WEKA. However, the deep learning methods could not maintain full performance once the specificity level was relaxed.
5. Discussion
From the perspective of machine learning, we find that even simple machine learning algorithms such as linear models can deliver high-quality performance in liquid biopsy-based diagnosis for several common cancer types. However, no single model performs best on all datasets. Besides, the performance of machine learning models varies under different hyperparameter settings. To ensure stability, we recommend Bayesian optimization for hyperparameter tuning, after weighing performance against runtime. With a hyperparameter optimization strategy, a machine learning model can adapt to different datasets.
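As a concrete, hedged example of such tuning, the sketch below uses BayesSearchCV from the scikit-optimize package; the search space and iteration budget are illustrative assumptions, and any Bayesian optimization library could be substituted.

```python
# Sketch: Bayesian hyperparameter optimization of an RBF-SVM with
# scikit-optimize's BayesSearchCV (one of several suitable libraries).
from skopt import BayesSearchCV
from skopt.space import Real
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)

search = BayesSearchCV(
    SVC(kernel="rbf"),
    {"C": Real(1e-3, 1e3, prior="log-uniform"),
     "gamma": Real(1e-4, 1e1, prior="log-uniform")},
    n_iter=25, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```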
In addition, among all the machine learning models, the most popular and widely used are the conventional algorithms. This is partly due to the barriers between biology and computer science, and partly due to dataset size limitations. Given the amount of data currently available, traditional machine learning models such as linear models, support vector machines, and random forests are still dominant in early cancer detection for their training speed and robustness on small datasets. We hope that the comprehensive review of machine learning procedures and the corresponding code demos presented in this survey can act as a reference guide. Advanced machine learning algorithms could certainly also be applied to explore latent biomarkers and complicated relationships in order to further improve performance; however, model generalization and complexity have to be balanced fairly.
Limited by sample sizes and the interpretability of deep learning models, deep learning has not been popular in liquid biopsy cancer detection. From related studies in the past several years, we can observe that, with the increasing amount of liquid biopsy data, deep learning methods are likely to outperform conventional machine learning methods. However, there are also concerns. The first is that deep learning is vulnerable to overfitting; therefore, regularization, dropout, and early stopping are utilized to prevent neural networks from overfitting (a brief sketch follows). Besides, the advent of batch normalization improves model baselines and speeds up training for all architectures [189]. Due to the variance-shift conflict between dropout and batch normalization, these two methods are not recommended to be adopted simultaneously at bottlenecks, except for high-dimensional data. Another concern is the black-box nature of deep learning [190]. Since the hidden layers between the input and output layers are complex, it is difficult to extract the most important features and match them with a biological explanation. Explainable framework design is vital for introducing machine learning models into clinical application [191]. In general, techniques for explaining predictions can be categorized into backpropagation-based methods and perturbation-based methods [192]. The recent successes of explainable frameworks [191,192,193,194,195] do shed light on their promising ability. Therefore, we remain optimistic about the development of deep learning for cancer detection in the future.
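For illustration only, a minimal PyTorch sketch of the anti-overfitting tools mentioned above (an L2 weight penalty, dropout, and early stopping on a held-out validation loss) is given below; the random tensors stand in for real liquid biopsy features.

```python
# Sketch (PyTorch): L2 weight decay, dropout, and early stopping on a
# held-out validation loss; random tensors stand in for real features.
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 48), torch.randint(0, 2, (256,))
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

model = nn.Sequential(nn.Linear(48, 64), nn.ReLU(),
                      nn.Dropout(p=0.5),                   # dropout
                      nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3,
                       weight_decay=1e-4)                  # L2 penalty
loss_fn = nn.CrossEntropyLoss()

best, bad, patience = float("inf"), 0, 5
for epoch in range(200):
    model.train()
    opt.zero_grad()
    loss_fn(model(X_tr), y_tr).backward()
    opt.step()
    model.eval()
    with torch.no_grad():
        val = loss_fn(model(X_val), y_val).item()
    if val < best - 1e-4:
        best, bad = val, 0
    else:
        bad += 1
    if bad >= patience:                                    # early stopping
        break
```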
From the perspective of liquid biopsy components, we find that machine learning is used extensively for single-omics analysis. However, a single type of circulating biomarker seldom fully reveals the essence of tumor occurrence. Therefore, multi-omics detection is another promising direction for early cancer detection and treatment monitoring. The exploratory competence of machine learning can help figure out the complex causal relationships between different molecular measurements. Therefore, the integration of machine learning methods and multi-omics, including genomics, epigenomics, transcriptomics, proteomics, metabolomics, and microbiomics, provides unprecedented opportunities to understand the underlying mechanisms of tumor occurrence and early detection.
6. Conclusions
In this survey, we have presented an overview of machine learning protocols and the applications of different machine learning algorithms in the context of early cancer detection based on liquid biopsy. Additionally, we provided code demos for the aforementioned approaches in each procedure of machine learning. Based on the survey of over 400 papers, we have identified that early cancer detection based on liquid biopsy has been tackled by different machine learning algorithms, applied to multiple cancer types (e.g., pancreatic cancer, hepatocellular carcinoma, breast cancer, oral cancer, etc.) and a wide variety of components (e.g., circulating tumor cells (CTCs), cell-free DNA (cfDNA), circulating tumor DNA (ctDNA), cell-free RNA (cfRNA), exosomes, and tumor-educated platelets (TEPs)).
Supplementary Materials
The following are available online at https://www.mdpi.com/article/10.3390/life11070638/s1.
Author Contributions
L.L.: Project administration, Implementation of the code demo, Writing—most of original draft and review; X.C.: Writing—original draft of Deep learning part and review & editing; O.O.P.: Writing—original draft of Linear Models part; W.Z.: Writing—review & editing; S.R.: Writing—original draft of Random Forest part; Z.-R.T.: Writing—review & editing; K.-C.W.: Writing—review & editing and Funding acquisition. All authors have read and agreed to the published version of the manuscript.
Funding
The work described in this paper was substantially supported by the grant from the Research Grants Council of the Hong Kong Special Administrative Region [CityU 11200218], one grant from the Health and Medical Research Fund, the Food and Health Bureau, The Government of the Hong Kong Special Administrative Region [07181426], and the funding from Hong Kong Institute for Data Science (HKIDS) at City University of Hong Kong. The work described in this paper was partially supported by two grants from City University of Hong Kong (CityU 11202219, CityU 11203520). This research is also supported by the National Natural Science Foundation of China under Grant No. 32000464.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Nahid A.A., Kong Y. Involvement of machine learning for breast cancer image classification: A survey. Comput. Math. Methods Med. 2017;2017:3781951. doi: 10.1155/2017/3781951. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Wild C., Weiderpass E., Stewart B. World Cancer Report: Cancer Research for Cancer Prevention. IARC Press; Lyon, France: 2020. pp. 181–188. [Google Scholar]
- 3.American Cancer Society. Global Cancer Facts and Figures. 4th ed. Am. Cancer Soc. 2018;1:1–73. [Google Scholar]
- 4.Cree I.A., Uttley L., Woods H.B., Kikuchi H., Reiman A., Harnan S., Whiteman B.L., Philips S.T., Messenger M., Cox A., et al. The evidence base for circulating tumour DNA blood-based biomarkers for the early detection of cancer: A systematic mapping review. BMC Cancer. 2017;17:697. doi: 10.1186/s12885-017-3693-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Chen X., Gole J., Gore A., He Q., Lu M., Min J., Yuan Z., Yang X., Jiang Y., Zhang T., et al. Non-invasive early detection of cancer four years before conventional diagnosis using a blood test. Nat. Commun. 2020;11:1–10. doi: 10.1038/s41467-020-17316-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.WHO . Guide to Cancer Early Diagnosis. WHO; Geneva, Switzerland: 2017. [Google Scholar]
- 7.Crowley E., Di Nicolantonio F., Loupakis F., Bardelli A. Liquid biopsy: Monitoring cancer-genetics in the blood. Nat. Rev. Clin. Oncol. 2013;10:472. doi: 10.1038/nrclinonc.2013.110. [DOI] [PubMed] [Google Scholar]
- 8.Shinozaki M., O’Day S.J., Kitago M., Amersi F., Kuo C., Kim J., Wang H.J., Hoon D.S. Utility of circulating B-RAF DNA mutation in serum for monitoring melanoma patients receiving biochemotherapy. Clin. Cancer Res. 2007;13:2068–2074. doi: 10.1158/1078-0432.CCR-06-2120. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Zhou J., Shi Y.H., Fan J. Seminars in Oncology. Volume 39. Elsevier; Amsterdam, The Netherlands: 2012. Circulating cell-free nucleic acids: Promising biomarkers of hepatocellular carcinoma; pp. 440–448. [DOI] [PubMed] [Google Scholar]
- 10.Cohn S.L., Pearson A.D., London W.B., Monclair T., Ambros P.F., Brodeur G.M., Faldum A., Hero B., Iehara T., Machin D., et al. The International Neuroblastoma Risk Group (INRG) classification system: An INRG task force report. J. Clin. Oncol. 2009;27:289. doi: 10.1200/JCO.2008.16.6785. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Diaz Jr L.A., Bardelli A. Liquid biopsies: Genotyping circulating tumor DNA. J. Clin. Oncol. 2014;32:579. doi: 10.1200/JCO.2012.45.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Cai X., Janku F., Zhan Q., Fan J.B. Accessing genetic information with liquid biopsies. Trends Genet. 2015;31:564–575. doi: 10.1016/j.tig.2015.06.001. [DOI] [PubMed] [Google Scholar]
- 13.The Lancet Oncology. Liquid cancer biopsy: The future of cancer detection? Lancet Oncol. 2016;17:123. doi: 10.1016/S1470-2045(16)00016-4. [DOI] [PubMed] [Google Scholar]
- 14.Molina-Vila M.A., Mayo-de Las-Casas C., Gimenez-Capitan A., Jordana-Ariza N., Garzón M., Balada A., Villatoro S., Teixido C., Garcia-Pelaez B., Aguado C., et al. Liquid biopsy in non-small cell lung cancer. Front. Med. 2016;3:69. doi: 10.3389/fmed.2016.00069. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Strotman L.N., Millner L.M., Valdes R., Linder M.W. Liquid biopsies in oncology and the current regulatory landscape. Mol. Diagn. Ther. 2016;20:429–436. doi: 10.1007/s40291-016-0220-5. [DOI] [PubMed] [Google Scholar]
- 16.Zhang W., Chen X., Wong K.C. Noninvasive early diagnosis of intestinal diseases based on artificial intelligence in genomics and microbiome. J. Gastroenterol. Hepatol. 2021;36:823–831. doi: 10.1111/jgh.15500. [DOI] [PubMed] [Google Scholar]
- 17.Chen M., Zhao H. Next-generation sequencing in liquid biopsy: Cancer screening and early detection. Hum. Genom. 2019;13:34. doi: 10.1186/s40246-019-0220-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Peeters M., Price T., Boedigheimer M., Kim T.W., Ruff P., Gibbs P., Thomas A., Demonty G., Hool K., Ang A. Evaluation of emergent mutations in circulating cell-free DNA and clinical outcomes in patients with metastatic colorectal cancer treated with panitumumab in the ASPECCT study. Clin. Cancer Res. 2019;25:1216–1225. doi: 10.1158/1078-0432.CCR-18-2072. [DOI] [PubMed] [Google Scholar]
- 19.Cescon D.W., Bratman S.V., Chan S.M., Siu L.L. Circulating tumor DNA and liquid biopsy in oncology. Nat. Cancer. 2020;1:276–290. doi: 10.1038/s43018-020-0043-5. [DOI] [PubMed] [Google Scholar]
- 20.Di Meo A., Bartlett J., Cheng Y., Pasic M.D., Yousef G.M. Liquid biopsy: A step forward towards precision medicine in urologic malignancies. Mol. Cancer. 2017;16:80. doi: 10.1186/s12943-017-0644-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Heitzer E., Perakis S., Geigl J.B., Speicher M.R. The potential of liquid biopsies for the early detection of cancer. NPJ Precis. Oncol. 2017;1:1–9. doi: 10.1038/s41698-017-0039-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Ilie M., Hofman V., Long E., Bordone O., Selva E., Washetine K., Marquette C.H., Hofman P. Current challenges for detection of circulating tumor cells and cell-free circulating nucleic acids, and their characterization in non-small cell lung carcinoma patients. What is the best blood substrate for personalized medicine? Ann. Transl. Med. 2014;2:107. doi: 10.3978/j.issn.2305-5839.2014.08.11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Montani F., Marzi M.J., Dezi F., Dama E., Carletti R.M., Bonizzi G., Bertolotti R., Bellomi M., Rampinelli C., Maisonneuve P., et al. miR-Test: A blood test for lung cancer early detection. JNCI J. Natl. Cancer Inst. 2015;107 doi: 10.1093/jnci/djv063. [DOI] [PubMed] [Google Scholar]
- 24.Zhang S., Zhang C., Yang Q. Data preparation for data mining. Appl. Artif. Intell. 2003;17:375–381. doi: 10.1080/713827180. [DOI] [Google Scholar]
- 25.Huang J., Li Y.F., Xie M. An empirical analysis of data preprocessing for machine learning-based software cost estimation. Inf. Softw. Technol. 2015;67:108–127. doi: 10.1016/j.infsof.2015.07.004. [DOI] [Google Scholar]
- 26.García S., Ramírez-Gallego S., Luengo J., Benítez J.M., Herrera F. Big data preprocessing: Methods and prospects. Big Data Anal. 2016;1:9. doi: 10.1186/s41044-016-0014-0. [DOI] [Google Scholar]
- 27.Pendharkar P.C., Subramanian G.H., Rodger J.A. A probabilistic model for predicting software development effort. IEEE Trans. Softw. Eng. 2005;31:615–624. doi: 10.1109/TSE.2005.75. [DOI] [Google Scholar]
- 28.Kosti M.V., Mittas N., Angelis L. Alternative methods using similarities in software effort estimation; Proceedings of the 8th International Conference on Predictive Models in Software Engineering; Lund, Sweden. 21–22 September 2012; pp. 59–68. [Google Scholar]
- 29.Rodríguez D., Sicilia M., García E., Harrison R. Empirical findings on team size and productivity in software development. J. Syst. Softw. 2012;85:562–570. doi: 10.1016/j.jss.2011.09.009. [DOI] [Google Scholar]
- 30.Myrtveit I., Stensrud E., Olsson U.H. Analyzing data sets with missing data: An empirical evaluation of imputation methods and likelihood-based methods. IEEE Trans. Softw. Eng. 2001;27:999–1013. doi: 10.1109/32.965340. [DOI] [Google Scholar]
- 31.Kotsiantis S., Kanellopoulos D., Pintelas P. Data preprocessing for supervised leaning. Int. J. Comput. Sci. 2006;1:111–117. [Google Scholar]
- 32.Patro S., Sahu K.K. Normalization: A preprocessing stage. arXiv. 2015:1503.06462. doi: 10.17148/IARJSET.2015.2305. [DOI] [Google Scholar]
- 33.Wu E.Q., Hu D., Deng P.Y., Tang Z., Cao Y., Zhang W.M., Zhu L.M., Ren H. Nonparametric bayesian prior inducing deep network for automatic detection of cognitive status. IEEE Trans. Cybern. 2020 doi: 10.1109/TCYB.2020.2977267. [DOI] [PubMed] [Google Scholar]
- 34.Wu E.Q., Lin C.T., Zhu L.M., Tang Z., Jie Y.W., Zhou G.R. Fatigue Detection of Pilots’ Brain Through Brains Cognitive Map and Multilayer Latent Incremental Learning Model. IEEE Trans. Cybern. 2021 doi: 10.1109/TCYB.2021.3068300. [DOI] [PubMed] [Google Scholar]
- 35.Kourou K., Exarchos T.P., Exarchos K.P., Karamouzis M.V., Fotiadis D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015;13:8–17. doi: 10.1016/j.csbj.2014.11.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Chen C.C., Schwender H., Keith J., Nunkesser R., Mengersen K., Macrossan P. Methods for identifying SNP interactions: A review on variations of Logic Regression, Random Forest and Bayesian logistic regression. IEEE/ACM Trans. Comput. Biol. Bioinform. 2011;8:1580–1591. doi: 10.1109/TCBB.2011.46. [DOI] [PubMed] [Google Scholar]
- 37.Khalid S., Khalil T., Nasreen S. A survey of feature selection and feature extraction techniques in machine learning; Proceedings of the IEEE 2014 Science and Information Conference; Warsaw, Poland. 24–26 September 2014; pp. 372–378. [Google Scholar]
- 38.Li J., Cheng K., Wang S., Morstatter F., Trevino R.P., Tang J., Liu H. Feature selection: A data perspective. ACM Comput. Surv. CSUR. 2017;50:1–45. doi: 10.1145/3136625. [DOI] [Google Scholar]
- 39.Pearson K. LIII. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1901;2:559–572. doi: 10.1080/14786440109462720. [DOI] [Google Scholar]
- 40.Hotelling H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933;24:417. doi: 10.1037/h0071325. [DOI] [Google Scholar]
- 41.Fisher R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936;7:179–188. doi: 10.1111/j.1469-1809.1936.tb02137.x. [DOI] [Google Scholar]
- 42.Lee D.D., Seung H.S. Learning the parts of objects by non-negative matrix factorization. Nature. 1999;401:788–791. doi: 10.1038/44565. [DOI] [PubMed] [Google Scholar]
- 43.Roweis S.T., Saul L.K. Nonlinear dimensionality reduction by locally linear embedding. Science. 2000;290:2323–2326. doi: 10.1126/science.290.5500.2323. [DOI] [PubMed] [Google Scholar]
- 44.Bernstein M., De Silva V., Langford J.C., Tenenbaum J.B. Graph Approximations to Geodesics on Embedded Manifolds. Citeseer; Princeton, NJ, USA: 2000. Technical Report. [Google Scholar]
- 45.Zhou P., Hu X., Li P., Wu X. Online feature selection for high-dimensional class-imbalanced data. Knowl. Based Syst. 2017;136:187–199. doi: 10.1016/j.knosys.2017.09.006. [DOI] [Google Scholar]
- 46.García S., Luengo J., Herrera F. Data Preprocessing in Data Mining. Springer; Berlin/Heidelberg, Germany: 2015. [Google Scholar]
- 47.Yu L., Liu H. Feature selection for high-dimensional data: A fast correlation-based filter solution; Proceedings of the 20th International Conference on Machine Learning (ICML-03); Washington, DC, USA. 21–24 August 2003; pp. 856–863. [Google Scholar]
- 48.Chandrashekar G., Sahin F. A survey on feature selection methods. Comput. Electr. Eng. 2014;40:16–28. doi: 10.1016/j.compeleceng.2013.11.024. [DOI] [Google Scholar]
- 49.Guyon I., Elisseeff A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003;3:1157–1182. [Google Scholar]
- 50.Weir B.S., Hill W.G. Estimating F-statistics. Annu. Rev. Genet. 2002;36:721–750. doi: 10.1146/annurev.genet.36.050802.093940. [DOI] [PubMed] [Google Scholar]
- 51.Liu H., Setiono R. Chi2: Feature selection and discretization of numeric attributes; Proceedings of the 7th IEEE International Conference on Tools with Artificial Intelligence; Herndon, VA, USA. 5–8 November 1995; pp. 388–391. [Google Scholar]
- 52.Kraskov A., Stögbauer H., Grassberger P. Estimating mutual information. Phys. Rev. E. 2004;69:066138. doi: 10.1103/PhysRevE.69.066138. [DOI] [PubMed] [Google Scholar]
- 53.Reunanen J. Overfitting in making comparisons between variable selection methods. J. Mach. Learn. Res. 2003;3:1371–1382. [Google Scholar]
- 54.Guyon I., Weston J., Barnhill S., Vapnik V. Gene selection for cancer classification using support vector machines. Mach. Learn. 2002;46:389–422. doi: 10.1023/A:1012487302797. [DOI] [Google Scholar]
- 55. Kim S.J., Koh K., Lustig M., Boyd S., Gorinevsky D. An interior-point method for large-scale l1-regularized logistic regression. J. Mach. Learn. Res. 2007;8:1519–1555.
- 56. Friedman J., Hastie T., Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 2010;33:1. doi: 10.18637/jss.v033.i01.
- 57. Saeys Y., Inza I., Larrañaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23:2507–2517. doi: 10.1093/bioinformatics/btm344.
- 58. Motoda H., Liu H. Feature selection, extraction and construction. Commun. IICM. 2002;5:2.
- 59. Neshatian K., Zhang M., Andreae P. A filter approach to multiple feature construction for symbolic learning classifiers using genetic programming. IEEE Trans. Evol. Comput. 2012;16:645–661. doi: 10.1109/TEVC.2011.2166158.
- 60. Mahanipour A., Nezamabadi-pour H., Nikpour B. Using fuzzy-rough set feature selection for feature construction based on genetic programming; Proceedings of the 2018 3rd Conference on Swarm Intelligence and Evolutionary Computation (CSIEC); Bam, Iran, 6–8 March 2018; pp. 1–6.
- 61. Raschka S. Model evaluation, model selection, and algorithm selection in machine learning. arXiv. 2018. arXiv:1811.12808.
- 62. Arlot S., Celisse A. A survey of cross-validation procedures for model selection. Stat. Surv. 2010;4:40–79. doi: 10.1214/09-SS054.
- 63. Braga-Neto U.M., Dougherty E.R. Is cross-validation valid for small-sample microarray classification? Bioinformatics. 2004;20:374–380. doi: 10.1093/bioinformatics/btg419.
- 64. James G.M. Variance and bias for general loss functions. Mach. Learn. 2003;51:115–135. doi: 10.1023/A:1022899518027.
- 65. Moreno-Torres J.G., Sáez J.A., Herrera F. Study on the impact of partition-induced dataset shift on k-fold cross-validation. IEEE Trans. Neural Netw. Learn. Syst. 2012;23:1304–1312. doi: 10.1109/TNNLS.2012.2199516.
- 66. Efron B. Computers and the theory of statistics: Thinking the unthinkable. SIAM Rev. 1979;21:460–480. doi: 10.1137/1021092.
- 67. Efron B. Estimating the error rate of a prediction rule: Improvement on cross-validation. J. Am. Stat. Assoc. 1983;78:316–331. doi: 10.1080/01621459.1983.10477973.
- 68. Efron B., Tibshirani R. Improvements on cross-validation: The .632+ bootstrap method. J. Am. Stat. Assoc. 1997;92:548–560.
- 69. Hélie S. An introduction to model selection: Tools and algorithms. Tutor. Quant. Methods Psychol. 2006;2:1–10. doi: 10.20982/tqmp.02.1.p001.
- 70. Varma S., Simon R. Bias in error estimation when using cross-validation for model selection. BMC Bioinform. 2006;7:1–8. doi: 10.1186/1471-2105-7-91.
- 71. Akaike H. Information theory and an extension of the maximum likelihood principle; Proceedings of the 2nd International Symposium on Information Theory; Tsahkadsor, Armenia, USSR, 2–8 September 1971; pp. 267–281.
- 72. Schwarz G. Estimating the dimension of a model. Ann. Stat. 1978;6:461–464. doi: 10.1214/aos/1176344136.
- 73. Rissanen J. A universal prior for integers and estimation by minimum description length. Ann. Stat. 1983;11:416–431. doi: 10.1214/aos/1176346150.
- 74. Shannon C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948;27:379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x.
- 75. Hastie T., Tibshirani R., Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Science & Business Media; Berlin/Heidelberg, Germany: 2009.
- 76. Holland J.H., Reitman J.S. Cognitive systems based on adaptive algorithms. In: Pattern-Directed Inference Systems. Elsevier; Amsterdam, The Netherlands: 1978. pp. 313–329.
- 77. Kennedy J., Eberhart R. Particle swarm optimization; Proceedings of the ICNN’95-International Conference on Neural Networks; Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
- 78. Kirkpatrick S., Gelatt C.D., Vecchi M.P. Optimization by simulated annealing. Science. 1983;220:671–680. doi: 10.1126/science.220.4598.671.
- 79. Glover F., Laguna M. Tabu search. In: Handbook of Combinatorial Optimization. Springer; Berlin/Heidelberg, Germany: 1998. pp. 2093–2229.
- 80. Snoek J., Larochelle H., Adams R.P. Practical Bayesian optimization of machine learning algorithms. arXiv. 2012. arXiv:1206.2944.
- 81. Student. The probable error of a mean. Biometrika. 1908;6:1–25. doi: 10.2307/2331554.
- 82. Dietterich T.G. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Comput. 1998;10:1895–1923. doi: 10.1162/089976698300017197.
- 83. Wilcoxon F. Individual comparisons by ranking methods. Biometrics. 1945;1:80–83. doi: 10.2307/3001968.
- 84. Corder G.W., Foreman D.I. Nonparametric Statistics for Non-Statisticians. Wiley; Hoboken, NJ, USA: 2011.
- 85. McNemar Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika. 1947;12:153–157. doi: 10.1007/BF02295996.
- 86. Everitt B.S. The Analysis of Contingency Tables. CRC Press; Boca Raton, FL, USA: 1992.
- 87. Wilson E.B., Hilferty M.M. The distribution of chi-square. Proc. Natl. Acad. Sci. USA. 1931;17:684. doi: 10.1073/pnas.17.12.684.
- 88. Friedman M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937;32:675–701. doi: 10.1080/01621459.1937.10503522. Reprinted in J. Am. Stat. Assoc. 1939;34:109.
- 89. Friedman M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940;11:86–92. doi: 10.1214/aoms/1177731944.
- 90. Nemenyi P. Distribution-Free Multiple Comparisons. Doctoral Dissertation, Princeton University, Princeton, NJ, USA, 1963. Diss. Abstr. Int. 1963;25:1233.
- 91. Hollander M., Wolfe D.A., Chicken E. Nonparametric Statistical Methods. Volume 751. John Wiley & Sons; Hoboken, NJ, USA: 2013.
- 92. Holm S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979;6:65–70.
- 93. Dunn O.J. Multiple comparisons among means. J. Am. Stat. Assoc. 1961;56:52–64. doi: 10.1080/01621459.1961.10482090.
- 94. Hommel G. A stagewise rejective multiple test procedure based on a modified Bonferroni test. Biometrika. 1988;75:383–386. doi: 10.1093/biomet/75.2.383.
- 95. Gibbons J.D., Chakraborti S. Nonparametric Statistical Inference. CRC Press; Hoboken, NJ, USA: 2020.
- 96. Shaffer J.P. Multiple hypothesis testing. Annu. Rev. Psychol. 1995;46:561–584. doi: 10.1146/annurev.ps.46.020195.003021.
- 97. Demšar J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006;7:1–30.
- 98. García S., Herrera F. An extension on “Statistical comparisons of classifiers over multiple data sets” for all pairwise comparisons. J. Mach. Learn. Res. 2008;9:2677–2694.
- 99. Gasch C., Bauernhofer T., Pichler M., Langer-Freitag S., Reeh M., Seifert A.M., Mauermann O., Izbicki J.R., Pantel K., Riethdorf S. Heterogeneity of epidermal growth factor receptor status and mutations of KRAS/PIK3CA in circulating tumor cells of patients with colorectal cancer. Clin. Chem. 2013;59:252–260. doi: 10.1373/clinchem.2012.188557.
- 100. Jahr S., Hentze H., Englisch S., Hardt D., Fackelmayer F.O., Hesch R.D., Knippers R. DNA fragments in the blood plasma of cancer patients: Quantitations and evidence for their origin from apoptotic and necrotic cells. Cancer Res. 2001;61:1659–1665.
- 101. Alimirzaie S., Bagherzadeh M., Akbari M.R. Liquid biopsy in breast cancer: A comprehensive review. Clin. Genet. 2019;95:643–660. doi: 10.1111/cge.13514.
- 102. Ashworth T. A case of cancer in which cells similar to those in the tumours were seen in the blood after death. Aust. Med. J. 1869;14:146.
- 103. Imamura T., Komatsu S., Ichikawa D., Kawaguchi T., Miyamae M., Okajima W., Ohashi T., Arita T., Konishi H., Shiozaki A., et al. Liquid biopsy in patients with pancreatic cancer: Circulating tumor cells and cell-free nucleic acids. World J. Gastroenterol. 2016;22:5627. doi: 10.3748/wjg.v22.i25.5627.
- 104. Kim M.Y., Oskarsson T., Acharyya S., Nguyen D.X., Zhang X.H.F., Norton L., Massagué J. Tumor self-seeding by circulating cancer cells. Cell. 2009;139:1315–1326. doi: 10.1016/j.cell.2009.11.025.
- 105. Rossi G., Mu Z., Rademaker A.W., Austin L.K., Strickland K.S., Costa R.L.B., Nagy R.J., Zagonel V., Taxter T.J., Behdad A., et al. Cell-free DNA and circulating tumor cells: Comprehensive liquid biopsy analysis in advanced breast cancer. Clin. Cancer Res. 2018;24:560–568. doi: 10.1158/1078-0432.CCR-17-2092.
- 106. Hayes D.F., Cristofanilli M., Budd G.T., Ellis M.J., Stopeck A., Miller M.C., Matera J., Allard W.J., Doyle G.V., Terstappen L.W. Circulating tumor cells at each follow-up time point during therapy of metastatic breast cancer patients predict progression-free and overall survival. Clin. Cancer Res. 2006;12:4218–4224. doi: 10.1158/1078-0432.CCR-05-2821.
- 107. Peitzsch C., Tyutyunnykova A., Pantel K., Dubrovska A. Cancer stem cells: The root of tumor recurrence and metastases. Semin. Cancer Biol. 2017;44:10–24.
- 108. Pantel K., Alix-Panabières C. Circulating tumour cells in cancer patients: Challenges and perspectives. Trends Mol. Med. 2010;16:398–406. doi: 10.1016/j.molmed.2010.07.001.
- 109. Mocellin S., Hoon D., Ambrosi A., Nitti D., Rossi C.R. The prognostic value of circulating tumor cells in patients with melanoma: A systematic review and meta-analysis. Clin. Cancer Res. 2006;12:4605–4613. doi: 10.1158/1078-0432.CCR-06-0823.
- 110. Mehlen P., Puisieux A. Metastasis: A question of life or death. Nat. Rev. Cancer. 2006;6:449–458. doi: 10.1038/nrc1886.
- 111. Nagrath S., Sequist L.V., Maheswaran S., Bell D.W., Irimia D., Ulkus L., Smith M.R., Kwak E.L., Digumarthy S., Muzikansky A., et al. Isolation of rare circulating tumour cells in cancer patients by microchip technology. Nature. 2007;450:1235–1239. doi: 10.1038/nature06385.
- 112. Alix-Panabières C., Pantel K. Circulating tumor cells: Liquid biopsy of cancer. Clin. Chem. 2013;59:110–118. doi: 10.1373/clinchem.2012.194258.
- 113. Mamdani H., Ahmed S., Armstrong S., Mok T., Jalal S.I. Blood-based tumor biomarkers in lung cancer for detection and treatment. Transl. Lung Cancer Res. 2017;6:648. doi: 10.21037/tlcr.2017.09.03.
- 114. Buscail L., Bournet B., Cordelier P. Role of oncogenic KRAS in the diagnosis, prognosis and treatment of pancreatic cancer. Nat. Rev. Gastroenterol. Hepatol. 2020;17:1–16. doi: 10.1038/s41575-019-0245-4.
- 115. Mandel P. Les acides nucléiques du plasma sanguin chez l’homme [The nucleic acids of blood plasma in humans]. C. R. Seances Soc. Biol. Fil. 1948;142:241–243.
- 116. Spellman P.T., Gray J.W. Detecting cancer by monitoring circulating tumor DNA. Nat. Med. 2014;20:474–475. doi: 10.1038/nm.3564.
- 117. Vendrell J.A., Mau-Them F.T., Béganton B., Godreuil S., Coopman P., Solassol J. Circulating cell-free tumor DNA detection as a routine tool for lung cancer patient management. Int. J. Mol. Sci. 2017;18:264. doi: 10.3390/ijms18020264.
- 118. Leon S., Shapiro B., Sklaroff D., Yaros M. Free DNA in the serum of cancer patients and the effect of therapy. Cancer Res. 1977;37:646–650.
- 119. Anker P., Mulcahy H., Chen X.Q., Stroun M. Detection of circulating tumour DNA in the blood (plasma/serum) of cancer patients. Cancer Metastasis Rev. 1999;18:65–73. doi: 10.1023/A:1006260319913.
- 120. Stroun M., Lyautey J., Lederrey C., Olson-Sand A., Anker P. About the possible origin and mechanism of circulating DNA: Apoptosis and active DNA release. Clin. Chim. Acta. 2001;313:139–142. doi: 10.1016/S0009-8981(01)00665-9.
- 121. van der Vaart M., Pretorius P.J. The origin of circulating free DNA. Clin. Chem. 2007;53:2215. doi: 10.1373/clinchem.2007.092734.
- 122. Breitbach S., Tug S., Simon P. Circulating cell-free DNA. Sports Med. 2012;42:565–586. doi: 10.2165/11631380-000000000-00000.
- 123. Devos T., Tetzner R., Model F., Weiss G., Schuster M., Distler J., Steiger K.V., Grutzmann R., Pilarsky C., Habermann J.K., et al. Circulating methylated SEPT9 DNA in plasma is a biomarker for colorectal cancer. Clin. Chem. 2009;55:1337–1346. doi: 10.1373/clinchem.2008.115808.
- 124. Bulicheva N., Fidelina O., Mkrtumova N., Neverova M., Bogush A., Bogush M., Roginko O., Veiko N. Effect of cell-free DNA of patients with cardiomyopathy and rDNA on the frequency of contraction of electrically paced neonatal rat ventricular myocytes in culture. Ann. N. Y. Acad. Sci. 2008;1137:273. doi: 10.1196/annals.1448.023.
- 125. Hu W., Yang Y., Zhang L., Yin J., Huang J., Huang L., Gu H., Jiang G., Fang J. Post surgery circulating free tumor DNA is a predictive biomarker for relapse of lung cancer. Cancer Med. 2017;6:962–974. doi: 10.1002/cam4.980.
- 126. Lee Y.J., Yoon K.A., Han J.Y., Kim H.T., Yun T., Lee G.K., Kim H.Y., Lee J.S. Circulating cell-free DNA in plasma of never smokers with advanced lung adenocarcinoma receiving gefitinib or standard chemotherapy as first-line therapy. Clin. Cancer Res. 2011;17:5179–5187. doi: 10.1158/1078-0432.CCR-11-0400.
- 127. Tug S., Helmig S., Menke J., Zahn D., Kubiak T., Schwarting A., Simon P. Correlation between cell free DNA levels and medical evaluation of disease progression in systemic lupus erythematosus patients. Cell. Immunol. 2014;292:32–39. doi: 10.1016/j.cellimm.2014.08.002.
- 128. Chaudhuri A.A., Binkley M.S., Osmundson E.C., Alizadeh A.A., Diehn M. Predicting radiotherapy responses and treatment outcomes through analysis of circulating tumor DNA. Semin. Radiat. Oncol. 2015;25:305–312.
- 129. Haber D.A., Velculescu V.E. Blood-based analyses of cancer: Circulating tumor cells and circulating tumor DNA. Cancer Discov. 2014;4:650–661. doi: 10.1158/2159-8290.CD-13-1014.
- 130. Lee R.C., Feinbaum R.L., Ambros V. The C. elegans heterochronic gene lin-4 encodes small RNAs with antisense complementarity to lin-14. Cell. 1993;75:843–854. doi: 10.1016/0092-8674(93)90529-Y.
- 131. Hou J., Meng F., Chan L.W., Cho W., Wong S. Circulating plasma MicroRNAs as diagnostic markers for NSCLC. Front. Genet. 2016;7:193. doi: 10.3389/fgene.2016.00193.
- 132. Jansson M.D., Lund A.H. MicroRNA and cancer. Mol. Oncol. 2012;6:590–610. doi: 10.1016/j.molonc.2012.09.006.
- 133. Trejo-Becerril C., Pérez-Cárdenas E., Taja-Chayeb L., Anker P., Herrera-Goepfert R., Medina-Velázquez L.A., Hidalgo-Miranda A., Pérez-Montiel D., Chávez-Blanco A., Cruz-Velázquez J., et al. Cancer progression mediated by horizontal gene transfer in an in vivo model. PLoS ONE. 2012;7:e52754. doi: 10.1371/journal.pone.0052754.
- 134. Johnstone R.M., Adam M., Hammond J., Orr L., Turbide C. Vesicle formation during reticulocyte maturation. Association of plasma membrane activities with released vesicles (exosomes). J. Biol. Chem. 1987;262:9412–9420. doi: 10.1016/S0021-9258(18)48095-7.
- 135. Sheridan C. Exosome cancer diagnostic reaches market. Nat. Biotechnol. 2016;34:359–360. doi: 10.1038/nbt0416-359.
- 136. Rodríguez M., Silva J., López-Alfonso A., López-Muñiz M.B., Peña C., Domínguez G., García J.M., López-Gónzalez A., Méndez M., Provencio M., et al. Different exosome cargo from plasma/bronchoalveolar lavage in non-small-cell lung cancer. Genes Chromosom. Cancer. 2014;53:713–724. doi: 10.1002/gcc.22181.
- 137. Taverna S., Giallombardo M., Gil-Bazo I., Carreca A.P., Castiglia M., Chacártegui J., Araujo A., Alessandro R., Pauwels P., Peeters M., et al. Exosomes isolation and characterization in serum is feasible in non-small cell lung cancer patients: Critical analysis of evidence and potential role in clinical practice. Oncotarget. 2016;7:28748. doi: 10.18632/oncotarget.7638.
- 138. Kahlert C., Kalluri R. Exosomes in tumor microenvironment influence cancer progression and metastasis. J. Mol. Med. 2013;91:431–437. doi: 10.1007/s00109-013-1020-6.
- 139. Paulus J.M. Platelet Size in Man. Elsevier; Amsterdam, The Netherlands: 1975.
- 140. Nilsson R.J.A., Balaj L., Hulleman E., Van Rijn S., Pegtel D.M., Walraven M., Widmark A., Gerritsen W.R., Verheul H.M., Vandertop W.P., et al. Blood platelets contain tumor-derived RNA biomarkers. Blood. 2011;118:3680–3683. doi: 10.1182/blood-2011-03-344408.
- 141. Best M.G., Sol N., Kooi I., Tannous J., Westerman B.A., Rustenburg F., Schellen P., Verschueren H., Post E., Koster J., et al. RNA-Seq of tumor-educated platelets enables blood-based pan-cancer, multiclass, and molecular pathway cancer diagnostics. Cancer Cell. 2015;28:666–676. doi: 10.1016/j.ccell.2015.09.018.
- 142. Li L., Zheng H., Huang Y., Huang C., Zhang S., Tian J., Li P., Sood A.K., Zhang W., Chen K. DNA methylation signatures and coagulation factors in the peripheral blood leucocytes of epithelial ovarian cancer. Carcinogenesis. 2017;38:797–805. doi: 10.1093/carcin/bgx057.
- 143. Lin L.H., Chang K.W., Kao S.Y., Cheng H.W., Liu C.J. Increased plasma circulating cell-free DNA could be a potential marker for oral cancer. Int. J. Mol. Sci. 2018;19:3303. doi: 10.3390/ijms19113303.
- 144. Li C., Zhou Y., Liu J., Su X., Qin H., Huang S., Huang X., Zhou N. Potential markers from serum-purified exosomes for detecting oral squamous cell carcinoma metastasis. Cancer Epidemiol. Prev. Biomark. 2019;28:1668–1681. doi: 10.1158/1055-9965.EPI-18-1122.
- 145. Cucchiara F., Del Re M., Valleggi S., Romei C., Petrini I., Lucchesi M., Crucitta S., Rofi E., De Liperi A., Chella A., et al. Integrating liquid biopsy and radiomics to monitor clonal heterogeneity of EGFR-positive non-small cell lung cancer. Front. Oncol. 2020;10:593831. doi: 10.3389/fonc.2020.593831.
- 146. Wei R., Chen L., Qin D., Guo Q., Zhu S., Li P., Min L., Zhang S. Liquid biopsy of extracellular vesicle-derived miR-193a-5p in colorectal cancer and discovery of its tumor-suppressor functions. Front. Oncol. 2020;10:1372. doi: 10.3389/fonc.2020.01372.
- 147. Raman L., Van der Linden M., Van der Eecken K., Vermaelen K., Demedts I., Surmont V., Himpe U., Dedeurwaerdere F., Ferdinande L., Lievens Y., et al. Shallow whole-genome sequencing of plasma cell-free DNA accurately differentiates small from non-small cell lung carcinoma. Genome Med. 2020;12:1–12. doi: 10.1186/s13073-020-00735-4.
- 148. El-Khoury V., Schritz A., Kim S.Y., Lesur A., Sertamo K., Bernardin F., Petritis K., Pirrotte P., Selinsky C., Whiteaker J.R., et al. Identification of a blood-based protein biomarker panel for lung cancer detection. Cancers. 2020;12:1629. doi: 10.3390/cancers12061629.
- 149. Yang X., Cai G.X., Han B.W., Guo Z.W., Wu Y.S., Lyu X., Huang L.M., Zhang Y.B., Li X., Ye G.L., et al. Association between the nucleosome footprint of plasma DNA and neoadjuvant chemotherapy response for breast cancer. NPJ Breast Cancer. 2021;7:1–12. doi: 10.1038/s41523-021-00237-5.
- 150. Maltoni R., Casadio V., Ravaioli S., Foca F., Tumedei M.M., Salvi S., Martignano F., Calistri D., Rocca A., Schirone A., et al. Cell-free DNA detected by “liquid biopsy” as a potential prognostic biomarker in early breast cancer. Oncotarget. 2017;8:16642. doi: 10.18632/oncotarget.15120.
- 151. Bray F., Ferlay J., Soerjomataram I., Siegel R.L., Torre L.A., Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2018;68:394–424. doi: 10.3322/caac.21492.
- 152. Rivera C. Essentials of oral cancer. Int. J. Clin. Exp. Pathol. 2015;8:11884.
- 153. Jayson G.C., Kohn E.C., Kitchener H.C., Ledermann J.A. Ovarian cancer. Lancet. 2014;384:1376–1388. doi: 10.1016/S0140-6736(13)62146-7.
- 154. Siegel R.L., Miller K.D., Jemal A. Cancer statistics, 2016. CA Cancer J. Clin. 2016;66:7–30. doi: 10.3322/caac.21332.
- 155. Cortes C., Vapnik V. Support-vector networks. Mach. Learn. 1995;20:273–297. doi: 10.1007/BF00994018.
- 156. Roth P., Wischhusen J., Happold C., Chandran P.A., Hofer S., Eisele G., Weller M., Keller A. A specific miRNA signature in the peripheral blood of glioblastoma patients. J. Neurochem. 2011;118:449–457. doi: 10.1111/j.1471-4159.2011.07307.x.
- 157. Best M.G., Sol N., In ’t Veld S.G.J.G., Vancura A., Muller M., Niemeijer A.L.N., Fejes A.V., Tjon Kon Fat L.A., Huis in ’t Veld A.E., Leurs C., et al. Swarm intelligence-enhanced detection of non-small-cell lung cancer using tumor-educated platelets. Cancer Cell. 2017;32:238–252. doi: 10.1016/j.ccell.2017.07.004.
- 158. Zlotogorski-Hurvitz A., Dekel B.Z., Malonek D., Yahalom R., Vered M. FTIR-based spectrum of salivary exosomes coupled with computational-aided discriminating analysis in the diagnosis of oral cancer. J. Cancer Res. Clin. Oncol. 2019;145:685–694. doi: 10.1007/s00432-018-02827-6.
- 159. Best M.G., In ’t Veld S.G.J.G., Sol N., Wurdinger T. RNA sequencing and swarm intelligence–enhanced classification algorithm development for blood-based disease diagnostics using spliced blood platelet RNA. Nat. Protoc. 2019;14:1206–1234. doi: 10.1038/s41596-019-0139-5.
- 160. Sunkara V., Kim C.J., Park J., Woo H.K., Kim D., Ha H.K., Kim M.H., Son Y., Kim J.R., Cho Y.K. Fully automated, label-free isolation of extracellular vesicles from whole blood for cancer diagnosis and monitoring. Theranostics. 2019;9:1851. doi: 10.7150/thno.32438.
- 161. Ge G., Peng D., Guan B., Zhou Y., Gong Y., Shi Y., Hao X., Xu Z., Qi J., Lu H., et al. Urothelial carcinoma detection based on copy number profiles of urinary cell-free DNA by shallow whole-genome sequencing. Clin. Chem. 2020;66:188–198. doi: 10.1373/clinchem.2019.309633.
- 162. Liu S., Wu J., Xia Q., Liu H., Li W., Xia X., Wang J. Finding new cancer epigenetic and genetic biomarkers from cell-free DNA by combining SALP-seq and machine learning. Comput. Struct. Biotechnol. J. 2020;18:1891–1903. doi: 10.1016/j.csbj.2020.06.042.
- 163. Heinhuis K.M., In ’t Veld S.G., Dwarshuis G., Van Den Broek D., Sol N., Best M.G., van Coevorden F., Haas R.L., Beijnen J.H., van Houdt W.J., et al. RNA-sequencing of tumor-educated platelets, a novel biomarker for blood-based sarcoma diagnostics. Cancers. 2020;12:1372. doi: 10.3390/cancers12061372.
- 164. Zhang C., Zhao Y., Xu X., Xu R., Li H., Teng X., Du Y., Miao Y., Lin H.C., Han D. Cancer diagnosis with DNA molecular computation. Nat. Nanotechnol. 2020;15:709–715. doi: 10.1038/s41565-020-0699-0.
- 165. Liu L., Chen X., Wong K.C. Early cancer detection from genome-wide cell-free DNA fragmentation via shuffled frog leaping algorithm and support vector machine. Bioinformatics. 2021. doi: 10.1093/bioinformatics/btab236.
- 166. Cario C.L., Witte J.S. Orchid: A novel management, annotation and machine learning framework for analyzing cancer mutations. Bioinformatics. 2018;34:936–942. doi: 10.1093/bioinformatics/btx709.
- 167. Breiman L. Random forests. Mach. Learn. 2001;45:5–32. doi: 10.1023/A:1010933404324.
- 168. Díaz-Uriarte R., De Andres S.A. Gene selection and classification of microarray data using random forest. BMC Bioinform. 2006;7:1–13. doi: 10.1186/1471-2105-7-3.
- 169. Song C.X., Yin S., Ma L., Wheeler A., Chen Y., Zhang Y., Liu B., Xiong J., Zhang W., Hu J., et al. 5-Hydroxymethylcytosine signatures in cell-free DNA provide information about tumor types and stages. Cell Res. 2017;27:1231–1242. doi: 10.1038/cr.2017.106.
- 170. Cohen J.D., Li L., Wang Y., Thoburn C., Afsari B., Danilova L., Douville C., Javed A.A., Wong F., Mattox A., et al. Detection and localization of surgically resectable cancers with a multi-analyte blood test. Science. 2018;359:926–930. doi: 10.1126/science.aar3247.
- 171. Penson A., Camacho N., Zheng Y., Varghese A.M., Al-Ahmadie H., Razavi P., Chandarlapaty S., Vallejo C.E., Vakiani E., Gilewski T., et al. Development of genome-derived tumor type prediction to inform clinical cancer care. JAMA Oncol. 2020;6:84–91. doi: 10.1001/jamaoncol.2019.3985.
- 172. Nassiri F., Chakravarthy A., Feng S., Shen S.Y., Nejad R., Zuccato J.A., Voisin M.R., Patil V., Horbinski C., Aldape K., et al. Detection and discrimination of intracranial tumors using plasma cell-free DNA methylomes. Nat. Med. 2020;26:1044–1047. doi: 10.1038/s41591-020-0932-2.
- 173. Zhang Y.H., Jin M., Li J., Kong X. Identifying circulating miRNA biomarkers for early diagnosis and monitoring of lung cancer. Biochim. Biophys. Acta Mol. Basis Dis. 2020;1866:165847. doi: 10.1016/j.bbadis.2020.165847.
- 174. Hoshino A., Kim H.S., Bojmar L., Gyan K.E., Cioffi M., Hernandez J., Zambirinis C.P., Rodrigues G., Molina H., Heissel S., et al. Extracellular vesicle and particle biomarkers define multiple human cancers. Cell. 2020;182:1044–1061. doi: 10.1016/j.cell.2020.07.009.
- 175. Sprang M., Paret C., Faber J. CpG-islands as markers for liquid biopsies of cancer patients. Cells. 2020;9:1820. doi: 10.3390/cells9081820.
- 176. Wang Y., Zheng J., Li Z., Jiang R., Peng J., Sun J., Yang G., Yang X.R., Huang A., Wang Y., et al. Development of a novel liquid biopsy test to diagnose and locate gastrointestinal cancers. J. Clin. Oncol. 2020;38:1557. doi: 10.1200/JCO.2020.38.15_suppl.1557.
- 177. Peng H., Pan S., Yan Y., Brand R.E., Petersen G.M., Chari S.T., Lai L.A., Eng J.K., Brentnall T.A., Chen R. Systemic proteome alterations linked to early stage pancreatic cancer in diabetic patients. Cancers. 2020;12:1534. doi: 10.3390/cancers12061534.
- 178. Zhong G., Wang L.N., Ling X., Dong J. An overview on data representation learning: From traditional feature learning to recent deep learning. J. Financ. Data Sci. 2016;2:265–278. doi: 10.1016/j.jfds.2017.05.001.
- 179. Ravì D., Wong C., Deligianni F., Berthelot M., Andreu-Perez J., Lo B., Yang G.Z. Deep learning for health informatics. IEEE J. Biomed. Health Inform. 2016;21:4–21. doi: 10.1109/JBHI.2016.2636665.
- 180. Ramchoun H., Idrissi M.A.J., Ghanou Y., Ettaouil M. Multilayer perceptron: Architecture optimization and training. IJIMAI. 2016;4:26–30. doi: 10.9781/ijimai.2016.415.
- 181. LeCun Y., Bengio Y., Hinton G. Deep learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539.
- 182. Agarap A.F. Deep learning using rectified linear units (ReLU). arXiv. 2018. arXiv:1803.08375.
- 183. Mookiah M.R.K., Acharya U.R., Ng E. Data mining technique for breast cancer detection in thermograms using hybrid feature extraction strategy. Quant. Infrared Thermogr. J. 2012;9:151–165. doi: 10.1080/17686733.2012.738788.
- 184. Daoud M., Mayo M. A survey of neural network-based cancer prediction models from microarray data. Artif. Intell. Med. 2019;97:204–214. doi: 10.1016/j.artmed.2019.01.006.
- 185. Goryński K., Safian I., Grądzki W., Marszałł M.P., Krysiński J., Goryński S., Bitner A., Romaszko J., Buciński A. Artificial neural networks approach to early lung cancer detection. Cent. Eur. J. Med. 2014;9:632–641. doi: 10.2478/s11536-013-0327-6.
- 186. Mao Y., Yin Z., Schober J. A deep convolutional neural network trained on representative samples for circulating tumor cell detection; Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV); Lake Placid, NY, USA, 7–10 March 2016; pp. 1–6.
- 187. Kothen-Hill S.T., Zviran A., Schulman R.C., Deochand S., Gaiti F., Maloney D., Huang K.Y., Liao W., Robine N., Omans N.D., et al. Deep Learning Mutation Prediction Enables Early Stage Lung Cancer Detection in Liquid Biopsy. 2018. Available online: https://openreview.net/forum?id=H1DkN7ZCZ (accessed on 10 June 2020).
- 188. Wong K.C., Chen J., Zhang J., Lin J., Yan S., Zhang S., Li X., Liang C., Peng C., Lin Q., et al. Early cancer detection from multianalyte blood test results. iScience. 2019;15:332–341. doi: 10.1016/j.isci.2019.04.035.
- 189. Li X., Chen S., Hu X., Yang J. Understanding the disharmony between dropout and batch normalization by variance shift; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Long Beach, CA, USA, 15–20 June 2019; pp. 2682–2690.
- 190. Loyola-Gonzalez O. Black-box vs. white-box: Understanding their advantages and weaknesses from a practical point of view. IEEE Access. 2019;7:154096–154113. doi: 10.1109/ACCESS.2019.2949286.
- 191. Lauritsen S.M., Kristensen M., Olsen M.V., Larsen M.S., Lauritsen K.M., Jørgensen M.J., Lange J., Thiesson B. Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nat. Commun. 2020;11:1–11. doi: 10.1038/s41467-020-17431-x.
- 192. Ancona M., Ceolini E., Öztireli C., Gross M. Towards better understanding of gradient-based attribution methods for deep neural networks. arXiv. 2017. arXiv:1711.06104.
- 193. Lundberg S., Lee S.I. A unified approach to interpreting model predictions. arXiv. 2017. arXiv:1705.07874.
- 194. Shrikumar A., Greenside P., Shcherbina A., Kundaje A. Not just a black box: Learning important features through propagating activation differences. arXiv. 2016. arXiv:1605.01713.
- 195. Ribeiro M.T., Singh S., Guestrin C. “Why should I trust you?” Explaining the predictions of any classifier; Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.