Author manuscript; available in PMC: 2024 Oct 15.
Published in final edited form as: Biochem Biophys Res Commun. 2023 Aug 9;677:126–131. doi: 10.1016/j.bbrc.2023.08.015

Hybrid AI models allow label-free identification and classification of pancreatic tumor repopulating cell population

Sakib Mohammad 1, Kshitij Amar 2, Farhan Chowdhury 1,3,*
PMCID: PMC10529635  NIHMSID: NIHMS1925677  PMID: 37573767

Abstract

Human pancreatic cancer cell lines harbor a small population of tumor repopulating cells (TRCs). Soft 3D fibrin gels allow efficient selection and growth of these tumorigenic TRCs. However, rapid and high-throughput identification and classification of pancreatic TRCs remain technically challenging. Here, we developed deep learning (DL) models paired with machine learning (ML) models to readily identify and classify 3D fibrin gel-selected TRCs into sub-types. Using four different human pancreatic cell lines, namely, MIA PaCa-2, PANC-1, CFPAC-1, and HPAF-II, we identified 3 main sub-types within the TRC population. Our best model, an Inception-v3 convolutional neural network (CNN) used as a feature extractor paired with a Support Vector Machine (SVM) classifier with a radial basis function (rbf) kernel, obtained a test accuracy of 90%. In addition, we compared this hybrid method with other supervised classification methods and showed that our working model outperforms them. With the help of unsupervised machine learning algorithms, we also validated that the pancreatic TRC subpopulation can be clustered into 3 sub-types. Collectively, our robust model can detect and readily classify the tumorigenic TRC subpopulation label-free and in a high-throughput fashion, which can be very beneficial in clinical settings.

Keywords: Pancreatic cancer, tumor repopulating cells, CNN, Inception-v3, SVM

1. Introduction

Pancreatic cancer is one of the leading causes of cancer deaths worldwide [1,2]. It is crucial to diagnose pancreatic cancer as early as possible and develop a targeted treatment regimen. Human pancreatic cancer cell lines harbor a small population of TRCs, which are believed to be responsible for disease progression and relapse. Developing a targeted treatment regimen against TRCs would therefore be an effective way to fight pancreatic cancer. Recently, we showed that TRCs can be isolated from a pancreatic cancer cell line using soft 3D fibrin gels [3]. However, it remains unknown whether a diverse group of tumorigenic cells exists within the fibrin gel-selected TRC population. To understand the subpopulations of 3D fibrin gel-selected TRCs, we imaged spheroids from four different cell lines: MIA PaCa-2, PANC-1, CFPAC-1, and HPAF-II. The TRC spheroids emerging from these four pancreatic cancer cell lines can possess varying degrees of tumorigenicity. Therefore, it is crucial to identify and study them in detail so that targeted treatment regimens can be developed. The first step in this process is to define and train an algorithm that can reliably classify them in a high-throughput fashion.

Artificial Neural Networks (ANNs) are inspired by the biological neurons and neural networks that constitute the human brain [4]. Typically, neurons or nodes in an ANN are aggregated into several layers. Signals travel from the first layer (the input layer) to the final layer (the output layer) after passing through intermediate layers known as hidden layers. Convolutional Neural Networks (CNNs), a sub-class of ANNs, are extremely popular for computer vision applications ranging from biomedical imaging to self-driving cars [5,6]. These models can be trained to optimize filters (or kernels) through automated learning. The model “learns” specific characteristics of the image called “features”. Through several iterations of training, known as “epochs”, the weight values of the filters are optimized so that they can precisely extract important features from the images. The feature extraction layers in a CNN are followed by fully connected layers (called dense layers) where the features are aggregated. The final layer of a CNN is a classification layer that assigns classes to the input images from the aggregated features. In supervised ML, the images in the dataset are labeled with their respective classes before training, while in unsupervised learning the model identifies patterns or structures in the data on its own, without any pre-assigned labels or classes.
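To make these layer roles concrete, the following toy example, written with the TensorFlow/Keras packages used later in this work, stacks convolutional feature-extraction layers, a dense aggregation layer, and a final classification layer. All layer sizes here are illustrative choices of ours, not the architecture used in this study.

```python
from tensorflow.keras import layers, models

# Illustrative CNN: convolutional layers learn filters that extract features,
# dense layers aggregate them, and the final layer assigns one of 3 classes.
model = models.Sequential([
    layers.Input(shape=(256, 256, 3)),
    layers.Conv2D(32, 3, activation="relu"),  # early layers: simple features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # deeper layers: complex features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # fully connected (dense) layer
    layers.Dense(3, activation="softmax"),    # classification layer
])
model.summary()
```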

In this work, we generated TRC spheroids in soft 3D fibrin gel, acquired images, and classified 3 sub-types within the TRC population using a hybrid model approach. The hybrid model consists of a CNN combined with an ML model which allowed us to train and test the dataset. In addition, we compared the results of our hybrid method with other existing methods. Finally, we verified the presence of only 3 sub-types within the TRC population using unsupervised ML models.

2. Material and methods

2.1.1. Cell culture

MIA PaCa-2, PANC-1, CFPAC-1, and HPAF-II pancreatic cancer cell lines were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA) and were routinely cultured on gelatin (Sigma-Aldrich, St. Louis, MO, USA) coated 6-well tissue-culture dishes (Eppendorf, Enfield, CT, USA) with a complete medium containing 10% fetal bovine serum (Thermo Fisher Scientific, Waltham, MA, USA) and 2.5% horse serum (Thermo Fisher Scientific, Waltham, MA, USA) at 37 °C with 5% CO2. For cell passaging, when the cell culture reached 80% confluency, cells were trypsinized, centrifuged at 125 × g for 5 minutes, and resuspended in a complete culture medium before plating.

2.1.2. 3D Fibrin gel-culture and image acquisition

To select and grow pancreatic TRCs, cells were seeded in soft 3D fibrin gels. Fibrin gel culturing has been described elsewhere [7,8]. In brief, salmon fibrinogen and thrombin (Searun Holdings) were used to prepare the 3D fibrin gel cell culture. The stock fibrinogen was diluted with T7 buffer to a concentration of 2 mg/ml. Next, the fibrinogen solution was mixed with culture medium containing cells to a final fibrinogen concentration of 1 mg/ml. Each well of a 96-well plate was seeded with 1,000 cells. Before seeding the fibrinogen-cell solution in each well, plate bottoms were activated with thrombin. Fibrin gel polymerization took place in a 37 °C cell culture incubator for 20 min. Once polymerized, 200 μl of cell culture medium was added to each well. TRC spheroids were imaged after ten days of culture in 3D fibrin gels.

2.2. Image processing for machine learning

After image acquisition, we took square crops of individual colonies from the large 2048 × 2048 image files with ImageJ [9] and stored them in directories named according to their types (Type 1, 2, and 3). Typically, type 1 colonies are round with granular features inside, type 2 colonies are round with a smooth appearance, and type 3 colonies are irregular in shape. We had a total of 243 images across the cell lines, with 73, 94, and 76 images belonging to type 1, type 2, and type 3, respectively. The image distribution across the cell lines is shown in supplementary Table S1. Later stages of processing involved scaling the images to 256 × 256 pixels and normalizing their pixel values to between 0 and 1, which makes the data easier for the AI models to work with. Finally, the dataset was randomly split into train and test sets at an 80/20 ratio, so 80% of the images were used for training and the remainder for testing.
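As a minimal sketch of this preprocessing pipeline, the snippet below loads the cropped images from per-type directories, rescales them to 256 × 256, normalizes the pixel values, and performs the 80/20 split. The directory layout and the stratified split are our assumptions; the paper states only that the split was random.

```python
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

DATA_DIR = "data"                      # hypothetical root directory
CLASSES = ["type1", "type2", "type3"]  # one sub-directory per colony type

images, labels = [], []
for label, cls in enumerate(CLASSES):
    for fname in os.listdir(os.path.join(DATA_DIR, cls)):
        img = cv2.imread(os.path.join(DATA_DIR, cls, fname))
        img = cv2.resize(img, (256, 256))              # rescale the crop
        images.append(img.astype(np.float32) / 255.0)  # normalize to [0, 1]
        labels.append(label)

X, y = np.stack(images), np.array(labels)

# 80/20 random train/test split (stratification is assumed here to keep
# the class proportions similar in both sets)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```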

2.3. Feature extraction and supervised classification

For our experiment, we first extracted features from the images. In computer vision, features are the parts and patterns of an image that help identify it. Every CNN has two parts: the feature extractor and the classifier. We used only the extractor part of the CNN, and did not use the classifier part because of the modest size of our dataset. One way of circumventing a small dataset is transfer learning, where weights optimized in other experiments, such as the ImageNet challenge [10], are reused. This can be effective because many popular CNN architectures have been trained on over 14M images belonging to 1000 classes for the ImageNet challenge, so the weights are well-optimized and suitable for a wide array of computer vision tasks. However, transfer learning with the full CNN did not yield good results here, because the fully connected layers of the CNNs could not aggregate the extracted features consistently. Therefore, we used traditional ML methods as the classifier. Our framework for the supervised method thus has two parts: a hybrid of a CNN feature extractor and an ML classifier.
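A minimal sketch of this hybrid pipeline for the best-performing pairing, Inception-v3 features with an rbf-kernel SVM, is shown below, using the preprocessed arrays from the previous sketch. The global-average-pooling readout and the default SVM hyperparameters are our assumptions; the paper does not specify them.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from sklearn.svm import SVC

# Feature extractor: ImageNet-pretrained Inception-v3 without its classifier
# head; global average pooling yields one feature vector per image.
extractor = InceptionV3(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(256, 256, 3))

def extract_features(X):
    # Inception-v3 expects its own preprocessing (inputs rescaled to [-1, 1]),
    # so undo our [0, 1] normalization first.
    return extractor.predict(preprocess_input(X * 255.0), verbose=0)

train_feats = extract_features(X_train)
test_feats = extract_features(X_test)

# Classifier: SVM with an rbf kernel trained on the extracted features
clf = SVC(kernel="rbf")
clf.fit(train_feats, y_train)
print("Test accuracy:", clf.score(test_feats, y_test))
```

Swapping `extractor` for VGG16, VGG19, or Xception (with their respective `preprocess_input` functions) and `clf` for a linear-kernel SVM or an XGBoost classifier reproduces the other pairings we tested.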

Finally, we compared our hybrid approach with two other methods, namely, regular transfer learning (with ImageNet weights) and random weight initialization (also known as training from scratch), both utilizing the full CNN architecture.

2.4. Unsupervised clustering of features

To verify that only 3 classes exist within the cell population, we clustered the image features into groups. The image features were extracted with the help of a CNN, and the dataset was then run through unsupervised clustering algorithms, which take the array of extracted features and arrange them into clusters. Following this, we employed cluster verification techniques [11,12] to determine the optimal number of clusters.

To summarize, we used CNN architectures, namely, VGG16 and VGG19 [13], Inception-v3 [14], and Xception [15] for feature extraction, followed by supervised ML models, namely, SVM with rbf [16] and linear [16] kernels and XGBoost [17] for classification, while we used K-Means clustering [18] and hierarchical clustering with Ward’s method [19,20] for unsupervised clustering.

2.5. Performance metrics for evaluating our models

For supervised learning, we used the following performance parameters.

Accuracy.

Accuracy is the fraction of correctly predicted samples out of all samples. In machine learning and binary classification, it is the sum of true positives (TP) and true negatives (TN) divided by the sum of TP, TN, false positives (FP), and false negatives (FN):

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

Although accuracy is an important metric, it does not provide a complete picture. We therefore additionally used precision, recall, and F1 score for a robust evaluation of our workflow. Some of the metric definitions below are stated for binary classification but extend naturally to multiclass problems such as ours.

Precision.

Precision is the ratio of true positives to the sum of true positives and false positives. It reflects how trustworthy the model’s positive predictions are and is denoted by the following formula:

$$\text{Precision} = \frac{TP}{TP + FP}$$

High precision means one can trust the model when it labels a sample as positive, making it reliable.

Recall.

Recall is the ratio of true positives to the sum of true positives and false negatives. It reflects the model’s ability to detect positive samples and is expressed as:

$$\text{Recall} = \frac{TP}{TP + FN}$$

Recall indicates how well the positive samples are classified and, unlike precision, is independent of how the negative samples are classified.

F1 Score.

The F1 score combines both the precision and the recall scores. It is the harmonic mean of these scores and is given by:

$$F_1\ \text{Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

Confusion Matrix.

The confusion matrix is a visual representation of the model’s performance. It is a table of all 4 combinations of actual and predicted values (TP, TN, FP, and FN, described above). Consequently, we can calculate accuracy, precision, recall, and finally the F1 score from a confusion matrix.
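The snippet below sketches how all four metrics and the confusion matrix can be obtained with the scikit-learn package used in this work, applied to the hybrid classifier from the earlier sketch.

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

y_pred = clf.predict(test_feats)

# Overall accuracy: correctly predicted samples over all samples
print("Accuracy:", accuracy_score(y_test, y_pred))

# Per-class precision, recall, and F1 score
print(classification_report(y_test, y_pred,
                            target_names=["Type 1", "Type 2", "Type 3"]))

# Confusion matrix: rows are actual classes, columns are predicted classes
print(confusion_matrix(y_test, y_pred))
```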

For the unsupervised learning process, we used the following two metrics.

Elbow Method.

The elbow method is a technique to determine the optimal number of clusters in a dataset. Here, we plot the within-cluster sum of squares (WSS) against the number of clusters. The WSS is a measure of variation within each cluster and is calculated as the sum of the squared distances between each data point and the centroid of its assigned cluster. The plot shows a downward trend of WSS as the number of clusters increases, but the rate of decrease diminishes, so the curve forms an elbow shape. The point at which the elbow is formed represents the optimal number of clusters. We used the elbow method for K-Means clustering.
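A minimal elbow-method sketch, building on the earlier feature-extraction sketch (`extract_features` and `X` are the names assumed there):

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

features = extract_features(X)  # CNN features for all images

# KMeans exposes the WSS as `inertia_`; compute it for k = 1..10
ks = range(1, 11)
wss = [KMeans(n_clusters=k, n_init=10, random_state=42)
       .fit(features).inertia_ for k in ks]

# The k at which the curve bends (the "elbow") is the optimal cluster count
plt.plot(ks, wss, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("Within-cluster sum of squares (WSS)")
plt.show()
```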

Dendrogram.

A dendrogram is a tree-like diagram used to graphically represent the output of hierarchical clustering. It groups similar data points together into clusters based on their similarities or differences. In a dendrogram, each data point is represented by a vertical line or leaf, and the height at which lines join represents the distance or dissimilarity between the points or clusters being merged. The lines are joined at nodes, which represent the points where clusters merge as the algorithm progresses. We used a dendrogram for hierarchical clustering of the image features.
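A corresponding sketch for the dendrogram, using SciPy’s hierarchical clustering with Ward’s method on the same feature vectors (the use of SciPy is an assumption here; the paper lists only its core packages):

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

# Ward's method merges, at each step, the two clusters whose union
# gives the smallest increase in within-cluster variance
Z = linkage(features, method="ward")

plt.figure(figsize=(10, 5))
dendrogram(Z)  # leaves are images; merge heights show dissimilarity
plt.xlabel("Image index")
plt.ylabel("Ward linkage distance")
plt.show()
```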

2.6. Training and testing of the models

We ran our experiments on Google Colab notebooks and/or an Nvidia Tesla T4 GPU with 16 GB of GDDR6 memory and 12 GB of system memory. Our code was written in Python 3, and we used the TensorFlow [21], scikit-learn [22], OpenCV [23], and matplotlib [24] packages. Our CNNs used ImageNet weights to extract features from the images; the features were then fed into the respective ML models, followed by training, testing, and summarization of the results.

3. Results and Discussion

3.1. Extraction of features from the dataset

After culturing cells in soft 3D fibrin gels for 10 days, we imaged the TRC spheroids and observed distinct sub-types within the TRC populations across the cancer cell lines. A sample of our dataset showing TRC spheroids from each of the cell lines is presented in Fig. 1A. Not all sub-types of TRCs exist in all pancreatic cell lines: interestingly, type-1 spheroids exist exclusively in the MIA PaCa-2 cell line, while type-2 and type-3 TRC spheroids were found to be predominant in the PANC-1, CFPAC-1, and HPAF-II cell lines.

Figure 1. Examples of the dataset and features extracted by CNN. (A) Type 1, 2, and 3 TRC colonies of the cell lines. (B) The first convolution layer depicts simple features. (C) The final convolution layer presents complex features.

First, we extracted features using the CNN architectures mentioned in sections 2.3–2.4. Fig. 1B and Fig. 1C display examples of the features extracted by VGG16: Fig. 1B shows the first convolution layer depicting simple features, while Fig. 1C shows the final convolution layer presenting very complex features. Similarly, we extracted features from our acquired dataset using the other CNNs, namely, VGG19, Inception-v3, and Xception.

3.2. Supervised classification of TRC colonies

After completing feature extraction from the dataset, we next focused on classifying the images using ML models. Table 1 shows the performance metrics of all the models tested.

Table 1. Performance metrics of our tested models.

Model                      Type     Test Accuracy (%)   Precision   Recall   F1 Score
Inception-v3 + SVM (rbf)   Type 1   90                  0.88        0.93     0.93
                           Type 2                       0.94        0.89     0.92
                           Type 3                       0.87        0.87     0.87
VGG16 + SVM (rbf)          Type 1   88                  0.87        0.87     0.87
                           Type 2                       0.94        0.79     0.86
                           Type 3                       0.87        1.00     0.91
VGG19 + SVM (linear)       Type 1   86                  0.88        0.93     0.90
                           Type 2                       0.93        0.74     0.82
                           Type 3                       0.78        0.93     0.85
Xception + XGBoost         Type 1   86                  0.88        0.93     0.90
                           Type 2                       0.89        0.84     0.86
                           Type 3                       0.80        0.80     0.80

(Test accuracy is reported once per model; precision, recall, and F1 score are reported per colony type.)

From Table 1, we observed that features extracted with Inception-v3 paired with an SVM classifier provided the best performance, with scores above 0.85 on every metric. Our next best model, VGG16 paired with an SVM classifier, has a good accuracy score but underperforms in recall for type-2 colony images. The Xception model with XGBoost also has acceptable and comparable scores across all metrics but performs worse on type-3 colony images. The final model, VGG19 paired with an SVM classifier, did not perform as well as expected: we observed inferior recall and F1 scores for type-2 colony images and a lower precision score for type-3 colony images.

Based on our findings, our best-performing model is Inception-v3 with an SVM classifier which has a testing accuracy of 90%. Fig. 2A shows the confusion matrix based on the Inception-v3 with an SVM classifier model.

Figure 2. Summarization of performance metrics of our tested hybrid models. (A) Confusion matrix of Inception-v3 paired with SVM. (B) Comparison of testing accuracy of all the tested models.

For the test set, we had 49 images randomly selected from our dataset of 243 images; types 1, 2, and 3 had 15, 19, and 15 images, respectively. Our best-performing model (Inception-v3 with an SVM classifier) classified 44 out of 49 images correctly (44/49 ≈ 90%). However, it misclassified two type-1 colonies (one as type-2, one as type-3), one type-2 colony as type-3, and two type-3 colonies (one as type-1, one as type-2). This could be due to the slight imbalance in the number of images across class types. We next examined the F1 scores shown in Table 1: the 3 types had F1 scores of 0.93, 0.92, and 0.87, respectively, all of which are high. In addition, we present the accuracy comparison of all tested models in Fig. 2B, where Inception-v3+SVM outperforms the other models.

Next, we determined the performance of stand-alone CNNs (VGG16, VGG19, Inception-v3, and Xception) with random weight initialization and with ImageNet weights, and compared their metrics to our hybrid method. Adam [25] was used as the optimizer with a learning rate of 0.001. We ran the training for up to 100 epochs and monitored the loss; if it remained unchanged for 20 epochs, training was stopped to minimize overfitting. The test accuracy comparison is presented in Supplementary Fig. S1.
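A sketch of this stand-alone baseline for Inception-v3 is given below. The single softmax head and the early-stopping configuration (monitoring the training loss with a patience of 20 epochs) are our reading of the setup described above; setting `weights=None` gives the random-initialization variant.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications import InceptionV3

# Full CNN trained end-to-end; weights="imagenet" for transfer learning,
# weights=None for random weight initialization (training from scratch)
base = InceptionV3(weights="imagenet", include_top=False,
                   pooling="avg", input_shape=(256, 256, 3))
model = models.Sequential([base,
                           layers.Dense(3, activation="softmax")])

model.compile(optimizer=Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop once the loss has not improved for 20 epochs, capped at 100 epochs
stopper = EarlyStopping(monitor="loss", patience=20,
                        restore_best_weights=True)
model.fit(X_train, y_train, epochs=100, callbacks=[stopper], verbose=2)
print("Test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```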

When comparing the different classification methods, as shown in Supplementary Fig. S1, we observed that none of them matched the accuracy of our hybrid method. With stand-alone Inception-v3, we obtained an accuracy of 86%, still lower than the 90% achieved when Inception-v3 was paired with an SVM classifier. Similarly, the other stand-alone CNNs underperformed their hybrid counterparts. Training with random weight initialization yielded even worse results: the highest accuracy, 51%, was obtained with the Xception architecture, and the other models did not reach 40%, which was expected given our modest dataset.

3.3. Unsupervised training of TRC colony types

Next, we verified whether only 3 classes of colony types can be determined from the acquired images using unsupervised ML models. To this end, we used our tested CNNs to extract features and fed those extracted features through two different unsupervised clustering algorithms, namely, K-Means clustering and hierarchical clustering. In Fig. 3A, we plotted the sum of the squared distances against the number of clusters k; the elbow in the graph forms at k=3, suggesting that there are three clusters in the dataset. For hierarchical clustering, out of several available methods, we used Ward’s method, primarily because of its widespread use in analyzing image datasets. From the dendrogram shown in Fig. 3B, we observed that the number of clusters is indeed 3.

Figure 3. Unsupervised clustering of TRC colony sub-types. (A) Elbow method for K-Means clustering and (B) dendrogram for hierarchical clustering using Ward’s method.

Culturing single cancer cells on soft 3D fibrin gels is a proven and efficient approach to generating highly tumorigenic TRC spheroids from a variety of solid tumors, including patient samples [7,8,26,27,28]. In our recent work, we generated TRCs from the MIA PaCa-2 cell line [3]. However, upon close observation, we found that, unlike TRCs from other solid tumors, MIA PaCa-2 TRCs possess a variety of spheroid phenotypes. This motivated us to investigate the heterogeneity within pancreatic TRC populations. Across all four human pancreatic cell lines tested, we found that there indeed exist three subtypes within the pancreatic TRC population. We adopted a computer vision approach to readily identify and classify such images and found that a hybrid approach, feature extraction by an Inception-v3 CNN model followed by an SVM classifier, performed better than the other approaches. A similar hybrid approach was also found to perform well in classifying chromosome images [29]. However, we did not enhance images as was done in ref. [29] because our acquired datasets consist of high-quality images.

At present, it is not clear whether there is a difference in tumorigenicity of TRCs among these three sub-types. In the future, we will determine the level of tumorigenicity as well as investigate the heterogeneity, if any, in biopsy samples from pancreatic cancer patients. In addition, we will identify the molecular and spectral signature differences between these three sub-types using Raman spectroscopy.

4. Conclusion

In conclusion, despite having a modest dataset, we successfully distinguished the 3 sub-types of pancreatic TRCs found across the four different pancreatic cell lines using our hybrid deep learning and machine learning approach. We not only classified the colony images with excellent performance scores but also verified the existence of 3 sub-types using unsupervised ML methods. Together, our hybrid model’s ability to classify tumorigenic TRC subpopulations may have a significant impact on basic research advancements and clinical outcomes.

Supplementary Material

MMC1

Highlights.

  • Here, we report the existence of three distinct subpopulations of highly tumorigenic pancreatic tumor repopulating cells isolated from soft 3D fibrin gels.

  • We trained deep learning (DL) models paired with machine learning (ML) models to readily identify and classify tumor repopulating cells into sub-types.

  • With the help of unsupervised machine learning algorithms, we also verified that there are indeed three distinct subpopulations of highly tumorigenic pancreatic tumor repopulating cells.

Acknowledgments

We thank Dr. Sajedul Talukder for carefully reading this manuscript and providing valuable feedback. We also thank Arpan Roy for his technical help and discussion. This work was supported by NIH 1R15GM140448 (F.C.) and Elsa U. Pardee Foundation (F.C.).

Footnotes

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  • [1] Chari ST, Singh DP, Aggarwal G, Petersen G, Pancreatic Cancer, in: Pitchumoni CS, Dharmarajan TS (Eds.), Geriatric Gastroenterology, Springer International Publishing, Cham, 2021, pp. 1903–1916.
  • [2] Saad AM, Turk T, Al-Husseini MJ, Abdel-Rahman O, Trends in pancreatic adenocarcinoma incidence and mortality in the United States in the last four decades; a SEER-based study, BMC Cancer 18 (2018).
  • [3] Mandrell CT, Holland TE, Wheeler JF, Esmaeili SMA, Amar K, Chowdhury F, Sivakumar P, Machine Learning Approach to Raman Spectrum Analysis of MIA PaCa-2 Pancreatic Cancer Tumor Repopulating Cells for Classification and Feature Analysis, Life 10 (2020) 181.
  • [4] McCulloch WS, Pitts W, A logical calculus of the ideas immanent in nervous activity, The Bulletin of Mathematical Biophysics 5 (1943) 115–133.
  • [5] Goodfellow I, Bengio Y, Courville A, Deep Learning, MIT Press, 2016.
  • [6] O’Shea K, Nash R, An Introduction to Convolutional Neural Networks, arXiv:1511.08458 (2015).
  • [7] Chowdhury F, Doğanay S, Leslie BJ, Singh R, Amar K, Talluri B, Park S, Wang N, Ha T, Cdc42-dependent modulation of rigidity sensing and cell spreading in tumor repopulating cells, Biochemical and Biophysical Research Communications 500 (2018) 557–563.
  • [8] Liu J, Tan Y, Zhang H, Zhang Y, Xu P, Chen J, Poh Y-C, Tang K, Wang N, Huang B, Soft fibrin gels promote selection and growth of tumorigenic cells, Nature Materials 11 (2012) 734–741.
  • [9] Schneider CA, Rasband WS, Eliceiri KW, NIH Image to ImageJ: 25 years of image analysis, Nature Methods 9 (2012) 671–675.
  • [10] Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein MS, Berg AC, Fei-Fei L, ImageNet Large Scale Visual Recognition Challenge, International Journal of Computer Vision 115 (2015) 211–252.
  • [11] Espinoza FA, Oliver JM, Wilson BS, Steinberg SL, Using hierarchical clustering and dendrograms to quantify the clustering of membrane proteins, Bull Math Biol 74 (2012) 190–211.
  • [12] Marutho D, Handaka SH, Wijaya E, Muljono, The Determination of Cluster Number at k-Mean Using Elbow Method and Purity Evaluation on Headline News, 2018 International Seminar on Application for Technology of Information and Communication, 2018, pp. 533–538.
  • [13] Simonyan K, Zisserman A, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556 (2014).
  • [14] Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z, Rethinking the Inception Architecture for Computer Vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2818–2826.
  • [15] Chollet F, Xception: Deep Learning with Depthwise Separable Convolutions, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2017, pp. 1800–1807.
  • [16] Cristianini N, Ricci E, Support Vector Machines, in: Kao M-Y (Ed.), Encyclopedia of Algorithms, Springer US, Boston, MA, 2008, pp. 928–932.
  • [17] Chen T, Guestrin C, XGBoost: A Scalable Tree Boosting System, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2016, pp. 785–794.
  • [18] Jin X, Han J, K-Means Clustering, in: Sammut C, Webb GI (Eds.), Encyclopedia of Machine Learning, Springer US, Boston, MA, 2010, pp. 563–564.
  • [19] Bridges CC, Hierarchical Cluster Analysis, Psychological Reports 18 (1966) 851–854.
  • [20] Murtagh F, Legendre P, Ward’s Hierarchical Agglomerative Clustering Method: Which Algorithms Implement Ward’s Criterion?, Journal of Classification 31 (2014) 274–295.
  • [21] Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, Devin M, Ghemawat S, Irving G, Isard M, Kudlur M, Levenberg J, Monga R, Moore S, Murray DG, Steiner B, Tucker P, Vasudevan V, Warden P, Wicke M, Yu Y, Zheng X, TensorFlow: a system for large-scale machine learning, Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, USENIX Association, Savannah, GA, USA, 2016, pp. 265–283.
  • [22] Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay É, Scikit-learn: Machine Learning in Python, J. Mach. Learn. Res. 12 (2011) 2825–2830.
  • [23] Bradski G, The OpenCV Library, Dr. Dobb’s Journal 25 (2000) 120–126.
  • [24] Hunter JD, Matplotlib: A 2D Graphics Environment, Computing in Science & Engineering 9 (2007) 90–95.
  • [25] Kingma DP, Ba J, Adam: A Method for Stochastic Optimization, arXiv:1412.6980 (2014).
  • [26] Tan Y, Tajik A, Chen J, Jia Q, Chowdhury F, Wang L, Chen J, Zhang S, Hong Y, Yi H, Wu DC, Zhang Y, Wei F, Poh Y-C, Seong J, Singh R, Lin L-J, Doğanay S, Li Y, Jia H, Ha T, Wang Y, Huang B, Wang N, Matrix softness regulates plasticity of tumour-repopulating cells via H3K9 demethylation and Sox2 expression, Nature Communications 5 (2014) 4619.
  • [27] Ma J, Zhang Y, Tang K, Zhang H, Yin X, Li Y, Xu P, Sun Y, Ma R, Ji T, Chen J, Zhang S, Zhang T, Luo S, Jin Y, Luo X, Li C, Gong H, Long Z, Lu J, Hu Z, Cao X, Wang N, Yang X, Huang B, Reversing drug resistance of soft tumor-repopulating cells by tumor cell-derived chemotherapeutic microparticles, Cell Res 26 (2016) 713–727.
  • [28] Talluri B, Amar K, Saul M, Shireen T, Konjufca V, Ma J, Ha T, Chowdhury F, COL2A1 Is a Novel Biomarker of Melanoma Tumor Repopulating Cells, Biomedicines 8 (2020) 360.
  • [29] Menaka D, Ganesh Vaidyanathan S, A hybrid convolutional neural network-support vector machine architecture for classification of super-resolution enhanced chromosome images, Expert Systems 40 (2023) e13186.
