Abstract
Lung cancer is a leading cause of cancer death in both men and women worldwide. Its high mortality rate is partly due to late-stage diagnosis as well as the spread of cancer cells to other organs and tissues through metastasis. Automated detection of lung cancer and classification of its sub-types from cell images play a crucial role in early-stage prognosis and more individualized therapy. The rapid development of machine learning techniques, especially deep learning algorithms, has attracted much interest in their application to medical image problems. In this study, to develop a reliable Computer-Aided Diagnosis (CAD) system for accurately distinguishing between cancerous and healthy cells, we grew popular Non-Small Cell Lung Cancer lines in a microfluidic chip, stained them with phalloidin, and acquired images using an IX-81 inverted Olympus fluorescence microscope. We designed and tested a deep learning image analysis workflow for classifying lung cell-line images into six classes: five cancer cell-lines (PC-9, SK-LU-1, H-1975, A-427, and A-549) and a normal cell-line (16-HBE). Our results demonstrate that ResNet18, a residual learning convolutional neural network, is an efficient and promising method for lung cancer cell-line categorization, with a classification accuracy of 98.37% and an F1-score of 97.29%. The proposed workflow also successfully distinguishes normal from cancerous cell-lines, with a remarkable average accuracy of 99.77% and an F1-score of 99.87%. The proposed CAD system eliminates the need for extensive user intervention, enabling the processing of large amounts of image data with robust and highly accurate results.
Subject terms: Cancer, Computational biology and bioinformatics
Introduction
Cancer is one of the leading causes of death in the world1–3. Among the different types of cancer, lung cancer is the leading cause of cancer death in both men and women worldwide. Based on cell morphology, lung cancer is divided into two types: Small Cell Lung Cancer (SCLC), which is responsible for 15–20% of cases and occurs almost exclusively in heavy smokers, and Non-Small Cell Lung Cancer (NSCLC), which accounts for 80–85% of lung cancers and is mainly sub-classified into Adenocarcinoma (ADC), Squamous Cell Carcinoma (SCC), and Large Cell Carcinoma (LCC)4. Lung cancer shares symptoms with other bronchial disorders, such as chest pain and coughing up blood. Therefore, the exact diagnosis of lung cancer, as well as its stage, is based on microscopic morphology analysis. It is well known that approximately 85–88% of NSCLC corresponds to A-549 lung cancer cells, a line characterized as a type II pulmonary epithelial cell model for drug metabolism5. Furthermore, the A-549 and A-427 lung cancer cell-lines are reported to be drug-resistant NSCLC lines in comparison with other types6.
Success in lung cancer treatment is strongly related to the time of diagnosis7 and the stage and grade of the tumor. In addition, deciding on the most appropriate treatment depends on determining the extent (stage) of the cancer, which is assessed through pathologists’ evaluation of the tumor’s histology8. Early-stage detection of lung cancer is therefore necessary for patient health and for selecting the most appropriate treatment procedure. Several tests are available to diagnose lung cancer, including tissue sampling (biopsy), cytology, and imaging tests (X-ray and Computerized Tomography (CT) scans), most of which are based on visual observation and manual techniques. Manual interpretation of lung cancer from medical images is not only a time-consuming process but also requires highly trained people (doctors, pathologists, or technicians) and is very prone to mistakes9. Currently, exact lung cancer diagnosis from biopsy samples requires visual inspection by a pathologist, whose experience influences the prediction and the accuracy of the decision8. Moreover, exact cancer diagnosis, as well as therapeutic success, requires microscopic image assessment and thus depends on correct diagnostic pathology. Diagnostic pathology is a complicated task that requires an expert trained over several years10,11. Accordingly, automated cancer detection from cancer-cell images is urgently needed to reduce the heavy workload of pathologists and can help avoid misdiagnosis. In addition, given the strong phenotypic (morphological) differences among human lung cancer cells, comprehensive quantification of medical images is a valuable output of CAD approaches to assist doctors in treatment procedures12–15.
Recent advances in the machine learning community have shown great promise for applying deep learning to cancer classification. Deep learning is a subset of machine learning in artificial intelligence that imitates the function of the human brain in data processing. Deep learning allows machines to solve complex problems even when the dataset is very diverse, unstructured, and inter-connected, and its algorithms learn effective representations for a given task entirely from data. One of the main advantages of a deep learning approach is its ability to perform feature engineering by itself. Most recently, deep learning algorithms, especially Convolutional Neural Networks (CNNs), have been widely applied in computer vision and image analysis tasks16. Such algorithms have already been successfully utilized for the segmentation and classification of medical images, including breast cancer analysis17, brain tumor detection18, gastrointestinal cancer segmentation19, prostate cancer diagnosis20, and lung cancer classification21. For instance, in lung cancer research, CNNs have mostly been studied with regard to the classification of lung patterns on CT scans22, Positron Emission Tomography (PET)23, and X-ray24,25. While cell-image interpretation continues to be the gold standard for cancer diagnosis, especially in the early stage of the disease, current CAD systems for this task still fall behind the essential clinical need.
For example, Kanavati et al.26 trained a CNN (EfficientNet-B3 architecture27) to predict carcinoma using 3704 histopathology images (obtained from Kyushu Medical Center and the International University of Health and Welfare, Mita Hospital) and achieved promising results for discrimination between cancer and normal cells. Although there are multiple studies on automatic lung cancer detection, most focus on the classification of normal versus cancerous cells26,28. However, cell-line classification of lung cancer has more clinical value than binary classification (normal versus cancer), as it provides more detailed information to help clinicians choose correct therapeutic schedules. As an example, Teramoto et al.15 developed an automated classification scheme for lung cancer cell-image detection (covering adenocarcinoma, squamous cell carcinoma, and small cell carcinoma) from microscopic images using a CNN. The total correct rate was around 71% using three-fold cross-validation on their collected database, which was comparable to that of a cytotechnologist or pathologist. Additionally, Coudray et al.29 applied a deep learning model (Inception-v3 architecture30) for the automatic analysis of tumor slides using publicly available histopathology images from “The Cancer Genome Atlas (TCGA)”. They achieved remarkable results in classifying adenocarcinoma and squamous cell carcinoma, the most prevalent types of lung cancer, as well as normal lung tissue, with an average area under the curve of 0.97, comparable to that of pathologists. Recently, Wei et al.8 proposed a deep learning model (ResNet architecture31) that automatically classifies the histologic patterns of lung adenocarcinoma on surgical resection slides. The authors evaluated their approach on an independent set of 143 whole-slide images and achieved a kappa score of 0.525 with 66.6% agreement with three pathologists for classifying the predominant histologic patterns, slightly higher than the inter-pathologist kappa score of 0.485 and agreement of 62.7%.
Motivated by the above-described successes of CNN routines in digital pathology image analysis, our work sets out to further identify the high-level, discriminative features exhibited by cancer cells using CNNs for accurate classification of lung cancer subtypes. Microfluidics has emerged as a capable approach for the investigation of malignant cell growth and drug screening. Because of their micro-scaled structures, microfluidic chips need only low quantities of cells and offer the potential for high-throughput screening. Microfluidic chips also provide a platform for malignant cells to grow in three dimensions, keeping the cell population similar to in-vivo conditions32. In this study, we used microfluidic devices to culture popular lung cancer and normal cells with the aim of establishing the baseline accuracy expected from modern deep learning models for the classification of lung cancer cell-lines. The workflow of this study is depicted in Fig. 1 and consists of three main parts: (a) a schematic representation of the microfluidic device used for seeding the lung cancer cell-lines; (b) cell imaging with IX-81 and IX-71 Olympus microscopes; (c) classification of cell images into healthy or cancer cells using deep learning methodologies. We are also interested in discriminating healthy controls from lung cancer cell samples. To this end, the CNNs are trained to predict the normal lung cell-line (16-HBE) and five types of lung cancer cells: PC-9, SK-LU-1, H-1975, A-427, and A-549. To the best of our knowledge, no research has been conducted to classify these types of lung cancer from tissue-derived cells cultured in a microfluidic platform.
Figure 1.
Overview of the combined microfluidic and deep learning approach.
Results
First, a preliminary experimental study was conducted to evaluate five popular CNN architectures (in terms of classification performance and number of parameters33) on our lung cancer cell-line database and select the best model. The performance data from this evaluation are tabulated in Table 1. As shown, ResNet18 not only has better recognition performance (98.37% accuracy, 97.64% precision, 96.88% recall, and 97.12% F1-score) but also has fewer parameters (~ 25.6 million) than a similarly performing model such as AlexNet (60 million), which reduces the likelihood of overfitting34. Therefore, ResNet18 was chosen for our purpose, and the hyperparameters for the fine-tuned ResNet18 architecture were set as given in Table 2 (a configuration sketch follows the table). Note that we used the adaptive moment estimation (Adam) algorithm35 for training, and only the weights in the last 12 layers were trainable whereas all other weights were frozen. Figure 2 depicts sample images from our collected database, where 16-HBE represents a sample image of the healthy lung cell-line and the others show the lung cancer cell-lines.
Table 1.
Comparison of five deep neural network architectures.
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) | No. parameters (M = million) |
|---|---|---|---|---|---|
| AlexNet | 97.17 | 96.52 | 95.28 | 95.82 | 60 M |
| GoogLeNet | 88.26 | 89.57 | 86.50 | 87.44 | 4 M |
| ResNet18 | 98.37 | 97.64 | 96.88 | 97.12 | 25.6 M |
| Inception-v3 | 82.67 | 90.29 | 80.39 | 83.45 | 23.6 M |
| SqueezeNet | 94.41 | 92.33 | 90.48 | 90.62 | 1.2 M |
Table 2.
Parameter setting for the ResNet18 architecture.
| No. epochs | Mini-batch size | Initial learning rate | Learning rate factor | L2 regularization |
|---|---|---|---|---|
| 10 | 64 | 5×10⁻⁵ | 2 | 10⁻⁴ |
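For concreteness, the following is a minimal sketch of how the Table 2 settings map onto the training configuration in MATLAB, the toolbox we used for fine-tuning (see “Materials and methods”). The variable names are hypothetical: lgraph denotes the modified ResNet18 layer graph (its construction is sketched in the Methods), and augTrain/augVal denote augmented training and validation datastores.

```matlab
% Adam optimizer with the hyperparameters listed in Table 2.
options = trainingOptions('adam', ...
    'MaxEpochs',        10, ...      % Table 2: No. epochs
    'MiniBatchSize',    64, ...      % Table 2: Mini-batch size
    'InitialLearnRate', 5e-5, ...    % Table 2: Initial learning rate
    'L2Regularization', 1e-4, ...    % Table 2: L2 regularization
    'ValidationData',   augVal, ...
    'Shuffle',          'every-epoch', ...
    'Plots',            'training-progress');  % produces curves like Fig. 4

trainedNet = trainNetwork(augTrain, lgraph, options);
```

The learning rate factor of 2 in Table 2 applies to the newly added classification head rather than to trainingOptions; it appears as a learn-rate multiplier in the Methods sketch.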
Figure 2.
Representative fluorescence microscopic images of lung cell-lines (the normal line and five cancer lines) from our collected database.
The confusion matrix shown in Fig. 3 depicts the inter-class variability in cancer cell-line classification accuracy as well as the intra-class variability in discriminating between the healthy control and cancer cell-lines. This figure provides complete information on the outcome of our trained classifier, where the rows represent the predicted values of the target categories. As shown, the classifier achieved excellent accuracy (100%) in the prediction of normal samples. Based on the confusion matrix, the most misclassified cancer cell-lines were A-549 (85.3% accuracy) and H-1975 (96.5% accuracy). The other three cancer cell-lines, PC-9, A-427, and SK-LU-1, achieved excellent performance (99.5%, 100%, and 100% accuracy, respectively).
Figure 3.
Confusion matrix for lung cancer cell detection with ResNet18. Rows of the matrix represent the number of instances in a predicted class (upper number) as well as the percentage of correctly or incorrectly classified observations for each true class (lower number), while each column represents the instances in an actual class. Class-wise precisions and recalls are summarized at the end of each row and column, respectively, in green, while the corresponding error rates are given in red.
The final classification results after parameter setting for ResNet18 are given in Table 3. All measures are reported as mean ± standard deviation over five runs. The average F1-score of 97.29% (98.37% accuracy) in classification between normal and the different cancer cell-lines shows the potential of the method for clinical practice. Note that the small standard deviations indicate that the trained model produces stable results across all five experimental runs. The training progress plots for one of our experimental runs are depicted in Fig. 4 and show how the accuracy and loss curves converged after a few iterations.
Table 3.
Classification results for the ResNet18 architecture (six classes: normal (16-HBE), A-427, A-549, H-1975, SK-LU-1, and PC-9 cell-lines). Values are mean ± standard deviation for five experimental runs.
| Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) |
|---|---|---|---|
| 98.37 ± 0.36 | 97.38 ± 0.81 | 97.35 ± 0.67 | 97.29 ± 0.73 |
Figure 4.
Accuracy and loss curves in training progress for the ResNet18 model; (A) accuracy is plotted versus the training iteration for both training and validation data, (B) cross-entropy loss is plotted versus the training iteration for both training and validation data. Training plots were smoothed to better visualize trends.
It is also interesting to quantify the performance of the classifier in a binary setting: discrimination between normal and cancer cell-line images. As shown in Table 4, we achieved an average F1-score of 99.87% (99.77% accuracy). The precision of 100% means there were no false-positive errors in any of the five runs; that is, none of the normal images were predicted as cancerous.
Table 4.
Classification results for the ResNet18 architecture (normal vs. cancer). Values are mean ± standard deviation for five experimental runs.
| Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) |
|---|---|---|---|
| 99.77 ± 0.15 | 100 ± 0.00 | 99.74 ± 0.16 | 99.87 ± 0.08 |
Discussion
Pathology investigation of tissue slides is of significant importance in lung cancer assessment. For instance, in Tumor, Node, Metastasis (TNM) staging, the nodal stage (regional lymph node involvement) is determined by examining whether the tumor has invaded the lymph nodes, based on pathology slides36. Classification of histologic patterns in lung cancer is extremely critical for estimating the tumor grade and deciding on the patient’s treatment. However, this is a challenging task due to the heterogeneous nature of lung cancer and the subjective criteria for evaluation.
Developing a CAD method for lung cancer is an important clinical goal that could increase patient survival rates. Cell-based microfluidic systems have shown great promise in enhancing biotechnology applications by enabling easy single-cell manipulation and simultaneous multiplexed assays with only a small sample volume (microliter range). To this end, we merged microfluidic technology and deep learning algorithms to mimic the biological system, acquire data, and analyze the obtained data efficiently.
A few previous studies have applied deep learning to lung cancer pathology images to automatically analyze and interpret lung patterns29,37. One limitation of these studies is their use of TCGA data, where the cases submitted to this public database might be biased toward images with typical and definitive morphological patterns of disease, which can differ from what pathologists encounter in real-world practice38. That is, many slides of the histological images at multiple microscopic views might be examined by the pathologist, but only the most representative views are submitted to the database. A recent study8 used its own collected histopathological lung cancer data for the classification of lung adenocarcinoma patterns; however, the reported performance is not yet good enough to be relied on in clinical settings.
Our work is novel in several ways. First, we used cell lines to create our database for developing an automated lung cancer diagnosis system, since cell lines are well known and form a more homogeneous population compared to tissue-derived images. Second, we cultured the cell lines in a microfluidic chip, which is more similar to an in-vivo system and requires extremely low volumes of cells and reagents at the micro-scale. Furthermore, we attempted to automate the classification of five challenging lung cancer cell-lines (PC-9, SK-LU-1, H-1975, A-427, and A-549) cultured in a microfluidic platform, a task that would be challenging even for experienced pathologists. Finally, we proposed a deep learning model for classifying histologic patterns in lung cancer cell data, as it has been demonstrated that deep learning and microfluidics represent an ideal coupling of experimental and analytical throughput39. Our proposed workflow combines the efficacy of a suitable CNN model for extracting high-level features from input image data with the benefit of a transfer learning strategy that reduces the likelihood of overfitting.
Our study demonstrates that CNN models such as ResNet18 can be utilized to assist in the discrimination of lung cancer and normal cell-lines. Our results revealed that the ResNet18 architecture successfully distinguished normal versus cancerous cell-lines with a remarkable average accuracy of 99.77% and an F1-score of 99.87%.
We also showed that the classifier reached 100% precision, which means that no normal samples were predicted as cancerous. This is very important, since false-positive errors in cancer screening not only waste time and budget for the healthcare system but also impose anxiety, unnecessary stress, and physical and psychosocial harms on patients and possibly their families40. Our computer-based diagnosis of cell-lines would also significantly diminish the false-negative rate.
Our results so far are based on randomly splitting the data into training, validation, and test sets, in which the test data come from the same cell-lines used in training but consist of unseen samples. Although we showed the capability of the model to discriminate between normal cells and the five mentioned cancer cell-lines with remarkable performance, it is worth checking the generality of the method for classifying a new cancer cell-line of which no instances are observed during the training phase. To address this issue, and lacking access to new cell-line images, we trained the ResNet18 model as a binary classifier on normal cells versus a collection of four cancer cell-lines (randomly partitioned into training and validation sets in the ratio 80:20) and then tested it on the remaining, unseen cancer cell-line (a sketch of this protocol is given after Table 5). The results are tabulated in Table 5. As expected, classifier accuracy dropped substantially when dealing with the A-549 cell-line in the test phase. Consistent with this, even when A-549 samples were seen during training (Fig. 3), the model failed to classify all of them accurately as a cancer type, suggesting that A-549 exhibits morphological features similar to the normal cell-line. For the other cancer cell-lines, however, the model achieved acceptable results.
Table 5.
Classification accuracies of the ResNet18 architecture (normal vs. cancer) for unseen test data.
| Train and validation data | Test data | Validation accuracy (%) | Test accuracy (%) |
|---|---|---|---|
| Normal versus cancer (A-427, H-1975, PC-9, SK-LU-1) | A-549 | 99.92 | 73.54 |
| Normal versus cancer (A-549, H-1975, PC-9, SK-LU-1) | A-427 | 100 | 96.58 |
| Normal versus cancer (A-549, A-427, PC-9, SK-LU-1) | H-1975 | 100 | 95.58 |
| Normal versus cancer (A-549, A-427, H-1975, SK-LU-1) | PC-9 | 100 | 100 |
| Normal versus cancer (A-549, A-427, H-1975, PC-9) | SK-LU-1 | 99.89 | 99.80 |
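A minimal MATLAB sketch of this hold-one-cell-line-out protocol follows. The names are hypothetical (imds is assumed to be an imageDatastore of all images labelled by cell-line); the binarization into normal versus cancer uses mergecats.

```matlab
% Hold-one-cell-line-out: train on normal + four lines, test on the fifth.
cancerLines = {'A-549', 'A-427', 'H-1975', 'PC-9', 'SK-LU-1'};
binLabels   = mergecats(imds.Labels, cancerLines, 'cancer');  % normal vs cancer

for k = 1:numel(cancerLines)
    held    = cancerLines{k};
    idxPool = find(imds.Labels ~= held);   % normal + the other four lines
    idxHeld = find(imds.Labels == held);   % unseen cell-line, test only

    imdsPool = imageDatastore(imds.Files(idxPool));
    imdsPool.Labels = removecats(binLabels(idxPool));
    imdsHeld = imageDatastore(imds.Files(idxHeld));
    imdsHeld.Labels = removecats(binLabels(idxHeld));

    % 80:20 stratified train/validation split on the pooled data.
    [imdsTrain, imdsVal] = splitEachLabel(imdsPool, 0.8, 'randomized');
    % ... fine-tune ResNet18 as a binary classifier, then evaluate on imdsHeld ...
end
```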
Our selected model was also able to classify the lung cancer cell-lines with an excellent accuracy of 98.37% and an F1-score of 97.29%. Indeed, the deep features automatically learned by the ResNet18 architecture encode the biological characteristics of the distinct cell-lines, enabling more compact within-class distributions and clearer between-class separation, which results in high classification performance.
Conclusions
In this work, a large amount of raw data (normal and cancerous lung cell-line images) collected in a microfluidic system was processed by deep learning algorithms. Our work aimed to learn high-level discriminative features using CNNs to accurately classify lung cell-line images into six classes: five different cancer cell-lines (PC-9, SK-LU-1, H-1975, A-427, and A-549) and a normal cell-line (16-HBE). The remarkable performance achieved in this work confirms the value of integrating microfluidic technology for data acquisition with deep learning for data processing.
Our findings suggest that deep learning models can assist pathologists in the detection of cancer cell-lines, could be adopted in routine pathological practice, and could potentially help reduce the burden on pathologists. Given the results obtained in this work, future work will extend the framework to predict other types of cancer.
Materials and methods
Cell culture and imaging in microfluidic platform
The normal lung cell-line and the non-small cell lung cancer cell-lines (PC-9, SK-LU-1, H-1975, A-427, and A-549) were received from the Research Institute of Molecular Pathology (IMP), the Technical University of Vienna (TU Wien), and the Ludwig Boltzmann Institute for Cancer Research, Vienna, Austria. Based on our previous microfluidic cell-based assay work4,41, a microfluidic device was used for culturing the cancer cells. Briefly, the microfluidic template was designed with AutoCAD 2016 software (Autodesk, San Rafael, CA, USA), and a Polydimethylsiloxane (PDMS) sheet was cut using a CAM-1 GS-24 cutter (Roland DGA Corporation, Irvine, CA, USA). PDMS, the most commonly used polymer for microfluidic assays, was surface-functionalized and coated with collagen I; to this end, the PDMS sheet was plasma-treated and immersed in collagen I solution. The assembled microfluidic device was sterilized with ethanol (70%) and under UV exposure (20 min), and finally rinsed several times. The desired number of cells was injected into the micro-channels based on the surface area of the micro-channels, and after reaching 70–80% confluency, the cells were rinsed (with phosphate-buffered saline, PBS, at 37 °C), fixed (with 2% paraformaldehyde), and stained with DAPI (4′,6-diamidino-2-phenylindole) and phalloidin fluorescent dyes. Finally, the micro-channels containing the stained cells were rinsed several times with Deuterium-Depleted Water (DDW) and subjected to imaging with Olympus IX81 and IX71 microscopes (Olympus Ltd, Tokyo, Japan). The collected images then proceeded to further analysis.
Deep convolutional neural networks
Training deep learning models is a time-consuming process and often requires large numbers of annotated images, which may be difficult to acquire in the medical field. It also demands a costly system equipped with a Graphics Processing Unit (GPU) and large Random Access Memory (RAM). However, an approach called transfer learning can help researchers solve problems on medical images when the available dataset has few samples per class. In other words, transfer learning aims to transfer knowledge from a large source domain to a small target domain. For CNNs, this is often done by pre-training a model on the source dataset and then re-training parts of the model on the target dataset, a process named fine-tuning.
In this work, we are particularly interested in investigating the effectiveness of transferring features learned from a generic dataset to the classification of lung cancer types. To this end, we exploited five popular CNN architectures, GoogLeNet42, ResNet1831, AlexNet43, SqueezeNet44, and Inception-v330, all pre-trained on ImageNet45, currently the largest image classification dataset in computer vision. A sketch of the fine-tuning surgery for the selected ResNet18 is given below.
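The following is a minimal MATLAB sketch of this fine-tuning step for ResNet18: the ImageNet head is replaced by a six-class head, and all but the last 12 layers are frozen by zeroing their learn-rate factors. The layer names ('fc1000', 'ClassificationLayer_predictions') follow MATLAB's pre-trained resnet18 and are worth verifying with analyzeNetwork; variable names are hypothetical.

```matlab
% Load ImageNet-pre-trained ResNet18 (requires the Deep Learning Toolbox
% Model for ResNet-18 Network support package).
net    = resnet18;
lgraph = layerGraph(net);

% Replace the 1000-class ImageNet head with a six-class head; the learn-rate
% factor of 2 corresponds to the "learning rate factor" in Table 2.
newFc = fullyConnectedLayer(6, 'Name', 'fcCellLines', ...
    'WeightLearnRateFactor', 2, 'BiasLearnRateFactor', 2);
lgraph = replaceLayer(lgraph, 'fc1000', newFc);
lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', ...
    classificationLayer('Name', 'cellLineOutput'));

% Freeze all but the last 12 layers by zeroing their learn-rate factors
% (batch-normalization scale/offset factors could be zeroed analogously).
layers = lgraph.Layers;
for i = 1:numel(layers) - 12
    if isprop(layers(i), 'WeightLearnRateFactor')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor   = 0;
        lgraph = replaceLayer(lgraph, layers(i).Name, layers(i));
    end
end
```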
Moreover, to handle the problem of class-imbalanced data46, we employed an augmentation strategy (including scaling, rotation, and translation) to equalize the sample distribution across the six classes. That is, for each class, the necessary number of augmented samples was randomly selected so that all classes reached the training-set size of the majority class; a sketch of this balancing step follows.
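The balancing step could be sketched in MATLAB as follows, assuming a training imageDatastore named imdsTrain (hypothetical) and illustrative transformation ranges. Duplicated minority-class files receive fresh random transformations every epoch via augmentedImageDatastore, which approximates augmentation-based oversampling.

```matlab
% Random scaling, rotation, and translation (ranges here are illustrative).
augmenter = imageDataAugmenter( ...
    'RandRotation',     [-30 30], ...
    'RandScale',        [0.9 1.1], ...
    'RandXTranslation', [-10 10], ...
    'RandYTranslation', [-10 10]);

% Oversample each minority class up to the majority-class count.
tbl      = countEachLabel(imdsTrain);
maxCount = max(tbl.Count);
files  = imdsTrain.Files;
labels = imdsTrain.Labels;
for c = 1:height(tbl)
    idx   = find(imdsTrain.Labels == tbl.Label(c));
    extra = idx(randi(numel(idx), maxCount - numel(idx), 1)); % with replacement
    files  = [files;  imdsTrain.Files(extra)];
    labels = [labels; imdsTrain.Labels(extra)];
end
imdsBalanced = imageDatastore(files);
imdsBalanced.Labels = labels;

% Resize to the network input; duplicates are re-augmented on the fly.
augTrain = augmentedImageDatastore([224 224 3], imdsBalanced, ...
    'DataAugmentation', augmenter);
```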
Performance evaluation and experimental setup
For classification tasks on imbalanced databases, the accuracy rate is an inadequate measure despite its popularity in the literature. To provide a fair assessment of the classifier’s performance, we therefore used additional metrics: precision, recall, and F1-score47. In an imbalanced classification problem with more than two classes, precision is calculated as the sum of true positives across all classes divided by the sum of true positives and false positives across all classes, and recall as the sum of true positives across all classes divided by the sum of true positives and false negatives across all classes. Maximizing precision minimizes the number of false positives, whereas maximizing recall minimizes the number of false negatives. The F1-score combines precision and recall into a single measure that captures both properties. The evaluation metrics used to report our results are given in Eqs. (1)–(4).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (1)$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (2)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (3)$$

$$\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (4)$$
where TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively.
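As a worked illustration, the following hypothetical MATLAB helper computes these metrics from a confusion matrix under one common, class-averaged reading of Eqs. (1)–(4). It assumes confusionmat's convention that C(i,j) counts samples of true class i predicted as class j.

```matlab
function [acc, prec, rec, f1] = classificationMetrics(C)
% C: K-by-K confusion matrix, C(i,j) = samples of true class i predicted as j.
TP = diag(C);                          % true positives per class
precPerClass = TP ./ sum(C, 1)';       % TP/(TP+FP): column sums = predicted counts
recPerClass  = TP ./ sum(C, 2);        % TP/(TP+FN): row sums = true counts
acc  = sum(TP) / sum(C(:));            % overall accuracy, Eq. (1)
prec = mean(precPerClass);             % class-averaged precision, Eq. (2)
rec  = mean(recPerClass);              % class-averaged recall, Eq. (3)
f1   = 2 * prec * rec / (prec + rec);  % Eq. (4)
end
```

A usage sketch, assuming a trained network and a resized test datastore: C = confusionmat(imdsTest.Labels, classify(trainedNet, augmentedImageDatastore([224 224 3], imdsTest))); followed by [acc, prec, rec, f1] = classificationMetrics(C).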
Our data were split into training, validation, and test sets in the ratio 60:20:20 with a random stratified partition that preserves the ratio between classes. This procedure was repeated five times, changing the random partition at the beginning of each repetition. The original number of images in our dataset is listed in Table 6, and a sketch of the partitioning is given after the table.
Table 6.
Overview of the number of images in our database.
| 16-HBE (normal) | PC-9 | SK-LU-1 | H-1975 | A-427 | A-549 |
|---|---|---|---|---|---|
| 800 | 1988 | 2469 | 860 | 438 | 511 |
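A minimal MATLAB sketch of this stratified partitioning, assuming (hypothetically) that the images are stored in one sub-folder per cell-line:

```matlab
% Label each image by its cell-line sub-folder name.
imds = imageDatastore('lungCellImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% Stratified 60:20:20 split; the unassigned 20% remainder becomes the test set.
[imdsTrain, imdsVal, imdsTest] = splitEachLabel(imds, 0.6, 0.2, 'randomized');
```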
Model selection and parameter setting were performed on the validation dataset in a greedy search manner48.
Note that fine-tuning of the pre-trained CNNs was performed using MATLAB 2019a running on a desktop workstation equipped with an 8 GB NVIDIA GeForce GTX 745 GPU.
Acknowledgments
The authors are thankful for funding from the Iran National Science Foundation (INSF), Grant No. 96006759, and TU Wien. The authors would like to thank Christoph Eilenberger and Sarah Spitz for their help with this experiment.
Author contributions
H.H., S.Sh., A.A. software; S.Sh. conceptualization and methodology; H.H., A.A., S.Sh., M.R. validation; H.H., A.A., S.Sh., M.R., P.E., H.N-M. formal analysis; H.H., A.A., S.Sh., M.R., P.E., H.N-M. data curation; H.H., S.Sh., A.A. writing—original draft preparation; H.H., A.A., S.Sh., M.R., P.E., H.N-M. writing—review and editing; S.Sh., A.A., P.E., H.N-M. supervision; P.E., H.N-M. project administration; H.H., A.A., S.Sh., M.R., P.E., H.N-M. funding acquisition.
Competing interests
The authors declare no competing interests.
Footnotes
This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1038/s41598-025-09817-y
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Hadi Hashemzadeh and Seyedehsamaneh Shojaeilangari.
Change history
7/7/2025
This article has been retracted. Please see the Retraction Notice for more detail: 10.1038/s41598-025-09817-y
Contributor Information
Seyedehsamaneh Shojaeilangari, Email: s.shojaie@irost.ir.
Peter Ertl, Email: Peter.ertl@tuwien.ac.at.
Hossein Naderi-Manesh, Email: Naderman@modares.ac.ir.
References
- 1. Darvishi, M. H. et al. Targeted DNA delivery to cancer cells using a biotinylated chitosan carrier. Biotechnol. Appl. Biochem. 64(3), 423–432. 10.1002/bab.1497 (2017).
- 2. Esfandyari, J. et al. Capture and detection of rare cancer cells in blood by intrinsic fluorescence of a novel functionalized diatom. Photodiagn. Photodyn. Ther. 30, 101753. 10.1016/j.pdpdt.2020.101753 (2020).
- 3. Khaledian, M., Nourbakhsh, M. S., Saber, R., Hashemzadeh, H. & Darvishi, M. H. Preparation and evaluation of doxorubicin-loaded PLA–PEG–FA copolymer containing superparamagnetic iron oxide nanoparticles (SPIONs) for cancer treatment: Combination therapy with hyperthermia and chemotherapy. Int. J. Nanomed. 15, 6167–6182. 10.2147/IJN.S261638 (2020).
- 4. Hashemzadeh, H., Allahverdi, A., Sedghi, M. & Vaezi, Z. PDMS nano-modified scaffolds for improvement of stem cells proliferation and differentiation in microfluidic platform. Nanomaterials 10(4), 668 (2020).
- 5. Foster, K. A. et al. Characterization of the A549 cell line as a type II pulmonary epithelial cell model for drug metabolism. Exp. Cell Res. 243(2), 359–366 (1998).
- 6. Melguizo, C. et al. Modulation of MDR1 and MRP3 gene expression in lung cancer cells after paclitaxel and carboplatin exposure. Int. J. Mol. Sci. 13(12), 16624–16635. 10.3390/ijms131216624 (2012).
- 7. Kishore, R. An effective and efficient feature selection method for lung cancer detection. Int. J. Comput. Sci. Inf. Technol. 7(4), 135–141. 10.5121/ijcsit.2015.7412 (2015).
- 8. Wei, J. W. et al. Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci. Rep. 9(1), 1–8. 10.1038/s41598-019-40041-7 (2019).
- 9. Munir, K., Elahi, H., Ayub, A., Frezza, F. & Rizzi, A. Cancer diagnosis using deep learning: A bibliographic review. Cancers (Basel) 11(9), 1–36. 10.3390/cancers11091235 (2019).
- 10. Brimo, F., Schultz, L. & Epstein, J. I. The value of mandatory second opinion pathology review of prostate needle biopsy interpretation before radical prostatectomy. J. Urol. 184(1), 126–130. 10.1016/j.juro.2010.03.021 (2010).
- 11. Elmore, J. G. et al. Diagnostic concordance among pathologists interpreting breast biopsy specimens. JAMA 313(11), 1122–1132 (2015).
- 12. Gao, F. et al. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 8(9), 20–25. 10.1038/s41389-019-0157-8 (2019).
- 13. Shen, L. et al. Deep learning to improve breast cancer detection on screening mammography. Sci. Rep. 9(1), 1–12. 10.1038/s41598-019-48995-4 (2019).
- 14. Asuntha, A. & Srinivasan, A. Deep learning for lung cancer detection and classification. Multimed. Tools Appl. 79(11–12), 7731–7762. 10.1007/s11042-019-08394-3 (2020).
- 15. Teramoto, A., Tsukamoto, T., Kiriyama, Y. & Fujita, H. Automated classification of lung cancer types from cytological images using deep convolutional neural networks. Biomed. Res. Int. 10.1155/2017/4067832 (2017).
- 16. Guo, Y. et al. Deep learning for visual understanding: A review. Neurocomputing 187, 27–48. 10.1016/j.neucom.2015.09.116 (2016).
- 17. Ragab, D. A., Sharkas, M., Marshall, S. & Ren, J. Breast cancer detection using deep convolutional neural networks and support vector machines. PeerJ 2019(1), 1–23. 10.7717/peerj.6201 (2019).
- 18. Hossain, T., Shishir, F. S., Ashraf, M., Al Nasim, M. A. & Muhammad Shah, F. Brain tumor detection using convolutional neural network, in 1st Int. Conf. Adv. Sci. Eng. Robot. Technol. (ICASERT 2019), 1–6 (2019). 10.1109/ICASERT.2019.8934561.
- 19. Yoon, H. J. & Kim, J. H. Lesion-based convolutional neural network in diagnosis of early gastric cancer. Clin. Endosc. 53(2), 127–131. 10.5946/ce.2020.046 (2020).
- 20. Yoo, S., Gujrathi, I., Haider, M. A. & Khalvati, F. Prostate cancer detection using deep convolutional neural networks. Sci. Rep. 9(1), 1–10. 10.1038/s41598-019-55972-4 (2019).
- 21. Alakwaa, W., Nassef, M. & Badr, A. Lung cancer detection and classification with 3D convolutional neural network (3D-CNN). Int. J. Biol. Biomed. Eng. 11(8), 66–73. 10.14569/ijacsa.2017.080853 (2017).
- 22. Anthimopoulos, M., Christodoulidis, S., Ebner, L., Christe, A. & Mougiakakou, S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans. Med. Imaging 35(5), 1207–1216. 10.1109/TMI.2016.2535865 (2016).
- 23. Hochhegger, B. et al. PET/CT imaging in lung cancer: Indications and findings. J. Bras. Pneumol. 41(3), 264–274. 10.1590/s1806-37132015000004479 (2015).
- 24. Neal, R. D. et al. Immediate chest X-ray for patients at risk of lung cancer presenting in primary care: Randomised controlled feasibility trial. Br. J. Cancer 116(3), 293–302. 10.1038/bjc.2016.414 (2017).
- 25. Stapley, S., Sharp, D. & Hamilton, W. Negative chest X-rays in primary care patients with lung cancer. Br. J. Gen. Pract. 56(529), 570–573 (2006).
- 26. Kanavati, F. et al. Weakly-supervised learning for lung carcinoma classification using deep learning. Sci. Rep. 10(1), 1–11. 10.1038/s41598-020-66333-x (2020).
- 27. Tan, M. & Le, Q. V. EfficientNet: Rethinking model scaling for convolutional neural networks, in 36th Int. Conf. Mach. Learn. (ICML 2019), 10691–10700 (2019).
- 28. Teramoto, A. et al. Automated classification of benign and malignant cells from lung cytological images using deep convolutional neural network. Inform. Med. Unlocked 16, 20–25. 10.1016/j.imu.2019.100205 (2019).
- 29. Coudray, N. et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 24(10), 1559–1567. 10.1038/s41591-018-0177-5 (2018).
- 30. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2818–2826 (2016). 10.1109/CVPR.2016.308.
- 31. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 770–778 (2016). 10.1109/CVPR.2016.90.
- 32. Ma, Y.-H.V., Middleton, K., You, L. & Sun, Y. A review of microfluidic approaches for investigating cancer extravasation during metastasis. Microsyst. Nanoeng. 4(1), 1–13. 10.1038/micronano.2017.104 (2018).
- 33. Khan, A., Sohail, A., Zahoora, U. & Qureshi, A. S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 10.1007/s10462-020-09825-6 (2020).
- 34. Kather, J. N. et al. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nat. Med. 25(7), 1054–1056. 10.1038/s41591-019-0462-y (2019).
- 35. Kingma, D. P. & Ba, J. L. Adam: A method for stochastic optimization, in 3rd Int. Conf. Learn. Represent. (ICLR 2015), 1–15 (2015).
- 36. Wang, S. et al. Artificial intelligence in lung cancer pathology image analysis. Cancers (Basel) 11(11), 1–16. 10.3390/cancers11111673 (2019).
- 37. Wang, S. et al. Comprehensive analysis of lung cancer pathology images to discover tumor shape and boundary features that predict survival outcome. Sci. Rep. 8(1), 1–9. 10.1038/s41598-018-27707-4 (2018).
- 38. Yu, K. H. et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat. Commun. 7, 1–10. 10.1038/ncomms12474 (2016).
- 39. Riordon, J., Sovilj, D., Sanner, S., Sinton, D. & Young, E. W. K. Deep learning with microfluidics for biotechnology. Trends Biotechnol. 37(3), 310–324. 10.1016/j.tibtech.2018.08.005 (2019).
- 40. Rasmussen, J. F., Siersma, V., Malmqvist, J. & Brodersen, J. Psychosocial consequences of false positives in the Danish Lung Cancer CT Screening Trial: A nested matched cohort study. BMJ Open 10(6), 1–9. 10.1136/bmjopen-2019-034682 (2020).
- 41. Hashemzadeh, H., Allahverdi, A., Ghorbani, M. & Soleymani, H. Gold nanowires/fibrin nanostructure as microfluidics platforms for enhancing stem cell differentiation: Bio-AFM study. Micromachines 11(1), 20–25. 10.3390/mi11010050 (2019).
- 42. Szegedy, C. et al. Going deeper with convolutions, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 1–9 (2015). 10.1109/CVPR.2015.7298594.
- 43. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems, 1097–1105 (2012).
- 44. Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J. & Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size, 1–13 (2016). http://arxiv.org/abs/1602.07360.
- 45. Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252. 10.1007/s11263-015-0816-y (2015).
- 46. Johnson, J. M. & Khoshgoftaar, T. M. Survey on deep learning with class imbalance. J. Big Data 6(1), 20–25. 10.1186/s40537-019-0192-5 (2019).
- 47. Sokolova, M. & Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 45, 427–437 (2009).
- 48. Yoo, Y. J. Hyperparameter optimization of deep neural network using univariate dynamic encoding algorithm for searches. Knowl. Based Syst. 178, 74–83. 10.1016/j.knosys.2019.04.019 (2019).