Int Forum Allergy Rhinol. 2024 Sep 20;15(2):188–190. doi: 10.1002/alr.23454

Quantitative characterization of eosinophilia in nasal polyps with AI‐based single cell classification

Martin Stampe 1, Ida Skovgaard Christiansen 1, Vibeke Backer 2, Kasper Aanæs 2, Anne‐Sophie Homøe 2, Jens Tidemandsen 2, Emilie Neumann Nielsen 1,3, Sigrid Louise Hjorth Rasmussen 1,3, Rasmus Hartvig 1, Katalin Kiss 1, Thomas Hartvig Lindkær Jensen 1,4
PMCID: PMC11785147  PMID: 39302216

Key points

  • Eosinophilic granulocytes have characteristic morphological features.

  • This makes them prime candidates for classification by a single cell binary classification network.

  • Single cell binary classification networks can reliably help quantify eosinophils in nasal polyps.

Keywords: chronic rhinosinusitis, disease severity, eosinophilic rhinitis, nasal polyposis

1. INTRODUCTION

Classification of chronic rhinosinusitis into eosinophilic and non‐eosinophilic phenotypes is subject to uncertainty. 1 Reported approaches consist exclusively or mainly of counting eosinophils in pre‐specified areas of the polyps, with the distinction set at a clinically correlated cut‐off. 1 , 2 The recent introduction of monoclonal antibodies as treatment for eosinophilic chronic rhinosinusitis underlines the need for robust classification and quantification. 3 In this study, we compared the ability of a deep learning classifier to identify and classify eosinophilic granulocytes (EOS) and non‐eosinophils (non‐EOS) with that of a trained pathologist. The trained network was applied to the samples examined by the pathologist, and the results were correlated.

2. METHODS

Fifty hematoxylin and eosin‐stained samples of resected nasal polyps were collected from 50 consecutive patients with nasal polyps and scanned anonymously at 40× (NanoZoomer S360), producing 50 NDPI whole slide images.

Twenty‐five of the whole slide images (WSIs) were chosen at random for training a cell‐based binary deep learning classifier to recognize EOS. All cells were localized by instance segmentation of cell nuclei using the deep learning convolutional neural network (CNN) model Hover‐Net, 4 pre‐trained on the PanNuke dataset. 5 The binary classification step used a CNN model influenced by the ResNet framework. 6 Annotation and accumulation of training data were performed in a one‐step process by clicking on cells and designating them as either EOS or non‐EOS (T.H.L.J.). In total, 610 EOS and 1376 non‐EOS were accumulated for training. Based on initial training sessions, the following augmentation settings were selected: horizontal flip, vertical flip, and a 25‐degree rotation range.
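The selected augmentations can be sketched in a framework‐agnostic way. The study does not disclose its training framework, so the function below is an illustrative assumption, not the authors' code: it applies the three stated operations (random horizontal flip, random vertical flip, rotation within ±25 degrees) to a single training patch using plain numpy, with a simple nearest‐neighbour resampling for the rotation.

```python
import numpy as np

def augment(patch, rng):
    """Sketch of the stated augmentations: random horizontal flip,
    random vertical flip, and rotation within +/-25 degrees.
    Nearest-neighbour rotation is used here for illustration only."""
    if rng.random() < 0.5:          # assumed 50% flip probability
        patch = patch[:, ::-1]      # horizontal flip
    if rng.random() < 0.5:
        patch = patch[::-1, :]      # vertical flip
    angle = np.deg2rad(rng.uniform(-25, 25))
    h, w = patch.shape[:2]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map each output pixel back into the source patch
    src_y = cy + (ys - cy) * np.cos(angle) - (xs - cx) * np.sin(angle)
    src_x = cx + (ys - cy) * np.sin(angle) + (xs - cx) * np.cos(angle)
    src_y = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    return patch[src_y, src_x]
```

In practice such a function would be called once per patch per training epoch, so the network never sees exactly the same orientation twice.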

The remaining 25 WSI were presented to a pathologist (I.S.C.) for manual counting of EOS. In each image, three high‐power fields (HPFs) including surface epithelium and perceived highest presence of EOS were selected. In each HPF, the total number of EOS was counted, and the area outlined with the embedded annotation tool.

The trained binary classifier was compared with the manual counts by applying Hover‐Net nuclear segmentation to the same 75 HPFs and classifying the detected cells as EOS or non‐EOS with the trained binary network. For all cells in the HPFs, the following information was exported to a spreadsheet: predicted cell type (EOS or non‐EOS), coordinates of centroids, and distance in pixels to the nearest neighbor of the same cell type. Cells classified as EOS were sorted by distance to the nearest neighbor. By visual inspection, it was determined that nuclei with a centroid‐to‐centroid distance below 25 pixels (≈6 µm) were single EOS registered as two EOS due to lobulated nuclei (Figure 1). Cross‐entropy loss was used to quantify the difference between the probability distribution predicted by the CNN and the true probability distribution. 7 Cross‐entropy loss is a loss function used to measure the performance of classification models; a value close to 0 indicates a high degree of performance. Following annotation of training data and manual classification, audits were held between T.H.L.J. and M.S. as well as T.H.L.J. and I.S.C. to ensure standardization of the results.
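The nearest‐neighbour correction described above can be sketched as a greedy pairing over centroid distances. The numpy sketch below is an assumption about one way to implement it (the function name and greedy strategy are ours; the authors worked from an exported spreadsheet): each pair of EOS centroids closer than 25 pixels is collapsed into a single count, and every detection is merged at most once.

```python
import numpy as np

def dedup_eos(centroids, thresh=25.0):
    """Collapse EOS detections whose centroids lie closer than
    `thresh` pixels (~6 um at 40x), treating each such pair as one
    lobulated nucleus split in two by segmentation. Greedy sketch."""
    pts = np.asarray(centroids, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    merged = np.zeros(n, dtype=bool)
    count = 0
    # visit pairs from closest to farthest, merging each point once
    for i, j in zip(*np.unravel_index(np.argsort(d, axis=None), d.shape)):
        if i < j and d[i, j] < thresh and not merged[i] and not merged[j]:
            merged[i] = merged[j] = True
            count += 1                   # the pair counts as one EOS
    count += int((~merged).sum())        # unpaired detections count singly
    return count
```

For example, four detections of which two sit 10 pixels apart would yield a corrected count of three.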

FIGURE 1.

FIGURE 1

A close‐up of three cells and the subsequent segmentation of nuclei and cell classification are shown. The leftmost picture shows three different cell types. The segmentation convolutional neural network (CNN) identifies morphological features of nuclei and outlines them (middle picture). The binary classification CNN then classifies each cell as either an eosinophilic granulocyte (EOS, green) or a non‐EOS (red). The displayed EOS is an example of one nucleus being detected as two close nuclei by segmentation. Black scale bar = 10 µm.

3. RESULTS

The training session resulted in a validation accuracy of 0.98, an accuracy of 0.98 on test samples, and a cross‐entropy loss of 0.06, indicating a high degree of performance.

Correlation between manually and digitally counted EOS is displayed in Figure 2. Near‐perfect correlations were noted, with R²≈0.98 for all HPFs and R²≈0.99 for the averages of the three HPFs in each polyp. Direct comparison between manually and digitally detected EOS showed clear congruence for obvious EOS. Two trends were noted in the discrepancies. Eosinophils with only a small presentation of cytoplasm containing red vesicles may be overlooked during manual counting; retrospectively, the pathologist agreed that these examples were EOS. Conversely, cells with bi‐lobulated nuclei and no red vesicles in the cytoplasm were classified as non‐EOS by the digital counting but may be counted as EOS by the pathologist because of the characteristic nuclear constellation.
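The R² values quoted here describe a straight‐line fit of digital against manual counts. A minimal numpy sketch of that computation could read as follows; the function name is an assumption, and the authors' exact statistical software is not stated.

```python
import numpy as np

def r_squared(manual, digital):
    """Coefficient of determination for a least-squares line fitted
    to digital counts as a function of manual counts."""
    x = np.asarray(manual, dtype=float)
    y = np.asarray(digital, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # ordinary least squares
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

An R² of 1.0 would mean the digital counts are an exact linear function of the manual counts; values of 0.98 to 0.99 indicate nearly all variation in one count is explained by the other.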

FIGURE 2.

FIGURE 2

The graph shows the correlation between manual counting of eosinophilic granulocytes (EOS) by the pathologist and digital counting of EOS by the deep learning classifier. The x‐axis denotes manually counted EOS, and the y‐axis denotes digitally counted EOS. Reddish dots represent absolute numbers of EOS counted, and black dots denote the averages of the three high‐power fields (HPFs).

4. DISCUSSION

We have tested a digital algorithm comprising segmentation of cell nuclei and subsequent classification as EOS or non‐EOS as an alternative to manual counting of EOS in hematoxylin and eosin‐stained WSIs of nasal polyps. The results suggest that manual counting of eosinophils in nasal polyps can be confidently automated with the proposed algorithm and a modest amount of training data. One can argue that digital counting is more reliable, as EOS with small presentations of cytoplasm may be overlooked by the pathologist. As trained, however, the system was not able to detect EOS with no red vesicles present in the cytoplasm. It should be noted that the study was performed on a small dataset from a single pathology department, and the material is assumed to be representative of the full spectrum of EOS morphology. Despite the firm correlation, wider use of the system may require inclusion of training data from other departments to accommodate potential differences in staining. The variation in approaches to counting eosinophils in polyps arguably demonstrates a need for reproducible and clinically validated quantification methods. Further studies are needed to confirm applicability in other settings and potentially in other disease entities.

Stampe M, Christiansen IS, Backer V, et al. Quantitative characterization of eosinophilia in nasal polyps with AI‐based single cell classification. Int Forum Allergy Rhinol. 2025;15:188–190. 10.1002/alr.23454

REFERENCES

  • 1. Toro MDC, Antonio MA, Alves Dos Reis MG, de Assumpcao MS, Sakano E. Achieving the best method to classify eosinophilic chronic rhinosinusitis: a systematic review. Rhinology. 2021;59(4):330‐339.
  • 2. Grayson JW, Hopkins C, Mori E, Senior B, Harvey RJ. Contemporary classification of chronic rhinosinusitis beyond polyps vs no polyps: a review. JAMA Otolaryngol Head Neck Surg. 2020;146(9):831‐838.
  • 3. Hellings PW, Verhoeven E, Fokkens WJ. State‐of‐the‐art overview on biological treatment for CRSwNP. Rhinology. 2021;59(2):151‐163.
  • 4. Graham S, Vu QD, Raza SEA, et al. Hover‐Net: simultaneous segmentation and classification of nuclei in multi‐tissue histology images. Med Image Anal. 2019;58:101563.
  • 5. Gamper J, Koohbanani NA, Benes K, et al. PanNuke dataset extension, insights and baselines. arXiv. 2020. doi:10.48550/arXiv.2003.10778
  • 6. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV; 2016.
  • 7. Connor R, Dearle A, Claydon B, Vadicamo L. Correlations of cross‐entropy loss in machine learning. Entropy (Basel). 2024;26(6):491. doi:10.3390/e26060491
