Abstract
Background:
Autofluorescence imaging of the coenzyme reduced nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) provides a label-free technique to assess cellular metabolism. Because NAD(P)H is localized in the cytosol and mitochondria, instance segmentation of cell cytoplasms from NAD(P)H images allows quantification of metabolism with cellular resolution. However, accurate cytoplasmic segmentation of autofluorescence images is difficult due to irregular cell shapes and cell clusters.
Method:
Here, a cytoplasm segmentation method is presented and tested. First, autofluorescence images are segmented into cells via either hand-segmentation or Cellpose, a deep learning-based segmentation method. Then, a cytoplasmic post-processing algorithm (CPPA) is applied for cytoplasmic segmentation. CPPA uses a binarized segmentation image to remove non-segmented pixels from the NAD(P)H image and then applies an intensity-based threshold to identify nuclei regions. Errors at cell edges are removed using a distance transform algorithm. The nucleus mask is then subtracted from the cell segmented image to yield the cytoplasm mask image. CPPA was tested on five NAD(P)H images each of three different cell samples: quiescent T cells, activated T cells, and MCF7 cells.
Results:
Using POSEA, an evaluation method tailored for instance segmentation, the CPPA yielded F-measure values of 0.89, 0.87, and 0.94 for quiescent T cells, activated T cells, and MCF7 cells, respectively, for cytoplasm identification of hand-segmented cells. CPPA achieved F-measure values of 0.84, 0.74, and 0.72 for Cellpose segmented cells.
Conclusion:
These results exceed the F-measure value of a comparative cell segmentation method (CellProfiler, ~0.50–0.60) and support the use of artificial intelligence and post-processing techniques for accurate segmentation of autofluorescence images for single-cell metabolic analyses.
1. Introduction
Cellular heterogeneity within tissues is characteristic of both healthy tissue and many diseases, including autoimmune disorders [1], skin fibrotic conditions [2], lysosomal storage ailments [3], and cancers [4]. Tumors contain many different cell types including cancer cells, vasculature, immune cells, and stromal cells. Increased intratumoral heterogeneity is correlated with less favorable patient outcomes, because it can foster drug-resistant cell populations [5]. Hence, the measurement of cellular heterogeneity is important for understanding tissue pathologies and devising effective therapeutic strategies. However, assessing cellular heterogeneity remains difficult. Conventional biochemical techniques like western blot [6], mRNA analysis [7], and oxygen consumption measurements such as the Seahorse assay [8], provide measurements of bulk populations of cells and lack single-cell resolution. In contrast, single-cell assessment techniques, such as flow cytometry [9] and single-cell RNA sequencing [10], require the cells to be in a suspension form, preventing concurrent analysis of spatial positioning and relationships. Furthermore, mainstream biochemical methods such as flow cytometry [9] often necessitate permeabilization of cells for intracellular labeling with external contrast agents, which prevents in vivo and time-course experiments.
Fluorescence microscopy enables cellular heterogeneity analysis, when images are segmented and analyzed at the single-cell level. Autofluorescence imaging (AFI) detects the fluorescence emitted by endogenous molecules such as reduced nicotinamide adenine dinucleotide (NADH), flavins, collagen, and porphyrins [11]. Both NADH and the phosphate-bound form, NADPH, are fluorescent with similar fluorescence excitation and emission properties [12]. Therefore, the notation NAD(P)H is used for the combined fluorescence signal of both NADH and NADPH. Due to the functions of NADH and flavin adenine dinucleotide (FAD) as co-enzymes of metabolic reactions, autofluorescence imaging of these molecules allows label-free imaging and assessment of cellular metabolism [13–15]. Fluorescence imaging of NAD(P)H and FAD has been leveraged to study heterogeneity in immune cells [16–18], cancer cells [19–22], cellular responses to therapeutic interventions [23–26], and spatial intratumoral heterogeneity [27].
To fully exploit autofluorescence imaging for heterogeneity analysis of metabolism within cell populations, images must be segmented into individual cytoplasms due to the localization of NAD(P)H in the cytosol and mitochondria [28]. Various strategies have been developed to segment cells within microscopy images. While some methods, tailored to particular datasets, rely on conventional techniques like intensity-based thresholds or watershed operations [29–31], others use machine learning including convolutional neural networks for pixel classification and image partitioning [32–35]. CellProfiler is an open-source image analysis program that allows users to adopt modular image processing pipelines to automate image analysis [36]. CellProfiler is frequently employed for cell segmentation of autofluorescence images [29,37–39]. In our previous research [40], the POSEA algorithm was used to assess the segmentation results by CellProfiler for three different cell types. Specifically, we evaluated cell segmentation of quiescent T cells, activated T cells, and cytoplasmic segmentation of MCF7 cells. The F-measure for quiescent T cell segmentation was 0.8. However, the activated T cell masks had a lower F-measure of approximately 0.7, and the F-measure for cytoplasmic segmentation of MCF7 cells was below 0.4, a result that was not acceptable for precise segmentation. These CellProfiler results, especially the poorly segmented results for MCF7 cells, underscore the need for improved segmentation techniques capable of handling the challenges presented by autofluorescence images, such as the low signal-to-noise ratio of the images, irregular cell shapes, and cell clusters [41].
Recently, machine learning and deep learning algorithms have shown promise for microscopy image segmentation. Commonly used methods such as Mask R–CNN [42], StarDist [43], and CellSeg [44] have pushed the boundaries of precision and efficiency in segmenting individual cells and intracellular components from complex images. Mask R–CNN extends object detection capabilities to provide precise segmentation masks; StarDist models cell shapes with innovative geometric approximations; and CellSeg tailors deep learning architectures for cellular structures, demonstrating the versatility of CNNs in capturing details that facilitate cellular analysis. However, despite their success, these methods encounter limitations when applied to certain imaging conditions and datasets. A common challenge is their dependence on large, well-annotated datasets for training—a requirement often unmet in specialized fields such as autofluorescence imaging, where the availability of extensive data is limited.
Cellpose [45] is a deep learning-based segmentation tool designed to segment cellular structures, including cell bodies, membranes, and nuclei, from microscopy images. Cellpose is capable of accurately handling a wide range of image types without the need for parameter adjustments because it was trained on a diverse and generalized dataset of over 70,000 segmented cells, enabling it to effectively segment different cellular shapes. Although not trained on autofluorescence images, Cellpose performs well at the detection of cell boundaries within autofluorescence images. However, Cellpose is unable to accurately detect nuclei or cytoplasmic regions within the cells of autofluorescence images.
Due to the importance of separating the cytoplasm region that contains metabolic information from the non-metabolizing nucleus regions within autofluorescence images, here, we introduce the combination of Cellpose for cell-segmentation with a novel cytoplasmic post-processing algorithm (CPPA) for cytoplasm segmentation of NAD(P)H images. This technique addresses common challenges for autofluorescence image segmentation including the low signal-to-noise ratios, irregular cell shapes, and the presence of cell clusters. Six different thresholding methods, specifically Isodata [46], Li [47], Mean [48], Otsu [49], Triangle [50], and Yen [51], were compared to optimize segmentation performance across NAD(P)H images of three distinct cell populations: quiescent T cells, activated T cells, and MCF7 cells. These cells were selected to demonstrate the ability of CPPA to segment images with cells of different sizes and morphologies. The accuracy of CPPA was compared with CellProfiler-segmentation results using a per-object segmentation evaluation algorithm (POSEA) [40,52].
2. Materials and methods
2.1. Dataset
2.1.1. Fluorescence images of T cells
Fluorescence images of T cells were sourced from a dataset previously published in Ref. [16] that was provided by Drs. Walsh and Skala. This dataset contains around 200 NAD(P)H images of T cells, accompanied by their respective CellProfiler cytoplasmic segmentation results. These images include two distinct T cell populations: quiescent and activated. Both the fluorescence intensity images and the CellProfiler segmentation images are 32-bit 256 × 256 pixel grayscale images with a pixel size of approximately 0.43 μm per pixel. To create ground truth images of segmented cytoplasms, five random images each from the pools of quiescent and activated T cell images were picked for manual segmentation to reduce selection bias. Of the selected images, each quiescent T cell image contained between 150 and 160 individual cells, while the activated T cell images contained 32 to 60 cells. Quiescent and activated T cells present different segmentation challenges due to differences in their morphology. Quiescent T cells have uniform, round shapes, whereas activated T cells are more varied in size and tend to cluster.
2.1.2. Fluorescence images of MCF7 cells
The protocol for obtaining fluorescence images of MCF7 cells is detailed in Ref. [40]. MCF7 breast cancer cells were cultured in high glucose Dulbecco’s Modified Eagle’s Medium (Gibco, 11965-092), supplemented with 1 % penicillin:streptomycin and 10 % fetal bovine serum. The cells were plated at a density of 4 × 10^5 cells per 35 mm glass-bottom dish (MATTEK, P35G-1.5-14-C) in growth media and imaged 48 h after plating.
Images of NAD(P)H fluorescence were captured using a customized, inverted multiphoton fluorescence lifetime microscope (Marianas, 3i) with a 40× water-immersion objective (1.1 NA). For imaging, cells were placed in a stage top incubator (Okolab) to sustain a physiological environment of 37 °C, 5 % CO2, and 85 % relative humidity. A titanium:sapphire laser (COHERENT, Chameleon Ultra II) was tuned to 750 nm to excite NAD(P)H, and the laser power was maintained between 18 mW and 20 mW at the sample. The emission of NAD(P)H was captured using a photomultiplier tube (HAMAMATSU, H7422PA-40), paired with a 447/60 nm bandpass filter. Each 256 × 256-pixel fluorescence lifetime image was acquired for 60 s using a pixel dwell time of 50 μs and 5-frame accumulation. The field of view was approximately 270 × 270 μm² for a pixel size of approximately 1.05 μm per pixel. Fluorescence lifetime images were integrated across time to generate an intensity image.
2.1.3. Cytoplasmic segmentation using CellProfiler
From a dataset of thirty-one NAD(P)H intensity images of MCF7 breast cancer cells, five images were selected at random. Cell cytoplasms of the NAD(P)H images were segmented using a published CellProfiler pipeline [29]. The CellProfiler pipeline uses a series of threshold and object identification steps to first segment regions of cells by defining the boundaries separating cells or clusters of cells from the background [29]. Subsequently, the nuclei regions are detected using the intensity difference between the nucleus and the cytoplasm [29]. Next, individual cells are identified by propagation of an edge boundary from the nucleus to a cell-background boundary or another propagating cell [29]. A cytoplasm mask image is generated by removing nucleus objects from cell objects. Finally, the identified cytoplasm objects are filtered by size to retain objects between 100 and 500 pixels. Detailed steps along with representative figures for the CellProfiler pipeline are shown in Supplementary Section 1 (Figs. S1–S3). Additional customization of the parameters of the CellProfiler pipeline for MCF7 cells is published [40].
2.1.4. Cell mask segmentation using Cellpose
In Python 3.7.7, Cellpose 2.0 was installed from https://github.com/MouseLand/cellpose. Images were loaded into the Cellpose graphical interface and displayed with the “grayscale” palette. The “calibrate” function was used to automatically identify the approximate cell diameter within each image. Cellpose 2.0 offers fourteen segmentation models, including five basic models (cyto, nuclei, tissuenet, livecell, and cyto2) and nine pretrained models designed for different microscopy modalities (fluorescence, brightfield) and different object identification tasks (nucleus, cell boundary). After visual inspection of the performance of each model, the “cyto2” model was selected for cell segmentation of the NAD(P)H images of quiescent T cells, activated T cells, and MCF7 cells. The “cyto2” model is the original Cellpose 1.0 model [45] trained on the Cellpose cyto dataset [53]. The resulting cell mask images were saved in a designated directory using the “Save masks as PNG/tif” function.
2.2. CPPA
2.2.1. Preprocessing
The CPPA creates an image segmentation mask in which the pixels corresponding to each cell cytoplasm have a consistent value that is unique to each cell. CPPA accepts the original NAD(P)H intensity image and a corresponding cell-segmented mask such as that generated from Cellpose (Fig. 1). The segmented mask, effectively a label matrix, contains pixels grouped into objects with consecutive index values from 1 to n, where n is the total number of objects in the image. A binary mask is generated from the Cellpose results, with pixel values of 1 for objects and 0 for the background. This mask is then multiplied by the original NAD(P)H image to set all non-object pixels to 0, a step that removes noise.
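This preprocessing step can be sketched as follows, assuming NumPy arrays for the intensity image and the Cellpose label matrix (the function name is illustrative, not part of the published code):

```python
import numpy as np

def mask_background(nadph: np.ndarray, cell_mask: np.ndarray) -> np.ndarray:
    """Zero all pixels outside segmented cells.

    cell_mask is a label matrix (0 = background, 1..n = object indices),
    such as the output of Cellpose; nadph is the intensity image.
    """
    binary = (cell_mask > 0).astype(nadph.dtype)  # 1 for objects, 0 for background
    return nadph * binary                         # removes background noise
```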
Fig. 1.
Flow chart of the steps in the CPPA to obtain the cytoplasmic mask from an input NAD(P)H fluorescence image and corresponding cell mask. A representative image of activated T cells is used to illustrate the output of each step of CPPA.
2.2.2. Outlier removal and thresholding
Cell pixels of the NAD(P)H images can be separated by intensity as the nucleus contains a lower concentration of NAD(P)H molecules and is dimmer than the surrounding cytoplasm. Pixels containing mitochondria with high NAD(P)H concentrations have the highest fluorescence intensities. In CPPA, an intensity threshold is applied to separate the dim nucleus pixels from the cytoplasm pixels. To avoid outlier bias from the bright mitochondria pixels, extremely bright pixels, defined as pixels with values that exceed the third quartile plus 1.5 times the interquartile range, are removed. These bright outlier pixels are identified and removed at the image level.
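The outlier step can be sketched as below, with the assumption that removed pixels are set to zero like the background (the function name is illustrative):

```python
import numpy as np

def remove_bright_outliers(masked_img: np.ndarray) -> np.ndarray:
    """Remove extremely bright (mitochondria-dominated) pixels at the image level.

    A pixel is an outlier if its value exceeds Q3 + 1.5 * IQR, computed over
    the nonzero (cell) pixels; here outliers are set to zero, an assumption.
    """
    cell_vals = masked_img[masked_img > 0]
    q1, q3 = np.percentile(cell_vals, [25, 75])
    upper = q3 + 1.5 * (q3 - q1)
    out = masked_img.copy()
    out[out > upper] = 0
    return out
```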
Then, six binary thresholding techniques (Isodata [46], Li [47], Mean [48], Otsu [49], Triangle [50], and Yen [51]) are applied to separate low-intensity nucleus pixels from higher-intensity cytoplasm pixels at the image level. Each of the six techniques uses a different method to set the threshold value. In the Isodata method [46], an initial threshold is selected and the image is segmented into two classes, foreground and background. Subsequently, the mean intensity value for each class is calculated and the threshold value is then updated by the average of these two mean intensities, iteratively until the threshold either stabilizes or subsequent changes become negligible. Li’s method [47] determines the threshold value by minimizing the cross-entropy between foreground and background. The Mean method [48] defines the threshold as the average of all pixel intensities present in the image. The Otsu method [49] calculates a threshold value that optimizes the inter-class variance between the foreground and background. The Triangle [50] technique involves drawing a line from the peak of the histogram to the maximum or minimum data point, whichever is furthest from the peak value. The threshold is selected as the intensity value corresponding to the data point of the histogram that is the furthest from the line. Lastly, in Yen’s method [51], the threshold value that maximizes the log-transformed, between-class variance within the image is identified. The application of each of these threshold methods results in a nucleus mask with index values matched to the original cell mask image.
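All six thresholding techniques are available in scikit-image, so the image-level nucleus/cytoplasm split can be sketched as below; the synthetic bimodal image is a stand-in for a masked NAD(P)H frame:

```python
import numpy as np
from skimage import filters

# Synthetic stand-in for a masked NAD(P)H image: a dim "nucleus" population
# around 40 and a brighter "cytoplasm" population around 120.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 5, 500),
                      rng.normal(120, 10, 500)]).reshape(25, 40)

methods = {
    "isodata": filters.threshold_isodata,
    "li": filters.threshold_li,
    "mean": filters.threshold_mean,
    "otsu": filters.threshold_otsu,
    "triangle": filters.threshold_triangle,
    "yen": filters.threshold_yen,
}
thresholds = {name: fn(img) for name, fn in methods.items()}
# Pixels below the threshold are candidate nucleus pixels.
nucleus_masks = {name: img < t for name, t in thresholds.items()}
```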
2.2.3. Distance transform and K value threshold
To consolidate fragmented nuclei and remove low-intensity pixels from the edges of the cells which are not true nucleus pixels, a distance transform algorithm [54] is applied to the Cellpose results after the object contours are removed. A binary mask is then created from the resulting image using an automated thresholding method. The threshold is determined by the sum of the mean intensity of contour pixels and an empirical value, k, on the distance transform results.
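A sketch of this edge correction, under the assumptions that the distance transform is taken over the eroded (contour-removed) cell mask and that the contour pixels anchor the threshold (the exact threshold construction in CPPA may differ):

```python
import numpy as np
from scipy import ndimage

def remove_edge_pixels(nucleus_candidates: np.ndarray,
                       cell_mask: np.ndarray, k: float = 1.8) -> np.ndarray:
    """Drop candidate nucleus pixels that sit too close to cell edges.

    The cell contour is removed by erosion, a Euclidean distance transform is
    computed on the interior, and candidates are kept only where the distance
    exceeds the mean contour value plus the empirical offset k.
    """
    interior = ndimage.binary_erosion(cell_mask > 0)        # contours removed
    dist = ndimage.distance_transform_edt(interior)         # distance to edge
    contour = (cell_mask > 0) & ~interior
    base = dist[contour].mean() if contour.any() else 0.0   # contour pixels sit at ~0
    return nucleus_candidates & (dist > base + k)
```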
2.2.4. Cytoplasm mask and noise correction
The cytoplasm segmentation image is obtained by subtracting the nucleus mask from the input image containing the cell objects. To address cytoplasm fragmentation and remove non-adjacent pixels, each cytoplasm region, based on its unique index, is extracted into a blank image. A depth-first search (DFS) [55] then identifies the largest four-connected component of each region, and these components are combined to form the final cytoplasmic regions. The four-connected approach is chosen because it more effectively excludes isolated bright noise consisting of a single pixel or a few pixels than an eight-connected approach, which also includes diagonal connections and was less effective at removing pixel noise.
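The component-filtering step can be sketched with a 4-connected labeling; the paper describes an explicit DFS, but scipy's labeling finds the same connected components:

```python
import numpy as np
from scipy import ndimage

# 4-connectivity: only up/down/left/right neighbours, no diagonals.
FOUR_CONNECTED = np.array([[0, 1, 0],
                           [1, 1, 1],
                           [0, 1, 0]])

def largest_four_connected(region: np.ndarray) -> np.ndarray:
    """Keep only the largest 4-connected component of one binary cytoplasm region."""
    labels, n = ndimage.label(region, structure=FOUR_CONNECTED)
    if n == 0:
        return region.copy()
    sizes = ndimage.sum(region, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```

Applied per cytoplasm index and recombined, this discards diagonal-touching bright noise pixels, because they are 8- but not 4-connected to the main region.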
2.2.5. Cytoplasmic segmentation via CPPA
Fluorescence images and cell masks are loaded into CPPA to generate cytoplasm masks. The input cell masks should be grayscale images, with adjacent segmented objects assigned unique integer values. The outputs of CPPA are the cytoplasm and nucleus mask images with unique index values for each cell. The output images are saved as TIFF files in a designated output directory.
2.3. Segmentation evaluation
For each NAD(P)H image, two ground truth segmented images, one with cells and one with cytoplasms, were created by hand-segmentation of the images in ImageJ [56]. The per-object segmentation evaluation algorithm (POSEA) [40] was used to assess CPPA’s accuracy, focusing exclusively on object-level evaluation in this study. POSEA calculates Precision, Recall, and F-measure values at two scales: across the entire image for a global view of segmentation performance (image-level analysis) and for each object to characterize CPPA’s accuracy on a per-cell basis (cell-level analysis).
POSEA was used to calculate accuracy metrics for CPPA-segmented cytoplasms and CellProfiler-segmented cytoplasms for the 5 representative quiescent T cell, activated T cell, and MCF7 cell images. The Precision, Recall, and F-measure output values were recorded. In image segmentation, a score of 1 in Precision (P), Recall (R), or F-measure (F) indicates a perfect match with the ground truth. Conversely, a score of 0 indicates no overlapping pixels between objects of the ground truth and the evaluated segmented image.
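At the pixel-overlap level, the three metrics reduce to the standard definitions below; this is a minimal sketch of what POSEA reports, not of its per-object matching:

```python
import numpy as np

def precision_recall_f(pred: np.ndarray, truth: np.ndarray):
    """Precision, recall, and F-measure between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # correctly segmented pixels
    fp = np.count_nonzero(pred & ~truth)   # spurious pixels
    fn = np.count_nonzero(~pred & truth)   # missed pixels
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```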
Moreover, the Intersection over Union (IOU) metric [57] was used to provide a supplementary assessment of the segmentation accuracy for both the CPPA and CellProfiler methods. Detailed information on this additional metric can be found in Supplementary Section 3.
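The IOU metric is computed from the same overlaps; a one-function sketch:

```python
import numpy as np

def intersection_over_union(pred: np.ndarray, truth: np.ndarray) -> float:
    """IOU (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.count_nonzero(pred | truth)
    return np.count_nonzero(pred & truth) / union if union else 0.0
```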
2.4. K value optimization
In the development of CPPA, a crucial step involves optimizing the empirical value, k, which is important in determining the threshold for the distance transform results. The threshold value is defined by the sum of the mean intensity of contour pixels and the empirical k value to consolidate fragmented nuclei and eliminate low-intensity edge pixels that do not accurately represent true nucleus pixels, as outlined in our method (Section 2.2.3).
To identify the optimal value, k was varied between 0.1 and 2 in 0.1 step sizes and CPPA was performed on the NAD(P)H images of quiescent T cells, activated T cells, and MCF7 cells. Using ground truth cell masks in CPPA, the F-measure values of the CPPA cytoplasmic segmentation outcomes were obtained and evaluated by POSEA (entire image). The k value that resulted in the highest F-measure for each cell type was selected as the optimal value and used for the CPPA identification of cell cytoplasms for the Cellpose input images.
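The sweep amounts to a one-dimensional grid search; in the sketch below, `score_fn` is a hypothetical stand-in for running CPPA with ground truth cell masks and scoring the result with POSEA (entire image):

```python
def optimize_k(score_fn, k_values=None):
    """Return the k with the highest score and the full score table.

    score_fn(k) should return the mean F-measure over the image set for that k.
    """
    if k_values is None:
        k_values = [round(0.1 * i, 1) for i in range(1, 21)]  # 0.1, 0.2, ..., 2.0
    scores = {k: score_fn(k) for k in k_values}
    best_k = max(scores, key=scores.get)
    return best_k, scores
```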
Furthermore, to prevent overfitting during the optimization of the k value, a train/test analysis approach was adopted, as outlined in Supplementary Section 2. This involved dividing the images from each cell line into a training set, consisting of 3 randomly selected images, and a testing set, comprising 2 images. The optimal k value, determined using the training set, was subsequently validated on the testing set.
2.5. Graphical user interface (GUI)
The CPPA is based on Python 3.7.7. The algorithm code and GUI (Fig. 2) are available on GitHub (https://github.com/walshlab/CellPosePostProcessing/). The GUI has two primary functions: 1) It facilitates rapid assessment of segmentation algorithms by comparing the ground truth image with the segmentation outcomes, including the segmentation of cell, cytoplasm, and nucleus masks. 2) It applies CPPA to identify the cytoplasm and nucleus masks directly within the interface. The GUI allows users to select the thresholding technique under the “Models” section and adjust the “k value” to optimize segmentation performance. When equipped with ground truth images, users can select the “find best (cytoplasm)” and “find best (nucleus)” functions to automatically choose the thresholding method with the highest F-measure value, as calculated by POSEA (entire image) across all available thresholding methods.
Fig. 2.
The graphical user interface (GUI) for implementation of the post-processing algorithm (CPPA) and the Per-Object Segmentation Evaluation Algorithm (POSEA).
3. Results
3.1. Visual comparison of cytoplasmic segmentation
NAD(P)H is primarily localized to the cytosol and mitochondria within cells. As a result, cells in NAD(P)H images have a high fluorescence intensity in the cytosol and a dim nucleus (Fig. 3). Representative NAD(P)H fluorescence images (first row), ground truth segmented cytoplasm images (second row), cytoplasm segmentation results via CellProfiler (third row), and CPPA (Otsu thresholding; fourth row) segmentation results from input Cellpose segmented cells of quiescent T cells (first column), activated T cells (second column), and MCF7 cells (third column) show the differences in cell shape and clustering among the groups (Fig. 3).
Fig. 3.
Representative NAD(P)H fluorescence images and corresponding segmented images. The rows, from top to bottom, are representative NAD(P)H fluorescence images, ground truth segmented images, CellProfiler segmented images, and CPPA (Otsu thresholding) for cytoplasm segmentation from Cellpose segmented images, respectively. The columns, from left to right, are quiescent T cells, activated T cells, and MCF7 cells.
Fig. 3 illustrates the morphological characteristics of the three cell types. Quiescent T cells are round, uniform in size, and typically non-adjacent. Therefore, quiescent T cells are easy to segment. Although activated T cells are generally round, these cells form clusters and vary in size, introducing complications for automated segmentation. MCF7 breast cancer cells have irregular shapes and form clusters with obscured boundaries between cells, and are thus difficult to segment.
3.2. Optimization of k value in CPPA
In CPPA, the k value alters the threshold that determines which boundary pixels, particularly those with intensity values similar to the nucleus pixels, should be excluded. Fig. 4 shows representative images of the intermediate k threshold step for an activated T cell image with different k values. Fig. 4(A) is the nucleus mask before the implementation of the distance transform and k value threshold process. Fig. 4(B) shows the nucleus mask result with an optimal k value of 1.8. Fig. 4(C) shows the result of a low k value (k = 0.1), which does not adequately remove all pixels at the edges of the cells. Fig. 4(D) shows the result of a high k value (k = 5.0), where the threshold is too high and nucleus pixels are removed. To identify the optimal k for each cell type, a range of k values was tested for the identification of the cytoplasm region from the ground truth cell mask. By comparing F-measure values evaluated by POSEA (entire image), optimal k values of 0.1 for quiescent T cells, 1.8 for activated T cells, and 1.0 for MCF7 cells were identified. With these optimized k values and ground-truth cell masks, the corresponding average F-measure values across all five images of each cell group were 0.89 (±0.012) for quiescent T cells, 0.87 (±0.015) for activated T cells, and 0.94 (±0.008) for MCF7 cells. Additionally, the results from the train/test analysis in Supplementary Section 2 (Table S1) revealed that the optimal k values for all three cell lines remained consistent, underscoring the reliability of the k value selection.
Fig. 4.
Visualization of k value optimization for a representative activated T cell image. (A) Nucleus mask result after binary thresholding, prior to applying distance transform and k value threshold. (B) Nucleus mask result with the optimal k value (k = 1.8) used for the distance transform threshold. (C) Nucleus mask result for a k value of 0.1. (D) Nucleus mask result for a k value of 5.0.
3.3. Evaluation of CPPA identified cytoplasms from Cellpose cell masks using POSEA (entire image)
First, the performance of Cellpose plus CPPA for the identification of cell cytoplasms from NAD(P)H autofluorescence images was evaluated at the image level using POSEA to calculate F-measure metrics (Table 1). For CPPA segmented cytoplasms using Cellpose cell masks of 5 images of quiescent T cells, the Isodata, Li, Otsu, Triangle, and Yen methods all achieved an average F-measure of 0.82 with a standard deviation of ±0.01 (Table 1). The CPPA-mean method for segmentation of quiescent T cells had a lower F-measure average value of 0.78 with a standard deviation of ±0.02. For activated T cells, the average F-measure values of all six thresholding methods ranged from 0.72 to 0.74, with the standard deviation ranging from ±0.04 to ±0.05. For Cellpose-CPPA identification of cytoplasms within NAD(P)H images of MCF7 cells, the Isodata, Li, Mean, Otsu, and Yen thresholding methods all achieved an F-measure of 0.72 with a standard deviation of ±0.03. The CPPA-triangle algorithm had a lower F-measure value of 0.69 for segmentation of MCF7 cell images, with a standard deviation of ±0.05.
Table 1.
Comparison of F-measure values for different cytoplasm segmentation algorithms including CellProfiler and each unique Cellpose + CPPA threshold method. F-measure values are calculated using POSEA at the image level and are reported as the mean (± standard deviation) of n = 5 images each of quiescent T cells, activated T cells, and MCF7 cells. A rank sum test [58] was applied to these F-measure values, revealing that the F-measure for the CellProfiler method is significantly lower (p < 0.05) compared to the CPPA threshold methods. Within the CPPA thresholding methods, no significant differences were observed except for the mean thresholding method applied to quiescent T cells, which showed a significantly lower F-measure (p < 0.05) compared to other thresholding approaches.
| Segmentation Algorithm | Quiescent T cells | Activated T cells | MCF7 cells |
| --- | --- | --- | --- |
| CellProfiler | 0.61 (±0.04) | 0.55 (±0.09) | 0.51 (±0.03) |
| CPPA-isodata | 0.82 (±0.01) | 0.74 (±0.04) | 0.72 (±0.03) |
| CPPA-li | 0.82 (±0.01) | 0.72 (±0.04) | 0.72 (±0.03) |
| CPPA-mean | 0.78 (±0.02) | 0.72 (±0.04) | 0.72 (±0.03) |
| CPPA-otsu | 0.82 (±0.01) | 0.74 (±0.04) | 0.72 (±0.03) |
| CPPA-triangle | 0.82 (±0.01) | 0.74 (±0.04) | 0.69 (±0.05) |
| CPPA-yen | 0.82 (±0.01) | 0.74 (±0.05) | 0.72 (±0.03) |
In contrast to the Cellpose-CPPA method, cell cytoplasms can be segmented using traditional image segmentation steps in CellProfiler. For the 5 CellProfiler segmented cytoplasm images of quiescent T cells, the average F-measure value was 0.61 with a standard deviation of ±0.04. For the 5 CellProfiler segmented cytoplasm images of activated T cells, the average F-measure value was 0.55 with a standard deviation of ±0.09. Similarly, for the 5 CellProfiler segmented cytoplasm images of MCF7 cells, the average F-measure value was 0.51 with a standard deviation of ±0.03.
3.4. Per-cell assessment of Cellpose-CPPA for cytoplasm identification
Second, the performance of Cellpose plus CPPA for the identification of cell cytoplasms from NAD(P)H autofluorescence images was assessed at the cell level by using POSEA to obtain F-measure metrics for each cell (Table 2). For Cellpose-CPPA segmented cytoplasms of quiescent T cells, the Isodata, Li, Otsu, and Yen methods in CPPA produced an average F-measure of 0.83 (±0.09). The CPPA-triangle method had the highest F-measure of 0.84 (±0.08). However, the CPPA-mean method for quiescent T cells had an average F-measure of 0.80 with a standard deviation of ±0.09. For activated T cells, Cellpose-CPPA resulted in average F-measures between 0.75 and 0.76 for all thresholding methods, with standard deviations ranging from 0.13 to 0.14. For Cellpose-CPPA segmentation of MCF7 cells, all thresholding methods in CPPA, except CPPA-triangle, had average F-measure values of 0.72 with standard deviations of ±0.21. The CPPA-triangle method applied to MCF7 cells had a slightly lower average F-measure value of 0.69 and a standard deviation of ±0.20.
Table 2.
F-measure values for Cellpose-CPPA and CellProfiler cytoplasm segmentation of NAD(P)H images of quiescent T cells, activated T cells, and MCF7 cells. F-measure values are computed for each identified cell via POSEA. Values are reported as mean (± standard deviation), n = 982 quiescent T cells, 228 activated T cells, and 335 MCF7 cells. From the subsequent statistical analysis, it was found that the F-measure for the CellProfiler method is significantly lower (p < 2e-16) compared to the CPPA threshold methods. In addition, within the CPPA thresholding methods, no significant differences were observed except for the mean thresholding method applied to quiescent T cells, which showed a significantly lower F-measure (p < 2e-16) compared to other thresholding methods. This aligns with the analysis presented in Table 1.
| Segmentation Algorithm | Quiescent T cells | Activated T cells | MCF7 cells |
| --- | --- | --- | --- |
| CellProfiler | 0.59 (±0.23) | 0.55 (±0.21) | 0.46 (±0.29) |
| CPPA-isodata | 0.83 (±0.09) | 0.76 (±0.13) | 0.72 (±0.21) |
| CPPA-li | 0.83 (±0.09) | 0.75 (±0.14) | 0.72 (±0.21) |
| CPPA-mean | 0.80 (±0.09) | 0.75 (±0.14) | 0.72 (±0.21) |
| CPPA-otsu | 0.83 (±0.09) | 0.76 (±0.14) | 0.72 (±0.21) |
| CPPA-triangle | 0.84 (±0.08) | 0.76 (±0.14) | 0.69 (±0.20) |
| CPPA-yen | 0.83 (±0.09) | 0.75 (±0.13) | 0.72 (±0.21) |
Compared to the Cellpose-CPPA method, cell cytoplasms can be segmented using traditional image processing techniques in CellProfiler. For the cell-level POSEA comparison of CellProfiler-segmented quiescent T cell cytoplasms with ground truth segmented images, the average F-measure derived from CellProfiler was 0.59 with a standard deviation of ±0.23. For activated T cells, the average F-measure was 0.55 ± 0.21, and for MCF7 cells, the average F-measure was 0.46 ± 0.29.
3.5. Statistical analysis
3.5.1. Comparison between CellProfiler and Cellpose-CPPA
A statistical analysis was conducted to compare the cytoplasm segmentation performance of CellProfiler and Cellpose-CPPA; F-measure values from all of the CPPA thresholding techniques were pooled. The per-cell POSEA-computed F-measure values were higher for Cellpose-CPPA identified cytoplasms than for CellProfiler identified cytoplasms for images of quiescent T cells, activated T cells, and MCF7 cells (p < 0.001 for each cell line, Fig. 5). IoU metrics were also higher for Cellpose-CPPA identified cells than for the CellProfiler segmentation results (Table S2), further demonstrating that Cellpose-CPPA significantly outperforms CellProfiler in segmenting cytoplasms across all three cell lines.
Fig. 5.
Comparison of cell-level F-measure values between CellProfiler cytoplasm segmentation and Cellpose-CPPA results for NAD(P)H images of quiescent T cells, activated T cells, and MCF7 cells. ****p < 2e-16 (Student's t-test), n = 982 quiescent T cells, 228 activated T cells, and 335 MCF7 cells.
3.5.2. Statistical analysis of thresholding methods in CPPA
To evaluate differences in CPPA performance across the thresholding techniques, t-tests with a false discovery rate (FDR) correction [59] for multiple comparisons were conducted to compare the thresholding methods within CPPA (Fig. 6). For quiescent T cells, the Mean thresholding method had a significantly lower F-measure than each of the other methods: Isodata (p < 2e-16), Li (p < 2e-16), Otsu (p < 2e-16), Triangle (p < 2e-16), and Yen (p < 2e-16). No significant differences were identified among the thresholding methods for CPPA cytoplasmic segmentation of activated T cells or MCF7 cells.
Fig. 6.
Comparison of F-measure values across the thresholding methods within CPPA applied to NAD(P)H images of quiescent T cells, activated T cells, and MCF7 cells. ****p < 2e-16.
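The pairwise testing procedure described above can be sketched with SciPy and a hand-rolled Benjamini–Hochberg FDR step. This is an illustration only: the group distributions below are synthetic, not the study's per-cell measurements, and the method names are placeholders.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-cell F-measures for three hypothetical threshold methods
# (illustrative distributions, not the study's data).
groups = {
    "mean":     rng.normal(0.80, 0.09, 2000),
    "otsu":     rng.normal(0.83, 0.09, 2000),
    "triangle": rng.normal(0.84, 0.08, 2000),
}

pairs = list(itertools.combinations(groups, 2))
pvals = np.array([stats.ttest_ind(groups[a], groups[b]).pvalue
                  for a, b in pairs])

# Benjamini-Hochberg FDR adjustment: scale each sorted p-value by m/rank,
# take the cumulative minimum from the largest p downward, clip at 1.
order = np.argsort(pvals)
m = len(pvals)
ranked = pvals[order] * m / np.arange(1, m + 1)
adj = np.empty(m)
adj[order] = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1)

for (a, b), p in zip(pairs, adj):
    print(f"{a} vs {b}: FDR-adjusted p = {p:.2g}")
```

With these synthetic effect sizes, the comparisons against the lowest-scoring "mean" group remain significant after adjustment, mirroring the pattern reported for quiescent T cells.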
4. Discussion
Cellular heterogeneity within tissues is characteristic of many diseases including cancer; yet, few biomolecular assays provide information at a cellular level. Fluorescence imaging of the endogenous metabolic cofactors reduced nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) and flavin adenine dinucleotide (FAD) enables nondestructive evaluation of cellular metabolism [60]. When coupled with cell segmentation, autofluorescence imaging enables analysis of metabolism at a cellular level and assessment of metabolic heterogeneity [61]. However, the localization of NAD(P)H to mitochondria and the cytosol necessitates accurate segmentation of the cytoplasm of each cell to reduce the contributions of pixels of the non-metabolizing nucleus. Cytoplasmic segmentation of autofluorescence images is difficult due to the low signal-to-noise ratio of the images and the irregular morphology of cells. Cellpose [45], a recently published deep-learning microscopy segmentation tool, identifies cell boundaries in autofluorescence images, yet does not extract either a nucleus or cytoplasm mask. Here, a cytoplasmic post-processing algorithm (CPPA) is evaluated for accurate separation of cytoplasm and nucleus pixels within NAD(P)H fluorescence images following cell-level segmentation. First, the cells are segmented within autofluorescence images either through manual hand-segmentation or an automated method such as Cellpose. Next, the CPPA uses an intensity-based thresholding technique to obtain the nuclear regions. Nuclei masks are then corrected for errors at cell edges, and the refined nuclear mask is subtracted from the segmented cell image, producing a cytoplasm mask.
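The sequence of steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mean-intensity threshold stands in for the selectable thresholding methods, and the depth-greater-than-k edge-correction rule is an assumption about how the distance transform is applied.

```python
import numpy as np
from scipy import ndimage

def cppa_cytoplasm_mask(nadph, cell_mask, k=1.0):
    """Minimal sketch of the CPPA steps for a single segmented cell."""
    # 1. Remove non-segmented pixels from the NAD(P)H image.
    masked = np.where(cell_mask, nadph, 0.0)
    # 2. Intensity threshold: the nucleus is dimmer than the cytoplasm
    #    (mean threshold as a stand-in for isodata/li/otsu/triangle/yen).
    nucleus = cell_mask & (masked < masked[cell_mask].mean())
    # 3. Edge correction: boundary pixels are also dim, so keep only
    #    nucleus candidates deeper than k pixels from the cell edge
    #    (hypothetical rule; the paper's exact criterion may differ).
    depth = ndimage.distance_transform_edt(cell_mask)
    nucleus &= depth > k
    # 4. Subtract the nucleus mask from the cell mask -> cytoplasm mask.
    return cell_mask & ~nucleus

# Synthetic cell: bright cytoplasm ring around a dim nucleus disc.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
cell = r < 20
img = np.where(cell, 1.0, 0.0)
img[r < 8] = 0.2
cyto = cppa_cytoplasm_mask(img, cell, k=2.0)
```

On this synthetic cell, the returned mask covers the bright ring and excludes the dim central disc, matching the intended cytoplasm/nucleus split.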
CPPA combined with Cellpose has utility and accuracy advantages over CellProfiler [36], a commonly used [38,62–64] cell segmentation software. CellProfiler requires manual adjustment of parameters tailored to each dataset, which requires user expertise and limits the transference of CellProfiler pipelines across datasets. In contrast, Cellpose + CPPA requires only one input parameter, k, the selection of which can be optimized and automated. Furthermore, the separation of the cell segmentation task from the cytoplasm and nucleus identification task allows both processes to be completed and optimized by different methods. Because of the consistent pattern within NAD(P)H images where the nucleus is at a lower intensity than the cytoplasm, the CPPA can use an intensity threshold to separate cytoplasm pixels and is broadly applicable to NAD(P)H images of different cells. Furthermore, Cellpose + CPPA is more accurate than CellProfiler for the segmentation of cell cytoplasms within NAD(P)H images (Figs. 5 and 6). CPPA combined with Cellpose is not only more accurate at cytoplasm segmentation but also more robust and easier to apply to new datasets as compared with CellProfiler segmentation pipelines.
Cellpose + CPPA combines the advantages of using a pretrained machine learning model for cell identification with traditional image processing techniques for subsequent separation of cells into nucleus and cytoplasm compartments. This approach leverages the strengths of machine learning to identify cells with different morphologies and within clusters, a task that is difficult for traditional image processing methods. Likewise, the use of traditional thresholding and image processing within the CPPA enables accurate performance for cytoplasm segmentation within autofluorescence images, a task for which there is limited training data. While machine learning models generally can outperform traditional image processing with sufficient training and development [65], their application for cytoplasmic segmentation within autofluorescence images remains limited due to the low availability of training datasets. Most public datasets for cell segmentation, such as the Broad Bioimage Benchmark Collection [66], contain images of cells labeled with fluorescent dyes. The creation of comprehensive, curated biomedical image datasets for model development requires the tedious process of manual segmentation [67]. As a result, many deep-learning models, which rely on extensive and specific training datasets [32,68–71], are not broadly applicable. Moreover, de novo creation of advanced machine learning models often requires a computational background, programming expertise, and a thorough knowledge of model structures. In contrast to those deep-learning approaches, Cellpose utilizes a pretrained model, which has been trained on an extensive dataset of roughly 70,000 objects. The pretraining eliminates the need for users to have or produce specialized training datasets on their own. In addition, Cellpose is equipped with a user-friendly GUI, minimizing the need for users to have a computational background. 
Consequently, the combination of precise cell segmentation by Cellpose using a pre-trained machine learning model and further cytoplasmic segmentation by CPPA using image processing techniques offers a clear advantage compared to conventional machine learning models.
Due to the similar intensity values of the nucleus and pixels at cell edges within NAD(P)H images, the use of a threshold to separate low-intensity nucleus pixels from the higher-intensity cytoplasm can result in errors along the edge of the cell. To address these misclassified pixels at the edges of the cells, the CPPA includes a parameter, k, that can be optimized to remove pixels from the edge of the cell from the nucleus segmentation. The application of the distance transform here is preferred over basic size filtering due to its ability to provide radial spatial information from the center of the cell objects, which is critical for the effective exclusion of dim edge pixels that simple thresholding methods may not adequately differentiate from nucleus pixels. The optimal k value depends on a number of factors including the intensity values of the image, the size of cells, and image resolution. As depicted in Fig. 4, a low k value (k = 0.1) retains pixels at the boundary of the cells within the nucleus mask image (Fig. 4(C)). In contrast, a high k value (k = 5) results in over-thresholding and removes true nucleus pixels. Therefore, the use of CPPA first requires optimization of k for a specific image dataset. For images of large cells, such as MCF7 cell images, which exhibit a sufficient contrast between the background and cell cytoplasms, an optimal k value of 1 is suggested. However, for images of smaller cells or those with low SNR between the background and cell, the k value can be optimized by testing a range of k values and evaluating the resultant segmentation outcomes.
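The effect of k described above can be demonstrated on a synthetic low-SNR cell whose boundary ring is as dim as its nucleus. The edge-correction rule (keep candidate nucleus pixels deeper than k pixels, via the distance transform) is a hypothetical reading of the algorithm, used here only to show why a too-low k retains edge errors while a moderate k removes them without discarding the true nucleus.

```python
import numpy as np
from scipy import ndimage

# Synthetic low-SNR cell: the boundary ring is as dim as the nucleus,
# mimicking the edge errors described above (illustrative values only).
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
cell = r < 20
img = np.where(cell, 1.0, 0.0)  # bright cytoplasm
img[r < 8] = 0.2                # dim nucleus
img[cell & (r >= 19)] = 0.2     # dim pixels at the cell edge

thresh = img[cell].mean()
candidates = cell & (img < thresh)            # nucleus + edge errors
depth = ndimage.distance_transform_edt(cell)  # distance to cell boundary

def nucleus_mask(k):
    # Hypothetical edge-correction rule: keep only candidate nucleus
    # pixels deeper than k pixels from the cell edge.
    return candidates & (depth > k)

edge = cell & (r >= 19)
print(nucleus_mask(0.1)[edge].any())   # True: low k retains edge errors
print(nucleus_mask(3.0)[edge].any())   # False: edge errors removed
print(nucleus_mask(3.0)[r < 8].all())  # True: true nucleus retained
```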
The application and evaluation of CPPA to ground-truth cell mask images provide a measurement of CPPA accuracy independent of the performance of cell-level identification by Cellpose. Given accurate cell masks, CPPA achieves cytoplasmic segmentation with accuracy metrics ~0.90 (Results Section 3.2). Using optimal k values and ground truth cell masks, the F-measure values for CPPA cytoplasmic segmentation were 0.89, 0.87, and 0.94, respectively, for quiescent T cells, activated T cells, and MCF7 cells. The higher F-measure value observed for MCF7 cells (0.94) may be attributed to the higher intensity values and increased contrast between the nucleus and cytoplasm regions (Fig. 3), which facilitates differentiation between the nucleus and cytoplasm, as compared to T cell images. For the NAD(P)H images of quiescent T cells and activated T cells, which have smaller cytoplasm regions and lower contrast between the nucleus and cytoplasm, the slight improvement in CPPA performance for quiescent T cells may be due to the uniform size and morphology of quiescent T cells.
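For reference, the pixel-level F-measure underlying these accuracy values can be sketched as follows; this is a simplified stand-in for the per-cell matching that POSEA performs, not the published implementation.

```python
import numpy as np

def f_measure(pred, truth):
    """Pixel-level F-measure (harmonic mean of precision and recall)
    between two boolean masks."""
    tp = np.sum(pred & truth)   # true positives
    fp = np.sum(pred & ~truth)  # false positives
    fn = np.sum(~pred & truth)  # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(f_measure(pred, truth), 3))  # 0.667
```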
The accuracy values for CPPA segmentation of ground truth cell mask images (0.89, 0.87, and 0.94) are higher than the results (0.77, 0.74, 0.72, respectively, for quiescent T cells, activated T cells, and MCF7 cells) obtained by CPPA when using cell masks derived from Cellpose. The difference between the accuracy rates of CPPA for ground truth cell mask images and cell masks derived from Cellpose demonstrates the utility of CPPA. CPPA can be used with new and improved image segmentation tools like Cellpose as the accuracy of cell-level segmentation improves for autofluorescence images. As AI-image segmentation continues to improve for cellular identification, the combination of AI and CPPA is anticipated to approach the accuracy rates achieved for ground truth segmented images, ~0.90–0.95.
A comparison between the per-cell accuracy rate and the entire image accuracy rate reveals an increased accuracy of Cellpose-CPPA when evaluated at the cell level as compared to the image level evaluation for quiescent T cells. At the cell level, POSEA iteratively evaluates objects of the ground truth image and objects unique to the segmented image are not included in the cell-level accuracy metrics. For MCF7 cell images, which exhibit high cell-background contrast, accuracy remains consistent across both evaluation methods. This consistency arises from a low number of false positive pixels in the Cellpose segmentation results for MCF7 images. In contrast, for images of quiescent and activated T cells with a lower SNR, non-cellular pixels are occasionally recognized as cells by Cellpose, resulting in a slight reduction in image-level accuracy compared to per-cell analysis.
For optimization of CPPA, six threshold methods, Isodata, Li, Mean, Otsu, Triangle, and Yen, were compared. For activated T cells and MCF7 cells, all six thresholding techniques produced similar cytoplasmic segmentation performance (Fig. 6). Each of the threshold techniques uses a single threshold value to differentiate background and foreground pixels. For the quiescent T cells, the Mean threshold method, which sets the threshold as the mean intensity value of the image, was less accurate than the other threshold methods (Fig. 6). The lower accuracy of the Mean thresholding technique for the quiescent T cells is due to a high number of false positives arising from the selection of a threshold value that is too low. Compared to the other cell lines, background and nucleus pixels occupy a larger percentage of the total image for quiescent T cells, which results in a low threshold value with the Mean thresholding method. While these results (Fig. 6) suggest that selection of the Isodata, Li, Otsu, Triangle, or Yen threshold methods produces similar results, the CPPA GUI facilitates both identification of the optimal threshold method and application of a selected method to a dataset. If the CPPA GUI is provided with a ground truth cytoplasm or nucleus segmentation image, CPPA will return the optimal thresholding method that maximizes the F-measure metric via the "find best (cytoplasm)" and "find best (nucleus)" functions. Once the ideal method is determined, it can be applied to the remaining images.
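The "find best" idea can be sketched in plain numpy: apply each candidate threshold method to the intensities inside a cell mask, score the resulting nucleus mask against a ground truth with the F-measure, and return the best method. Only two of the six methods (Mean and a minimal Otsu) are implemented here for brevity; the function name `find_best_nucleus_method` is a hypothetical analogue of the GUI function, not the actual code.

```python
import numpy as np

def f_measure(pred, truth):
    """Pixel F-measure: 2TP / (2TP + FP + FN)."""
    tp = np.sum(pred & truth)
    denom = 2 * tp + np.sum(pred ^ truth)  # pred ^ truth = FP + FN
    return 2 * tp / denom if denom else 0.0

def otsu_threshold(values, bins=64):
    """Plain-numpy Otsu: choose the bin center maximizing the
    between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist).astype(float)
    w1 = w0[-1] - w0
    cum = np.cumsum(hist * centers)
    m0 = cum / np.maximum(w0, 1)
    m1 = (cum[-1] - cum) / np.maximum(w1, 1)
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

def find_best_nucleus_method(intensities, cell, truth_nucleus):
    """Hypothetical analogue of the GUI's "find best (nucleus)" function:
    try each method and return the one maximizing the F-measure."""
    methods = {"mean": np.mean, "otsu": otsu_threshold}
    scores = {}
    for name, fn in methods.items():
        thr = fn(intensities[cell])
        scores[name] = f_measure(cell & (intensities < thr), truth_nucleus)
    return max(scores, key=scores.get), scores

# Synthetic cell with a dim, noisy nucleus and bright cytoplasm.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
cell, nucleus = r < 20, r < 8
img = np.where(nucleus, 0.2, 1.0) + rng.normal(0, 0.02, r.shape)
best, scores = find_best_nucleus_method(img, cell, nucleus)
```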
Although CPPA provides accurate cytoplasmic segmentation of NAD(P)H images given an image with cell boundaries, it has some limitations. CPPA requires a precise cell segmentation mask to obtain an accurate corresponding cytoplasmic mask. Further, the final step of extracting each cell into a blank image to retain the largest four-connected component requires iteration and is thus computationally slow. Furthermore, CPPA is currently restricted to analysis of a single image at a time; however, batch processing could readily be implemented for extensive datasets.
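The largest-four-connected-component step mentioned above can be sketched per cell with `scipy.ndimage.label`; this is an illustration of the operation, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Keep only the largest 4-connected component of a boolean mask."""
    # ndimage.label's default 2D structuring element is 4-connectivity.
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    # Component sizes, one per label 1..n.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

m = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 1]], dtype=bool)
big = largest_component(m)
print(big.sum())  # 3: the lower-right component is kept
```

In a full pipeline this would run once per cell mask, which is the iteration the paragraph above identifies as the main computational cost.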
5. Conclusion
Cellpose combined with CPPA provides a robust and accurate method to identify cells and separate the nuclei and cytoplasm regions of cells in autofluorescence images. Cellpose with CPPA not only demonstrates improved accuracy over a traditional CellProfiler image processing pipeline, but also offers advantages in user accessibility and robust application across different datasets. In summary, Cellpose and CPPA combine the robustness of artificial intelligence with traditional image processing techniques, providing a precise method for cytoplasmic segmentation of autofluorescence images to facilitate single-cell metabolic analysis.
Appendix A. Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1016/j.compbiomed.2024.108846.
Acknowledgement
Funding sources include CPRIT (RP200668) and NIH (R35 GM142990).
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Declaration of generative AI and AI-assisted technologies in the writing process
During the preparation of this work, the authors used ChatGPT only to improve readability and language. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
CRediT authorship contribution statement
Nianchao Wang: Writing – original draft, Visualization, Validation, Software, Methodology, Investigation, Formal analysis, Data curation. Linghao Hu: Writing – review & editing, Writing – original draft, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Alex J. Walsh: Writing – review & editing, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Conceptualization.
References
- [1] Cha J, Lee I, Single-cell network biology for resolving cellular heterogeneity in human diseases, Exp. Mol. Med. 52 (11) (2020) 1798–1808.
- [2] Deng C-C, et al., Single-cell RNA-seq reveals fibroblast heterogeneity and increased mesenchymal fibroblasts in human fibrotic skin diseases, Nat. Commun. 12 (1) (2021) 1–16.
- [3] Gieselmann V, What can cell biology tell us about heterogeneity in lysosomal storage diseases? Acta Paediatr. 94 (2005) 80–86.
- [4] Marusyk A, Polyak K, Tumor heterogeneity: causes and consequences, Biochim. Biophys. Acta (BBA) - Reviews on Cancer 1805 (1) (2010) 105–117.
- [5] Dagogo-Jack I, Shaw AT, Tumour heterogeneity and resistance to cancer therapies, Nat. Rev. Clin. Oncol. 15 (2) (2018) 81–94.
- [6] Mahmood T, Yang P-C, Western blot: technique, theory, and trouble shooting, N. Am. J. Med. Sci. 4 (9) (2012) 429.
- [7] Kozak M, An analysis of vertebrate mRNA sequences: intimations of translational control, J. Cell Biol. 115 (4) (1991) 887–903.
- [8] Van der Windt GJ, Chang CH, Pearce EL, Measuring bioenergetics in T cells using a Seahorse extracellular flux analyzer, Curr. Protoc. Immunol. 113 (1) (2016) 3.16B.1–3.16B.14.
- [9] Adan A, et al., Flow cytometry: basic principles and applications, Crit. Rev. Biotechnol. 37 (2) (2017) 163–176.
- [10] Kolodziejczyk AA, et al., The technology and biology of single-cell RNA sequencing, Mol. Cell 58 (4) (2015) 610–620.
- [11] Song L-MWK, et al., Autofluorescence imaging, Gastrointest. Endosc. 73 (4) (2011) 647–650.
- [12] Huang S, Heikal AA, Webb WW, Two-photon fluorescence spectroscopy and microscopy of NAD(P)H and flavoprotein, Biophys. J. 82 (5) (2002) 2811–2825.
- [13] Kolenc OI, Quinn KP, Evaluating cell metabolism through autofluorescence imaging of NAD(P)H and FAD, Antioxidants Redox Signal. 30 (6) (2019) 875–889.
- [14] Chance B, et al., Oxidation-reduction ratio studies of mitochondria in freeze-trapped samples. NADH and flavoprotein fluorescence signals, J. Biol. Chem. 254 (11) (1979) 4764–4771.
- [15] Georgakoudi I, Quinn KP, Optical imaging using endogenous contrast to assess metabolic state, Annu. Rev. Biomed. Eng. 14 (2012) 351–367.
- [16] Walsh AJ, et al., Classification of T-cell activation via autofluorescence lifetime imaging, Nat. Biomed. Eng. 5 (1) (2021) 77–88.
- [17] Alfonso-García A, et al., Label-free identification of macrophage phenotype by fluorescence lifetime imaging microscopy, J. Biomed. Opt. 21 (4) (2016) 046005.
- [18] Heaster TM, et al., Autofluorescence imaging of 3D tumor–macrophage microscale cultures resolves spatial and temporal dynamics of macrophage metabolism, Cancer Res. 80 (23) (2020) 5408–5423.
- [19] Walsh AJ, Skala MC, Optical metabolic imaging quantifies heterogeneous cell populations, Biomed. Opt Express 6 (2) (2015) 559–573.
- [20] Sharick JT, et al., Cellular metabolic heterogeneity in vivo is recapitulated in tumor organoids, Neoplasia 21 (6) (2019) 615–626.
- [21] Wallrabe H, et al., Segmented cell analyses to measure redox states of autofluorescent NAD(P)H, FAD & Trp in cancer cells by FLIM, Sci. Rep. 8 (1) (2018) 1–11.
- [22] Cardona EN, Walsh AJ, Identification of rare cell populations in autofluorescence lifetime image data, Cytometry 101 (6) (2022) 497–506.
- [23] Shah AT, et al., In vivo autofluorescence imaging of tumor heterogeneity in response to treatment, Neoplasia 17 (12) (2015) 862–870.
- [24] Walsh AJ, et al., Optical imaging of drug-induced metabolism changes in murine and human pancreatic cancer organoids reveals heterogeneous drug response, Pancreas 45 (6) (2016) 863.
- [25] Walsh AJ, et al., Quantitative optical imaging of primary tumor organoid metabolism predicts drug response in breast cancer, Cancer Res. 74 (18) (2014) 5184–5194.
- [26] Walsh AJ, et al., Optical metabolic imaging identifies glycolytic levels, subtypes, and early-treatment response in breast cancer, Cancer Res. 73 (20) (2013) 6164–6174.
- [27] Heaster TM, Landman BA, Skala MC, Quantitative spatial analysis of metabolic heterogeneity across in vivo and in vitro tumor models, Front. Oncol. 9 (2019) 1144.
- [28] Bradshaw PC, Cytoplasmic and mitochondrial NADPH-coupled redox systems in the regulation of aging, Nutrients 11 (3) (2019) 504.
- [29] Walsh AJ, Skala MC, An automated image processing routine for segmentation of cell cytoplasms in high-resolution autofluorescence images, in: Multiphoton Microscopy in the Biomedical Sciences XIV, SPIE, 2014.
- [30] Salvi M, et al., Automated segmentation of fluorescence microscopy images for 3D cell detection in human-derived cardiospheres, Sci. Rep. 9 (1) (2019) 1–11.
- [31] Gamarra M, et al., Split and merge watershed: a two-step method for cell segmentation in fluorescence microscopy images, Biomed. Signal Process Control 53 (2019) 101575.
- [32] Al-Kofahi Y, et al., A deep learning-based algorithm for 2-D cell segmentation in microscopy images, BMC Bioinf. 19 (1) (2018) 1–11.
- [33] Aydin AS, et al., CNN based yeast cell segmentation in multi-modal fluorescent microscopy data, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, 2017.
- [34] Hu Z, et al., Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification, Journal of Medical Imaging 2 (1) (2015) 014501.
- [35] Raza SEA, et al., Mimo-net: a multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images, in: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), IEEE, 2017.
- [36] Stirling DR, et al., CellProfiler 4: improvements in speed, utility and usability, BMC Bioinf. 22 (2021) 1–11.
- [37] Wang ZJ, et al., Classifying T cell activity in autofluorescence intensity images with convolutional neural networks, J. Biophot. 13 (3) (2020) e201960050.
- [38] Leavesley SJ, et al., Hyperspectral imaging microscopy for identification and quantitative analysis of fluorescently-labeled cells in highly autofluorescent tissue, J. Biophot. 5 (1) (2012) 67–84.
- [39] Theodossiou A, et al., Autofluorescence imaging to evaluate cellular metabolism, JoVE (177) (2021) e63282.
- [40] Wang N, Hu L, Walsh AJ, POSEA: a novel algorithm to evaluate the performance of multi-object instance image segmentation, PLoS One 18 (3) (2023) e0283692.
- [41] Zhang Y, et al., Automatic segmentation of intravital fluorescence microscopy images by K-means clustering of FLIM phasors, Opt Lett. 44 (16) (2019) 3928–3931.
- [42] He K, et al., Mask R-CNN, in: Proceedings of the IEEE International Conference on Computer Vision, 2017.
- [43] Schmidt U, et al., Cell detection with star-convex polygons, in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018: 21st International Conference, Granada, Spain, September 16–20, 2018, Proceedings, Part II, Springer, 2018.
- [44] Lee MY, et al., CellSeg: a robust, pre-trained nucleus segmentation and pixel quantification software for highly multiplexed fluorescence images, BMC Bioinf. 23 (1) (2022) 46.
- [45] Stringer C, et al., Cellpose: a generalist algorithm for cellular segmentation, Nat. Methods 18 (1) (2021) 100–106.
- [46] Memarsadeghi N, et al., A fast implementation of the ISODATA clustering algorithm, Int. J. Comput. Geom. Appl. 17 (1) (2007) 71–103.
- [47] Li C, Tam PK-S, An iterative algorithm for minimum cross entropy thresholding, Pattern Recogn. Lett. 19 (8) (1998) 771–776.
- [48] Senthilkumaran N, Vaithegi S, Image segmentation by using thresholding techniques for medical images, Comput. Sci. Eng.: Int. J. 6 (1) (2016) 1–13.
- [49] Xu X, et al., Characteristic analysis of Otsu threshold and its applications, Pattern Recogn. Lett. 32 (7) (2011) 956–961.
- [50] Sekertekin A, A survey on global thresholding methods for mapping open water body using Sentinel-2 satellite imagery and normalized difference water index, Arch. Comput. Methods Eng. 28 (2021) 1335–1347.
- [51] Yen J-C, Chang F-J, Chang S, A new criterion for automatic multilevel thresholding, IEEE Trans. Image Process. 4 (3) (1995) 370–378.
- [52] Carpenter AE, et al., CellProfiler: image analysis software for identifying and quantifying cell phenotypes, Genome Biol. 7 (2006) 1–11.
- [53] Pachitariu M, Stringer C, Cellpose 2.0: how to train your own model, Nat. Methods 19 (12) (2022) 1634–1641.
- [54] Fabbri R, et al., 2D Euclidean distance transform algorithms: a comparative survey, ACM Comput. Surv. 40 (1) (2008) 1–44.
- [55] Reif JH, Depth-first search is inherently sequential, Inf. Process. Lett. 20 (5) (1985) 229–234.
- [56] Abràmoff MD, Magalhães PJ, Ram SJ, Image processing with ImageJ, Biophot. Int. 11 (7) (2004) 36–42.
- [57] Choi H, et al., Comparative analysis of generalized intersection over union, Sensor. Mater. 31 (11) (2019) 3849–3858.
- [58] McKnight PE, Najab J, Mann-Whitney U test, The Corsini Encyclopedia of Psychology (2010) 1.
- [59] Storey JD, A direct approach to false discovery rates, J. Roy. Stat. Soc. B Stat. Methodol. 64 (3) (2002) 479–498.
- [60] Quinn KP, et al., Quantitative metabolic imaging using endogenous fluorescence to detect stem cell differentiation, Sci. Rep. 3 (1) (2013) 3432.
- [61] Heaster TM, et al., Intravital metabolic autofluorescence imaging captures macrophage heterogeneity across normal and cancerous tissue, Front. Bioeng. Biotechnol. 9 (2021) 644648.
- [62] Xu A, Macrophage Phenotyping Using Autofluorescence Microscopy and Machine Learning, University of British Columbia, 2022.
- [63] Diem K, et al., Image analysis for accurately counting CD4+ and CD8+ T cells in human tissue, J. Virol Methods 222 (2015) 117–121.
- [64] Piasecka J, et al., Label free identification of peripheral blood eosinophils using high-throughput imaging flow cytometry, J. Allergy Clin. Immunol. 139 (2) (2017) AB163.
- [65] Hatipoglu N, Bilgin G, Cell segmentation in histopathological images with deep learning algorithms by utilizing spatial relationships, Med. Biol. Eng. Comput. 55 (2017) 1829–1848.
- [66] Ljosa V, Sokolnicki KL, Carpenter AE, Annotated high-throughput microscopy image sets for validation, Nat. Methods 9 (7) (2012) 637.
- [67] Fasihi MS, Mikhael WB, Overview of current biomedical image segmentation methods, in: 2016 International Conference on Computational Science and Computational Intelligence (CSCI), IEEE, 2016.
- [68] Apthorpe N, et al., Automatic neuron detection in calcium imaging data using convolutional networks, Adv. Neural Inf. Process. Syst. 29 (2016).
- [69] Guerrero-Pena FA, et al., Multiclass weighted loss for instance segmentation of cluttered cells, in: 2018 25th IEEE International Conference on Image Processing (ICIP), IEEE, 2018.
- [70] Chen J, et al., The Allen Cell and Structure Segmenter: a new open source toolkit for segmenting 3D intracellular structures in fluorescence microscopy images, bioRxiv (2018) 491035.
- [71] Funke J, et al., A benchmark for epithelial cell tracking, in: Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018.