Abstract
A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method for three-dimensional (3D) fluorescence microscopic images to support quantitative analysis of cell biological properties. In this paper, we present a fully automated method that detects cells in 3D fluorescence microscopic images. Informed by the characteristics of fluorescence imaging, we regulated the image gradient field by gradient vector flow (GVF) computed on an interpolated and smoothed data volume, and grouped voxels by the gradient modes identified when tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, with voxel intensities enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells and observed (1) small false detection and miss rates for individual cells, and (2) few over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.
Index Terms: Fluorescence Microscopy Image, 3D Cell Analysis, Gradient Vector Flow
1. INTRODUCTION
Fluorescence microscopy imaging is an essential tool for quantitative investigation of cell biological characteristics in cancer research, with the key advantage of being compatible with living cells [1, 2]. With recent advances in imaging technologies, three-dimensional (3D) fluorescence microscopic images are now available to support such studies. As a first step toward characterizing such common cell properties as migration, division, and apoptosis, an effective and robust cell segmentation method is required. Although a large number of cell segmentation methods have been developed, the problem remains challenging. Thresholding methods are efficient and applicable to images where foreground objects have sharp contrast to the background [3]. However, this class of methods cannot accommodate cases in which the contrast between objects of interest and background is weak or spatially variant. Watershed segmentation methods are good at differentiating clumped cells [4], but are prone to over-segmentation. Cell segmentation has also been approached with kernel-based dynamic clustering [5]; however, this involves an iterative clustering process that can be computationally expensive. Another idea for 3D cell segmentation is to group voxels naturally by tracking the image gradient flow [6]. This method works well under the assumption that gradient vectors within a cell generally point to its center. However, this assumption only holds when the parameters required for the elastic deformation transformation are precisely selected. The final result also depends on a bimodal voxel intensity distribution within each gradient-driven sub-volume. Therefore, the choice of gradient field regulation method and the identification of cell components within each sub-volume can pose challenges in practice. In this paper, we present solutions to these problems.
2. METHOD FOR 3D CELL SEGMENTATION
Our workflow for 3D cell segmentation with fluorescence microscopy images consists of several modules, including pre-processing (volume data interpolation and smoothing), gradient field regulation, gradient mode detection, multiscale cell enhancement, adaptive thresholding, and post-processing (cell smoothing and small object removal).
2.1. Data pre-processing
As the physical sizes per voxel in the x, y, and z directions are dx = 1.19 μm, dy = 1.19 μm, and dz = 2.00 μm, we interpolated the entire image volume V(x, y, z), (x, y, z) ∈ ℝ³, with 3D cubic interpolation [8]. To mitigate noise coupled in the image volume, we applied a Gaussian filter to the interpolated data volume: Ṽ(x) = G(x; σ0) * V(x), where * is the convolution operator and G(x; σ0) is a 3D Gaussian kernel with scale σ0.
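For concreteness, a minimal sketch of this pre-processing step, assuming NumPy and SciPy; the axis ordering (z last) and the target of near-isotropic voxels are our assumptions, and σ0 = 0.65 follows Section 3:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def preprocess(volume, dx=1.19, dz=2.00, sigma0=0.65):
    """Interpolate the raw volume to near-isotropic voxels with 3D cubic
    interpolation, then smooth it with a Gaussian filter of scale sigma0."""
    # Assumes the z axis is last; stretch it by dz/dx so voxels become ~cubic.
    iso = zoom(volume.astype(np.float64), (1.0, 1.0, dz / dx), order=3)
    return gaussian_filter(iso, sigma=sigma0)  # ~ G(x; sigma0) * V(x)
```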
2.2. Gradient field regulation
Driven by the idea that boundaries of 3D cell sub-volumes can be found naturally by tracking the image gradient flow [6], we investigated methods to make gradient vectors within a cell volume point generally toward its central core. The Gradient Vector Flow (GVF) field, first proposed as a replacement for the traditional potential force, is an external force field that is not irrotational, comprising both irrotational and solenoidal components [10]. It is diffused from the original edge map and offers several merits for gradient-field-based analysis. It can be derived from a variational formulation in which the following energy functional is minimized:
E(f) = ∫Ω [ μ(|∇u|² + |∇v|² + |∇w|²) + |∇Ṽ|² |f − ∇Ṽ|² ] dx   (1)

where ℒ denotes the integrand in (1), f(x) = (u(x), v(x), w(x)) is the GVF field, ∇Ṽ = (Ṽ1, Ṽ2, Ṽ3) with Ṽi = ∂Ṽ/∂xi is the gradient of the smoothed volume, ui = ∂u/∂xi, i = 1, 2, 3 (and similarly for v and w), and ∇ is the gradient operator. As ℒ involves first-order derivatives of the functions {u, v, w} in the GVF field with respect to multiple variables {xi, i = 1, 2, 3}, the functions {u*, v*, w*} minimizing the functional in (1) can be found by solving the following system of Euler-Lagrange equations:

∂ℒ/∂u − Σi ∂/∂xi (∂ℒ/∂ui) = 0, and analogously for v and w   (2)

Given ℒ in (1), the system of equations in (2) becomes:

μΔu − (u − Ṽ1)|∇Ṽ|² = 0,  μΔv − (v − Ṽ2)|∇Ṽ|² = 0,  μΔw − (w − Ṽ3)|∇Ṽ|² = 0   (3)
where Δ is the Laplacian operator. A typical gradient field and the resulting GVF field over a small image volume are shown in Figure 1.
Fig. 1.
Gradient field of a small image volume. (Top) original gradient field; (Bottom) GVF field fgvf after 15 iterations with μ = 0.1. Yellow points indicate |θ(f0) − θ(fgvf)| > π/4.
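Equation (3) is typically solved by treating f as a function of time and iterating an explicit gradient-descent step, f ← f + Δt(μΔf − (f − ∇Ṽ)|∇Ṽ|²). A minimal sketch follows; μ = 0.1 and Ngvf = 15 are the values from Section 3, while the time step Δt is our assumption:

```python
import numpy as np
from scipy.ndimage import laplace

def gvf3d(volume, mu=0.1, n_iter=15, dt=0.125):
    """Diffuse the gradient field of `volume` into a GVF field by explicit
    gradient descent on energy (1), i.e. iterating Euler-Lagrange system (3)."""
    gx, gy, gz = np.gradient(volume.astype(np.float64))
    mag2 = gx**2 + gy**2 + gz**2              # |grad V|^2: data-fidelity weight
    u, v, w = gx.copy(), gy.copy(), gz.copy()
    for _ in range(n_iter):
        # mu*Laplacian smooths the field; the second term anchors it to strong edges
        u += dt * (mu * laplace(u) - (u - gx) * mag2)
        v += dt * (mu * laplace(v) - (v - gy) * mag2)
        w += dt * (mu * laplace(w) - (w - gz) * mag2)
    return u, v, w
```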
2.3. Gradient convergence mode detection
After GVF field computation, voxels were next grouped into distinct sub-volumes according to gradient modes detected by tracking the directions of vectors in the GVF field. Given a voxel x(a), the next tracked voxel x(b) is:

x(b) = x(a) + S(fgvf(x(a)))   (4)

where S(f) = (s(u), s(v), s(w))T is a vector of step functions, with s(t) = 1 for t > 0, s(t) = 0 for t = 0, and s(t) = −1 for t < 0, so that each tracking step moves to the neighboring voxel indicated by the GVF direction.
The tracking process continues until either it converges to a gradient mode or all voxels have been associated with some gradient mode. A voxel location is identified as a gradient mode mij, i.e. mode j in cell i from the set Mi = {mij, ∀j}, if the angle between the GVF vector at this voxel and that at its previous voxel on the tracking path is greater than or equal to π/2. Due to artifacts, tracking paths Pi = {pik, ∀k} from distinct voxels of a cell can converge to a set Mi of distinct nearby gradient modes. If |Mi| > 1, adjacent gradient modes were combined [6].
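A brute-force sketch of this tracking under our reading of Eq. (4); the termination tests and the safety cap are our assumptions, and the merging of adjacent modes [6] is omitted:

```python
import numpy as np

def track_modes(u, v, w, max_steps=100_000):
    """Group voxels into sub-volumes by following Eq. (4): from each voxel,
    step one voxel along the sign of the GVF components until the flow
    reverses (angle >= pi/2 between consecutive path vectors) at a mode."""
    shape = u.shape
    labels = np.zeros(shape, dtype=np.int32)     # 0 = not yet assigned
    n_modes = 0
    for start in np.ndindex(shape):
        if labels[start]:
            continue
        path, x, lab = [start], start, 0
        prev = np.array([u[start], v[start], w[start]])
        for _ in range(max_steps):
            f = np.array([u[x], v[x], w[x]])
            if np.dot(prev, f) <= 0:             # flow reversed: x is a mode
                n_modes += 1
                lab = n_modes
                break
            # S(.) of Eq. (4): one-voxel move along the sign of each component
            nxt = tuple(int(np.clip(x[i] + np.sign(f[i]), 0, shape[i] - 1))
                        for i in range(3))
            if nxt == x:                         # stationary point / boundary
                n_modes += 1
                lab = n_modes
                break
            if labels[nxt]:                      # merged into a labeled path
                lab = labels[nxt]
                break
            path.append(nxt)
            prev, x = f, nxt
        if lab == 0:                             # safety net if cap exhausted
            n_modes += 1
            lab = n_modes
        labels[tuple(np.array(path).T)] = lab    # label the whole path
    return labels
```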
2.4. Cell component identification
For a given cell i, we assigned a unique integer label to all voxels on the tracking paths Pi = {pik, ∀k} leading to the same combined gradient mode. However, not all associated voxels represent a foreground cell. Some exhibit fluorescence signal too weak to be cell voxels. As a result, two classes of voxels are to be separated by intensity: cell and background voxels. This is a classic problem for which adaptive Otsu thresholding can find the optimal threshold maximizing the between-class variance [9, 6]. To strengthen the bimodal intensity distribution required by the Otsu method, we used a multiscale cell enhancement filter to improve the intensity contrast between cell and background voxels.
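As a sketch, the per-sub-volume Otsu step might look as follows, assuming scikit-image; `enhanced` is the output of the multiscale enhancement filter described next:

```python
import numpy as np
from skimage.filters import threshold_otsu

def adaptive_otsu(enhanced, labels):
    """Within each gradient-mode sub-volume, split voxels into cell and
    background classes with Otsu's threshold [9] on enhanced intensities."""
    mask = np.zeros(labels.shape, dtype=bool)
    for lab in np.unique(labels):
        if lab == 0:                        # 0 = unassigned voxels
            continue
        region = labels == lab
        vals = enhanced[region]
        if vals.min() == vals.max():        # flat region: no threshold exists
            continue
        mask[region] = vals >= threshold_otsu(vals)
    return mask
```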
With the smoothed and interpolated 3D volume Ṽ, the intensity at voxel x in the neighborhood of voxel x0 can be represented by a Taylor series expansion:

Ṽ(x) ≈ Ṽ(x0) + (x − x0)T ∇Ṽ(x0) + ½ (x − x0)T H(x0) (x − x0)   (5)

where H(x0) is a 3 × 3 Hessian matrix with entries Hjk = ∂²Ṽ(x0)/∂xj∂xk. As the Hessian matrix is symmetric, it is always diagonalizable. We denote the eigenvalues of H(x0) by λi, i = 1, 2, 3, subject to |λ1| ≤ |λ2| ≤ |λ3|. If voxel x0 is close to the brightest voxel at the center of a bright sphere, the resulting eigenvalues of H(x0) are constrained by [11]:

λ1 ≈ λ2 ≈ λ3 < 0   (6)

With (6), a cell similarity function C(x0, s) at scale s is formulated to describe the intensity similarity between the local volume structure and a 3D sphere centered at x0; C(x0, s) takes positive values when λi < 0 ∀i, and C(x0, s) = 0 otherwise.
As cells have variable sizes, we accommodated this variation by convolving Ṽ with Gaussian filters G(x; si) at multiple scales and selecting the maximum cell similarity value over all scales. With s* denoting the scale associated with the maximum cell similarity, we enhanced the voxel intensity as:
Ṽe(x) = Ṽ(x) · (C(x, s*(x)) / α)^κ   (7)

where α > 0 is close to zero and κ > 0. For any voxel x with C(x, s*(x)) > α, the voxel intensity is increased; otherwise, it is suppressed. In this way, we forced the voxel intensity distribution to be more bimodal and penalized cell shapes that deviate substantially from a sphere.
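A sketch of the multiscale Hessian analysis follows. Since the exact functional form of C is not fully specified above, the eigenvalue ratio |λ1|/|λ3| is used here as an illustrative sphere-likeness proxy, and the scale set is arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sphere_similarity(vol, scales=(1.0, 2.0, 4.0)):
    """Multiscale bright-sphere measure from Hessian eigenvalues (Eqs. 5-6).
    |l1|/|l3| is an illustrative stand-in for the similarity C; the authors'
    filter may use a different functional form."""
    best = np.zeros(vol.shape, dtype=np.float64)
    orders = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    for s in scales:
        # scale-normalized Gaussian second derivatives (entries of H)
        hxx, hyy, hzz, hxy, hxz, hyz = (
            s**2 * gaussian_filter(vol.astype(np.float64), s, order=o)
            for o in orders)
        H = np.stack([np.stack([hxx, hxy, hxz], -1),
                      np.stack([hxy, hyy, hyz], -1),
                      np.stack([hxz, hyz, hzz], -1)], -2)
        lam = np.linalg.eigvalsh(H)                   # symmetric: real eigenvalues
        lam = np.take_along_axis(lam, np.argsort(np.abs(lam), axis=-1), axis=-1)
        l1, l3 = lam[..., 0], lam[..., 2]             # |l1| <= |l2| <= |l3|
        c = np.where((lam < 0).all(axis=-1),          # bright blob: all negative
                     np.abs(l1) / (np.abs(l3) + 1e-12), 0.0)
        best = np.maximum(best, c)                    # keep max over scales
    return best
```

The enhancement of Eq. (7) as written above can then be applied as, e.g., `enhanced = vol * (sphere_similarity(vol) / 0.05) ** 5`, using α = 0.05 and κ = 5 from Section 3.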
2.5. Data post-processing
In post-processing, we smoothed each 3D cell shape by performing morphological closing on the binary image volume associated with each cell component [12]. In addition, we removed all unduly small objects falling below a volume lower bound Tv. In Figure 2, we present a 3D view of voxel association within a small image volume, its cell enhancement result at a middle plane, the local adaptive thresholding result, and the final segmentation result.
Fig. 2.
3D view of cell identification with a small image volume. (A) voxel association; (B) multiscale cell enhancement at a middle plane; (C) adaptive cell segmentation; and (D) final segmentation result.
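A sketch of these two post-processing operations, together with an end-to-end composition of the sketches above; the structuring choices, iteration count, and the value of Tv are our assumptions:

```python
import numpy as np
from scipy.ndimage import binary_closing

def postprocess(labels, min_volume=50):
    """Smooth each cell by morphological closing [12] and drop objects
    below the volume lower bound Tv (`min_volume`, an assumed value)."""
    out = np.zeros_like(labels)
    for lab in np.unique(labels):
        if lab == 0:
            continue
        cell = binary_closing(labels == lab, iterations=2)
        if cell.sum() >= min_volume:
            out[cell] = lab
    return out

# Illustrative composition of the sketches from Sections 2.1-2.5:
# vol      = preprocess(raw)                                     # Sec. 2.1
# u, v, w  = gvf3d(vol)                                          # Sec. 2.2
# labels   = track_modes(u, v, w)                                # Sec. 2.3
# enhanced = vol * (sphere_similarity(vol) / 0.05) ** 5          # Sec. 2.4, Eq. (7)
# cells    = postprocess(labels * adaptive_otsu(enhanced, labels))  # Secs. 2.4-2.5
```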
3. EXPERIMENTS AND RESULTS
We developed our method with the driving use case of glioblastoma (GBM), a high-grade brain tumor. In this study, neurosurgically resected GBM specimens were grown in vitro as neurosphere cultures. Cultured cells were then implanted on biologically relevant 3D scaffolds 200 μm in thickness. This ex vivo platform replicates a nearly authentic brain environment in which glioma cells interact with extracellular matrices in 3D space [7]. Using confocal microscopy (Zeiss LSM 510), we acquired 3D fluorescence Z-sliced images of tumor cells. In aggregate, the interpolated image volume consists of 73 images, each with a resolution of 512 × 512 pixels. Figure 3 shows the 3D fluorescence image volume and the Maximum Intensity Projection (MIP) of cell profiles.
Fig. 3.
(A) 3D data volume; (B) 2D MIP image.
We applied the proposed method to the 3D fluorescence image volume of human brain tumor cells with the following parameter setup: σ0 = 0.65, μ = 0.1, Ngvf = 15, η = cos(π/3), ε = 2, κ = 5, α = 0.05, and the volume lower bound Tv. This set of parameter values was chosen empirically; moderate variations to it lead to similar results. The final segmentation result over the entire data volume is presented in Figure 4, where cells are color coded. We quantitatively assessed segmentation results with two validation studies. Each slice of the interpolated image volume Ṽ was manually reviewed, and the resulting annotations were used as ground truth for comparison with machine-generated 3D cell segmentation results. In the first validation study, four metrics were computed for each image slice: Miss, False, Over, and Under. We evaluated our approach on single and clustered cells separately. Miss and False are the missing and false recognition rates of the machine-based method on individual cells with no touching neighbors, whereas Over and Under count cell clusters in which the machine segments more or fewer cells than the true number determined by a human, respectively. The resulting validation outputs (mean%±std) are presented in Table 1 for two sets of images from the volume, as the cell number per image varies widely across slices. Set S1 includes all 73 image slices, whereas set S2 contains the 28 images with at least 20 cells each. Our method performs well on all four metrics: false recognition and miss rates for single cells are comparably small, and over- and under-segmentation incidences are low for overlapped cells.
Fig. 4.
3D cell segmentation of the entire image volume with typical 3D cell mesh representations at corners.
Table 1.
Validation results (mean%±std) of cell segmentation on all image slices (set S1) and the 28 slices with N ≥ 20 each (set S2); N and N̂ are the true and machine-identified cell numbers per image slice, respectively.
| Set | N | N̂ | Miss | False | Over | Under |
|---|---|---|---|---|---|---|
| S1 (73) | 31.92±42.36 | 32.07±42.76 | 0.68%±0.02 | 0.59%±0.01 | 0.69%±0.01 | 0.46%±0.01 |
| S2 (28) | 77.36±35.52 | 77.82±36.14 | 1.09%±0.01 | 1.32%±0.01 | 1.80%±0.01 | 1.01%±0.01 |
To further characterize the concordance between human- and machine-generated cell morphometry, we randomly selected 20 cells and compared cell structure similarity with volume- and distance-based measures. We denote by A and B the 3D cell structures from human and machine segmentation, respectively. Volume-based measures include the Jaccard coefficient (J = |A ∩ B|/|A ∪ B|), Precision (P = |A ∩ B|/|B|), Recall (R = |A ∩ B|/|A|), and F1 score (F1 = 2PR/(P + R)); the Hausdorff distance dH serves as the distance-based metric. In Figure 5, we present validation results on cell surface concordance between human and machine segmentation. Comparisons over these randomly selected cells yield: J = 0.96 ± 0.06, P = 0.99 ± 0.01, R = 0.97 ± 0.06, F1 = 0.98 ± 0.03, and dH = 1.81 ± 0.93 (voxels). Most cells show high agreement between human and machine segmentation, with two cells matched perfectly. Even the worst-case cells under the different metrics remain satisfactory: Jlow = 0.75, Plow = 0.95, Rlow = 0.75, and F1low = 0.86. In future work, we will develop a tracking algorithm and apply the proposed segmentation method to longitudinal 3D image volumes of brain tumor cells to quantitatively characterize cell invasion in 3D space.
Fig. 5.
Evaluation of human-machine segmentation agreement on 20 randomly selected cells under different metrics. (A) 1 − J; (B) 1 − P; (C) 1 − R; (D) 1 − F1; and (E) dH.
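A sketch of these agreement metrics over a pair of binary cell masks, assuming SciPy; computing the Hausdorff distance over all voxel coordinates rather than extracted surfaces is a simplification:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def concordance(a, b):
    """Volume- and distance-based agreement between human mask `a` and
    machine mask `b` (boolean 3D arrays): J, P, R, F1, and Hausdorff d_H."""
    inter = np.logical_and(a, b).sum()
    p = inter / b.sum()                          # Precision = |A n B| / |B|
    r = inter / a.sum()                          # Recall    = |A n B| / |A|
    j = inter / np.logical_or(a, b).sum()        # Jaccard   = |A n B| / |A u B|
    f1 = 2 * p * r / (p + r)
    pa, pb = np.argwhere(a), np.argwhere(b)      # voxel coordinate lists
    dh = max(directed_hausdorff(pa, pb)[0],      # symmetric Hausdorff distance
             directed_hausdorff(pb, pa)[0])
    return j, p, r, f1, dh
```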
4. CONCLUSIONS
In this paper, we presented a promising cell segmentation method for 3D fluorescence microscopic images, with specific solutions that mitigate the parameter sensitivity and strengthen the bimodal intensity distribution assumption required by the gradient field regulation and adaptive thresholding proposed in [6]. Our validation results on a 3D imaging volume of human brain tumor cells show good concordance between automated and manual segmentation, suggesting our method's potential for broad application in cell-oriented biology and cancer research.
Acknowledgments
This research is supported in part by grants from the National Institutes of Health (K25CA181503 and R01CA176659), the National Science Foundation (ACI 1443054 and IIS 1350885), the Georgia Research Alliance, and CNPq.
References
1. Fernández-Suárez M, Ting AY. Fluorescent probes for super-resolution imaging in living cells. Nature Rev Mol Cell Biol. 2008;9:929–943. doi: 10.1038/nrm2531.
2. Pepperkok R, Ellenberg J. High-throughput fluorescence microscopy for systems biology. Nature Rev Mol Cell Biol. 2006;7:690–696. doi: 10.1038/nrm1979.
3. Luo D, Barker J, McGrath JC, Daly CJ. Iterative multilevel thresholding and splitting for three-dimensional segmentation of live cell nuclei using laser scanning confocal microscopy. Journal of Computer-Assisted Microscopy. 1998;10(4):151–162.
4. Lin G, Adiga U, Olson K, Guzowski J, Barnes C, Roysam B. A hybrid 3-D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry. 2003;56(1):23–36. doi: 10.1002/cyto.a.10079.
5. Yang FG, Jiang TZ. Cell image segmentation with kernel-based dynamic clustering and an ellipsoidal cell shape model. Journal of Biomedical Informatics. 2001;34(2):67–73. doi: 10.1006/jbin.2001.1009.
6. Li G, Liu TM, Tarokh A, Nie JX, Guo L, Mara A, Holley S, Wong STC. 3D cell nuclei segmentation based on gradient flow tracking. BMC Cell Biology. 2007;8(40). doi: 10.1186/1471-2121-8-40.
7. Bellail AC, Hunter SB, Brat DJ, Tan C, Van Meir EG. Microregional extracellular matrix heterogeneity in brain modulates glioma cell invasion. Int J Biochem Cell Biol. 2004;36:1046–1069. doi: 10.1016/j.biocel.2004.01.013.
8. de Boor C. A Practical Guide to Splines. Springer-Verlag; New York: 1978.
9. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Systems, Man and Cybernetics. 1979;9(1):62–66.
10. Xu CY, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans Image Processing. 1998;7(3):359–369. doi: 10.1109/83.661186.
11. Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. Medical Image Computing and Computer-Assisted Intervention, Lecture Notes in Computer Science. 1998;1496:130–137.
12. Gonzalez R, Woods R. Digital Image Processing. Addison-Wesley; 1992. pp. 524–552.