Author manuscript; available in PMC: 2019 Jul 1.
Published in final edited form as: IEEE Trans Biomed Eng. 2017 Feb 23;65(7):1617–1629. doi: 10.1109/TBME.2017.2674521

Three-Dimensional Segmentation of the Ex-Vivo Anterior Lamina Cribrosa from Second-Harmonic Imaging Microscopy

Sundaresh Ram 1, Forest Danford 2, Stephen Howerton 3, Jeffrey J Rodríguez 4, Jonathan P Vande Geest 5
PMCID: PMC6013322  NIHMSID: NIHMS922446  PMID: 28252388

Abstract

The lamina cribrosa (LC) is a connective tissue in the posterior eye with a complex mesh-like trabecular microstructure through which all the retinal ganglion cell axons and central retinal vessels pass. Recent studies have demonstrated that changes in the structure of the LC correlate with glaucomatous damage. Thus, accurate segmentation and reconstruction of the LC is of utmost importance. This paper presents a new automated method for segmenting the microstructure of the anterior LC in the images obtained via multiphoton microscopy using a combination of ideas. In order to reduce noise, we first smooth the input image using a 4-D collaborative filtering scheme. Next, we enhance the beam-like trabecular microstructure of the LC using wavelet multiresolution analysis. The enhanced LC microstructure is then automatically extracted using a combination of histogram thresholding and graph-cuts binarization. Finally, we use morphological area opening as a post-processing step to remove the small and unconnected 3-D regions in the binarized images. The performance of the proposed method is evaluated using mutual overlap accuracy, Tanimoto index, F-score, and Rand index. Quantitative and qualitative results show that the proposed algorithm provides improved segmentation accuracy and computational efficiency compared to the other recent algorithms.

Index Terms: Lamina cribrosa, volumetric data denoising, wavelet, graph cut segmentation, histogram thresholding

I. Introduction

GLAUCOMA is a group of neurodegenerative eye diseases characterized by gradual deterioration of retinal ganglion cells (RGC) that leads to excavation of the optic nerve head (ONH), thinning of the retinal nerve fiber layer, loss of peripheral vision and, in an advanced state, irreversible blindness [1], [2]. With glaucoma being the second leading cause of blindness worldwide [3], there is a need for a sophisticated system that allows early glaucoma diagnosis and disease monitoring, as well as fundamental studies of the disease mechanism. This could allow clinicians to slow the progression of the disease and potentially avoid permanent damage to the ONH. Imaging technologies implemented in clinical eye care, such as Heidelberg retinal tomography (HRT, Heidelberg Engineering, Heidelberg, Germany) and optical coherence tomography (OCT), are used to examine tissue structure and aid early glaucoma detection [4], [5]. This is accomplished by measuring the retinal nerve fiber layer thickness [6]–[8], the loss of retinal pigment epithelial cells [5], [9], [10], and the thickness of other retinal layers [11]–[13]. Despite these advances, the aforementioned imaging technologies are largely limited to imaging tissue structure and are unable to elucidate tissue function [5].

Recent studies have shown that the lamina cribrosa (LC) is another potential location to identify the glaucomatous condition of the eye [8], [14]. The LC is a mesh-like connective tissue in the posterior of the eye, through which bundles of RGC axons carrying visual information from the retina to the brain pass. It forms a barrier between the intraocular space with intraocular pressure and the retrobulbar space with retrolaminar pressure. Although the pathogenesis of glaucoma remains incompletely understood, studies suggest that the LC is the principal site of RGC vulnerability and the location of RGC axonal injury [15]–[17], potentially leading to activation of the surrounding mechanosensitive astrocytes [18], constriction of blood flow through the capillaries [19], or even direct blockage of retrograde and anterograde axonal transport [20]. Therefore, understanding the properties and characterizing the biomechanics of the LC is essential for understanding the mechanisms leading to vision loss in glaucoma.

The biomechanics of the LC remain poorly understood, owing to its complex mesh-like trabecular microstructure and to the difficulty of accessing the tissue experimentally given its location deep within the ONH. Hence, for many years researchers have used histological techniques to elucidate the details of the LC structure. With recent advances in imaging technology, OCT and MPM have emerged as nondestructive imaging techniques to visualize the LC structure in detail. These imaging technologies, in tandem with computer modeling, act as an alternative approach for studying LC biomechanics, particularly its deformation response to various magnitudes of intraocular pressure (IOP). In order to study LC biomechanics and understand the deformations the LC undergoes due to IOP, there is a need to first segment the LC microstructure, which is made up of a mesh-like array of interlocking “beams”, each of which is essentially a collagen-rich connective tissue sheath surrounding a capillary. Manual segmentation of the LC microstructure is tedious, time consuming, varies from person to person, and is not reproducible. Currently, there is a lack of automated methods for performing LC microstructure segmentation.

Recently, multiphoton microscopy (MPM) has found increasing use in laboratory-based biomedical imaging due to its ability to achieve subcellular resolution and simultaneously obtain structural and functional information. MPM, however, suffers from relatively limited penetration depth compared to some other commonly used imaging technologies. Despite this drawback, accurate segmentation and reconstruction of the anterior LC observed from the ex-vivo MPM images is of importance for several reasons. First, it will allow future comparisons to LC images generated from existing in-vivo imaging modalities (e.g., OCT). For example, data that can only be generated by nonlinear optical microscopy (e.g., relative fibrillar collagen and elastin content) can be compared and combined with common features such as skeletonized beam diameter/length, pore size, shape, and orientation available in both MPM and other modalities. Second, accurate segmentation of the anterior LC will be critical in establishing boundary conditions for computational simulations aimed at better understanding LC biomechanics. Lastly, it will greatly aid in co-registering the spatiotemporal distribution of cell type and response in ex-vivo experiments (e.g., location of cells with respect to the surrounding extracellular matrix). All of these will be important for future mechanobiological investigations [21].

Grau et al. [22] developed a 3-D image segmentation algorithm to segment and reconstruct the LC microstructure from 3-D datasets of monkey eyes. Their approach uses the expectation-maximization (EM) framework, incorporating an anisotropic Markov random field (MRF) to introduce prior knowledge about the geometry of the structure. In addition, they use a structure tensor to characterize the predominant structure direction and the spatial coherence at each point within the 3-D dataset. This method performs well when the datasets have low noise and small inter- and intra-image intensity variations, i.e., when the 3-D dataset is more uniformly illuminated within and across 2-D images. In addition, the exact maximum-a-posteriori (MAP) estimates are not easy to compute, and the approximate MAP estimates are computationally expensive, leading to long computation times for the 3-D segmentation result.

Nadler et al. [8] proposed an automated method to segment the LC microstructure in OCT images. Their method uses a 3-D Gaussian smoothing filter as a pre-processing step to reduce the high-frequency noise, followed by contrast-limited adaptive histogram equalization to equalize the local differences in pixel intensity values. Next, they employ a local thresholding technique to binarize the image. Finally, a 3-D median filter is applied as a post-processing step to remove the unconnected LC beams. This method fails to accurately segment the LC because of the smoothing operation, and it produces erroneous holes within the LC beams due to the inhomogeneous pixel intensities even after histogram equalization. Also, the algorithm is sensitive to the window size used in adaptive histogram equalization and median filtering.

Campbell et al. [23] developed a general algorithm for the segmentation of LC beams, making use of the 3-D orientation information of the beams that could be used either with OCT or SHG microscopy images. Their method uses a modified version of the Frangi vesselness filter [24] using a Hessian matrix to enhance the higher-order structures like plates and sheaths present within the image. The enhanced image is then binarized using a thresholding technique. This method works well when the images have a high signal-to-noise ratio (SNR) and there is a uniform illumination across the images, but does not achieve good segmentation results when the images are noisy. Also, the parameters used in the modified Frangi filter are not easily optimizable for new datasets and thus require careful tuning for each dataset to be segmented.

In this paper, we present a new automated technique for the segmentation of the anterior LC microstructure in images obtained from ex-vivo MPM, acquired in a laboratory setting, that overcomes the limitations of the aforementioned segmentation methods. We use block matching and collaborative filtering (BM4D), a non-local transform domain filter proposed by Maggioni et al. [25], as a first step to reduce the background clutter and noise in the 3-D dataset. Next, we make use of multiscale wavelet decomposition with adaptive scale selection to enhance the beams of the LC microstructure in the images. We then binarize the enhanced dataset using a combination of histogram thresholding and a graph-cuts algorithm. Finally, the unwanted and unconnected 3-D regions within the binarized dataset are removed using the morphological area opening operation as a postprocessing step. Fig. 1 shows a flowchart illustrating the various steps of our algorithm. The main contributions of this paper are as follows:

Fig. 1.

Flowchart describing the various steps involved in the proposed segmentation algorithm.

  • Prior automated segmentation algorithms for the LC have been developed mostly for OCT images. In this work, we propose a novel, computationally efficient automated algorithm for LC segmentation in MPM images, exploiting the tubular structure of the LC using multiscale wavelet decomposition with adaptive scale selection.

  • We present a segmentation framework capable of successfully segmenting the complex interlocking beams in the LC microstructure, even in areas where they are densely packed. The algorithm is tested on a large number of donor eyes using a range of different tissues and stain preparations.

  • The algorithm is novel in the sense that it can efficiently handle segmentation of highly inhomogeneous LC beams, a scenario in which previously developed methods do not perform well.

Mutual overlap accuracy [26], Tanimoto coefficient [27], F-score [28], and Rand index [29] are used to measure the performance of the proposed algorithm and several existing automated algorithms for LC segmentation.
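For concreteness, the four metrics can be computed for a pair of binary volumes with the NumPy sketch below. The function name is ours, and we assume the common Dice-style definition of mutual overlap (twice the intersection over the sum of the region sizes); treat this as an illustration rather than the authors' evaluation code.

```python
import numpy as np
from math import comb

def overlap_metrics(seg, gt):
    """Overlap metrics between a binary segmentation and its ground truth.

    seg, gt : boolean (or 0/1) arrays of identical shape.
    Returns (mutual overlap, Tanimoto coefficient, F-score, Rand index).
    """
    seg = np.asarray(seg).astype(bool).ravel()
    gt = np.asarray(gt).astype(bool).ravel()
    tp = np.count_nonzero(seg & gt)            # intersection
    fp = np.count_nonzero(seg & ~gt)
    fn = np.count_nonzero(~seg & gt)
    tn = np.count_nonzero(~seg & ~gt)
    mo = 2.0 * tp / (seg.sum() + gt.sum())     # Dice-style mutual overlap
    tanimoto = tp / (tp + fp + fn)             # Jaccard / Tanimoto coefficient
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fscore = 2 * precision * recall / (precision + recall)
    # Rand index from the 2x2 contingency table of the two binary labelings
    n = seg.size
    table = np.array([[tn, fn], [fp, tp]])     # rows: seg label, cols: gt label
    cells = sum(comb(int(v), 2) for v in table.ravel())
    rows = sum(comb(int(v), 2) for v in table.sum(axis=1))
    cols = sum(comb(int(v), 2) for v in table.sum(axis=0))
    rand = (comb(n, 2) + 2 * cells - rows - cols) / comb(n, 2)
    return mo, tanimoto, fscore, rand
```

All four scores equal 1 for a perfect segmentation and decrease as the two labelings diverge.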

II. Materials

A. Sample Preparation

Pairs of human donor posterior poles were acquired within 48 hours post-mortem from four different eye banks: Alabama Eye Bank, Banner Health Donor Network of Arizona, San Diego Eye Bank, and Eversight Eye Bank. The exterior scleral surface of each eye was cleaned of any remaining adipose, connective, and muscular tissue, and the eye was bisected along its equatorial plane using a scalpel and surgical scissors. With the aid of a dissecting microscope, a gentle microdissection procedure was performed to remove the choroid, retina, and prelaminar tissue from the ONH. The posterior poles then underwent the following procedure to remove any prelaminar tissue that could not be removed mechanically: submersion in 0.25% ethylenediaminetetraacetic acid (EDTA)-trypsin for 10 minutes at 37 °C with gentle agitation, three washes in fresh 1x phosphate buffered saline (PBS) solution, and a 5-minute submersion in a 1:1 mixture of 0.25% EDTA-trypsin and 100 mM NaOH at 37 °C with gentle agitation. The digestion was halted by adding an equal volume of 10% fetal bovine serum (FBS) to the solution and gently agitating for 1 minute. Finally, samples were washed three times in fresh 1x PBS solution. The mechanical microdissection procedure was then repeated to clean off any remaining prelaminar tissue. All specimens were kept moist throughout the preparation procedure and stored in 1x PBS at 4 °C to prevent tissue dehydration and/or degradation prior to imaging. All samples, regardless of age, gender, or race/ethnicity, received identical digestion treatment.

We confirm that our research adhered to the tenets of the Declaration of Helsinki, that all subjects consented to donate tissue for research purposes, and that all ethical approval to use the specimens for research purposes was obtained by the aforementioned eye banks.

B. Image Acquisition

Prepared samples were placed in a novel micro-optomechanical device consisting of an acrylic base with a cavity allowing for uninhibited posterior deformation and a metal pressure plate with a Corning No. 1 cover glass functioning as the imaging window. The cover glass was adhered over a circular opening whose diameter was 400% larger than the average maximum diameter of the human lamina cribrosa as reported by Jonas et al. [30]. Four diagonal cuts along the borders of the physiological regions were made from the edge of the sclera towards the ONH to allow the sample to be effectively sealed via six 6-32 machine screws. Samples were then pressurized to physiologically relevant pressures. Pressure was regulated in a closed-loop system via feedback from a digital pressure transducer to custom software in LabVIEW 2012 (National Instruments, Austin, TX) actuating a syringe pump.

The samples were imaged using an Olympus BX51 upright laser-scanning microscope coupled to a 120-fs tunable pulsed titanium-sapphire laser (Mira 900, Coherent Inc., Santa Clara, CA). The laser was centered at λ = 780 nm to efficiently visualize collagen via SHG and elastin via fluorescence emission. Power was held constant at 320.86 ± 92.45 mW; it was not ramped as a function of depth, and no visible tissue damage was observed at this power. Digital images were acquired with a plan-apochromat lens (4X magnification, numerical aperture 0.28, working distance 29.5 mm) at a pixel size of 2.5 μm in the x- and y-directions and 5 μm in the z-direction with automatic focusing. Signals were collected simultaneously in an epifluorescence configuration, separated with a dichroic mirror (405 nm), and passed through a 377/50 nm bandpass filter (SHG) and a 460/80 nm bandpass filter onto Hamamatsu photomultiplier tube (PMT) detectors, which converted the collected light into intensity images. The sequential 2-D images at each 5-μm-spaced optical section were then stacked to form a volumetric dataset. The dimension of each 2-D image is 1039 × 1039 pixels, and the grayscale dynamic range of each dataset is 16 bits per pixel. A volume-rendered view of the stacked images for an example 3-D dataset is shown in Fig. 2.

Fig. 2.

Volume-rendered view of an example 3-D dataset projected onto a coronal plane. The mesh-like trabeculated microstructure of the connective tissue in the lamina cribrosa can be observed inside the elliptical scleral canal opening.

In this study, we considered a total of eighteen 3-D datasets, each from a different donor, consisting of between 110 and 205 z-stacked 2-D images. The variability is due to the fact that imaging was stopped once the tissue no longer produced sufficient signal to be detected by the PMTs, which occurred at a depth of 201.84 ± 55.21 μm.

III. Methods

As our data is anisotropic, with lower resolution in the z-direction than in the x- and y-directions, we make it isotropic by interpolating 2-D images between the original data using bicubic interpolation. In the remainder of this paper, we refer to the interpolated data as the original 3-D dataset unless stated otherwise. As the 3-D dataset is acquired as a collection of time-sequenced 2-D images in microscopy, factors such as drift in the field of view, occasional spells of focal plane changes, motion of a living subject in in-vivo imaging, and very long image sequences could lead to local deformation and misalignment between the 2-D images. Such misalignment needs to be corrected before any further processing of the data. There are many methods available in the literature that could be used to align such 3-D datasets [31], [32]. Since our images are acquired ex-vivo, under controlled settings without changes to the field of view and for short periods of time, we do not see such misalignment between the 2-D images in our 3-D dataset and hence skip the computationally expensive image alignment procedure.
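As a sketch of this resampling step: with 2.5 μm in-plane and 5 μm axial spacing, doubling the number of z-slices with cubic-spline interpolation yields near-isotropic voxels. The snippet below is illustrative (the function name and the toy stack are ours, and SciPy's spline-based `zoom` stands in for bicubic interpolation):

```python
import numpy as np
from scipy.ndimage import zoom

def make_isotropic(volume, xy_um=2.5, z_um=5.0):
    """Resample an anisotropic z-stack (axes ordered z, y, x) to isotropic
    voxels by cubic-spline interpolation along the z-axis only."""
    factor = z_um / xy_um             # 5.0 / 2.5 = 2.0 for this dataset
    return zoom(volume, (factor, 1.0, 1.0), order=3)

stack = np.random.rand(10, 16, 16)    # hypothetical small stack
iso = make_isotropic(stack)           # shape becomes (20, 16, 16)
```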

A. Denoising

The clutter and noise in the MPM images make it hard to distinguish the borders of the LC beams, as observed in Fig. 3a. In addition, the background clutter in the data, especially toward the end z-slices, gets enhanced by the subsequent structure enhancement step in Section III-B as if it were LC beams, leading to erroneous segmentation results. Thus, clutter and noise reduction are necessary in order to achieve a more accurate and smooth segmentation. We use the BM4D denoising method proposed by Maggioni et al. [25] to reduce the noise. Generally, image data obtained via microscopy is modeled as a mixed-Poisson-Gaussian process [33]–[36]. Specifically, the noisy volumetric observation p : X → R is modeled as a mixed-Poisson-Gaussian process given by

Fig. 3.

A representative 2-D slice taken from the 3-D volume. (a) Original image. (b) Cropped region from original image. (c) Denoised image of the image in (b).

p(r) = αq(r) + η(r),  q ~ P(λ),  η ~ N(μ, σ²)  (1)

where q(r) is the original, unknown volumetric signal that is a random Poisson variable with an underlying intensity value λ modeling the photon counting scaled by the overall gain of the detector α > 0, r is a 3-D coordinate belonging to the signal domain X ⊂ Z3, and η(r) is an i.i.d. white Gaussian noise with mean μ and standard deviation σ representing the readout noise, also known as “dark current” [37]. Furthermore, the variance σ2 of the additive white Gaussian noise η depends on the signal q [38]. The BM4D approach is specifically used to remove the additive Gaussian noise associated with the data. We use the generalized Anscombe transform (GAT) [39], which is a variance stabilizing transformation (VST),

VST(p) = { (2/α)·√(αp + (3/8)α² + σ² − αμ),  if p > −(3/8)α − σ²/α + μ
           0,                                 if p ≤ −(3/8)α − σ²/α + μ  (2)

to stabilize the variance of the process in (1) to unity, i.e., Var {VST(p)|q, σ} ≈ 1. The purpose of VST is to remove the dependency of the noise variance on the underlying signal before the denoising and compensate for the bias in the filtered estimate. The above transformation produces a random (non-scaled) Poisson variable q corrupted by additive Gaussian noise ηVST of mean 0 and standard deviation σVST:

VST(p) = q + η_VST,  q ~ P(λ),  η_VST ~ N(0, σ²_VST)  (3)

where

VST(p) = (p − μ)/α,  σ_VST = σ/α.
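As an illustration of (2), the following NumPy sketch (our own, with illustrative parameter values) applies the forward GAT to simulated mixed-Poisson-Gaussian data and shows that the stabilized variance is approximately unity:

```python
import numpy as np

def gat_forward(p, alpha, mu, sigma):
    """Generalized Anscombe transform of eq. (2)."""
    arg = alpha * p + (3.0 / 8.0) * alpha**2 + sigma**2 - alpha * mu
    out = np.zeros_like(p, dtype=float)
    pos = arg > 0                     # i.e., p > -(3/8)a - s^2/a + mu
    out[pos] = (2.0 / alpha) * np.sqrt(arg[pos])
    return out

# Simulated mixed Poisson-Gaussian data: p = alpha*q + eta, as in eq. (1)
rng = np.random.default_rng(0)
alpha, mu, sigma, lam = 3.0, 1.0, 2.0, 20.0
q = rng.poisson(lam, 100_000)
p = alpha * q + rng.normal(mu, sigma, q.shape)
v = gat_forward(p, alpha, mu, sigma)
# np.var(v) is approximately 1, i.e., the variance has been stabilized
```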

The BM4D denoising method [25] is then applied to this VST-transformed data. The BM4D algorithm is implemented in two stages, namely a hard-thresholding stage and a Wiener-filtering stage, each comprising three steps: forming a 4-D set by grouping similar 3-D sub-datasets, collaborative filtering, and aggregation.

1) Hard-Thresholding Stage

In this stage, a set S1 of overlapping cubes of dimension N × N × N is first extracted from VST(p). Next, a four-dimensional group G1 of similar cubes is chosen from the set S1. A 4-D transform denoted by T1 (a 1-D decorrelating linear transform applied along each dimension) is then applied to G1, and the coefficients are shrunk using a hard-thresholding operator 𝚼1 with a threshold value ζ1. The filtered group is then produced by applying the inverse 4-D transform, denoted by T1−1. Finally, since each voxel can have multiple and, in general, different estimates, as it can belong to different cubes, the final volumetric estimate q˜ is produced by aggregating the multiple estimates of each voxel using an adaptive convex combination [25].

2) Wiener Filtering Stage

In the Wiener filtering stage, we extract a set S2 of overlapping cubes of dimension N × N × N from the volumetric estimate q˜ obtained in the hard-thresholding stage. Similar to the hard-thresholding stage, we group cubes from the set S2 into a 4-D group G2. Next, a 4-D transform operator T2 different from that of T1 is used to transform the group G2. The transformed group is then filtered using a Wiener filter, and the filtered group is produced by taking the inverse of the 4-D transform denoted by T2−1. The final estimate is produced by aggregating the multiple estimate values of each voxel in a manner similar to the hard-thresholding stage.
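The full 4-D grouping machinery is beyond a short example, but the core of both stages — transform a group of mutually similar blocks, shrink the coefficients, invert the transform — can be sketched on a toy group of similar patches, using an orthonormal DCT as a stand-in for the decorrelating transforms T1/T2 (the function name and threshold value are illustrative, not the BM4D implementation):

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_hard_threshold(group, zeta):
    """Hard-thresholding stage on one group of similar patches.

    group : array (n_patches, N, N), a toy 3-D analogue of BM4D's 4-D groups.
    zeta  : hard threshold applied to the transform coefficients.
    """
    coeffs = dctn(group, norm='ortho')      # separable orthonormal transform
    coeffs[np.abs(coeffs) < zeta] = 0.0     # hard-thresholding operator
    return idctn(coeffs, norm='ortho')      # inverse transform

rng = np.random.default_rng(1)
clean = np.ones((8, 8, 8))                  # a group of identical flat patches
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = collaborative_hard_threshold(noisy, zeta=0.3)
# the mutual similarity within the group makes its spectrum sparse,
# so thresholding removes most of the noise energy
```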

Once we obtain the estimate after denoising using BM4D, we have to invert the VST transform. Due to the nonlinearity of the VST in (2), applying the algebraic inverse to the denoised data will, in general, produce a biased estimate. Leveraging a recently proposed method for optimal inversion of the GAT for the Poisson-Gaussian distribution [40], we invert the denoised estimate using the closed-form approximation of the exact unbiased inverse of the GAT from [40] as

VST⁻¹(q̂) = (1/4)q̂² + (1/4)√(3/2)·q̂⁻¹ − (11/8)q̂⁻² + (5/8)√(3/2)·q̂⁻³ − 1/8 − σ².  (4)

After the above inversion we return to the original range of values by setting the estimate as

p̂ = α·VST⁻¹(q̂) + μ.  (5)

The result of denoising for a cropped region within an example image is shown in Fig. 3c.
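A sketch of the closed-form inverse in (4) (the function name is ours): with α = 1, μ = 0, σ = 0 the forward transform of (2) reduces to the classical Anscombe transform, and the exact unbiased inverse then recovers the underlying Poisson mean almost exactly:

```python
import numpy as np

def gat_inverse(d, sigma):
    """Closed-form approximation of the exact unbiased inverse of the GAT,
    eq. (4), applied to the denoised estimate d."""
    return (0.25 * d**2
            + 0.25 * np.sqrt(1.5) * d**-1
            - (11.0 / 8.0) * d**-2
            + (5.0 / 8.0) * np.sqrt(1.5) * d**-3
            - 0.125 - sigma**2)

rng = np.random.default_rng(0)
lam = 50.0
x = rng.poisson(lam, 200_000)
# forward GAT with alpha = 1, mu = 0, sigma = 0 (pure-Poisson Anscombe case)
d_mean = np.mean(2.0 * np.sqrt(x + 3.0 / 8.0))
estimate = gat_inverse(d_mean, sigma=0.0)   # close to lam = 50
```

Note that the naive algebraic inverse d²/4 − 3/8 would give a biased answer; the extra terms in (4) compensate for that bias.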

B. Curvilinear Structure Enhancement

The gray-level intensity distribution within the LC beam area is inhomogeneous, and the contrast varies throughout the dataset. Also, the intensity edges along the LC beam borders are not very strong. Using existing region-based or edge-based image segmentation algorithms on the denoised LC images will result in inaccurate segmentation. Therefore, there is a need to enhance these images in order to obtain a good segmentation of the LC beams. Enhancement of curvilinear structures for the purpose of detection or segmentation has been proposed in a number of clinical applications, such as for extracting individual filaments in confocal and total internal reflectance fluorescence (TIRF) microscopy images [41], blood vessel extraction in retinal fundus images [42]–[44], segmentation of intracranial vessels in phase-contrast magnetic resonance angiographic (PC-MRA) images [45], detection and tracking of vehicles in wide area aerial images [46], and neuron detection and segmentation in microscopy images [47]–[49]. In this work, we use the isotropic undecimated wavelet transform (IUWT), which has been used for astronomical [50] and biological [51] applications.

The IUWT uses an analysis filter bank (h, g) to decompose a signal a0 into a coefficient set S = {d1, …, dJ, aJ}, where dj denotes the wavelet (detail) coefficients at scale j and aJ the approximation (scaling) coefficients at the coarsest resolution level J. The scaling coefficients preserve the mean of the original signal, whereas the wavelet coefficients have zero mean and encode the information corresponding to the different spatial scales present within the signal. The passage from one resolution to the next is obtained using the “à trous” algorithm [52]:

a_{j+1}[l] = (h̄_j ∗ a_j)[l] = Σ_k h[k]·a_j[l + 2^j·k]
d_{j+1}[l] = (ḡ_j ∗ a_j)[l] = Σ_k g[k]·a_j[l + 2^j·k]  (6)

where h_j[l] = h[l] if l/2^j ∈ Z and 0 otherwise, h̄[n] = h[−n], and “∗” denotes convolution. The reconstruction is given by

a_j[l] = (1/2)·[(h̃_j ∗ a_{j+1})[l] + (g̃_j ∗ d_{j+1})[l]]  (7)

where (h̃, g̃) are the synthesis filters. Here, the filter bank (h, g, h̃, g̃) needs to satisfy the exact reconstruction condition [53]. We use the filter bank (h, g = δ − h, h̃ = δ, g̃ = δ), where h is typically a symmetric low-pass filter such as the B3-spline filter. Applied to a signal a0, subsequent scaling coefficients can be calculated by convolving the signal with the filter h̄_{j−1}. If the original signal a0 is multidimensional, the filtering can be applied separably along all dimensions. The detail coefficients are then simply the difference between two adjacent sets of scaling coefficients. This particular structure of the analysis filters (h, g) leads to the iterative decomposition scheme

IUWT: { a_j = h̄_{j−1} ∗ a_{j−1},  d_j = a_{j−1} − a_j }  (8)

The reconstruction then becomes trivial, i.e., a_0 = a_J + Σ_{j=1…J} d_j.
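The decomposition (6)–(8) can be sketched in a few lines of NumPy/SciPy (our own illustration, using the B3-spline filter; `iuwt` is a hypothetical name). Because the detail coefficients are plain differences of successive approximations, the reconstruction a_0 = a_J + Σ d_j is exact by construction:

```python
import numpy as np
from scipy.ndimage import convolve1d

# B3-spline scaling filter commonly used with the "a trous" IUWT;
# it is symmetric, so the time-reversed filter equals the filter itself
H = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def iuwt(a0, J):
    """Isotropic undecimated wavelet transform, eq. (8).

    At iteration j the filter is dilated by inserting 2^j - 1 zeros between
    taps ("a trous"); details are differences of successive approximations.
    """
    a = np.asarray(a0, dtype=float)
    details = []
    for j in range(J):
        hj = np.zeros(4 * 2**j + 1)
        hj[::2**j] = H                           # dilated (holey) filter
        nxt = a
        for axis in range(a.ndim):               # separable filtering
            nxt = convolve1d(nxt, hj, axis=axis, mode='reflect')
        details.append(a - nxt)                  # d_{j+1} = a_j - a_{j+1}
        a = nxt
    return details, a                            # (d_1..d_J, a_J)

img = np.random.rand(32, 32)
d, aJ = iuwt(img, J=3)
recon = aJ + sum(d)                              # exact reconstruction
```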

Applying the IUWT to our images and choosing the detail coefficients from a single scale would be naive, since our images consist of a heterogeneous population of LC beams with different sizes and thicknesses. Thus, we extend the IUWT approach to incorporate automatic scale selection on each 2-D image of the 3-D dataset. We compute the scaling and detail coefficients at multiple scales m = 1, …, M in steps of 1. We then use the determinant of the Hessian matrix to constrain the maximum scale value and compute a response image R from the detail coefficients across different scales as follows:

R(x, y) = max_{j ∈ [1,…,J]} d_j(x, y)  (9)

where J = argmax_{m=1,…,M} {det(H)}, and H is the 2-D Hessian matrix of the intensity function of the pixel located at (x, y) at scale m, given by

H = ( ∂²d_m(x,y)/∂x²    ∂²d_m(x,y)/∂x∂y
      ∂²d_m(x,y)/∂x∂y   ∂²d_m(x,y)/∂y²  )  (10)

Once we have computed the response image R for every 2-D image within the 3-D dataset, we stack these images and normalize them in the range [0, 1] to produce an enhanced 3-D dataset, where the LC beam regions have greater intensity than their surroundings. The entire curvilinear structure enhancement step is summarized in Algorithm 1. The effect of applying the IUWT with automatic scale selection to an example 2-D image from the 3-D dataset is shown in Fig. 4. From Fig. 4 we can observe that larger features and structures appear with higher contrast as we increase the wavelet levels.
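Given the per-scale detail images, the scale-selected response of (9)–(10) can be sketched as follows (our own NumPy illustration; the Hessian is taken by finite differences, and `response_image` is a hypothetical name):

```python
import numpy as np

def hessian_det(img):
    """det of the 2-D Hessian (eq. 10) at every pixel, by finite differences."""
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return gxx * gyy - gxy * gyx

def response_image(details):
    """Eq. (9): per-pixel maximum detail coefficient over scales up to the
    scale J that maximizes the Hessian determinant.

    details : array (M, rows, cols) of wavelet detail images.
    """
    dets = np.stack([hessian_det(d) for d in details])
    J = np.argmax(dets, axis=0)                  # per-pixel selected scale
    scale = np.arange(details.shape[0])[:, None, None]
    masked = np.where(scale <= J, details, -np.inf)
    return masked.max(axis=0)

details = np.random.rand(4, 16, 16)              # toy detail-coefficient stack
R = response_image(details)
```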

Fig. 4.

Application of IUWT at different scales on a denoised image. (a) Denoised image; (b)–(e) Wavelet coefficient images at levels 1-4 for the cropped region indicated by the red box in (a), computed using the IUWT algorithm. Wavelet coefficients have been scaled linearly for display, such that black, gray, and white pixels within the images represent negative, zero, and positive coefficients respectively. (f) The response image obtained by combining the coefficient images.


Algorithm 1: Curvilinear Structure Enhancement

input : Denoised 3-D dataset p̂ of size l × b × h, low-pass filter h, largest wavelet scale M
output: Enhanced 3-D dataset e
for k ← 1 to h do
 a0 ← p̂(:, :, k);
 // IUWT decomposition of the slice up to scale M
 for m ← 1 to M do
  am ← conv(h, am−1);
  d(:, :, m) ← am−1 − am;
 end
 // per-pixel scale selection via the Hessian determinant
 for m ← 1 to M do
  for i ← 1 to l do
   for j ← 1 to b do
    H(i, j, m) ← det[H(d(i, j, m))];
   end
  end
 end
 Hmax ← argmaxm(H); // scale of the largest determinant at each pixel
 d2 ← zeros(l, b, M);
 for i ← 1 to l do
  for j ← 1 to b do
   d2(i, j, 1 : Hmax(i, j)) ← 1;
  end
 end
 d ← d ⊙ d2; // keep coefficients up to the selected scale
 R ← maxm(d); // response image of eq. (9)
 e(:, :, k) ← R;
end

C. Image Binarization

The next step in our algorithm is to segment LC beam voxels from the image background voxels on the enhanced images obtained from Section III-B. Image thresholding or binarization is a well-researched topic with numerous approaches described in the literature for various types of images. A survey of image thresholding techniques can be found in [54]. Common methods include histogram-based [55], clustering-based [56]–[58] and entropy-based algorithms [59]. More advanced techniques use graph-cuts [60] and level-set algorithms [61], but they require a good initialization. Keeping this in mind, we propose a hybrid approach, where we initially binarize the enhanced image using a histogram-based thresholding approach and feed it as an initialization to a graph-cuts algorithm for final binarization.

We compute the normalized histogram of the enhanced 3-D dataset obtained from Section III-B. Fig. 5 shows the histogram of an example enhanced dataset. From Fig. 5 we see that the histogram has a single peak; i.e., it is unimodal, with a steep monotonic increase until the peak is reached, followed by an exponential-like decay. It consists of two populations: one that produces the dominant peak, corresponding to the background region, and another, corresponding to the foreground LC beam region, present within the decay part of the histogram. Thus, we use the method proposed by Rosin [62] to find a threshold separating these two regions within the 3-D dataset. According to this method, we draw a straight line from the most populated bin of the histogram to the first empty bin following the final occupied bin. Let h(i) denote the histogram, where i denotes the intensity of a voxel in the range {0, …, Imax}, and Imax is the maximum intensity value of the enhanced 3-D dataset. We construct a straight line through the following two points:

Fig. 5.

Normalized histogram of a 3-D enhanced image.

( argmax_i{h(i)}, max_i{h(i)} )  and  ( max_{i : h(i)=0 ∧ h(i−1)≠0}{i}, 0 )

We select the histogram index i that maximizes the perpendicular distance between the straight line and the point (i, h(i)) as the threshold. This forms our initial binarization.
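The threshold selection described above can be sketched as follows (illustrative NumPy; the bin count and function name are our choices):

```python
import numpy as np

def rosin_threshold(data, nbins=256):
    """Rosin's unimodal thresholding: maximize the perpendicular distance
    between the histogram and the line from its peak to the first empty
    bin after the last occupied bin."""
    h, edges = np.histogram(data, bins=nbins)
    p = int(np.argmax(h))                       # most populated bin
    e = min(int(np.nonzero(h)[0][-1]) + 1, nbins - 1)
    x1, y1, x2, y2 = p, h[p], e, 0              # line endpoints
    i = np.arange(p, e + 1)
    # perpendicular distance from (i, h[i]) to the line (x1,y1)-(x2,y2)
    num = np.abs((y2 - y1) * i - (x2 - x1) * h[i] + x2 * y1 - y2 * x1)
    dist = num / np.hypot(y2 - y1, x2 - x1)
    return edges[i[np.argmax(dist)]]

rng = np.random.default_rng(2)
data = rng.exponential(1.0, 5000)   # unimodal, exponential-like decay
t = rosin_threshold(data)           # falls on the decay tail, past the peak
```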

Since there is no discernible peak within the foreground LC beam region of the normalized histogram, selecting a single threshold value for the entire 3-D dataset may lead to errors such as missing parts of the foreground, an effect that is more prominent in the deep z-slices. Also, histogram-based thresholding does not take into account the spatial contiguity of the data. Thus, the results of the initial binarization are refined using the graph-cuts algorithm, which incorporates spatial continuity constraints. We seek the voxel labeling L(r) that minimizes the energy function

E(L(r)) = Σ_{r∈X} D(L(r); p̂_e(r)) + Σ_{r∈X} Σ_{r′∈𝒩(r)} V(L(r), L(r′))  (11)

where 𝒩(r) is a 3-D spatial neighborhood around the voxel r and p̂_e is the 3-D enhanced dataset. The optimal labeling is computed using the widely used graph-cuts algorithm [60], [63], [64]. The first term in (11) is the data term representing the cost of assigning a label to a voxel. It can be expressed as

D(L(r); p̂_e(r)) = −ln Pr(p̂_e(r) | L(r))  (12)

where Pr(p̂_e(r) | L(r) = 0) and Pr(p̂_e(r) | L(r) = 1) are the empirical (observed) distributions of the background and the foreground LC beams, respectively. The data term in (11) ensures that the labeling L(r) is consistent with the observed data p̂_e(r): it penalizes assigning a label L(r) to a voxel r when that label is unlikely given the observed value p̂_e(r).

The second term in (11) is a smoothness term or voxel continuity term that ensures that the labeling L(r) is smooth. From [65], the smoothness term is expressed as

V(L(r), L(r′)) = δ(L(r), L(r′)) · exp(−|p̂_e(r) − p̂_e(r′)|² / (2σ_L²))  (13)

where

δ(L(r), L(r′)) = { 1, if L(r) ≠ L(r′);  0, if L(r) = L(r′) }

and σL is the scale factor. The smoothness term V(L(r), L(r′)) penalizes assigning different labels to neighboring voxels r and r′. We bound the scale factor σL to the range [5, 15] voxels, using lower values in smooth image regions and higher values in highly textured regions. The optimal labeling in (11) is computed using the fast max-flow/min-cut graph-cuts algorithm [60]. Fig. 6 shows the results of the image binarization described above on the representative 2-D slice of the 3-D image in Fig. 4(a). The initial binarization output is shown in Fig. 6(a), and the refinement using graph cuts is shown in Fig. 6(b). There is a significant improvement after applying the graph-cuts refinement.
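The two potentials of (11)–(13) can be sketched as follows; this NumPy fragment (our own, omitting the max-flow optimization itself) shows how the data term is derived from the empirical intensity histograms and how the smoothness term penalizes label changes between similar neighbors:

```python
import numpy as np

def data_term(v, hist_bg, hist_fg, bin_edges):
    """Eq. (12): negative log-likelihood of intensity v under the empirical
    background/foreground distributions (normalized histograms)."""
    eps = 1e-10                                  # avoid log(0)
    b = np.clip(np.digitize(v, bin_edges) - 1, 0, len(hist_bg) - 1)
    return -np.log(hist_bg[b] + eps), -np.log(hist_fg[b] + eps)

def smoothness_term(v, v_prime, same_label, sigma_L=10.0):
    """Eq. (13): zero for equal neighbor labels; otherwise a contrast-
    sensitive penalty that is largest when the intensities are similar."""
    if same_label:
        return 0.0
    return float(np.exp(-(v - v_prime) ** 2 / (2.0 * sigma_L ** 2)))

hist_bg = np.array([0.8, 0.2])                   # toy empirical distributions
hist_fg = np.array([0.2, 0.8])
edges = np.array([0.0, 0.5, 1.0])
cost_bg, cost_fg = data_term(0.25, hist_bg, hist_fg, edges)
# a dark voxel is cheaper to label as background: cost_bg < cost_fg
```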

Fig. 6.


Comparison of initial and graph-cut-refined binarization for the cropped region (indicated by the red box) of the image in Fig. 4(a). The regions in red indicate the false positive pixels that were counted as foreground during histogram thresholding and are removed after graph-cuts refinement. The pixels in green are the false negative pixels that are recovered as part of the foreground by the graph-cuts algorithm.

D. Post-Processing

As shown in Fig. 7(a), there is typically poor contrast between the foreground LC beams and the surrounding background in the deep z-slices. This may cause binarization errors in the background, as shown in Fig. 7(b). In addition, the inhomogeneous intensities and textures within the LC microstructure lead to errors in the segmentation, especially at the LC beam borders, as shown in Fig. 6(b). It is necessary to discard the background voxels wrongly labeled as foreground and to further smooth the LC beam borders to obtain a complete segmentation. Since the number of these wrongly labeled background voxels in 3-D is small relative to the size of the whole LC microstructure in 3-D, we employ a 3-D morphological area opening filter [66] to discard the false foreground voxels. This operator removes from the binarized 3-D dataset any connected component with a volume less than a threshold δ. We use a fast implementation of this filter by Meijster and Wilkinson [67]; the associated code snippets can be found in [68]. Fig. 7(c) shows the result of applying morphological area opening to the representative 2-D binarized image in Fig. 7(b). We observe that the false foreground regions (color coded with different colors) have been mostly removed, while the valid image foreground, shown in white, remains.
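For a binary volume, the area opening step amounts to labeling 26-connected components and discarding those below the volume threshold. The following is a minimal sketch assuming NumPy and SciPy (the paper uses the faster union-find scheme of [67]):

```python
import numpy as np
from scipy import ndimage

def area_open_3d(binary, delta):
    """Binary 3-D area opening: remove every 26-connected component
    whose volume is below the threshold delta (a sketch of the
    post-processing step; [67] describes a faster algorithm)."""
    structure = np.ones((3, 3, 3), dtype=bool)   # 26-connectivity
    labeled, num = ndimage.label(binary, structure=structure)
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                                  # ignore the background label
    keep = sizes >= delta                         # components large enough to survive
    return keep[labeled]
```

With the paper's setting δ = 3500 voxels, any spurious foreground blob smaller than that volume is suppressed while the connected LC mesh is retained.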

Fig. 7.


Postprocessing results on an example 2-D image toward the end z-slices in the 3-D dataset. (a) Denoised image from an end z-slice. (b) binarized image of (a). (c) Image after post processing. The small regions that are removed from (b) are color coded, and the result of postprocessing is shown in white.

IV. Results and Validation

The proposed algorithm for segmentation of LC microstructure in 3-D has been applied to a number of real datasets obtained from MPM. We compare our algorithm with three other current LC microstructure segmentation algorithms: Grau's EM algorithm using the anisotropic MRF (EM-A-MRF) [22], Nadler's contrast limited adaptive histogram equalization and adaptive thresholding (CLAHE-Thresh) [8], and Campbell's modified Frangi filter for enhancement and segmentation (mFfilter) [23]. The eighteen volumes were carefully segmented by an expert in 3-D using the Seg3D software [69] to generate ground-truth LC microstructure datasets. All algorithm parameters were optimized for our data by varying one parameter at a time and choosing the value that maximized the mutual overlap between the automated method and the manual segmentation.
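The one-parameter-at-a-time tuning protocol described above can be sketched as a coordinate-wise search. Here `segment` and `score` are hypothetical callables standing in for the segmentation pipeline and the mutual-overlap measure against the manual segmentation:

```python
def tune_one_at_a_time(segment, score, params, grids):
    """Coordinate-wise parameter search: vary one parameter at a time,
    keeping the value that maximizes the overlap score. `segment` maps
    parameter settings to a segmentation; `score` maps a segmentation
    to its mutual overlap with the manual ground truth."""
    best = dict(params)
    for name, values in grids.items():
        # Evaluate each candidate value with the other parameters fixed,
        # then keep the value achieving the highest score.
        scored = [(score(segment(**{**best, name: v})), v) for v in values]
        best[name] = max(scored)[1]
    return best
```

This is a greedy search, so it does not explore parameter interactions; it simply formalizes the "vary one parameter at a time" protocol used to set the values in Section IV-A.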

A. Algorithm Parameter Settings

For the BM4D algorithm, we use separable 4-D transforms similar to those used in [25]. In the hard-thresholding stage, T1 is a composition of a 3-D biorthogonal spline wavelet in the cube dimensions and a 1-D Haar wavelet in the grouping dimension; in the Wiener-filtering stage, T2 is made up of a 3-D discrete cosine transform in the cube dimensions and, again, a 1-D Haar wavelet in the grouping dimension. We restrict the number of grouped cubes to be the largest power of 2 less than or equal to a predefined value of 16. The computational complexity of the algorithm is reduced, in a manner similar to [25], by performing groupings within a 3-D window of size Ns×Ns×Ns voxels centered at the coordinate of every reference cube, and all such reference cubes are separated by a step of Nstep voxels in each spatial dimension. We choose Nstep = 4 and Ns = 12 for our experiments. In addition, we set the cube size to be N = 8 in both the hard-thresholding and Wiener-filtering stages of the algorithm. In the hard-thresholding stage the cube similarity threshold is set to τ1 = 25.8, and the shrinkage threshold is set to ζ1 = 2.9. For the Wiener-filtering stage, the cube similarity is set to τ2 = 5.8. In the LC beam enhancement, we set the maximum scale for the Hessian matrix computation as M = 10. In the graph-cuts algorithm in the image binarization step, we choose the neighborhood N (r) to be the 26 immediate spatial neighbors of the voxel r under examination. In the postprocessing stage, we set the area threshold value as δ = 3500 voxels.

B. LC Microstructure Segmentation Evaluation

We use a variety of segmentation evaluation metrics to demonstrate the performance of the proposed algorithm and the three other segmentation algorithms.

1) Mutual Overlap

This metric, also known as the Dice coefficient [26], is based on the mutual overlap between the automated and manual segmentations. Specifically, the mutual overlap is calculated using the automatically segmented region R1 and the manually segmented region R2 as follows:

$$\frac{2\,|R_1 \cap R_2|}{|R_1| + |R_2|} \tag{14}$$

where | · | represents the number of voxels in a region. The mutual overlap value is bounded between zero (no overlap) and one (exact overlap).

2) Tanimoto Coefficient

The Tanimoto coefficient [27] measures the similarity between the automatically segmented region, R1, and the manually segmented region, R2. The Tanimoto coefficient is the number of voxels that regions R1 and R2 have in common, divided by the total number of voxels belonging to either region. It is computed as

$$\frac{|R_1 \cap R_2|}{|R_1| + |R_2| - |R_1 \cap R_2|} = \frac{|R_1 \cap R_2|}{|R_1 \cup R_2|} \tag{15}$$

The value is bounded between zero (no overlap) and one (exact overlap).

3) F-Score

Precision (P) and recall (R) are two common measures for evaluating the quality of results of segmentation (amongst other applications). Using the automatically segmented region, R1, and the manually segmented region, R2, they are defined as

$$P = \frac{|R_1 \cap R_2|}{|R_1|}, \qquad R = \frac{|R_1 \cap R_2|}{|R_2|} \tag{16}$$

The F-score [28] is a statistical measure of a test's accuracy that combines both precision (P) and recall (R) into a single measure by taking their harmonic mean:

$$\frac{2PR}{P + R} \tag{17}$$

The F-score is bounded between zero (bad segmentation) and one (perfect segmentation).

4) Rand Index

The Rand index is a well-known measure of similarity between two data clusterings [29]. The Rand index is defined as a measure of agreement. Given the automatically segmented region, R1, and manually segmented region, R2, of a 3-D volume I with n voxels, let p denote the number of (unordered) pairs of voxels in I that are in the same region in R1 and also in the same region in R2, and let q denote the number of pairs of voxels in I that are in different regions in R1 and in different regions in R2. The Rand index is then defined as the frequency with which two segmentations agree on whether a pair of voxels belongs to the same or different regions:

$$\frac{p + q}{\binom{n}{2}} \tag{18}$$

The value is bounded between zero (bad segmentation) and one (perfect segmentation).
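The four metrics in (14)–(18) can be computed for a pair of binary volumes as in the following sketch; the Rand index is evaluated here by brute force over voxel pairs, so this is only practical for small examples:

```python
import numpy as np
from itertools import combinations

def segmentation_metrics(r1, r2):
    """Compute the four evaluation metrics (14)-(18) for binary
    volumes r1 (automated) and r2 (manual)."""
    a = np.asarray(r1, dtype=bool).ravel()
    b = np.asarray(r2, dtype=bool).ravel()
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())           # eq. (14)
    tanimoto = inter / union                           # eq. (15)
    prec, rec = inter / a.sum(), inter / b.sum()       # eq. (16)
    fscore = 2 * prec * rec / (prec + rec)             # eq. (17)
    # Rand index, eq. (18): count pairs on which the two segmentations
    # agree (same region in both, or different regions in both).
    agree = sum((a[i] == a[j]) == (b[i] == b[j])
                for i, j in combinations(range(a.size), 2))
    rand = agree / (a.size * (a.size - 1) / 2)
    return dice, tanimoto, fscore, rand
```

All four values lie in [0, 1], with 1 indicating perfect agreement with the manual segmentation.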

Fig. 8 shows automated segmentation results of the LC microstructure obtained for representative 2-D images from our 3-D datasets. From Fig. 8 we observe that the proposed method performs better in comparison to the other three automated methods, as it matches the manual segmentation better. Fig. 9 shows a 3-D rendering of the segmentation on a whole 3-D dataset using our proposed method.

Fig. 8.


Segmentation results of the various methods on representative 2-D images from two different 3-D datasets. The results are shown for the two local regions marked with a green box. Column 1: denoised images. Column 2: segmentation results using the proposed method. Column 3: CLAHE-Thresh method. Column 4: mF-filter method. Column 5: EM-A-MRF method. Manual segmentation results are shown in red. Areas where manual and automated segmentation results overlap are shown in white.

Fig. 9.


3-D reconstructions of the LC using the proposed method. (a) An isometric view of the segmented LC. (b) A longitudinal cut through the thickness of the LC showing the segmented LC beams.

C. Speed

We compared the proposed automated algorithm to the other three automated methods in terms of speed. All algorithms were run on every one of the eighteen datasets. Our proposed automated method, Grau's EM-A-MRF method [22], and Nadler's CLAHE-Thresh method [8] were coded in MATLAB. Campbell's mF-filter research source code [23] was made available online by the authors and was implemented in MATLAB. All algorithms were run on a Windows PC with a 3.06 GHz Intel Core Duo processor and 6 GB of RAM.

V. Discussion

Accurate segmentation of the ex-vivo anterior LC will be critical for future studies investigating the intravital response of the resident cells responsible for remodeling this structure. In this paper, we have proposed a new automated technique for the segmentation of anterior LC microstructure in ex-vivo MPM images of the ONH. Our method comprises four steps: image denoising, curvilinear structure enhancement, binarization, and post-processing using mathematical morphology. We compared the proposed algorithm with three other 3-D segmentation techniques on ex-vivo MPM data from 18 human eye volumes that contain a variety of beam diameters and geometries, along with varying levels of noise, using a variety of performance metrics. We show that our proposed algorithm is tolerant to varying beam thickness and geometry of the LC microstructure, has good noise immunity, and can effectively and efficiently segment the anterior LC microstructure in 3-D. Table I shows the segmentation accuracy of all the automated segmentation algorithms compared on the 18 human eye volumes acquired using ex-vivo MPM.

Table I. Mean Segmentation Accuracy (And Standard Deviation) Results.

Method Mutual Overlap Tanimoto Coefficient F-Score Rand Index
Proposed 0.8515 (0.0117) 0.8474 (0.0154) 0.8397 (0.0216) 0.8502 (0.0092)
EM-A-MRF 0.8063 (0.0202) 0.7843 (0.0136) 0.7702 (0.0458) 0.8058 (0.0810)
CLAHE-Thresh 0.7227 (0.0231) 0.7195 (0.0246) 0.7105 (0.0376) 0.7435 (0.0074)
mF-Filter 0.7931 (0.0129) 0.8011 (0.0232) 0.7698 (0.0193) 0.7915 (0.0101)

Evaluation of LC microstructure segmentation was based on four different metrics. As shown in Table I, the mutual overlap accuracy of the proposed algorithm is 12.88 percentage points greater than that of CLAHE-Thresh method [8], 5.84 percentage points greater than that of mF-filter method [23], and 4.52 percentage points greater than that of EM-A-MRF method [22]. The Tanimoto coefficient of the proposed algorithm is 12.79 percentage points greater than that of the CLAHE-Thresh method, 6.31 percentage points greater than that of the EM-A-MRF method, and 4.63 percentage points greater than that of the mF-filter method. The performance of all of the automated algorithms with regard to oversegmentation errors (i.e., excessive splitting) and undersegmentation errors (i.e., a failure to split a region into the correct number of LC beams) can be described in terms of precision and recall measures. We calculate the overall F-score for these data as shown in Table I. From Table I we observe that the F-score of the proposed algorithm is 12.92 percentage points greater than that of the CLAHE-Thresh method, 6.99 percentage points greater than that of the mF-filter method, and 6.95 percentage points greater than that of the EM-A-MRF method. We also evaluate the segmentation performance as a measure of clustering similarity between the manually segmented LC microstructure and each automated segmentation algorithm. We use the Rand index in order to evaluate this clustering similarity. Table I shows that the Rand index of the proposed algorithm is 10.67 percentage points greater than that of the CLAHE-Thresh method, 5.87 percentage points greater than that of the mF-filter method, and 4.44 percentage points greater than that of the EM-A-MRF method.

In addition, the proposed algorithm was also evaluated with respect to computation time. Table II shows the mean computation time (and standard deviation) for the four methods under comparison. From Table II we observe that on average the proposed algorithm runs 10 minutes and 27 seconds slower than the CLAHE-Thresh method, 33 seconds faster than the mF-filter method, and 27 minutes and 46 seconds faster than the EM-A-MRF method. Nadler's CLAHE-Thresh method has fewer computations and therefore runs faster than the proposed method, since it uses simple Gaussian smoothing to filter the noise in the data, followed by a single adaptive equalization of the 3-D histogram of the whole image and adaptive 3-D thresholding. Campbell's mF-filter method involves the calculation of the 3-D Hessian and the curvilinear structure enhancement function at each voxel and thus takes more time to execute than our proposed approach. Grau's EM-A-MRF method takes the most time to execute because of the MAP estimation problem it solves to classify each voxel as foreground or background, and also because its anisotropic MRF incorporates a time-consuming structure tensor calculation. Although the proposed algorithm runs slower than the CLAHE-Thresh method, it runs marginally faster than the mF-filter method and much faster than the EM-A-MRF method, and it performs better both qualitatively and quantitatively than all of them.

Table II. Mean Computation Time (and Standard Deviation) (in MM:SS).

Method Proposed EM-A-MRF CLAHE-Thresh mF-Filter
Computation Time 13:56 (01:12) 41:42 (02:01) 03:29 (00:53) 14:29 (01:39)

Although the proposed algorithm has been developed specifically for segmentation of anterior LC microstructure in ex-vivo MPM images, other applications that require segmentation of curvilinear structures like our LC beams could benefit from the ideas that we have presented in this work. As mentioned earlier in Section III-B, our proposed framework could also be applied to detection of individual filaments in confocal and TIRF microscopy images, blood vessel extraction in retinal fundus images, segmentation of intracranial vessels in PC-MRA images, and neuron segmentation in microscopy images.

Current clinical techniques for imaging the LC within the eye include OCT and confocal reflectance microscopy. In comparison with MPM imaging, OCT has poorer spatial resolution and is not able to reveal subcellular structures. While confocal reflectance microscopy allows one to capture subcellular structures, it does not provide the functional information that MPM can. At present, MPM imaging is being used by many research groups for imaging various regions of the eye ex-vivo to study a variety of disease pathologies [5]. While MPM is not currently available for imaging the LC microstructure in-vivo, the advantages of using MPM in the laboratory ex-vivo are important for the reasons mentioned earlier in Section I, warranting the need for automated methods such as the segmentation method proposed in this paper.

VI. Conclusions and Future Work

Accurate segmentation of the anterior LC microstructure in MPM volumes is a challenging task for an expert to perform manually, as well as for any automated segmentation algorithm. Prior methods include Grau's EM-A-MRF method, Campbell's mF-filter approach, and Nadler's CLAHE-Thresh technique. Our approach differs from these earlier methods. We do not assume local structure direction or choose a fixed radial range, nor do we omit the structural information altogether. The key idea of the work presented in this paper is to be able to segment LC beams of varying thickness and high texture or fluctuating intensities, which was not a focus of the earlier studies. Our proposed method combines BM4D filtering to suppress the noise, a new curvilinear structure enhancement step to enhance the contrast between the LC beams and the background, a two-step binarization procedure, and morphology-based post-processing to find and segment the foreground LC beams. The results demonstrate the improved performance of our algorithm, according to both quantitative and qualitative studies.

The proposed segmentation algorithm is implemented using isotropic (interpolated) data. However, our algorithm can be modified to handle anisotropic data by modifying the smoothness term in (13) and assigning a different weight to each neighboring voxel's contribution. Such a change could also further accelerate the algorithm.

Acknowledgments

The authors would like to thank Dr. Urs Utzinger (Dept. of Biomedical Engineering, University of Arizona) for helping with the microscopy instrumentation setup and the acquisition of the images used in the paper. The authors would also like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper.

This work was supported by the National Institutes of Health under Grant NIH-1R01EY020890 (JPVG) as well as an NIH-sponsored shared device (NIH/NCRR S10RR023737). The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Dinggang Shen.

Footnotes

1

BM4D code can be found at http://www.cs.tut.fi/∼foi/GCF-BM3D

Contributor Information

Sundaresh Ram, Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ, 85721, USA.

Forest Danford, Department of Aerospace and Mechanical Engineering, The University of Arizona, Tucson, AZ 85721, USA.

Stephen Howerton, Department of Aerospace and Mechanical Engineering, The University of Arizona, Tucson, AZ 85721, USA.

Jeffrey J. Rodríguez, Department of Electrical and Computer Engineering and the Graduate Interdisciplinary Program on Biomedical Engineering, The University of Arizona, Tucson, AZ 85721, USA.

Jonathan P. Vande Geest, Department of Bioengineering, and McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, PA 15260, USA

References

  • 1.Weinreb RN, Aung T, Medeiros FA. The pathophysiology and treatment of glaucoma: a review. Jama. 2014 May;311(18):1901–1911. doi: 10.1001/jama.2014.3192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Cook C, Foster P. Epidemiology of glaucoma: what's new? Canadian J Ophthalmol. 2012 Jun;47(3):223–226. doi: 10.1016/j.jcjo.2012.02.003. [DOI] [PubMed] [Google Scholar]
  • 3.Quigley HA, Broman AT. The number of people with glaucoma worldwide in 2010 and 2020. Brit J Ophthalmol. 2006;90(3):262–267. doi: 10.1136/bjo.2005.081224. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Bussel II, Wollstein G, Schuman JS. OCT for glaucoma diagnosis, screening and detection of glaucoma progression. Brit J Ophthal. 2013 Dec;:bjophthalmol–2013. doi: 10.1136/bjophthalmol-2013-304326. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Gibson EA, Masihzadeh O, Lei TC, Ammar DA, Kahook MY. Multiphoton microscopy for ophthalmic imaging. J Ophthalmol. 2011;2011:870879. doi: 10.1155/2011/870879. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Townsend KA, Wollstein G, Schuman JS. Imaging of the retinal nerve fibre layer for glaucoma. Brit J Ophthalmol. 2009 Feb;93(2):139–143. doi: 10.1136/bjo.2008.145540. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Wollstein G, Schuman JS, Price LL, Aydin A, Strack PC, Hertzmark E, Lai E, Ishikawa H, Mattox C, Fujimoto JG, Paunescu LA. Optical coherence tomography longitudinal evaluation of retinal nerve fiber layer thickness in glaucoma. Arch Ophthalmol. 2005 Apr;123(4):464–470. doi: 10.1001/archopht.123.4.464. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Nadler Z, Wang B, Wollstein G, Nevins JE, Ishikawa H, Kagemann L, Sigal IA, Ferguson RD, Hammer DX, Grulkowski I, Liu JJ, Kraus MF, Lu CD, Hornegger J, Fujimoto JG, Schuman JS. Automated lamina cribrosa microstructural segmentation in optical coherence tomography scans of healthy and glaucomatous eyes. Biomed Opt Express. 2013;4(11):2596–2608. doi: 10.1364/BOE.4.002596. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Bueno JM, Giakoumaki A, Gualda EJ, Schaeffel F, Artal P. Analysis of the chicken retina with an adaptive optics multiphoton microscope. Biomed Opt Express. 2011 Jun;2(6):1637–1648. doi: 10.1364/BOE.2.001637. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Masihzadeh O, Lei TC, Ammar DA, Kohook MY, Gibson EA. A multiphoton microscope platform for imaging the mouse eye. Molecular Vision. 2011 Jul;18:1840–1848. [PMC free article] [PubMed] [Google Scholar]
  • 11.Tan O, Li G, Lu AT, Varma R, Huang D. Mapping of macular substructures with optical coherence tomography for glaucoma diagnosis. Ophthalmology. 2008 Jun;115(6):949–956. doi: 10.1016/j.ophtha.2007.08.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Tan O, Chopra V, Lu ATH, Schuman JS, Ishikawa H, Wollstein G, Varma R, Huang D. Detection of macular ganglion cell loss in glaucoma by fourier-domain optical coherence tomography. Ophthalmology. 2011 Dec;116(12):2305–2314. doi: 10.1016/j.ophtha.2009.05.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Medeiros FA, Zangwill LM, Bowd C, Vessani RM, Susanna R, Weinreb RN. Evaluation of retinal nerve fiber layer, optic nerve head, and macular thickness measurements for glaucoma detection using optical coherence tomography. Am J Ophthalmol. 2005 Jan;139(1):44–55. doi: 10.1016/j.ajo.2004.08.069. [DOI] [PubMed] [Google Scholar]
  • 14.Burgoyne CF, Downs JC, Bellezza AJ, Suh JKF, Hart RT. The optic nerve head as a biomechanical structure: a new paradigm for understanding the role of IOP-related stress and strain in the pathophysiology of glaucomatous optic nerve head damage. Prog Retin Eye Res. 2005 Jan;24(1):39–73. doi: 10.1016/j.preteyeres.2004.06.001. [DOI] [PubMed] [Google Scholar]
  • 15.Quigley HA, Addicks EM. Regional differences in the structure of the lamina cribrosa and their relation to glaucomatous optic nerve damage. Arch Ophthalmol. 1981 Jan;99(1):137–143. doi: 10.1001/archopht.1981.03930010139020. [DOI] [PubMed] [Google Scholar]
  • 16.Abe RY, Gracitelli CPB, Diniz-Filho A, Tatham AJ, Medeiros FA. Lamina cribrosa in Glaucoma: diagnosis and monitoring. Curr Ophthalmol Rep. 2015 Jun;3(2):74–84. doi: 10.1007/s40135-015-0067-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Belghith A, Bowd C, Medeiros FA, Weinreb RN, Zangwill LM. Automated segmentation of anterior lamina cribrosa surface: how the lamina cribrosa responds to intraocular pressure change in glaucoma eyes? Proc IEEE Int Symp Biomedical Imaging. 2015:222–225. [Google Scholar]
  • 18.Son JL, Soto I, Lopez-Roca T, Pease ME, Quigley HA, Marsh-Armstrong N. Glaucomatous optic nerve injury involves early astrocyte reactivity and late oligodendrocyte loss. Glia. 2010 May;58(7):780–789. doi: 10.1002/glia.20962. [DOI] [PubMed] [Google Scholar]
  • 19.Su WW, Cheng ST, Ho WJ, Tsay PK, Wu SC, Chang SH. Glaucoma is associated with peripheral vascular endothelial dysfunction. Ophthalmology. 2008 Jul;115(7):1173–1178. doi: 10.1016/j.ophtha.2007.10.026. [DOI] [PubMed] [Google Scholar]
  • 20.Quigley HA, Dorman-Pease ME, Brown AE. Quantitative study of collagen and elastin of the optic nerve head and sclera in human and experimental monkey glaucoma. Curr Eye Res. 1991 Sep;10(9):877–888. doi: 10.3109/02713689109013884. [DOI] [PubMed] [Google Scholar]
  • 21.Tover-Vidales T, Wordinger RJ, Clark AF. Identification and localization of lamina cribrosa cells in the human optic nerve head. Exp Eye Res. 2016 Jun;147:94–97. doi: 10.1016/j.exer.2016.05.006. [DOI] [PubMed] [Google Scholar]
  • 22.Grau V, Downs JC, Burgoyne CF. Segmentation of trabeculated structures using an anisotropic Markov random field: application to the study of the optic nerve head in glaucoma. IEEE Trans Med Imag. 2006 Mar;25(3):245–255. doi: 10.1109/TMI.2005.862743. [DOI] [PubMed] [Google Scholar]
  • 23.Campbell IC, Coudrillier B, Mensah J, Abel RL, Ethier CR. Automated segmentation of the lamina cribrosa using Frangi's filter: a novel approach for rapid identification of tissue volume fraction and beam orientation in a trabeculated structure in the eye. J R Soc Interface. 2015 Jan;12(104):20141009. doi: 10.1098/rsif.2014.1009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. Medical image computing and computer-assisted intervention. 1998:130–13. [Google Scholar]
  • 25.Maggioni M, Katkovnik V, Egiazarian K, Foi A. Nonlocal transform domain filter for volumetric data denoising and reconstruction. IEEE Trans Image Process. 2013 Jan;22(1):119–133. doi: 10.1109/TIP.2012.2210725. [DOI] [PubMed] [Google Scholar]
  • 26.Sonka M, Hlavac V, Boyle R. Image processing, analysis, and machine vision. 4th. Pacific Groove, CA: Brooks/Cole-Thompson Learning; 2014. [Google Scholar]
  • 27.Duda RO, Hart PE, Stork DG. Pattern Classification. 2nd. Hoboken, NJ, USA: John Wiley; 2001. [Google Scholar]
  • 28.Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans Biomed Eng. 2010 Apr;57(4):841–852. doi: 10.1109/TBME.2009.2035102. [DOI] [PubMed] [Google Scholar]
  • 29.Rand WM. Objective criteria for evaluation of clustering methods. J Amer Stat Assoc. 1971 Dec;66(336):846–850. [Google Scholar]
  • 30.Jonas JB, Mardin CY, Schlötzer-Schrehardt U, Naumann GO. Morphometry of the human lamina cribrosa surface. Invest Ophthal Vis Sci. 1991 Feb;32(2):401–405. [PubMed] [Google Scholar]
  • 31.Araganda-Carreras I, Sorzano CÓS, Thévenaz P, Barrutia AM, Kybic J, Marabini R, Carazo JM, de Solórzano CO. Non-rigid consistent registration of 2D image sequences. Phys Med Biol. 2010 Oct;55(20):6215–6242. doi: 10.1088/0031-9155/55/20/012. [DOI] [PubMed] [Google Scholar]
  • 32.Ray N, McArdle S, Ley K, Acton ST. MISTICA: minimum spanning tree-based coarse image alignment for microscopy image sequences. IEEE J Biomed Health Informat. 2016 Nov;20(6):1575–1584. doi: 10.1109/JBHI.2015.2480712. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Ram S, Rodriguez JJ, Bosco G. Size-invariant cell nucleus segmentation in 3-D microscopy. Proc IEEE Southwest Symp Image Analysis and Interpretation. 2012:37–40. [Google Scholar]
  • 34.Ram S, Rodriguez JJ. Symmetry-based detection of nuclei in microscopy images. Proc IEEE Int Conf Acoustics, Speech, and Signal Processing. 2013:1128–1132. [Google Scholar]
  • 35.Ram S, Rodriguez JJ. Size-invariant detection of cell nuclei in microscopy images. IEEE Trans Med Imag. 2016 Jul;35(7):1753–1764. doi: 10.1109/TMI.2016.2527740. [DOI] [PubMed] [Google Scholar]
  • 36.Zhang B, Fadili MJ, Starck JL. Wavelets, ridgelets, and curvelets for Poisson noise removal. IEEE Trans Image Process. 2008 Jul;17(7):1093–1108. doi: 10.1109/TIP.2008.924386. [DOI] [PubMed] [Google Scholar]
  • 37.Luisier F, Blu T, Unser M. Image denoising in mixed Poisson-Gaussian noise. IEEE Trans Image Process. 2011 Mar;20(3):696–708. doi: 10.1109/TIP.2010.2073477. [DOI] [PubMed] [Google Scholar]
  • 38.Kervrann C, Sorzano COS, Acton S, Olivo-Marin JC, Unser M. A guided tour of selected image processing and analysis methods for fluorescence and electron microscopy. IEEE J Select Topics Signal Process. 2016 Feb;10(1):6–30. [Google Scholar]
  • 39.Starck JL, Murtagh F, Bijaoui A. Image Processing and Data Analysis. 1st. Cambridge, UK: Cambridge University Press; 1998. [Google Scholar]
  • 40.Makitalo M, Foi A. Optimal inversion of the generalized Anscombe transformation for Poisson-Gaussian noise. IEEE Trans Image Process. 2013 Jan;22(1):91–103. doi: 10.1109/TIP.2012.2202675. [DOI] [PubMed] [Google Scholar]
  • 41.Basu S, Liu C, Rohde GK. Extraction of individual filaments from 2D confocal microscopy images of flat cells. IEEE/ACM Trans Comput Biology Bioinformatics. 2014 Nov;12(3):632–643. doi: 10.1109/TCBB.2014.2372783. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Roychowdhury S, Koozekanai D, Parhi K. Iterative vessel segmentation of fundus images. IEEE Trans Biomed Eng. 2015 Jul;62(7):1738–1749. doi: 10.1109/TBME.2015.2403295. [DOI] [PubMed] [Google Scholar]
  • 43.Sofka M, Stewart CV. Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE Trans Med Imag. 2006 Dec;25(12):1531–1546. doi: 10.1109/tmi.2006.884190. [DOI] [PubMed] [Google Scholar]
  • 44.Shang Y, Deklerck R, Nyssen E, Markova A, Mey JD, Yang X, Sun K. Vascular active contour for vessel tree segmentation. IEEE Trans Biomed Eng. 2011 Apr;58(4):1023–1032. doi: 10.1109/TBME.2010.2097596. [DOI] [PubMed] [Google Scholar]
  • 45.Law MW, Chung A. Segmentation of intracranial vessels and aneurysms in phase contrast magnetic resonance angiography using multirange filters and local variances. IEEE Trans Image Process. 2013 Mar;22(3):845–859. doi: 10.1109/TIP.2012.2216274. [DOI] [PubMed] [Google Scholar]
  • 46.Ram S, Rodriguez JJ. Vehicle detection in aerial images using multiscale structure enhancement and symmetry. Proc IEEE Intl Conf Image Processing. 2016:3817–3821. [Google Scholar]
  • 47.Meijering E, Jacob M, Sarria JC, Steiner P, Hirling H, Unser M. Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry A. 2004 Apr;58(2):167–176. doi: 10.1002/cyto.a.20022. [DOI] [PubMed] [Google Scholar]
  • 48.Meijering E. Neuron tracing in perspective. Cytometry A. 2010 Jul;77(7):693–704. doi: 10.1002/cyto.a.20895. [DOI] [PubMed] [Google Scholar]
  • 49.Mukherjee S, Condron B, Acton ST. Tubularity flow fielda technique for automatic neuron segmentation. IEEE Trans Image Process. 2015 Jan;24(1):374–389. doi: 10.1109/TIP.2014.2378052. [DOI] [PubMed] [Google Scholar]
  • 50.Starck JL, Murtagh F. Astronomical image and signal processing: looking at noise, information and scale. IEEE Signal Process Mag. 2001 Mar;18(2):30–40. [Google Scholar]
  • 51.Olivo-Marin JC. Extraction of spots in biological images using multiscale products. Pattern Recognit. 2002 May;35(9):1989–1996. [Google Scholar]
  • 52.Shensa MJ. Discrete wavelet transforms: wedding the à trous and Mallat algorithms. IEEE Trans Signal Process. 1992 Oct;40(10):2464–2482. [Google Scholar]
  • 53.Starck JL, Fadili J, Murtagh F. The undecimated wavelet decomposition and its reconstruction. IEEE Trans Image Process. 2007 Feb;16(2):297–309. doi: 10.1109/tip.2006.887733. [DOI] [PubMed] [Google Scholar]
  • 54.Sezgin M, Sankur B. Survey over image thresholding techniques and quantitative performance evaluation. J Electron Imag. 2004 Jan;13(1):146–168. [Google Scholar]
  • 55.Guo R, Pandit SM. Automatic threshold selection based on histogram modes and a discriminant criterion. Mach Vis Appl. 1998 Apr;10:331–338. [Google Scholar]
  • 56.Kittler J, Illingworth J. On threshold selection using clustering criteria. IEEE Trans Syst, Man Cybern. 1985 Sep;SMC-15(5):652–655. [Google Scholar]
  • 57.Kittler J, Illingworth J. Minimum error thresholding. Pattern Recognit. 1986;19(1):41–47. [Google Scholar]
  • 58.Otsu N. Threshold selection method from gray-level histograms. IEEE Trans Syst, Man, Cybern. 1979 Jan;SMC-9(1):62–66. [Google Scholar]
  • 59.Kapur JN, Sahoo PK, Wong AKC. A new method for gray-level picture thresholding using the entropy of the histogram. Comput Vis Graph Imag Process. 1985 Mar;29(3):273–285. [Google Scholar]
  • 60.Boykov Y, Kolmogorov V. An experimental comparison of mincut/max-flow algorithms for energy minimization in vision. IEEE Trans Pattern Anal Mach Intell. 2004 Sep;26(9):1124–1137. doi: 10.1109/TPAMI.2004.60. [DOI] [PubMed] [Google Scholar]
  • 61.Lie J, Lysaker M, Tai XC. A binary level set model and some applications to Mumford-Shah image segmentation. IEEE Trans Image Process. 2006 May;15(5):1171–1181. doi: 10.1109/tip.2005.863956. [DOI] [PubMed] [Google Scholar]
  • 62.Rosin PL. Unimodal thresholding. Pattern Recognit. 2001 Nov;34(11):2083–2096. [Google Scholar]
  • 63.Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts. IEEE Trans Pattern Anal Mach Intell. 2001 Nov;23(11):1222–1239. [Google Scholar]
  • 64.Kolmogorov V, Zabih R. What energy functions can be minimized via graph cuts? IEEE Trans Pattern Anal Mach Intell. 2004 Feb;26(2):147–159. doi: 10.1109/TPAMI.2004.1262177. [DOI] [PubMed] [Google Scholar]
  • 65.Boykov Y, Funka-Lea G. Graph cuts and efficient N-D image segmentation. Int J Comput Vis. 2006 Nov;70(2):109–131. [Google Scholar]
  • 66.Vincent L. Grayscale area openings and closings: their applications and efficient implementation. EURASIP Workshop on Mathematical Morphology and its Applications to Signal Processing. 1993:22–27. [Google Scholar]
  • 67.Meijster A, Wilkinson MHF. A comparison of algorithms for connected set openings and closings. IEEE Trans Pattern Anal Mach Intell. 2002 Apr;24(4):484–494. [Google Scholar]
  • 68.Salembier P, Wilkinson MHF. Connected operators. IEEE Signal Process Mag. 2009 Nov;26(6):136–157. [Google Scholar]
  • 69.CIBC. Seg3D: Volumetric Image Segmentation and Visualization. Scientific Computing and Imaging Institute (SCI); 2016. Download from: http://www.seg3d.org. [Google Scholar]
