Abstract
Spatial light interference microscopy (SLIM) is a recently developed method for the label-free imaging of live cells, using the quantitative optical path length through the sample as an endogenous source of contrast. In conventional SLIM, spatial resolution is limited by diffraction and aberrations. This paper describes a novel constrained deconvolution method for improving resolution in SLIM. Constrained deconvolution is enabled by experimental measurement of the system point-spread function and the modeling of coherent image formation in SLIM. Results using simulated and experimental data demonstrate that the proposed method leads to significant improvements in the resolution and contrast of SLIM images. The proposed method should prove useful for high-resolution label-free studies of biological cells and subcellular processes.
Index Terms: Constrained image reconstruction, deconvolution, live cell imaging, quantitative phase imaging (QPI), spatial light interference microscopy (SLIM)
I. Introduction
Live cell imaging can provide new insight into cellular and subcellular dynamics, structure, and function, and thus has a wide range of important biological applications. Most live cell imaging is performed using light microscopy techniques [1]. However, one limitation of light microscopy for biological samples is that most cellular structures absorb very little visible light and can be considered transparent unless some form of additional sample preparation is performed (e.g., staining or labeling with exogenous contrast agents, including fluorescent tags). While staining/labeling methods can provide images with high contrast between certain preidentified cellular structures of interest, the preparation and imaging of labeled cells can be complicated and/or can potentially disrupt natural cell behavior.
An alternative to using exogenous contrast agents is the use of phase-sensitive microscopy techniques like phase contrast (PC) [2] and differential interference contrast (DIC) microscopy [3]. These methods rely on the fact that light passing through a transparent sample undergoes a phase shift determined by the optical path length through the sample, which is itself a function of the sample’s refractive index and thickness. As a result, phase-sensitive techniques can use the optical path length as a source of endogenous contrast in transparent objects. While PC and DIC only provide qualitative phase information, recent advances in quantitative phase imaging (QPI) allow for quantitative measurement of the optical path length, thus enabling new approaches for the study of cellular structure and dynamics [4].
Spatial light interference microscopy (SLIM) is a new QPI method that uses high-numerical-aperture optics and spatially coherent white-light illumination to accurately measure optical phase shifts with both high sensitivity and high axial resolution [5], [6]. Its potential for novel biological applications has recently been demonstrated. However, as in most microscopy methods, the spatial resolution of SLIM is limited by diffraction and aberrations. In this paper, we introduce a new constrained deconvolution approach for SLIM that uses prior knowledge of the system point-spread function (PSF) to significantly enhance the spatial resolution of SLIM phase images. To the best of our knowledge, this is the first reported application of deconvolution to QPI in light microscopy. A preliminary account of this work was previously given in [7].
This paper is organized as follows. Section II describes the relevant background on SLIM imaging and deconvolution. Section III introduces our proposed constrained deconvolution method. Section IV shows example results of applying the proposed method to real SLIM datasets, and provides additional characterization and discussion. Finally, we draw conclusions in Section V.
II. Background
A. SLIM Data Acquisition
A schematic of the SLIM instrument setup is given in [5]. The physics of SLIM are based on QPI with broad-band fields [8], as described in detail in [5] and [6]. We will focus in this paper on the resulting mathematical model of the imaging process. The starting point for SLIM is the transmission of white light through a specimen of interest. The use of white light means that SLIM provides high-resolution axial sectioning, high SNR, and images that are free of the speckle artifacts that contaminate laser-based QPI results. For the spatially coherent illumination used in SLIM, the light passing through the specimen can be described by the complex-valued optical field
U(x) = |A(x)| e^{iΨ(x)}    (1)
where |A(x)| is the magnitude and Ψ(x) is the phase of U (x). The phase function Ψ(x) is proportional to the optical path length through the sample and is the main optical parameter of interest in SLIM.
Based on the assumptions of uniform illumination and the near transparency of biological samples, we additionally assume that A(x) is constant throughout the field of view and use |A(x)| = 1 without loss of generality. As a result, U (x) is represented as a pure phase object, i.e.,
U(x) = e^{iΨ(x)}    (2)
After passing through the sample, light travels from the sample plane through the rest of the microscope system before finally arriving at the detector plane. As a result of aberration and diffraction, the optical field observed at the detector plane O(x) represents a degraded version of the original optical field of interest U (x). Generally, this degradation process can be modeled with the linear integral equation
O(x) = ∫ K(x; s) U(s) ds    (3)
where K (x; s) describes the response at location x in the detector plane to a point source at location s in the sample plane, and the spatial coordinate systems have been normalized to remove any optical image inversion or magnification. For a well-corrected microscope, the imaging system can be considered to be isoplanatic or shift invariant over small spatial regions of interest (ROIs), meaning that K (x; s) = h (x − s) for some function h (·). In this case, (3) can be represented as the convolution integral
O(x) = ∫ h(x − s) U(s) ds    (4)
The function h(x) is the PSF of the microscope system over the isoplanatic ROI, and is assumed to be known, since PSFs can be modeled theoretically and/or measured experimentally using standard methods. In this paper, we measure the PSF by SLIM imaging of microbeads that have diameters smaller than one-third of the microscope resolution, which are used as models of point sources [9], [10].
Ideally, the PSF h(x) would be a Dirac delta function, since this would represent an ideal imaging system for which no spatial information is lost as light travels from the sample plane to the detector plane. However, for practical microscopes, finite numerical aperture means that many of the high-frequency spatial Fourier components of U (x) are not transmitted by the imaging optics, and the observed field O(x) is heavily blurred relative to the original optical field.
After the light has passed through the microscope, the final step of SLIM data acquisition is the sampling of O(x). We will denote the magnitude and phase of O(x) as |M(x)| and Φ(x), respectively. SLIM uses interferometry to measure the phase function Φ(x) on a rectangular array of points determined by the pixel locations of a charge-coupled device (CCD) camera. The magnitude |M(x)| is not typically measured directly, as this would require modification of the SLIM optics.
While SLIM measures the phase Φ(x) of the blurred field, the original phase function Ψ(x) is significantly more interesting for high-resolution microscopy applications. However, with access to prior knowledge about the PSF, it can often be possible to enhance Φ(x) to regain some of the high-resolution content of Ψ(x). This kind of resolution enhancement is the objective of the method proposed in this paper. The process of using knowledge of the PSF to transform a blurred image into a higher resolution image is known as deconvolution.
B. Deconvolution
Standard deconvolution is a classical linear systems problem, with many applications in light microscopy and other imaging modalities [9]–[15]. The standard formulation of the problem is as follows: given the convolution relationship in (4), recover U (x) from measurements of O(x) and prior knowledge of the PSF. The characteristics of this inverse problem are easily analyzed in the Fourier domain. In particular, denoting the spatial Fourier transforms of O(x), U (x), and h(x) as Ô(k), Û(k), and Ĥ(k), respectively, the convolution property of the Fourier transform implies that (4) can be equivalently expressed as the simple Fourier-domain multiplication
Ô(k) = Ĥ(k) Û(k)    (5)
As a result of this multiplicative relationship, a straightforward approach to recovering U (x) from O(x) would be to use simple division in the Fourier domain to reconstruct the original U (x):
Û(k) = Ô(k) / Ĥ(k)    (6)
However, there are two important problems preventing the direct application of (6) to SLIM deconvolution. First, it is important to note that the division in (6) is undefined whenever Ĥ(k) is equal to zero, and is unstable with respect to small noise perturbations in Ô(k) whenever Ĥ(k) is nearly zero. For practical microscopes with finite aperture, Ĥ(k) will only be nonzero within a small region of k-space determined by the microscope pupil function. As a result, deconvolution for microscopy is well known to be an ill-posed problem—the measured data contain little or no information about high-resolution image features, and there are generally multiple different potential reconstructed images that match closely with the measured data. This issue is common to most deconvolution microscopy applications, and the approach that is generally taken to make the deconvolution problem well posed is to incorporate additional prior information into the reconstruction process. Frequently, prior information is incorporated through the use of an appropriate regularization strategy [9], [11]–[14]. In this paper, we also make use of regularization to address ill conditioning, as described in the next section.
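The instability of naive Fourier division, and how regularization tames it, can be seen in a small self-contained sketch (NumPy, 1-D; a hypothetical Gaussian kernel stands in for the measured PSF, and the noise level and λ are illustrative choices, not values from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(-1, 1, n)

# Hypothetical 1-D Gaussian kernel standing in for the system PSF.
h = np.exp(-x**2 / (2 * 0.05**2))
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))

u = (np.abs(x) < 0.3).astype(float)          # simple 1-D "object"
O = np.fft.ifft(np.fft.fft(u) * H).real      # blurred observation, as in (4)/(5)
O_noisy = O + 1e-6 * rng.standard_normal(n)  # tiny measurement noise

# Naive inverse filter (6): noise is amplified wherever |H(k)| is nearly zero.
u_naive = np.fft.ifft(np.fft.fft(O_noisy) / H).real

# Tikhonov-regularized division keeps the inversion stable.
lam = 1e-6
u_reg = np.fft.ifft(np.fft.fft(O_noisy) * np.conj(H) / (np.abs(H)**2 + lam)).real

err_naive = np.linalg.norm(u_naive - u)
err_reg = np.linalg.norm(u_reg - u)
print(err_reg < 1e-3 * err_naive)
```

Even with noise at the 10⁻⁶ level, the naive estimate is dominated by amplified noise, while the regularized estimate recovers the object up to the frequencies the kernel transmits.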
The second issue in applying (6) to SLIM images is that existing SLIM processing does not provide the magnitude of O(x), while standard deconvolution methods traditionally assume that complete information about O(x) is available. This means that we have less information than we would for standard deconvolution, and that we cannot easily compute Ô(k). As a result, SLIM deconvolution can be even more poorly conditioned than standard deconvolution. However, useful SLIM deconvolution is still feasible if the standard deconvolution problem formulation is adjusted to include appropriate modeling of the coherent image formation process in SLIM and appropriate assumptions about U (x). We describe our proposed approach to SLIM deconvolution in the next section.
III. Proposed Method
A. Problem Formulation
To simplify notation, the remainder of this paper will assume 2-D imaging of thin samples, with x = (x, y). We begin the formulation of our proposed method by using our previous assumptions about the SLIM imaging of biological specimens to expand (4) as
|M(x, y)| e^{iΦ(x, y)} = ∫∫ h(x − s, y − t) e^{iΨ(s, t)} ds dt    (7)
In writing this equation, we have explicitly assumed that U (x, y) is a phase object. This constraint is crucial for enabling SLIM deconvolution, since magnitude information about O(x) is not provided. Due to spatial sampling by the CCD detector array, we assume that we only have access to the values of Φ(x, y) on a rectangular lattice of N equally spaced points that are ordered lexicographically. In addition, it is assumed that h (x, y) has already been measured and is known.
Our proposed method seeks to estimate the unknown functions |M (x, y)| and Ψ(x, y) from the data. In particular, we will seek estimates of these functions that are consistent with the measurement model given in (7). However, due to the ill-posed nature of the problem, a data consistency constraint is not sufficient by itself to guarantee unique and stable SLIM deconvolution. As a result, we will impose additional constraints on U (x, y) to obtain a better posed reconstruction problem. In the absence of stronger prior information, it is standard in inverse problems to constrain reconstructed images to be spatially smooth [11]–[14]. This choice is based on the observation that most images of interest are spatially piecewise homogeneous, while reconstructed noise images frequently demonstrate very significant pixel-to-pixel spatial variation. As a result, our proposed method seeks to obtain a reconstruction that is both data consistent and spatially smooth.
Before explicitly specifying our proposed deconvolution formulation, we will first additionally assume that we can use a finite-dimensional image model to represent the continuous function U (x, y). In particular, we will assume that
U(x, y) = Σ_{p=1}^{N} e^{iψ_p} v(x − x_p, y − y_p)    (8)
where v(x, y) is a pixel basis function and {ψ_p}_{p=1}^{N} is a set of real numbers. The use of a finite-dimensional representation is important to enable practical implementation of our proposed method on digital computers, and is standard in inverse problems [11]–[14]. Many choices for the pixel basis function v(x, y) are possible, and we choose Dirac delta functions to obtain the following numerical approximation of (7):
|M(x_n, y_n)| e^{iΦ(x_n, y_n)} = Σ_{p=1}^{N} H_{np} e^{iψ_p}    (9)
for n = 1, 2, …, N, where Hnp = h (xn − xp, yn − yp ). To simplify notation in the sequel, we will equivalently express (9) in terms of matrices and vectors as follows:
m ⊙ e^{iφ} = H e^{iψ}    (10)
where ⊙ denotes the Hadamard product (element-by-element multiplication), H is an N × N matrix with elements [H]_np = H_np, m and ψ are N-dimensional real-valued vectors in ℝ^N with elements [m]_n = |M(x_n, y_n)| and [ψ]_n = ψ_n, respectively, and e^{iφ} and e^{iψ} are N-dimensional complex-valued vectors in ℂ^N with elements [e^{iφ}]_n = e^{iΦ(x_n, y_n)} and [e^{iψ}]_n = e^{iψ_n}, respectively. We will also use e^{−iφ} and e^{−iψ} to denote the complex conjugates of e^{iφ} and e^{iψ}, respectively. With this notation, we now explicitly describe the data consistency and spatial smoothness constraints we propose to use for SLIM deconvolution.
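As an illustration of the forward model in (10), the following sketch (NumPy; a hypothetical Gaussian PSF and smooth phase stand in for measured data, and circular boundaries are assumed so that H acts as an FFT-based convolution) forms the blurred field and confirms that the model is self-consistent while the magnitude m remains unobserved:

```python
import numpy as np

n = 64  # n x n image; N = n*n in the paper's notation
yy, xx = np.mgrid[0:n, 0:n]

# Hypothetical smooth phase object psi and a small Gaussian PSF (illustrative stand-ins).
psi = 0.5 * np.exp(-((xx - n/2)**2 + (yy - n/2)**2) / (2 * 6.0**2))
h = np.exp(-((xx - n/2)**2 + (yy - n/2)**2) / (2 * 1.5**2))
h /= h.sum()
H_hat = np.fft.fft2(np.fft.ifftshift(h))  # FFT of the PSF (circular-boundary model)

def H_apply(v):
    """Matrix-vector product with H implemented as an FFT convolution."""
    return np.fft.ifft2(np.fft.fft2(v) * H_hat)

# Forward model (10): the blurred field O = H e^{i psi}.
O = H_apply(np.exp(1j * psi))
m, phi = np.abs(O), np.angle(O)

# SLIM measures only phi; m is unobserved. Check model self-consistency,
# and note that the blurred magnitude is no longer uniformly 1.
print(np.allclose(m * np.exp(1j * phi), O), m.min() < 1.0)
```

The second printed value illustrates the point made above: even for a pure phase object, blurring produces a nonuniform magnitude at the detector, and that magnitude is exactly the quantity SLIM does not record.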
In traditional deconvolution microscopy [9], it is common to use a data-fidelity constraint that models the Poisson statistics of photon-limited optical imaging. In contrast to photon-limited scenarios, SLIM has an intrinsically high signal-to-noise ratio due to the use of white-light illumination, and the dominant sources of measurement errors for SLIM are small instrumental instabilities. As a result, we adopt a least-squares data-fidelity criterion because it leads to efficient computations. If we were to use only data consistency constraints, a deconvolved reconstruction could be obtained by solving
{m̂, ψ̂} = arg min_{m ≽ 0, ψ ∈ ℝ^N} ‖m ⊙ e^{iφ} − H e^{iψ}‖²_{ℓ2}    (11)
where ‖·‖_{ℓ2} is the ℓ2-norm defined as ‖c‖_{ℓ2} = (Σ_{n=1}^{N} |c_n|²)^{1/2}, and m ≽ 0 denotes the constraint that every element of the vector m should be real and nonnegative. As described previously, the problem in (11) is ill posed, in the sense that the inverse mapping from Φ(x, y) to Ψ(x, y) through (11) can be nonunique and is unstable with respect to small perturbations of Φ(x, y). As a result, we propose to augment (11) with an additional constraint that encourages spatially smooth reconstructions. In particular, we estimate m and ψ by solving the penalized nonlinear least-squares problem as follows:
{m̂, ψ̂} = arg min_{m ≽ 0, ψ ∈ ℝ^N} ‖m ⊙ e^{iφ} − H e^{iψ}‖²_{ℓ2} + λ R(e^{iψ})    (12)
where R(e^{iψ}) is a regularization functional that penalizes nonsmooth functions, and λ is a scalar regularization parameter that can be adjusted to balance the tradeoff between the data consistency and smoothness constraints.
Use of smoothness-based regularization is a standard approach for stabilizing ill-posed inverse problems like deconvolution [9], [11]–[14]. While many different regularization strategies are possible, we will restrict our attention in this paper to two common regularization functionals that penalize the magnitude of the spatial gradient of e^{iψ}. In particular, let D_x and D_y be N × N sparse matrices that use finite differences to approximate spatial differentiation of e^{iψ} along the x and y dimensions, such that
[D_x e^{iψ}]_n = ( e^{iψ_{n′}} − e^{iψ_n} ) / Δ    (13)
and
[D_y e^{iψ}]_n = ( e^{iψ_{n″}} − e^{iψ_n} ) / Δ    (14)
where n′ and n″ are chosen such that (xn′, yn′) = (xn + Δ, yn) and (xn″, yn″) = (xn, yn + Δ), where Δ is the detector spacing in the CCD array. The first regularization functional we consider penalizes the magnitude squared of the spatial gradient of the reconstructed field:
R_Q(e^{iψ}) = ‖D_x e^{iψ}‖²_{ℓ2} + ‖D_y e^{iψ}‖²_{ℓ2}    (15)
This type of quadratic regularization heavily discourages large values of the spatial gradient and, thus, results in smooth reconstructions. The second regularization functional we consider penalizes the magnitude of the spatial gradient:
R_TV(e^{iψ}) = Σ_{n=1}^{N} ( |[D_x e^{iψ}]_n|² + |[D_y e^{iψ}]_n|² + ε )^{1/2}    (16)
and is often referred to as a total-variation (TV) penalty [13], [14], [16], [17]. The constant ε appearing in (16) is a small positive scalar (we use ε = 10−10) that is used to ensure differentiability of RTV (·) and improve the stability of numerical optimization algorithms [13]. Similar to the quadratic penalty of (15), the TV penalty also encourages spatial smoothness but penalizes large spatial gradient elements less strongly. As a result, the TV penalty is able to preserve image discontinuities (i.e., edges) better than the quadratic penalty and thus can lead to better reconstructions for images with significant edge structures. However, one disadvantage of the TV penalty compared to the quadratic penalty is that numerical optimization problems involving TV are generally more nonlinear and more difficult to solve.
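The qualitative contrast between the two penalties can be checked numerically. The sketch below (NumPy; simple forward differences stand in for the sparse D_x and D_y matrices, and the step edge and ramp are illustrative test phases) shows that the quadratic penalty punishes a sharp phase edge far more heavily than the TV penalty does, relative to a smooth ramp carrying the same total phase change:

```python
import numpy as np

n = 32
yy, xx = np.mgrid[0:n, 0:n]
eps = 1e-10

def penalties(psi):
    """Quadratic- and TV-style penalties of e^{i psi} from forward differences
    (an illustrative stand-in for the sparse D_x, D_y matrices)."""
    u = np.exp(1j * psi)
    gx = u[:-1, 1:] - u[:-1, :-1]   # horizontal differences
    gy = u[1:, :-1] - u[:-1, :-1]   # vertical differences
    g2 = np.abs(gx)**2 + np.abs(gy)**2
    return np.sum(g2), np.sum(np.sqrt(g2 + eps))

# A sharp 0.3-rad phase edge vs. a smooth ramp with the same total phase change.
R_quad_edge, R_tv_edge = penalties(0.3 * (xx > n // 2))
R_quad_ramp, R_tv_ramp = penalties(0.3 * xx / n)

# The quadratic penalty charges the edge far more than the ramp, while TV
# charges them almost equally -- which is why TV preserves edges better.
print(R_quad_edge / R_quad_ramp, R_tv_edge / R_tv_ramp)
```

On this example the quadratic penalty ratio is roughly an order of magnitude larger than the TV ratio, mirroring the edge-preservation behavior described above.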
We describe our solution to the numerical optimization problem in (12) for both regularization functionals in the following section.
B. Solving the Optimization Problem
Regardless of the choice of regularization functional, (12) represents a nonlinear, nonconvex optimization problem that does not have a closed-form solution. As a result, it is necessary to use iterative methods to determine the optimal solution. However, rather than applying an iterative algorithm to (12) directly, we first observe that (12) can be simplified to improve computational efficiency. In particular, for a fixed value of ψ, the optimal solution for m has a closed-form expression. To see this, consider the optimization problem given by
m̂(ψ) = arg min_{m ≽ 0} ‖m ⊙ e^{iφ} − H e^{iψ}‖²_{ℓ2} + λ R(e^{iψ})    (17)
Since the second term in this expression does not depend on m, the solution to (17) is equivalent to the solution of
m̂(ψ) = arg min_{m ≽ 0} ‖m ⊙ e^{iφ} − H e^{iψ}‖²_{ℓ2}    (18)
Using Parseval’s identity, (18) can be rewritten as
m̂(ψ) = arg min_{m ≽ 0} { ‖m − real(e^{−iφ} ⊙ (H e^{iψ}))‖²_{ℓ2} + ‖imag(e^{−iφ} ⊙ (H e^{iψ}))‖²_{ℓ2} }    (19)
where we have used the fact that pointwise multiplication with e^{−iφ} is a unitary operation, and the fact that ‖c‖²_{ℓ2} = ‖real(c)‖²_{ℓ2} + ‖imag(c)‖²_{ℓ2} for any complex vector c ∈ ℂ^N. The operators real(·) and imag(·) return the real and imaginary parts of their inputs, respectively.
From (19), we can derive the optimal solution for m̂(ψ) as
m̂(ψ) = [ real(e^{−iφ} ⊙ (H e^{iψ})) ]_+    (20)
where [·]+ is an operator that sets all of the negative entries in a vector to zero.
As a result of this closed-form solution, we can use the variable projection framework [18] to eliminate the variable m from the optimization procedure. In particular, by substituting the optimal m̂(ψ) from (20) into (12) and simplifying, we obtain the following reduced problem that only requires the optimization of Ψ:
ψ̂ = arg min_{ψ ∈ ℝ^N} ‖ [ e^{−iφ} ⊙ (H e^{iψ}) ]_− ‖²_{ℓ2} + λ R(e^{iψ})    (21)
where the [·]− operator is defined for arbitrary c ∈ ℂN as [c]− = c − [real (c)]+. The variable projection framework ensures that (21) has the same optimal solution for ψ̂ as (12). In addition to reducing the number of unknowns from 2N to N, the use of variable projection generally also reduces computational complexity, helps to avoid undesirable local minima, and improves the convergence rate of iterative algorithms [18].
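The variable-projection identity can be verified numerically. In the sketch below (NumPy; a random complex H and random phases stand in for real SLIM quantities), the full data-consistency cost evaluated at the closed-form m̂(ψ) of (20) coincides with the reduced cost built from the [·]_− operator, and m̂(ψ) is at least as good as any other feasible magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
# Random complex H and random phases standing in for real SLIM quantities.
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
phi = rng.uniform(-np.pi, np.pi, N)
psi = rng.uniform(-np.pi, np.pi, N)

c = np.exp(-1j * phi) * (H @ np.exp(1j * psi))

# Closed-form minimizer of (18), as in (20): keep the nonnegative real part of c.
m_hat = np.maximum(c.real, 0.0)

# Full data-consistency cost at (m_hat, psi) ...
cost_full = np.linalg.norm(m_hat * np.exp(1j * phi) - H @ np.exp(1j * psi))**2

# ... equals the reduced cost of (21), built from the [.]_- operator.
c_minus = c - np.maximum(c.real, 0.0)    # [c]_- = c - [real(c)]_+
cost_reduced = np.linalg.norm(c_minus)**2
print(np.isclose(cost_full, cost_reduced))

# And m_hat is at least as good as an arbitrary feasible (nonnegative) magnitude.
m_rand = np.abs(rng.standard_normal(N))
cost_rand = np.linalg.norm(m_rand * np.exp(1j * phi) - H @ np.exp(1j * psi))**2
print(cost_full <= cost_rand)
```

This is exactly the elimination step described above: because multiplication by e^{−iφ} preserves the norm, the optimal m matches the nonnegative real part of the rotated field, and only the residual captured by [·]_− remains in the reduced cost.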
In this paper, we use the Polak–Ribière nonlinear conjugate gradient (NCG) algorithm [19] to minimize (21). The NCG algorithm is an iterative method for minimizing differentiable cost functions that is well suited to large-scale optimization problems, because it tends to converge rapidly and does not require the storage of any large auxiliary matrices. Using the NCG algorithm requires computation of the gradient g_ψ of (21) with respect to ψ, which is given by
g_ψ = 2 imag( Z (H_r^T − i H_i^T) [ Φ H e^{iψ} ]_− ) + 2λ imag( Z ( D_x^H W D_x + D_y^H W D_y ) e^{iψ} )    (22)
where H_i = imag(ΦH), H_r = real(ΦH), Φ is an N × N diagonal matrix with diagonal elements [Φ]_nn = e^{−iΦ(x_n, y_n)}, and Z is an N × N diagonal matrix with diagonal elements
[Z]_nn = e^{−iψ_n}    (23)
The matrix W appearing in (22) is also N × N and diagonal, though the diagonal entries of this matrix depend on our choice of regularization functional. For the quadratic penalty in (15), the diagonal entries are given by [W]nn = 1 for all n = 1, 2, …, N. For the TV penalty in (16), the diagonal entries of W are given by
[W]_nn = (1/2) ( |[D_x e^{iψ}]_n|² + |[D_y e^{iψ}]_n|² + ε )^{−1/2}    (24)
Given the expression for the gradient in (22), the NCG algorithm is implemented as described in [19]. For practical computation, it is important to note that all of the matrices appearing in (22) have special structures that can be leveraged to improve computational speed and reduce memory requirements for the algorithm. In particular, the Dx, Dy, W, Z, and Φ matrices are all sparse or diagonal. Sparse or diagonal matrix structure significantly reduces storage requirements and can improve the speed of matrix-vector multiplication, because it is only necessary to store and operate on the nonzero elements of the matrix [20]. In addition, the matrix H has Toeplitz-block-Toeplitz structure because it implements a convolution [13]. This structure means that it is only necessary to store the convolution kernel h (x, y) instead of the full N × N H matrix in memory, and that matrix-vector multiplications with H can be performed efficiently by using the fast Fourier transform (FFT) to implement multidimensional convolution [13], [21].
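The FFT-based matrix-vector product can be checked on a toy example. The sketch below (NumPy, 1-D, circular boundaries, so H is circulant rather than Toeplitz-block-Toeplitz) confirms that an explicit product with the full matrix matches an FFT-based circular convolution that stores only the kernel:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
h = rng.standard_normal(n)   # 1-D kernel standing in for the measured PSF
v = rng.standard_normal(n)   # 1-D image vector

# Explicit n x n circulant H (the circular-boundary analogue of the
# Toeplitz structure discussed above): H[i, j] = h[(i - j) mod n].
H = np.array([np.roll(h, k) for k in range(n)]).T
y_explicit = H @ v

# The same product via the FFT: O(n log n) work, and only h needs to be stored.
y_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(v)).real
print(np.allclose(y_explicit, y_fft))
```

For an N-pixel 2-D image the same idea applies with `fft2`, turning an O(N²) dense product into an O(N log N) operation with O(N) memory.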
In practice, the cost function in (21) is nonconvex and will have many local minimizers. As a result, it is important to initialize the NCG algorithm well. We have found that a reasonable starting point can be obtained by ignoring the constraint that U(x, y) should be a pure phase object, and instead assuming that O(x, y) is a pure phase object (this latter assumption is approximately valid when U(x, y) is a pure phase object and the phase function Ψ(x, y) has very small spatial variations). In this case, a deconvolved reconstruction can be obtained by solving
û = arg min_{u ∈ ℂ^N} ‖ e^{iφ} − H u ‖²_{ℓ2} + λ ( ‖D_x u‖²_{ℓ2} + ‖D_y u‖²_{ℓ2} )    (25)
where u is the N × 1 complex vector with elements [u]_n = U(x_n, y_n). Note that under the assumption that O(x, y) = e^{iΦ(x, y)}, (25) is also a standard formulation commonly used to address the general deconvolution problem introduced in Section II-B.
Once a solution to (25) is obtained, an initial guess for ψ is obtained by extracting the phase of û. While (25) makes less accurate assumptions about SLIM imaging than (12), the benefit of using (25) to initialize the NCG algorithm is that it is equivalent to a standard linear least-squares problem, and thus its solution has an analytic closed-form expression:
û = ( H^H H + λ ( D_x^H D_x + D_y^H D_y ) )^{−1} H^H e^{iφ}    (26)
Since the matrices in this expression are all Toeplitz-block-Toeplitz and/or sparse, the optimal û can be found using standard Toeplitz/circulant solvers that leverage the FFT to considerably improve computational efficiency [22].
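Under circular boundary conditions, H, D_x, and D_y are all diagonalized by the FFT, and the closed form of (26) reduces to a pointwise division in k-space. The sketch below (NumPy; an illustrative Gaussian PSF, a synthetic test phase, and an illustrative λ stand in for the measured quantities) computes this initializer and checks that it is closer to the true phase than the blurred measurement:

```python
import numpy as np

n = 64
yy, xx = np.mgrid[0:n, 0:n]

# Illustrative Gaussian PSF and a smooth test phase (stand-ins for measured data).
h = np.exp(-((xx - n/2)**2 + (yy - n/2)**2) / (2 * 2.0**2))
h /= h.sum()
psi = 0.4 * np.exp(-((xx - 20)**2 + (yy - 40)**2) / (2 * 5.0**2))

H_hat = np.fft.fft2(np.fft.ifftshift(h))
phi = np.angle(np.fft.ifft2(np.fft.fft2(np.exp(1j * psi)) * H_hat))  # simulated SLIM phase

# With circular boundaries, the normal equations of (26) become a pointwise
# division in k-space (forward-difference symbols for D_x and D_y).
dx_hat = np.exp(-2j * np.pi * np.fft.fftfreq(n))[None, :] - 1.0
dy_hat = np.exp(-2j * np.pi * np.fft.fftfreq(n))[:, None] - 1.0
lam = 1e-3
num = np.conj(H_hat) * np.fft.fft2(np.exp(1j * phi))
den = np.abs(H_hat)**2 + lam * (np.abs(dx_hat)**2 + np.abs(dy_hat)**2)
psi0 = np.angle(np.fft.ifft2(num / den))  # initializer for the NCG iterations

# The deblurred phase should be closer to the true psi than the measured phi.
print(np.linalg.norm(psi0 - psi) < np.linalg.norm(phi - psi))
```

Note that the denominator never vanishes here: the PSF passes DC (Ĥ(0) = 1 after normalization), and the regularizer covers the frequencies where |Ĥ| is small.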
To summarize this section, pseudocode for the proposed algorithm is given as follows.
Proposed algorithm to find the ψ̂ that optimizes (12)
Given: Inputs Φ and h(x, y).
Initialization: Compute û from (26) and set ψ^(0) equal to the phase of û; set i = 0.
Main Algorithm: Repeat until convergence: compute the gradient g_ψ in (22) at ψ^(i); compute the Polak–Ribière search direction and step size [19]; update ψ^(i+1); set i ← i + 1.
Output: Converged value of ψ^(i).
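The structure of the main loop can be sketched generically. The following minimal Polak–Ribière NCG routine (Python; a toy convex quadratic stands in for the SLIM cost and its gradient, and the simple backtracking line search is an illustrative choice, not the paper's implementation) shows the update pattern used above:

```python
import numpy as np

def ncg_pr(f_grad, x0, n_iter=1000, step0=0.1):
    """Minimal Polak-Ribiere(+) nonlinear conjugate gradient with a simple
    backtracking line search. A sketch, not the paper's implementation."""
    x = x0.copy()
    g = f_grad(x)[1]
    d = -g
    for _ in range(n_iter):
        f0 = f_grad(x)[0]
        t = step0
        while f_grad(x + t * d)[0] > f0 and t > 1e-12:  # backtrack until descent
            t *= 0.5
        x = x + t * d
        g_new = f_grad(x)[1]
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ restart rule
        d = -g_new + beta * d
        g = g_new
    return x

# Demo on a toy convex quadratic standing in for the SLIM cost of (21).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f_grad = lambda x: (0.5 * x @ A @ x - b @ x, A @ x - b)

x_hat = ncg_pr(f_grad, np.zeros(2))
x_star = np.linalg.solve(A, b)
print(np.allclose(x_hat, x_star, atol=1e-6))
```

The PR+ rule (clipping β at zero) automatically resets the search direction to steepest descent whenever the momentum term would be counterproductive, which is one reason the method is robust on nonconvex costs like (21).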
IV. Results and Discussion
All algorithms were implemented in MATLAB 7.11 (MathWorks, Inc., Natick, MA), and computations were performed on a Linux-based workstation with two quad-core 3.16-GHz Intel Xeon processors and 48 GB of RAM. The following sections describe and discuss results obtained by applying the proposed method to experimental and simulated data. In all of the real experiments, SLIM imaging was performed with a white-light source emitting wavelengths between 400 and 700 nm and using a 1388×1040 Zeiss AxioCam MRm CCD detector array sampling uniformly over a 100 μm × 75 μm field of view. All experiments were performed using a Zeiss EC Plan-Neofluar 40×/0.75 PH2 objective.
A. Experimental Live Cell Imaging Results
Fig. 1 demonstrates the application of deconvolution to SLIM images of a mixed glial culture derived from bilateral dissection of the ventral hypothalamus of postnatal (P1-P2) Long-Evans BluGill rats. Fig. 2 demonstrates the application of deconvolution to SLIM images of primary hippocampal neuron cultures derived from the CA1-CA3 region of the same type of rats. In both datasets and for both regularization penalty functions, the deconvolved SLIM images demonstrate significantly enhanced resolution and visual quality relative to the original SLIM images.
Fig. 1.
SLIM and deconvolved SLIM phase images of live cells from a mixed glial culture. (top row) Images of a 41.4 μm × 31.0 μm ROI featuring a live microglial cell. (bottom row) Images of a 41.4 μm × 31.0 μm ROI featuring part of the glial membrane. The white scale bar in (a) corresponds to 5 μm, and is valid for both rows. (a) Original Φ(x, y) SLIM phase image. (b) Proposed Ψ̂(x, y) using the quadratic regularization of (15). (c) Proposed Ψ̂(x, y) using the TV regularization of (16). Both deconvolution approaches significantly enhance the resolution and visual quality of the SLIM images. Videos illustrating the temporal dynamics of these datasets can be viewed in the supplementary videos accompanying the paper.
Fig. 2.
SLIM and deconvolved SLIM phase images of live cells from a primary hippocampal neuron culture. The images correspond to a 41.4 μm × 31.0 μm ROI. The white scale bar in (a) corresponds to 5 μm. (a) Original Φ(x, y) SLIM phase image. (b) Proposed Ψ̂(x, y) using the quadratic regularization of (15). (c) Proposed Ψ̂(x, y) using the TV regularization of (16).
The images in Fig. 1 show ROIs from the first frame of a 396-frame image sequence acquired at 0.5 Hz over a 13-min time span. Movies illustrating the full image sequence can be viewed in the supplementary videos accompanying the paper. All videos use the MPEG-1 video format, and are played 50× faster than the true data acquisition speed. Clip 1 shows videos of the full field of view for the (left) original SLIM phase images Φ(x, y) and (right) quadratically regularized deconvolved SLIM phase images Ψ̂(x, y). Clips 2 and 3 show (left) original SLIM phase images and (right) quadratically regularized deconvolved SLIM phase images for different ROIs. The white scale bars in all videos correspond to 10 μm. These videos demonstrate the temporal cellular dynamics that are observable with SLIM, both before and after deconvolution. Combining the temporal resolution of this acquisition with the spatial resolution enabled by deconvolution, it is possible to clearly visualize a number of different subcellular processes, including the dynamics of the cytoskeleton and various transport processes.
When comparing quadratically regularized reconstructions using (15) with TV-regularized reconstructions using (16), there are a few important features to note. First, while both methods offer similar levels of resolution enhancement, reconstructions using TV regularization generally contained sharper edges and displayed less oscillatory “noise” and ringing than reconstructions using quadratic regularization, as would be expected from the known characteristics of TV. These characteristics are easily appreciated in Fig. 3, which plots a single row from the reconstructed glial phase image. While artifact reduction is desirable, TV-regularized reconstructions are also known to exhibit certain undesirable characteristics, including loss of image contrast, distortion of geometrical image structure, and loss of image texture [17]. As a result, the preference between quadratically regularized and TV-regularized reconstructions will depend on the objectives of the imaging experiment and should be tailored to each application. Second, TV-regularized reconstruction is significantly more computationally intensive than quadratically regularized reconstruction. For our MATLAB-based implementation, TV-regularized reconstruction requires about 3× longer computation time on average: approximately 530 s per image frame, versus approximately 180 s/frame for quadratic regularization. For comparison, the initialization computed using (25) requires only approximately 40 s/frame. Note that the speed of all formulations could be significantly improved through implementation in a more computationally efficient programming environment.
Fig. 3.
Plots of a section from a single row of Ψ̂(x, y) from the deconvolved glial culture dataset shown in Fig. 1. The TV-regularized reconstruction has reduced oscillatory “noise” and ringing artifacts relative to the quadratically regularized reconstruction, though this possibly comes at the expense of reduced image contrast, geometric distortion, and loss of fine texture.
An important consideration for regularized image reconstruction is the selection of the regularization parameter λ. If λ is set too small, then the smoothness constraints will not be heavily enforced and noise perturbations can dominate the useful information contained in the reconstruction. On the other hand, if λ is set too large, then smoothness constraints will be overly enforced, and the deconvolved images will have lower resolution than necessary. These effects are illustrated in Fig. 4. While many automatic regularization parameter selection methods exist in the inverse problems literature [11]–[14], we heuristically adjusted λ in this study to achieve good visual image quality. In practice, we have observed that for the same physical SLIM setup, the optimal regularization parameter is fairly consistent for different datasets. For quadratic regularization, we choose λ such that the condition number [13], [14] of the matrix being inverted in (25) is 500. For TV regularization, we choose λ to match the same level of data consistency achieved by the quadratically regularized reconstruction.
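Under a circulant model of H, D_x, and D_y, the condition-number rule above can be implemented directly, since the eigenvalues of the matrix inverted in (26) are available in closed form. The sketch below (NumPy; an illustrative Gaussian PSF and circular boundaries, not the paper's exact setup) bisects on λ until the condition number reaches a target of 500:

```python
import numpy as np

n = 64
yy, xx = np.mgrid[0:n, 0:n]

# Illustrative Gaussian PSF standing in for the measured SLIM PSF.
h = np.exp(-((xx - n/2)**2 + (yy - n/2)**2) / (2 * 2.0**2))
h /= h.sum()

# Eigenvalues of H^H H and of Dx^H Dx + Dy^H Dy under circular boundaries.
H2 = np.abs(np.fft.fft2(np.fft.ifftshift(h)))**2
dx2 = np.abs(np.exp(-2j * np.pi * np.fft.fftfreq(n))[None, :] - 1.0)**2
dy2 = np.abs(np.exp(-2j * np.pi * np.fft.fftfreq(n))[:, None] - 1.0)**2
L = dx2 + dy2

def cond(lam):
    """Condition number of H^H H + lam * (Dx^H Dx + Dy^H Dy)."""
    s = H2 + lam * L
    return s.max() / s.min()

# Bisect on lambda (log scale) until the condition number hits the target.
target, lo, hi = 500.0, 1e-8, 1e-1   # cond(lo) >> target, cond(hi) << target
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if cond(mid) > target:
        lo = mid
    else:
        hi = mid
lam = np.sqrt(lo * hi)
print(abs(cond(lam) - target) / target < 0.01)
```

On the bracketed interval the condition number decreases monotonically in λ, so the bisection converges to the unique λ achieving the target conditioning.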
Fig. 4.
Reconstructions of a 19.7 μm × 16.1 μm ROI from the glial culture dataset using different regularization parameters. The white scale bar in (a) corresponds to 5 μm. (top row) Quadratically regularized reconstructions. (bottom row) TV-regularized reconstructions. The regularization parameter for quadratic regularization was selected such that the condition number of the matrix being inverted in (25) was (a) 50 (λ = 0.04), (b) 500 (λ = 0.003), and (c) 50000 (λ = 0.00003). For TV regularization, λ was adjusted to match the level of data consistency achieved with quadratic regularization. The choice of λ represents a tradeoff between resolution and noise.
B. Analysis of Resolution Enhancement
To assess the level of resolution improvement, we applied the proposed method to SLIM images of 200-nm polystyrene beads (point sources significantly smaller than the experimental resolution of the microscope); a representative result is shown in Fig. 5. These results indicate that deconvolution both reduces the full width at half maximum (FWHM) and increases the peak phase of the beads, as expected for successful resolution enhancement. In particular, the FWHM of the bead in the original SLIM image was measured to be 660 nm, while the FWHM decreased to 545 nm for deconvolution with quadratic regularization and to 565 nm with TV regularization, improvements of roughly 20%. Note, however, that the resolution improvement is a function of the regularization parameter λ, and the value of λ used to generate Fig. 5 was the same as in the reconstructions shown in Figs. 1, 2, and 4(b). Further improvements in resolution would be obtainable by choosing smaller values of λ, though at the expense of significant noise sensitivity. In the absence of noise limitations, further improvements in resolution could also be possible with more accurate calibration of the PSF (e.g., using smaller beads or specialized signal processing methods [23]).
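An FWHM measurement of the kind described above can be implemented with linear interpolation of the half-maximum crossings. The sketch below (NumPy; a synthetic Gaussian profile with a hypothetical 10-nm sample spacing, not the actual bead data) recovers the analytic FWHM of 2√(2 ln 2)·σ ≈ 2.3548·σ:

```python
import numpy as np

def fwhm(profile, spacing):
    """Full width at half maximum of a single-peaked 1-D profile, using
    linear interpolation of the half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                      # remove the baseline
    half = 0.5 * p.max()
    above = np.where(p >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the crossings on each side of the peak.
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1])
    right = i1 + (p[i1] - half) / (p[i1] - p[i1 + 1])
    return (right - left) * spacing

# Sanity check on a synthetic Gaussian profile.
spacing = 10.0                            # hypothetical 10-nm pixel spacing
x = np.arange(200) * spacing
sigma = 280.0                             # nm; gives a ~660-nm FWHM, as measured
g = np.exp(-(x - 1000.0)**2 / (2 * sigma**2))
print(abs(fwhm(g, spacing) - 2 * np.sqrt(2 * np.log(2)) * sigma) < 5.0)
```

The interpolation step matters in practice: without it, the estimate would be quantized to the pixel spacing, which is a substantial fraction of the 95–115 nm FWHM changes reported above.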
Fig. 5.
SLIM phase images of a 200-nm bead. The white scale bar in (a) corresponds to 2 μm. (a) Original SLIM image. (b) Proposed method (quadratic regularization). (c) Proposed method (TV regularization). (d) 1-D vertical profile through the bead reconstructions. (e) 1-D vertical profile through the bead after scale normalization.
C. Magnitude Modeling
One of the unique features of our approach compared to traditional deconvolution is that we do not directly observe the magnitude of O (x, y). Instead, we estimate this magnitude after making physics-based assumptions about the magnitude of U (x, y). However, as mentioned previously, if we had made assumptions about the magnitude of O (x, y) instead of U (x, y), then deconvolution would be possible using the traditional simple linear least-squares formulation in (25). Due to the closed-form solution to (25) given in (26), solving (25) is considerably less computationally intensive than solving (12). As a result, it is of interest to test how these different modeling assumptions influence deconvolution performance. Simulations were performed to investigate this issue. In each case, we simulated U (x, y) as a pure phase object, with the gold standard phase image Ψ(x, y) generated based on real SLIM data. Subsequently, we simulated the SLIM imaging experiment, first convolving the simulated U (x, y) with the PSF, and then sampling the phase of the resulting blurred field. Finally, deconvolution reconstructions were obtained using the proposed method from (12) and the simpler traditional method from (25). Reconstruction results were visually similar and are not shown here. However, as shown in Fig. 6, quantitative phase results were distinct for the two different reconstruction schemes. In particular, while the proposed method was accurate in reconstructing the original image phase, the linear reconstruction of (25) had a tendency to underestimate large phase values. This result implies that while solving (25) may be easier than solving (12), using the more accurate assumptions of (12) can lead to better quantitative phase estimates.
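The simulation pipeline described above can be sketched as follows, under illustrative assumptions (a Gaussian coherent PSF and a synthetic Gaussian phase bump standing in for the SLIM-derived gold standard). The key modeling point is that the complex field, not its phase, is blurred, and only the phase of the blurred field is observed.

```python
import numpy as np

# Minimal sketch of the simulated SLIM experiment described in the text,
# with illustrative sizes and a Gaussian PSF (assumptions, not the paper's
# setup): blur the complex field U, then sample the phase of the result.

N = 128
yy, xx = np.mgrid[0:N, 0:N]
r2 = (xx - N / 2) ** 2 + (yy - N / 2) ** 2

# Gold-standard phase image Psi(x, y) and pure phase object U = exp(i*Psi).
psi = 1.2 * np.exp(-r2 / (2 * 6.0**2))
U = np.exp(1j * psi)  # |U| = 1 everywhere

# Normalized Gaussian PSF, applied by circular convolution via the FFT.
psf = np.exp(-r2 / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.fft.ifft2(np.fft.fft2(U) * H)

# The "measurement" samples only the phase of the blurred field; this is
# the input that a deconvolution method would try to invert.
measured_phase = np.angle(blurred)
```

Because blurring averages unit-magnitude phasors, the peak of `measured_phase` falls below the true peak of `psi` in this sketch, which is exactly the kind of loss that deconvolution aims to recover.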
Fig. 6.
Scatter plot comparing the quantitative phase estimation performance of the proposed nonlinear deconvolution method in (12) versus the computationally simpler linear deconvolution method in (25). For the proposed method, the reconstructed phase matched well with the true gold standard phase image. However, the linear method in (25) systematically underestimates large phase values.
D. Extension to Other QPI Methods
Compared to traditional deconvolution approaches, our proposed method explicitly models SLIM-specific data acquisition assumptions, and we have demonstrated that such modeling leads to improved quantitative phase estimation. In practice, many of the assumptions made for SLIM will also be similar for other live cell QPI approaches. As a result, the proposed deconvolution approach has the potential to be applied to other QPI methods. However, one significant difference between other QPI methods and SLIM is that most QPI methods use laser illumination, while SLIM uses white-light illumination. Due to the high coherence of laser illumination, most QPI methods suffer from speckle, multiple reflection, and related artifacts, and will generally have lower contrast-to-noise ratios compared to a white-light QPI method like SLIM [5]. Due to these differences, additional investigation would be necessary to determine the usefulness of our proposed deconvolution method for QPI methods other than SLIM.
V. Conclusion
This paper proposed a deconvolution-based approach for enhancing SLIM images, using a novel formulation that is specifically tailored to the SLIM imaging physics for transparent biological samples. A new fast algorithm was derived to solve the resulting optimization problem, and the performance of the method was evaluated with simulated and experimental data. These results demonstrate that the proposed method enables significant resolution enhancement for SLIM imaging, with important implications for label-free high-resolution imaging of live cells.
Supplementary Material
Acknowledgments
This work was supported in part by Grant NIH-P41-RR023953, Grant NIH-P41-EB001977, Grant NIH-R21-EB009768, Grant NSF-CBET-07-30623, Grant NSF-CBET-1040462 MRI, Grant NSF-CBET-08-46660 CAREER (GP), and Grant NIH-R21-CA154813 (GP).
The authors would like to thank L. Millet and M. Gillette for providing live cell specimens.
Biographies

Justin P. Haldar (S’06) received the B.S. and M.S. degrees in electrical engineering in 2004 and 2005, respectively, and the Ph.D. degree in electrical and computer engineering in 2011, all from the University of Illinois at Urbana-Champaign, Urbana.
His research interests include image reconstruction, parameter estimation, and experiment design for biomedical imaging applications.
Dr. Haldar has received several awards, including the M. E. Van Valkenburg Graduate Research Award, an NSF Graduate Research Fellowship, a Beckman Institute Graduate Fellowship, an IEEE ISBI 2010 Best Student Paper Award, and the First-Place Award in the IEEE EMBC 2010 student paper competition.

Zhuo Wang received the B.E. degree (Hons.) in precision instruments, measurement, and control from Tsinghua University, Beijing, China, in 2002, the M.S. degree in electrical engineering from Wayne State University, Detroit, MI, in 2006, and the Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign, Urbana, in 2011.
He was an Optical Design Engineer at the Tsinghua-Foxconn Nanotechnology Research Center. His research interests include biomedical imaging, biophotonics, biosensors, and their clinical applications.
Dr. Wang is the recipient of the Yuen T. Lo Outstanding Research Award and the Sundaram Senshu International Student Fellowship.

Gabriel Popescu received the B.S. and M.S. degrees in physics from the University of Bucharest, Bucharest, Romania, in 1995 and 1996, respectively, and the M.S. and Ph.D. degrees in optics from the School of Optics/CREOL (now the College of Optics and Photonics), University of Central Florida, Orlando, in 1999 and 2002, respectively.
He continued his training with the G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, as a Postdoctoral Associate. He joined the University of Illinois at Urbana-Champaign, Urbana, in August 2007, where he is an Assistant Professor in the Department of Electrical and Computer Engineering, and holds a full faculty appointment with the Beckman Institute for Advanced Science and Technology. He is also an Affiliate Faculty in bioengineering.

Zhi-Pei Liang (F’06) received the Ph.D. degree in biomedical engineering from Case Western Reserve University, Cleveland, OH, in 1989.
He is currently a Professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign (UIUC), Urbana. His research interests include image formation theory, algorithms, and biomedical applications.
Dr. Liang is a recipient of the 1990 Sylvia Sorkin Greenfield Best Paper Award (Medical Physics), an NSF CAREER Award (1995), and an IEEE-EMBS Early Career Achievement Award (1999). He was named Henry Magnuski Scholar (1999–2001), and University Scholar (2001–2004) at UIUC. He is a Fellow of the American Institute for Medical and Biological Engineering (2005) and the International Society for Magnetic Resonance in Medicine (2010).
Footnotes
This paper has supplementary downloadable material provided by the authors. This material includes three video clips totaling 47.4 MB and is available at http://ieeexplore.ieee.org.
Contributor Information
Justin P. Haldar, Email: haldar@uiuc.edu, Department of Electrical and Computer Engineering and the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA.
Zhuo Wang, Email: zwang47@illinois.edu, Department of Electrical and Computer Engineering and the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA.
Gabriel Popescu, Email: gpopescu@illinois.edu, Department of Electrical and Computer Engineering and the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA.
Zhi-Pei Liang, Email: z-liang@uiuc.edu, Department of Electrical and Computer Engineering and the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA.
References
- 1. Stephens DJ, Allan VJ. Light microscopy techniques for live cell imaging. Science. 2003;300:82–86. doi: 10.1126/science.1082160.
- 2. Zernike F. How I discovered phase contrast. Science. 1955;121:345–349. doi: 10.1126/science.121.3141.345.
- 3. Pluta M. Nomarski's DIC microscopy: A review. Proc SPIE. 1994;1846:10–25.
- 4. Popescu G. Quantitative phase imaging of nanoscale cell structure and dynamics. Methods Cell Biol. 2008;90:87–115. doi: 10.1016/S0091-679X(08)00805-4.
- 5. Wang Z, Millet L, Mir M, Ding H, Unarunotai S, Rogers J, Gillette MU, Popescu G. Spatial light interference microscopy (SLIM). Opt Exp. 2011;19:1016–1026. doi: 10.1364/OE.19.001016.
- 6. Wang Z, Chun IS, Li X, Ong ZY, Pop E, Millet L, Gillette M, Popescu G. Topography and refractometry of nanostructures using spatial light interference microscopy. Opt Lett. 2010;35:208–210. doi: 10.1364/OL.35.000208.
- 7. Haldar JP, Wang Z, Popescu G, Liang Z-P. Label-free high-resolution imaging of live cells with deconvolved spatial light interference microscopy. Proc. IEEE Eng. Med. Bio. Conf.; Aug. 2010; pp. 3382–3385.
- 8. Wang Z, Popescu G. Quantitative phase imaging with broadband fields. Appl Phys Lett. 2010;96:051117-1–051117-3.
- 9. Sibarita JB. Deconvolution microscopy. Adv Biochem Eng/Biotechnol. 2005;95:201–243. doi: 10.1007/b102215.
- 10. Swedlow JR. Quantitative fluorescence microscopy and image deconvolution. Methods Cell Biol. 2007;81:447–465. doi: 10.1016/S0091-679X(06)81021-6.
- 11. Tikhonov AN, Goncharsky AV, Stepanov VV, Yagola AG. Numerical Methods for the Solution of Ill-Posed Problems. Dordrecht, The Netherlands: Kluwer; 1995.
- 12. Bertero M, Boccacci P. Introduction to Inverse Problems in Imaging. London, U.K.: Inst. Phys. Publishing; 1998.
- 13. Vogel CR. Computational Methods for Inverse Problems. Philadelphia, PA: SIAM; 2002.
- 14. Hansen PC. Discrete Inverse Problems: Insight and Algorithms. Philadelphia, PA: SIAM; 2010.
- 15. Cotte Y, Toy MF, Pavillon N, Depeursinge C. Microscopy image resolution improvement by deconvolution of complex fields. Opt Exp. 2010 Sep;18:19462–19478. doi: 10.1364/OE.18.019462.
- 16. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60:259–268.
- 17. Chan T, Esedoglu S, Park F, Yip A. Total variation image restoration: Overview and recent developments. In: Paragios N, Chen Y, Faugeras OD, editors. Handbook of Mathematical Models in Computer Vision. Vol. 2. New York: Springer; 2006. pp. 17–32.
- 18. Golub G, Pereyra V. Separable nonlinear least squares: The variable projection method and its applications. Inverse Problems. 2003;19:R1–R26.
- 19. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C. 2nd ed. Cambridge, U.K.: Cambridge Univ. Press; 1992.
- 20. Gilbert JR, Moler C, Schreiber R. Sparse matrices in MATLAB: Design and implementation. SIAM J Matrix Anal Appl. 1992;13:333–356.
- 21. Oppenheim AV, Schafer RW. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice-Hall; 1999.
- 22. Chan RH, Ng MK. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996;38:427–482.
- 23. Yoo H, Song I, Gweon DG. Measurement and restoration of the point spread function of fluorescence confocal microscopy. J Microsc. 2006;221:172–176. doi: 10.1111/j.1365-2818.2006.01556.x.