Biomedical Optics Express. 2019 Jun 26;10(7):3635–3653. doi: 10.1364/BOE.10.003635

Speckle-structured illumination for 3D phase and fluorescence computational microscopy

Li-Hao Yeh 1,*, Shwetadwip Chowdhury 1, Nicole A Repina 2, Laura Waller 1,2
PMCID: PMC6706021  PMID: 31467796

Abstract

High-content biological microscopy targets high-resolution imaging across large fields-of-view, often achieved by computational imaging approaches. Previously, we demonstrated 2D multimodal high-content microscopy via structured illumination microscopy (SIM) with resolution >2× the diffraction limit, using speckle illumination from Scotch tape. In this work, we extend the method to 3D by leveraging the fact that the speckle illumination is in fact a 3D structured pattern. We use both a coherent and an incoherent imaging model to develop algorithms for joint retrieval of the 3D super-resolved fluorescent and complex-field distributions of the sample. Our reconstructed images resolve features beyond the physical diffraction-limit set by the system’s objective and demonstrate 3D multimodal imaging with 0.6×0.6×6 μm³ resolution over a volume of 314×500×24 μm³.

1. Introduction

High-content optical microscopy is a driving force for large-scale biological study in fields such as drug discovery and systems biology. With fast imaging speeds over large fields-of-view (FOV) and high spatial resolutions [1–8], one can visualize rare cell phenotypes and dynamics. The traditional solution for 2D high-content microscopy is to mechanically scan samples through the limited FOV of a high-NA (i.e. high resolution) imaging objective and then digitally stitch the images together. However, this scheme is limited in imaging speed due to the large-distance translations of the sample, as well as the need for auto-refocusing at each position [9]. These issues are further compounded when extending this high-content imaging strategy to 3D.

Recently, computational imaging has demonstrated efficient strategies for high-content 2D microscopy. In contrast with slide scanning, these strategies often employ a low-NA imaging objective to acquire low-resolution (large-FOV) measurements, then use computational techniques like synthetic aperture [10–12] and super-resolution (SR) [13–18] to digitally reconstruct a high-resolution image. This eliminates the requirement for large-distance mechanical scanning in high-content imaging, which results in faster acquisition and more cost-effective optical setups, while also relaxing the sample’s auto-refocusing requirements due to the low-NA objective’s longer depth-of-field (DOF) [19–36]. Examples of such approaches include lensless microscopy [19–21] and Fourier ptychography [22–28] for coherent absorption and quantitative phase imaging. For incoherent fluorescent imaging, micro-lenslet arrays [29–32], Talbot plane scanning [33–35], diffuse media [36], or meta-surfaces [37] have also been demonstrated. Among these examples, 3D high-content imaging capability has only been demonstrated in the coherent imaging context (quantitative phase and absorption) by Fourier ptychography [25, 27].

Our previous work demonstrated multimodal coherent (quantitative phase) and incoherent (fluorescence) imaging for high-content 2D microscopy [38]. Multimodal imaging is important for biological studies requiring cross-correlative analysis [39–43]. Structured illumination microscopy (SIM) [10, 16, 17, 44] with speckle illumination [36, 45–53] was used to encode 2D SR quantitative phase and fluorescence. However, because propagating speckle contains 3D features, it also encodes 3D information. Considering speckle patterns as random interference of multiple angled plane waves, the scattered light from interactions with the sample carries 3D phase (coherent) information, similar to the case of non-random angled illumination in diffraction tomography [54–57] and 3D Fourier ptychography [25, 27]. Simultaneously, the fluorescent (incoherent) light excited by the 3D speckle pattern encodes 3D SR fluorescence information as in the case of 3D SIM [58]. Combining these, we propose a method for 3D SR quantitative phase and fluorescence microscopy using speckle illumination.

Experimentally, we position a Scotch tape patterning element just before the sample, mounted on a translation stage to generate a translating speckle field that illuminates the sample (Fig. 1). Because the speckle grain size is smaller than the PSF of the low-NA imaging objective (which provides large-FOV), the coherent scattered light from the speckle-sample interaction encodes 3D SR quantitative phase information. In addition to lateral scanning of the Scotch tape, axial sample scanning is necessary to efficiently capture 3D SR fluorescence information. Nonlinear optimization methods based on the 3D coherent beam propagation model [25, 59–61] and the 3D incoherent imaging model [58] were formulated to reconstruct the 3D speckle field and imaging system aberrations, which are subsequently used to reconstruct the sample’s 3D SR quantitative phase and fluorescence distributions. Since the Scotch tape is directly before the sample, the illumination NA is not limited by the objective lens, allowing for >2× lateral resolution gain across the entire FOV. This framework enables us to achieve 3D imaging at sub-micron lateral resolution and micron axial resolution across a half-millimeter FOV.

Fig. 1.

Fig. 1

3D multimodal structured illumination microscopy (SIM) with laterally translating Scotch tape as the patterning element. The coherent arm (Sensor-C1 and Sensor-C2) simultaneously captures images with different defocus at the laser illumination wavelength (λex = 532 nm), used for both 3D phase retrieval and speckle trajectory calibration. The incoherent (fluorescence) arm (Sensor-F) captures low-resolution raw fluorescence acquisitions at the emission wavelength (λem = 605 nm) for 3D fluorescence super-resolution reconstruction. OBJ: objective, DM: dichroic mirror, SF: spectral filter, ND-F: neutral-density filter.

2. Theory

We start from the concept of 3D coherent and incoherent transfer functions (TFs), using the Born (weak scattering) assumption [54], to analyze the information encoding process. We then lay out our 3D coherent and incoherent imaging models and derive the corresponding inverse problems to extract SR quantitative phase and fluorescence from the measurements.

First, we introduce linear space-invariant relationships between raw measurements and 3D coherent scattering and incoherent fluorescence [54, 58, 62, 63], by invoking the Born (weak scattering) approximation [54]. These relationships enable us to define TFs for the coherent and incoherent imaging processes. The supports of these TFs in 3D Fourier space determine how much spatial frequency content of the sample can be passed through the system (i.e. the 3D diffraction-limited resolution).

In a coherent imaging system with on-axis plane-wave illumination, the TF describes the relationship between the sample’s scattering potential and the measured 3D scattered field, taking the shape of a spherical cap in 3D Fourier space (Fig. 2(a)). In an incoherent imaging system, the TF is the autocorrelation of the coherent system’s TF [63], relating the sample’s fluorescence distribution to the 3D measured intensity. It takes the shape of a torus (Fig. 2(b)). The spatial frequency bandwidths of these TFs are summarized in Table 1, where the lateral resolution of the system is proportional to the lateral bandwidth of the TF. The incoherent TF has 2× greater lateral bandwidth than the coherent TF. Axial bandwidth generally depends on the lateral spatial frequency, so axial resolution is specified in terms of the best case. Note that the axial bandwidth of the coherent TF is zero, which means there is zero axial resolution for coherent imaging; hence the poor depth sectioning ability in 3D holographic imaging [41, 56, 64].
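The doubled lateral support of the incoherent TF can be checked numerically: the incoherent OTF is the autocorrelation of the coherent pupil, so its cutoff sits at twice the coherent cutoff. A minimal 2D sketch (the grid size and cutoff are arbitrary demo values, not the paper's parameters):

```python
import numpy as np

n = 256
u = np.fft.fftfreq(n)                        # spatial frequency grid (cycles/pixel)
ux, uy = np.meshgrid(u, u)
rho = np.sqrt(ux**2 + uy**2)

cutoff = 0.1                                 # coherent pupil cutoff (demo value)
pupil = (rho <= cutoff).astype(float)        # coherent TF support: a disk

# Incoherent OTF = autocorrelation of the coherent pupil (Wiener-Khinchin, circular)
psf = np.fft.ifft2(pupil)                    # coherent PSF
otf = np.fft.fft2(np.abs(psf)**2).real       # incoherent OTF (up to a scale factor)

coh_cut = rho[pupil > 0].max()
inc_cut = rho[otf > 1e-6 * otf.max()].max()
print(coh_cut, inc_cut)                      # inc_cut ≈ 2 * coh_cut
```

The same autocorrelation argument in 3D turns the spherical-cap coherent TF into the torus-shaped incoherent TF of Fig. 2(b).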

Fig. 2.

Fig. 2

3D coherent and incoherent transfer function (TF) analysis of the SIM imaging process. The 3D (a) coherent and (b) incoherent TFs of the detection system are cross-correlated with the 3D Fourier support of the (c) illumination speckle field and (d) illumination intensity, respectively, resulting in the effective Fourier support of 3D (e) coherent and (f) incoherent SIM. In (e) and (f), we display decomposition of the cross-correlation in two steps: ① tracing the illumination support in one orientation and ② replicating this trace in the azimuthal direction.

Table 1.

Summary of spatial frequency bandwidths

                    Lateral bandwidth                 Axial bandwidth
Coherent TF         2NA_det/λ_ex                      0
Incoherent TF       4NA_det/λ_em                      2(1 − cos θ_det)/λ_em
Illum. field        2NA_illum/λ_ex                    0
Illum. intensity    4NA_illum/λ_ex                    2(1 − cos θ_illum)/λ_ex
3D coherent SIM     (2NA_det + 2NA_illum)/λ_ex        (1 − cos θ_det)/λ_ex + (1 − cos θ_illum)/λ_ex
3D incoherent SIM   4NA_det/λ_em + 4NA_illum/λ_ex     2[(1 − cos θ_det)/λ_em + (1 − cos θ_illum)/λ_ex]

NA_det, NA_illum: the numerical apertures (sin θ) of the detection and illumination systems,

θ_det, θ_illum: the maximal detection and illumination half-angles of light,

λ_ex, λ_em: the wavelengths of the excitation and emission light.
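Plugging the paper's experimental parameters (NA_det = 0.4, λ_ex = 532 nm, λ_em = 605 nm, from Section 3) into these bandwidth formulas reproduces the native resolution and depth-of-field figures quoted in Section 3.1. A quick check, taking full-pitch resolution as 2 / bandwidth:

```python
import numpy as np

na_det, lam_ex, lam_em = 0.4, 0.532, 0.605   # detection NA and wavelengths (μm)
theta_det = np.arcsin(na_det)

res_coh = 2 / (2 * na_det / lam_ex)          # coherent lateral resolution: λ_ex/NA_det
res_inc = 2 / (4 * na_det / lam_em)          # incoherent lateral: λ_em/(2 NA_det)
dof_inc = 2 / (2 * (1 - np.cos(theta_det)) / lam_em)   # fluorescence depth-of-field

print(res_coh, res_inc, dof_inc)             # ≈ 1.33 μm, 0.76 μm, 7.2 μm
```

These match the 1.33 μm, 760 nm, and ∼7.3 μm values quoted in Section 3.1 (the DOF agrees to within rounding).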

SIM enhances resolution by creating beat patterns. When a 3D structured pattern modulates the sample, the sample’s sub-diffraction features create lower-frequency beat patterns which can be directly measured and used to reconstruct a SR image of the sample via post-processing [17, 58]. This process is generally applicable to both coherent and incoherent imaging [40–43], enabling 3D SR multimodal imaging. Mathematically, a modulation between the sample contrast and the illumination pattern in real space can be interpreted as a convolution in Fourier space. This convolution result is then passed through the 3D TF defined in Fig. 2(a,b). The effective support of information going into the measurements can be estimated by conducting cross-correlations between the 3D TFs and the Fourier content of the illumination patterns, as shown in Fig. 2(c,e) and 2(d,f) for coherent and incoherent systems, respectively. The lateral and axial spatial frequency bandwidths of both the illumination and the 3D SIM Fourier supports for coherent and incoherent imaging are summarized in Table 1. Assuming approximately equal excitation and emission wavelengths, the achievable lateral resolution gain of 3D SIM (ratio between lateral bandwidths of 3D SIM and 3D TF) is (NA_det + NA_illum)/NA_det for both coherent and incoherent imaging. Axially, coherent SIM builds up spatial frequency bandwidth from zero, and incoherent SIM achieves an axial resolution gain of (2 − cos θ_det − cos θ_illum)/(1 − cos θ_det).

In this work, because the Scotch tape does not pass through an objective, it is able to create high-resolution speckle illumination such that NA_illum > NA_det, enabling >2× lateral resolution gain without sacrificing FOV [38]. From the TF analysis, we also see that information beyond the diffraction limit in the axial dimension is obtainable. The next sections outline our computational scheme for 3D SR phase and fluorescence reconstruction. To provide higher-quality reconstructions and more robust operation, our algorithm jointly estimates the illumination speckle field, the system pupil function (aberrations), the sample’s 3D transmittance function, and the sample’s 3D fluorescence distribution.

2.1. 3D super-resolution phase imaging

We adopt a multi-slice coherent scattering model to describe the 3D multiple-scattering process [25, 5961] and solve for 3D SR quantitative phase. Our system captures intensity at two focus planes, zc1 and zc2, for every speckle-scanned point [38]. With these measurements and the multi-slice model, we are able to reconstruct the sample’s 3D SR complex-field and the scattered field inside the 3D sample, which is used in the fluorescence inverse problem.

2.1.1. Forward model for 3D coherent imaging

Figure 3(a) illustrates the 3D multi-slice coherent imaging model. Plane-wave illumination of the Scotch tape, positioned at the l-th scanned point, rl, creates speckle field pc(rrl), where r=(x,y) is the lateral spatial coordinate. This speckle field propagates a distance Δsl to the sample. The field interacting with the first layer of the sample is described as:

f_{l,1}(r) = C{p_c(r − r_l) ⊗ h_{Δs_l,λ_ex}(r)},  (1)

where h_{z,λ}(r) = F⁻¹{exp(i2πz·√(n₀²/λ² − |u|²))} is the angular spectrum propagation kernel inside a homogeneous medium with refractive index n₀ [65], u = (u_x, u_y) is the spatial frequency coordinate, ⊗ denotes 2D convolution, and C{·} is a cropping operator that selects the part of the speckle field that illuminates the sample. To model scattering and propagation inside the sample, the multi-slice model treats the 3D sample as multiple slices of complex transmittance functions, t_m(r) (m = 1,…,M), where m is the slice index. As the field propagates through each slice, it is first multiplied by the 2D transmittance function at that slice, then propagated to the next slice. The spacing between slices is modeled as a uniform medium of thickness Δz_m. Hence, at each layer we have:

g_{l,m}(r) = f_{l,m}(r)·t_m(r),  m = 1,…,M,
f_{l,m+1}(r) = g_{l,m}(r) ⊗ h_{Δz_m,λ_ex}(r),  m = 1,…,M−1.  (2)
Fig. 3.

Fig. 3

3D multi-slice model: (a) coherent and (b) incoherent imaging models for the interaction between the sample and the speckle field as light propagates through the sample.

After passing through all the slices, the output scattered field, g_{l,M}(r), propagates to the focal plane to form G_l(r) = g_{l,M}(r) ⊗ h_{Δz_{M,l},λ_ex}(r) and is imaged onto the sensor (with defocus z), forming our measured intensity:

I_{c,l}^z(r) = |G_l(r) ⊗ h_c(r) ⊗ h_{z,λ_ex}(r)|²,  l = 1,…,N_img,  z = z_c1, z_c2,  (3)

where h_c(r) is the system’s coherent point spread function (PSF). The measured intensity subscripts c and l denote indices for the coherent imaging channel and acquisition number, respectively, and N_img is the total number of translations of the Scotch tape. Note that all the slice spacings, Δz_m, are independent of the axial scan position, Δs_l, except for the distance to the focal plane, Δz_{M,l} = Δs_l + z_0, where z_0 is the distance from the last layer of the sample to the focal plane (before axial scanning). As the sample is scanned, we account for this shift by propagating an extra distance back to the focal plane.
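A compact numerical sketch of this multi-slice forward model, Eqs. (1)–(2) without the cropping and detection-PSF steps: angular-spectrum propagation between slices, with pointwise multiplication by each slice transmittance (grid size, pixel pitch, and slice count are arbitrary demo values):

```python
import numpy as np

def propagate(field, z, lam, n0=1.0, dx=0.1):
    """Angular-spectrum propagation over distance z (lengths in μm)."""
    n = field.shape[0]
    u = np.fft.fftfreq(n, d=dx)
    ux, uy = np.meshgrid(u, u)
    arg = (n0 / lam) ** 2 - ux**2 - uy**2
    # propagating waves get a phase ramp; evanescent components are dropped
    kernel = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def multislice_forward(speckle, slices, dz, lam):
    """Eq. (2): multiply by each slice transmittance, then propagate to the next."""
    field = speckle
    for m, t in enumerate(slices):
        field = field * t                    # interaction with slice m
        if m < len(slices) - 1:
            field = propagate(field, dz, lam)
    return field                             # exit field g_{l,M}

rng = np.random.default_rng(0)
speckle = np.exp(2j * np.pi * rng.random((64, 64)))               # toy incident speckle
slices = [np.exp(0.1j * rng.random((64, 64))) for _ in range(3)]  # weak phase slices
out = multislice_forward(speckle, slices, dz=1.7, lam=0.532)
print(out.shape)                             # (64, 64)
```

The full model of Eq. (3) additionally convolves the exit field with the pupil and defocus kernels before taking the squared magnitude.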

2.1.2. Inverse problem for 3D coherent imaging

We take the intensity measurements from both coherent cameras, {I_{c,l}^z(r) | z = z_c1, z_c2}, and the scanning trajectory, r_l (calculated via standard rigid-body 2D registration [38, 66]), as inputs to jointly estimate the sample’s 3D SR transmittance functions, t_1(r),…,t_M(r), as well as the illumination complex-field, p_c(r), and the system’s coherent PSF, h_c(r), including aberrations.
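The trajectory r_l comes from standard rigid-body 2D registration of the raw speckle images [38, 66]. A minimal sketch of one common approach, locating the peak of the circular cross-correlation (integer-pixel shifts only; the paper's actual registration implementation may differ):

```python
import numpy as np

def register_shift(ref, moved):
    """Estimate the integer-pixel rigid shift of `moved` relative to `ref`
    from the peak of their circular cross-correlation."""
    xcorr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # wrap each shift into the range [-n/2, n/2)
    return tuple(int(p) if p < s // 2 else int(p) - s for p, s in zip(peak, xcorr.shape))

rng = np.random.default_rng(3)
img = rng.random((64, 64))                     # stand-in for a speckle image
shifted = np.roll(img, (5, -3), axis=(0, 1))   # known rigid translation
print(register_shift(img, shifted))            # (5, -3)
```

Sub-pixel refinement (e.g. by upsampled cross-correlation) is typically added on top of this integer-pixel estimate.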

Based on the forward model in the previous section, we formulate the inverse problem as:

minimize_{t_1,…,t_M, p_c, h_c}  e_c(t_1,…,t_M, p_c, h_c) = Σ_{l,z} e_{c,l}^z(t_1,…,t_M, p_c, h_c),
where e_{c,l}^z(t_1,…,t_M, p_c, h_c) = Σ_r | √(I_{c,l}^z(r)) − |G_l(r) ⊗ h_c(r) ⊗ h_{z,λ_ex}(r)| |².  (4)

Here we adopt an amplitude-based cost function, ec, which minimizes the difference between the measured and estimated coherent amplitude in the presence of noise [67]. In order to solve this optimization problem, we use a sequential gradient descent algorithm [67, 68]. The gradient based on each single measurement is calculated and used to update the sample’s transmittance function, illumination speckle field, and coherent PSF. A whole iteration of variable updates is complete after running through all the measurements. In Appendix A, we provide a detailed derivation of the gradients and in Appendix B we lay out our reconstruction algorithm.
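The sequential update can be illustrated on a drastically simplified toy problem: a 1D pointwise model I = |t·p|² with known speckle fields, no propagation, and no PSF, in which only the transmittance magnitude is recoverable. This is not the paper's algorithm, only a sketch of its mechanics (amplitude residual, conjugate-weighted gradient, one measurement per update):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
t_true = (0.7 + 0.3 * rng.random(n)) * np.exp(1j * rng.random(n))  # unknown sample
speckles = [np.exp(2j * np.pi * rng.random(n)) + 0.5 for _ in range(20)]
meas = [np.abs(t_true * p) for p in speckles]      # amplitude measurements

t = np.ones(n, dtype=complex)                      # transparent-sample initialization
step = 0.2
for epoch in range(50):
    for p, a in zip(speckles, meas):               # sequential: one measurement per update
        G = t * p                                  # toy forward model
        resid = a - np.abs(G)                      # amplitude residual
        grad = -(resid * G / np.abs(G)) * np.conj(p)   # d e / d conj(t)
        t = t - step * grad / np.max(np.abs(p)) ** 2

err = np.linalg.norm(np.abs(t) - np.abs(t_true)) / np.linalg.norm(np.abs(t_true))
print(err < 0.05)                                  # True: magnitude recovered
```

In the actual reconstruction, the same pattern is applied with the full multi-slice gradients of Appendix A, whose propagation operators mix pixels and thereby also constrain the phase.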

2.2. 3D super-resolution fluorescence imaging

Reconstruction of 3D SR images for the fluorescence channel involves an incoherent multi-slice forward model (Fig. 3(b)) and a joint inverse problem solver. The coherent result provides a good starting estimate of the 3D speckle intensity throughout the sample, which, together with the fluorescent channel’s raw data, is used to reconstruct the sample’s 3D SR fluorescence distribution and the system’s aberrations at the emission wavelength, λem.

2.2.1. Forward model for 3D fluorescence imaging

The 3D fluorescence distribution is also modeled as multiple slices of 2D distributions, o_m(r) (m = 1,…,M), as shown in Fig. 3(b). Each layer is illuminated by the m-th layer’s excitation intensity, |f_{l,m}(r)|², for Scotch tape position r_l. The excited fluorescent light is mapped onto the sensor through 2D convolutions with the incoherent PSF at different defocus distances, z_{m,l}. The sum of contributions from the different layers forms the measured fluorescence intensity:

I_{f,l}(r) = Σ_{m=1}^{M} [o_m(r)·|f_{l,m}(r)|²] ⊗ |h_{f,z_{m,l}}(r)|²,  l = 1,…,N_img,  (5)

where h_{f,z_{m,l}}(r) is the coherent PSF at defocus distance z_{m,l}, which can be further decomposed as h_{f,z_{m,l}}(r) = h_f(r) ⊗ h_{z_{m,l},λ_em}(r), where h_f(r) is the system’s in-focus coherent PSF at λ_em. The incoherent PSF is the squared magnitude of the coherent PSF at λ_em. The subscript f denotes the fluorescence channel, and the defocus distance, z_{m,l}, depends on the axial scan position, Δs_l.
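Eq. (5) amounts to, per layer, a pointwise product of fluorescence and excitation intensity followed by a 2D convolution with that layer's defocused incoherent PSF, summed over layers. A toy sketch with hypothetical two-layer inputs (a delta PSF for the in-focus layer and a uniform blur standing in for strong defocus):

```python
import numpy as np

def conv2_fft(a, b):
    """Circular 2D convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def fluorescence_forward(o_layers, exc_layers, ipsf_layers):
    """Eq. (5): I = sum_m (o_m * |f_m|^2) conv ipsf_m."""
    return sum(conv2_fft(o * exc, ipsf)
               for o, exc, ipsf in zip(o_layers, exc_layers, ipsf_layers))

n = 32
delta = np.zeros((n, n)); delta[0, 0] = 1.0        # in-focus incoherent PSF (identity)
blur = np.full((n, n), 1.0 / n**2)                 # heavily defocused layer PSF
o1 = np.zeros((n, n)); o1[16, 16] = 1.0            # emitter in the in-focus layer
o2 = np.zeros((n, n)); o2[8, 8] = 1.0              # emitter in the defocused layer
I = fluorescence_forward([o1, o2], [np.ones((n, n))] * 2, [delta, blur])
print(I[16, 16])                                   # ≈ 1.001: sharp peak plus a
                                                   # uniform defocus background
```

The structured excitation |f_{l,m}|² (uniform here for simplicity) is what encodes the super-resolved fluorescence information in the real system.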

2.2.2. Inverse problem for 3D fluorescence imaging

The fluorescence inverse problem takes as input the raw fluorescence intensity measurements, If,l(r), the registered scanning trajectory, rl, and the 3D estimates from the coherent model, in order to estimate the sample’s 3D SR fluorescence distribution and aberrations at the emission wavelength. We also refine the speckle field estimate using the fluorescence measurements.

Based on the incoherent forward model, our 3D SR fluorescence inverse problem is:

minimize_{o_1,…,o_M, p_c, h_f}  e_f(o_1,…,o_M, p_c, h_f) = Σ_l e_{f,l}(o_1,…,o_M, p_c, h_f),
where e_{f,l}(o_1,…,o_M, p_c, h_f) = Σ_r | I_{f,l}(r) − Σ_{m=1}^{M} [o_m(r)·|f_{l,m}(r)|²] ⊗ |h_{f,z_{m,l}}(r)|² |²,  (6)

where ef is the cost function. Similar to the coherent inverse problem, we adopt a sequential gradient descent algorithm for estimation of each unknown variable. The detailed derivation of gradients and algorithm implementation are summarized in Appendix A and B, respectively.

3. Experimental results

Figure 1 shows the experimental setup. A green laser beam (BeamQ, 532 nm, 200 mW) is collimated through a single lens and illuminates the layered Scotch tape element, creating a speckle pattern at the sample. The number of layers of Scotch tape sets the degree of scattering; we use 16 layers here. The layered Scotch tape and the sample are mounted on a 3-axis closed-loop piezo-stage (Thorlabs, MAX311D) and a 1-axis open-loop piezo-stage (Thorlabs, NFL5DP20), respectively, to enable lateral speckle scanning and axial sample scanning. The separation between the tape and the sample is approximately 1 mm, which is the minimal distance we can achieve for high-angle and high-power illumination without physically touching the sample. The transmitted diffracted and fluorescent light from the sample then travels through the subsequent 4f system formed by the objective lens (Nikon, CFI Achro 20×, NA=0.4) and a tube lens. The coherent and fluorescent light have different wavelengths and are optically separated by a dichroic mirror (Thorlabs, DMLP550R), after which the fluorescence is further spectrally filtered before being imaged onto Sensor-F (PCO.edge 5.5). The coherent light is ND-filtered and then split by a beam-splitter onto two sensors (FLIR, BFS-U3-200S6M-C). Sensor-C1 is in focus, while Sensor-C2 is defocused by 3 mm, enabling efficient phase retrieval across a broad swath of spatial frequencies, according to the phase transfer function [69].

Successful reconstruction relies on appropriate choices for the scanning range and step size [38]. Generally, the translation step size should be 2-3× smaller than the targeted resolution and the total translation range should be larger than the diffraction-limited spot size of the original system. Our system has detection NA of 0.4 and targeted resolution of 500 nm, so a 36 × 36 Cartesian scanning path with a step size of 180 nm is appropriate for 2D SR reconstruction. For coherent imaging, since there is zero axial bandwidth in the coherent TF (Fig. 2(a)), the sample’s complete diffraction information is projected axially and encoded in the measurement. This enables SR reconstruction of the sample’s 3D quantitative phase from just the translating speckle. Incoherent imaging, however, has optical sectioning due to its torus-shaped TF (Fig. 2(b)); hence, fluorescent light that is outside the DOF of the objective will have weak contrast. In order to reconstruct 3D fluorescence with high fidelity, we add axial scanning to our acquisition scheme [58].

A direct combination of lateral xy-scanning of the speckle and axial z-scanning of the sample would result in 36×36×N_z measurements for both channels, where N_z is the number of axial scan positions. Fortunately, there is a high degree of redundancy in this data. As previously stated, the 3D coherent information does not require axial scanning, and the speckle pattern measured from the coherent channel is used to initialize the fluorescent reconstruction. Thus, only minor refinements are needed for a faithful fluorescent reconstruction.

To save acquisition time, we use an interleaved scanning scheme, alternating between axial sample scanning and lateral speckle scanning (Fig. 1). We laterally scan the speckle pattern through 36 × 36 xy positions, incrementing the z position once per block of 12 × 12 lateral positions; that is, the 36 × 36 Cartesian speckle scanning path is divided into 9 blocks of 12 × 12 sub-scanning paths, each associated with one z-scan position. The distance from the incident speckle field to the sample is then

Δs_l = (n − 1)s,  for l = 12²(n − 1) + 1, …, 12²n,  where n = 1, …, 9,  (7)

where s is the axial step size and n is the index for different z planes. We set the fifth z-scan position as the middle of the sample. The total scanning range is roughly the thickness of the sample and the step size is at least 2× smaller than the Nyquist-limited axial resolution of the fluorescence microscope. This interleaving measurement scheme enables high quality coherent and fluorescent 3D SR reconstructions.
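The schedule of Eq. (7) is easy to generate programmatically; a sketch with the paper's 36×36 lateral scan, 12×12 blocks, and 9 z-planes (a unit axial step s = 1 μm is assumed for illustration):

```python
def scan_schedule(n_xy=36, block=12, s=1.0):
    """Eq. (7): Δs_l = (n − 1)s for l in [12²(n−1)+1, 12²n], n = 1,…,9.
    The 36×36 lateral scan splits into blocks of 12×12, one z-plane per block."""
    per_block = block * block
    return [((l - 1) // per_block) * s for l in range(1, n_xy * n_xy + 1)]

ds = scan_schedule()
print(len(ds), ds[0], ds[-1])                      # 1296 0.0 8.0
```

Centering the fifth z-plane on the middle of the sample, as described above, is then a fixed offset applied to all Δs_l.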

3.1. 3D super-resolution demonstration

With a 0.4 NA objective, our system’s native lateral resolution is 1.33 μm for coherent imaging and 760 nm for fluorescence (Table 1). The intrinsic DOF is infinite for coherent imaging and 7.3 μm for fluorescence imaging. To characterize the resolution capability of our method, we begin by imaging a sample with features below both diffraction limits: a monolayer of fluorescent polystyrene microspheres with 700 nm diameter. We use a z-scan step size of 1 μm across an 8 μm range, fully covering the thickness of the sample. 15 axial layers are assigned to the transmittance function, separated by 1.7 μm based on Nyquist sampling of the expected axial resolution of our 3D reconstruction, resulting in an overall reconstructed axial range that spans the sum of the axial scanning range and the two axial ranges of the effective 3D PSF.

Figure 4 shows that our 3D reconstructions (400 × 400 × 15 voxels with voxel size of 0.096×0.096×1.7 μm³) clearly resolve the sub-diffraction individual microspheres and demonstrate better sectioning ability in both coherent and fluorescent channels compared to standard widefield imaging (without deconvolution). In the reconstruction, the average lateral peak-to-peak distance of these microspheres is around 670 nm, smaller than the nominal microsphere diameter, likely due to vertically staggered stacking of the microspheres. Given that our lateral resolution is at least 670 nm, we do break the lateral diffraction limit for both coherent and fluorescent channels, and the coherent channel achieves >2× lateral resolution improvement. Axially, we demonstrate 6 μm resolution for both channels, beyond the axial diffraction limit in each case; the coherent channel improves from no sectioning ability to 6 μm axial resolution. Given the 670 nm lateral resolution in the coherent channel, we can deduce that the illumination NA of the speckle is >0.4, which suggests the speckle intensity grain size is smaller than 670 nm.

Fig. 4.

Fig. 4

3D multimodal (fluorescence and phase) SIM reconstruction compared to widefield fluorescence and coherent intensity images for 700 nm fluorescent microspheres. Resolution beyond the system’s diffraction limit is achieved in both the (a) coherent and (b) fluorescent arms.

3.2. 3D large-FOV multimodal demonstration

Next, we use the same setup to demonstrate 3D multimodal imaging over our full sensor area (FOV ∼314 μm × 500 μm). As shown previously, our method achieves ∼0.6×0.6×6 μm³ resolution for both fluorescence and phase imaging over an axial range of ∼24 μm, corresponding to ∼14 Megavoxels of information. Our experimental system is only a prototype; the technique is scalable to the Gigavoxel range with a higher-throughput objective and higher illumination NA.

Figure 5 shows the full-sensor 3D quantitative phase and fluorescence reconstructions (5200×3280×15 voxels with voxel size of 0.096×0.096×1.7 μm³) of a multi-size sample (mixed 2 μm and 4 μm fluorescent and 3 μm non-fluorescent polystyrene microspheres). We adopt the same z-scan step size and number of slices as in Fig. 4. Zoom-ins on two regions of interest (ROIs) display 4 axial layers each. The arrows highlight 2 μm fluorescent microspheres, which defocus more quickly than the larger ones. The locations of the fluorescent microspheres match well in both channels. However, at some locations in the fluorescence reconstruction the 4 μm microspheres collapse because the immersion medium dissolves the beads over time.

Fig. 5.

Fig. 5

Reconstructed 3D multimodal (fluorescence and phase) large-FOV for mixed 2 μm, 4μm fluorescent and 3 μm non-fluorescent polystyrene microspheres. Zoom-ins for two ROIs show fluorescence and phase at different depths.

Finally, we demonstrate our technique on human colorectal adenocarcinoma (HT-29) cells fluorescently tagged with AlexaFluor phalloidin (5200×3280×18 voxels with voxel size of 0.096×0.096×1.7 μm³), which labels F-actin filaments (sample preparation details in Appendix C). We use a z-scan step size of 1.6 μm across a 12.8 μm range and reconstruct 19 axial layers, separated by 1.7 μm. Figure 6 shows the full-sensor 3D quantitative phase and fluorescence reconstructions, with zoom-ins on 2 ROIs. The sample’s morphological features, as visualized with quantitative phase, match well with the F-actin visualization of the fluorescent channel. This is expected since F-actin filaments are generally known to encapsulate the cell body.

Fig. 6.

Fig. 6

Reconstructed 3D multimodal (fluorescence and phase) large-FOV imaging for HT-29 cells (See Visualization 1 (14.4MB, avi) and Visualization 2 (8.9MB, avi) ). Zoom-ins for two ROIs show fluorescence and phase at different depths. The blue arrows in two ROIs indicate two-layer cell clusters that come in and out of focus. The orange arrows indicate intracellular components, including higher-phase-contrast lipid vesicles at z = −5.1 μm, nucleolus at z = 0, as well as the cell nucleus and cell-cell membrane contacts.

4. Discussion

Unlike traditional 3D SIM or 3D quantitative phase methods, which use expensive spatial light modulators (SLMs) [70, 71] or galvanometer/MEMS mirrors [57, 72, 73], our technique is relatively simple and inexpensive. Layered Scotch tape efficiently creates speckle patterns with NA_illum > 0.4, which is hard to achieve with traditional patterning approaches to high-content imaging (e.g. lenslet arrays or grating masks [29–35]). Furthermore, the random structured illumination conveniently multiplexes both phase and fluorescence information into the system’s aperture, enabling multimodal 3D SR.

One limitation of our technique is that the fluorescent reconstruction relies on the recovered 3D speckle from the coherent imaging channel, so mismatch between the two channels can result in artifacts that degrade resolution. Indeed, the SR gain we achieve experimentally in the fluorescent channel does not match that achieved in the coherent channel. We attribute this mainly to mismatch in axial alignment between the coherent and fluorescent cameras, since the long DOF of the objective made it difficult to axially align the cameras to within the axial resolution limit of the high-resolution speckle pattern. In addition, our 3D coherent reconstruction suffers from coherent noise due to system instabilities during the acquisition process. Specifically, 3D phase information is encoded into the speckle-like (high dynamic range) features within the measurements, which are affected by Poisson noise. These factors reduce performance in both the 3D phase and fluorescence reconstructions.

Another limitation is the relatively long acquisition time: 1200 translations of the Scotch tape take 180 seconds (without hardware optimization). The number of acquisitions could potentially be reduced by further exploiting the redundancy in the data, which would also reduce the computational processing time for the reconstruction, currently ∼6 hours per 40×40 μm² patch on an NVIDIA TITAN Xp GPU with MATLAB. Cloud computing could also parallelize the reconstruction over patches.

5. Conclusion

We have presented a 3D SIM multimodal (phase and fluorescence) technique using Scotch tape as the patterning element. The Scotch tape efficiently generates high-resolution 3D speckle patterns over a large volume, which multiplexes 3D super-resolution phase and fluorescence information into our low-NA imaging system. A computational optimization algorithm based on 3D coherent and incoherent imaging models is developed to both solve the inverse problem and self-calibrate the unknown 3D random speckle illumination and the system’s aberrations. The result is 3D sub-diffraction fluorescence reconstruction and 3D sub-diffraction phase reconstruction with >2× lateral resolution enhancement. The method is potentially scalable for Gigavoxel imaging.

Appendix A: Gradient derivation

A.1. Vectorial notation

To derive the gradients for the multivariate optimization problems in Eqs. (4) and (6), it is convenient to re-express our 3D coherent and fluorescent models in linear-algebra vectorial notation, which we do in the following sections.

According to Eqs. (1) and (2), we can re-express the multi-slice scattering model in vectorial form as

f_{l,1} = H_{Δs_l,λ_ex} Q S_l p_c,
g_{l,m} = diag(f_{l,m}) t_m,  m = 1,…,M,
f_{l,m+1} = H_{Δz_m,λ_ex} g_{l,m},  m = 1,…,M−1,
G_l = H_{Δz_{M,l},λ_ex} g_{l,M},  (8)

where the boldface symbols are the vectorial representations of the corresponding 2D variables in the original model, except for the cropping operator Q, the shifting operator S_l, which shifts the speckle pattern by r_l, and the defocus convolution operator, expressed as

H_{z,λ} = F⁻¹ diag(h̃_{z,λ}) F,  (9)

where F and F⁻¹ are the Fourier and inverse Fourier transform operators, respectively, and h̃_{z,λ} is the vectorized coherent TF for propagation distance z and wavelength λ. With all these operators defined in vectorial form, we rewrite the coherent and fluorescence intensities as

I_{c,l}^z = |H_c H_{z,λ_ex} G_l|²,  I_{f,l} = Σ_{m=1}^{M} K_{z_{m,l}} diag(|f_{l,m}|²) o_m,  (10)

where H_c is also a convolution operator of the form in Eq. (9), with the TF vector h̃_{z,λ} replaced by the pupil vector h̃_c, and

K_{z_{m,l}} = F⁻¹ diag( F |F⁻¹ diag(h̃_{z_{m,l},λ_em}) h̃_f|² ) F  (11)

is the convolution operation with the incoherent TF at zm,l.
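The operator notation H = F⁻¹ diag(h̃) F simply states that each convolution is diagonalized by the Fourier transform. A small 1D numerical check that the operator form and the explicit dense-matrix form agree (random test vectors, arbitrary size):

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
h_tilde = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # TF samples
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)         # test vector

# Operator form: H x = F^{-1} diag(h̃) F x, i.e. filtering in the Fourier domain
Hx_op = np.fft.ifft(h_tilde * np.fft.fft(x))

# Explicit dense-matrix form of the same operator
F = np.fft.fft(np.eye(n))
H = np.linalg.inv(F) @ np.diag(h_tilde) @ F
Hx_mat = H @ x

print(np.allclose(Hx_op, Hx_mat))                                # True
```

In practice only the FFT-based operator form is ever evaluated; the matrix form appears in the derivations below because its adjoint and gradient expressions are easy to write down.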

Next we use this vectorial model to represent the coherent and fluorescent cost functions for a single intensity measurement as

e_{c,l}^z(t_1,…,t_M, p_c, h̃_c) = (e_{c,l}^z)ᵀ e_{c,l}^z = ‖√(I_{c,l}^z) − |H_c H_{z,λ_ex} G_l|‖₂²,
e_{f,l}(o_1,…,o_M, p_c, h̃_f) = (e_{f,l})ᵀ e_{f,l} = ‖I_{f,l} − Σ_{m=1}^{M} K_{z_{m,l}} diag(|f_{l,m}|²) o_m‖₂²,  (12)

where e_{c,l}^z = √(I_{c,l}^z) − |H_c H_{z,λ_ex} G_l| and e_{f,l} = I_{f,l} − Σ_{m=1}^{M} K_{z_{m,l}} diag(|f_{l,m}|²) o_m are the coherent and fluorescent cost vectors, respectively.

A.2. Gradient derivation

The following derivation is based on CR calculus and is similar to the derivations in our previous work [38, 67].

A.2.1. Gradient derivation for 3D coherent imaging

To optimize Eq. (4) for t_1,…,t_M, p_c, and h̃_c, we take the derivative of the coherent cost function with respect to each variable. We first express the gradients with respect to the transmittance function vectors, t_1,…,t_M, as

$$
\nabla_{\mathbf{t}_m}\varepsilon_{c,l}^{z} = \left(\frac{\partial \varepsilon_{c,l}^{z}}{\partial \mathbf{g}_{l,m}}\frac{\partial \mathbf{g}_{l,m}}{\partial \mathbf{t}_m}\right)^{\dagger} = \operatorname{diag}(\overline{\mathbf{f}_{l,m}})\left(\frac{\partial \varepsilon_{c,l}^{z}}{\partial \mathbf{g}_{l,m}}\right)^{\dagger} = \operatorname{diag}(\overline{\mathbf{f}_{l,m}})\,\mathbf{v}_{l,m}, \tag{13}
$$

where

$$
\begin{aligned}
\mathbf{v}_{l,M} &= \left(\frac{\partial \varepsilon_{c,l}^{z}}{\partial \mathbf{g}_{l,M}}\right)^{\dagger} = -\mathbf{H}_{\Delta z_{M,l},\lambda_{ex}}^{\dagger}\mathbf{H}_{z,\lambda_{ex}}^{\dagger}\mathbf{H}_c^{\dagger}\operatorname{diag}\!\left(\frac{\mathbf{H}_c\mathbf{H}_{z,\lambda_{ex}}\mathbf{G}_l}{\left|\mathbf{H}_c\mathbf{H}_{z,\lambda_{ex}}\mathbf{G}_l\right|}\right)\mathbf{e}_{c,l}^{z}, \\
\mathbf{v}_{l,m} &= \left(\frac{\partial \varepsilon_{c,l}^{z}}{\partial \mathbf{g}_{l,m+1}}\frac{\partial \mathbf{g}_{l,m+1}}{\partial \mathbf{g}_{l,m}}\right)^{\dagger} = \mathbf{H}_{\Delta z_m,\lambda_{ex}}^{\dagger}\operatorname{diag}(\overline{\mathbf{t}_{m+1}})\,\mathbf{v}_{l,m+1}, \qquad m = 1,\dots,M-1,
\end{aligned} \tag{14}
$$

are auxiliary vectors used in the intermediate steps of the gradient derivation, $\dagger$ denotes the Hermitian adjoint, and $\overline{(\cdot)}$ denotes complex conjugation. With these auxiliary vectors, it is relatively simple to derive the gradient with respect to the speckle field vector $\mathbf{p}_c$:

$$
\nabla_{\mathbf{p}_c}\varepsilon_{c,l}^{z} = \left(\frac{\partial \varepsilon_{c,l}^{z}}{\partial \mathbf{g}_{l,1}}\frac{\partial \mathbf{g}_{l,1}}{\partial \mathbf{p}_c}\right)^{\dagger} = \mathbf{S}_l^{\dagger}\mathbf{Q}^{\dagger}\mathbf{H}_{\Delta s_l,\lambda_{ex}}^{\dagger}\operatorname{diag}(\overline{\mathbf{t}_1})\,\mathbf{v}_{l,1}. \tag{15}
$$

As for the gradient with respect to the pupil function $\tilde{\mathbf{h}}_c$, we have

$$
\nabla_{\tilde{\mathbf{h}}_c}\varepsilon_{c,l}^{z} = \left(\frac{\partial \varepsilon_{c,l}^{z}}{\partial \tilde{\mathbf{h}}_c}\right)^{\dagger} = -\operatorname{diag}(\overline{\mathbf{F}\mathbf{G}_l})\operatorname{diag}(\overline{\tilde{\mathbf{h}}_{z,\lambda_{ex}}})\,\mathbf{F}\operatorname{diag}\!\left(\frac{\mathbf{H}_c\mathbf{H}_{z,\lambda_{ex}}\mathbf{G}_l}{\left|\mathbf{H}_c\mathbf{H}_{z,\lambda_{ex}}\mathbf{G}_l\right|}\right)\mathbf{e}_{c,l}^{z}. \tag{16}
$$
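Adjoint gradients of this kind are easy to sanity-check against finite differences on a toy problem. The Python sketch below builds a two-slice multi-slice forward model with random unit-modulus stand-ins for the transfer functions and forms the transmittance gradients by the backward recursion of Eqs. (13)–(14); everything here (sizes, TFs, the single-measurement setting, and the omission of $\mathbf{Q}$ and $\mathbf{S}_l$) is an illustrative assumption, not the paper's calibrated model. Note that differentiating the amplitude cost, whose cost vector is $\sqrt{I} - |u|$, introduces a leading minus sign, which we keep explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# random unit-modulus stand-ins for h~_{ds}, h~_{dz1}, h~_{dzM}, and the pupil
Hs, H1, HM, Hc = [np.exp(2j * np.pi * rng.random((n, n))) for _ in range(4)]

def H(x, tf):   # Eq. (9): F^{-1} diag(h~) F
    return np.fft.ifft2(np.fft.fft2(x) * tf)

def Ht(x, tf):  # adjoint H^dagger
    return np.fft.ifft2(np.fft.fft2(x) * np.conj(tf))

p = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # speckle field
t = [np.exp(0.3j * rng.standard_normal((n, n))) for _ in range(2)]  # two slices
I = rng.random((n, n)) + 0.5                                        # toy intensity

def forward(t):
    f1 = H(p, Hs)           # field arriving at slice 1
    g1 = f1 * t[0]
    f2 = H(g1, H1)          # field arriving at slice 2
    g2 = f2 * t[1]
    u = H(H(g2, HM), Hc)    # exit field through the imaging system
    return f1, f2, u

def cost(t):
    u = forward(t)[2]
    return np.sum((np.sqrt(I) - np.abs(u))**2)

def grads(t):
    f1, f2, u = forward(t)
    e = np.sqrt(I) - np.abs(u)
    vM = -Ht(Ht(u / np.abs(u) * e, Hc), HM)      # backpropagated residual v_{l,M}
    v1 = Ht(np.conj(t[1]) * vM, H1)              # recursion of Eq. (14)
    return [np.conj(f1) * v1, np.conj(f2) * vM]  # Eq. (13): diag(conj(f)) v
```

In the CR-calculus convention used here, perturbing the real (imaginary) part of a transmittance pixel changes the cost at a rate equal to twice the real (imaginary) part of the corresponding gradient entry, which a central difference verifies.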

A.2.2. Gradient derivation for 3D fluorescence imaging

To optimize Eq. (6) with respect to $\mathbf{o}_1,\dots,\mathbf{o}_M$, $\mathbf{p}_c$, and $\tilde{\mathbf{h}}_f$, we take the derivative of the fluorescent cost function with respect to each variable. First, the gradients with respect to the fluorescence distribution vectors of the different layers, $\mathbf{o}_1,\dots,\mathbf{o}_M$, are

$$
\nabla_{\mathbf{o}_m}\varepsilon_{f,l} = \left(\frac{\partial \varepsilon_{f,l}}{\partial \mathbf{o}_m}\right)^{\dagger} = -2\operatorname{diag}(|\mathbf{f}_{l,m}|^2)\,\mathbf{K}_{z_{m,l}}^{\dagger}\,\mathbf{e}_{f,l}, \qquad m = 1,\dots,M. \tag{17}
$$

Next, we derive the gradient with respect to the speckle field $\mathbf{p}_c$:

$$
\nabla_{\mathbf{p}_c}\varepsilon_{f,l} = \sum_{m=1}^{M}\left(\frac{\partial \varepsilon_{f,l}}{\partial \mathbf{f}_{l,m}}\frac{\partial \mathbf{f}_{l,m}}{\partial \mathbf{p}_c}\right)^{\dagger} = -2\sum_{m=1}^{M}\left(\frac{\partial \mathbf{f}_{l,m}}{\partial \mathbf{p}_c}\right)^{\dagger}\operatorname{diag}(\mathbf{f}_{l,m})\operatorname{diag}(\mathbf{o}_m)\,\mathbf{K}_{z_{m,l}}^{\dagger}\,\mathbf{e}_{f,l}, \tag{18}
$$

where

$$
\left(\frac{\partial \mathbf{f}_{l,m}}{\partial \mathbf{p}_c}\right)^{\dagger} = \left(\frac{\partial \mathbf{f}_{l,m}}{\partial \mathbf{g}_{l,m-1}}\frac{\partial \mathbf{g}_{l,m-1}}{\partial \mathbf{g}_{l,m-2}}\cdots\frac{\partial \mathbf{g}_{l,2}}{\partial \mathbf{g}_{l,1}}\frac{\partial \mathbf{g}_{l,1}}{\partial \mathbf{p}_c}\right)^{\dagger} = \mathbf{S}_l^{\dagger}\mathbf{Q}^{\dagger}\mathbf{H}_{\Delta s_l,\lambda_{ex}}^{\dagger}\left[\operatorname{diag}(\overline{\mathbf{t}_1})\,\mathbf{H}_{\Delta z_1,\lambda_{ex}}^{\dagger}\right]\cdots\left[\operatorname{diag}(\overline{\mathbf{t}_{m-1}})\,\mathbf{H}_{\Delta z_{m-1},\lambda_{ex}}^{\dagger}\right]. \tag{19}
$$

As for the gradient with respect to the pupil function at the fluorescent wavelength, $\tilde{\mathbf{h}}_f$, we have

$$
\nabla_{\tilde{\mathbf{h}}_f}\varepsilon_{f,l} = -2\sum_{m=1}^{M}\operatorname{diag}(\overline{\tilde{\mathbf{h}}_{z_{m,l},\lambda_{em}}})\,\mathbf{F}\operatorname{diag}\!\left(\mathbf{F}^{-1}\operatorname{diag}(\tilde{\mathbf{h}}_{z_{m,l},\lambda_{em}})\,\tilde{\mathbf{h}}_f\right)\mathbf{F}^{-1}\operatorname{diag}\!\left(\overline{\mathbf{F}\operatorname{diag}(|\mathbf{f}_{l,m}|^2)\,\mathbf{o}_m}\right)\mathbf{F}\,\mathbf{e}_{f,l}. \tag{20}
$$
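Because the fluorescence model of Eq. (10) is linear in the $\mathbf{o}_m$, a gradient of the form of Eq. (17), $-2\operatorname{diag}(|\mathbf{f}|^2)\mathbf{K}^{\dagger}\mathbf{e}$, is straightforward to verify numerically. The Python sketch below uses toy real, DC-normalized OTFs and random speckle intensities; all quantities are illustrative assumptions, not the paper's calibrated ones:

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 16, 2
# toy real, even-symmetric OTFs (normalized to unit DC), standing in for K_{z_{m,l}}
otfs = []
for _ in range(M):
    a = np.abs(np.fft.fft2(rng.random((n, n))))
    otfs.append(a / a[0, 0])
speck = [rng.random((n, n)) for _ in range(M)]   # |f_{l,m}|^2, held fixed here
o = [rng.random((n, n)) for _ in range(M)]       # fluorescence layers o_m
I = rng.random((n, n))                           # toy measurement I_{f,l}

def K(x, otf):
    return np.fft.ifft2(np.fft.fft2(x) * otf).real

def residual(o):   # cost vector e_{f,l} = I - sum_m K diag(|f|^2) o_m
    return I - sum(K(s * om, otf) for s, om, otf in zip(speck, o, otfs))

def cost(o):
    return np.sum(residual(o)**2)

def grad_o(o):     # Eq. (17): -2 diag(|f|^2) K^dagger e  (K^dagger: conjugate OTF)
    e = residual(o)
    return [-2 * s * np.fft.ifft2(np.fft.fft2(e) * np.conj(otf)).real
            for s, otf in zip(speck, otfs)]
```

Since the cost is quadratic in the $\mathbf{o}_m$, a central difference reproduces this gradient essentially to roundoff.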

Appendix B: Reconstruction algorithm

B.1. Initialization of the variables

Since we use a gradient-based algorithm, we must initialize each unknown variable based on prior knowledge, ideally as close to the solution as possible.

For the 3D coherent reconstruction, the targeted variables are the transmittance functions, $t_m(\mathbf{r})$, the incident speckle field, $p_c(\mathbf{r})$, and the pupil function, $\tilde{h}_c(\mathbf{u})$. We have no prior knowledge of the transmittance or pupil functions, so we set $t_m(\mathbf{r}) = 1$ for $m = 1,\dots,M$ and set $\tilde{h}_c(\mathbf{u})$ to a circle function with radius defined by $\mathrm{NA}_{det}/\lambda_{ex}$. This initialization corresponds to a completely transparent sample and a non-aberrated system. Since the sample is mostly transparent, we initialize the amplitude of the incident speckle field as the average of the back-shifted in-focus coherent amplitudes:

$$
p_c^{\mathrm{initial}}(\mathbf{r}) = \frac{1}{N_{img}}\sum_{l=1}^{N_{img}}\sqrt{I_{c,l,z=0}(\mathbf{r}+\mathbf{r}_l)}. \tag{21}
$$
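A minimal sketch of this initialization, under the simplifying assumption of integer-pixel speckle shifts $\mathbf{r}_l$ (the paper registers shifts to sub-pixel precision [66]); `init_speckle_amplitude` is a hypothetical helper name:

```python
import numpy as np

def init_speckle_amplitude(I_stack, shifts):
    """Average of back-shifted in-focus coherent amplitudes (cf. Eq. 21).
    I_stack: (N_img, Ny, Nx) in-focus intensities I_{c,l,z=0};
    shifts:  list of integer (dy, dx) speckle translations r_l."""
    acc = np.zeros(I_stack.shape[1:])
    for I, (dy, dx) in zip(I_stack, shifts):
        # evaluate sqrt(I_l) at r + r_l, i.e. shift the image back by -r_l
        acc += np.roll(np.sqrt(I), shift=(-dy, -dx), axis=(0, 1))
    return acc / len(I_stack)
```

On noiseless data in which every frame is a pure translation of the same speckle amplitude, this recovers the amplitude exactly; with a sample present it is only an approximation, which is all the initialization requires.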

For the 3D fluorescence reconstruction, the targeted variables are the sample fluorescence distributions, $o_m(\mathbf{r})$, the incident field, $p_c(\mathbf{r})$, and the pupil function at the emission wavelength, $\tilde{h}_f(\mathbf{u})$. We have no prior knowledge of the system's aberrations, so we set $\tilde{h}_f(\mathbf{u})$ to a circle function with radius defined by $\mathrm{NA}_{det}/\lambda_{em}$. For the incident speckle field, we use the estimated speckle field from the coherent reconstruction as the initialization. The key to a successful 3D fluorescence reconstruction with this dataset is initializing the sample's 3D fluorescence distribution with a correlation-based SIM solver [53, 74–78], which provides an approximate result to start from. We adapt the correlation-based solver to our case for a rough 3D SR fluorescence estimate. The basic idea is to use the illumination speckle intensity known from the coherent reconstruction and compute its correlation with the fluorescence measurements. This correlation is strong where the speckle intensity lines up with the fluorescent light it excites in the measurement. Each layer of the estimated speckle intensity gates out out-of-focus fluorescent light in the measurement, so we can obtain a rough estimate of the 3D fluorescent sample. Mathematically, we express this correlation as

$$
o_m^{\mathrm{initial}}(\mathbf{r}) = \sum_{n=1}^{9}\Big\langle \big(I_{f,l}(\mathbf{r}) - \langle I_{f,l}(\mathbf{r})\rangle_{\ell(n)}\big)\big(|f_{m,l}(\mathbf{r})|^2 - \langle |f_{m,l}(\mathbf{r})|^2\rangle_{\ell(n)}\big)\Big\rangle_{\ell(n)}, \tag{22}
$$

where $\langle\cdot\rangle_{\ell(n)}$ denotes averaging over the index $l$ of the fluorescence images acquired at the same z-scan position (the $n$-th layer), with the index set $\ell(n) = \{12^2(n-1)+1, \dots, 12^2 n\}$.

To understand why this correlation gives a good estimate of the 3D fluorescent sample, we go through a more detailed derivation, using the short-hand $\Delta$ for the mean-subtraction operation $\Delta a_l(\mathbf{r}) = a_l(\mathbf{r}) - \langle a_l(\mathbf{r})\rangle_{\ell(n)}$. We then examine one component of Eq. (22):

$$
\begin{aligned}
\left\langle \Delta|f_{m,l}(\mathbf{r})|^2\,\Delta I_{f,l}(\mathbf{r})\right\rangle_{\ell(n)} &= \sum_{m'=1}^{M}\int o_{m'}(\mathbf{r}')\left\langle \Delta|f_{m,l}(\mathbf{r})|^2\,\Delta|f_{m',l}(\mathbf{r}')|^2\right\rangle_{\ell(n)} h_{f,z_{m',l}}(\mathbf{r}-\mathbf{r}')\,d\mathbf{r}' \\
&\approx \sum_{m'=1}^{M}\int o_{m'}(\mathbf{r}')\left\langle \big(\Delta|f_{m,l}(\mathbf{r})|^2\big)^2\right\rangle_{\ell(n)}\delta_{m,m'}\,\delta(\mathbf{r}-\mathbf{r}')\,h_{f,z_{m',l}}(\mathbf{r}-\mathbf{r}')\,d\mathbf{r}' \\
&\propto \left\langle \big(\Delta|f_{m,l}(\mathbf{r})|^2\big)^2\right\rangle_{\ell(n)} o_m(\mathbf{r}),
\end{aligned} \tag{23}
$$

where we assume the speckle intensity is completely spatially uncorrelated in 3D; this is an approximation, since the speckle has a finite grain size that depends on the illumination NA. Under this assumption, the correlation is approximately the 3D fluorescence distribution with an extra modulation factor. Hence, it serves well as an initialization for our 3D fluorescence distribution.
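The gating behavior of Eqs. (22)–(23) can be demonstrated on a 1D toy problem: excite a sparse fluorescent layer with many known random "speckle" intensity patterns, blur with a detection PSF, and correlate mean-subtracted measurements with the mean-subtracted excitation. All quantities below (Gaussian PSF, uniform random speckle, a single layer) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, L = 64, 4000                          # pixels, speckle realizations
o = np.zeros(n)
o[10], o[40] = 1.0, 2.0                  # sparse toy fluorophores
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 2.0)**2)
psf /= psf.sum()                         # toy detection PSF

def blur(a):                             # circular convolution with centered PSF
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(np.fft.ifftshift(psf))))

S = rng.random((L, n))                   # known speckle intensities |f|^2
I_meas = np.array([blur(o * s) for s in S])

# Eq. (22): correlate mean-subtracted measurement with mean-subtracted speckle
dI = I_meas - I_meas.mean(axis=0)
dS = S - S.mean(axis=0)
o_est = (dI * dS).mean(axis=0)
```

Because the speckle is spatially uncorrelated, only the $\mathbf{r}' = \mathbf{r}$ term of the blur survives the average (the delta gating of Eq. 23), so `o_est` is proportional to the object itself rather than to its blurred image.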

B.2. Reconstruction algorithm

With these initializations in hand, Algorithm 1 and Algorithm 2 are summarized by the following pseudo-code:

Algorithm 1.

3D coherent imaging reconstruction

Require: I_{c,l}^z, r_l, l = 1, ..., N_img
1: initialize t_1^(1,0), ..., t_M^(1,0), p_c^(1,0), h̃_c^(1,0); normalize I_{c,l}^z
2: for k = 1 : K_c do
3:   (sequential gradient descent over measurements)
4:   for j = 1 : (N_img · N_z) do
5:     z = z_{mod(j,2)};  l = mod(j, N_img)
6:     if j < N_img · N_z then
7:       for m = 1 : M do
8:         t_m^(k,j) = t_m^(k,j−1) − ∇_{t_m} ε_{c,l}^z(t_1^(k,j−1), ..., t_M^(k,j−1), p_c^(k,j−1), h̃_c^(k,j−1)) / [4 max(|p_c^(k,j−1)|)^2]
9:       end for
10:      p_c^(k,j) = p_c^(k,j−1) − ∇_{p_c} ε_{c,l}^z(t_1^(k,j−1), ..., t_M^(k,j−1), p_c^(k,j−1), h̃_c^(k,j−1)) / max(|t_1^(k,j−1)|, ..., |t_M^(k,j−1)|)^2
11:      ξ = F{G_l^(k,j−1)}
12:      h̃_c^(k,j) = h̃_c^(k,j−1) − ∇_{h̃_c} ε_{c,l}^z(t_1^(k,j−1), ..., t_M^(k,j−1), p_c^(k,j−1), h̃_c^(k,j−1)) · |ξ| / [max(|ξ|)(|ξ|^2 + δ)], where δ is chosen to be small
13:    else
14:      do the same updates but save the results to t_1^(k+1,0), ..., t_M^(k+1,0), p_c^(k+1,0), h̃_c^(k+1,0)
15:    end if
16:  end for
17:  filter t_1^(k+1,0), ..., t_M^(k+1,0) with a Gaussian filter to damp high-frequency artifacts
18: end for

Algorithm 2.

3D fluorescence imaging reconstruction

Require: I_{f,l}, r_l, f_{1,l}, ..., f_{M,l}, l = 1, ..., N_img
1: initialize o_1^(1,0), ..., o_M^(1,0), p_c^(1,0), h̃_f^(1,0); normalize I_{f,l}
2: for k = 1 : K_f do
3:   (sequential gradient descent over measurements)
4:   for l = 1 : N_img do
5:     if l < N_img then
6:       for m = 1 : M do
7:         o_m^(k,l) = o_m^(k,l−1) − ∇_{o_m} ε_{f,l}(o_1^(k,l−1), ..., o_M^(k,l−1), p_c^(k,l−1), h̃_f^(k,l−1)) / max(|f_{1,l}^(k,l−1)|, ..., |f_{M,l}^(k,l−1)|)^4
8:       end for
9:       p_c^(k,l) = p_c^(k,l−1) − ∇_{p_c} ε_{f,l}(o_1^(k,l−1), ..., o_M^(k,l−1), p_c^(k,l−1), h̃_f^(k,l−1)) / max(|t_1|, ..., |t_M|)^2, where t_1, ..., t_M are fixed from the coherent reconstruction
10:      h̃_f^(k,l) = h̃_f^(k,l−1) − ∇_{h̃_f} ε_{f,l}(o_1^(k,l−1), ..., o_M^(k,l−1), p_c^(k,l−1), h̃_f^(k,l−1)) / max_m(|F{o_m^(k,l−1) |f_{m,l}^(k,l−1)|^2}|)
11:    else
12:      do the same updates but save the results to o_1^(k+1,0), ..., o_M^(k+1,0), p_c^(k+1,0), h̃_f^(k+1,0)
13:    end if
14:  end for
15: end for

3D coherent reconstruction takes about 40 iterations, while the 3D fluorescence reconstruction takes around 25 iterations to reach convergence.
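The control flow shared by Algorithms 1 and 2 (an outer loop over iterations, an inner sequential pass over the measurements, and a normalized step size per variable) can be sketched on a toy sequential least-squares problem. Everything below (the linear per-measurement model `A[l]`, the sizes, and the step rule) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_meas = 10, 50
A = [rng.standard_normal((3, n)) for _ in range(n_meas)]  # one toy operator per measurement l
x_true = rng.standard_normal(n)
b = [Al @ x_true for Al in A]                             # consistent "measurements"

x = np.zeros(n)                        # initialization (cf. Appendix B.1)
for k in range(100):                   # outer iterations (the role of K_c / K_f)
    for l in range(n_meas):            # sequential gradient descent over measurements
        e = b[l] - A[l] @ x            # cost vector for measurement l
        g = -2.0 * A[l].T @ e          # gradient of ||e||_2^2
        step = 1.0 / (2.0 * np.linalg.norm(A[l], 2)**2)   # normalized step size
        x -= step * g
```

Each inner update touches only one measurement at a time, which is what makes the sequential (stochastic-gradient) scheme [68] memory-efficient for large image stacks.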

Appendix C: Sample preparation

The sample shown in Fig. 4 is a monolayer of 700 nm diameter polystyrene microspheres (ThermoFisher, R700), prepared by placing microsphere dilutions (60 μL stock solution in 500 μL isopropyl alcohol) onto #1.5 coverslips and allowing them to air dry. Water was subsequently placed on the coverslip to reduce the index mismatch between the microspheres and air. An adhesive spacer, followed by another #1.5 coverslip, was placed on top of the original coverslip to ensure a uniform sample layer for imaging.

The sample used in Fig. 5 is a mixture of 2 μm (ThermoFisher, F8826) and 4 μm (ThermoFisher, F8858) fluorescently-tagged (λem = 605 nm) and 3 μm non-fluorescent (Sigma-Aldrich, LB30) polystyrene microspheres. We followed a similar procedure as before, except that the dilution was composed of 60 μL of stock solution of each type of microsphere and 500 μL of isopropyl alcohol. Since these microspheres are larger, we adopted high-index oil (nm = 1.52 at λ = 532 nm) for sample immersion.

Figure 6 uses a sample of HT-29 cells grown in DMEM with 10% FBS, trypsinized with 1× trypsin, passaged twice a week into 100 mm dishes at 1/5, 1/6, or 1/8 dilutions, and stored in a 37 °C, 5% CO2 incubator. For imaging, HT-29 cells were grown on glass coverslips (12 mm diameter, No. 1 thickness; Carolina Biological Supply Co.) and fixed with 3% paraformaldehyde for 20 min. Fixed cells were blocked and permeabilized in phosphate-buffered saline (PBS; Corning Cellgro) with 5% donkey serum (D9663, Sigma-Aldrich) and 0.3% Triton X-100 (Fisher Scientific) for 30 minutes. Cells were incubated with Alexa Fluor 546 Phalloidin (A22283, ThermoFisher Scientific) for 1 hour, washed 3 times with PBS, mounted onto a second glass coverslip (24×50 mm, No. 1.5 thickness; Fisher Scientific), and immobilized with sealant (Cytoseal 60; Thermo Scientific).

Funding

STROBE: A National Science Foundation Science & Technology Center (DMR 1548924); Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative (GBMF4562); Chan Zuckerberg Biohub; Ruth L. Kirschstein National Research Service Award (F32GM129966).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Mccullough B., Ying X., Monticello T., Bonnefoi M., “Digital microscopy imaging and new approaches in toxicologic pathology,” Toxicol. Pathol. 32, 49–58 (2004). doi:10.1080/01926230490451734
2. Kim M. H., Park Y., Seo D., Lim Y. J., Kim D.-I., Kim C. W., Kim W. H., “Virtual microscopy as a practical alternative to conventional microscopy in pathology education,” Basic Appl. Pathol. 1, 46–48 (2008). doi:10.1111/j.1755-9294.2008.00006.x
3. Dee F. R., “Virtual microscopy in pathology education,” Hum. Pathol. 40, 1112–1121 (2009). doi:10.1016/j.humpath.2009.04.010
4. Pepperkok R., Ellenberg J., “High-throughput fluorescence microscopy for systems biology,” Nat. Rev. Mol. Cell Biol. 7, 690–696 (2006). doi:10.1038/nrm1979
5. Yarrow J. C., Totsukawa G., Charras G. T., Mitchison T. J., “Screening for cell migration inhibitors via automated microscopy reveals a Rho-kinase inhibitor,” Chem. Biol. 12, 385–395 (2005). doi:10.1016/j.chembiol.2005.01.015
6. Laketa V., Simpson J. C., Bechtel S., Wiemann S., Pepperkok R., “High-content microscopy identifies new neurite outgrowth regulators,” Mol. Biol. Cell 18, 242–252 (2007). doi:10.1091/mbc.e06-08-0666
7. Trounson A., “The production and directed differentiation of human embryonic stem cells,” Endocr. Rev. 27, 208–219 (2006). doi:10.1210/er.2005-0016
8. Eggert U. S., Kiger A. A., Richter C., Perlman Z. E., Perrimon N., Mitchison T. J., Field C. M., “Parallel chemical genetic and genome-wide RNAi screens identify cytokinesis inhibitors and targets,” PLoS Biol. 2, e379 (2004). doi:10.1371/journal.pbio.0020379
9. Starkuviene V., Pepperkok R., “The potential of high-content high-throughput microscopy in drug discovery,” Br. J. Pharmacol. 152, 62–71 (2007). doi:10.1038/sj.bjp.0707346
10. Lukosz W., “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57, 932–941 (1967). doi:10.1364/JOSA.57.000932
11. Schwarz C. J., Kuznetsova Y., Brueck S. R. J., “Imaging interferometric microscopy,” Opt. Lett. 28, 1424–1426 (2003). doi:10.1364/OL.28.001424
12. Kim M., Choi Y., Fang-Yen C., Sung Y., Dasari R. R., Feld M. S., Choi W., “High-speed synthetic aperture microscopy for live cell imaging,” Opt. Lett. 36, 148–150 (2011). doi:10.1364/OL.36.000148
13. Hell S. W., Wichmann J., “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780–782 (1994). doi:10.1364/OL.19.000780
14. Betzig E., Patterson G. H., Sougrat R., Lindwasser O. W., Olenych S., Bonifacino J. S., Davidson M. W., Lippincott-Schwartz J., Hess H. F., “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006). doi:10.1126/science.1127344
15. Rust M. J., Bates M., Zhuang X., “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3, 793–795 (2006). doi:10.1038/nmeth929
16. Heintzmann R., Cremer C., “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). doi:10.1117/12.336833
17. Gustafsson M. G. L., “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000). doi:10.1046/j.1365-2818.2000.00710.x
18. Gustafsson M. G. L., “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” PNAS 102, 13081–13086 (2005). doi:10.1073/pnas.0406877102
19. Xu W., Jericho M. H., Meinertzhagen I. A., Kreuzer H. J., “Digital in-line holography for biological applications,” PNAS 98, 11301–11305 (2001). doi:10.1073/pnas.191361398
20. Bishara W., Su T.-W., Coskun A. F., Ozcan A., “Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution,” Opt. Express 18, 11181–11191 (2010). doi:10.1364/OE.18.011181
21. Greenbaum A., Luo W., Khademhosseinieh B., Su T.-W., Coskun A. F., Ozcan A., “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013). doi:10.1038/srep01717
22. Zheng G., Horstmeyer R., Yang C., “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photon. 7, 739–745 (2013). doi:10.1038/nphoton.2013.187
23. Tian L., Li X., Ramchandran K., Waller L., “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014). doi:10.1364/BOE.5.002376
24. Tian L., Liu Z., Yeh L., Chen M., Zhong J., Waller L., “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2, 904–911 (2015). doi:10.1364/OPTICA.2.000904
25. Tian L., Waller L., “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015). doi:10.1364/OPTICA.2.000104
26. Horstmeyer R., Chung J., Ou X., Zheng G., Yang C., “Diffraction tomography with Fourier ptychography,” Optica 3, 827–835 (2016). doi:10.1364/OPTICA.3.000827
27. Ling R., Tahir W., Lin H.-Y., Lee H., Tian L., “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9, 2130 (2018). doi:10.1364/BOE.9.002130
28. Pan A., Zhang Y., Wen K., Zhou M., Min J., Lei M., Yao B., “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26, 23119–23131 (2018). doi:10.1364/OE.26.023119
29. Orth A., Crozier K., “Microscopy with microlens arrays: high throughput, high resolution and light-field imaging,” Opt. Express 20, 13522–13531 (2012). doi:10.1364/OE.20.013522
30. Orth A., Crozier K., “Gigapixel fluorescence microscopy with a water immersion microlens array,” Opt. Express 21, 2361–2368 (2013). doi:10.1364/OE.21.002361
31. Orth A., Crozier K. B., “High throughput multichannel fluorescence microscopy with microlens arrays,” Opt. Express 22, 18101–18112 (2014). doi:10.1364/OE.22.018101
32. Orth A., Tomaszewski M. J., Ghosh R. N., Schonbrun E., “Gigapixel multispectral microscopy,” Optica 2, 654–662 (2015). doi:10.1364/OPTICA.2.000654
33. Pang S., Han C., Kato M., Sternberg P. W., Yang C., “Wide and scalable field-of-view Talbot-grid-based fluorescence microscopy,” Opt. Lett. 37, 5018–5020 (2012). doi:10.1364/OL.37.005018
34. Pang S., Han C., Erath J., Rodriguez A., Yang C., “Wide field-of-view Talbot grid-based microscopy for multicolor fluorescence imaging,” Opt. Express 21, 14555–14565 (2013). doi:10.1364/OE.21.014555
35. Chowdhury S., Chen J., Izatt J., “Structured illumination fluorescence microscopy using Talbot self-imaging effect for high-throughput visualization,” arXiv:1801.03540 (2018).
36. Guo K., Zhang Z., Jiang S., Liao J., Zhong J., Eldar Y. C., Zheng G., “13-fold resolution gain through turbid layer via translated unknown speckle illumination,” Biomed. Opt. Express 9, 260–274 (2018). doi:10.1364/BOE.9.000260
37. Jang M., Horie Y., Shibukawa A., Brake J., Liu Y., Kamali S. M., Arbabi A., Ruan H., Faraon A., Yang C., “Wavefront shaping with disorder-engineered metasurfaces,” Nat. Photon. 12, 84–90 (2018). doi:10.1038/s41566-017-0078-z
38. Yeh L.-H., Chowdhury S., Waller L., “Computational structured illumination for high-content fluorescent and phase microscopy,” Biomed. Opt. Express 10, 1978–1998 (2019). doi:10.1364/BOE.10.001978
39. Park Y., Popescu G., Badizadegan K., Dasari R. R., Feld M. S., “Diffraction phase and fluorescence microscopy,” Opt. Express 14, 8263–8268 (2006). doi:10.1364/OE.14.008263
40. Chowdhury S., Eldridge W. J., Wax A., Izatt J. A., “Structured illumination multimodal 3D-resolved quantitative phase and fluorescence sub-diffraction microscopy,” Biomed. Opt. Express 8, 2496–2518 (2017). doi:10.1364/BOE.8.002496
41. Chowdhury S., Eldridge W. J., Wax A., Izatt J. A., “Structured illumination microscopy for dual-modality 3D sub-diffraction resolution fluorescence and refractive-index reconstruction,” Biomed. Opt. Express 8, 5776–5793 (2017). doi:10.1364/BOE.8.005776
42. Schürmann M., Cojoc G., Girardo S., Ulbricht E., Guck J., Müller P., “Three-dimensional correlative single-cell imaging utilizing fluorescence and refractive index tomography,” J. Biophotonics, e201700145 (2017).
43. Shin S., Kim D., Kim K., Park Y., “Super-resolution three-dimensional fluorescence and optical diffraction tomography of live cells using structured illumination generated by a digital micromirror device,” arXiv:1801.00854 (2018).
44. Li D., Shao L., Chen B.-C., Zhang X., Zhang M., Moses B., Milkie D. E., Beach J. R., Hammer J. A., Pasham M., Kirchhausen T., Baird M. A., Davidson M. W., Xu P., Betzig E., “Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics,” Science 349, aab3500 (2015). doi:10.1126/science.aab3500
45. Mudry E., Belkebir K., Girard J., Savatier J., Moal E. L., Nicoletti C., Allain M., Sentenac A., “Structured illumination microscopy using unknown speckle patterns,” Nat. Photon. 6, 312–315 (2012). doi:10.1038/nphoton.2012.83
46. Ayuk R., Giovannini H., Jost A., Mudry E., Girard J., Mangeat T., Sandeau N., Heintzmann R., Wicker K., Belkebir K., Sentenac A., “Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm,” Opt. Lett. 38, 4723–4726 (2013). doi:10.1364/OL.38.004723
47. Min J., Jang J., Keum D., Ryu S.-W., Choi C., Jeong K.-H., Ye J. C., “Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery,” Sci. Rep. 3, 2075 (2013). doi:10.1038/srep02075
48. Dong S., Nanda P., Shiradkar R., Guo K., Zheng G., “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22, 20856–20870 (2014). doi:10.1364/OE.22.020856
49. Yilmaz H., Putten E. G. V., Bertolotti J., Lagendijk A., Vos W. L., Mosk A. P., “Speckle correlation resolution enhancement of wide-field fluorescence imaging,” Optica 2, 424–429 (2015). doi:10.1364/OPTICA.2.000424
50. Jost A., Tolstik E., Feldmann P., Wicker K., Sentenac A., Heintzmann R., “Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction,” PLoS ONE 10, e0132174 (2015). doi:10.1371/journal.pone.0132174
51. Negash A., Labouesse S., Sandeau N., Allain M., Giovannini H., Idier J., Heintzmann R., Chaumet P. C., Belkebir K., Sentenac A., “Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations,” J. Opt. Soc. Am. A 33, 1089–1094 (2016). doi:10.1364/JOSAA.33.001089
52. Labouesse S., Allain M., Idier J., Bourguignon S., Negash A., Liu P., Sentenac A., “Joint reconstruction strategy for structured illumination microscopy with unknown illuminations,” arXiv:1607.01980 (2016).
53. Yeh L.-H., Tian L., Waller L., “Structured illumination microscopy with unknown patterns and a statistical prior,” Biomed. Opt. Express 8, 695–711 (2017). doi:10.1364/BOE.8.000695
54. Wolf E., “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). doi:10.1016/0030-4018(69)90052-2
55. Lauer V., “New approach to optical diffraction tomography yielding a vector equation of diffraction tomography and a novel tomographic microscope,” J. Microsc. 205, 165–176 (2002). doi:10.1046/j.0022-2720.2001.00980.x
56. Debailleul M., Simon B., Georges V., Haeberlé O., Lauer V., “Holographic microscopy and diffractive microtomography of transparent samples,” Meas. Sci. Technol. 19, 074009 (2008). doi:10.1088/0957-0233/19/7/074009
57. Sung Y., Choi W., Fang-Yen C., Badizadegan K., Dasari R. R., Feld M. S., “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). doi:10.1364/OE.17.000266
58. Gustafsson M. G. L., Shao L., Carlton P. M., Wang C. J. R., Golubovskaya I. N., Cande W. Z., Agard D. A., Sedat J. W., “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94, 4957–4970 (2008). doi:10.1529/biophysj.107.120345
59. Cowley J. M., Moodie A. F., “The scattering of electrons by atoms and crystals. I. A new theoretical approach,” Acta Crystallogr. 10, 609–619 (1957). doi:10.1107/S0365110X57002194
60. Maiden A. M., Humphry M. J., Rodenburg J. M., “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012). doi:10.1364/JOSAA.29.001606
61. Godden T. M., Suman R., Humphry M. J., Rodenburg J. M., Maiden A. M., “Ptychographic microscope for three-dimensional imaging,” Opt. Express 22, 12513–12523 (2014). doi:10.1364/OE.22.012513
62. Sheppard C. J. R., Kawata Y., Kawata S., Gu M., “Three-dimensional transfer functions for high-aperture systems,” J. Opt. Soc. Am. A 11, 593–598 (1994). doi:10.1364/JOSAA.11.000593
63. Gu M., Advanced Optical Imaging Theory (Springer, 2000). doi:10.1007/978-3-540-48471-4
64. Debailleul M., Georges V., Simon B., Morin R., Haeberlé O., “High-resolution three-dimensional tomographic diffractive microscopy of transparent inorganic and biological samples,” Opt. Lett. 34, 79–81 (2009). doi:10.1364/OL.34.000079
65. Goodman J. W., Introduction to Fourier Optics (Roberts & Co., 2005).
66. Guizar-Sicairos M., Thurman S. T., Fienup J. R., “Efficient subpixel image registration algorithms,” Opt. Lett. 33, 156–158 (2008). doi:10.1364/OL.33.000156
67. Yeh L.-H., Dong J., Zhong J., Tian L., Chen M., Tang G., Soltanolkotabi M., Waller L., “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23, 33213–33238 (2015). doi:10.1364/OE.23.033214
68. Bottou L., “Large-scale machine learning with stochastic gradient descent,” in Proc. International Conference on Computational Statistics, pp. 177–187 (2010).
69. Jingshan Z., Claus R. A., Dauwels J., Tian L., Waller L., “Transport of intensity phase imaging by intensity spectrum fitting of exponentially spaced defocus planes,” Opt. Express 22, 10661–10674 (2014). doi:10.1364/OE.22.010661
70. Förster R., Lu-Walther H.-W., Jost A., Kielhorn M., Wicker K., Heintzmann R., “Simple structured illumination microscope setup with high acquisition speed by using a spatial light modulator,” Opt. Express 22, 20663–20677 (2014). doi:10.1364/OE.22.020663
71. Chowdhury S., Eldridge W. J., Wax A., Izatt J., “Refractive index tomography with structured illumination,” Optica 4, 537–545 (2017). doi:10.1364/OPTICA.4.000537
72. Dan D., Lei M., Yao B., Wang W., Winterhalder M., Zumbusch A., Qi Y., Xia L., Yan S., Yang Y., Gao P., Ye T., Zhao W., “DMD-based LED-illumination super-resolution and optical sectioning microscopy,” Sci. Rep. 3, 1116 (2013). doi:10.1038/srep01116
73. Lee K., Kim K., Kim G., Shin S., Park Y., “Time-multiplexed structured illumination using a DMD for optical diffraction tomography,” Opt. Lett. 42, 999–1002 (2017). doi:10.1364/OL.42.000999
74. Tanaami T., Otsuki S., Tomosada N., Kosugi Y., Shimizu M., Ishida H., “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. 41, 4704–4708 (2002). doi:10.1364/AO.41.004704
75. Walker J. G., “Non-scanning confocal fluorescence microscopy using speckle illumination,” Opt. Commun. 189, 221–226 (2001). doi:10.1016/S0030-4018(01)01032-X
76. Jiang S.-H., Walker J. G., “Experimental confirmation of non-scanning fluorescence confocal microscopy using speckle illumination,” Opt. Commun. 238, 1–12 (2004). doi:10.1016/j.optcom.2004.04.035
77. García J., Zalevsky Z., Fixler D., “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13, 6073–6078 (2005). doi:10.1364/OPEX.13.006073
78. Heintzmann R., Benedetti P. A., “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45, 5037–5045 (2006). doi:10.1364/AO.45.005037
