Science Advances. 2024 Mar 8;10(10):eadj3656. doi: 10.1126/sciadv.adj3656

Large-FOV 3D localization microscopy by spatially variant point spread function generation

Dafei Xiao 1, Reut Kedem Orange 1, Nadav Opatovski 1, Amit Parizat 2, Elias Nehme 2,3, Onit Alalouf 2, Yoav Shechtman 1,2,4,*
PMCID: PMC10923516  PMID: 38457497

Abstract

Accurate characterization of the microscopic point spread function (PSF) is crucial for achieving high-performance localization microscopy (LM). Traditionally, LM assumes a spatially invariant PSF to simplify the modeling of the imaging system. However, for large fields of view (FOV) imaging, it becomes important to account for the spatially variant nature of the PSF. Here, we propose an accurate and fast principal components analysis–based field-dependent 3D PSF generator (PPG3D) and localizer for LM. Through simulations and experimental three-dimensional (3D) single-molecule localization microscopy (SMLM), we demonstrate the effectiveness of PPG3D, enabling super-resolution imaging of mitochondria and microtubules with high fidelity over a large FOV. A comparison of PPG3D with a shift-variant PSF generator for 3D LM reveals a threefold improvement in accuracy. Moreover, PPG3D is approximately 100 times faster than existing PSF generators, when used in image plane–based interpolation mode. Given its user-friendliness, we believe that PPG3D holds great potential for widespread application in SMLM and other imaging modalities.


Spatially varying point spread function characterization enables 3D localization microscopy over a large field of view.

INTRODUCTION

Localization microscopy (LM) is a powerful imaging modality both for super-resolution imaging (1, 2) and for particle tracking (3). Imaging in these modalities is based on determining the positions of point sources or emitters with sub-diffraction precision.

Emitters from a flat sample can be localized to yield a two-dimensional (2D) image. However, in the case of a volumetric sample, 3D LM is needed. One approach for 3D LM is point spread function (PSF) engineering, which intentionally induces aberrations at the pupil plane of an imaging system to yield an informative depth-dependent PSF, e.g., astigmatism (4), double helix (5, 6), or tetrapod (7). For the decoding part, common localization algorithms include maximum likelihood estimation (8–12) and, more recently, deep learning–based algorithms, e.g., DeepSTORM3D (13), DECODE (14), and more (9, 15–18), which have demonstrated excellent performance when dealing with PSF overlap and high emitter density. Considering the influence of field position on localization (field-dependent localization), recent work (19) proposes FD-DeepLoc by combining DECODE with CoordConv (20) to both address the field dependence and estimate emitter positions. All estimators rely on some forward model, a PSF generator, i.e., a continuous-domain “dictionary” that maps an emitter’s 3D spatial position to a PSF. The performance of LM strongly depends on the accuracy of this PSF generator (21).

In its basic form, a PSF generator receives as input the optical system parameters and a 3D position and outputs the expected image on the camera of an emitter in that position. The generator is considered shift-invariant if the shape of its image is independent of the global transverse position of the emitter, which simplifies the PSF’s spatial dependence to be depth-only.

Shift-invariant PSF characterization is typically based on a physical model (22, 23), with additional experimental input to account for unavoidable aberrations and deviations from the ideal models. A commonly used analytical model for the 3D microscopic PSF is based on the principle that the PSF can be obtained by taking the absolute value squared of the Fourier transform of the field at the back focal plane (BFP) (24); emitter defocus manifests as an approximately quadratic phase added to the BFP field. The experimental input can be acquired through a calibration experiment, where the PSF is measured at various known axial positions. Then, combining the calibration with the analytical model involves solving a phase-retrieval problem (12, 25) to determine the BFP phase profile that aligns with the experimental calibration measurement. This BFP phase profile can be projected onto Zernike polynomials, for computational efficiency, to yield Zernike pupil phase retrieval (ZPPR) (26, 27). For improved performance, a pixel-wise pupil representation can be used, at a cost of more optimization parameters, yielding VIPR (short for vectorial implementation of phase retrieval) (24). In addition to these model-based approaches, model-free generators utilize polynomial B-spline (28) or C-spline (29, 30) bases and Zernike moments (31) to represent the depth-dependent PSF and yield PSF interpolation.

A common and important problem in LM is addressing field dependence of large fields of view (FOV), because the shift-invariant assumption deviates from reality in several cases. First, objectives, especially high–numerical aperture ones widely used in SMLM, exhibit inherent field-dependent aberrations that vary with transverse positions (27, 32). Second, PSF engineering in 3D LM introduces additional optical elements that inevitably add some field dependence, e.g., because of some displacement of the phase element from the BFP. Third, field dependence becomes more prominent as FOV increases (19, 33, 34), which is increasingly relevant in high-throughput imaging scenarios. Therefore, for 3D LM at high fidelity over a large FOV, the development of a PSF generator that considers both depth and field dependence would be highly beneficial.

In the context of LM, to the best of our knowledge, the only PSF generators that consider field dependence are based on ZPPR. Shift-variant ZPPR algorithms (19, 27, 35) rely on calibration measurements at many field positions to retrieve a Zernike-polynomial-based pupil phase for each position. Then, interpolation (32, 36, 37) of the Zernike basis coefficients is implemented to obtain the pupil phase corresponding to any desired field position. In other fields, e.g., astronomy (38) and computational volumetric imaging (39, 40), field dependence has been characterized through truncated singular value decomposition (TSVD) of calibrated PSFs and interpolation of the decomposition coefficients.

Here, we extend the PSF decomposition and interpolation concept and propose a fast and accurate spatially variant PSF generator in 3D (PPG3D) specifically designed for SMLM. PPG3D is a continuous-domain PSF model that implements principal components analysis (PCA) of calibrated PSFs, conducts local interpolation of those principal component (PC) coefficients, and then generates the PSF at any target position through backward space projection. Comparison of PPG3D with three other commonly used PSF generators in LM demonstrates improvements of more than three times in accuracy and around 100 times in computation speed. In its application to SMLM, we combine PPG3D with our localization estimator FOV-dependent DeepSTORM3D, customized from one of the state-of-the-art localization estimators, DeepSTORM3D (13), and achieve 3D super-resolution imaging of mitochondria and microtubules with high fidelity over a large FOV.

RESULTS

PPG3D

PPG3D formation

PPG3D is a PSF interpolator built upon calibrated PSFs at known spatial positions. Direct pixel-by-pixel interpolation of the measured PSFs is prone to failure due to sensitivity to noise and ill-posedness. To address this issue, we use PCA of the measured PSFs (Fig. 1A) and perform 3D interpolation in a lower-dimensional space (Fig. 1B). Because the lateral-position dependence of the SMLM PSF is more moderate than its depth dependence, we separate the 3D interpolation into two 2D lateral interpolations and one 1D axial interpolation. Furthermore, we empirically find that a PSF in LM can be adequately represented by its surrounding PSFs. As a result, we use local linear interpolation among six PSFs, specifically measured at three lateral positions and two axial positions (Fig. 1B), to improve operation speed.

Fig. 1. Schematic of PSF engineering and PPG3D.


(A) PSF engineering with a Tetrapod phase mask at the back focal plane (BFP) and experimentally measured PSFs at two field positions xy1 and xy2 (white squares in Fig. 2) exemplifying field dependence. IIP denotes intermediate imaging plane. (B) PSF at any desired position (green star) is interpolated using six measured PSFs at nearby calibrated positions (red dots). Each lateral position corresponds to a z-stack of a depth-dependent Tetrapod PSF. The bottom flowchart shows two separate steps in PPG3D: lateral (gray arrows) and axial interpolations (blue arrow).

To generate a PSF at any spatial position, we first search for three lateral calibration positions close to the target coordinate according to a criterion of even distribution (text S3), at two axial planes above and below the target axial position, such that a total of six PSFs are chosen. Next, PCA of the selected PSFs at each axial plane is conducted, and lateral 2D interpolations of the PC coefficients are performed to address field dependence, which yields two PSFs located at the blue dots in Fig. 1B. The following step is PCA of those two laterally interpolated PSFs and a 1D interpolation of the PC coefficients, followed by a transformation back to image space, to yield the final target PSF. More details of this implementation are given in text S3, and field-dependence quantification is presented in text S5.
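The six-PSF, two-stage interpolation described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the released PPG3D code: the function names, the fixed number of retained components, and the externally supplied barycentric lateral weights are our assumptions; the actual selection criterion and weighting scheme are detailed in text S3.

```python
import numpy as np

def interp_psfs_pca(psfs, weights, n_pc=3):
    """Interpolate among a small set of PSFs in PCA space.

    psfs    : (n, H, W) stack of calibrated PSFs
    weights : (n,) interpolation weights summing to 1
    n_pc    : number of principal components to keep
    """
    n, H, W = psfs.shape
    X = psfs.reshape(n, -1)
    mean = X.mean(axis=0)
    # PCA via SVD of the mean-centered stack, truncated to n_pc components
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coeffs = (U * S)[:, :n_pc]            # PC coefficients of each PSF
    # Linear interpolation of the coefficients, then back-projection to image space
    c = weights @ coeffs
    return (mean + c @ Vt[:n_pc]).reshape(H, W)

def generate_psf(psfs_below, psfs_above, w_xy, w_z):
    """Two lateral interpolations (one per axial plane), then one axial one.

    psfs_below, psfs_above : (3, H, W) PSFs at the three chosen lateral
                             positions, on the axial planes below/above target
    w_xy                   : (3,) barycentric weights of the target lateral position
    w_z                    : fractional axial position between the planes (0..1)
    """
    p_below = interp_psfs_pca(psfs_below, w_xy)
    p_above = interp_psfs_pca(psfs_above, w_xy)
    return interp_psfs_pca(np.stack([p_below, p_above]),
                           np.array([1.0 - w_z, w_z]), n_pc=1)
```

Interpolating a handful of PC coefficients instead of thousands of raw pixels is what gives this scheme its speed and its robustness to pixel-level noise.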

Comparison of PPG3D with PSF generators in LM

The comparison is performed on an experimental calibration dataset (151 lateral positions by 29 axial positions) obtained from an imaging system with tetrapod PSF engineering (see imaging system #1 of table S1). Each PSF is cropped to 81 × 81 pixels (text S2). We divide the comparison into the shift-invariant case, i.e., axial-only interpolation, and the shift-variant case, i.e., axial and lateral interpolation. The criteria for assessing the different generators are Pearson’s correlation coefficient (CC), defined in eq. S1, and root mean square error (RMSE), both comparing measured PSFs to generated ones, as well as runtime.
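Both assessment criteria have standard definitions; a minimal sketch (assuming the conventional Pearson CC and RMSE — the exact form of eq. S1 is given in the supplement) might look like:

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson correlation coefficient between two PSF images."""
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def rmse(a, b):
    """Root mean square error between two PSF images."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
```

A perfect generator would give CC = 1 and RMSE = 0 against the measured PSF at every test position.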

We first compare PPG3D’s shift-invariant mode with three shift-invariant PSF generators: model-based VIPR and ZPPR21 (which uses 21 Zernike basis functions for pupil function representation), and a model-free Spline interpolator. At the field position P0 in Fig. 2A, near the center of the FOV, PSFs are measured at a series of axial positions (Fig. 2B, horizontal axis) that are divided into a “seen” group and an “unseen” group (marked by grid lines in Fig. 2B). The former is set as input for PSF generation, and the latter is used for testing. The details of implementing VIPR, ZPPR21, and Spline are given in text S3. First, PPG3D exhibits a larger CC and lower RMSE than the others at “unseen” axial positions, which is also demonstrated visually by the PSFs generated at a test position (black dashed line in Fig. 2B) in Fig. 2C. Second, PSFs generated at “seen” positions by PPG3D are nearly identical to the measured ones, while all other generators exhibit some characterization errors. Third, of the two model-based generators, VIPR outperforms ZPPR21, thanks to its pixel-wise optimization. Finally, in terms of runtime (from data input to generation of all the unseen test data; see table S5), PPG3D is around 100 times faster than the second-fastest algorithm, ZPPR21 (Fig. 2D).

Fig. 2. Comparison of PPG3D with three PSF generators in LM.


(A) An FOV with 151 calibrated field positions (gray and red dots), among which xy1 and xy2 are used for demonstration in Fig. 1. Scale bar, 20 μm. The comparison is divided into shift-invariant mode [(B) to (E)] considering point P0 in (A), and shift-variant mode [(F) to (H)] where the red dots are set as unseen test positions. (B) CC and RMSE assessment of VIPR, Spline, PPG3D, and ZPPR21 (implementation details in fig. S3). The axial positions at grid lines are set as unseen test positions. (C) Generated PSFs at the black dashed line of (B). (D) RMSE-time comprehensive assessment (details in table S5). (E) Principal components used in PPG3D and their variance ratios. (F) CC and RMSE assessment of shift-variant ZPPR21 and PPG3D at the five test positions. (G) Generated PSFs at the lateral test positions and one axial position [black dashed line in (F)]. (H) RMSE-time comprehensive assessment. Scale bars in (C), (E), and (G), 2 μm.

Next, we compare PPG3D’s shift-variant mode with shift-variant ZPPR21. Among all the calibrated field positions, we choose five as unseen test positions (red dots in Fig. 2A). At all 5-by-29 test spatial positions, PPG3D performs better than ZPPR21 in both CC and RMSE (Fig. 2F). The implementation details of shift-variant ZPPR21 are presented in text S3 and fig. S3. Specifically, ZPPR21 fails to characterize the central bright spot of the measured PSFs (Fig. 2G), which is caused by unmodulated light at the BFP, often due to a mismatch between the phase-mask size and the actual light circle at the BFP. Meanwhile, PPG3D is hundreds of times faster than shift-variant ZPPR21, which is important for applications such as online training of deep learning–based localization algorithms. Note that the comparison here is based on bead images on the coverslip, so there is no refractive-index-mismatch issue. We also analyzed PPG3D’s generalizability to other PSFs (figs. S4 to S6, which also show the performance at the FOV edge), calibration density, and robustness to noise (fig. S7).

High-accuracy large-FOV 3D SMLM

A field-dependent localization estimator: FOV-dependent DeepSTORM3D

We construct a complete field-dependent 3D SMLM localizer by combining PPG3D with DeepSTORM3D (13), a deep learning–based algorithm for 3D SMLM localization, yielding a large-FOV field-dependent localization estimator: FOV-dependent DeepSTORM3D. It is based on adding a CoordConv layer (20) to all the feature detection layers of DeepSTORM3D (Fig. 3A), which consists of six convolution layers (with a constant feature channel number of 64) with skip connections in between (41). To overcome the computational challenges of large-FOV reconstruction, we use a segmentation and consolidation scheme that crops the complete image into sub-images for localization and subsequently stitches the localization results together. Notably, PPG3D can be combined with localizers other than DeepSTORM3D, e.g., DECODE (14), which is used in FD-DeepLoc (19).

Fig. 3. FOV-dependent DeepSTORM3D and one prediction demonstration.


(A) FOV-dependent DeepSTORM3D outline. On the basis of DeepSTORM3D, x and y maps corresponding to the cropped sub-image are fed to feature extraction layers and prediction layers to yield local localizations, which are then filtered and transformed into final global localizations. (B) Demonstration of one representative experimental frame (upper left triangle) and a rendered image (bottom right triangle), which is an overlay, upon this frame, consisting of the reconstructed PSFs according to the network inference. The zoom-in view (1) exemplifies this overlay in a square with the experimental measurement (2) and PSF reconstruction (3). Scale bar, 10 μm.

In the training phase, training images consist of generated sub-images with random PSFs placed away from the sub-image edges to prevent PSF truncation. During the inference phase, we divide each experimental image into sub-images with some overlap. These sub-images are then fed individually into the network to obtain localizations, and the localizations from all sub-images are fused to generate the complete global localization map for the frame. To ensure accuracy, unreliable localizations near the edges of each sub-image are excluded, focusing only on the central valid area. Notably, this exclusion does not result in localization loss due to our precise cropping overlap design (fig. S10A).
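The tiling-and-stitching logic of the inference phase can be illustrated with a simplified sketch. This is not the released code: the tile size, border handling, and margin value below are our assumptions, and the actual cropping-overlap design is described in fig. S10A.

```python
import numpy as np

def tile_image(img, tile, overlap):
    """Crop an image into overlapping square tiles.

    Returns a list of ((y0, x0), sub_image) pairs; in this sketch, only
    tiles that fit entirely inside the image are produced.
    """
    step = tile - overlap
    tiles = []
    for y0 in range(0, img.shape[0] - tile + 1, step):
        for x0 in range(0, img.shape[1] - tile + 1, step):
            tiles.append(((y0, x0), img[y0:y0 + tile, x0:x0 + tile]))
    return tiles

def stitch_localizations(per_tile_locs, tile, margin):
    """Keep only localizations inside each tile's central valid area and
    shift them to global coordinates.

    per_tile_locs : list of ((y0, x0), locs), where locs is an (n, 2) array
                    of (y, x) positions local to that tile
    margin        : width of the excluded border; choosing it no larger than
                    half the overlap keeps the valid areas covering the FOV
    """
    out = []
    for (y0, x0), locs in per_tile_locs:
        keep = ((locs[:, 0] >= margin) & (locs[:, 0] < tile - margin) &
                (locs[:, 1] >= margin) & (locs[:, 1] < tile - margin))
        out.append(locs[keep] + np.array([y0, x0], dtype=float))
    return np.vstack(out) if out else np.empty((0, 2))
```

Because each localization's global position is recovered by adding the tile offset, emitters discarded at one tile's edge reappear in the valid center of a neighboring tile, which is why no localizations are lost.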

Network training and inference on simulated data

Using PPG3D, we generate field-dependent training data for FOV-dependent DeepSTORM3D, dividing the whole FOV into 121 patches (each 161 × 161 pixels) in an 11-by-11 grid with some overlap (fig. S10A). The bottom-left 6-by-6 patches, around a quarter of the whole calibrated FOV, are used to demonstrate our concept in simulation. For comparison, we also train a standard DeepSTORM3D with the same training images, which, however, are generated by the shift-invariant mode of PPG3D. First, we simulate 240 random emitters and test localization accuracy in terms of RMSE. Note that in the prediction stage, both networks apply a threshold on prediction confidence. Because the same threshold yields a different number of localizations from each network in the same image, we set a series of thresholds and plot localization RMSE versus the number of predictions. Furthermore, for each threshold, we repeat the inference 10 times with different random emitters. The lateral and axial RMSE between the predicted localizations and the ground truth are calculated for assessment (Fig. 4, A and B). On average, FOV-dependent DeepSTORM3D improves the lateral and axial localization accuracy by 36% and 30%, respectively. The accuracy and precision as a function of position in the FOV are analyzed in fig. S11.

Fig. 4. Comparison of FOV-dependent DeepSTORM3D (subscript 1) to DeepSTORM3D (subscript 2) in simulation.


(A) Lateral RMSE of predictions with respect to the number of emitters detected by both networks. (B) Same comparison regarding axial RMSE. (C) Maximum projection of the reconstructed 3D dotted tubular “C” structure by FOV-dependent DeepSTORM3D. Scale bar, 10 μm. The cross sections along the lines D, F, G, and H are in (D), (F), (G), and (H), and a zoom-in view of E at z = 6.175 μm is shown in (E). Scale bar, 0.5 μm. The intensity profile along I in F, J in G, and K in H are shown in (I), (J), and (K) with the same color correspondence. The blue line on the top of (I) shows the true diameter of the simulated tube—300 nm.

Second, we create a 3D structure with a 3 × 3 pattern of “C” shapes. Each unit is composed of 16 dotted tubules, each with a diameter of 300 nm and a length of 3 μm, and consists of 3200 emitters (fig. S12). Simulating emitter blinking in an SMLM experiment, we predict the structure using 4000 frames of simulated images. When processing those images through both networks, we ensure approximately the same number of localizations through filters in ThunderSTORM (42) (text S7). As the reconstructions (Fig. 4, C to K, and movie S7) show, FOV-dependent DeepSTORM3D (subscript 1 in Fig. 4) outperforms standard DeepSTORM3D (subscript 2) and yields super-resolution reconstruction with higher fidelity. Specifically, Fig. 4D2 shows noisier and deformed reconstructed tubules, which are improved and corrected in Fig. 4D1. Similarly, Fig. 4E1 presents a clearer hollow tubule structure than Fig. 4E2. More detailed quantitative comparisons of the hollow structure (Fig. 4, D to H) and intensity profiles (Fig. 4, I to K) demonstrate that FOV-dependent DeepSTORM3D helps correct misestimations caused by field dependence. An additional simulation of a sinusoidal surface is shown in fig. S13.

Experimental demonstration: Super-resolution imaging of mitochondria and microtubules

We perform 3D STORM using the tetrapod PSF (imaging system #2 in table S1). The experiment consists of imaging a fluorescently labeled sample as the fluorophores blink in time (movie S1), localizing them, and reconstructing a super-resolved image using either FOV-dependent DeepSTORM3D or DeepSTORM3D. See experimental details in Materials and Methods. We first image mitochondria over a calibration FOV of 178-by-178 μm; in total, 52,261 frames were processed (see one example frame with a rendered prediction in Fig. 3B and movie S1), and ~12.7 million valid localizations were found (see the reconstruction demonstration in movies S2 to S4). Comparing the localization results (Fig. 5) from FOV-dependent DeepSTORM3D (subscript 1) and DeepSTORM3D (subscript 2), the improvement is noticeable. In Fig. 5, our results outperform their counterparts with regard to the localization density (color brightness) of the reconstructed microstructures and the surrounding noise (spotty patterns). Specifically, Fig. 5D1 shows a sharper pink and orange structure than Fig. 5D2. Also, Fig. 5 (E1 and F1) shows less speckled and noisy structures in comparison to Fig. 5 (E2 and F2). Furthermore, in the cross sections in Fig. 5 (G to J), as well as the more quantitative intensity profiles in Fig. 5 (K to N), our method yields brighter, less noisy, and more recognizable cavity structures. We also imaged microtubules (fig. S14) in this setup.

Fig. 5. Large-FOV 3D super-resolution imaging of mitochondria with Tetrapod PSF.


(A) Super-resolved 3D mitochondria image (maximum projection) in a 127-by-160 μm2 FOV through FOV-dependent DeepSTORM3D (subscript 1). The zoom-in views of square regions B and C are shown in (B) and (C). The comparison, with the reconstruction from DeepSTORM3D (subscript 2), regarding regions D and E in (B), and F in (C) are shown in (D), (E), and (F), respectively. The white arrows indicate some noteworthy details. The cross sections along lines G and I in (B), and J and H in (C), are shown in (G), (H), (I), and (J). The intensity profiles along lines in those cross sections are shown in (K) to (N). The blue and orange lines correspond to FOV-dependent DeepSTORM3D and DeepSTORM3D, respectively. Scale bars in (A) to (C), 20 μm. The rest of the scale bars, 2 μm.

Next, we used a cylindrical lens with a focal length of 500 mm to generate an astigmatic PSF (fig. S10B) for 3D SMLM. We calibrated 424 field positions (fig. S10B) and fed these PSF data to PPG3D to build the spatially dependent PSF model for FOV-dependent DeepSTORM3D. Again, we imaged both mitochondria and microtubules; the reconstruction results (movies S5 and S6), as well as the comparison with regular DeepSTORM3D (subscript 2), are shown in Fig. 6. Compared with the cross sections (Fig. 6, B to D, G, and H) with subscript 2, the reconstructed structures from our method (subscript 1) appear more complete, in the sense that the localization density is higher. For example, the reconstruction is less “patchy” around the mitochondrial cavity and along the microtubule line in the FOV-dependent case compared to the FOV-independent case.

Fig. 6. Large-FOV 3D super-resolution imaging of mitochondria and microtubules with an astigmatic PSF.


(A) Maximum projection of the reconstructed mitochondria with the xy cross section (B) at z = 0.36 μm in the box B, and xz cross sections (C) and (D) along lines C and D, respectively. Subscripts 1 and 2 represent FOV-dependent DeepSTORM3D and DeepSTORM3D. (E) Maximum projection of the reconstructed microtubules with a zoom-in view (F) showing the region between two nuclei, and xy cross sections (G) and (H) in boxes G and H. Scale bars in (A) and (E), 20 μm. Scale bars in (B), (D), (G) and (H), 2 μm. Scale bar in (C), 1 μm. Scale bar in (F), 10 μm.

DISCUSSION

Here, we demonstrate a field-dependent dense localization method for 3D SMLM. The method consists of two main components. First, we introduce PPG3D, a spatially variant PSF generator that employs PCA-based interpolation. This approach involves transforming PSFs into a lower-dimensional space, conducting interpolation for a target spatial position, and projecting back to the image space to obtain the desired PSF. Comparative analysis with other PSF generators in LM showcases substantial enhancement in accuracy and operation speed. Next, we integrate PPG3D with DeepSTORM3D to address field dependence in large-FOV 3D LM, resulting in FOV-dependent-DeepSTORM3D. This modified localization estimator demonstrates an improvement in prediction accuracy by more than 30%. In STORM experiments, FOV-dependent DeepSTORM3D enables 3D super-resolution imaging of mitochondria and microtubules within a large FOV, exhibiting noticeably higher fidelity compared to standard DeepSTORM3D.

PPG3D offers a fundamental advantage in its simplicity, stemming from its interpolation nature, compared to model-based PSF generators. Reliable implementation of model-based generators often requires users to accurately retrieve physical parameters and comprehend physical models. In contrast, the PPG3D code we provide only requires calibrated PSFs and positions as input to accurately perform PSF generation, making it potentially more user-friendly. Furthermore, the dimensionality reduction achieved through PCA confers a substantial speed advantage on PPG3D, as it involves interpolating a smaller number of parameters. Compared to shift-variant ZPPR21, PPG3D not only demonstrates superior accuracy and speed but also requires fewer calibration positions.

While PSF generators based on decomposition and interpolation can be found in other fields, such as astronomy and computational volumetric imaging, PPG3D distinguishes itself from them in two key aspects. First, PPG3D stands out as a genuine 3D continuous-domain PSF generator. In contrast, shift-variant PSF generation in astronomy (38) is primarily conducted in 2D. Although volumetric imaging (39, 40) requires a 3D PSF, the voxel representation of objects dictates that the PSF is pixelated into 3D voxels. Second, PPG3D employs a concise and efficient scheme of local interpolations, allowing for high-speed operation without any concerns regarding memory limitations. In comparison, other studies (39, 40) often resort to decomposing all the calibrated PSFs, which can pose challenges regarding memory efficiency.

However, the absence of a physical model limits PPG3D when the influence of refractive index mismatch is prominent. This limitation can be addressed by combining a shift-invariant model-based PSF generator with PPG3D. FOV-dependent DeepSTORM3D enables field-dependence learning; however, cropping the whole FOV into many fixed sub-areas in advance is still naïve and lacks flexibility. A possible future direction would be improving the flexibility of our method, e.g., by randomly sampling the sub-images in the training stage and ultimately alleviating the need for positional encoding.

Finally, the combination of a field-dependent PSF generator with a deep learning–based localizer, presented here, can be used in a variety of scenarios, including where the PSF is controllably different throughout the FOV (43).

MATERIALS AND METHODS

Formulation of spatially variant PSF

The PSF is the impulse response of a system, which, in the imaging context, is the 3D electromagnetic field distribution at the image domain, in response to a point source at the object domain. For shift-invariant imaging systems, the PSF can be described, up to scaling and inversion, by

H{δ(x − α, y − β, z − γ)} = h(x′ − α, y′ − β, z′ − γ) (1)

where H is the 3D transfer function, δ is the delta function, (x, y, z) are spatial coordinates of the object side and (x′, y′, z′) are those of the image side, (α, β, γ) is the spatial position of the point source, and h is the 3D shape of the PSF. Note that we ignore the magnification effect for simplicity. If we abandon the shift-invariant assumption and consider the more general case, the PSF formation process can be given by

H{δ(x − α, y − β, z − γ)} = hαβγ(x′, y′, z′) (2)

where now PSF hαβγ is dependent on the location of the point source and the PSF shape changes as we move the point source in 3D object space. In practice, cameras with 2D sensors are typically used in imaging systems to sample the 3D space in a 2D plane. Switching to a 2D case, the impulse response becomes

H{δ(x − α, y − β, z − γ)} = hαβγ(x′, y′) (3)

The PSF most obviously changes with the axial coordinate γ, which relates to the concepts of “focus,” “defocus,” and “depth of field” (DOF). In comparison, the (α, β) dependence is much milder and is typically ignored for the sake of simple calculations. Generally, the former is called depth dependence and the latter field dependence, and systems exhibiting field dependence are shift-variant. Our goal is to build a PSF generator for LM that considers both kinds of dependence.

The basic idea of PPG3D is to transform the calibrated PSFs to a space with reduced dimensionality through PCA, specifically TSVD, and then implement interpolation in this low-dimensional domain with respect to the spatial position (α, β, γ). It is worth mentioning that, under this concept, different PCA interpolation schemes can be designed, depending on the specific goal.
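The TSVD dimensionality reduction at the heart of this idea can be sketched as follows; the per-component variance ratios mirror those shown in Fig. 2E. The function name and the retained-variance threshold are illustrative assumptions, not part of the released code.

```python
import numpy as np

def truncated_svd_basis(psf_stack, var_keep=0.99):
    """Compress a stack of calibrated PSFs with truncated SVD (PCA).

    psf_stack : (n, H, W) calibrated PSFs, e.g. one z-stack
    var_keep  : fraction of total variance the kept components must explain

    Returns the mean image (flattened), the truncated basis, and the
    per-PSF coefficients; PSF i is recovered as mean + coeffs[i] @ basis.
    """
    n = psf_stack.shape[0]
    X = psf_stack.reshape(n, -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var_ratio = S**2 / np.sum(S**2)   # explained variance per component
    k = int(np.searchsorted(np.cumsum(var_ratio), var_keep)) + 1
    coeffs = (U * S)[:, :k]           # each PSF becomes a length-k vector
    return mean, Vt[:k], coeffs
```

Interpolation with respect to (α, β, γ) is then carried out on the length-k coefficient vectors rather than on full images, which is what makes the scheme fast and memory-efficient.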

Imaging system calibration

To conduct the calibration experiment in a large FOV, two factors should be considered: point-source distribution throughout the FOV and homogeneous illumination. Here, we use both fluorescent bead samples and a nano-hole array (table S2) and rely on axial sample scanning by a microscope stage. For illumination, we built an illumination system composed of a 2-W high-power laser, a multi-mode fiber, and a vibration motor. Notably, SMLM experiments often benefit from strong illumination to enable efficient biomolecule blinking.

In the imaging system used for the STORM experiment, PSF engineering is performed using a tetrapod diffractive optical element (7). We use a fluorescent bead sample immersed in water for calibration. After several lateral scans, 334 field positions are covered within a circular FOV with a diameter of ~180 μm. At each field position, the PSF is measured in an axial range of (−3, 3) μm at an interval of 0.15 μm. Thus, in total, this calibration consists of 334-by-41 spatial positions.

As mentioned in the Discussion, we include the possibility of using physical models to address refractive index (RI) mismatch issue, i.e., the RI difference between the sample medium and the objective immersion oil. The image of a fluorescent source (e.g., sub-diffraction bead) on a glass coverslip is different from that of a fluorescent source inside a sample (fig. S9); this is why the PSF derived by straightforward interpolation of a z-stack of a bead on a glass coverslip cannot be used directly to obtain the PSF of a defocused emitter inside practically any biological medium at high accuracy.

Theoretically, the existence of the RI mismatch (13, 24) introduces two axial parameters with different contributions (text S6) to the phase at the BFP: the distance of an emitter from the coverslip, and the position of the nominal focal plane (NFP), i.e., the focal plane without RI mismatch. In calibration, we can only change the NFP position, while the real experiment has a fixed NFP and various unknown emitter positions. To transform the calibration measurements to be applicable to the RI-mismatch case, we use model-based VIPR (24). Specifically, we rely on the imaging model from VIPR to search for a proper NFP position such that the lowest emitter in the experiment is covered by the model. Then, we fix the NFP and generate PSFs for emitters at a series of distances relative to the coverslip. By doing so, we obtain a new PSF dataset with 334 field positions in a 180-μm-diameter FOV and 28 axial positions over a 4-μm range. PPG3D based on this dataset can now be used to generate shift-variant PSFs of emitters.

Sample preparation for STORM experiment

Cover glasses (22 × 22 mm, 170 μm; Deckgläser, No.1.5H) were cleaned in an ultrasonic bath with 5% Contrad 70 (Decon) at 60°C for 30 min, then washed twice with double-distilled water (incubated shaking for 10 min each time), incubated shaking in ethanol absolute for 30 min, sterilized with filtered 70% ethanol for 30 min, and dried in a biological cabinet. COS7 cells at a concentration of 60,000 cells/ml in Dulbecco’s modified Eagle’s medium with 1 g/liter d-glucose (Sartorius, 01-050-1A), supplemented with fetal bovine serum (Biological Industries, 04-007-1A), penicillin-streptomycin (Biological Industries, 03-031-1B), and glutamine (Biological Industries, 03-020-1B), were grown for 24 hours in a 6-well plate (Thermo Fisher, Nunclon Delta Surface) containing 6 ml of the cell suspension and the cleaned cover glasses, at 37°C and 5% CO2. The cells were fixed with 4% paraformaldehyde and 0.2% glutaraldehyde in phosphate-buffered saline (PBS), pH 6.2, for 60 min, washed, and incubated in 0.3 M glycine/PBS solution for 10 min. The cover glasses were transferred into a clean six-well plate and incubated in a blocking solution for 2 hours (10% goat serum, 3% bovine serum albumin, 2.2% glycine, and 0.1% Triton X-100 in PBS, filtered with a 0.45-μm Millex PVDF filter unit). The cells were then immunostained with either 1:500 diluted anti–TOMM20-AF647 antibody (Abcam, ab209606) or 1:500 diluted anti–alpha-tubulin-AF647 (Abcam, ab190573) and 1:500 diluted anti–beta-tubulin-AF647 (Abcam, ab235759) in the blocking buffer for 1.5 hours and washed five times with PBS. For super-resolution imaging, a PDMS chamber (22 × 22 × 3 mm, with a 13 × 13 mm hole cut in the middle) was attached to the cover glass containing the fixed and stained COS7 cells to create a pool for the blinking buffer.
Blinking buffer [50 mM cysteamine hydrochloride (Sigma-Aldrich, M6500), 20% sodium lactate solution (Sigma-Aldrich, L1375), and 3% OxyFluor (Sigma-Aldrich, SAE0059) in PBS, pH 8 to 8.5] was added and a cover glass was placed on top while ensuring minimal air bubbles.

STORM imaging

We used a Nikon Eclipse Ti2 inverted microscope equipped with an N-STORM unit (Nikon), a silicone-oil objective (Nikon, RS HP Plan Apo 100×/1.35 Sil WD), and a multi-band-pass dichroic (Semrock, Di03-R405-488-532-635-t3). The microscope was extended with a 4f system (f = 200 mm) containing a tetrapod phase mask in the Fourier plane and a scientific complementary metal-oxide-semiconductor (sCMOS) camera (Teledyne Photometrics, Kinetix) for image acquisition. For the astigmatism case, a second 4f system (f = 200 mm) contained a cylindrical lens with a 500-mm focal length placed in front of another camera (Teledyne Photometrics, Prime 95B). The sample was illuminated with a 640-nm laser at an estimated intensity of ~3.13 kW/cm2 at the sample plane.

Acknowledgments

We would like to express our sincere gratitude to O. Goldenberg for fruitful discussions and S. Fu for assistance with the shift-variant ZPPR21 codes.

Funding: This research was supported in part by the ISRAEL SCIENCE FOUNDATION (grant no. 450/18) and by funding from the European Union’s Horizon 2020 research and innovation program under grant agreement no. 802567-ERC-Five-Dimensional Localization Microscopy for Sub-Cellular Dynamics. Y.S. is supported by the Zuckerman Foundation and by the Donald D. Harrington fellowship.

Author contributions: Conceptualization: Y.S. and D.X. Methodology: Y.S., E.N., and D.X. Investigation: D.X., R.K.O., N.O., A.P., and O.A. Visualization: D.X. Supervision: Y.S. Writing—original draft: D.X. Writing—review and editing: Y.S., D.X., and O.A.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. PPG3D: https://github.com/dafeixiao/PPG3D and https://zenodo.org/records/10600561. FOV-dependent DeepSTORM3D: https://github.com/dafeixiao/FOV-dependent-DeepSTORM3D and https://zenodo.org/records/10600579.

Supplementary Materials

This PDF file includes:

Supplementary text

Figs. S1 to S14

Tables S1 to S5

Legends for movies S1 to S7

sciadv.adj3656_sm.pdf (2.6MB, pdf)

Other Supplementary Material for this manuscript includes the following:

Movies S1 to S7

REFERENCES AND NOTES

1. Betzig E., Patterson G. H., Sougrat R., Lindwasser O. W., Olenych S., Bonifacino J. S., Davidson M. W., Lippincott-Schwartz J., Hess H. F., Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645 (2006).
2. Rust M. J., Bates M., Zhuang X., Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–796 (2006).
3. Dupont A., Lamb D. C., Nanoscale three-dimensional single particle tracking. Nanoscale 3, 4532–4541 (2011).
4. Huang B., Wang W., Bates M., Zhuang X., Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 319, 810–813 (2008).
5. Greengard A., Schechner Y. Y., Piestun R., Depth from diffracted rotation. Opt. Lett. 31, 181–183 (2006).
6. Pavani S. R. P., Thompson M. A., Biteen J. S., Lord S. J., Liu N., Twieg R. J., Piestun R., Moerner W. E., Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function. Proc. Natl. Acad. Sci. U.S.A. 106, 2995–2999 (2009).
7. Shechtman Y., Sahl S. J., Backer A. S., Moerner W. E., Optimal point spread function design for 3D imaging. Phys. Rev. Lett. 113, 133902 (2014).
8. Shechtman Y., Weiss L. E., Backer A. S., Lee M. Y., Moerner W. E., Multicolour localization microscopy by point-spread-function engineering. Nat. Photonics 10, 590–594 (2016).
9. Shechtman Y., Weiss L. E., Backer A. S., Sahl S. J., Moerner W. E., Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions. Nano Lett. 15, 4194–4199 (2015).
10. Aristov A., Lelandais B., Rensen E., Zimmer C., ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range. Nat. Commun. 9, 2409 (2018).
11. Mortensen K. I., Churchman L. S., Spudich J. A., Flyvbjerg H., Optimized localization analysis for single-molecule tracking and super-resolution microscopy. Nat. Methods 7, 377–381 (2010).
12. Quirin S., Pavani S. R. P., Piestun R., Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions. Proc. Natl. Acad. Sci. U.S.A. 109, 675–679 (2012).
13. Nehme E., Freedman D., Gordon R., Ferdman B., Weiss L. E., Alalouf O., Naor T., Orange R., Michaeli T., Shechtman Y., DeepSTORM3D: Dense 3D localization microscopy and PSF design by deep learning. Nat. Methods 17, 734–740 (2020).
14. Speiser A., Müller L.-R., Hoess P., Matti U., Obara C. J., Legant W. R., Kreshuk A., Macke J. H., Ries J., Turaga S. C., Deep learning enables fast and dense single-molecule localization with high accuracy. Nat. Methods 18, 1082–1090 (2021).
15. Zhang P., Liu S., Chaurasia A., Ma D., Mlodzianoski M. J., Culurciello E., Huang F., Analyzing complex single-molecule emission patterns with deep learning. Nat. Methods 15, 913–916 (2018).
16. Zelger P., Kaser K., Rossboth B., Velas L., Schütz G. J., Jesacher A., Three-dimensional localization microscopy using deep learning. Opt. Express 26, 33166–33179 (2018).
17. Ouyang W., Aristov A., Lelek M., Hao X., Zimmer C., Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460–468 (2018).
18. Kim T., Moon S., Xu K., Information-rich localization microscopy through machine learning. Nat. Commun. 10, 1996 (2019).
19. Fu S., Shi W., Luo T., He Y., Zhou L., Yang J., Yang Z., Liu J., Liu X., Guo Z., Yang C., Liu C., Huang Z., Ries J., Zhang M., Xi P., Jin D., Li Y., Field-dependent deep learning enables high-throughput whole-cell 3D super-resolution imaging. Nat. Methods 20, 459–468 (2023).
20. R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, J. Yosinski, An intriguing failing of convolutional neural networks and the CoordConv solution, in Proceedings of the 32nd International Conference on Neural Information Processing Systems (Curran Associates Inc., 2018), pp. 9628–9639.
21. Petrov P. N., Moerner W. E., Addressing systematic errors in axial distance measurements in single-emitter localization microscopy. Opt. Express 28, 18616–18632 (2020).
22. Richards B., Wolf E., Electromagnetic diffraction in optical systems. II. Structure of the image field in an aplanatic system. Proc. R. Soc. A Math. Phys. Sci. 253, 358–379 (1959).
23. Gibson S. F., Lanni F., Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy. J. Opt. Soc. Am. A 9, 154–166 (1992).
24. Ferdman B., Nehme E., Weiss L. E., Orange R., Alalouf O., Shechtman Y., VIPR: Vectorial implementation of phase retrieval for fast and accurate microscopic pixel-wise pupil estimation. Opt. Express 28, 10179–10198 (2020).
25. Shechtman Y., Eldar Y. C., Cohen O., Chapman H. N., Miao J., Segev M., Phase retrieval with application to optical imaging: A contemporary overview. IEEE Signal Process. Mag. 32, 87–109 (2015).
26. Petrov P. N., Shechtman Y., Moerner W. E., Measurement-based estimation of global pupil functions in 3D localization microscopy. Opt. Express 25, 7945–7959 (2017).
27. Yan T., Richardson C. J., Zhang M., Gahlmann A., Computational correction of spatially variant optical aberrations in 3D single-molecule localization microscopy. Opt. Express 27, 12582–12599 (2019).
28. H. Kirshner, C. Vonesch, M. Unser, Can localization microscopy benefit from approximation theory?, in 2013 IEEE 10th International Symposium on Biomedical Imaging (IEEE, 2013), pp. 588–591.
29. Babcock H. P., Zhuang X., Analyzing single molecule localization microscopy data using cubic splines. Sci. Rep. 7, 552 (2017).
30. Li Y., Mund M., Hoess P., Deschamps J., Matti U., Nijmeijer B., Sabinina V. J., Ellenberg J., Schoen I., Ries J., Real-time 3D single-molecule localization using experimental point spread functions. Nat. Methods 15, 367–369 (2018).
31. Maalouf E., Colicchio B., Dieterlen A., Fluorescence microscopy three-dimensional depth variant point spread function interpolation using Zernike moments. J. Opt. Soc. Am. A 28, 1864–1870 (2011).
32. von Diezmann L., Lee M. Y., Lew M. D., Moerner W. E., Correcting field-dependent aberrations with nanoscale accuracy in three-dimensional single-molecule localization microscopy. Optica 2, 985–993 (2015).
33. Zheng G., Horstmeyer R., Yang C., Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 7, 739–745 (2013).
34. Opatovski N., Xiao D., Harari G., Shechtman Y., Monocular kilometer-scale passive ranging by point-spread function engineering. Opt. Express 30, 37925–37937 (2022).
35. Hulleman C. N., Thorsen R. Ø., Kim E., Dekker C., Stallinga S., Rieger B., Simultaneous orientation and 3D localization microscopy with a vortex point spread function. Nat. Commun. 12, 5934 (2021).
36. Copeland C. R., Geist J., McGray C. D., Aksyuk V. A., Liddle J. A., Ilic B. R., Stavis S. M., Subnanometer localization accuracy in widefield optical microscopy. Light Sci. Appl. 7, 31 (2018).
37. Copeland C. R., McGray C. D., Ilic B. R., Geist J., Stavis S. M., Accurate localization microscopy by intrinsic aberration calibration. Nat. Commun. 12, 3925 (2021).
38. Thiébaut É., Denis L., Soulez F., Mourya R., Spatially variant PSF modeling and image deblurring. Proc. SPIE 9909, 99097N (2016).
39. Yanny K., Antipa N., Liberti W., Dehaeck S., Monakhova K., Liu F. L., Shen K., Ng R., Waller L., Miniscope3D: Optimized single-shot miniature 3D fluorescence microscopy. Light Sci. Appl. 9, 171 (2020).
40. Xue Y., Yang Q., Hu G., Guo K., Tian L., Deep-learning-augmented computational miniature mesoscope. Optica 9, 1009–1021 (2022).
41. F. Yu, V. Koltun, Multi-scale context aggregation by dilated convolutions. arXiv:1511.07122 [cs.CV] (2016).
42. Ovesný M., Křížek P., Borkovec J., Švindrych Z., Hagen G. M., ThunderSTORM: A comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics 30, 2389–2390 (2014).
43. Ferdman B., Saguy A., Xiao D., Shechtman Y., Diffractive optical system design by cascaded propagation. Opt. Express 30, 27509–27530 (2022).

