Optics Express. 2020 Sep 15;28(20):29044–29053. doi: 10.1364/OE.401933

Practical sensorless aberration estimation for 3D microscopy with deep learning

Debayan Saha 1,2, Uwe Schmidt 1,2, Qinrong Zhang 3, Aurelien Barbotin 4, Qi Hu 4, Na Ji 3, Martin J Booth 4,6, Martin Weigert 1,2,5,7, Eugene W Myers 1,2,8
PMCID: PMC7679184  PMID: 33114810

Abstract

Estimation of optical aberrations from volumetric intensity images is a key step in sensorless adaptive optics for 3D microscopy. Recent approaches based on deep learning promise accurate results at fast processing speeds. However, collecting ground truth microscopy data for training the network is typically very difficult or even impossible, thereby limiting this approach in practice. Here, we demonstrate that neural networks trained only on simulated data yield accurate predictions for real experimental images. We validate our approach on simulated and experimental datasets acquired with two different microscopy modalities and also compare the results to non-learned methods. Additionally, we study the predictability of individual aberrations with respect to their data requirements and find that the symmetry of the wavefront plays a crucial role. Finally, we make our implementation freely available as open source software in Python.

1. Introduction

Image quality in volumetric microscopy of biological samples is often severely limited by optical aberrations due to refractive index inhomogeneities inside the specimen [1,2]. Adaptive optics (AO) is widely used to correct for these distortions via optical elements like deformable mirrors or spatial light modulators [3,4]. Successful implementation of AO requires aberration measurements at multiple locations within the imaging volume [5]. This can be achieved by creating point sources such as embedded fluorescent beads [6] or optically induced guide stars [7], and then sensing the wavefront either directly via dedicated hardware (e.g. Shack-Hartmann wavefront sensors [8,9]) or indirectly from the intensity image of the point source (PSF) alone [10,11]. Due to its special hardware requirements and its reliance on a point-scanning configuration, direct wavefront sensing can be cumbersome to implement and too slow for volumetric imaging of living samples [12]. In contrast, indirect wavefront sensing, or phase retrieval, offers the possibility to infer the aberration at multiple locations across the entire volume simultaneously, without additional optical hardware [13,14]. Establishing a fast and accurate phase retrieval method from intensity images of point sources is therefore an important step toward making AO more accessible for live imaging of large biological samples.

Classical approaches to phase retrieval include alternating projection methods such as Gerchberg-Saxton (GS) [11,15] and parameterized PSF fitting methods such as ZOLA [16] or VIPR [17]. Projection methods are typically fast but can perform poorly, especially for noisy images; PSF fitting methods can achieve excellent results yet are relatively slow. In recent years, deep learning-based approaches using convolutional neural networks (CNNs) have proven powerful and computationally efficient for image-based classification and regression tasks on microscopy images [18,19]. Recently, several studies demonstrated that deep learning-based phase retrieval can produce accurate results at fast processing speeds [20–25]; however, they fall short regarding their practical applicability. Some of these approaches [22–24] used purely simulated synthetic data, where generalizability to real microscopy images is unclear. Others focused on specific microscopy acquisition modes (such as using biplanar PSFs [20]) or on microscopy setups that allow collecting large sets of experimental ground truth data for training and prediction [21,25], thus limiting this approach in practice. Moreover, most studies lack comparison against strong classical phase retrieval methods that are used in practice. As a result, the practical applicability of these approaches in experimental microscopy settings remains unclear.

In this paper we demonstrate for the first time that CNNs trained on appropriately generated synthetic data can be successfully applied to real images acquired with different microscopy modalities, thereby avoiding the difficult or even impossible collection of experimental training data. Specifically, we generate synthetic 3D bead images with random aberrations via a realistic image formation model that matches the microscope setup, and we use a simple CNN architecture (which we call PHASENET) to directly predict these aberrations from the given volumetric images. We demonstrate the efficacy of our approach on two distinct microscopy modalities: i) a point-scanning microscope where single-mode aberrations were introduced in the illumination path, and ii) a widefield microscope where random mixed-mode aberrations were introduced in the detection path. In contrast to other works [20,22], we also quantitatively compare the speed and accuracy of PHASENET with the two popular state-of-the-art methods GS and ZOLA and find that PHASENET yields competitive results while being orders of magnitude faster. Finally, we demonstrate that the number of focal planes required for accurate prediction with PHASENET is related to different symmetry groups of the Zernike modes.

2. Method

Let $h(x,y,z)$ be the acquired image of a bead (point spread function, PSF) and let $\varphi(k_x,k_y)$ be the wavefront aberration, i.e. the phase deviation from an ideal wavefront defined on the back pupil with coordinates $k_x,k_y$. The wavefront aberration $\varphi$ is then decomposed as a sum of Zernike polynomials/modes

$$\varphi(k_x,k_y)=\sum_i a_i Z_i(k_x,k_y) \tag{1}$$

with $Z_i(k_x,k_y)$ being the $i$-th (Noll indexed) Zernike mode and $a_i$ the corresponding amplitude [26,27]. The problem of phase retrieval is then to infer these amplitudes $a_i$ from $h(x,y,z)$. Our approach (PHASENET) uses a CNN model that takes a 3D image as input and directly outputs the amplitudes $a_i$. Importantly, the model is trained on synthetically created data first and only then applied to real microscopy images (cf. Fig. 1). That way, we avoid the acquisition of experimental training images with precisely known aberrations, which often is difficult or outright impossible (e.g. for sensorless setups).
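To make the decomposition in Eq. (1) concrete, the following minimal Python sketch reconstructs a wavefront from a dictionary of Noll-indexed amplitudes. Only two modes are written out explicitly for brevity; the helper names (pupil_grid, zernike, wavefront) and the mode subset are illustrative assumptions, not our released implementation (cf. Section 2.2).

```python
# Minimal sketch of Eq. (1): a wavefront as a weighted sum of Zernike modes.
# Only two modes are implemented explicitly; this is illustrative, not the
# full Noll-indexed set used by PHASENET.
import numpy as np

def pupil_grid(n=64):
    """Normalized pupil coordinates (rho, theta), with rho <= 1 inside the pupil."""
    c = np.linspace(-1, 1, n)
    kx, ky = np.meshgrid(c, c)
    return np.hypot(kx, ky), np.arctan2(ky, kx)

def zernike(noll, rho, theta):
    """Two example Noll-indexed Zernike modes (normalization as in [27])."""
    modes = {
        5: np.sqrt(6) * rho**2 * np.sin(2 * theta),              # oblique astigmatism
        7: np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta),  # vertical coma
    }
    return modes[noll]

def wavefront(amplitudes, n=64):
    """phi(kx, ky) = sum_i a_i * Z_i, masked to the unit pupil disk."""
    rho, theta = pupil_grid(n)
    phi = sum(a * zernike(i, rho, theta) for i, a in amplitudes.items())
    return np.where(rho <= 1, phi, 0.0)

phi = wavefront({5: 0.05, 7: -0.03})  # amplitudes a_5, a_7 in um
```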

Fig. 1.

Overview of our approach: We train a CNN (PHASENET) with synthetic PSFs $h_{\text{synth}}$ ($n_z$ axial planes) generated from randomly sampled amplitudes of Zernike modes $a_i$. The trained network is then used to predict the amplitudes $\tilde{a}_i$ from experimental bead images $h_{\text{real}}$. The predicted amplitudes $\tilde{a}_i$ are then used to reconstruct the wavefront.

2.1. Synthetic training data

To generate training data for a specific microscope setup, we synthetically create pairs $(a_i^n, h_{\text{synth}}^n)_{n \le N}$ of randomly sampled amplitudes $a_i^n$ and corresponding 3D PSFs $h_{\text{synth}}^n$. We use only the first 11 non-trivial Zernike modes $a_i^n=(a_5^n,\ldots,a_{15}^n)$, excluding piston, tip, tilt and defocus, and generate randomly aberrated PSFs by uniformly sampling $a_i^n \in [-0.075\,\mu\text{m}, 0.075\,\mu\text{m}]$, corresponding to the experimentally expected amplitude range. Given a wavefront $\varphi^n(k_x,k_y)=\sum_i a_i^n Z_i$, we compute the corresponding intensity image as

$$h_{\text{synth}}^n(x,y,z) = \left|\,\mathcal{F}\!\left[P(k_x,k_y)\, e^{2\pi i \varphi^n(k_x,k_y)/\lambda}\, e^{2\pi i z \sqrt{n_0^2/\lambda^2 - k_x^2 - k_y^2}}\right]\right|^2 \tag{2}$$

where $\mathcal{F}[\cdot]$ is the 2D Fourier transform with respect to the pupil coordinates $k_x$ and $k_y$, $\lambda$ is the wavelength, $n_0$ is the refractive index of the immersion medium, $\varphi^n(k_x,k_y)=\sum_{i=5}^{15} a_i^n Z_i(k_x,k_y)$ is the wavefront aberration, and $P(k_x,k_y)$ is the amplitude of the pupil function [28]. Since we do not consider amplitude attenuation, we simply set $P(k_x,k_y)=\mathbb{1}_{k_x^2+k_y^2<(\text{NA}/\lambda)^2}$ with NA being the numerical aperture of the objective. To accommodate a finite bead size, we then convolve $h_{\text{synth}}^n$ with a sphere of appropriate diameter (depending on the experiment) and add realistic Gaussian and Poisson noise.
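As an illustration, the scalar model of Eq. (2) can be sketched in a few lines of numpy. The default parameters, grid sizes, and the simple noise step below are assumptions for this example rather than the exact training settings, and the convolution with a finite bead is omitted.

```python
# Sketch of the scalar image formation model of Eq. (2). Default parameters,
# grid sizes, and the noise step are illustrative assumptions; the finite
# bead size (convolution with a sphere) is omitted.
import numpy as np

def synthetic_psf(phi, wavelength=0.488, na=1.1, n0=1.33,
                  nxy=32, nz=32, dxy=0.086, dz=0.1, noise=True):
    """phi: wavefront aberration (um) sampled on the fftshifted nxy x nxy pupil grid."""
    k = np.fft.fftshift(np.fft.fftfreq(nxy, d=dxy))        # pupil coordinates (1/um)
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    pupil = (k2 < (na / wavelength) ** 2).astype(float)    # P(kx,ky): 1 inside the NA disk
    kz = np.sqrt(np.maximum(n0**2 / wavelength**2 - k2, 0.0))
    zs = (np.arange(nz) - nz // 2) * dz                    # axial plane positions (um)
    psf = np.empty((nz, nxy, nxy))
    for iz, z in enumerate(zs):
        field = pupil * np.exp(2j * np.pi * (phi / wavelength + z * kz))
        psf[iz] = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
    psf /= psf.max()
    if noise:  # simple Poisson + Gaussian noise as a stand-in for the realistic model
        rng = np.random.default_rng()
        psf = rng.poisson(psf * 200) / 200 + rng.normal(0, 0.01, psf.shape)
    return psf
```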

2.2. PHASENET

The CNN architecture (PHASENET) is shown in Fig. 1 and consists of five stacked blocks, each comprising two 3×3×3 convolutional layers (with stride 1 and the number of channels doubling every block, starting with 8) and one max-pooling layer (only along the lateral dimensions), followed by two dense layers (64 channels) and a final dense layer with the same number of neurons as the number of Zernike amplitudes to be predicted (11 in our case). We use tanh as activation function for all layers except the last, where we use a linear activation. This results in a rather compact CNN model with a total of 0.9 million parameters, which we found to perform equally well for our task as more complex architectures (e.g. ResNet [22], cf. Fig. S9 in Supplement 1). The 3D input size of PHASENET (e.g. 32×32×32) is fixed for each experimental setting. We simulate 3D PSFs $h_{\text{synth}}^n$ and the corresponding amplitudes $a_i^n$, which form the input and output of the network, respectively (cf. Fig. 1). To prevent overfitting, we use a data generator to continuously create random batches of training data pairs during the training process. We minimize the mean squared error (MSE) between predicted and ground truth (GT) amplitudes and train each model for 50,000 steps with batch size 2 on a GPU (NVIDIA Titan Xp) using the Adam optimizer [29] with learning rate $1\times10^{-4}$, for a total training time of 24 h. Our synthetic training data generation pipeline as well as the PHASENET implementation based on Keras [30] can be found at https://github.com/mpicbg-csbd/phasenet.
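For orientation, a minimal Keras sketch of this architecture is shown below. Layer counts, channel widths, and activations follow the description above, while details such as padding and the (z, y, x, channel) axis ordering are assumptions made here; the reference implementation is available in the repository linked above.

```python
# Minimal Keras sketch of the described PHASENET architecture. Padding and
# axis ordering are assumptions; see the repository for the reference code.
from tensorflow.keras import Input, Model, layers

def phasenet(input_shape=(32, 32, 32, 1), n_modes=11):
    inp = x = Input(input_shape)                       # (z, y, x, channel)
    for block in range(5):
        for _ in range(2):                             # two 3x3x3 convs per block
            x = layers.Conv3D(8 * 2**block, 3, padding="same", activation="tanh")(x)
        x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)  # pool lateral dims only
    x = layers.Flatten()(x)
    for _ in range(2):                                 # two dense layers, 64 channels
        x = layers.Dense(64, activation="tanh")(x)
    out = layers.Dense(n_modes, activation="linear")(x)  # predicted Zernike amplitudes
    return Model(inp, out)

model = phasenet()
model.compile(optimizer="adam", loss="mse")            # MSE on amplitudes, as above
```

With these assumptions the sketch has on the order of the 0.9 million parameters stated above; exact counts depend on details such as padding.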

2.3. Experimental data

We use two different microscope setups (Point Scanning and Widefield) to demonstrate the applicability of our technique to real microscopy data.

2.3.1. Point scanning

This is a point-scanning microscope designed for STED microscopy, equipped with a 1.4 NA oil immersion objective ($n_0=1.518$) and a $\lambda=755$ nm illumination laser (cf. Fig. S1(a) in Supplement 1; described in [31]). For these experiments, the system was operated without the STED function activated, in effect as a point-scanning confocal microscope with open pinhole. Single Zernike mode aberrations from $Z_5$ (oblique astigmatism) to $Z_{15}$ (oblique quadrafoil) within an amplitude range of $\pm 0.11\,\mu$m were introduced in the illumination path via a spatial light modulator (SLM). The backscattering signal of 80 nm gold beads was then measured using a photomultiplier tube while the stage was shifted axially and laterally, resulting in $n=198$ aberrated 3D bead images of size 32×32×32 with an isotropic voxel size of 30 nm. We generated synthetic training data using the given microscope parameters and random amplitudes $(a_5,\ldots,a_{15})$ in the range of $\pm 0.075\,\mu$m (cf. Section 2.1). We then trained a PHASENET model as explained in Section 2.2.

2.3.2. Widefield

This is a custom-built epifluorescence microscope with a 1.1 NA water immersion objective and a $\lambda=488$ nm illumination laser (cf. Fig. S1(b) in Supplement 1). Mixed Zernike mode aberrations comprising $Z_5$–$Z_{10}$ (lower order) or $Z_5$–$Z_{15}$ (higher order) were introduced in the detection path via a deformable mirror (DM). We used an amplitude range of $\pm 0.075\,\mu$m for each mode. Images of 200 nm fluorescent beads were recorded at different focal positions, resulting in $n=100$ aberrated 3D bead images of size 50×50×50 with a voxel size of 86 nm laterally and 100 nm axially. As before, we generated similar synthetic training data using the respective microscope parameters and trained a PHASENET model.

2.4. Evaluation and comparison with classical methods

We compare PHASENET against two classical iterative methods: GS (Gerchberg-Saxton, code from [11]) and ZOLA [16]. GS is an alternating projection method that directly estimates the wavefront aberration $\varphi$, whereas ZOLA fits a realistic PSF model to the given image and returns the present Zernike amplitudes (Supp. Notes A). For both GS and ZOLA we used 30 iterations per image, with ZOLA additionally leveraging GPU acceleration (NVIDIA Titan Xp). For every method we quantify the prediction error by first reconstructing the wavefront from the predicted Zernike amplitudes (for PHASENET and ZOLA) and then computing the root mean squared error (RMSE, in µm) between the predicted and the ground truth wavefront.
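As an illustration, this wavefront-space error can be computed as in the following sketch, which reuses the pupil_grid() and wavefront() helpers from the sketch in Section 2; the pupil masking details are an assumption.

```python
# Sketch of the evaluation metric: RMSE between two wavefronts reconstructed
# from Zernike amplitudes, evaluated over the pupil disk. Reuses pupil_grid()
# and wavefront() from the earlier sketch; masking details are assumptions.
import numpy as np

def wavefront_rmse(a_pred, a_true, n=64):
    """RMSE (um) between wavefronts given as {noll_index: amplitude} dicts."""
    rho, _ = pupil_grid(n)
    mask = rho <= 1                                  # evaluate inside the pupil only
    diff = wavefront(a_pred, n) - wavefront(a_true, n)
    return np.sqrt(np.mean(diff[mask] ** 2))

err = wavefront_rmse({5: 0.048, 7: -0.031}, {5: 0.05, 7: -0.03})
```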

3. Results

3.1. Point scanning

We first investigated the performance of PHASENET on the data from the Point Scanning microscope with experimentally introduced single-mode aberrations (cf. Fig. 2). This gives us the opportunity to assess the performance of all methods for each Zernike mode and amplitude in isolation. Here, the respective PHASENET model trained on synthetic PSFs achieved good wavefront reconstruction, with a median RMSE between predicted and ground truth wavefronts of 0.025 µm (compared to an input wavefront RMSE of 0.15 µm), thus validating our approach (cf. Fig. S2 in Supplement 1). We then applied the model to the experimental images, yielding amplitude predictions $(a_5,\ldots,a_{15})$ for each 3D input. In Fig. 2(a) we show the results for $Z_5$ (oblique astigmatism). As can be seen, the predicted amplitude $a_5$ exhibits good agreement with the experimental ground truth, even outside the amplitude range used for training (indicated by the gray arrow). Importantly, the predicted amplitudes for the non-introduced modes $(a_6,\ldots,a_{15})$ were substantially smaller, indicating only minor cross-prediction between modes (cf. inset in Fig. 2(a)). The same can be observed for all other modes $Z_6$–$Z_{15}$ (cf. Fig. S3, and Fig. S4 in Supplement 1 for reconstructed wavefronts).

Fig. 2.

Measurement of single Zernike mode aberrations for Point Scanning data: a) PHASENET predictions on images with experimentally introduced oblique astigmatism $Z_5$ (see Fig. S2 in Supplement 1 for modes $Z_6$–$Z_{15}$). Shown are ground truth vs. the predicted amplitude $a_5$ (black dots), perfect prediction (solid black line), and the upper/lower bounds of amplitudes used during training (gray arrow). The inset shows the distribution of predicted non-introduced modes $(a_6,\ldots,a_{15})$. Scalebar 500 nm. b) RMSE for PHASENET and compared methods (GS and ZOLA) on all images. Boxes show the interquartile range (IQR), lines signify the median, and whiskers extend to 1.5 IQR.

We next quantitatively compared the results of PHASENET with predictions obtained with GS and ZOLA. Here, PHASENET achieves a median RMSE between predicted and ground truth wavefronts of 0.028 µm across all acquired images (n = 198), which is comparable to the prediction error on synthetic PSFs. At the same time, GS (0.039 µm) and ZOLA (0.031 µm) performed slightly worse (cf. Fig. 2(b) and Fig. S8 in Supplement 1). This demonstrates that a PHASENET model trained only on synthetic images can indeed generalize to experimental data and achieve better performance than classical methods. Interestingly, although this dataset was acquired with a high numerical aperture objective, PHASENET achieves high accuracy even though the scalar PSF model of Eq. (2) neglects vectorial effects in the PSF simulation [17]. Crucially, predictions with PHASENET were obtained orders of magnitude faster than with both GS and ZOLA (cf. Table 1). Whereas PHASENET took only 4 ms to process a single image, GS required 0.12 s and ZOLA 17.1 s. The speed advantage of PHASENET is even more pronounced when predicting batches of several images simultaneously (cf. Table 1).

Table 1. Runtime of all methods for aberration estimation from a single (n = 1) and multiple (n = 50) PSFs of size 32×32×32.

Method      single (n = 1)    batched (n = 50)
GS          0.120 s           6.2 s
ZOLA        17.1 s            838 s
PHASENET    0.004 s           0.033 s
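To give a flavor of batched inference, the sketch below times the phasenet() model from the sketch in Section 2.2 on a batch of random inputs. The absolute numbers depend entirely on hardware and software stack and are not the benchmarks reported in Table 1.

```python
# Rough timing sketch for batched prediction; reuses the phasenet() sketch
# from Section 2.2. Numbers are hardware-dependent and purely illustrative.
import time
import numpy as np

model = phasenet()                          # untrained model, same cost per forward pass
batch = np.random.rand(50, 32, 32, 32, 1).astype("float32")
model.predict(batch, batch_size=50)         # warm-up run (graph construction)
t0 = time.perf_counter()
amps = model.predict(batch, batch_size=50)  # amplitudes for all 50 PSFs at once
print(f"{time.perf_counter() - t0:.3f} s for {len(batch)} PSFs")
```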

3.2. Widefield

We next explored the applicability of our approach to the widefield microscope modality, where mixed-mode aberrations were randomly introduced. The PHASENET model trained on appropriate synthetic data achieved a median RMSE of 0.022 µm (compared to an input wavefront RMSE of 0.14 µm), indicating again good wavefront reconstruction (Fig. S5 in Supplement 1). We then applied the trained model to the experimental bead images. In Fig. 3(a) we show results for PHASENET, GS, and ZOLA for images with introduced modes $Z_5$–$Z_{10}$ (lower order). The reconstructed wavefronts for both PHASENET and ZOLA exhibit qualitatively good agreement with the ground truth, whereas GS noticeably underperforms (cf. Fig. S6 in Supplement 1). Similarly, the calculated RMSE across all images (n = 50) for GS (0.124 µm) is substantially larger than for PHASENET (0.025 µm) and ZOLA (0.012 µm). The same can be observed when predicting images with higher order modes $Z_5$–$Z_{15}$ (Fig. 3(b)). As expected, RMSE values increased slightly compared to the lower order modes for all methods, with 0.148 µm for GS, 0.035 µm for PHASENET, and 0.019 µm for ZOLA (more examples can be found in Fig. S7 in Supplement 1). Although ZOLA yields a slightly better RMSE than PHASENET on this dataset, PHASENET again vastly outperforms ZOLA and GS in terms of prediction time, being orders of magnitude faster (cf. Table S1 in Supplement 1).

Fig. 3.

Results for Widefield data with mixed-mode aberrations: a) Predictions for lower order modes ($Z_5$–$Z_{10}$): We show the ground truth (GT) wavefront, lateral (XY) and axial (XZ) midplanes of the experimental 3D image, the reconstructed wavefront and its difference to the GT for all methods (Gerchberg-Saxton/GS [11], ZOLA, PHASENET), and the reconstructed image from the PHASENET prediction. We further depict the RMSE for all n = 50 experimental PSFs. Boxes show the interquartile range (IQR), lines signify the median, and whiskers extend to 1.5 IQR. b) Same as a) but including higher order modes $Z_5$–$Z_{15}$. Scalebar: 500 nm.

3.3. Number of input planes

In both experiments so far, the 3D input of PHASENET consisted of many defocus planes ($n_z=32$ for Point Scanning and $n_z=50$ for Widefield). We set out to determine whether accurate aberration prediction is still possible with substantially fewer planes. We therefore trained several PHASENET models with varying $n_z$ and applied them to experimental images (cf. Supp. Notes B). In Figs. 4(a) and (b) we show predictions with $n_z \in \{1,2,32\}$ for single-mode aberrations $Z_5$ (oblique astigmatism) and $Z_7$ (vertical coma). Interestingly, we find that in the case of $Z_5$ at least two planes ($n_z \ge 2$) are needed for meaningful predictions, whereas in the case of $Z_7$ a single plane ($n_z=1$) already yields satisfactory results. This can be explained by observing that for pure $Z_5$ aberrations (i.e. $a_{i\neq 5}=0$), flipping the sign of the aberration amplitude ($a_5 \to -a_5$) leads to a 3D PSF that is mirrored along the optical axis. Predicting the amplitude $a_5$ from a single image plane is therefore inherently ambiguous. To further examine this, we grouped the Zernike modes into the classes even and odd depending on the symmetry of the wavefront (even: $Z_5, Z_6, Z_{11}, \ldots$; odd: $Z_7, Z_8, Z_9, \ldots$) and calculated the prediction error for each class separately. As expected, the RMSE decreases with increasing $n_z$ (Fig. 4(c)) for both classes. However, for even Zernike modes the prediction error is significantly higher than for odd modes, especially when using only few planes, in line with our earlier observation.
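This sign ambiguity can also be checked numerically with the synthetic_psf() sketch from Section 2.1 (noise disabled, and odd grid sizes so that the sampling is symmetric about the focal plane): for a pure even mode such as $Z_5$, negating the amplitude yields a PSF stack that equals the original mirrored along z, up to numerical precision.

```python
# Numerical check of the symmetry argument: for a pure even mode (here Z_5),
# a_5 -> -a_5 mirrors the noise-free PSF along the optical axis. Reuses the
# synthetic_psf() sketch; odd grid sizes make the sampling symmetric.
import numpy as np

nxy = nz = 33
k = np.fft.fftshift(np.fft.fftfreq(nxy, d=0.086))      # same grid as synthetic_psf()
kx, ky = np.meshgrid(k, k)
rho = np.hypot(kx, ky) / (1.1 / 0.488)                 # radius normalized to the pupil edge
theta = np.arctan2(ky, kx)
phi = 0.05 * np.sqrt(6) * rho**2 * np.sin(2 * theta)   # pure Z_5, a_5 = 0.05 um

h_pos = synthetic_psf(phi, nxy=nxy, nz=nz, noise=False)
h_neg = synthetic_psf(-phi, nxy=nxy, nz=nz, noise=False)
print(np.allclose(h_neg, h_pos[::-1]))                 # True: axially mirrored PSF
```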

Fig. 4.

Results for varying number of input planes $n_z$: a) Ground truth vs. the predicted amplitude $a_5$ (oblique astigmatism) for the single-mode Point Scanning data, using PHASENET models with $n_z = 1, 2, 32$. b) The same for $a_7$ (vertical coma). c) Prediction error (RMSE) on Widefield data (50 images) for PHASENET models trained with different $n_z$. We show the RMSE for odd (orange) and even (blue) Zernike modes separately. Boxes depict the interquartile range (IQR), lines signify the median, and whiskers extend to 1.5 IQR.

4. Conclusion

We demonstrated for the first time that deep learning-based phase retrieval with our proposed PHASENET model, trained only on synthetically generated data, generalizes to experimental data from different microscopy setups and allows for accurate and efficient aberration estimation from experimental 3D bead images. On datasets from two different microscopy modalities we showed that PHASENET yields results that are better than (Point Scanning dataset) or nearly comparable to (Widefield dataset) those of classical methods, while being orders of magnitude faster. This opens up the interesting possibility of using PHASENET to perform aberration estimation from multiple beads or guide stars across an entire volumetric image in real time on the microscope during acquisition. We further investigated how prediction quality depends on the number of defocus planes $n_z$ and found that odd Zernike modes are substantially easier to predict than even modes for the same $n_z$.

Still, our approach may not be applicable to cases where the synthetic PSF model is inadequate for the microscope setup or where experimental data is vastly different from the data seen during training (a limitation that applies to most machine learning-based methods). In particular, the range of aberration amplitudes used in the synthetic generator should cover the range of experimentally expected aberrations. Moreover, for discontinuous wavefronts (such as double helix PSFs [32] or helical phase ramps [33]) the low-order Zernike mode representation is likely to be inadequate and PHASENET performance is therefore sub-optimal. Furthermore, our experimental data so far included only Zernike modes up to $Z_{15}$, leaving open the question of whether our approach would behave similarly for higher-order Zernike modes. Additionally, more advanced network architectures that explicitly leverage the physical PSF model might improve prediction accuracy. We believe that in the future our method can serve as an integral computational component of practical adaptive optics systems for microscopy of large biological samples.

Acknowledgments

We thank Robert Haase, Coleman Broaddus, Alexandr Dibrov (MPI-CBG) and Jacopo Antonello (University of Oxford) for the scientific discussions at different stages of this work. We thank Nicola Maghelli (MPI-CBG) for valuable inputs on building a microscope. We thank Siân Culley (UCL, London), Fabio Cunial (MPI-CBG) and Martin Hailstone (University of Oxford) for providing feedback. This research was supported by the German Federal Ministry of Research and Education (BMBF SYSBIO II - 031L0044) and by CA15124 (NEUBIAS). MW was supported by a generous donor represented by CARIGEST SA. MJB and QH were supported by the European Research Council (AdOMiS, no. 695140). AB was supported by EPSRC/MRC (EP/L016052/1). NJ and QZ were supported by US National Institutes of Health (U01 NS103573).

Funding

European Cooperation in Science and Technology (CA15124); European Research Council (695140, AdOMiS); Engineering and Physical Sciences Research Council (EP/L016052/1); Bundesministerium für Bildung und Forschung (031L0044, SYSBIO II); National Institutes of Health (U01 NS103573).

Disclosures

The authors declare no conflicts of interest.

See Supplement 1 for supporting content.

References

1. Schwertner M., Booth M., Wilson T., "Characterizing specimen induced aberrations for high NA adaptive optical microscopy," Opt. Express 12(26), 6540–6552 (2004). 10.1364/OPEX.12.006540
2. Kubby J. A., Adaptive Optics for Biological Imaging (CRC, 2013).
3. Booth M. J., "Adaptive optical microscopy: the ongoing quest for a perfect image," Light: Sci. Appl. 3(4), e165 (2014). 10.1038/lsa.2014.46
4. Ji N., "Adaptive optical fluorescence microscopy," Nat. Methods 14(4), 374–380 (2017). 10.1038/nmeth.4218
5. Liu T. L., Upadhyayula S., Milkie D. E., Singh V., Wang K., Swinburne I. A., Mosaliganti K. R., Collins Z. M., Hiscock T. W., Shea J., Kohrman A. Q., Medwig T. N., Dambournet D., Forster R., Cunniff B., Ruan Y., Yashiro H., Scholpp S., Meyerowitz E. M., Hockemeyer D., Drubin D. G., Martin B. L., Matus D. Q., Koyama M., Megason S. G., Kirchhausen T., Betzig E., "Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms," Science 360(6386), eaaq1392 (2018). 10.1126/science.aaq1392
6. Ji N., Sato T. R., Betzig E., "Characterization and adaptive optical correction of aberrations during in vivo imaging in the mouse cortex," Proc. Natl. Acad. Sci. 109(1), 22–27 (2012). 10.1073/pnas.1109202108
7. Wang K., Sun W., Richie C. T., Harvey B. K., Betzig E., Ji N., "Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue," Nat. Commun. 6(1), 1–6 (2015). 10.1038/ncomms8276
8. Cha J.-W., Ballesta J., So P. T., "Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy," J. Biomed. Opt. 15(4), 046022 (2010). 10.1117/1.3475954
9. Tao X., Crest J., Kotadia S., Azucena O., Chen D. C., Sullivan W., Kubby J., "Live imaging using adaptive optics with fluorescent protein guide-stars," Opt. Express 20(14), 15969–15982 (2012). 10.1364/OE.20.015969
10. Fienup J. R., "Phase retrieval algorithms: a comparison," Appl. Opt. 21(15), 2758–2769 (1982). 10.1364/AO.21.002758
11. Kner P., Winoto L., Agard D. A., Sedat J. W., "Closed loop adaptive optics for microscopy without a wavefront sensor," in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XVII, vol. 7570 (International Society for Optics and Photonics, 2010), p. 757006.
12. Booth M. J., "Adaptive optics in microscopy," Philos. Trans. R. Soc. A 365(1861), 2829–2843 (2007). 10.1098/rsta.2007.0013
13. Débarre D., Botcherby E. J., Watanabe T., Srinivas S., Booth M. J., Wilson T., "Image-based adaptive optics for two-photon microscopy," Opt. Lett. 34(16), 2495–2497 (2009). 10.1364/OL.34.002495
14. Xu F., Ma D., MacPherson K. P., Liu S., Bu Y., Wang Y., Tang Y., Bi C., Kwok T., Chubykin A. A., Yin P., Calve S., Landreth G. E., Huang F., "Three-dimensional nanoscopy of whole cells and tissues with in situ point spread function retrieval," Nat. Methods 17(5), 531–540 (2020). 10.1038/s41592-020-0816-x
15. Hanser B. M., Gustafsson M. G., Agard D. A., Sedat J. W., "Phase retrieval for high-numerical-aperture optical systems," Opt. Lett. 28(10), 801–803 (2003). 10.1364/OL.28.000801
16. Aristov A., Lelandais B., Rensen E., Zimmer C., "ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range," Nat. Commun. 9(1), 2409 (2018). 10.1038/s41467-018-04709-4
17. Ferdman B., Nehme E., Weiss L. E., Orange R., Alalouf O., Shechtman Y., "VIPR: Vectorial Implementation of Phase Retrieval for fast and accurate microscopic pixel-wise pupil estimation," Opt. Express 28(7), 10179–10198 (2020). 10.1364/OE.388248
18. Rivenson Y., Göröcs Z., Günaydin H., Zhang Y., Wang H., Ozcan A., "Deep learning microscopy," Optica 4(11), 1437–1443 (2017). 10.1364/OPTICA.4.001437
19. Weigert M., Schmidt U., Boothe T., Müller A., Dibrov A., Jain A., Wilhelm B., Schmidt D., Broaddus C., Culley S., Rocha-Martins M., Segovia-Miranda F., Norden C., Henriques R., Zerial M., Solimena M., Rink J., Tomancak P., Royer L., Jug F., Myers E. W., "Content-aware image restoration: pushing the limits of fluorescence microscopy," Nat. Methods 15(12), 1090–1097 (2018). 10.1038/s41592-018-0216-7
20. Zhang P., Liu S., Chaurasia A., Ma D., Mlodzianoski M. J., Culurciello E., Huang F., "Analyzing complex single-molecule emission patterns with deep learning," Nat. Methods 15(11), 913–916 (2018). 10.1038/s41592-018-0153-5
21. Jin Y., Zhang Y., Hu L., Huang H., Xu Q., Zhu X., Huang L., Zheng Y., Shen H.-L., Gong W., Si K., "Machine learning guided rapid focusing with sensor-less aberration corrections," Opt. Express 26(23), 30162–30171 (2018). 10.1364/OE.26.030162
22. Möckl L., Petrov P. N., Moerner W. E., "Accurate phase retrieval of complex 3D point spread functions with deep residual neural networks," Appl. Phys. Lett. 115(25), 251106 (2019). 10.1063/1.5125252
23. Paine S. W., Fienup J. R., "Smart starting guesses from machine learning for phase retrieval," in Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave, vol. 10698, MacEwen H. A., Lystrup M., Fazio G. G., Batalha N., Tong E. C., Siegler N., eds. (SPIE, 2018), p. 210. 10.1117/12.2307858
24. Cumming B. P., Gu M., "Direct determination of aberration functions in microscopy by an artificial neural network," Opt. Express 28(10), 14511–1452 (2020). 10.1364/OE.390856
25. Vishniakou I., Seelig J. D., "Wavefront correction for adaptive optics with reflected light and deep neural networks," Opt. Express 28(10), 15459–15471 (2020). 10.1364/OE.392794
26. Born M., Wolf E., Principles of Optics, 7th ed. (Cambridge University Press, 1999).
27. Noll R. J., "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66(3), 207 (1976). 10.1364/JOSA.66.000207
28. Goodman J., Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
29. Kingma D., Ba J., "Adam: A method for stochastic optimization," in Int. Conf. on Learning Representations (ICLR) (2015).
30. Chollet F., "Keras," https://keras.io (2015).
31. Barbotin A., Galiani S., Urbančič I., Eggeling C., Booth M. J., "Adaptive optics allows STED-FCS measurements in the cytoplasm of living cells," Opt. Express 27(16), 23378–23395 (2019). 10.1364/OE.27.023378
32. Pavani S. R. P., Thompson M. A., Biteen J. S., Lord S. J., Liu N., Twieg R. J., Piestun R., Moerner W., "Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function," Proc. Natl. Acad. Sci. 106(9), 2995–2999 (2009). 10.1073/pnas.0900245106
33. Willig K., Keller J., Bossi M., Hell S. W., "STED microscopy resolves nanoparticle assemblies," New J. Phys. 8(6), 106 (2006). 10.1088/1367-2630/8/6/106
