Scientific Reports. 2021 Aug 3; 11:15723. doi: 10.1038/s41598-021-95266-2

Beyond multi-view deconvolution for inherently aligned fluorescence tomography

Daniele Ancora 1, Gianluca Valentini 1,2, Antonio Pifferi 1,2, Andrea Bassi 1,2
PMCID: PMC8333050  PMID: 34344932

Abstract

In multi-view fluorescence microscopy, each angular acquisition must be aligned with care to obtain an optimal volumetric reconstruction. Here, instead, we propose a neat protocol based on auto-correlation inversion that leads directly to the formation of inherently aligned tomographies. Our method generates sharp reconstructions with the same accuracy reachable after sub-pixel alignment, but with an improved point-spread-function. The procedure can be performed simultaneously with deconvolution, further increasing the reconstruction resolution.

Subject terms: Applied optics, Optical physics, Information theory and computation, Imaging and sensing, Optical techniques, Light-sheet microscopy

Introduction

The field of tomographic imaging experienced a silent revolution during the last decade. A strong demand driven by deep learning and data mining has prompted hardware manufacturers to improve computing performance while keeping prices affordable. Nowadays, high-throughput computation is possible with graphics processing units (GPUs). GPUs allow parallel data processing with performance that was unthinkable just a few years ago, radically changing the field of signal processing. In particular, standard image-processing tasks such as Fourier transformation, convolution, and matrix operations experience a constant-rate performance increase1,2. GPUs are the ideal solution for the massive image-processing tasks required by tomographic reconstructions3. At visible wavelengths, optical projection tomography (OPT) is an example of an imaging technique applied to tomographic studies at the microscopic level4. By rotating the specimen and collecting its optical projections at multiple angles, it is possible to reconstruct the specimen via tomographic inversion. Another optical technique, light-sheet fluorescence microscopy (LSFM), offers a straightforward way to optically section the sample for the inspection of its internal structure5. Even if LSFM is a direct tomographic technique (i.e., it does not strictly require computation to generate a section of the sample), it is often desirable to observe the object from different angles to increase the reconstruction quality6. LSFM suffers from non-isotropic resolution (the axial resolution is lower than the lateral one) and, in many cases, the sample is not visible as a whole due to tissue scattering or absorption. Multi-view approaches address these problems, either relying on sample rotation7 or exploiting multiple objectives to observe the specimen from different angles8.
Before their fusion, each acquisition is registered (aligned) against a chosen reference9, to place the information captured at different angles appropriately. Usually, the registration is accomplished by locating the best overlap between the volumes, possibly including beads around the specimen to enforce the alignment fidelity10. Here we discuss a new reconstruction strategy for the formation of an inherently aligned tomographic view of a biological specimen. We exploit the property of the auto-correlation (denoted by the operator A) to avoid any alignment procedure. At the same time, we demonstrate that the reconstruction based on multi-view auto-correlation brings improved resolution due to the rejection of second-order correlations of the point-spread-function in the A-space. The work is inspired by previous results in OPT, where the auto-correlation is used to perform alignment-free reconstructions11. There, the use of A was possible because it commutes with the projection operator12. Here, instead, we calculate a tomographic auto-correlation of the sample based on multi-view light-sheet acquisitions. Fusing them leaves us with an ensemble-averaged A, created without aligning the views. It constitutes our starting point for the reconstruction: by inverting A, we form a tomographic view aligned at the sub-pixel level. Furthermore, we demonstrate that this inversion yields a reconstruction sharper than the average fusion carried out in direct space. Since it is desirable to take into account the resolution loss determined by the finite aperture of the optical system13, our protocol can further accomplish simultaneous deconvolution with a modified Bayesian A-inversion scheme. For this study, the use of powerful GPUs plays a crucial role due to the computational complexity of our protocols. Without graphics cards, the reconstruction problem presented here would remain a mere theoretical exercise.

Results

Our reconstruction strategy is grounded in the property of the auto-correlation of being centered in the shift-space. Each observation of the object is auto-correlated, and it contributes to the formation of the tomographic average A of the sample. Let us use the subscript μ to indicate the stack obtained by camera acquisitions and the superscript φi to denote its angular orientation, indexed by i. In an experimental measurement, we observe a blurred version of the object due to the point spread function (PSF) of the system h, further corrupted by the presence of noise ε. A typical acquisition is rendered in Fig. 1A, where we display a volumetric object imaged with a light-sheet microscope at the reference angle of 0°. For the moment, we use Fig. 1 just for the discussion of the reconstruction pipeline; we will present the details of the specimen and the setup afterward. We assume that the additive ε can be neglected in the case of high signal-to-noise-ratio measurements. Now, we arrange the auto-correlation in a more convenient form. By applying the operator A to a given stack (see the Methods), we have that:

$$\chi_\mu \equiv \mathcal{A}\{o_\mu\} = o_\mu \star o_\mu = (o \ast h) \star (o \ast h) \tag{1}$$
$$= \chi \ast H = o \ast K. \tag{2}$$

Here, $\chi = o \star o$ is the ideal auto-correlation of the object, $H = \mathcal{A}\{h\}$ is the PSF in auto-correlation space, and $K = o \star H$ is an effective kernel. Figure 1D shows the auto-correlation of the volume displayed in panel A. The first equality in Eq. (2) implies that the auto-correlation of the ideal object is blurred by $H$, given by the auto-correlation of the direct-space PSF $h$. The second indicates that $\chi_\mu$ can be seen as a convolution of the object with a blurring kernel that contains the object itself. We consider N evenly rotated measurements that we denote with the index φi. The rotation of each measurement back to the reference angle 0° by −φi is the only pre-processing step required. Additionally, we subtract the mean value of a dark region where the sample is not present. In Fig. 1B, we display an orthogonal acquisition that was rotated by 90° to match the angular view of Fig. 1A, and then used to compute the auto-correlation (Fig. 1E). Denoting each observation as $o_\mu^{\varphi}$ and its corresponding A as $\chi_\mu^{\varphi}$, the quantities of interest are the averages:

$$\bar{o}_\mu = \frac{1}{N}\sum_{i=0}^{N} o_\mu^{\varphi_i}, \qquad \bar{\chi}_\mu = \frac{1}{N}\sum_{i=0}^{N} \chi_\mu^{\varphi_i} \qquad \text{and} \qquad \bar{H} = \frac{1}{N}\sum_{i=0}^{N} H^{\varphi_i}. \tag{3}$$
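As a minimal numerical sketch of Eq. (3), the per-view auto-correlations can be computed via the Wiener-Khinchin theorem and averaged after rotating each stack back to the reference frame. The function names, the interpolation order, and the choice of rotation axes are illustrative assumptions, not the authors' actual GPU implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def autocorrelation(vol):
    # Wiener-Khinchin: A{o} = IFFT(|FFT(o)|^2); real input gives a real A
    return np.fft.ifftn(np.abs(np.fft.fftn(vol)) ** 2).real

def averaged_autocorrelation(views, angles_deg):
    # Rotate each view back to the reference angle (rotation axis assumed
    # along axis 0), auto-correlate, and average; no registration needed.
    acc = np.zeros(views[0].shape)
    for view, angle in zip(views, angles_deg):
        aligned = rotate(view, -angle, axes=(1, 2), reshape=False, order=1)
        acc += autocorrelation(aligned)
    return acc / len(views)
```

Because each A is centered by construction, the average is meaningful even though the individual views were never registered against one another.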

Figure 1. Reconstruction pipeline. (A) Rendering of the reference view taken at φ = 0°. The planes indicate the xy-camera acquisitions along the z-scanning direction. (B) Orthogonal detection obtained by rotating the sample to φ = 90°. (C) Aligned average of 12 measurements. The axes are chosen according to the reference view (x lateral, y transverse, z longitudinal). (D) Auto-correlation of the view at 0°. (E) A of the view at 90°. (F) A averaged over 12 angles. (G) Reconstructions obtained with deauto-correlation methods. For visual comparison, the upper part shows the result of the Schulz-Snyder protocol; the bottom part compares it with that of the Anchor-Update method.

Before computing the fusion as $\bar{o}_\mu$ (which we consider the standard reconstruction, rendered in Fig. 1C), each measurement requires an accurate alignment against the reference. The $\bar{\chi}_\mu$ displayed in Fig. 1F, instead, is accurate because the auto-correlations are centered by definition. Ideally, this implies that we can obtain an intrinsically aligned average reconstruction14 from $\bar{\chi}_\mu$, provided that we have a robust way to carry out the inversion $\bar{o}_\rho = \mathcal{A}^{-1}\{\bar{\chi}_\mu\}$. The rigid shifts between different observations are encoded in their Fourier phase, which we always discard when working in the A-space; we retain only the modulus information coming from the Fourier transforms of each acquisition. By inverting $\bar{\chi}_\mu$, we implicitly look for a new phase of the object that represents an overall alignment between the views. In fact, this problem falls within the class of phase retrieval (PR), since we have access to the Fourier modulus of a real object but the phase information is missing15.
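The shift-invariance that makes this possible can be checked in a few lines: a rigid (circular) shift alters only the Fourier phase, so the auto-correlation is untouched. The snippet below is a toy 2D illustration of this property, not part of the actual pipeline:

```python
import numpy as np

def autocorr(x):
    # A keeps only the Fourier modulus; the phase is discarded
    return np.fft.ifftn(np.abs(np.fft.fftn(x)) ** 2).real

rng = np.random.default_rng(0)
o = rng.random((16, 16))
shifted = np.roll(o, shift=(3, -5), axis=(0, 1))  # rigid shift = pure phase ramp

same_autocorr = np.allclose(autocorr(o), autocorr(shifted))
same_phase = np.allclose(np.angle(np.fft.fftn(o)), np.angle(np.fft.fftn(shifted)))
```

Here `same_autocorr` is True while `same_phase` is False: inverting A must therefore retrieve a new, globally consistent phase, which is precisely a phase-retrieval problem.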

With these two quantities in hand, we extract intrinsically aligned reconstructions by inverting the A with two schemes:

  • (I) Given $\bar{\chi}_\mu = \bar{o}_\rho \star \bar{o}_\rho$, find $\bar{o}_\rho$;

  • (II) Given $\bar{\chi}_\mu = (\bar{o} \star \bar{o}) \ast \bar{H}$, deblur it by $\bar{H}$ and find $\bar{o}$.

At first sight, only the second scheme, which deconvolves the PSF, appears to provide a super-resolved reconstruction. However, scheme (I) implies something even more interesting, which we describe in Fig. 2. By averaging $o_\mu^{\varphi_i}$ in direct space, the resulting volume gets blurred by an average PSF given by $\bar{h} = \frac{1}{N}\sum_i h^{\varphi_i}$ (Fig. 2A). Its auto-correlation $\mathcal{A}\{\bar{h}\} = \bar{h} \star \bar{h}$ is visualized in Fig. 2B. By averaging auto-correlations, instead, we neglect the second-order cross-terms of the PSF. Those contributions introduce long-range correlations in the fused image and degrade the image quality. For comparison, the corresponding A-PSF is shown in Fig. 2C. As a consequence, by solving for $\bar{o}_\rho$, we achieve an effective PSF that is sharper than $\bar{h}$. Interestingly, this is an implicit property that comes along with the average of multiple views of A. Thus, the resolution gain is attainable without having access to the PSF of the system. However, for comparison, we show the achieved effective point-spread function $\bar{h}_{\mathrm{eff}} = \mathcal{A}^{-1}\{\bar{H}\}$ in Fig. 2D. For a detailed discussion, see the Supplementary Materials. We decided to tackle scheme (I) by using the Schulz-Snyder (SS) iterations17:

$$o_{t+1} = o_t \left[ \frac{\chi_\mu}{o_t \star o_t} \ast \tilde{o}_t + \frac{\chi_\mu}{o_t \star o_t} \star o_t \right]. \tag{4}$$

Figure 2. Point-spread-function analysis. (A) PSF that blurs $\bar{o}_\mu$. (B) Auto-correlation of $\bar{h}$. (C) PSF $\bar{H}$ that blurs the average $\bar{\chi}_\mu$. (D) Corresponding PSF in the object domain, sharper than $\bar{h}$.

For the scheme (II), instead, we implement the Anchor-Update (AU) protocol18 that was developed ad-hoc for this purpose:

$$o_{t+1} = o_t \left[ \frac{\chi_\mu}{o_t \ast K_t} \ast \tilde{K}_t \right], \quad \text{updating:} \quad K_t = o_t \star H. \tag{5}$$

Both are fixed-point iterative Bayesian methods, having the number of iterations as the only parameter to set. In the present case, we set a high number of 5×10⁵ iterations for both, since these methods are very stable and can withstand long runs. On the other hand, this stability is also a drawback: the algorithms suffer from a slow convergence rate (each update t+1 is close to the previous one t).
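To make the two schemes concrete, the following 1D sketch implements one multiplicative update of each type with circular FFT operators. It is our own simplified reading of Eqs. (4) and (5), not the authors' GPU code; in particular, the normalizations are chosen so that a perfect estimate is a fixed point of each update:

```python
import numpy as np

def conv(a, b):   # circular convolution via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def corr(a, b):   # circular cross-correlation via FFT
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def ss_step(o, chi, eps=1e-12):
    # Schulz-Snyder: back-project the ratio of measured to modeled A
    ratio = chi / (corr(o, o) + eps)
    return o * (corr(ratio, o) + conv(ratio, o)) / (2.0 * o.sum())

def au_step(o, chi, H, eps=1e-12):
    # Anchor-Update: model chi = o * K with the anchored kernel K
    K = corr(o, H)
    ratio = chi / (conv(o, K) + eps)
    return o * corr(K, ratio) / K.sum()   # adjoint of the convolution by K
```

For a noiseless, strictly positive object, both updates leave a correct estimate unchanged, which is the fixed-point property that lets the authors run very long iteration counts safely.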

To delve into the proposed method, let us consider volumetric acquisitions taken with an LSFM setup of a cleared mouse popliteal lymph node19. We are interested in reconstructing the three-dimensional vasculature stained with a fluorescent label. The stack $o_\mu^{\varphi}$ constitutes a single volumetric view of the specimen and contains the camera detections of the sample scanned through the light sheet. We use 12 volumes, obtained by rotating the sample in steps of 30°. The $o_\mu^{\varphi}$ acquired at 0° and 90° were already rendered in Fig. 1A, B. Standard multi-view reconstruction algorithms require the alignment of every dataset against the reference view (which we assume at φ = 0°). A consolidated strategy (accurate at the pixel level) is to locate the maximum of the cross-correlation between the reference and the view, translating the latter back accordingly. However, the researcher may be looking for sub-pixel accuracy, which would require upsampling the volume according to the resolution to be reached20. This condition makes the size of the problem explode rapidly, leaving the user with an up-sampled estimation (compared to the original measurement) that needs to be down-sampled for the formation of the final image (Supplementary Materials). Here, instead, we produce a multi-view reconstruction that is accurate at the sub-pixel level and directly formed at the original resolution. We do not calculate any volume translation; we simply process the reconstruction altogether, starting from its auto-correlation $\bar{\chi}_\mu$.

We analyze two experimental situations. In Figs. 3 and 4, we report the results obtained on two regions of the specimen. The first contains the whole specimen and corresponds to a volume of 512³ voxels, with a size of (1320 μm)³. The second takes a region of interest of 256×256×128 voxels, with a size of 330 μm × 330 μm × 165 μm. Convolutions and correlations are implemented via Fast Fourier Transform (FFT) spectral decomposition. The GPU implementation is essential to perform such reconstructions, since the method relies on intensive usage of 3D FFTs. We implemented the code in Python using the CuPy library21, which provides a flexible CUDA framework for matrix operations. The problems were tackled using a single Nvidia Titan RTX, equipped with 4608 CUDA cores and 24 gigabytes of RAM. Each step is accomplished in 0.48 s for the first volume, while the second needs 0.05 s. We choose the reference view at angle φ = 0° as the initial guess at t = 0. The results obtained for the reconstruction of the whole specimen are rendered in Fig. 1G, where the top half is the result of SS and the bottom half the result of AU. We show the maximum intensity projection (MIP) along each spatial coordinate to compare the different results. The top row of Fig. 3 shows the ground-truth reconstruction, obtained by averaging the views previously aligned by locating the peak of their cross-correlation. The second row of Fig. 3, instead, shows the results of the SS iterations. Since the reconstruction is formed from an inherently aligned auto-correlation, the features of the specimen are better resolved than in the standard reconstruction. Compared to $\bar{o}_\mu$, the reconstructed $\bar{o}_\rho$ is crisp, with sharp features better isolated from the intensity background. The reconstruction contrast improved due to the sharper PSF $\bar{h}_{\mathrm{eff}}$ implied by the usage of the auto-correlation. The third row of Fig. 3 displays the results obtained with AU, deconvolving H from the estimated auto-correlation.
The final effect is a deblurring of the reconstruction with respect to SS. We can further assess the effectiveness of the method by examining a tomographic slice taken through the middle of the full-resolution volume. In Fig. 4A, we show the standard reconstruction, the result of the aligned and averaged volume. We have chosen a detailed region that displays a bifurcated blood vessel and a smaller circular opening located at the bottom. Figure 4B slices the same plane of the volume $\bar{o}_\rho$ after the inversion of $\bar{\chi}_\mu$ via SS iterations. If we compare it with the standard result, we observe a clear improvement in reconstruction quality. Having implicitly accounted for sub-pixel misalignments, and with a neater PSF, the image is rich in details and well contrasted, where the standard reconstruction appears fuzzier. On the other hand, Fig. 4C shows the reconstruction of the same volume using AU. Here, it is possible to appreciate the deblurring effect that leaves us with a highly resolved reconstruction. To substantiate this qualitative assessment, we examine a small detail of the vessel located at the bottom of panel C. The region of interest is displayed in panel E for the AU reconstruction. We draw a line profile through the middle of it, and we plot the intensities for each of the reconstructions considered in Fig. 4D. The standard reconstruction almost confuses the walls of the small blood vessel, whereas SS resolves this detail. The opening within the blood vessel becomes even more evident when we use AU, given that the simultaneous PSF deconvolution lets us resolve sharper details. Thorough image analyses are presented in the Supplementary Materials document accompanying this manuscript.
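The per-iteration cost quoted above is dominated by 3D FFTs. A minimal version of the spectral kernels is shown below in NumPy; since CuPy mirrors the NumPy API, swapping the import for `import cupy as np` is, in principle, enough to move the same code to the GPU. This is an illustrative sketch, not the authors' actual code:

```python
import numpy as np  # with CuPy installed, `import cupy as np` targets the GPU

def fft_conv3d(a, b):
    # circular 3D convolution: one spectrum product per update term
    return np.fft.ifftn(np.fft.fftn(a) * np.fft.fftn(b)).real

def fft_corr3d(a, b):
    # circular 3D cross-correlation: conjugate one of the two spectra
    return np.fft.ifftn(np.conj(np.fft.fftn(a)) * np.fft.fftn(b)).real
```

Each SS or AU step then reduces to a handful of such transforms on the 512³ volume, which is where the GPU speed-up is earned.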

Figure 3. Methods comparison for the reconstruction of the vasculature in a mouse popliteal lymph node. The quality of the first row improves in the central and bottom rows. Results of the aligned mean are shown in the transverse (A), longitudinal (B), and lateral (C) directions. The shown data are color-encoded maximum intensity projections (MIP)16 along each spatial coordinate; in all these MIPs, the color indicates the depth at which the corresponding feature is located. The small letters indicate the cropped volume. The cropped regions located within the whole specimen are framed with white boxes. The scale bar is 100 μm. (A, a) Transverse view of the volume $\bar{o}_\mu$ averaged and aligned by cross-correlation (rendered in Fig. 1, viewed from the top). (B, b) Longitudinal (or side) view projection. (C, c) Lateral (or front) view. (D, d) Transverse, (E, e) longitudinal, and (F, f) lateral projections of the volume $\bar{o}_\rho$ reconstructed with SS. (G, g) Transverse, (H, h) longitudinal, and (I, i) lateral projections of the volume $\bar{o}$ deconvolved with AU.

Figure 4. Tomographic slice of the cropped volume. (A) Aligned mean (standard reconstruction). (B) Reconstruction using SS. (C) Reconstruction using AU. (D) Profile plot along the dashed line in panel (E) for the three cases. (E) Detail of the small opening for AU.

Discussion

It is worth stressing that our auto-correlation method goes beyond the deconvolution approach. We are exploring a new path in alignment-free image formation, studying its advantage in terms of PSF. With this work, we presented an approach to the problem of shift-invariant reconstructions in volumetric multi-view tomography. Rather than relying on alignment-and-fusion pipelines, we proposed a conceptually simple approach that promotes the reconstruction into the shift-invariant A-space. We made use of multiple views of the specimen with the sole goal of refining the estimation of the auto-correlation of the object, since we consider it the ideal quantity for the formation of inherently aligned reconstructions. Since the user is freed from the alignment task, one can direct attention to better ways to estimate the auto-correlation. In particular, this may open the path to the correction of higher-order transformations such as, for example, those introduced by inaccuracies of the rotation stage. Two-axis angular tilts are easier to handle in the shift-invariant space than in the object space, since we no longer worry about the object positioning. Furthermore, we have proven that the solution of the A⁻¹ problem can be accompanied by deconvolution18. Concatenating two inverse problems can be hazardous, since residual artifacts from the first inversion may condition the behavior of the following method. Technically, these problems can only converge if we pad the reconstruction volume sufficiently: the auto-correlation of a discrete n-signal is defined on a translation-space that is 2n−1 long. However, we found that starting with a close guess ensures convergence even for volumes that do not comply with appropriate frequency padding. In volumetric tomography, this guess may be either a single view or the aligned mean of the specimen, since both are not distant from the ideal reconstruction. Unpadded volumes are smaller, which saves computer memory and speeds up the execution.
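The padding requirement can be illustrated on a 1D toy signal: the linear auto-correlation of an n-sample signal occupies 2n−1 lags, so an unpadded FFT computes a circular A whose tails wrap around. This check is purely illustrative and independent of the actual volumes used in the paper:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = len(x)

linear = np.correlate(x, x, mode="full")                  # 2n-1 = 7 lags
circular = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2).real   # unpadded: wraps around
padded = np.fft.ifft(np.abs(np.fft.fft(x, n=2 * n - 1)) ** 2).real

# Padding to 2n-1 samples recovers the linear A (up to an fftshift):
matches = np.allclose(np.roll(padded, n - 1), linear)
```

Here `matches` is True, while the unpadded circular result mixes positive and negative lags; in practice, the authors report that a good initial guess tolerates the unpadded, memory-saving variant.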
The main computational burden of our algorithms, however, is performing convolutions of large volumes. Processing 10⁵ SS iterations on a volume of 512³ voxels takes about 80 min. Additionally, each AU step needs one more convolution than SS and is typically 25% slower. Novel GPU architectures continuously speed up those operations, and there are several ways to implement convolutions (direct, Fourier, or overlap-add methods) optimized for the size of the problem considered. In any case, both SS and AU are Bayesian quadratic methods and typically require many iterations to converge17,22. In this respect, a new approach to Bayesian deconvolution23 managed to reduce the iterations needed by two orders of magnitude by tuning the forward and backward projection operators. Those operators are similar to what we have, respectively, in the denominator and numerator of our Eq. (5) and, in the future, we may consider adopting a similar approach to increase efficiency. Both the iteration count and the time consumption are crucial aspects that we plan to investigate further, and that may decrease the processing time from a few hours to a fraction of that.

Methods

Image pre-processing

From each raw dataset we subtracted the corresponding average background value and rotated it to the angular orientation of the first dataset, acquired at φ = 0°. The PSF of the system was assumed to be Gaussian, elongated along the scanning direction. For each of these stacks, we computed the corresponding auto-correlation. We took the absolute value of the average auto-correlation to avoid any unwanted negative values, which arise from the background subtraction and possibly from rounding errors in the FFT computation.

Multi-view registration and fusion

As the standard reconstruction for comparison, we aligned the views against each other by finding the location of the maximum of the cross-correlation $\mathcal{X}\{o_\mu^i; o_\mu^j\}$, for $i \neq j$. We defined the displacement vector $\mathbf{m}_i$ with respect to the central coordinate $\xi = 0$. We kept $\varphi_0 = 0$ as a reference and translated each view $\varphi_i$ by the vector $-\mathbf{m}_i$ defined in this way. Then we computed the average of the registered stacks to form $\bar{o}_\mu$.
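A pixel-level version of this registration-and-fusion baseline can be sketched as follows. The function is a hypothetical reimplementation for illustration (circular shifts, single fixed reference), not the code used for the paper:

```python
import numpy as np

def register_and_fuse(views):
    # Standard baseline: shift every view onto views[0] using the peak of
    # the circular cross-correlation, then average the registered stacks.
    ref = views[0].astype(float)
    fused = ref.copy()
    for view in views[1:]:
        xcorr = np.fft.ifftn(np.conj(np.fft.fftn(view)) * np.fft.fftn(ref)).real
        shift = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        fused += np.roll(view, shift, axis=tuple(range(view.ndim)))
    return fused / len(views)
```

Sub-pixel accuracy would additionally require upsampling around the correlation peak20, which is exactly the cost that the auto-correlation route avoids.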

Rearranging the auto-correlation

We suppose that a generic measurement oμ is described by:

$$o_\mu = o \ast h + \varepsilon. \tag{6}$$

We neglect the additive noise by assuming ε = 0. Using the commutation properties of convolution and correlation, namely the relations $(a \ast b) \star (a \ast b) = (a \star a) \ast (b \star b)$ and $a \star (b \ast c) = (a \star b) \ast c$, we have that:

$$\chi_\mu \equiv \mathcal{A}\{o_\mu\} = o_\mu \star o_\mu = (o \ast h) \star (o \ast h) \tag{7}$$
$$= (o \star o) \ast (h \star h) \tag{8}$$
$$= \chi \ast H = o \ast K. \tag{9}$$

Here we have called $\chi = o \star o$, $H = h \star h$, and $K = o \star H$. In the body of the article, we use only the last two equations.
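These relations are easy to verify numerically: in the Fourier domain, every form of the auto-correlation above reduces to $|\hat{o}|^2 |\hat{h}|^2$. A quick 1D self-check with circular operators (purely illustrative):

```python
import numpy as np

def conv(a, b):
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def corr(a, b):
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

rng = np.random.default_rng(2)
o, h = rng.random(64), rng.random(64)

lhs = corr(conv(o, h), conv(o, h))    # A{o * h}, Eq. (7)
rhs = conv(corr(o, o), corr(h, h))    # chi * H, Eq. (8)
identity_holds = np.allclose(lhs, rhs)
```

The same spectrum is also obtained from the effective-kernel form of Eq. (9), since the kernel carries the object's conjugate spectrum.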

Experimental details

For the tests performed in our study, we use image data taken from the work of Ozga et al.19, to which we refer for the experimental protocols. The sample, provided by Prof. J. Stein and imaged by J. Swoger, is a cleared mouse popliteal lymph node with the vasculature stained with the Alexa Fluor 488 dye (HEV, high endothelial venules). The specimen was embedded in agarose, then cleared and imaged in Benzyl-Alcohol Benzyl-Benzoate (BABB). The fluorescence was excited at λexc = 488 nm with a light sheet perpendicular to the camera detection, projected onto the sample with a 2.5×/0.07 N PLAN (air) objective lens. With a band-pass filter at 525/50 nm, the emitted fluorescence was recorded using a 5×/0.12 N Plan EPI (air) objective lens. The sample was scanned through the light sheet along the z-axis, perpendicular to the camera detection, in steps of 4.985 μm.

Acknowledgements

Sample provided by Prof. J. Stein at the University of Bern and imaged by Jim Swoger at the Center for Genomic Regulation (CRG), Barcelona. The authors further thank Dr. Gianmaria Calisesi for inspiring discussions.

Author contributions

A.D. formalized the theory, designed the computational methods and performed the reconstructions. G.V., A.P. and A.B. directed the research and supervised the project. A.D., G.V., A.P. and A.B. discussed the results. A.D. produced the figures and wrote the first draft, all the authors reviewed the manuscript.

Funding

H2020 Marie Skłodowska-Curie Actions (HI-PHRET Project, 799230); H2020 Laserlab Europe V (871124).

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

The online version contains supplementary material available at (10.1038/s41598-021-95266-2).

References

  • 1. Sun, Y., Agostini, N. B., Dong, S. & Kaeli, D. Summarizing CPU and GPU design trends with product data. arXiv:1911.11313 (2019).
  • 2. Leiserson, C. E. et al. There's plenty of room at the Top: What will drive computer performance after Moore's law? Science 368 (2020). [DOI] [PubMed]
  • 3.Despres P, Jia X. A review of GPU-based medical image reconstruction. Phys. Med. 2017;42:76–92. doi: 10.1016/j.ejmp.2017.07.024. [DOI] [PubMed] [Google Scholar]
  • 4. Sharpe J, et al. Optical projection tomography as a tool for 3D microscopy and gene expression studies. Science. 2002;296:541–545. doi: 10.1126/science.1068206. [DOI] [PubMed] [Google Scholar]
  • 5.Verveer PJ, et al. High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy. Nat. Methods. 2007;4:311–313. doi: 10.1038/nmeth1017. [DOI] [PubMed] [Google Scholar]
  • 6.Wu Y, et al. Simultaneous multiview capture and fusion improves spatial resolution in wide-field and light-sheet microscopy. Optica. 2016;3:897–910. doi: 10.1364/OPTICA.3.000897. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Krzic U, Gunther S, Saunders TE, Streichan SJ, Hufnagel L. Multiview light-sheet microscope for rapid in toto imaging. Nat. Methods. 2012;9:730–733. doi: 10.1038/nmeth.2064. [DOI] [PubMed] [Google Scholar]
  • 8.Weber M, Huisken J. Omnidirectional microscopy. Nat. Methods. 2012;9:656. doi: 10.1038/nmeth.2022. [DOI] [PubMed] [Google Scholar]
  • 9.Swoger J, Verveer P, Greger K, Huisken J, Stelzer EH. Multi-view image fusion improves resolution in three-dimensional microscopy. Opt. Express. 2007;15:8029–8042. doi: 10.1364/OE.15.008029. [DOI] [PubMed] [Google Scholar]
  • 10.Preibisch S, et al. Efficient bayesian-based multiview deconvolution. Nat. Methods. 2014;11:645. doi: 10.1038/nmeth.2929. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Ancora D, et al. Phase-retrieved tomography enables mesoscopic imaging of opaque tumor spheroids. Sci. Rep. 2017;7:11854. doi: 10.1038/s41598-017-12193-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Ancora D, et al. Optical projection tomography via phase retrieval algorithms. Methods. 2018;136:81–89. doi: 10.1016/j.ymeth.2017.10.009. [DOI] [PubMed] [Google Scholar]
  • 13.Liu S, et al. Three-dimensional, isotropic imaging of mouse brain using multi-view deconvolution light sheet microscopy. J. Innov. Opt. Health Sci. 2017;10:1743006. doi: 10.1142/S1793545817430064. [DOI] [Google Scholar]
  • 14.Ancora, D., Valentini, G., Pifferi, A. G. & Bassi, A. Auto-correlation for multi-view deconvolved reconstruction in light sheet microscopy. In Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXVIII, vol. 11649, 116490X (International Society for Optics and Photonics, 2021).
  • 15.Shechtman Y, et al. Phase retrieval with application to optical imaging: A contemporary overview. IEEE Signal Process. Mag. 2015;32:87–109. doi: 10.1109/MSP.2014.2352673. [DOI] [Google Scholar]
  • 16.Schindelin J, et al. Fiji: An open-source platform for biological-image analysis. Nat. Methods. 2012;9:676–682. doi: 10.1038/nmeth.2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Schulz TJ, Snyder DL. Image recovery from correlations. JOSA A. 1992;9:1266–1272. doi: 10.1364/JOSAA.9.001266. [DOI] [Google Scholar]
  • 18.Ancora D, Bassi A. Deconvolved image restoration from auto-correlations. IEEE Trans. Image Process. 2020;30:1332–1341. doi: 10.1109/TIP.2020.3043387. [DOI] [PubMed] [Google Scholar]
  • 19. Ozga AJ, et al. pMHC affinity controls duration of CD8+ T cell-DC interactions and imprints timing of effector differentiation versus expansion. J. Exp. Med. 2016;213:2811–2829. doi: 10.1084/jem.20160206. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Guizar-Sicairos M, Thurman ST, Fienup JR. Efficient subpixel image registration algorithms. Opt. Lett. 2008;33:156–158. doi: 10.1364/OL.33.000156. [DOI] [PubMed] [Google Scholar]
  • 21. Okuta, R., Unno, Y., Nishino, D., Hido, S. & Loomis, C. CuPy: A NumPy-compatible library for NVIDIA GPU calculations. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in the Thirty-First Annual Conference on Neural Information Processing Systems (NIPS) (2017).
  • 22.Choi K, Lanterman AD. An iterative deautoconvolution algorithm for nonnegative functions. Inverse Probl. 2005;21:981. doi: 10.1088/0266-5611/21/3/012. [DOI] [Google Scholar]
  • 23.Guo M, et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat. Biotechnol. 2020;38:1337–1346. doi: 10.1038/s41587-020-0560-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
