Author manuscript; available in PMC: 2014 Sep 3.
Published in final edited form as: Nat Methods. 2014 Apr 20;11(6):645–648. doi: 10.1038/nmeth.2929

Efficient Bayesian-based multiview deconvolution

Stephan Preibisch 1,2,3,4, Fernando Amat 2, Evangelia Stamataki 1, Mihail Sarov 1, Robert H Singer 2,3,4, Eugene Myers 1,2, Pavel Tomancak 1
PMCID: PMC4153441  NIHMSID: NIHMS621142  PMID: 24747812

Abstract

Light-sheet fluorescence microscopy is able to image large specimens with high resolution by capturing the samples from multiple angles. Multiview deconvolution can substantially improve the resolution and contrast of the images, but its application has been limited owing to the large size of the data sets. Here we present a Bayesian-based derivation of multiview deconvolution that drastically improves the convergence time, and we provide a fast implementation using graphics hardware.


Modern light-sheet microscopes1–3 acquire images of large, developing specimens with high temporal and spatial resolution, typically by imaging them from multiple directions (Fig. 1a). Deconvolution uses knowledge about the optical system to increase spatial resolution and contrast after acquisition. An advantage unique to light-sheet microscopy, particularly the selective-plane illumination microscopy (SPIM) variant, is the ability to observe the same location in the specimen from multiple angles, which renders the ill-posed problem of deconvolution more tractable4–10.

Fig. 1. Principles and performance.

Fig. 1

(a) Basic layout of a light-sheet microscope capable of multiview acquisitions. (b) Illustration of ‘virtual’ views. A photon detected at a certain location in a view was emitted by a fluorophore in the sample; the PSF assigns a probability to every location in the underlying image having emitted that photon. Consecutively, the PSF of any other view assigns to each of its own locations the probability to detect a photon corresponding to the same fluorophore. (c) Example of an entire virtual view computed from observed view 1 and the knowledge of PSF 1 and PSF 2. (d) Convergence time of the different Bayesian-based methods. We used a known ground-truth image (Supplementary Fig. 5) and let all variations converge until they reached precisely the same quality. The increase in computation time for an increasing number of views of the combined methods (black) is due to the fact that with an increasing number of views, more computational effort is required to perform one update of the deconvolved image (Supplementary Fig. 4). (e) Convergence times for the same ground-truth image of our Bayesian-based methods compared to those of other optimized multiview deconvolution algorithms5–8. The difference in computation time between Java implementations and IDL implementations, OSEM6 and SGP8, results in part from nonoptimized IDL code. (f) Corresponding number of iterations for our algorithm and other optimized multiview deconvolution algorithms.

Richardson-Lucy (RL) deconvolution11,12 (Supplementary Note 1) is a Bayesian-based derivation resulting in an iterative expectation-maximization (EM) algorithm5,13 that is often chosen for its simplicity and performance. Multiview deconvolution has previously been derived using the EM framework5,9,10; however, the convergence time of the algorithm remains orders of magnitude longer than the time required to record the data. We addressed this problem by deriving an optimized formulation of Bayesian-based deconvolution for multiple-view geometry that explicitly incorporates conditional probabilities between the views (Fig. 1b,c and Supplementary Fig. 1) and combining it with ordered subsets EM (OSEM)6 (Fig. 1d and Supplementary Fig. 2), achieving substantially faster convergence (Fig. 1d–f).

Bayesian-based deconvolution models images and point spread functions (PSFs) as probability distributions. The goal is to estimate the most probable underlying distribution (deconvolved image) that best explains all observed distributions (views) given their conditional probabilities (PSFs). We first rederived the original RL deconvolution algorithm and subsequently extended it to multiple-view geometry, yielding

f_v^{\mathrm{RL}}(\xi) = \int_{x_v} \frac{\phi_v(x_v)}{\int_{\xi'} \psi^r(\xi')\, P(x_v \mid \xi')\, d\xi'}\; P(x_v \mid \xi)\, dx_v \qquad (1)

\psi^{r+1}(\xi) = \psi^r(\xi) \prod_{v \in V} f_v^{\mathrm{RL}}(\xi) \qquad (2)

where ψr(ξ) denotes the deconvolved image at iteration r and ϕv(xv) denotes the input views, both as functions of their respective pixel locations ξ and xv, whereas P(xv|ξ) denotes the individual PSFs (Supplementary Note 1). Equation (1) denotes a classical RL update step for one view; equation (2) illustrates the combination of all views into one update of the deconvolved image (Supplementary Video 1). In contrast to the maximum-likelihood (ML) EM5,13 that combines RL updates by addition, equation (2) suggests a multiplicative combination. We proved that equation (2), just as the ML-EM5,13 algorithm, converges to the ML solution (Supplementary Note 2). The ML solution is not necessarily the correct solution if disturbances such as noise or misalignments are present in the input images (Fig. 2). Importantly, previous extensions to multiple views5–10 assume individual views to be independent observations (Supplementary Fig. 2). Assuming independence between two views implies that by observing one view, nothing can be learned about the other view. We showed that this independence assumption is not required to derive equation (2) (Supplementary Note 3). Our solution represents, to our knowledge, the first complete derivation of RL multiview deconvolution based on probability theory and Bayes’ theorem.

Fig. 2. Deconvolution of simulated 3D multiview data.

Fig. 2

(a) Left, 3D rendering of a computer-generated volume resembling a biological specimen. The red outlines mark the wedge removed from the volume to show the content inside. Right, sections through the generated volume in the lateral direction (as seen by the SPIM camera, top) and along the rotation axis (bottom). (b) Same slices as in a with illumination attenuation applied (left), convolved with a PSF of a SPIM microscope (center) and simulated using a Poisson process (right). The bottom right panel shows the unscaled simulated light-sheet sectioning data along the rotation axis. (c) Slices from views 1 and 3 of the seven views generated from a by applying processes pictured in b and rescaling to isotropic resolution. These seven volumes are the input to the fusion and deconvolution algorithms quantified in d and visualized in e. (d) Cross-correlation of deconvolved and ground-truth data as a function of the number of iterations for MAPG7 and our algorithm with and without regularization (reg). The inset compares the computation (comp.) time. (Both algorithms were implemented in Java to support partially overlapping data sets; Supplementary Fig. 10). (e) Slices equivalent to c after content-based fusion14 (first column), MAPG7 deconvolution (second column), our approach without regularization (third column) and with regularization15 (fourth column; Tikhonov15 regularization parameter λ = 0.004). (f) Areas marked by boxes in a,c,e at higher magnification. Note the increased artificial ring patterns in MAPG7.

As we do not need to consider views to be independent, we next asked whether the conditional probabilities describing the relationship between two views can be modeled and used to improve convergence behavior (Supplementary Figs. 1 and 3 and Supplementary Notes 3 and 4). If we assume that a single photon is observed in the first view, the PSF of this view and Bayes’ theorem can be used to assign a probability to every location in the deconvolved image having emitted this photon (Fig. 1b). On the basis of this probability distribution, the PSF of the second view directly yields the probability distribution describing where to expect a corresponding observation for the same fluorophore in the second view (Fig. 1b). Thus, we argue that it is possible to compute an approximate image (‘virtual’ view) of one view from another view provided that the PSFs of both views are known (Fig. 1c).

We used these virtual views to perform intermediate update steps at no additional computational cost, decreasing the computational effort approximately twofold (Fig. 1d and Supplementary Note 4). The multiplicative combination (equation (2)) directly suggests a sequential approach, wherein each RL update (equation (1)) is directly applied to ψr(ξ) (Supplementary Fig. 2 and Supplementary Note 5). This sequential scheme is equivalent to the OSEM6 algorithm and results in a 13-fold decrease in convergence time. This gain increases linearly with the number of views6 (Fig. 1d and Supplementary Fig. 4). To further reduce convergence time, we introduced ad hoc simplifications (optimizations I and II) for the estimation of conditional probabilities that achieve up to 40-fold improvement compared to deconvolution methods that assume view independence (Fig. 1d–f, Supplementary Figs. 4 and 5 and Supplementary Notes 6 and 7). The new algorithm also performs well in the presence of noise and imperfect PSFs (Supplementary Figs. 6–8). If the input views show a very low signal-to-noise ratio (SNR), atypical for SPIM, the speedup is preserved, but the quality of the deconvolved image is reduced. Our Bayesian-based derivation does not assume a specific noise model, but it is in practice robust with respect to Poisson noise, which is the dominating source of noise in light-sheet microscopy acquisitions.
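The sequential (OSEM-style) scheme can be sketched in the same 1-D setting: each view's RL factor is applied to the estimate immediately, instead of accumulating all factors into a single multiplicative update. Again this is a self-contained illustrative sketch, not the published implementation, and it omits the virtual-view intermediate steps and optimizations I and II.

```python
import numpy as np

def rl_factor(psi, phi_v, psf_v):
    """RL factor for one view (equation (1)); 1-D convolution stands in
    for the PSF integrals."""
    expected = np.convolve(psi, psf_v, mode="same")
    ratio = phi_v / np.maximum(expected, 1e-12)
    return np.convolve(ratio, psf_v[::-1], mode="same")

def multiview_osem(views, psfs, n_iter=10):
    """OSEM-style sequential updates: apply each view's factor to psi
    as soon as it is computed, rather than combining all factors into
    one joint update per iteration (equation (2))."""
    psi = np.full_like(views[0], views[0].mean())
    for _ in range(n_iter):
        for phi_v, psf_v in zip(views, psfs):
            psi = psi * rl_factor(psi, phi_v, psf_v)
    return psi
```

Because every pass over the views performs as many estimate updates as there are views, fewer outer iterations are needed, which is the source of the speedup that grows with the number of views.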

We compared the performance of our method with that of previously published multiview deconvolution algorithms5–10 in terms of convergence behavior and run time on the central processing unit (CPU) (Figs. 1e,f and 2d and Supplementary Figs. 4b and 9a,b). For typical SPIM multiview scenarios consisting of around seven views with a high SNR, our method requires sevenfold fewer iterations and is at least threefold faster than OSEM6, scaled gradient projection (SGP)8 and maximum a posteriori with Gaussian noise (MAPG)7. At the same time our optimization is able to improve the image quality of real and simulated data sets compared to MAPG7 (Fig. 2e,f and Supplementary Fig. 9c–h). A further speedup of threefold and reduced memory consumption is achieved by using our CUDA (Compute Unified Device Architecture) implementation (Supplementary Fig. 10g). Moreover, our approach is capable of dealing with partially overlapping acquisitions typical in multiview imaging (Supplementary Fig. 10 and Online Methods).

In order to evaluate our algorithm on realistic three-dimensional (3D) multiview image data, we simulated a ground-truth data set resembling a biological specimen (Fig. 2a). We next simulated image acquisition in a SPIM microscope from multiple angles by applying signal attenuation across the field of view, convolving the data with the PSF of the microscope, simulating the multiview optical sectioning and using a Poisson process to generate the final pixel intensities (Fig. 2b and Online Methods). We deconvolved the generated multiview data (Fig. 2c) using our algorithm with and without regularization (regularization adds smoothness constraints to the deconvolution process to achieve a more plausible solution for this ill-posed problem) and compared the results to the content-based fusion14 and the MAPG7 deconvolution (Fig. 2d–f). Our algorithm reached optimal reconstruction quality faster (Fig. 2d) and introduced fewer artifacts than MAPG7 (Fig. 2e,f and Supplementary Videos 2 and 3). Tikhonov regularization15 was required to converge to a reasonable result under realistic imaging conditions (Fig. 2d–f).
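The per-view simulation pipeline described above (attenuation, PSF blur, Poisson photon counting) can be sketched in one dimension. The exponential attenuation model, the attenuation coefficient and the photon budget below are hypothetical stand-ins for the actual values specified in Online Methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_view(ground_truth, psf, attenuation=0.01, photons=500.0):
    """Sketch of simulating one view: attenuate the signal across the
    field of view, blur with the view's PSF, then draw the final pixel
    intensities from a Poisson process (all parameter values hypothetical)."""
    depth = np.arange(ground_truth.shape[0])
    attenuated = ground_truth * np.exp(-attenuation * depth)  # signal falloff with depth
    blurred = np.convolve(attenuated, psf, mode="same")       # optical blurring
    counts = rng.poisson(blurred * photons)                   # photon shot noise
    return counts.astype(float) / photons
```

Applying this with a different PSF (and attenuation direction) per angle yields a synthetic multiview data set against which deconvolution results can be scored by cross-correlation with the known ground truth.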

We applied our deconvolution approach to multiview SPIM acquisitions of Drosophila melanogaster and Caenorhabditis elegans embryos (Fig. 3a–e). We achieved a substantial increase in contrast as well as resolution with respect to the content-based fusion14 (Fig. 3b and Supplementary Fig. 11); only a few iterations were required and computation times were typically in the range of a few minutes per multiview acquisition (Supplementary Table 1). We applied the deconvolution to a four-view acquisition of a fixed C. elegans in larval stage 1 (L1) expressing GFP-tagged lamin (LMN-1–GFP) labeling the nuclear lamina and stained for DNA with Hoechst (Fig. 3f,g). Multiview deconvolution improved contrast and resolution compared to the input data and enabled unambiguous segmentation of nuclei in problematic areas of the nervous system16 (Supplementary Videos 4–7). The algorithm dramatically improved multiview data acquired with OpenSPIM17 (Supplementary Fig. 12), and its efficiency makes it applicable to spatially large multiview data sets (Supplementary Fig. 13) and to processing of long-term time lapses from the Zeiss Lightsheet Z.1 (Supplementary Videos 8–11 and Supplementary Table 1).

Fig. 3. Application to biological data.

Fig. 3

(a) Comparison of reconstruction results using content-based fusion14 (top row) and multiview deconvolution (bottom row) on a four-cell–stage C. elegans embryo expressing a PH domain–GFP fusion marking the membranes. Dotted lines mark plots shown in b; white arrowheads mark PSFs of a fluorescent bead before and after deconvolution. (b) Line plot through the volume along the rotation axis (yz, contrast locally normalized). This orientation typically shows the lowest resolution of a fused data set in light-sheet acquisitions, as all input views are oriented axially (Supplementary Fig. 11). SNR is substantially enhanced; arrowheads mark points illustrating increased resolution. (c,d) Cut planes through a blastoderm-stage Drosophila embryo expressing His-YFP in all cells. (e) Magnified view on parts of the Drosophila embryo. The left panel is a view in lateral orientation of one of the input views; the right panel shows a view along the rotation axis characterized by the lowest resolution. (f,g) Comparison of deconvolution and input data of a fixed L1 C. elegans larva expressing LMN-1–GFP (green) and stained with Hoechst (magenta). (f) Single slice through the deconvolved data set; arrowheads mark four locations of transversal cuts shown below. The cuts compare two orthogonal input views (0°, 90°) with the deconvolved data. No input view offers high resolution in this orientation approximately along the rotation axis. (g) The left box in the first row shows a random slice of a view in axial orientation (worst resolution). The second row shows a view in lateral orientation (best resolution). The third row shows the corresponding deconvolved image. The right boxes each show a slice through the nervous system. The alignment of the C. elegans L1 data set was refined using nuclear positions (Online Methods). The C. elegans embryo (a,b) and the Drosophila embryo (d,e) are each one time point of a time series (none of the other time points is used in this paper). The C. elegans L1 larva (f,g) is an individual acquisition of one fixed sample.

Multiview deconvolution increases contrast in SPIM data after acquisition, complementary to hardware-based contrast enhancement achieved by digital scanned laser light-sheet microscopy (DSLM-SI)18 (Supplementary Fig. 14). Moreover, multiview deconvolution produced superior results when comparing an acquisition of the same sample with SPIM and a two-photon microscope (Supplementary Fig. 15). Finally, the benefits of the multiview deconvolution approach are not limited to SPIM, as illustrated by the deconvolved multiview spinning disc confocal microscope acquisition of a C. elegans in L1 stage14 (Supplementary Fig. 16).

A major obstacle for widespread application of deconvolution approaches to multiview light-sheet microscopy data is the lack of usable and scalable multiview deconvolution software. Therefore, we implemented our fast-converging algorithm as a Fiji19 plug-in taking advantage of ImgLib2 (ref. 20) and GPU processing (http://fiji.sc/Multi-View_Deconvolution). The only free parameter of the method that must be chosen by the user is the number of iterations for the deconvolution process. We facilitate this choice by providing a debug mode allowing the user to inspect all intermediate iterations and identify the optimal trade-off between quality and computation time. Our Fiji19 implementation synergizes with other related plug-ins and provides an integrated solution for the processing of multiview light-sheet microscopy data of arbitrary size.
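Choosing the iteration count by inspecting intermediate results can be sketched generically as a stopping heuristic. The `step` callback, the `tol` threshold and the relative-change criterion below are purely illustrative; they are not part of the plugin's actual interface, which relies on visual inspection in debug mode.

```python
import numpy as np

def run_until_converged(step, psi0, max_iter=100, tol=1e-3):
    """Run an iterative deconvolution until the estimate stabilizes.

    step : callable performing one deconvolution iteration (hypothetical)
    psi0 : initial estimate
    Stops when the relative L1 change of the estimate drops below tol,
    returning the estimate and the number of iterations used.
    """
    psi = psi0
    for i in range(1, max_iter + 1):
        nxt = step(psi)
        rel = np.abs(nxt - psi).sum() / max(np.abs(psi).sum(), 1e-12)
        psi = nxt
        if rel < tol:
            break
    return psi, i
```

In practice the number of iterations found this way for one representative time point can then be reused across a whole time series.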

Methods

Methods and any associated references are available in the online version of the paper.

Supplementary Material

Online Methods
Supplementary Movie 1
Download video file (1.2MB, mov)
Supplementary Movie 10
Download video file (6.4MB, mov)
Supplementary Movie 11
Download video file (33.1MB, mov)
Supplementary Movie 2
Download video file (3.1MB, mov)
Supplementary Movie 3
Download video file (3.2MB, mov)
Supplementary Movie 4
Download video file (2MB, mov)
Supplementary Movie 5
Download video file (1.9MB, mov)
Supplementary Movie 6
Download video file (2.8MB, mov)
Supplementary Movie 7
Download video file (3.5MB, mov)
Supplementary Movie 8
Download video file (6.1MB, mov)
Supplementary Movie 9
Download video file (5.9MB, mov)
Supplementary Movie Legends
Supplementary Software 1
Supplementary Software 2
Supplementary Software 3
Supplementary Text and Figures

Acknowledgments

We thank T. Pietzsch (Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG)) for helpful discussions, proofreading and access to his unpublished software; N. Clack, F. Carrillo Oesterreich and H. Bowne-Anderson for discussions; N. Maghelli for two-photon imaging; P. Verveer (MPI Dortmund) for source code and helpful discussions; M. Weber for imaging the Drosophila time series; S. Jaensch for preparing the C. elegans embryo; J.K. Liu (Cornell University) for the LW698 strain; S. Saalfeld for help with 3D rendering; P.J. Keller for supporting F.A. and for the DSLM-SI data set; A. Cardona for access to his computer; and Carl Zeiss Microimaging for providing us with the SPIM prototype. S.P. was supported by MPI-CBG in P.T.’s lab, Howard Hughes Medical Institute (HHMI) in E.M.’s lab and the Human Frontier Science Program (HFSP) Postdoctoral Fellowship LT000783/2012 in R.H.S.’s lab, with additional support from US National Institutes of Health (NIH) GM57071. F.A. was supported by HHMI in P.J. Keller’s lab. E.S. and M.S. were supported by MPI-CBG. R.H.S. was supported by NIH grants GM057071, EB013571 and NS083085. E.M. was supported by HHMI and MPI-CBG. P.T. was supported by The European Research Council Community’s Seventh Framework Program (FP7/2007-2013) grant agreement 260746 and the HFSP Young Investigator grant RGY0093/2012. M.S., E.M. and P.T. were additionally supported by the Bundesministerium für Bildung und Forschung grant 031A099.

Footnotes

COMPETING FINANCIAL INTERESTS

The authors declare no competing financial interests.

Note: Any Supplementary Information and Source Data files are available in the online version of the paper.

AUTHOR CONTRIBUTIONS

S.P. and F.A. derived the equations for multiview deconvolution. S.P. implemented the software and performed all analysis, and F.A. implemented the GPU code. E.S. generated and imaged the H2Av-mRFPruby fly line. M.S. prepared, and M.S. and S.P. imaged, the C. elegans L1 sample. S.P. and P.T. conceived the idea and wrote the manuscript. R.H.S. provided support and encouragement, E.M. and P.T. supervised the project.

References

  1. Huisken J, Swoger J, Del Bene F, Wittbrodt J, Stelzer EHK. Science. 2004;305:1007–1009. doi:10.1126/science.1100035.
  2. Keller PJ, Schmidt AD, Wittbrodt J, Stelzer EHK. Science. 2008;322:1065–1069. doi:10.1126/science.1162493.
  3. Truong TV, Supatto W, Koos DS, Choi JM, Fraser SE. Nat Methods. 2011;8:757–760. doi:10.1038/nmeth.1652.
  4. Swoger J, Verveer P, Greger K, Huisken J, Stelzer EHK. Opt Express. 2007;15:8029–8042. doi:10.1364/oe.15.008029.
  5. Shepp LA, Vardi Y. IEEE Trans Med Imaging. 1982;1:113–122. doi:10.1109/TMI.1982.4307558.
  6. Hudson HM, Larkin RS. IEEE Trans Med Imaging. 1994;13:601–609. doi:10.1109/42.363108.
  7. Verveer PJ, et al. Nat Methods. 2007;4:311–313. doi:10.1038/nmeth1017.
  8. Bonettini S, Zanella R, Zanni L. Inverse Probl. 2009;25:015002.
  9. Krzic U. Multiple-View Microscopy with Light-Sheet Based Fluorescent Microscope. PhD thesis, Univ. Heidelberg; 2009.
  10. Temerinac-Ott M, et al. IEEE Trans Image Process. 2012;21:1863–1873. doi:10.1109/TIP.2011.2181528.
  11. Richardson WH. J Opt Soc Am. 1972;62:55–59.
  12. Lucy LB. Astron J. 1974;79:745–754.
  13. Dempster AP, Laird NM, Rubin DB. J R Stat Soc Series B Stat Methodol. 1977;39:1–38.
  14. Preibisch S, Saalfeld S, Schindelin J, Tomancak P. Nat Methods. 2010;7:418–419. doi:10.1038/nmeth0610-418.
  15. Tikhonov AN, Arsenin VY. Solutions of Ill-Posed Problems. Winston; 1977.
  16. Long F, Peng H, Liu X, Kim S, Myers E. Nat Methods. 2009;6:667–672. doi:10.1038/nmeth.1366.
  17. Pitrone PG, et al. Nat Methods. 2013;10:598–599. doi:10.1038/nmeth.2507.
  18. Keller PJ, et al. Nat Methods. 2010;7:637–642. doi:10.1038/nmeth.1476.
  19. Schindelin J, et al. Nat Methods. 2012;9:676–682. doi:10.1038/nmeth.2019.
  20. Pietzsch T, Preibisch S, Tomancak P, Saalfeld S. Bioinformatics. 2012;28:3009–3011. doi:10.1093/bioinformatics/bts543.
  21. Uddin MS, Lee HK, Preibisch S, Tomancak P. Microsc Microanal. 2011;17:607–613. doi:10.1017/S1431927611000262.
