2022 Jun 14;64(9):968–992. doi: 10.1007/s10851-022-01100-3

Image Reconstruction in Light-Sheet Microscopy: Spatially Varying Deconvolution and Mixed Noise

Bogdan Toader 1,3,4, Jérôme Boulanger 2, Yury Korolev 3, Martin O Lenz 1,5, James Manton 2, Carola-Bibiane Schönlieb 3, Leila Mureşan 1,4,5
PMCID: PMC7613773  EMSID: EMS146211  PMID: 36329880

Abstract

We study the problem of deconvolution for light-sheet microscopy, where the data is corrupted by spatially varying blur and a combination of Poisson and Gaussian noise. The spatial variation of the point spread function of a light-sheet microscope is determined by the interaction between the excitation sheet and the detection objective PSF. We introduce a model of the image formation process that incorporates this interaction and we formulate a variational model that accounts for the combination of Poisson and Gaussian noise through a data fidelity term consisting of the infimal convolution of the single noise fidelities, first introduced in L. Calatroni et al. (SIAM J Imaging Sci 10(3):1196–1233, 2017). We establish convergence rates and a discrepancy principle for the infimal convolution fidelity and the inverse problem is solved by applying the primal–dual hybrid gradient (PDHG) algorithm in a novel way. Numerical experiments performed on simulated and real data show superior reconstruction results in comparison with other methods.

Keywords: Deconvolution, Light-sheet microscopy, Poisson and Gaussian noise, Primal–dual hybrid gradient, Numerical methods

Introduction

Light-sheet microscopy is a fluorescence microscopy technique that enables volumetric imaging of biological samples at high frame rate, with better sectioning and lower photo-toxicity than other fluorescence techniques. This is achieved by illuminating a thin slice of the sample using a sheet of light and detecting the emitted fluorescence from this plane with another objective, perpendicular to the plane of the sheet. A schematic representation of a light-sheet microscope is shown in Fig. 1. Other microscopy techniques present certain disadvantages. For example, widefield microscopy [1] illuminates the whole sample using a single objective and achieves only very limited sectioning, while confocal microscopy [1] allows improved sectioning by utilising a pinhole to discard out-of-focus light, at the cost of higher photo-toxicity and reduced frame rate. Light-sheet microscopy avoids these downsides by selectively illuminating only the slice of the sample being imaged. In this way, less photo-toxic damage is induced and, therefore, imaging of living samples over a longer period of time is possible. The combination of lower photo-toxicity, better sectioning capabilities and faster image acquisition led to light-sheet microscopy being recognised as “Method of the Year” by Nature Methods in 2014 [2].

Fig. 1.

Fig. 1

Schematic of a light-sheet microscope, showing the illumination and the detection directions. The interaction of the light-sheet with the detection PSF leads to a spatially varying overall PSF and to a decrease in pixel intensity away from the centre in the horizontal direction

The focus of the present manuscript is on deconvolution techniques for light-sheet microscopy data. In this context, deconvolution refers to the computational method of reversing the effect of blurring in the image acquisition process due to the point spread function (PSF) of the microscope [35]. Specifically, the PSF of an imaging system represents its response to a point object. In general, knowledge of the PSF can be modelled mathematically and calibrated using bead data (samples containing small spheres of known dimensions), and then used in the formulation of a forward model of the image formation, which can then be inverted, for example using optimisation methods, to reconstruct the original, deblurred object [6].

However, in the case of light-sheet microscopy, simply knowing or estimating the PSF of the detection objective is not sufficient, since the overall response of the system to a point source is also influenced by the excitation light-sheet used to illuminate the slice. The overall PSF could be approximated by the detection PSF in the region where the illumination sheet is focused. However, the detection PSF becomes more distorted and loses intensity away from the focus of the excitation light-sheet, an effect illustrated in Fig. 1, so this approach is not accurate. Therefore, we address this problem as a case of spatially varying deconvolution [7, 8], where the variation of the system’s overall PSF is determined by the interaction between the detection PSF and the light-sheet. We note that, in general, the detection PSF itself can be spatially varying due to optical aberrations in the sample, a problem that is not specific to light-sheet microscopy. We do not address this source of variability in this work, although such a spatially varying detection PSF could in principle be incorporated in our method.

Two examples of acquired data are shown in Fig. 2. We can see in both cases the effect of the spatially varying light-sheet: the image is sharper in the centre and blurry on the sides, with the amount of blur growing with the horizontal distance from the centre. In addition, the fluorescence intensity of imaged beads in Fig. 2a is unevenly distributed despite imaging a homogeneous sample of beads, with the centre of the image being brighter than the left and right sides. The aim of our work is to correct these effects.

Fig. 2.

Fig. 2

Examples of light-sheet microscopy data of dimensions 665.6μm×665.6μm: beads in a and Marchantia thallus in b. We show for both samples maximum intensity projections on the x-y plane in the top row and on the x-z plane in the bottom row, with the bead intensity shown in log scale for increased contrast. The effect of the light-sheet is visible along the horizontal direction (the x axis), as the image is sharp and has higher intensity in the centre, where the sheet is focused, while the quality of the image decreases away from the centre. The blurring effect of the light-sheet in the z direction is particularly noticeable in the x-z projections in the bottom row. Another source of blur, observed especially in the bead image (left), is optical aberrations due to the sample imaging medium. The Marchantia image has been acquired using samples from Dr. Alessandra Bonfanti and Dr. Sarah Robinson using the genetic line provided by Prof. Sebastian Schornack and Dr. Giulia Arsuffi at the Sainsbury Laboratory Cambridge University

Contribution

We propose a method for deconvolution of 3D light-sheet microscopy data that takes into account the spatially varying nature of the PSF and is scalable to the dimensions typical to biological samples imaged using light-sheet microscopy—4.86 GB per 3D 16-bit stack of 2048×2048×580 voxels.

Our approach is based on a new model of image formation that describes the interaction between the light-sheet and the detection PSF and replicates the physics of the microscope. We then formulate an inverse problem in which the forward operator is given by this model of the image formation process and which takes into account the degradation of the data by both Gaussian and Poisson noise through an infimal convolution (a concept that will be defined in Sect. 3) of an $L^2$ term and a Kullback–Leibler divergence term, following [9]. The proposed variational problem is solved by applying the primal–dual hybrid gradient (PDHG) algorithm [6, 10, 11] in a novel way. Finally, we exploit the noise model to automatically tune the balance between the data fidelity and the regularisation by resorting to a discrepancy principle. We obtain convergence rates in a Bregman distance for the infimal convolution fidelity from [9] under a standard source condition.

In our numerical experiments, we first show how this method performs on simulated data, where the ground truth is known, then we apply our method to two examples of data from experiments: an image of fluorescent beads and a sample of Marchantia. In both cases, we see that the deconvolved images show improved contrast, while outperforming deconvolution using only the constant detection PSF.

Related Work

Before describing in more detail our approach to the deconvolution problem, we give a brief overview of the literature on spatially varying deconvolution in the context of microscopy and how our work relates to it.

Purely data-driven approaches estimate a spatially varying PSF in a low dimensional space (for scalability reasons) using bead images [7, 8, 12]. This is usually not application specific and can be included in a more general blind deconvolution framework. Similarly, the work in [13] involves writing the spatially varying PSF as a convex combination of spatially invariant PSFs. The algorithm alternates between estimating the image and estimating the PSF. In a similar vein, the authors of [14] approach the problem of blind deconvolution by defining the convolution operator using efficient matrix-vector multiplication operations. This decomposition is similar to the discrete formulation of our image formation model. These methods optimise over the (unknown) operator in addition to the unknown image. Related to these results is [15], where the authors consider the models from [12] and [14] under the assumption that the blurring operator is known and given as a sum of weighted spatially invariant operators. They exploit this structure of the operator and use a Douglas–Rachford-based splitting to solve the optimisation problem efficiently. A different data-driven approach is presented in [16], where a deep artificial neural network is used to learn the spatially varying PSF from simulated data obtained using a forward model of the microscope. While these approaches are more general than our method, we consider that using the knowledge of the image formation process in the forward model is advantageous for the reconstruction of light-sheet microscopy data.

A number of groups consider the problem of reconstruction from multiple views in the context of light-sheet microscopy. In [17], the problem of multi-view reconstruction under a spatially varying blurring operator for 3D light-sheet data is considered. They divide the image into small blocks where they perform deconvolution using spatially invariant PSFs estimated from beads (and interpolated PSFs in regions where there are no beads). In [18], the authors extend the Richardson–Lucy algorithm to the multi-view reconstruction problem in a Bayesian setting. While it allows for different PSFs for each view (estimated using beads), this work does not consider spatial variations of the PSF. While using data from multiple views improves the quality of the reconstruction, these approaches are agnostic to the physics of the microscope.

The approach taken in [19, 20] involves directly measuring the spatially varying PSF in different regions of the field of view using an additional hardware module installed with the microscope, and then deconvolving the image in each region using the measured PSF. In particular, [20] employs a sophisticated tiling-based deconvolution method based on the Richardson–Lucy algorithm and a formulation similar to a convolutional neural network in order to avoid artefacts usually caused by stitching tiles deconvolved with different PSFs.

Taking an approach similar in spirit to ours, the authors of [21] model the effective PSF of a light-sheet microscope, which is then plugged into a regularised version of the Richardson–Lucy algorithm for deconvolution. However, while they model the detection PSF and the light-sheet separately, they assume that the effective PSF of the microscope is spatially invariant and given by the point-wise product of the two PSFs. In contrast, we do not take this simplifying step in our modelling, as we consider that the relationship between the two PSFs plays an important role in the resulting blur of the image.

The work of Guo et al. [22] uses a modified Richardson–Lucy algorithm implemented on GPU to improve the speed of convergence, further improved by the use of a deep neural network, which is a promising approach.

Moreover, in [23] the authors introduce an image formation model similar to the one described in the present manuscript. However, the regions of the resulting PSF where the light-sheet is out of focus are discarded, hence the overall PSF is approximated by a constant PSF and deconvolution is then performed using the ADMM algorithm. In Cueva et al. [24], a mathematical model which takes into account image fusion with two-sided illumination is derived from first principles. However, it is restricted to 2D and the method is not applied to real data.

Lastly, regarding the mixed Gaussian–Poisson noise fidelity, our method follows the infimal convolution variational approach described in [9], with the additional light-sheet blurring operator. The same inverse problem, without the blurring operator, is solved in [25] albeit using an ADMM algorithm for the minimisation.

Paper Structure

The paper is organised as follows. In Sect. 2, we introduce a mathematical model of the image formation process in a light-sheet microscope. This model describes how the sample is blurred by the excitation illumination together with the detection objective PSF. Optical aberrations of the system are modelled using Zernike polynomials in the detection PSF, which we discuss in Sect. 2.3. In Sect. 3, we define the mathematical setting for the deconvolution problem and we state an inverse problem using a data fidelity given by an infimal convolution of the individual Gaussian and Poisson data fidelities. We discuss convergence rates and a discrepancy principle for choosing the regularisation parameter also in Sect. 3. In Sect. 4, we describe how PDHG is applied to this inverse problem, with details of the implementation of the proximal operator and the convex conjugate of the joint Kullback–Leibler divergence. Finally, we validate our method with numerical experiments with both simulated and real data in Sect. 5, before concluding and giving a few directions for future work in Sect. 6.

Forward Model

The first contribution of the current work is a model of the image formation process in light-sheet microscopy. By modelling the excitation light-sheet and the detection PSF separately and their interaction in a way that replicates the physics of the microscope, we are able to accurately simulate the spatially varying PSF of the imaging system. We then incorporate this knowledge as the forward model in an inverse problem, which we solve to remove the noise and blur in light-sheet microscopy data. In this section, we describe the image formation process and the PSF model.

Image Formation Model

A light-sheet propagated along the x direction is focused by the excitation objective at an axial position z=z0 and the local light-sheet intensity l is modelled by the incoherent point spread function (PSF) of the excitation objective. The sample with local density of fluorophores u emits photons proportionally to the local intensity l of the light-sheet. These photons are then collected by a detection objective, whose action on the illuminated sample is modelled as a convolution with its PSF h. For clarity, see Fig. 3 for the directions of the axes. Finally, the sensor conjugated with the image plane z0 collects photons and converts them to digital values for storage. Consequently, the recorded image is corrupted by a combination of Gaussian and Poisson noise. We can see here again how the local variation of the light-sheet will result in a spatially varying blur and spatially varying illumination intensity in the captured image. This process is then repeated for each z0 to obtain the measured data f.

Fig. 3.

Fig. 3

Coordinate axes showing the light-sheet beam direction along the x axis and the detection direction along the z axis

More specifically, we model $u$, $f$, $l$ and $h$ as functions defined on $\Omega\subset\mathbb{R}^3$, a rectangular domain of dimensions $\Omega_x\times\Omega_y\times\Omega_z$ (in $\mu$m) with $\Omega=[-\tfrac{\Omega_x}{2},\tfrac{\Omega_x}{2}]\times[-\tfrac{\Omega_y}{2},\tfrac{\Omega_y}{2}]\times[-\tfrac{\Omega_z}{2},\tfrac{\Omega_z}{2}]$. For the sample $u$, the light-sheet $l$ and the detection objective PSF $h$, the measured data $f$ is given by:

$$f(x,y,z)=\iiint u(s,t,w-z)\,l(s,t,w)\,h(x-s,y-t,-w)\,\mathrm{d}s\,\mathrm{d}t\,\mathrm{d}w. \tag{2.1}$$

The detection PSF h is given by

$$h(x,y,z)=g_\sigma*\left|\iint p_Z(\kappa_x,\kappa_y)\,e^{2i\pi z\sqrt{(n/\lambda_h)^2-\kappa_x^2-\kappa_y^2}}\,e^{2i\pi(\kappa_x x+\kappa_y y)}\,\mathrm{d}\kappa_x\,\mathrm{d}\kappa_y\right|^2 \tag{2.2}$$

and the light-sheet l is the y-averaged beam PSF lbeam:

$$l_{\mathrm{beam}}(x,y,z)=\left|\iint p_0(\kappa_z,\kappa_y)\,e^{2i\pi x\sqrt{(n/\lambda_l)^2-\kappa_z^2-\kappa_y^2}}\,e^{2i\pi(\kappa_z z+\kappa_y y)}\,\mathrm{d}\kappa_z\,\mathrm{d}\kappa_y\right|^2, \tag{2.3}$$

where $n$ is the refractive index, $\lambda_h,\lambda_l$ are the wavelengths corresponding to the detection objective and the light-sheet beam, respectively, and $g_\sigma$ represents a Gaussian blur. Lastly, $p_Z(\kappa_x,\kappa_y)$ and $p_0(\kappa_z,\kappa_y)$ are the pupil functions for the detection PSF and the light-sheet beam, respectively, both given by:

$$p(\kappa_x,\kappa_y)=\begin{cases}e^{i\varphi(\kappa_x,\kappa_y)},&\text{if }\kappa_x^2+\kappa_y^2\le(\mathrm{NA}_i/\lambda_i)^2,\\[2pt]0,&\text{otherwise},\end{cases} \tag{2.4}$$

for their respective wavelengths, $\lambda_i=\lambda_h$ or $\lambda_i=\lambda_l$, and numerical apertures, $\mathrm{NA}_i=\mathrm{NA}_h$ or $\mathrm{NA}_i=\mathrm{NA}_l$, where the phase of the light-sheet pupil $p_0$ is equal to zero and the phase of the detection PSF pupil $p_Z$ is an approximation of the optical aberrations, written as an expansion in a Zernike polynomial basis. The Gaussian blur $g_\sigma$ in (2.2) is a technical detail that enables better fitting of the detection PSF $h$ to the optical aberrations seen in bead data, an idea introduced in [26]. More details about the pupil functions, the aberration fitting using Zernike polynomials and the Gaussian blur $g_\sigma$ will be given in Sect. 2.3. In general, the NA of the excitation sheet is much lower than the NA of the detection lens. We note that the overall process is not translation invariant and cannot be modelled by a convolution operator.

Note that both the detection PSF h and the light-sheet PSF have a similar formulation derived from:

$$\mathrm{PSF}(x,y,z)=\left|\iint p(\kappa_x,\kappa_y)\,e^{2i\pi z\sqrt{(n/\lambda_i)^2-\kappa_x^2-\kappa_y^2}}\,e^{2i\pi(\kappa_x x+\kappa_y y)}\,\mathrm{d}\kappa_x\,\mathrm{d}\kappa_y\right|^2, \tag{2.5}$$

which includes the pupil function for modelling aberrations and a defocus term before taking the Fourier transform (see, for example [27, 28]). In addition, the actual light-sheet which illuminates a slice of the sample is obtained by rapid scanning of the illumination beam, which we model by y-averaging the illumination PSF lbeam given in (2.3) and repeating it in the y direction for the full length of the sample.

In practice, the image formation process modelled by (2.1) is discretised at the point of recording by the camera sensor in the xy plane and by the step size of the light-sheet in the z direction. If the camera has a resolution of Nx×Ny pixels and the light-sheet illuminates the sample at Nz distinct steps, the model (2.1) becomes:

$$\tilde f_{i,j,k}=\frac{1}{\tilde C}\sum_{i'=1}^{N_x}\sum_{j'=1}^{N_y}\sum_{k'=1}^{N_z}\tilde l_{i',j',k'}\,\tilde u_{i',j',k'-k}\,\tilde h_{i-i',j-j',-k'}, \tag{2.6}$$

for all $i=1,\dots,N_x$, $j=1,\dots,N_y$, $k=1,\dots,N_z$, and a normalisation constant $\tilde C$, where $\tilde u,\tilde f,\tilde l,\tilde h\in\mathbb{R}^{N_x\times N_y\times N_z}$ are the discretised versions of $u$, $f$, $l$, $h$, respectively. Similarly, the sampling performed by the camera sensor leads to a discretisation of the Fourier space and the use of the discrete Fourier transform in the PSF and light-sheet models (2.2) and (2.3). Lastly, in our implementation we normalise $\tilde h$ so that $\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\sum_{k=1}^{N_z}\tilde h_{i,j,k}=1$ and choose the normalisation constant $\tilde C$ so that the norm of the resulting operator is equal to one.

Derivation of the Model

Let $l$, $u$, $h$ be defined as in Sect. 2.1, with $h$ and $l$ centred at the origin and $l$ translation invariant in the $y$ direction and symmetric around the $yz$ plane. For a fixed $z_0\in[-\tfrac{\Omega_z}{2},\tfrac{\Omega_z}{2}]$, we take the following steps, which replicate the inner workings of a light-sheet microscope:

  1. Image the sample at $z=z_0$: centre the sample $u$ at $z_0$ and multiply the result with the light-sheet $l$:
     $$F(x,y,z;z_0)=u(x,y,z-z_0)\cdot l(x,y,z), \tag{2.7}$$
  2. Convolve with the objective PSF $h$:
     $$C(x,y,z;z_0)=(F*h)(x,y,z;z_0)=\iiint F(s,t,w;z_0)\,h(x-s,y-t,z-w)\,\mathrm{d}s\,\mathrm{d}t\,\mathrm{d}w, \tag{2.8}$$
  3. Slice at $z=0$:
     $$f(x,y,z_0)=C(x,y,z;z_0)\big|_{z=0}, \tag{2.9}$$

which leads to:

$$f(x,y,z_0)=\iiint u(s,t,w-z_0)\,l(s,t,w)\,h(x-s,y-t,-w)\,\mathrm{d}s\,\mathrm{d}t\,\mathrm{d}w. \tag{2.10}$$

This is the same as model (2.1), where we substitute z for z0. Note that, if there are no aberrations in h or other sources of asymmetry in the z direction, we could simply write h(x-s,y-t,w) instead.

For a discretisation of the domain using a 3D grid with $N_x\times N_y$ pixels and $N_z$ light-sheet steps, the forward model can be computed by following the three steps above for each $k=1,\dots,N_z$, where we perform the convolutions using the fast Fourier transform (FFT), resulting in $O(N_xN_yN_z^2\log(N_xN_yN_z))$ operations.
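The three steps above translate directly into code. The following NumPy sketch is our own illustration (function and variable names are ours): it applies steps 1–3 for every light-sheet position $k$, using circular FFT convolution for brevity, where a practical implementation would zero-pad to avoid wrap-around.

```python
import numpy as np

def forward_model(u, l, h):
    """Discrete light-sheet forward model following steps 1-3.

    u, l, h are (Nx, Ny, Nz) arrays; the PSF h is assumed to be stored
    with its centre at index (0, 0, 0) (i.e. ifftshifted), so that the
    circular convolution below is centred correctly.
    """
    Nx, Ny, Nz = u.shape
    f = np.zeros((Nx, Ny, Nz))
    H = np.fft.fftn(h)  # detection OTF, computed once
    for k in range(Nz):
        # Step 1: centre the sample at step k (u[., ., w-k]) and
        # multiply by the light-sheet.
        F = l * np.roll(u, k, axis=2)
        # Step 2: 3D circular convolution with the detection PSF.
        C = np.fft.ifftn(np.fft.fftn(F) * H).real
        # Step 3: keep the slice conjugate to the image plane (z index 0).
        f[:, :, k] = C[:, :, 0]
    return f
```

With a delta detection PSF and a constant light-sheet, the operator reduces to a pure shift/reflection in $z$, which gives a convenient sanity check for the indexing.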

Alternatively, we can rewrite the last integral above as:

$$f(x,y,z_0)=\int\big(K(\cdot,\cdot,w)*h(\cdot,\cdot,-w)\big)(x,y)\,\mathrm{d}w, \tag{2.11}$$

where

$$K(x,y,w)=l(x,y,w)\,u(x,y,w-z_0), \tag{2.12}$$

and the convolution in (2.11) is a 2D convolution in $(x,y)$:

$$\big(K(\cdot,\cdot,w)*h(\cdot,\cdot,-w)\big)(x,y)=\iint K(s,t,w)\,h(x-s,y-t,-w)\,\mathrm{d}s\,\mathrm{d}t. \tag{2.13}$$

In terms of the number of FFTs performed on a discretised $N_x\times N_y\times N_z$ grid, this alternative formulation requires $O(N_xN_yN_z^2\log(N_xN_y))$ operations.
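The reformulation (2.11)–(2.13) can be sketched in the same conventions (again our own illustration with hypothetical names; it computes the same discrete operator as the three-step version, but with 2D FFTs per plane):

```python
import numpy as np

def forward_model_2d(u, l, h):
    """Forward model via 2D convolutions: for each light-sheet step k,
    f(., ., k) = sum_w K(., ., w) *2d h(., ., -w), with
    K(x, y, w) = l(x, y, w) u(x, y, w - k). Circular convolution; h is
    stored with its centre at index (0, 0, 0)."""
    Nx, Ny, Nz = u.shape
    # 2D OTFs of the z-reflected PSF h(., ., -w), computed once.
    Hneg = np.fft.fft2(h[:, :, (-np.arange(Nz)) % Nz], axes=(0, 1))
    f = np.zeros((Nx, Ny, Nz))
    for k in range(Nz):
        K = l * np.roll(u, k, axis=2)
        KF = np.fft.fft2(K, axes=(0, 1))
        # integrate (sum) the per-plane 2D convolutions over w
        f[:, :, k] = np.fft.ifft2(KF * Hneg, axes=(0, 1)).real.sum(axis=2)
    return f
```

Precomputing the per-plane OTFs of $h(\cdot,\cdot,-w)$ is what removes the $\log N_z$ factor from the operation count.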

Point Spread Function Model

While both the light-sheet profile and the detection PSF are based on the same model of a defocused system (2.5) introduced in [27], note that our definition of h in (2.2) includes an additional convolution operation with a Gaussian gσ and a pupil function pZ with a nonzero phase. Let us turn to why this is the case.

It is well known that optical aberrations hamper results based on deconvolution with theoretical PSFs. In light-sheet microscopy, the effect of aberrations is more visible away from the centre, as shown for example in the bead image in Fig. 2, or in the more detailed example beads in Fig. 4. It is, therefore, required that we model the (spatially invariant) aberrations of the detection lens.

Fig. 4.

Fig. 4

Examples of beads and light-sheet profile. The bead in a is cropped from the centre of Fig. 2a and the bead in b is cropped from the right-hand side of Fig. 2a. The maximum intensity projections are taken in the x-y plane (top left), the z-y plane (top right) and the x-z plane (bottom left)

The general PSF model (2.5), with the phase of the pupil function equal to zero, does not take optical aberrations into account and therefore it is not an accurate representation of the objective PSF h. For example, a PSF calculated using (2.5) with zero phase of the pupil and the parameters of the detection objective, shown in Fig. 5, does not resemble the actual bead images in the data in Fig. 4.

Fig. 5.

Fig. 5

Objective PSF used in our model, with no aberrations (maximum intensity projections taken in the same way as in Fig. 4)

There has been extensive work on the problem of phase reconstruction in the literature [26, 29, 30], but here we take a more straightforward approach using Zernike polynomials to include aberrations in the PSF [31], as follows. Let hz be the objective PSF calculated using (2.5) with Zernike polynomials in the phase of the pupil function:

$$h_z(x,y,z;c)=\left|\iint p_Z(\kappa_x,\kappa_y;c)\,e^{2i\pi z\sqrt{(n/\lambda_h)^2-\kappa_x^2-\kappa_y^2}}\,e^{2i\pi(\kappa_x x+\kappa_y y)}\,\mathrm{d}\kappa_x\,\mathrm{d}\kappa_y\right|^2, \tag{2.14}$$

where $p_Z(\kappa_x,\kappa_y;c)$ is the pupil function with $N_Z$ Zernike polynomials in the phase:

$$p_Z(\kappa_x,\kappa_y;c)=\begin{cases}\exp\Big(i\sum_{j=1}^{N_Z}c_jZ_j(\kappa_x,\kappa_y)\Big),&\text{if }\kappa_x^2+\kappa_y^2\le(\mathrm{NA}_h/\lambda_h)^2,\\[2pt]0,&\text{otherwise},\end{cases} \tag{2.15}$$

and $c=[c_1,\dots,c_{N_Z}]^T$ are the coefficients corresponding to the polynomials, for some integer $N_Z>0$.

Moreover, let hzb be the blurred PSF obtained by convolving hz with a Gaussian gσ with width σ:

$$h_{zb}(x,y,z;c,\sigma)=h_z(x,y,z;c)*g_\sigma. \tag{2.16}$$

This allows us to obtain a better approximation of the objective PSF [26]. The parameters c and σ are calculated by solving the least-squares problem

$$\min_{c,\sigma}\ \left\|h_{zb}(c,\sigma)\odot b-h_{\mathrm{bead}}\right\|_2^2 \tag{2.17}$$
$$\text{subject to}\quad c\in[-B_Z,B_Z]^{N_Z},\ \sigma>0, \tag{2.18}$$

for some $B_Z>0$, where $h_{\mathrm{bead}}$ is a bead image containing the aberrations that one wants to capture in the fitted detection PSF (for example the bead in Fig. 4b), $\odot$ denotes the point-wise product, and $b$ is equal to one inside a sphere whose radius equals that of the bead (a parameter that is provided) and zero outside. This takes into account the non-negligible size of the beads used to generate the data.

In the implementation of the fitting procedure, we normalise both the bead image hbead and the simulated PSF hzb by their maximum values before calculating their error, and we include two additional parameters, scaling and shift, to ensure a better fit of the intensity values (not shown here for simplicity of the presentation).
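The constrained least-squares fit (2.17)–(2.18) can be sketched with `scipy.optimize.least_squares`. This is our own hedged sketch: the callback `simulate_hz`, standing in for the Zernike PSF (2.14)–(2.15), and all names are hypothetical, and the additional scaling and shift parameters mentioned above are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import least_squares

def fit_psf(h_bead, b, simulate_hz, NZ=15, BZ=3.0):
    """Fit Zernike coefficients c and blur width sigma to a bead image.

    simulate_hz(c) -> aberrated PSF h_z (hypothetical callback for
    (2.14)-(2.15)); b is the binary bead-support mask from (2.17).
    """
    h_bead = h_bead / h_bead.max()  # normalise the bead image

    def residual(params):
        c, sigma = params[:NZ], params[NZ]
        h_zb = gaussian_filter(simulate_hz(c), sigma)  # blurred PSF (2.16)
        h_zb = h_zb / h_zb.max()
        return (h_zb * b - h_bead).ravel()             # misfit in (2.17)

    x0 = np.concatenate([np.zeros(NZ), [1.0]])
    lb = np.concatenate([-BZ * np.ones(NZ), [1e-6]])   # box constraints (2.18)
    ub = np.concatenate([BZ * np.ones(NZ), [np.inf]])
    res = least_squares(residual, x0, bounds=(lb, ub))
    return res.x[:NZ], res.x[NZ]
```

The bound-constrained trust-region method behind `least_squares` handles the box (2.18) directly, so no reparametrisation of $c$ or $\sigma$ is needed.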

The best choices of the number of Zernike polynomial basis elements $N_Z$ and of the bound $B_Z$ on the coefficients $c$ depend on the data $h_{\mathrm{bead}}$ and on how accurate a fit is required in the deconvolution step. In general, at least the first 15 Zernike polynomials are needed to capture the main optical aberrations, such as spherical aberration and astigmatism. In our experiments, for the bead shown in Fig. 4b, we found that $N_Z=15$ and $B_Z=3$ are an appropriate choice. The Zernike polynomials used and their resulting coefficients are shown in Table 1 and in Fig. 6. The resulting PSF is the detection PSF model (2.2) and is shown in Fig. 7.

Table 1.

The first 15 Zernike polynomials (in polar coordinates) and their coefficients used in $h_z$

Zj    Polynomial                    cj
Z1    ρ cos θ                       −0.7763
Z2    ρ sin θ                       −0.0460
Z3    2ρ² − 1                       −2.3608
Z4    ρ² cos 2θ                     −1.3001
Z5    ρ² sin 2θ                      0.2024
Z6    (3ρ² − 2) ρ cos θ             −0.3999
Z7    (3ρ² − 2) ρ sin θ              0.0348
Z8    6ρ⁴ − 6ρ² + 1                 −1.2112
Z9    ρ³ cos 3θ                     −0.1521
Z10   ρ³ sin 3θ                     −0.0466
Z11   (4ρ² − 3) ρ² cos 2θ           −0.0930
Z12   (4ρ² − 3) ρ² sin 2θ            0.0427
Z13   (10ρ⁴ − 12ρ² + 3) ρ cos θ     −0.0117
Z14   (10ρ⁴ − 12ρ² + 3) ρ sin θ     −0.0581
Z15   20ρ⁶ − 30ρ⁴ + 12ρ² − 1        −0.0633
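For reference, the polynomials in Table 1 can be evaluated directly on the pupil; the following small NumPy helper is our own (it simply transcribes the table):

```python
import numpy as np

def zernike_table1(rho, theta):
    """Evaluate the 15 Zernike polynomials of Table 1 (polar coordinates).

    Returns an array of shape (15,) + rho.shape."""
    r2, r3, r4, r6 = rho**2, rho**3, rho**4, rho**6
    return np.stack([
        rho * np.cos(theta),                        # Z1  tilt
        rho * np.sin(theta),                        # Z2  tilt
        2*r2 - 1,                                   # Z3  defocus
        r2 * np.cos(2*theta),                       # Z4  astigmatism
        r2 * np.sin(2*theta),                       # Z5  astigmatism
        (3*r2 - 2) * rho * np.cos(theta),           # Z6  coma
        (3*r2 - 2) * rho * np.sin(theta),           # Z7  coma
        6*r4 - 6*r2 + 1,                            # Z8  primary spherical
        r3 * np.cos(3*theta),                       # Z9  trefoil
        r3 * np.sin(3*theta),                       # Z10 trefoil
        (4*r2 - 3) * r2 * np.cos(2*theta),          # Z11 secondary astigmatism
        (4*r2 - 3) * r2 * np.sin(2*theta),          # Z12 secondary astigmatism
        (10*r4 - 12*r2 + 3) * rho * np.cos(theta),  # Z13 secondary coma
        (10*r4 - 12*r2 + 3) * rho * np.sin(theta),  # Z14 secondary coma
        20*r6 - 30*r4 + 12*r2 - 1,                  # Z15 secondary spherical
    ])

def pupil_phase(rho, theta, c):
    """Pupil phase as the expansion sum_j c_j Z_j, cf. (2.15), for rho <= 1."""
    return np.tensordot(np.asarray(c), zernike_table1(rho, theta), axes=1)
```

Combined with the fitted coefficients of Table 1, `pupil_phase` gives the phase of $p_Z$ on the pupil disc.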

Fig. 6.

Fig. 6

The Zernike polynomials used in the PSF hz, with image range [-1,1]

Fig. 7.

Fig. 7

Fitted PSF using Zernike polynomials. In panel c, we can see the benefits of using the Gaussian blur gσ in obtaining an accurate approximation of the bead in a

Finally, we note that a more thorough approach to choosing the Zernike basis is possible, by using multiple bead images for fitting the Zernike coefficients and comparing the quality of the fit for different values of NZ and BZ. Alternatively, one could average multiple beads and perform the fitting procedure described above on the averaged bead. In both cases, it is worth mentioning that, since the optical aberrations vary within the sample image, we would only be able to fit the general shape of the PSF rather than the sharper features present in each bead, effectively fitting the low frequency information in the beads. In the end, this would achieve a similar effect to the Gaussian blur gσ that we use in the fitting process. Moreover, one can employ more advanced techniques such as the ones described in [26, 29, 30] for estimating the pupil function, which can be plugged in to our image formation model. However, such an analysis focused on the pupil function is beyond the scope of the present work.

Convolution with Spatially Varying Kernel

Having introduced the image formation model for a light-sheet microscope (2.1) as well as the models for the individual PSFs, it is worth expanding on the source of spatial variability that we tackle in this work.

First, note that with the change of variable $w\mapsto w+z$, we can rewrite the model (2.1) as:

$$f(x,y,z)=\iiint u(s,t,w)\,l(s,t,w+z)\,h(x-s,y-t,-w-z)\,\mathrm{d}s\,\mathrm{d}t\,\mathrm{d}w,$$

so $f(x,y,z)$ is the convolution of $u(x,y,z)$ with the spatially varying kernel $\tilde h(x,y,z;s,t,w)$:

$$f(x,y,z)=\iiint u(s,t,w)\,\tilde h(x,y,z;s,t,w)\,\mathrm{d}s\,\mathrm{d}t\,\mathrm{d}w, \tag{2.19}$$

where

$$\tilde h(x,y,z;s,t,w)=l(s,t,w+z)\,h(x-s,y-t,-w-z) \tag{2.20}$$

gives the expression of a kernel $\tilde h(x,y,z;\cdot,\cdot,\cdot)$ which varies with its centre $(x,y,z)$.

Therefore, the model presented in this section describes the spatial variation of the overall PSF of the system $\tilde h$, as a consequence of the interaction between the light-sheet beam PSF $l$ and the detection PSF $h$. We highlight that $l$ and $h$ are not themselves spatially varying. By the process described in Sect. 2.2, this interaction (and spatial variability) is modelled explicitly. This is in contrast to approaches such as [16], where the spatial variability of the PSF is learned from the data and encoded in the black-box mechanism of an artificial neural network.

In practice, a second source of spatial variability of the PSF may be the detection PSF h, due to the optical aberrations that can vary within the sample image. As described in Sect. 2.3, in this work we do not account for this potential spatial variability of h, and we fit one pupil function to the bead data.
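The kernel (2.20) can be sampled numerically at a given centre $(x,y,z)$; a small sketch (our own helper, with circular indexing at the boundaries for simplicity):

```python
import numpy as np

def local_kernel(l, h, x, y, z):
    """Sample the spatially varying kernel h~(x, y, z; s, t, w) of (2.20)
    on the full (s, t, w) grid, for a fixed centre (x, y, z).

    l and h are (Nx, Ny, Nz) arrays; indices wrap circularly."""
    Nx, Ny, Nz = l.shape
    s = np.arange(Nx)[:, None, None]
    t = np.arange(Ny)[None, :, None]
    w = np.arange(Nz)[None, None, :]
    # h~ = l(s, t, w + z) * h(x - s, y - t, -w - z)
    return l[s, t, (w + z) % Nz] * h[(x - s) % Nx, (y - t) % Ny, (-w - z) % Nz]
```

Inspecting `local_kernel` at different centres visualises how the effective PSF degrades away from the light-sheet focus.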

Inverse Problem

Problem Statement

In this section we formally state the inverse problem of deblurring a light-sheet microscopy image. Let $\Omega\subset\mathbb{R}^3$ be a bounded Lipschitz domain and let $L:L^p(\Omega)\to L^2(\Omega)$ be the forward operator defined by (2.1). Here $1<p<3/2$ is chosen such that the embedding of the space $\mathrm{BV}$ is compact [32]. Clearly, $L$ is linear.

We consider the following inverse problem

$$Lu=\bar f, \tag{3.1}$$

where $\bar f\in L^2(\Omega)$ is the exact (noise-free) data. As outlined in Sect. 2.1, the measurements in light microscopy are corrupted by a combination of Poisson and Gaussian noise. More precisely, the measurement is given by $f=v+w$, where $v\sim\mathrm{Pois}(\bar f)$ is a Poisson distributed random variable with mean $\bar f$ and $w$ represents additive zero-mean Gaussian noise. We do not model the Gaussian noise statistically and instead, in the spirit of (deterministic) variational regularisation, assume that $w\in L^2(\Omega)$ is a fixed perturbation with $\|w\|_{L^2}\le\sigma_G$ for some known $\sigma_G>0$. Poisson noise is typically modelled using the Kullback–Leibler divergence as the data fidelity term [33, 34].
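Such mixed-noise measurements are easy to simulate; a minimal sketch (our own helper) drawing $f=v+w$ with $v\sim\mathrm{Pois}(\bar f)$ and zero-mean Gaussian $w$:

```python
import numpy as np

def mixed_noise(f_bar, sigma_g, rng=None):
    """Simulate the measurement f = v + w with v ~ Pois(f_bar) and
    w zero-mean Gaussian with per-pixel standard deviation sigma_g."""
    rng = np.random.default_rng(rng)
    v = rng.poisson(f_bar).astype(float)          # Poisson component
    w = rng.normal(0.0, sigma_g, size=f_bar.shape)  # Gaussian component
    return v + w
```

Note that the variance of such a measurement is $\bar f+\sigma_G^2$ per pixel, so the two noise components can dominate in different intensity regimes.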

Let us give a brief justification of the inverse problem formulation described in this section [9, 35], from a Bayesian perspective. First, by using the Poisson and Gaussian probability density functions, we have that $p(v\,|\,u)=\frac{(Lu)^v e^{-Lu}}{v!}$ and $p(f\,|\,v)=\frac{1}{\sqrt{2\pi}\,\sigma_G}e^{-\frac{1}{2}\frac{\|f-v\|^2}{\sigma_G^2}}$, and from Bayes’ theorem and conditional probability:

$$p(u,v\,|\,f)=\frac{p(f\,|\,v)\,p(v\,|\,u)\,p(u)}{p(f)}, \tag{3.2}$$

where we used that $p(f\,|\,u,v)=p(f\,|\,v)$. Moreover, we assume that the prior is a Gibbs distribution $p(u)=e^{-\alpha J(u)}$ for a convex functional $J(u)$, which we will introduce later. To obtain a maximum a posteriori estimate of $u$ and $v$ (i.e. maximise the posterior distribution $p(u,v\,|\,f)$), we minimise the negative log of (3.2) and, after discarding the denominator $p(f)$ and using the Stirling approximation for the factorial, $\log v!\approx v\log v-v$, we obtain the minimisation problem:

$$\operatorname*{arg\,min}_{u,v}\ \alpha J(u)+\frac{1}{2\sigma_G^2}\|f-v\|^2+\int_\Omega\Big(v\log\frac{v}{Lu}+Lu-v\Big)\,\mathrm{d}x, \tag{3.3}$$

where the first term is the regularisation term and the remaining terms form the data fidelity term.

We will now describe the formal mathematical setting for (3.3) in the context of variational regularisation. This will allow us to show well-posedness of the model, establish convergence rates of the solution with respect to the noise in the measurements and to introduce a discrepancy principle for choosing the value of the regularisation parameter α.

First, note that in (3.3), we can perform the minimisation over $v$ only on the data fidelity part of the objective, which can be written as an infimal convolution of the two separate Gaussian and Poisson fidelities. The infimal convolution of two functionals $\varphi_1,\varphi_2$ on $L^2$ is defined as:

$$(\varphi_1\,\square\,\varphi_2)(f)=\inf_{v\in L^2(\Omega)}\varphi_1(f-v)+\varphi_2(v), \tag{3.4}$$

for fL2(Ω). Therefore, we define the following data fidelity term, as proposed in [9]:

$$\Phi(\bar f,f):=\inf_{v\in L^2_+(\Omega)}\frac{1}{2}\|f-v\|_{L^2}^2+D_{\mathrm{KL}}(v,\bar f), \tag{3.5}$$

for $f\in L^2(\Omega)$ and $\bar f\in L^1_+(\Omega)$, where $L^{1,2}_+(\Omega)$ denotes the positive cone in $L^{1,2}(\Omega)$ (that is, functions $f\in L^{1,2}(\Omega)$ such that $f\ge 0$ a.e.) and $D_{\mathrm{KL}}$ denotes the Kullback–Leibler divergence, which we define as follows:

$$D_{\mathrm{KL}}(v,\bar f)=\int_\Omega\Big(v(x)\log\frac{v(x)}{\bar f(x)}-v(x)+\bar f(x)\Big)\,\mathrm{d}x. \tag{3.6}$$

We note that $\int_\Omega v(x)\log v(x)\,\mathrm{d}x<\infty$ for $v\in L^2$, since $L^2$ is continuously embedded into the Orlicz space $L\log L$ of functions of finite entropy [36, 37]:

$$L\log L(\Omega):=\Big\{f\in L^1(\Omega):\int_\Omega f(x)\big(\log f(x)\big)_+\,\mathrm{d}x<\infty\Big\}, \tag{3.7}$$

where $(\cdot)_+=\max\{\cdot,0\}$ denotes the positive part.
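As an aside, in a discrete setting the inner minimisation in (3.5) decouples pixelwise and, with the KL integrand $v\log(v/\bar f)-v+\bar f$, the first-order condition $v-f+\log(v/\bar f)=0$ has the closed-form solution $v=W(\bar f\,e^{f})$, where $W$ is the Lambert W function. The following sketch (our own derivation, suitable only for moderate values of $f$ to avoid overflow in the exponential) evaluates the fidelity this way:

```python
import numpy as np
from scipy.special import lambertw

def infconv_fidelity(f_bar, f):
    """Evaluate Phi(f_bar, f) = inf_{v >= 0} 1/2 ||f - v||^2 + KL(v, f_bar)
    pixelwise, using the Lambert-W closed form of the minimiser."""
    # Stationarity: v + log v = f + log f_bar  =>  v exp(v) = f_bar exp(f).
    v = np.real(lambertw(f_bar * np.exp(f)))
    kl = v * np.log(v / f_bar) - v + f_bar
    return 0.5 * np.sum((f - v) ** 2) + np.sum(kl), v
```

Comparing the result against a brute-force scalar minimisation of the same objective confirms the closed form.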

A proof of the following result can be found in [9], but we provide it here for readers’ convenience.

Proposition 3.1

(Exactness of the infimal convolution) For any $\bar f\in L^1_+$ such that $\int_\Omega\bar f\,\mathrm{d}x=1$, there exists a unique solution $v=v(\bar f)$ of (3.5), that is, the infimal convolution is exact. Moreover, the functional $\Phi(\bar f,\cdot):L^2\to\mathbb{R}_+\cup\{+\infty\}$ is proper, convex and lower semicontinuous.

Proof

Fix $\bar f\in L^1_+$ such that $\int_\Omega\bar f\,\mathrm{d}x=1$. Then, (3.5) is the infimal convolution of the following two functionals on $L^2$:

$$\varphi(g):=\chi_{L^2_+}(g)+D_{\mathrm{KL}}(g,\bar f),\qquad\psi(g):=\frac{1}{2}\|g\|_{L^2}^2,\qquad g\in L^2(\Omega),$$

where χ denotes the characteristic function. The function φ is proper, convex, non-negative and lower semicontinuous, while ψ is proper, convex, lower semicontinuous and coercive. Therefore, by [38, Prop. 12.14], the infimal convolution is exact and is itself a proper, convex and lower semicontinuous function. Uniqueness follows from strict convexity of ψ.

Now we turn our attention to the lower semicontinuity of the functional Φ(·,f) in its first argument.

Proposition 3.2

(Lower semicontinuity) For any f ∈ L²₊(Ω) such that ∫_Ω f dx = 1, the functional Φ(·, f) : L¹(Ω) → R₊ ∪ {+∞} is lower semicontinuous.

Proof

We have

Φ(g, f) = inf_{v ∈ L²₊(Ω)} ½‖f − v‖²_{L²} + D_KL(v, g) = ½‖f − v(g)‖²_{L²} + D_KL(v(g), g) = ½‖f − v(g)‖²_{L²} + ∫_Ω ( v(g)(x) log(v(g)(x)/g(x)) − v(g)(x) + g(x) ) dx + χ_C(g),

where g ∈ L¹(Ω), v(g) is as defined in Proposition 3.1 and C := { g ∈ L¹₊(Ω) : ∫_Ω g dx = 1 }. The characteristic function is lower semicontinuous because C is closed in L¹, and the remaining terms are lower semicontinuous by [9, Thm. 4.1].

The following fact is easily established.

Proposition 3.3

The operator L : L^p(Ω) → L¹(Ω) defined in (2.1) is continuous for any p ∈ [1, ∞). Moreover, if l and h are non-negative and have overlapping support,

supp(l) ∩ supp(h) ≠ ∅,

then 1 ∉ N(L), where 1 is the constant one function and N(L) is the null space of L.

Proof

By (2.1), we have

Lu(x, y, z) = ∫_Ω l(s, t, w) h(x − s, y − t, w) u(s, t, w − z) dμ_stw,

where dμ_stw := ds dt dw. Noting that the light-sheet PSF l and the detection PSF h are bounded from above by some C₁, C₂ > 0, we have that:

‖Lu‖_{L¹(Ω)} ≤ C₁C₂ ∫_Ω ∫_Ω |u(s, t, w − z)| dμ_stw dμ_xyz ≤ C₁C₂ |Ω| ‖u‖_{L¹(Ω)} ≤ C(p) ‖u‖_{L^p(Ω)},

where in the last inequality we applied Hölder's inequality and C(p) is a constant that depends on p (as well as on C₁, C₂ and Ω). Hence, we obtain the first claim.

For the second claim, we observe that

L1(x, y, z) = ∫_Ω l(s, t, w) h(x − s, y − t, w) dμ_stw ≥ 0.

Consider

∫_Ω L1(x, y, z) dμ_xyz = ∫_Ω ∫_Ω l(s, t, w) h(x − s, y − t, w) dμ_stw dμ_xyz

and let B_{l,h} := supp(l) ∩ supp(h). Then, since both l and h are non-negative on Ω, from the last equality above we have that:

∫_Ω L1(x, y, z) dμ_xyz ≥ ∫_Ω ∫_{B_{l,h}} l(s, t, w) h(x − s, y − t, w) dμ_stw dμ_xyz > 0,

which proves the second claim.

Remark 3.1

Our setting, with the measured data f ∈ L²(Ω), differs slightly from [9], where f ∈ L^∞(Ω) was assumed.

We will consider the following variational regularisation problem

minuL+p(Ω)Φ(f,Lu)+αJ(u), 3.8

where Φ is the infimal convolution fidelity as defined in (3.5), J : L^p → R₊ ∪ {+∞} is a regularisation functional, α ∈ R₊ is a regularisation parameter and 1 < p < 3/2. Without loss of generality, we assume that ∫_Ω f̄ dx = 1.

As the regulariser J, we choose the total variation [39]

TV(u) := sup { ∫_Ω u div φ dx : φ ∈ C_c^∞(Ω; R³), ‖φ‖_∞ ≤ 1 }.

By the Rellich–Kondrachov theorem, the space

BV(Ω) := { u ∈ L¹(Ω) : TV(u) < ∞ },  ‖u‖_BV := ‖u‖_{L¹} + TV(u),

is compactly embedded into L^p(Ω) for 1 ≤ p < 3/2 and continuously embedded into L^{3/2}(Ω), since Ω ⊂ R³. Therefore, we consider TV : L^p → R₊ ∪ {+∞},

TV(u) := { TV(u), if u ∈ BV(Ω);  +∞, otherwise }.

We will denote by uTV the TV-minimising solution of (3.1), i.e. a solution that satisfies

TV(u_TV) = min { TV(u) : u ∈ L^p₊(Ω), Lu = f̄ }.

The existence of such a solution is obtained by standard arguments [40]. We will make the reasonable assumption that the TV-minimising solution is positive, i.e. u_TV > 0 a.e. Due to the positivity of the kernels involved in (2.1), it is clear that u > 0 a.e. implies Lu > 0 a.e.

Since by Proposition 3.1 the infimal convolution (3.5) is exact, we can equivalently rewrite (3.8) as follows

min_{u ∈ L^p₊(Ω), v ∈ L²₊(Ω)} ½‖f − v‖²_{L²} + D_KL(v, Lu) + αJ(u). 3.9

Existence of minimisers in (3.8) and (3.9) is obtained by standard arguments [9, Thm. 4.1].

Proposition 3.4

Each of the optimisation problems (3.8) and (3.9) admits a unique minimiser.

We will also need the following coercivity result.

Proposition 3.5

The functional Φ(f, ·) : L¹(Ω) → R₊ ∪ {+∞} is strongly coercive with exponent 2, i.e. there exists a constant C > 0 such that

Φ(f, g) ≥ C ‖f − g‖²_{L¹(Ω)}  for all g ∈ L¹₊(Ω).

Proof

Using Pinsker’s inequality for the Kullback–Leibler divergence, we get

graphic file with name 10851_2022_1100_Equ88_HTML.gif

for some C>0. Note that Pinsker’s inequality assumes that Inline graphic and infΩfdx=Ωgdx=1, which we ensure by definition in (3.6).

Now, using the inequality a² + b² ≥ ½(a + b)², which holds for all a, b ∈ R, the triangle inequality and the continuous embedding L²(Ω) ↪ L¹(Ω) on the bounded domain Ω, we obtain the claim:

Φ(f, g) = ½‖f − v(g)‖²_{L²} + D_KL(v(g), g) ≥ c ( ‖f − v(g)‖²_{L¹} + ‖v(g) − g‖²_{L¹} ) ≥ (c/2) ( ‖f − v(g)‖_{L¹} + ‖v(g) − g‖_{L¹} )² ≥ (c/2) ‖f − g‖²_{L¹},  for some c > 0.

Convergence Rates

Our aim in this section is to establish convergence rates of minimisers of (3.8) as the amount of noise in the data decreases. But first we need to specify what we mean by the amount of noise in our setting.

We argue as follows. The noise in the measurement is generated sequentially: photo-electrons are first counted by the sensor, leading to Poisson noise, and the counts are later collected by the electronic circuit, which generates additive Gaussian noise. Hence, for any exact data f̄ there exists z̄ ∼ Pois(f̄) such that D_KL(z̄, f̄) ≤ γ, where γ > 0 depends on the exposure time t and vanishes as t → ∞ [33]. Further, there exists w ∈ L²(Ω) with ‖w‖_{L²} ≤ σ_G such that f = z̄ + w. Since v = z̄ is feasible in (3.5), we get the following upper bound on the fidelity term (3.5) evaluated at the measurement f and the exact data f̄

Φ(f̄, f) ≤ ½‖f − z̄‖²_{L²} + D_KL(z̄, f̄) ≤ σ_G²/2 + γ. 3.10

The standard tool for establishing convergence rates is the Bregman distance associated with the regulariser J. We briefly recall the necessary definitions.

Definition 3.1

Let X be a Banach space and J : X → R₊ ∪ {+∞} a proper convex functional. The generalised Bregman distance between x, y ∈ X corresponding to the subgradient q ∈ ∂J(y) is defined as follows

D_J^q(x, y) := J(x) − J(y) − ⟨q, x − y⟩.

Here ∂J(y) denotes the subdifferential of J at y ∈ X. If, in addition, p ∈ ∂J(x), the symmetric Bregman distance between x, y ∈ X corresponding to the subgradients p, q is defined as follows

D_J^{p,q}(x, y) := D_J^q(x, y) + D_J^p(y, x) = ⟨p − q, x − y⟩.
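For a concrete instance (our illustration, not from the paper): for the negative entropy J(x) = Σ_i (x_i log x_i − x_i) on the positive orthant, with ∇J(y) = log y, the Bregman distance D_J^q(x, y) recovers exactly the generalised Kullback–Leibler divergence, which can be verified numerically:

```python
import numpy as np

def J(x):
    # negative entropy (smooth on the positive orthant)
    return np.sum(x * np.log(x) - x)

def gradJ(y):
    return np.log(y)

def bregman(x, y):
    # D_J^q(x, y) = J(x) - J(y) - <q, x - y> with q = grad J(y)
    return J(x) - J(y) - np.dot(gradJ(y), x - y)

x = np.array([0.5, 1.0, 2.0])
y = np.array([1.0, 1.5, 0.5])
d = bregman(x, y)
d_sym = bregman(x, y) + bregman(y, x)  # symmetric Bregman distance
```

The symmetric distance equals ⟨∇J(x) − ∇J(y), x − y⟩, matching the definition above.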

To obtain convergence rates, an additional assumption on the regularity of the TV-minimising solution, called the source condition, needs to be made. We use the following variant [41].

Assumption 3.1

(Source condition) There exists an element μL(Ω) such that

q := L∗μ ∈ ∂J(u_TV).

Parameter Choice Rules

Let us summarise what we know about the fidelity function Φ as defined in (3.5), the regularisation functional TV and the forward operator L:

  • Φ(f,·) is proper, convex and coercive (Proposition 3.5) in L1(Ω);

  • Φ(·,·) is jointly convex [42] and lower semicontinuous (Propositions 3.1 and 3.2);

  • Φ(f,g)=0 if and only if f=g;

  • TV : L¹(Ω) → R ∪ {+∞} is proper, convex and lower semicontinuous [32] and its null space is given by N(TV) = span{1}, where 1 denotes the constant one function;

  • TV is coercive on the complement of its null space in L1(Ω) [32];

  • L : L^p(Ω) → L¹(Ω) is continuous and N(TV) ∩ N(L) = {0} (Proposition 3.3).

Using these facts and slightly modifying the proofs from [43], we obtain the following result.

Theorem 3.2

(Convergence rates under a priori parameter choice rules) Let the assumptions made in Sect. 3.1 hold and let the source condition (Assumption 3.1) be satisfied at the TV-minimising solution u_TV. Let u_{σ_G,γ} be a solution of (3.8) and let α be chosen such that

α(σG,γ)=O(σG+γ).

Then,

D_TV^q(u_{σ_G,γ}, u_TV) = O(σ_G + γ),

where q = L∗μ is the subgradient from Assumption 3.1 and σ_G, γ > 0 are as defined in (3.10).

Proof

The proof is similar to [43, Thm. 3.9].

In a similar manner, we can obtain convergence rates for an a posteriori parameter choice rule known as the discrepancy principle [44–46]. Let f be the noisy data and δ > 0 the amount of noise, in the sense that Φ(f̄, f) ≤ δ, where Φ is as defined in (3.5). In our case, δ = σ_G²/2 + γ by (3.10). The discrepancy principle amounts to selecting α = α(f, δ) such that

α(f, δ) = sup { α > 0 : Φ(Lu_α, f) ≤ τδ }, 3.11

where uα is the regularised solution corresponding to the regularisation parameter α and τ>1 is a parameter.
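The mechanics of (3.11) can be illustrated on a toy problem in which the regularised solution has a closed form (this is our own sketch, not the paper's model): for Tikhonov denoising min_u ½‖u − f‖² + (α/2)‖u‖², the solution is u_α = f/(1 + α), the residual increases monotonically in α, and the largest admissible α can be found by bisection:

```python
import numpy as np

def u_alpha(f, alpha):
    # closed-form minimiser of 0.5*||u - f||^2 + 0.5*alpha*||u||^2
    return f / (1.0 + alpha)

def residual(f, alpha):
    # discrepancy of the regularised solution
    return 0.5 * np.sum((u_alpha(f, alpha) - f) ** 2)

def discrepancy_alpha(f, delta, tau=1.1, lo=0.0, hi=1e6, iters=200):
    # largest alpha with residual(f, alpha) <= tau*delta (residual increases with alpha)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(f, mid) <= tau * delta:
            lo = mid
        else:
            hi = mid
    return lo

rng = np.random.default_rng(0)
sigma = 0.1
f = rng.normal(1.0, sigma, size=1000)
delta = 0.5 * f.size * sigma**2   # expected noise level for this toy fidelity
alpha = discrepancy_alpha(f, delta)
```

The same bisection logic applies whenever the discrepancy Φ(Lu_α, f) is monotone in α; only the inner solve for u_α changes.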

Again, by slightly modifying the proofs from [43], we obtain the following result.

Theorem 3.3

(Convergence rates under the discrepancy principle) Let the assumptions made in Sect. 3.1 hold and let the source condition (Assumption 3.1) be satisfied at the TV-minimising solution u_TV. Let u_{σ_G,γ} be a solution of (3.8) with α chosen according to the discrepancy principle (3.11). Then,

D_TV^q(u_{σ_G,γ}, u_TV) = O(σ_G + γ),

where q = L∗μ is the subgradient from Assumption 3.1 and σ_G, γ > 0 are as defined in (3.10).

Proof

The proof is similar to [43, Thm. 4.10].

Solving the Minimisation Problem

PDHG for Infimal Convolution Model

In practice, due to the joint convexity of the Kullback–Leibler divergence, we solve the minimisation problem (3.9), where we treat the reconstructed sample u and the Gaussian denoised image v jointly and, in addition, we impose lower and upper bound constraints on u and v by including the corresponding characteristic functions in the objective:

min_{u,v} αTV(u) + (1/(2σ_G²))‖f − v‖²₂ + D_KL(v, Lu) + χ_{[l₁,l₂]^{2N}}([u, v]^T). 4.1

Note that the objective function in (4.1) is a sum of convex functions (the Kullback–Leibler divergence DKL is jointly convex [47]), and therefore is itself convex. We then write the problem (4.1) as:

min_w G(w) + Σ_{i=1}^m H_i(L_i w), 4.2

where we solve for w = [u, v]^T, m = 3 and:

G(w) = χ_{[l₁,l₂]^{2N}}([u, v]^T), 4.3
H₁(·) = (1/(2σ_G²))‖· − f‖²,  L₁ = [0  I], 4.4
H₂(w) = D_KL(v, u),  L₂ = [[L, 0], [0, I]], 4.5
H₃(·) = α‖·‖₁,  L₃ = [[∂_x, 0], [∂_y, 0], [∂_z, 0]], 4.6

where L is the forward operator corresponding to the image formation model from Sect. 2.1.

Rather than solving the problem (4.2) directly, a common approach is to reformulate it as a saddle point problem using the Fenchel conjugate G∗(y) = sup_z ⟨z, y⟩ − G(z). For a proper, convex and lower semicontinuous function G, we have that G∗∗ = G, so (4.2) can be written as the saddle point problem

min_w sup_{y₁,…,y_m} G(w) + Σ_{i=1}^m ( ⟨y_i, L_i w⟩ − H_i∗(y_i) ), 4.7

and by swapping the min and the sup and applying the definition of the convex conjugate G∗, one obtains the dual of (4.2):

max_{y₁,…,y_m} −G∗( −Σ_{i=1}^m L_i∗ y_i ) − Σ_{i=1}^m H_i∗(y_i). 4.8

The saddle point problem (4.7) is commonly solved using the primal–dual hybrid gradient (PDHG) algorithm [6, 10, 11], and by doing so, both the primal problem (4.2) and the dual (4.8) are solved. We apply the variant of PDHG from [48], which accounts for the sum of composite terms H_i ∘ L_i. Given an initial guess (w₀, y₁,₀, …, y_{m,0}) and parameters σ, τ > 0 and ρ ∈ [ε, 2 − ε] for some ε > 0, each iteration k ≥ 0 consists of the following steps:

1. w̃_{k+1} := prox_{τG}( w_k − τ Σ_{i=1}^m L_i∗ y_{i,k} ),
2. w_{k+1} := ρ w̃_{k+1} + (1 − ρ) w_k,
3. for i = 1, …, m:  ỹ_{i,k+1} := prox_{σH_i∗}( y_{i,k} + σ L_i (2w̃_{k+1} − w_k) ),
4. for i = 1, …, m:  y_{i,k+1} := ρ ỹ_{i,k+1} + (1 − ρ) y_{i,k}, 4.9

where for a proper, lower semicontinuous, convex function G, proxτG is its proximal operator, defined as:

prox_{τG}(y) := argmin_x (1/(2τ))‖x − y‖²₂ + G(x). 4.10

The iterates (w_k)_{k∈N} and (y_{i,k})_{k∈N} (i = 1, …, m) are shown to converge if the parameters σ and τ are chosen such that στ ‖Σ_{i=1}^m L_i∗ L_i‖ ≤ 1 (see [48, Theorem 5.3]). In step 3 in (4.9), we use Moreau's identity to obtain prox_{σH_i∗} from prox_{H_i/σ}:

prox_{σH_i∗}(y) + σ prox_{H_i/σ}(y/σ) = y. 4.11
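As a quick sanity check of (4.11) (our own example, not from the paper): for H = ‖·‖₁, the prox of H/σ is soft-thresholding with threshold 1/σ, and Moreau's identity must return the projection onto the ℓ∞ unit ball, since H∗ is the characteristic function of {‖y‖∞ ≤ 1}:

```python
import numpy as np

def soft_threshold(z, t):
    # prox of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_sigma_H_conj(y, sigma):
    # Moreau's identity (4.11): prox_{sigma H*}(y) = y - sigma * prox_{H/sigma}(y/sigma)
    return y - sigma * soft_threshold(y / sigma, 1.0 / sigma)

y = np.array([-2.5, -0.3, 0.0, 0.7, 4.0])
p = prox_sigma_H_conj(y, sigma=0.8)
```

Note that the result is independent of σ, as expected for the prox of a characteristic function.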

As a stopping criterion, one can use the primal–dual gap, i.e. the difference between the primal objective cost at the current iterate and the dual objective cost at the current (dual) iterate:

D_pd(w, y₁, …, y_m) = G(w) + Σ_{i=1}^m H_i(L_i w) + G∗( −Σ_{i=1}^m L_i∗ y_i ) + Σ_{i=1}^m H_i∗(y_i). 4.12

Due to strong duality, optimality is reached when the primal–dual gap is zero, so a practical stopping criterion is when the gap reaches a certain threshold set in advance.
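A minimal self-contained instance of the iteration (4.9) (with ρ = 1 and a single composite term, m = 1) is one-dimensional TV denoising, min_u ½‖u − f‖² + α‖Du‖₁ with D the forward-difference operator. The sketch below is our own illustration under these simplifications; all names are ours:

```python
import numpy as np

def pdhg_tv_denoise_1d(f, alpha, tau, sigma, iters=10000):
    # PDHG for min_u 0.5*||u - f||^2 + alpha*||D u||_1, D = forward differences
    n = len(f)
    D = np.diff(np.eye(n), axis=0)      # (n-1) x n difference matrix, ||D||^2 <= 4
    u = f.copy()
    y = np.zeros(n - 1)
    for _ in range(iters):
        # primal step: prox of tau*G with G(u) = 0.5*||u - f||^2 (closed form)
        u_new = (u - tau * (D.T @ y) + tau * f) / (1.0 + tau)
        # dual step: prox of sigma*H* with H = alpha*||.||_1, i.e. projection
        # onto [-alpha, alpha], applied at the extrapolated point 2*u_new - u
        y = np.clip(y + sigma * (D @ (2.0 * u_new - u)), -alpha, alpha)
        u = u_new
    return u, y

rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(25), np.ones(25)]) + 0.05 * rng.normal(size=50)
alpha = 0.1
u, y = pdhg_tv_denoise_1d(f, alpha, tau=0.45, sigma=0.45)  # tau*sigma*||D||^2 < 1
```

At a fixed point, the primal step implies u = f − Dᵀy with ‖y‖∞ ≤ α, which is exactly the optimality condition of the problem.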

Lastly, note that the optimisation is performed jointly over both u and v, which introduces a difficulty for the term H₂(L₂w) in Step 3 above, as this requires the proximal operator of the joint Kullback–Leibler divergence D_KL(v, u). Similarly, the computation of the primal–dual gap in (4.12) requires the convex conjugate of the joint Kullback–Leibler divergence. We describe the details of these computations in Sects. 4.2 and 4.3, respectively.

Computing the Proximal Operator of the Joint Kullback–Leibler Divergence

When writing the optimisation problem in the form (4.2), it is common that the functions G and Hi (i=1,,m) are “simple”, meaning that their proximity operators have a closed-form solution or can be easily computed with high precision. This is certainly true for G and H1, but not obvious for the joint Kullback–Leibler divergence.

First, for discrete images u = [u₁, …, u_N]^T and v = [v₁, …, v_N]^T, the definition (3.6) becomes:

D_KL(v, u) = Σ_{j=1}^N ( u_j − v_j + v_j log(v_j/u_j) ), 4.13

and then:

prox_{γD_KL}(u′, v′) = argmin_{u,v} D_KL(v, u) + (1/(2γ)) ‖ [u, v]^T − [u′, v′]^T ‖²₂ = argmin_{u,v} Σ_{j=1}^N ( u_j − v_j + v_j log(v_j/u_j) + (1/(2γ)) [ (u_j − u′_j)² + (v_j − v′_j)² ] ), which decouples over pixels into the scalar problems argmin_{u_j,v_j} Φ(u_j, v_j), 4.14

where we define the function Φ as:

Φ(u_j, v_j) := u_j − v_j + v_j log(v_j/u_j) + (1/(2γ)) [ (u_j − u′_j)² + (v_j − v′_j)² ]. 4.15

To find the minimiser of Φ(u_j, v_j), we set its gradient equal to zero:

∂Φ/∂u_j = 0  ⟺  1 − v_j/u_j + (1/γ)(u_j − u′_j) = 0,
∂Φ/∂v_j = 0  ⟺  log v_j − log u_j + (1/γ)(v_j − v′_j) = 0. 4.16

From the second equation, we write u_j as a function of v_j, which we substitute into the first equation to obtain:

1 − e^{−(v_j − v′_j)/γ} + (1/γ) ( v_j e^{(v_j − v′_j)/γ} − u′_j ) = 0,  where  u_j = v_j e^{(v_j − v′_j)/γ}. 4.17

The first equation is then solved using Newton's method, where the iteration is given by:

v_j^{(k+1)} = v_j^{(k)} − [ γ − γ e^{−(v_j^{(k)} − v′_j)/γ} + v_j^{(k)} e^{(v_j^{(k)} − v′_j)/γ} − u′_j ] / [ e^{−(v_j^{(k)} − v′_j)/γ} + (1 + v_j^{(k)}/γ) e^{(v_j^{(k)} − v′_j)/γ} ]. 4.18
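The scheme (4.16)–(4.18) takes only a few lines to implement. The sketch below is our own Python translation for a single pixel: it runs the Newton iteration for v_j, recovers u_j from the substitution in (4.17), and the stationarity conditions (4.16) can then be checked directly:

```python
import numpy as np

def prox_joint_kl_scalar(u_p, v_p, gamma, iters=50):
    # Newton's method for (4.17)-(4.18); (u_p, v_p) is the prox argument (u'_j, v'_j)
    v = max(v_p, 1e-6)  # positive initial guess
    for _ in range(iters):
        e_neg = np.exp(-(v - v_p) / gamma)
        e_pos = np.exp((v - v_p) / gamma)
        g = gamma * (1.0 - e_neg) + v * e_pos - u_p   # gamma times the lhs of (4.17)
        dg = e_neg + (1.0 + v / gamma) * e_pos        # its derivative, cf. (4.18)
        v = v - g / dg
    u = v * np.exp((v - v_p) / gamma)                 # substitution from (4.17)
    return u, v

u_p, v_p, gamma = 3.0, 2.0, 0.5
u, v = prox_joint_kl_scalar(u_p, v_p, gamma)
```

In practice the solve is vectorised over all pixels, since the problem decouples componentwise.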

Computing the Convex Conjugate of the Joint Kullback–Leibler Divergence

We compute the convex conjugate of the discrete joint Kullback–Leibler divergence D_KL(v, u) in (4.13) for u, v ∈ [l₁, l₂]^N:

D_KL∗(v∗, u∗) = sup_{u,v ∈ [l₁,l₂]^N} ⟨u∗, u⟩ + ⟨v∗, v⟩ − D_KL(v, u) = Σ_{j=1}^N sup_{u_j, v_j ∈ [l₁,l₂]} Ψ(v_j, u_j), 4.19

where Ψ is defined as:

Ψ(v_j, u_j) := u∗_j u_j + v∗_j v_j − u_j + v_j − v_j log(v_j/u_j). 4.20

To solve the optimisation problem on the last line of (4.19), we write the KKT conditions (where we use u, v, u∗, v∗ instead of u_j, v_j, u∗_j, v∗_j to simplify the notation):

−∇Ψ(v, u) + Σ_{i=1}^4 μ_i ∇g_i(v, u) = 0, 4.21a
g_i(v, u) ≤ 0,  i = 1, …, 4, 4.21b
μ_i ≥ 0,  i = 1, …, 4, 4.21c
μ_i g_i(v, u) = 0,  i = 1, …, 4. 4.21d

where the functions gi correspond to the bound constraints:

g1(v,u)=u-l2; 4.22a
g2(v,u)=v-l2; 4.22b
g3(v,u)=-u+l1; 4.22c
g₄(v, u) = −v + l₁. 4.22d

Noting that (4.21a) is equivalent to:

−u∗ + 1 − v/u + μ₁ − μ₃ = 0, 4.23a
−v∗ + log v − log u + μ₂ − μ₄ = 0, 4.23b

we solve the last two equations by using the complementarity conditions (4.21d), distinguishing the cases in which each Lagrange multiplier μ_i is zero or nonzero.
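For a single pixel, the inner maximisation in (4.19) can also be checked numerically against a generic bound-constrained solver, since Ψ is concave on the box; this gives an independent sanity check of the case analysis (the code and names below are ours, illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

def psi(z, u_star, v_star):
    # Psi(v, u) from (4.20), with z = (u, v)
    u, v = z
    return u_star * u + v_star * v - u + v - v * np.log(v / u)

def kl_conjugate_pixel(u_star, v_star, l1, l2):
    # sup of Psi over (u, v) in [l1, l2]^2 via a bound-constrained solver;
    # Psi is concave on the box, so a local maximum is global
    res = minimize(lambda z: -psi(z, u_star, v_star),
                   x0=np.array([0.5 * (l1 + l2), 0.5 * (l1 + l2)]),
                   bounds=[(l1, l2), (l1, l2)])
    return -res.fun, res.x

val, (u_m, v_m) = kl_conjugate_pixel(u_star=0.5, v_star=-0.2, l1=0.1, l2=5.0)
```

For these values the interior stationarity conditions are incompatible, so the maximiser lies on the boundary of the box, which is exactly the situation the multiplier case analysis above handles.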

Numerical Results

In this section, we describe a number of numerical experiments that illustrate the performance of our deconvolution method. We start with three examples of simulated data, where we are able to quantify the reconstructed image in relation to the known ground truth image. Then, we show how our method performs on microscopy data, where we reconstruct an image of spherical beads and a sample of a Marchantia thallus. In the experiments with microscopy data, we compare our method with two standard approaches to shift-invariant deconvolution: one where the convolution kernel is the detection PSF, and one where the convolution kernel is the point-wise multiplication of the detection PSF with the light-sheet.

Simulated Data

We consider three images of size 128×125×64: a 5×5×5 grid of beads, where the effect of the light-sheet in the z coordinate and the shape of the objective PSF are noticeable; a piecewise constant image of “steps”, where the Poisson noise affects each step differently based on intensity; and an image that replicates realistic biological samples of tissue. These are shown in the top row of Fig. 8.

Fig. 8.

Fig. 8

Ground truth (top row) and measured images (middle row), shown using maximum intensity projections in each axis direction, except for the tissue images, where slices in each axis direction are shown. The axes of the plots are shown in the panel in the bottom row, with the missing axis in each panel being the direction in which the maximum intensity projection (or slice) is taken

To obtain the measured data, we proceed as follows. Given the ground truth image u₀, the forward operator described in Sect. 2.1 is applied to obtain the blurred image Lu₀. The parameters for the forward model are taken to be those of the microscope used in the experimental setup and are given in Table 2. Then, the image is corrupted with a mixture of Poisson and Gaussian noise: for the vectorised image Lu₀, at each pixel i = 1, …, N, the Poisson noise component follows the Poisson distribution with parameter (Lu₀)_i, and the additive Gaussian component has zero mean and standard deviation σ_G = 10. The original image, which has intensities in [0, 1], is scaled so that the intensity of Lu₀ lies in [0, 2000], to replicate a realistic scenario for the Poisson noise intensity. The resulting simulated measured data is shown in the middle row of Fig. 8.
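The corruption step described above amounts to two lines of code; the following sketch is ours, with a toy constant image standing in for Lu₀:

```python
import numpy as np

def simulate_measurement(blurred, sigma_g, rng):
    # pixel-wise Poisson shot noise with rate (Lu0)_i, plus additive Gaussian read noise
    z = rng.poisson(blurred).astype(float)
    w = rng.normal(0.0, sigma_g, size=blurred.shape)
    return z + w

rng = np.random.default_rng(42)
Lu0 = np.full((64, 64), 500.0)     # toy "blurred image" of constant intensity 500
f = simulate_measurement(Lu0, sigma_g=10.0, rng=rng)
```

At intensity level λ the measurement has mean λ and variance λ + σ_G², so at the [0, 2000] scaling used here the Poisson component dominates the noise budget in bright regions.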

Table 2.

Forward model parameters used in Sect. 5

Parameter Value Description, units
n 1.35 Refractive index
NAh 1 Numerical aperture (objective lens)
NAl 0.25 Numerical aperture (light-sheet)
λh 0.525 Wave length (objective lens), μm
λl 0.488 Wave length (light-sheet), μm
pxx 0.3250 Pixel size (x), μm
pxy 0.3250 Pixel size (y), μm
stepz 1 Light-sheet step size (z), μm

We compare the reconstruction obtained using the proposed approach, which we will refer to as LS-IC (light-sheet–infimal convolution), with the reconstructions obtained by using an L² data fidelity term instead of the infimal convolution term, or by using a convolution operator corresponding to the objective PSF instead of the light-sheet forward model from Sect. 2.1. Specifically, we compare the solution of (4.1) with the solutions to the following problems, all solved using PDHG as described in Sect. 4 (stated here in the notation of (4.1)):

LS-L2:  min_u αTV(u) + (1/(2σ_G²))‖f − Lu‖²₂ + χ_{[l₁,l₂]^N}(u),

PSF-IC:  min_{u,v} αTV(u) + (1/(2σ_G²))‖f − v‖²₂ + D_KL(v, Hu) + χ_{[l₁,l₂]^{2N}}([u, v]^T),

PSF-L2:  min_u αTV(u) + (1/(2σ_G²))‖f − Hu‖²₂ + χ_{[l₁,l₂]^N}(u),

where H is the convolution operator with the detection objective PSF h_z as given in (2.14).

For each test image and each method above, the PDHG parameters ρ and σ used are given in Table 3, and τ is set to τ = 1/(σ‖Σ_{i=1}^m L_i∗L_i‖) to ensure convergence according to Theorem 5.3 in [48]. As a stopping criterion, we used the primal–dual gap (4.12), normalised by the number of pixels N and the dynamic range of the measured image f:

D̃_pd = D_pd / ( N · max_{j=1,…,N} f_j ), 5.1

with a threshold of 10-6 and a maximum number of 10,000 iterations.

Table 3.

Values of the PDHG parameters ρ and σ used in the numerical experiments with simulated data

Method LS-IC LS-L2 PSF-IC PSF-L2
Image Beads Steps Tissue Beads Steps Tissue Beads Steps Tissue Beads Steps Tissue
ρ 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.8 0.9 0.9 0.9
σ 0.0001 0.0001 0.00001 0.0001 0.001 0.0001 0.0001 0.0001 0.0001 0.0001 0.001 0.0001

The results of the four methods applied to the test images are given in Fig. 9 and quantitative results are given in Table 4. For each test image and each method, the regularisation parameter has been chosen to optimise the normalised l2 error and the structural similarity index (SSIM), respectively.

Fig. 9.

Fig. 9

Reconstruction on simulated data with regularisation parameter α such that best MSE is achieved for each method and each image. Shown as maximum intensity projections, except for tissue, where slices in each direction in the centre of the sample are shown. The axes are as shown in the bottom row of Fig. 8. First row: PSF-L2. Second row: PSF-IC. Third row: LS-L2. Fourth row: LS-IC

Table 4.

Results of the numerical experiments on simulated data, with the regularisation parameter α chosen to optimise the normalised l2 error and the SSIM, respectively

Image Beads Steps Tissue
Error metric l2 SSIM l2 SSIM l2 SSIM
PSF-L2 1.74 0.845 0.499 0.561 1.57 0.592
PSF-IC 1.54 0.844 0.324 0.659 1.65 0.582
LS-L2 0.282 0.982 0.055 0.971 0.301 0.951
LS-IC 0.258 0.983 0.012 0.998 0.349 0.931

We note that PSF-L2 and PSF-IC perform particularly poorly, highlighting the importance of an accurate representation of the image formation model instead of simply using the detection objective PSF as the forward operator. Comparing LS-IC and LS-L2, we see better results when using the infimal convolution data fidelity for the beads and the steps image, both visually and quantitatively. The deblurring is performed better on the beads image, while on the steps image we see a better denoising effect, especially along the edges in the image. For the tissue image, both fidelities give comparable results, but as we see in Fig. 10, when the ground truth is not known, choosing α using the discrepancy principle gives a better result for the infimal convolution model.

Fig. 10.

Fig. 10

Reconstruction on simulated data with regularisation parameter α chosen to satisfy the discrepancy principle (3.11). Shown as maximum intensity projections, except for tissue, where slices in each direction in the centre of the sample are shown. The axes are as shown in the bottom row of Fig. 8. First row: PSF-L2. Second row: PSF-IC. Third row: LS-L2. Fourth row: LS-IC

The reconstructions shown in Fig. 10 are obtained by applying the discrepancy principle corresponding to each method. For LS-IC, we choose a value of α that satisfies a variation of the discrepancy principle given in (3.11), where we enforce that each single-noise fidelity is bounded by its respective noise bound, rather than the sum of the fidelities being bounded by the sum of the noise bounds, as stated in (3.11). While both versions give good results, we found the former to give more accurate reconstructions. Here, the bound on the Poisson noise is set to 1/2, motivated by the following lemma from [49], which gives the expected value of the Kullback–Leibler divergence:

Lemma 5.1

Let Yβ be a Poisson random variable with expected value β and consider the function:

F(Y_β) = 2 ( Y_β log(Y_β/β) + β − Y_β ).

Then, for large β, the following estimate of the expected value of F(Yβ) holds:

E[F(Y_β)] = 1 + O(1/β).
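This estimate is easy to check by Monte Carlo simulation (our own sketch): drawing Poisson samples at a large rate β, the sample mean of F(Y_β) should be close to 1, so the expected per-pixel Kullback–Leibler discrepancy F/2 is close to the bound 1/2 used above.

```python
import numpy as np

def F(y, beta):
    # F(Y) = 2*(Y*log(Y/beta) + beta - Y), with the convention 0*log(0) = 0
    y = np.asarray(y, dtype=float)
    safe = np.where(y > 0, y, 1.0)
    log_term = np.where(y > 0, y * np.log(safe / beta), 0.0)
    return 2.0 * (log_term + beta - y)

rng = np.random.default_rng(7)
beta = 2000.0
samples = rng.poisson(beta, size=200000)
mean_F = F(samples, beta).mean()
```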

One last observation worth making about the results in Figs. 9 and 10 is about the square shape of the reconstructed beads (the first column of both figures). By looking carefully at the ground truth bead image in Fig. 8, one can see that the beads are almost square to begin with, due to the small dimensions of the image. The finer details that make them appear round are lost in the blurring process which, in combination with the total variation regulariser used in the deconvolution algorithm, leads to this detail not being present in the reconstruction, thus making them square. Moreover, the sharpening of their edges is an expected effect of the total variation regularisation, which could be avoided by using a different regularisation technique. However, this is beyond the scope of this article.

The experiments were run using Matlab version R2020b Update 2 (9.9.0.1524771) 64-bit in Scientific Linux 7.9 on a machine with Intel Xeon E5-2680 v4 2.40 GHz CPU, 256 GB memory and Nvidia P100 16 GB GPU. The running times, averaged over 5 runs for each method and each image, are given in Table 5.

Table 5.

Running times for each method and each simulated test image, averaged over 5 runs, in seconds

Image Beads Steps Tissue
PSF-L2 233 1793 903
PSF-IC 689 1077 1805
LS-L2 2913 2194 2273
LS-IC 972 601 850

The minimisation is stopped when the primal–dual gap is lower than 10-6 or the maximum number of 10,000 iterations is reached

Light-Sheet Data

In this section, we show the results of applying LS-IC to a cropped portion of the full resolution images in Fig. 2. Specifically, we select a cropped beads image of 1127×111×100 voxels and a cropped Marchantia image of 1127×156×100 voxels.

For comparison, we also run PSF-L2 on the same images. In addition, we run an alternative light-sheet deconvolution method, where we perform shift-invariant deconvolution using a PSF h̄ obtained by point-wise multiplication of the detection PSF h_z in (2.14) and the light-sheet l, effectively clipping h by the width of the light-sheet. Therefore, the problem we solve, which we denote by PSF-L2-clip, is:

min_u αTV(u) + (1/(2σ_G²))‖f − H̄u‖²₂ + χ_{[l₁,l₂]^N}(u),

where H̄ is the convolution operator with the PSF h̄ = h_z · l. A justification of this method is given by a simplified image formation model in which the light-sheet is assumed to have constant width (in the z direction) and constant intensity throughout the full sample, or within a region of interest where the deconvolution is performed, as is done for example in [21].

We run each method on both images for up to 6000 iterations, with a normalised primal–dual gap of 10-6 as a stopping criterion. The parameters for the image formation model used are the same as in Table 2 and the PDHG parameters are given in Table 6.

Table 6.

Values of the PDHG parameters ρ and σ used in the numerical experiments with real data

Method LS-IC PSF-L2 PSF-L2-clip
Image Beads Marchantia Beads Marchantia Beads Marchantia
ρ 0.5 0.7 0.9 0.9 0.9 0.9
σ 0.0001 0.0001 0.01 0.001 0.01 0.001

The results of the deconvolution are shown in Figs. 11 and 13 for the beads image and the Marchantia image, respectively. In both figures, we first show the position of the light-sheet in the first row (due to the cropping, this is no longer centred), the measured data in the second row, followed by the PSF-L2, the PSF-L2-clip and the LS-IC reconstructions in the third, fourth and fifth rows, respectively. The regularisation parameter α was chosen visually in all cases, such that a balance is achieved between the amount of regularisation and the residual noise in the reconstruction.

Fig. 11.

Fig. 11

Reconstruction results for the light-sheet bead image, shown as maximum intensity projections. The axes are as shown in the bottom row of Fig. 8. First row: The fitted light-sheet profile. Second row: The data. Third row: PSF-L2 with α=0.1. Fourth row: PSF-L2-clip with α=0.7943. Fifth row: LS-IC with α=0.0046

Fig. 13.

Fig. 13

Reconstruction results for the Marchantia sample, shown as slices in each direction in the centre of the sample. The axes are as shown in the bottom row of Fig. 8. First row: The fitted light-sheet profile. Second row: The data. Third row: PSF-L2 with α=0.1. Fourth row: PSF-L2-clip with α=0.1. Fifth row: LS-IC with α=0.0005

In the beads image in Fig. 11, we note that LS-IC performs better than PSF-L2 and PSF-L2-clip at reversing the effect of the light-sheet. This is most obvious in the zy plane on the right-hand side of the image, where the length of the beads in the z direction has been reduced to a greater extent than in the PSF-L2 and the PSF-L2-clip reconstructions. In addition, the beads appear less blurry in the LS-IC reconstruction in the right-hand side of the xy plane. We also note that PSF-L2-clip fails to properly reverse the effects of the optical aberrations in the beads. This is not unexpected, as the information related to the aberrations is lost when the detection PSF is clipped by setting its upper and lower extremities to zero. The extent to which this happens depends on the width of the light-sheet: as the light-sheet becomes wider, the overall PSF approaches the detection PSF, in which case the deconvolved image will be the same as the reconstruction using PSF-L2. We show the bead images in 3D in Fig. 12, where the effect of the deconvolution in the z direction is more significant in the LS-IC reconstruction than in both the PSF-L2 and the PSF-L2-clip reconstructions, namely the beads are shorter in the z direction.

Fig. 12.

Fig. 12

3D rendering of the beads data and reconstruction images using Imaris Viewer 9.7.2. First row: The data. Second row: PSF-L2 with α=0.1. Third row: PSF-L2-clip with α=0.7943. Fourth row: LS-IC with α=0.0046

In the Marchantia reconstruction in Fig. 13, we see a similar effect of better sharpening in the z direction, most easily seen in the right-hand side and bottom projections in each panel (maximum intensity projections in the zy and the xz planes, respectively). In particular, we see additional artefacts in the PSF-L2-clip reconstruction: horizontal lines (parallel with the xy plane), likely due to the clipping of the detection PSF. Moreover, the 3D rendering of the Marchantia sample in Fig. 14 shows smoother cell edges in the LS-IC reconstruction compared to the other methods. Specifically, the PSF-L2 reconstruction contains reconstruction artefacts that are non-existent in the LS-IC reconstruction (indicated by the yellow arrows), while the PSF-L2-clip reconstruction contains areas where the blur has not been fully removed (for example at the same locations indicated by the yellow arrows), where the edges are not as sharp as in the LS-IC reconstruction.

Fig. 14.

Fig. 14

3D rendering of the Marchantia data and reconstruction images using Imaris Viewer 9.7.2. First row: The data. Second row: PSF-L2 with α=0.1. Third row: PSF-L2-clip with α=0.1. Fourth row: LS-IC with α=0.0005

Lastly, we reiterate that the strength of our proposed method lies in the physically accurate modelling of the interaction between the detection PSF and the light-sheet. This allows one to model the optical aberrations as part of the detection PSF (with no restrictions on how this is done), as well as the spatial dependence of the width and intensity of the light-sheet, and to combine them in an image formation model that does not require approximating the light-sheet by one of constant width and intensity. As the numerical experiments in this section show, such an approximation, while faster and computationally cheaper, leads to loss of information and to results that are at best locally accurate.

Conclusion

In this paper, we introduced a novel method for performing deconvolution for light-sheet microscopy. We start by modelling the image formation process in a way that replicates the physics of a light-sheet microscope, which is achieved by explicitly modelling the interaction of the illumination light-sheet and the detection objective PSF. Moreover, the optical aberrations in the system are modelled using a linear combination of Zernike polynomials in the pupil function of the detection PSF, fitted to bead data using a least squares procedure. We then formulate a variational model taking into account the image formation model as the forward operator and a combination of Poisson and Gaussian noise in the data. The model combines a total variation regularisation term and a fidelity term that is an infimal convolution between an L2 term and the Kullback–Leibler divergence, introduced in [9]. In addition, we establish convergence rates with respect to the noise and we introduce a discrepancy principle for selecting the regularisation parameter α in the mixed noise setting. We solve the resulting inverse problem by applying the PDHG algorithm in a non-trivial way.

The results in the numerical experiments section show that our method, LS-IC, outperforms simpler approaches to deconvolution of light-sheet microscopy data, where one does not take into account the variability of the overall PSF introduced by the light-sheet excitation, or the combination of Gaussian and Poisson noise. In particular, numerical experiments with simulated data show superior reconstruction quality in terms of the normalised l2 error and the structural similarity index, not only by optimising over the regularisation parameter α given the ground truth, but also with an a posteriori choice of α using the stated discrepancy principle. On bead data, the reconstruction obtained using LS-IC shows a more significant reduction of the blur in the z direction compared to PSF-L2, where the light-sheet variations and the Poisson noise are not taken into account. Moreover, reconstruction of a Marchantia sample with LS-IC shows fewer artefacts than the PSF-L2 reconstruction, as well as sharper cell edges and smoother cell membranes.

Future work includes applying this technique to a broader range of samples and using it to answer questions of biological interest. To do so, we see a number of potential future directions that this work can take:

  1. Adapting the discrepancy principle given in (3.11) for choosing the regularisation parameter α to real data sets, like the ones in Sect. 5.2.

  2. Improving the running time of the method potentially by means of randomised approaches.

  3. Investigating other regularisation terms.

  4. Making the technique available to other users as a more user-friendly tool.

Acknowledgements

BT and LM gratefully acknowledge the funding by Isaac Newton Trust/Wellcome Trust ISSF/University of Cambridge Joint Research Grants Scheme and EPSRC EP/R025398/1. MOL and LM also thank the Gatsby Charitable Foundation for financial support. YK acknowledges financial support of the EPSRC (Fellowship EP/V003615/1), the Cantab Capital Institute for the Mathematics of Information at the University of Cambridge and the National Physical Laboratory. CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC Grants EP/S026045/1 and EP/T003553/1, EP/N014588/1, EP/T017961/1, the Wellcome Innovator Award RG98755, the Leverhulme Trust project Unveiling the invisible, the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute. Imaging was performed at the Microscopy Facility of the Sainsbury Laboratory Cambridge University. We thank Dr. Alessandra Bonfanti and Dr. Sarah Robinson for providing the Marchantia sample and Prof. Sebastian Schornack and Dr. Giulia Arsuffi (Sainsbury Laboratory Cambridge University) for provision of the line of Marchantia used. We also acknowledge the support of NVIDIA Corporation with the donation of two Quadro P6000, a Tesla K40c and a Titan Xp GPU used for this research. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.

Biographies

Bogdan Toader

received a Ph.D. in mathematics from the University of Oxford in 2020. He joined the Cambridge Advanced Imaging Centre at the University of Cambridge for a postdoctoral position until 2021 and is currently a postdoctoral researcher at Yale University in the Department of Statistics and Data Science and the Quantitative Biology Institute. His interests include inverse problems, image processing, numerical analysis and their applications to science.

Jérôme Boulanger

received a Ph.D. (2007) in Telecommunication and Signal Processing from the University of Rennes I, France. He joined the French National Centre for Scientific Research as a research scientist at the Institut Curie, Paris, France in 2011. Since 2015, he has been an Investigator Scientist at the Laboratory of Molecular Biology at the Medical Research Council in Cambridge, UK.

Yury Korolev

is an EPSRC postdoctoral fellow at the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge and a Research Fellow at Hughes Hall. Prior to joining Cambridge, he worked at the Universities of Lübeck and Münster. Yury’s interests are applied analysis, mathematical foundations of machine learning, inverse problems and imaging.

Martin O. Lenz

is a Senior Research Associate at the University of Cambridge with 17 years of experience designing bespoke optical instruments and more than 12 years developing high-end optical microscopes. He worked in some of the world’s leading groups for the application of super-resolution microscopy (P. French, Imperial College London) and at IINS Bordeaux, where he successfully implemented point-scanning STED super-resolution microscopy with single-molecule-based super-resolution on the same microscope platform. In recent years, he has developed multiple light-sheet microscopes for applications in the biological sciences at the Cambridge Advanced Imaging Centre. His expertise centres on hardware and software development for advanced light microscopy and on integrating new and existing technologies to push the boundaries of microscopy.

James Manton

studied Natural Sciences and received BA, MRes and PhD degrees from the University of Cambridge. He is a Senior Investigator Scientist at the MRC Laboratory of Molecular Biology, where his research interests include live-cell microscopy and imaging-related inverse problems.

Carola-Bibiane Schönlieb

graduated from the Institute for Mathematics, University of Salzburg (Austria) in 2004. From 2004 to 2005 she held a teaching position in Salzburg. She received her PhD degree from the University of Cambridge (UK) in 2009. After one year of postdoctoral activity at the University of Göttingen (Germany), she became a Lecturer at Cambridge in 2010, was promoted to Reader in 2015 and to Professor in 2018. Since 2011 she has been a fellow of Jesus College Cambridge and since 2016 a fellow of the Alan Turing Institute, London. She is currently Professor of Applied Mathematics at the University of Cambridge, where she is head of the Cambridge Image Analysis group and co-Director of the EPSRC Cambridge Mathematics of Information in Healthcare Hub. Her current research interests focus on variational methods, partial differential equations and machine learning for image analysis, image processing and inverse imaging problems.

Leila Mureşan

received a Ph.D. in Engineering (2010) from the Johannes Kepler University, Linz, Austria. She was a postdoctoral researcher at the Institute of Biology at the Ecole Normale Supérieure in Paris and at the Centre de Génétique Moléculaire in Gif-sur-Yvette, France. In 2014, she joined the Cambridge Advanced Imaging Centre, where she focuses on light microscopy image analysis solutions. Since 2018, she has been an EPSRC Research Software Engineering Fellow at the Department of Physiology, Development and Neuroscience, University of Cambridge.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Footnotes

1

There is a geometric interpretation of the infimal convolution. Given two functions φ₁, φ₂ on Ω, the epigraph (the set of all points lying on or above the graph of a function) of their infimal convolution φ₁ □ φ₂ is the sum of the epigraphs of φ₁ and φ₂. This can be easily seen if we rewrite the definition (3.4) as (φ₁ □ φ₂)(f) = inf { φ₁(u) + φ₂(v) : u, v on Ω, u + v = f }.
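The definition above can be checked numerically for scalar functions. The sketch below (a brute-force illustration with an arbitrary grid, unrelated to the paper's fidelity terms) evaluates (φ₁ □ φ₂)(f) = inf { φ₁(u) + φ₂(f − u) : u } by minimising over u on a fine grid, and verifies the classical identity that the infimal convolution of |x| and x²/2 is the Huber function:

```python
import numpy as np

def inf_conv(phi1, phi2, grid, f):
    """Brute-force evaluation of (phi1 [inf-conv] phi2)(f):
    minimise phi1(u) + phi2(f - u) over u on a 1D grid."""
    return float(np.min(phi1(grid) + phi2(f - grid)))

# Classical identity: |x| inf-convolved with x^2/2 is the Huber function,
# which equals f^2/2 for |f| <= 1 and |f| - 1/2 for |f| > 1.
grid = np.linspace(-5.0, 5.0, 100001)
huber_small = inf_conv(np.abs, lambda x: 0.5 * x**2, grid, 0.5)  # ~ 0.125 = 0.5^2/2
huber_large = inf_conv(np.abs, lambda x: 0.5 * x**2, grid, 3.0)  # ~ 2.5 = 3 - 1/2
```

The same smoothing effect is what makes the infimal-convolution fidelity in (3.4) interpolate between its two constituent noise models.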

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Pawley J. Handbook of Biological Confocal Microscopy. Berlin: Springer; 2006.
  • 2. Method of the year 2014. Nature Methods 12(1), 1 (2015)
  • 3. McNally JG, Karpova T, Cooper J, Conchello JA. Three-dimensional imaging by deconvolution microscopy. Methods A Companion Methods Enzymol. 1999;19(3):373–385. doi: 10.1006/meth.1999.0873.
  • 4. Starck J, Pantin E, Murtagh F. Deconvolution in astronomy: a review. Publ. Astron. Soc. Pac. 2002;114(800):1051–1069. doi: 10.1086/342606.
  • 5. Sarder P, Nehorai A. Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Process. Mag. 2006;23(3):32–45. doi: 10.1109/MSP.2006.1628876.
  • 6. Chambolle A, Pock T. An introduction to continuous optimization for imaging. Acta Numer. 2016;25:161–319. doi: 10.1017/S096249291600009X.
  • 7. Denis L, et al. Fast approximations of shift-variant blur. Int. J. Comput. Vis. 2015;115(3):253–278. doi: 10.1007/s11263-015-0817-x.
  • 8. Debarnot V, Escande P, Weiss P. A scalable estimator of sets of integral operators. Inverse Probl. 2019;35(10):105011. doi: 10.1088/1361-6420/ab2fb3.
  • 9. Calatroni L, De Los Reyes JC, Schönlieb C-B. Infimal convolution of data discrepancies for mixed noise removal. SIAM J. Imaging Sci. 2017;10(3):1196–1233. doi: 10.1137/16M1101684.
  • 10. Esser E, Zhang X, Chan TF. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 2010;3(4):1015–1046. doi: 10.1137/09076934X.
  • 11. Chambolle A, Pock T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011;40(1):120–145. doi: 10.1007/s10851-010-0251-1.
  • 12. Nagy JG, O’Leary DP. Restoring images degraded by spatially variant blur. SIAM J. Sci. Comput. 1998;19(4):1063–1082. doi: 10.1137/S106482759528507X.
  • 13. Hadj SB, Blanc-Féraud L, Aubert G. Space variant blind image restoration. SIAM J. Imaging Sci. 2014;7(4):2196–2225. doi: 10.1137/130945776.
  • 14. Hirsch M, Sra S, Schölkopf B, Harmeling S. Efficient filter flow for space-variant multiframe blind deconvolution. In: Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition. Max-Planck-Gesellschaft, Piscataway, NJ, USA, pp. 607–614. IEEE (2010)
  • 15. O’Connor D, Vandenberghe L. Total variation image deblurring with space-varying kernel. Comput. Optim. Appl. 2017;67(3):521–541. doi: 10.1007/s10589-017-9901-1.
  • 16. Yanny K, Monakhova K, Shuai RW, Waller L. Deep learning for fast spatially varying deconvolution. Optica. 2022;9(1):96–99. doi: 10.1364/OPTICA.442438.
  • 17. Temerinac-Ott M, et al. Multiview deblurring for 3-D images from light-sheet-based fluorescence microscopy. IEEE Trans. Image Process. 2012;21(4):1863–1873. doi: 10.1109/TIP.2011.2181528.
  • 18. Preibisch S, et al. Efficient Bayesian-based multiview deconvolution. Nat. Methods. 2014;11(6):645–648. doi: 10.1038/nmeth.2929.
  • 19. Ancora D, Furieri T, Bonora S, Bassi A. Spinning pupil aberration measurement for anisoplanatic deconvolution. Opt. Lett. 2021;46(12):2884–2887. doi: 10.1364/OL.427518.
  • 20. Furieri T, et al. Aberration measurement and correction on a large field of view in fluorescence microscopy. Biomed. Opt. Express. 2022;13(1):262–273. doi: 10.1364/BOE.441810.
  • 21. Becker K, et al. Deconvolution of light sheet microscopy recordings. Sci. Rep. 2019;9(1):1–14. doi: 10.1038/s41598-019-53875-y.
  • 22. Guo M, et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat. Biotechnol. (2020)
  • 23. Zhang Z, et al. 3D Hessian deconvolution of thick light-sheet z-stacks for high-contrast and high-SNR volumetric imaging. Photon. Res. 2020;8(6):1011–1021. doi: 10.1364/PRJ.388651.
  • 24. Cueva E, et al. Mathematical modeling for 2D lightsheet fluorescence microscopy image reconstruction. Inverse Probl. 2020;36(7):075005. doi: 10.1088/1361-6420/ab80d8.
  • 25. Zhang J, et al. Bilinear constraint based ADMM for mixed Poisson–Gaussian noise removal. Inverse Probl. Imaging. 2021;15(2):339–366. doi: 10.3934/ipi.2020071.
  • 26. Hanser BM, Gustafsson MG, Agard DA, Sedat JW. Phase-retrieved pupil functions in widefield fluorescence microscopy. J. Microsc. 2004;216(1):32–48. doi: 10.1111/j.0022-2720.2004.01393.x.
  • 27. Stokseth A. Properties of a defocused optical system. J. Opt. Soc. Am. 1969;59(10):1314–1321. doi: 10.1364/JOSA.59.001314.
  • 28. Soulez F, Thiébaut É, Tourneur Y, Denis L. Déconvolution aveugle en microscopie de fluorescence 3D [Blind deconvolution in 3D fluorescence microscopy]. GRETSI (2013)
  • 29. Paxman RG, Schulz TJ, Fienup JR. Joint estimation of object and aberrations by using phase diversity. J. Opt. Soc. Am. A. 1992;9(7):1072. doi: 10.1364/JOSAA.9.001072.
  • 30. Petrov PN, Shechtman Y, Moerner WE. Measurement based estimation of global pupil functions in 3D localization microscopy. Opt. Express. 2017;25(7):7945. doi: 10.1364/OE.25.007945.
  • 31. Wyant JC, Creath K. Basic wavefront aberration theory for optical metrology. Appl. Opt. Opt. Eng. 1992;XI:11–53.
  • 32. Burger M, Osher S. A guide to the TV zoo. In: Burger M, Osher S, editors. Level-Set and PDE-based Reconstruction Methods. Berlin: Springer; 2013.
  • 33. Hohage T, Werner F. Iteratively regularized Newton-type methods for general data misfit functionals and applications to Poisson data. Numer. Math. 2013;123(4):745–779. doi: 10.1007/s00211-012-0499-z.
  • 34. Hohage T, Werner F. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms. Inverse Probl. 2016;32(9):093001. doi: 10.1088/0266-5611/32/9/093001.
  • 35. Lanza A, Morigi S, Sgallari F, Wen Y-W. Image restoration with Poisson–Gaussian mixed noise. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2014;2:12–24. doi: 10.1080/21681163.2013.811039.
  • 36. Clason C, Lorenz DA, Mahler H, Wirth B. Entropic regularization of continuous optimal transport problems. J. Math. Anal. Appl. 2021;494(1):124432. doi: 10.1016/j.jmaa.2020.124432.
  • 37. Bennett C, Sharpley R. Interpolation of Operators. Boston: Academic Press; 1988.
  • 38. Bauschke HH, Combettes PL. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Berlin: Springer; 2011.
  • 39. Rudin LI, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D. 1992;60(1):259–268. doi: 10.1016/0167-2789(92)90242-F.
  • 40. Benning M, Burger M. Modern regularization methods for inverse problems. Acta Numer. 2018;27:1–111. doi: 10.1017/S0962492918000016.
  • 41. Burger M, Osher S. Convergence rates of convex variational regularization. Inverse Probl. 2004;20(5):1411. doi: 10.1088/0266-5611/20/5/005.
  • 42. Resmerita E, Anderssen RS. Joint additive Kullback–Leibler residual minimization and regularization for linear inverse problems. Math. Methods Appl. Sci. 2007;30(13):1527–1544. doi: 10.1002/mma.855.
  • 43. Bungert L, Burger M, Korolev Y, Schönlieb C-B. Variational regularisation for inverse problems with imperfect forward operators and general noise models. Inverse Probl. 2020;36(12):125014. doi: 10.1088/1361-6420/abc531.
  • 44. Morozov VA. On the solution of functional equations by the method of regularisation. Soviet Math. Dokl. 1966;7:414–417.
  • 45. Engl H, Hanke M, Neubauer A. Regularization of Inverse Problems. Berlin: Springer; 1996.
  • 46. Sixou B, Hohweiller T, Ducros N. Morozov principle for Kullback–Leibler residual term and Poisson noise. Inverse Probl. Imaging. 2018;12(3):607–634. doi: 10.3934/ipi.2018026.
  • 47. Lindblad G. Entropy, information and quantum measurements. Commun. Math. Phys. 1973;33(4):305–322. doi: 10.1007/BF01646743.
  • 48. Condat L. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013;158(2):460–479. doi: 10.1007/s10957-012-0245-9.
  • 49. Zanella R, Boccacci P, Zanni L, Bertero M. Efficient gradient projection methods for edge-preserving removal of Poisson noise. Inverse Probl. 2009;25(4):045010. doi: 10.1088/0266-5611/25/4/045010.

Articles from Journal of Mathematical Imaging and Vision are provided here courtesy of Springer