Abstract
We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point spread functions, such as in multiview light-sheet microscopes [1, 2], while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced via different simulated illumination patterns, relevant to structured illumination microscopy (SIM) [3, 4] and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as that of standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that high-quality image merges are possible even in cases where a non-iterative inversion algorithm is unknown.
Keywords: Superresolution, Toeplitz matrix, deconvolution, Richardson-Lucy
Introduction
Modern fluorescence microscopes are invaluable for biologists, revealing the distribution and dynamics of tagged molecules (such as proteins) in living cells, tissues, and animals. Fluorescence images are informative but imperfect measurements of the density of the tagged substance, and improving the accuracy of these measurements enables further biological discovery. Deconvolution techniques use an imperfectly measured density and a model of the measurement process to estimate the true density more accurately [5]. For a noiseless linear measurement, density can be estimated fairly accurately by mathematically inverting the measurement process. Unfortunately, fluorescence images are always corrupted by Poisson noise. Straightforward inversion which ignores Poisson noise typically produces a poor estimate of the density.
Richardson-Lucy (RL) deconvolution [6, 7] is a particularly simple and useful method appropriate for improving density estimates drawn from this type of noisy, linear measurement. Given a Poisson-noisy measurement, and a noiseless but otherwise accurate model of the measurement process, RL deconvolution estimates the true density by an iterative procedure, improving the likelihood that the estimate is correct with every iteration. Because it improves likelihood assuming Poisson noise, RL deconvolution is more likely to be correct than methods which ignore noise or assume a non-Poisson noise model.
RL deconvolution is often applied in cases where the measurement is a simple blurring. In our experience, this application of RL deconvolution improves most of our microscope images, but seldom reveals features that were imperceptible before deconvolution. However, RL has another important application: when a measurement produces multiple images from a single density, each imperfect in a different way, the images can be ‘fused’ into a single, superior image through “joint” deconvolution. In this work, we use joint RL deconvolution to combine multiple images of a simulated object into a single image in the context of modern microscopy techniques. We find that image combination by RL deconvolution compares favorably to existing, specialized methods for image combination, and is trivial to adapt to a wide variety of microscopy techniques, even in cases where no direct inversion algorithm is known.
Results
Consider a microscope that produces two 2D images of a sample: one image is badly blurred in the x-direction (Figure 1a), the other badly blurred in the y-direction (Figure 1b). What is the best way to combine these images? Simply averaging the two images would produce an image blurred in both directions; RL deconvolution of this average image produces more legible text (Figure 1c). However, by applying joint RL deconvolution to the two images (see Methods for details), we obtain much more accurate, legible results (Figure 1d).
Figure 1.
RL deconvolution can be used to merge images that have different types of blurring. Input noisy images were simulated using a Gaussian blur with a sigma value of (a) 2 pixels in the x direction and 6 pixels in the y direction, and (b) 6 pixels in the x direction and 2 pixels in the y direction. (c) RL deconvolution of the average of the two images, using a blurring model that averages the two different blurs. (d) Applying joint RL deconvolution to the separate images yields a better estimate with more detail. In all images, the inset shows a zoomed view of the word ‘quick’ in the sentence ‘The quick brown fox jumps over the lazy dog’.
This type of fusion problem is encountered in multiview fluorescence microscopy. The axial resolution of a microscope is typically worse than its lateral resolution, so viewing the sample from different directions gives multiple measurements blurred in different ways. Averaging the different views improves axial resolution, at the cost of degrading lateral resolution. In striking contrast, joint RL deconvolution produces beautiful, accurate fusions of the multiview data with nearly isotropic resolution [1, 8–11].
In our next example, we consider a measurement that produces two images: one with poor resolution but good signal-to-noise ratio and dynamic range (Figure 2a), the other with good resolution but poor signal-to-noise ratio and dynamic range (Figure 2b). Applying RL deconvolution to either of the individual images (as in Figure 2c) produces results inferior to those of joint RL deconvolution (Figure 2d).
Figure 2.
Combination of a noisy, sharp image with a clean but blurry image. Simulated (a) blurry and (b) noisy image of the same structure. (c) RL deconvolution of the blurry data. (d) Joint deconvolution of the blurry and noisy data. In each panel, the inset graph represents the density profile along the indicated 15-pixel-long horizontal white line. Unlike the deconvolution of the blurry data (c), the density profile of the joint deconvolution (d) displays two peaks, and suggests a lack of density between the peaks.
Images resembling Figure 2b are often produced by localization microscopy [12], with excellent resolution but few counts per histogram bin (pixel). Localization microscopes can also take a ‘widefield’ image, resembling Figure 2a, with high dynamic range and many counts, but badly blurred compared to the localization-based image. Joint RL deconvolution accurately preserves the low-frequency components from the blurry measurement as well as the relevant high-frequency components revealed by the noisy measurement. A similar deconvolution-based fusion technique successfully merges high-resolution confocal microscopy images with high-signal widefield microscopy images [8], and we expect this idea is equally relevant when complementing widefield microscopy with localization microscopy. We note that the noise in localization microscopy image histograms may not obey Poisson statistics, and may therefore require substituting an appropriate noise model.
These successes inspire us to ask: can RL deconvolution be used in general to combine complementary images, preserving the strengths of each image while discarding their weaknesses? For our next example, we consider image scanning microscopy (ISM, [4]), a form of superresolution microscopy in which a confocal microscope’s detector is replaced with a multi-pixel camera. As in confocal microscopy, point-focused illumination is raster-scanned through the sample, but in ISM the camera collects an image at each scan point, centered on the illumination spot (Figure 3a). Although each of these images contains diffraction-limited information from a small subregion of the sample (point-scanning equivalent in Figure 3b and the summed images in Figure 3c), the inversion algorithm for ISM [4, 13] cleverly combines these multiple, complementary images to provide an image with twice the resolution of a conventional microscope (Figure 3d–e).
Figure 3.
ISM data processed using joint RL deconvolution. (a) 4D simulated ISM data represented as a 2D image. A small, cropped image centered on the illumination spot is shown at each scan position. The inset shows a zoomed view of this 2D representation. (b) An alternative view of a subset of the ISM data, collected by a single pixel of the detector. (c) Noisy widefield-like data obtained by summing the raw data from each detector pixel. (d) Standard ISM inversion of (a) improves resolution. (e) Deconvolution of (d) further improves resolution. (f) Joint RL deconvolution of (a). Note the similarity between (e) and (f).
We used RL deconvolution to merge the many small images of a simulated ISM dataset, and find the results (Figure 3f) are almost indistinguishable from those of the standard processing technique (Figure 3e). This result is encouraging because no special insight was required to apply joint RL deconvolution; we simply followed the same protocol as in our previous examples. We expect joint RL deconvolution to be equally relevant to other microscopes based on ISM, such as multifocal structured illumination microscopy [14], and to facilitate the development of even more complex imaging techniques for which direct inversion algorithms are not known. For example, the standard processing algorithm for ISM was derived assuming an illumination beam shaped like a Gaussian function. If the illumination shape were changed to a Bessel function, the standard algorithm would require careful alteration to account for this change, and the resulting algorithm might bear little resemblance to the Gaussian case. In contrast, the only alteration required for joint RL deconvolution is changing a few lines of code to account for an arbitrary change to the illumination shape, as sketched below.
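As a hypothetical illustration of that claim (the function names and parameter values here are ours, not taken from the released code), swapping the illumination model only means replacing the pattern-generating function; the RL iteration itself is untouched:

```python
import numpy as np
from scipy.special import j1

def gaussian_spot(r, sigma=4.0):
    """Gaussian illumination profile as a function of radius 'r' (pixels)."""
    return np.exp(-r**2 / (2 * sigma**2))

def airy_spot(r, scale=4.0):
    """Bessel-derived (Airy) illumination profile; only this function changes."""
    x = np.maximum(r / scale, 1e-9)   # avoid division by zero at the center
    return (2 * j1(x) / x) ** 2
```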
In order to quantify the degree to which joint RL deconvolution improves resolution, we deconvolved a target composed of three one-pixel lines separated by different intervals (Figure 4). Deconvolution was carried out on the widefield-like image produced by summing the raw data from each detector pixel (Figure 4, light gray line), on the ISM image produced by the standard processing algorithm (Figure 4, dark gray line), and by joint RL deconvolution of the raw ISM data (Figure 4, black solid line). Deconvolution of the widefield-like image barely resolved lines that were 15 pixels apart (Figure 4a), while the deconvolved ISM image and the joint RL deconvolution both easily resolved this target. Neither deconvolution of the ISM image nor joint RL deconvolution is able to resolve target lines that are less than 10 pixels apart (Figure 4d), which is consistent with the expected square-root-of-two increase in resolution provided by the standard ISM processing algorithm.
Figure 4.

Deconvolution of ISM data simulated using a quantitative resolution target. Density profiles obtained along vertical lines drawn on targets separated by 15 (a), 14 (b), 10 (c), or 9 (d) pixels. The light gray broken line represents the density obtained after deconvolution of widefield-like data, the medium-gray line represents the density profile after deconvolution of the resolution-improved ISM-like data, and the solid black line represents the result of joint RL deconvolution. The vertical lines on the x axis indicate the true positions of the target lines. The corresponding target, deconvolved widefield, deconvolved ISM, and joint RL deconvolution data are shown in the bottom panels.
Next, we consider structured illumination microscopy (SIM, [3]), an increasingly popular superresolution technique with several commercially available implementations. SIM illuminates the sample with several (typically 15) sinusoidal patterns, and images the resulting fluorescence onto a camera. The conventional SIM inversion algorithm [3] is rather involved, and typically requires the user to hand-adjust a few parameters to achieve best image quality. As in our other examples, combining the multiple raw images in our simulated SIM dataset (Figure 5a–c) by summing produces a more blurred and noisy image (Figure 5d), whereas straightforward joint RL deconvolution produces an image with better signal-to-noise ratio and resolution (Figure 5e). Even though conventional ISM and SIM processing algorithms bear no resemblance to one another, the only difference between joint RL deconvolution of ISM and SIM data is inputting a different illumination pattern. Unlike the conventional SIM algorithm, there are no free parameters in the RL method.
Figure 5.

Images of the same object illuminated with structured light can be combined using joint RL deconvolution. (a)–(c) Simulated images of a test object illuminated with sinusoidal light patterns with different orientations. (d) Noisy, blurry image obtained by summing 25 simulated images with different illumination. (e) Joint RL deconvolution of the 25 frames.
For our final example, we consider gated stimulated-emission depletion microscopy (gSTED, [15]). The gSTED microscope scans a point-focused excitation beam together with a donut-shaped depletion beam, and measures the resulting fluorescence emission on a point detector, producing a time series of emission at each pixel. Emission at early time points is not well confined to the center of the donut, but signal is abundant. Emission at later time points is tightly confined to the center of the donut, giving much higher resolution, but at the cost of lower signal levels. Conventional methods for processing gSTED data discard the early portion of the emission and simply sum the later portion. The position of this cutoff point is typically hand-optimized by the user.
Predictably, joint RL deconvolution shows promise as an alternative processing method. We construct a dataset similar to gSTED data: an initially bright signal with low resolution, fading smoothly across many timepoints to a dim signal with high resolution. Subjectively choosing an optimal cutoff and summing produces a high-resolution but noisy image (Figure 6a), while summing all time points trades resolution for a cleaner signal (Figure 6b). RL deconvolution can be applied to either of these summed images (Figure 6c, d), or to summed images for any other cutoff point, but the results are inferior to joint RL deconvolution of the whole dataset (Figure 6e). This promising result suggests that RL deconvolution can improve current best practices for processing gSTED data, and implies that future imaging techniques may not require non-iterative inversion algorithms.
Figure 6.

Deconvolution of gSTED-like data, a series of simulated images with inversely related resolution and noise. (a) Cumulative sum of the last three timeframes, with high noise and high resolution. (b) Sum of all timeframes, including those with low noise and low resolution. (c) RL deconvolution of (a). (d) RL deconvolution of (b). (e) Joint RL deconvolution of all timeframes, preserving low noise and high resolution.
Discussion
In every comparison we simulate, joint RL deconvolution gives results that are at least as good as conventional processing techniques, and in some cases better. For example, joint RL deconvolution of the multiview simulations led to more accurate representations of the fluorescence signal than processing with the simple, single-image RL method (Fig. 1).
Joint RL deconvolution also combines multiple images collected under different excitation patterns, which is directly applicable to structured illumination methods (Fig. 3, 4, 5). Lastly, this method can be readily applied to multiple images having variable signal-to-noise ratio and resolution to produce an improved measurement of the specimen (Fig. 6). We note that the same algorithm is applied in all of these examples, with the only alteration being the inputted excitation pattern. Thus, joint RL deconvolution is extremely adaptable for many imaging purposes. Furthermore, this method has no parameters other than the iteration number for users to adjust when processing their data, which makes it a simple and robust approach that avoids “over-processing” artifacts.
Modifying RL deconvolution for other multi-image fusions is a simple matter of following a recipe, and does not require special mathematical insight or inspiration. We expect joint RL deconvolution will enable technique developers to explore more complicated techniques beyond simple image fusion [16] without the cognitive burden of constructing non-iterative data inversion methods, while producing results of the highest quality.
Methods
For simplicity, we restrict our discussion to one-dimensional data, but note that the extension to higher dimensions is straightforward. Consider a measurement process:

m = Hd + b

where ‘m’ is a known vector of measurements, ‘d’ is the unknown, true density we wish to estimate, ‘H’ is a known matrix describing the noiseless measurement, and ‘b’ is a known background signal. In this simple case, ‘d’ can be accurately calculated by matrix inversion:

d = H^-1 (m − b)
If ‘H’ represents a band-limited measurement (as in fluorescence microscopy), then some care must be taken when inverting ‘H’ [5], but the band-limited portion of ‘d’ can still be recovered. Of course, any error in ‘H’ and ‘b’ will corrupt our estimate of ‘d’, but in practice, ‘H’ and ‘b’ can be determined with good accuracy.
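As a minimal one-dimensional sketch of the noiseless model and its inversion (a toy example of ours, assuming a Gaussian blurring kernel, not the paper’s code):

```python
import numpy as np
from scipy.linalg import toeplitz

n = 64
kernel = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)  # one-sided Gaussian profile
H = toeplitz(kernel)                               # symmetric Toeplitz blurring matrix
d = np.zeros(n); d[20] = 1.0; d[45] = 2.0          # true density: two point sources
b = np.full(n, 0.1)                                # known background
m = H @ d + b                                      # noiseless measurement
d_est = np.linalg.solve(H, m - b)                  # recovers 'd' up to conditioning
```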
Unfortunately, fluorescence microscopy measurements are always corrupted by Poisson noise:

mn = P(Hd + b)
Here the operator ‘P’ adds Poisson noise to the measurement. Specifically, ‘P(x)’ replaces each element ‘xi’ of its input vector ‘x’ with a random integer; the integer is drawn from a Poisson distribution with expectation value ‘xi’. Straightforward matrix inversion ignoring this Poisson noise generally produces an estimate ‘e’ which is badly corrupted. In this case, RL deconvolution gives superior results.
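Continuing the toy example above, the operator ‘P’ is a single NumPy call, and naive inversion of the noisy data illustrates the corruption:

```python
import numpy as np

rng = np.random.default_rng(0)          # seeded for reproducibility
m_n = rng.poisson(H @ d + b)            # 'H', 'd', 'b' from the sketch above
e_naive = np.linalg.solve(H, m_n - b)   # typically a badly corrupted estimate
```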
Given our Poisson-noisy measurement ‘mn’, our known measurement process ‘H’, and our background ‘b’, we may write an objective likelihood ‘L’ as a function of our estimate ‘e’ of the true density ‘d’:

L(e) = ∏_k (He + b)_k^(mn_k) · exp(−(He + b)_k) / (mn_k)!
RL deconvolution performs an iterative ascent on this objective function, refining our estimate ‘ei’ of the true density ‘d’ on each iteration:

e_{i+1} = e_i * HT(r) / HT(ones),   where r = mn / (He_i + b)
Here, ‘ei’ is our estimate of the true sample density ‘d’ on iteration ‘i’, ‘*’ and ‘/’ represent pointwise multiplication and division, and ‘HT’ is the transpose of the measurement matrix ‘H’ [17]. The array ‘ones’ has the same dimensions as ‘m’, but every entry has the value 1. The noisy measurement ‘mn’ is compared to its expected noiseless value ‘m’ by the pointwise ratio ‘r’. If the measurement and the estimate are consistent, the ratio is identical to the array ‘ones’, and the iteration does not change the values of ‘e’. If the measurement and estimate are inconsistent, the ratio differs from 1 and acts as a multiplicative “correction factor” used to refine ‘e’. ‘HT’ can be thought of as “backprojection”: if ‘y = Hx’ for a density vector ‘x’, then ‘HT’ reports which elements of ‘y’ could have been affected by which elements of ‘x’, assigning blame for the parts of the correction factor that are not 1 to the parts of the estimate that could potentially be responsible.
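In code, the update above reduces to a short loop. The following is a minimal sketch of ours (not the released implementation), in which ‘H’ and ‘HT’ are passed in as functions and the multiple output images are stacked in a single array so that the pointwise ratio broadcasts:

```python
import numpy as np

def rl_deconvolve(m_n, H, HT, b, n_iter=100, eps=1e-12):
    """Joint RL deconvolution. 'H' maps an estimate to the stack of expected
    images; 'HT' backprojects a stack of images to an estimate-shaped array."""
    e = np.ones_like(HT(np.asarray(m_n, dtype=float)))  # flat initial estimate
    norm = HT(np.ones(np.shape(m_n)))                   # 'HT' applied to 'ones'
    for _ in range(n_iter):
        r = m_n / (H(e) + b + eps)      # pointwise correction factor
        e = e * HT(r) / (norm + eps)    # refine the estimate
    return e
```

Each example below then reduces to supplying a suitable ‘H’/‘HT’ pair.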
If we estimate ‘d’ on the same pixel grid which we used when measuring ‘m’, ‘H’ is a square matrix. If ‘H’ describes a simple blurring, as in Figure 7a, its entries are constant along the diagonal (a “Toeplitz” matrix). In this case, ‘HT’ is also Toeplitz, and acts as a re-blurring of the correction factor, reducing amplification of high-frequency noise in the RL iteration.
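As a quick check (a toy snippet of ours), SciPy can build such a matrix directly, and for a symmetric blur the transpose is the identical re-blurring operation:

```python
import numpy as np
from scipy.linalg import toeplitz

kernel = np.exp(-0.5 * (np.arange(32) / 3.0) ** 2)  # symmetric Gaussian profile
H = toeplitz(kernel)          # constant along every diagonal
assert np.allclose(H, H.T)    # 'HT' re-blurs exactly as 'H' does
```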
Figure 7.
Matrix structures. (a) A simple blurring operation is represented by a “Toeplitz” matrix, which is constant along any diagonal. (b) A measurement which produces two images from a single density, (see Figure 1) can be written as two submatrices stacked one above the other. In this case, each submatrix represents a simple blurring operation. (c) The transpose of the matrix shown in (b) is simply the transposed sub-matrices stacked side-by-side. This matrix reduces two images to a single output image. (d) A measurement which produces multiple images from the same density, each image taken with the same blur and a different illumination (similar to Figures 3 and 4) is composed of stacked submatrices, as in (b), but the submatrices are now the product of a diagonal illumination matrix followed by a Toeplitz blurring matrix. The transpose of the submatrix reverses the order of blurring and illumination.
We find thinking of the ‘H’ operation as a matrix multiplication to be conceptually useful, but explicitly constructing and storing ‘H’ is very inefficient in the case of simple blurring, so we use direct convolution or multiplication in the Fourier domain to compute our blurring operations (see http://code.google.com/p/iterative-fusion/ for details).
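A sketch of the two matrix-free alternatives (function names are ours):

```python
import numpy as np
from scipy.ndimage import convolve

def blur_direct(img, kernel):
    """Direct convolution; equivalent to multiplying by the Toeplitz matrix."""
    return convolve(img, kernel, mode='constant')

def blur_fft(img, kernel):
    """Circular convolution via pointwise multiplication in the Fourier domain.
    'kernel' has the same shape as 'img' and is centered; ifftshift moves its
    peak to the array origin before transforming."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(kernel))))
```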
In the case where ‘H’ produces ‘N’ differently blurred images from a single density ‘d’ (as in Figure 1), ‘H’ has ‘N’ times as many rows; it can be regarded as ‘N’ square matrices stacked on top of one another, each square matrix describing the blurring process that produces one of the output images (Figure 7b). ‘HT’ then consists of ‘N’ square matrices stacked side-by-side, each the transpose of one of the square submatrices in ‘H’ (Figure 7c). The effect of ‘HT’ in this case is to re-blur the correction factor computed from each image and combine the results into a weighted sum.
For Figure 1, we implemented ‘H’ as a function that takes in a single image, and returns two blurred versions of the image. The first image is convolved with a blurring kernel extended in ‘x’, the second is convolved with a blurring kernel extended in ‘y’, computed using direct convolution. Similarly, we implemented ‘HT’ as a function that takes in two images, blurs each image with the same blurring kernel described above via direct convolution, and averages the two images.
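A sketch of this pair of functions (our own code, with the sigma values from the simulation parameters below; the two images are stacked in one array for use in the RL loop above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

SIGMAS = [(2, 6), (6, 2)]  # per-axis blur widths in pixels for the two views

def H(e):
    """One estimate in, two differently blurred images out (stacked)."""
    return np.stack([gaussian_filter(e, s) for s in SIGMAS])

def HT(imgs):
    """Re-blur each image with its own (symmetric) kernel, then average."""
    return np.mean([gaussian_filter(im, s) for im, s in zip(imgs, SIGMAS)],
                   axis=0)
```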
When ‘H’ produces multiple images that are differently blurred and have different signal levels (as in Figures 2 and 6), we proceed as in the case of Figure 1, except that the entries of the individual matrices are scaled by the total signal expected in each output image. In our code, ‘H’ is still implemented as a function that takes one image and returns multiple blurred versions of it, each convolved with a different blurring kernel via direct convolution. However, the total brightness of each blurring kernel now varies from kernel to kernel, giving a different total brightness in each blurred image, which in turn gives different noise levels when Poisson noise is applied. Similarly, ‘HT’ is still a function that takes in multiple images, blurs each image with the corresponding blurring kernel used in ‘H’, and returns an average of these blurred images.
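A sketch of the scaled variant (blur widths from the simulation parameters below; the relative signal levels here are illustrative placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

SIGMAS  = [3.5, 0.7]    # bright/blurry view, dim/sharp view
SIGNALS = [1.0, 0.01]   # assumed relative kernel brightness per view

def H(e):
    return np.stack([s * gaussian_filter(e, sig)
                     for s, sig in zip(SIGNALS, SIGMAS)])

def HT(imgs):
    return np.mean([s * gaussian_filter(im, sig)
                    for im, s, sig in zip(imgs, SIGNALS, SIGMAS)], axis=0)
```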
When ‘H’ produces multiple images that have been recorded with different illumination patterns (as in Figures 3–5), the ‘N’ square submatrices of ‘H’ are no longer Toeplitz (constant along their diagonal). However, the submatrices can be written as the product of a Toeplitz blurring matrix ‘B’ and a diagonal illumination matrix ‘Ex’, as shown in Figure 7d. ‘HT’ still consists of ‘N’ square submatrices side-by-side, but with the order of illumination and blurring reversed by the transpose operation. The diagonal illumination matrix is of course unchanged by the transpose, and the blurring matrix is unchanged if it represents a symmetric blur (the only case we consider in this work). The effect of ‘HT’ in this case is to re-blur the correction factor computed from each measured/expected image pair, multiply each correction factor by the appropriate illumination, and average the corrections together so they can be applied to ‘e’.
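Written as functions, one submatrix pair of Figure 7d looks like the following sketch (ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def H_sub(e, illum, sigma):
    """Forward submatrix: illuminate, then blur (B·Ex acting on 'e')."""
    return gaussian_filter(illum * e, sigma)

def HT_sub(img, illum, sigma):
    """Transposed submatrix: blur first (B is symmetric, so its transpose is B),
    then multiply by the same illumination (Ex is diagonal, so it is unchanged)."""
    return illum * gaussian_filter(img, sigma)
```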
For Figures 3 and 4, ‘H’ is implemented as a function that takes a single input image and returns one output image for every pixel in the input image. Each output image is constructed by multiplying the input image with a Gaussian illumination pattern, blurring by direct convolution with a blurring kernel, and cropping the (mostly zero) image to a reasonable size. The resulting plethora of cropped images is returned as a 4-dimensional array, with the first two axes corresponding to the pixels of the input image. Similarly, ‘HT’ is implemented as a function that takes in a plethora of small images held in the same type of 4-dimensional array returned by ‘H’, and returns a single image with the same dimensions as the image input to ‘H’. Each small 2D input image to ‘HT’ is blurred with the same kernel used in ‘H’, multiplied by the same illumination used in ‘H’, and “un-cropped”, meaning embedded in an otherwise empty larger image at the appropriate position. These blurred, multiplied, un-cropped 2D images are averaged together to produce the output image.
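A heavily simplified (and deliberately slow) sketch of this forward model, with assumed Gaussian illumination width and crop size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ism_H(e, sigma=4.0, crop=8):
    """One small (2*crop+1)-pixel image per scan position, stored in a 4D array
    whose first two axes index the scan (the pixels of the input image)."""
    ny, nx = e.shape
    yy, xx = np.mgrid[:ny, :nx]
    out = np.zeros((ny, nx, 2 * crop + 1, 2 * crop + 1))
    for sy in range(ny):
        for sx in range(nx):
            illum = np.exp(-((yy - sy)**2 + (xx - sx)**2) / (2 * sigma**2))
            blurred = gaussian_filter(illum * e, sigma)
            padded = np.pad(blurred, crop)  # keep the crop window in-bounds
            out[sy, sx] = padded[sy:sy + 2*crop + 1, sx:sx + 2*crop + 1]
    return out
```

A corresponding ‘HT’ would apply the reverse steps described above: blur each small image, multiply by the matching illumination, un-crop, and average.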
For Figure 5, ‘H’ and ‘HT’ are implemented similarly to Figure 3, except that cropping and un-cropping are not necessary. The function implementing ‘H’ takes in a single input image and returns twenty-five output images. Each output image is produced by multiplying the input image by a different 2D sinusoidal illumination pattern and then blurring it via direct convolution with a blurring kernel. Similarly, the function implementing ‘HT’ takes in twenty-five input images, blurs each one with the same kernel used in ‘H’, multiplies each image by the corresponding illumination pattern used in ‘H’, and averages the resulting images to produce a single output image, as sketched below.
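A sketch of the pattern generation and the corresponding ‘H’/‘HT’ pair (period, counts, and blur taken from the simulation parameters below; the exact pattern layout is an assumption of ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sim_patterns(shape, period=30.0, n_rot=5, n_phase=5):
    """25 nonnegative sinusoidal illumination patterns (5 rotations x 5 phases)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    k = 2 * np.pi / period
    pats = [0.5 * (1 + np.cos(k * (np.cos(th) * xx + np.sin(th) * yy) + ph))
            for th in np.arange(n_rot) * np.pi / n_rot
            for ph in np.arange(n_phase) * 2 * np.pi / n_phase]
    return np.stack(pats)

def H(e, pats, sigma=6.4):
    return np.stack([gaussian_filter(p * e, sigma) for p in pats])

def HT(imgs, pats, sigma=6.4):
    return np.mean([p * gaussian_filter(im, sigma)
                    for p, im in zip(pats, imgs)], axis=0)
```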
The simulation parameters used for the images blurred in different directions (Fig. 1) were blurring sigmas in the x and y directions of 2 and 6 or 6 and 2 pixels, an input unblurred image with a maximum pixel value of 255 and no background, a Poisson error intensity scaling factor of 0.1, and 100 iterations. For the high-signal/high-resolution images (Fig. 2), the input image had a maximum pixel value of 65535 and no background, the blurring sigmas were 0.7 for high-resolution and 3.5 for high-signal, the Poisson intensity scaling was 0.0001 for high-signal and 0.1 for high-resolution, and the number of iterations was 300. The ISM-like data (Fig. 3) were simulated with an input image with a maximum pixel value of 255, a Poisson intensity scaling of 5, an excitation and emission blurring sigma of 4 pixels, and 50 iterations. The ISM-like data for Figure 4 were simulated with an input image with a maximum pixel value of 65000, a Poisson intensity scaling of 0.5, an excitation and emission blurring sigma of 8 pixels, and 150 iterations. The SIM-like data (Fig. 5) had an input image with a maximum pixel value of 255, a Poisson intensity scaling of 0.1, 5 rotations and 5 phases of the illumination pattern, an illumination period of 30 pixels, an emission blurring sigma of 6.4 pixels, and 100 iterations. For the STED-like data (Fig. 6), 10 images were simulated using an input object with a maximum pixel value of 255, a Poisson intensity scaling of 0.5, signal scaling that ranged linearly from 1 to 0.1, a blurring sigma that ranged linearly from 7.5 to 2 pixels, and 30 iterations.
The code used to produce our figures is written in the Python language [18], using the python subpackages Numpy [19] and Scipy [20]. For any additional details of our processing, please consult the code, freely available at http://code.google.com/p/iterative-fusion/.
Acknowledgments
We thank Michael Broxton for valuable discussions and critical evaluation of this manuscript. This work was supported by the Intramural Research Program of the NIH including the National Institute of Biomedical Imaging and Bioengineering.
Footnotes
Supporting information for this article is available on the WWW under http://code.google.com/p/iterative-fusion/
References
1. Temerinac-Ott M, Ronneberger O, Ochs P, Driever W, Brox T, Burkhardt H. IEEE Trans Image Process. 2012;21:1863. doi: 10.1109/TIP.2011.2181528.
2. Temerinac-Ott M, Ronneberger O, Nitschke R, Driever W, Burkhardt H. IEEE I S Biomed Imaging. 2011:899.
3. Gustafsson MG. J Microsc. 2000;198:82. doi: 10.1046/j.1365-2818.2000.00710.x.
4. Müller CB, Enderlein J. Phys Rev Lett. 2010;104:198101. doi: 10.1103/PhysRevLett.104.198101.
5. Jansson PA. Deconvolution of Images and Spectra. Academic Press; 1997.
6. Richardson WH. JOSA. 1972;62:55.
7. Lucy L. Astronomical Journal. 1974;79:745.
8. Verveer PJ, Jovin TM. Appl Opt. 1998;37:6240. doi: 10.1364/ao.37.006240.
9. Swoger J, Verveer P, Greger K, Huisken J, Stelzer EH. Opt Express. 2007;15:8029. doi: 10.1364/oe.15.008029.
10. Heintzmann R, Cremer C. J Microsc. 2002;206:7. doi: 10.1046/j.1365-2818.2002.01000.x.
11. Wu Y, Wawrzusin P, Senseney J, Fisher RS, Christensen R, Santella A, York AG, Winter PW, Waterman CM, Bao Z, Colón-Ramos DA, McAuliffe M, Shroff H. Nat Biotechnol. 2013;31:1032. doi: 10.1038/nbt.2713.
12. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF. Science. 2006;313:1642. doi: 10.1126/science.1127344.
13. Sheppard CJR. Optik. 1988;80:53.
14. York AG, Parekh SH, Dalle Nogare D, Fischer RS, Temprine K, Mione M, Chitnis AB, Combs CA, Shroff H. Nat Methods. 2012;9:749. doi: 10.1038/nmeth.2025.
15. Vicidomini G, Moneron G, Han KY, Westphal V, Ta H, Reuss M, Engelhardt J, Eggeling C, Hell SW. Nat Methods. 2011;8:571. doi: 10.1038/nmeth.1624.
16. Broxton M, Grosenick L, Yang S, Cohen N, Andalman A, Deisseroth K, Levoy M. Opt Express. 2013;21:25418. doi: 10.1364/OE.21.025418.
17. Bertero M, Boccacci P, Desiderà G, Vicidomini G. Inverse Probl. 2009;25:123006.
18. Python Software Foundation. Python Language Reference, version 2.7. Available at http://www.python.org.
19. Oliphant TE. Comput Sci Eng. 2007;9:10.
20. Jones E, Oliphant T, Peterson P, et al. SciPy: Open Source Scientific Tools for Python. 2001. Available at http://www.scipy.org/.




