Abstract
Capturing biological dynamics with high spatiotemporal resolution demands advances in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution beyond the diffraction limit, resolving near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques remain fundamentally limited by the trade-off between spatial information and the longer image acquisition time needed to obtain it. Here, we demonstrate an example-based, computational method that aims to obtain super-resolution images from conventional imaging without increasing the imaging time. Given a low-resolution image input, the method provides an estimate of its super-resolution counterpart based on an example database containing super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with experimental super-resolution STORM results. This approach may offer improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data-aided biomedical imaging.
Introduction
Super-resolution fluorescence imaging techniques have overcome the optical diffraction limit of conventional fluorescence microscopy, allowing visualization of biological structures with near-molecular-scale resolution1,2. However, an inevitable challenge for all super-resolution techniques remains: greater spatial resolution is obtained at the expense of prolonged acquisition time, leading to compromised temporal resolution for live imaging. For wide-field techniques based on single-molecule localization, such as stochastic optical reconstruction microscopy (STORM)3 or (fluorescence) photoactivated localization microscopy ((F)PALM)4,5, this trade-off arises from the need to accumulate enough single-molecule localizations to reveal the biological structures (i.e. to meet the Nyquist criterion, whereby the image sampling interval must be smaller than half of the desired resolution)6. For methods based on patterned illumination, such as stimulated emission depletion (STED) microscopy7 and (saturated) structured illumination microscopy ((S)SIM)8, this compromise is caused by longer scanning times or the need to acquire images with a series of excitation patterns. Various strategies have been reported to accelerate these super-resolution imaging processes without significantly degrading the spatial resolution. For STORM or (F)PALM, these include methods that increase the single-molecule switching rate with stronger excitation and the readout speed with faster cameras9,10, and methods that allow a higher density of emitting fluorophores in each frame by discerning overlapping single-molecule images with computational algorithms such as DAOSTORM11,12, Bayesian statistics13 and compressed sensing14,15. For STED or SIM, improvements in temporal resolution have focused on parallelizing multiple patterns in one camera exposure to reduce the total imaging time16,17.
While these approaches have led to substantial improvements in temporal resolution, by orders of magnitude in some cases, the construction of a sub-diffraction-limit image is still fundamentally limited by the need to accumulate spatial information sequentially. Here we present an example-based approach for inferring a super-resolution (SR) image directly from a single low-resolution (LR) fluorescence image of structures whose shapes are known a priori. With a sufficiently large example library containing structures of many different shapes, it might also be possible to use this method to infer SR images of structures without prior knowledge of their shapes. This method has the potential to change the way images are obtained in super-resolution microscopy, thereby significantly improving temporal resolution.
Example-based resolution improvement methods have recently emerged in computer vision, whereby a LR natural image (e.g. a photograph) can be efficiently transformed into a high-resolution one by learning and estimating from a database of low- and high-resolution example images18. These example images of natural scenes are typically unrelated to the LR image of interest, and the resolution improvement relies on the statistical restoration of the missing high-frequency components of the LR image by inference from the frequency compositions of the examples18. In contrast to natural images, fluorescence images are highly specific, exhibiting precisely labeled molecular targets or structures in the sample. Thus, for fluorescence microscopy, the LR and SR examples in the database can be composed of fluorescence images of the same type of biological structures as in the LR image of interest, so that more precise (less statistical) image inference can be achieved.
The principle of our method is shown in Fig. 1 (for details, see Methods and Supplementary Figure 1). First, LR and SR example image pairs are segmented into small patches and stored in a database. Next, an input LR image is segmented into patches of the same size as those in the example database (Fig. 1a). Then, to infer the SR form of the LR input, each patch of the input LR image is compared with the LR patches in the example database: the pixel-value distances between these patches are calculated, and the LR example patches with the lowest distances (i.e. highest similarity), along with their SR pairs, are selected as candidates (Fig. 1b). Finally, the pixel-value distance between the overlapping boundaries of neighboring selected SR candidates is calculated, and the candidates that provide the overall smoothest connectivity are chosen to construct the final SR image (Fig. 1c). These two steps of inference are described by a Markov random field (MRF). A detailed mathematical description of the procedures can be found in Methods.
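The candidate-selection step described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' implementation: the function name best_candidates and the toy data are hypothetical, and the pixel-value distance is taken to be the Euclidean distance between patches.

```python
import numpy as np

def best_candidates(input_patch, lr_examples, k=30):
    """Return the indices and distances of the k LR example patches
    closest (in pixel-value Euclidean distance) to an input LR patch.

    input_patch : 2D array, one LR patch from the input image
    lr_examples : 3D array of shape (n_examples, h, w)
    """
    diffs = lr_examples - input_patch[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=(1, 2)))  # pixel-value distance
    order = np.argsort(dists)
    return order[:k], dists[order[:k]]

# Toy usage: five example patches, one identical to the input patch.
rng = np.random.default_rng(0)
examples = rng.random((5, 21, 21))
patch = examples[3].copy()
idx, d = best_candidates(patch, examples, k=2)  # idx[0] is 3, d[0] is 0
```

In the full method, the SR patches paired with the returned LR candidates then enter the second, connectivity-based selection step.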
Experimentally, we validated the method by imaging microtubules in cells, a biological structure often used for the calibration of SR imaging systems. To build the example database, we considered the known width and morphology of fluorescently labeled microtubules19 and developed a strategy to construct a synthetic database (see Methods). We reason that the use of synthetic images, rather than experimental ones, can substantially reduce redundancy in the database and provides the flexibility to address any specific (or complex) structures that require corresponding examples in the database. Practically, we categorized the LR and SR example image pairs of microtubules into three groups based on their geometrical features: 1) single microtubule filaments of varying orientations (Fig. 2a–c), 2) two crossing microtubule filaments of varying orientations (Fig. 2d–f), and 3) randomly distributed microtubule filaments (Fig. 2g,h). The first two groups covered a large portion of microtubule features owing to the sparse and extended microtubule distributions in cells. The third group was created to supply features missing from the first two groups, e.g. densely packed regions containing more than two filaments. In this work, we limited the number of images in the third group for demonstration; as shown later, however, the performance of the method can be substantially improved by expanding the third group to contain more microtubule features.
Once the database of microtubule filament images was established, a LR image of interest was introduced (Fig. 2i), and the two-step image inference procedure described above was applied. First, the LR input image was segmented into patches, which were compared with the database to search for matching LR and SR candidates. The search returned the 30 best LR candidates as well as their SR pairs. As shown in Fig. 2j, these LR candidates were largely indistinguishable due to the limited resolution, while their SR images were clearly distinct from each other (Fig. 2k). Thus, the second step of inference was performed to choose among these SR candidates based on the connectivity between their boundaries. The 30 SR candidates were examined by comparing the overlapping regions of neighboring patches in both the horizontal and vertical dimensions (Fig. 1c). Among the various combinations formed by these SR candidates, the one that achieved the best global (i.e. whole-image) connectivity was selected to construct the SR output image (see Methods). It should be noted that the example database shown in Fig. 2a–h only considered rotational rather than translational variance among microtubule features. For example, in Fig. 2a–c, translations of the single microtubule filaments were not considered (i.e. the filaments of varying orientations are all centered in each example image). This may result in less accurate inference across all possible microtubule distributions. To address this problem without increasing the size of the database, and hence the computation time, we combined the inference with another strategy: we translated the input LR image in single-pixel steps, performed the two-step image inference on each translated image, and then translated the resulting SR images back by the same amounts (Supplementary Figure 2).
The final SR image of the original LR input is the average over all these SR images. Since the image inference procedure searches the database for the best SR reconstruction of every patch, this average provides the maximum-likelihood reconstruction across all patches. As a result, this strategy effectively enhances the usefulness of a relatively small database, compensates for the lack of translated examples, and provides both smoothness and accuracy in the final SR image (Supplementary Figure 3).
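The translate-infer-average strategy can be sketched as follows. This is a simplified illustration under stated assumptions: infer_sr is a hypothetical stand-in for the full two-step MRF inference, and cyclic np.roll translation stands in for the single-pixel shifts (a real implementation would crop rather than wrap the borders).

```python
import numpy as np

def translation_averaged_sr(lr_image, infer_sr, shifts=(0, 1, 2)):
    """Average SR inferences over single-pixel translations of the input.

    lr_image : 2D LR image (already up-sampled to the SR pixel grid)
    infer_sr : callable mapping an LR image to its inferred SR image
               (hypothetical stand-in for the two-step MRF inference)
    shifts   : pixel offsets applied along both axes before inference
    """
    acc = np.zeros_like(lr_image, dtype=float)
    for s in shifts:
        shifted = np.roll(lr_image, s, axis=(0, 1))  # translate the input
        sr = infer_sr(shifted)                       # infer SR of shifted image
        acc += np.roll(sr, -s, axis=(0, 1))          # translate the result back
    return acc / len(shifts)

# With an identity "inference", averaging simply returns the original image.
img = np.arange(16.0).reshape(4, 4)
out = translation_averaged_sr(img, infer_sr=lambda x: x)
```

Because each shifted copy presents the structures to the database at a different sub-patch offset, the average compensates for the absence of translated examples in the library.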
Figure 3 shows several super-resolved microtubule images obtained with this resolution improvement method. Compared with images taken by a conventional LR wide-field microscope (left column, Fig. 3), the computational results using our method exhibit considerably improved resolution (center columns, Fig. 3). As seen in the magnified sections, some of the originally indiscernible microtubule filaments in the conventional images became resolved with the computational method. We also compared the results with experimentally determined super-resolution images of the same areas using STORM (right column, Fig. 3). The global microtubule structures obtained by the example-based, computational method approximately agreed with the experimental super-resolution imaging results. However, it is worth noting that, compared with the STORM results, some incorrectly retrieved SR features can be clearly noticed, especially in dense structures.
We find that the completeness of the database is critical to the performance of the method. The demonstration in Fig. 3 utilized only 30 images in the third group described in Fig. 2, which led to a substantial lack of features necessary for the reconstruction. One deficiency is the lack of more complex examples, which resulted in erroneous inference for high-density structures. The database also lacks three-dimensional (3D) examples, so the algorithm may confuse the image of a defocused filament (i.e. an image broadened by the diffraction of light) with that of two or more overlapping filaments. To examine this dependence on the completeness of the example database, we show in Fig. 4a that nearby filaments became better resolved as we moderately increased the number of randomly distributed microtubules in the database. Quantitatively, we evaluated this improved image inference with the structural similarity index (SSIM)20, a universal image quality measure for comparing processed and ground-truth images. Here the computational images obtained by our example-based method were compared with the experimentally derived SR images as a function of the database size (see Methods). The comparison showed that the SSIM increased for databases containing more examples, a quantitative result consistent with the visually apparent improvements in image quality seen with larger databases. In addition, we measured the closest distance between microtubules that the computational method could resolve correctly and used this value to define the resolution of our computational imaging system. As shown in Fig. 4b, the system resolution improves as the database is enriched with more features. We also show in Supplementary Figure 4 the stability of the inference, quantified by the standard deviation (SD) of the matches as the input LR image was translated in single-pixel steps.
This SD describes the consistency of the inference around a given pixel as its containing patch is translated. When the library is built with sufficient true structures matching those under investigation, all these patches should find identical matches; around such a location, the SD will be low and the inference is considered highly reliable. All these results imply that the performance of the method depends critically on the composition of the database. Noise is another factor that affects the performance of the method. In practice, the noise level (or signal-to-noise ratio, SNR) in the library should be consistent with that in the input LR image so that the images can be fairly compared. As shown in Supplementary Figure 5, the quality of reconstruction is much improved when the SNRs of the library and input are similar, and degrades when the SNRs deviate. In addition, the higher the SNRs in both the library and the input, the better the quality of the computed image. This finding highlights the critical role of the noise level in future development of the method. Beyond simply increasing the number of features in the library, we could also strategically enrich the database with tailored features based on the regions of the final SR image where extra computational consideration is required. Such an algorithm could adaptively generate examples in the library focusing on a specific complex structure in the sample. For example, if a LR image contains a region where five microtubules cross, the algorithm should be able to adaptively learn and generate a large set of examples to infer this case until a satisfactory quantitative score is reached.
In summary, our proof-of-principle work demonstrates a resolution improvement method that computationally converts a LR image into its SR form using a database of examples. The work shows that, in some cases, SR images can be obtained directly from LR images without the need to accumulate sequential image frames. We anticipate this approach to work best for structures whose shapes are known a priori, which facilitates the construction of the example library. We thus expect this computation-based method for improving the resolution of conventional LR images to be useful for imaging fast dynamics of known biological structures (e.g. microtubule dynamics) at higher resolution. Furthermore, compared with deconvolution microscopy, because this method exploits both prior knowledge and boundary (i.e. global) connectivity, the resulting images are not only sharpened but can also resolve some indiscernible (overlapping) structures that deconvolution cannot, owing to its lack of a prior-knowledge-based algorithmic framework (Supplementary Figure 6). It is, however, important to note that although greater resolution has been achieved with this example-based computational method, inaccurately retrieved patterns have also been observed compared with the experimentally obtained super-resolution images. In practice, labeling deficiencies will affect the performance of the method through both non-uniform brightness in the structure and noise in the background; in the current work we used a simulated library, in which the brightness is uniform and the background contains no noise. Future improvements can be made by additionally including experimental examples in the library, which may partially mitigate this problem.
In addition, the current implementation of the method may not be able to identify new structures if they do not have corresponding examples in the library, and global smoothness may not be well achieved for structures that consist of spatially separated clusters rather than continuous ones like microtubules. These reflect limitations in both the database and the algorithm, and future improvements on both fronts may help overcome this challenge and allow the method to be used in broader applications. For example, future development will aim at a hybrid database that efficiently combines numerical simulations, incorporating prior knowledge of the sample of interest, with experimental results obtained by all types of super-resolution optical imaging techniques (STORM/(F)PALM, STED, SIM, etc.). To further broaden the range of structures that can be inferred, we will establish a collection of databases for many types of biological structures, as well as develop algorithms that address local features (e.g. geometry, polarization, orientation) in addition to global connectivity, so that the method can be adapted to a wide range of biological investigations. With a sufficiently large example library that increases the likelihood that the shapes of the structures under investigation are adequately covered, this approach might allow us to approximate SR images of structures from conventional LR images without prior knowledge of their shapes. Computationally, the current method was demonstrated on a desktop PC (2.67-GHz GPU, 72-GB memory), and the reconstruction of each SR image requires 5–10 hours; accelerated computing approaches will be incorporated into the existing algorithm in future development. In the long term, we expect to establish an on-line system through which researchers in the field can contribute their image data to study specific biological structures.
More advanced algorithms will be developed to sort and process the on-line data, construct such libraries, and perform SR image reconstruction. With further advancement, this example-based approach may enable computational super-resolution imaging of many biological structures and provide a new path for large-imaging-data-aided biomedical research.
Methods
The principle of the algorithm
The input LR image of interest is represented by p, which contains the two-dimensional pixel values of the image. In the first procedure, for patch i (pixel values pi) of the input LR image p, the images of its LR matching candidates xi are denoted by L(xi). Based on a Markov random field (MRF), we consider that all example patches have an equal prior probability of matching the input patch i. Thus, the difference between the input and example patches is modeled as independent, identically distributed Gaussian noise at each pixel. The evidence potential ψi(xi) is then described as

ψi(xi) ∝ exp(−‖L(xi) − pi‖²/(2σ1²)),
where the variance σ1 is determined by the completeness of the example database; e.g. a large example database yields better matching and hence a smaller variance. In this work, we used a normalized σ1 = 1 for all measurements. Based on the evidence potential, the best-matching LR and SR candidates are selected. In the second procedure, belief propagation between neighboring SR candidate patches is performed to ensure globally smooth connectivity. As described in the database construction section of the Methods and Supplementary Figure 1, each SR patch has an overlapping region with its neighbors. Hence, the selection of the best candidate in the second procedure relies on the connectivity potential between neighboring SR candidate patches S(xi) and S(xj) for input patches i and j, respectively. Again, we assume that the difference between the overlapping regions Oij and Oji of the SR candidates is described by the MRF and hence follows a Gaussian distribution. Thus

φij(xi, xj) ∝ exp(−‖Oij − Oji‖²/(2σ2²)),
where σ2 denotes the variance of the difference between the overlapping regions of neighboring candidates, set to 1 in this work for all measurements. With these two procedures, the goal is then to globally maximize the joint probability

P(x) = (1/Z) ∏i ψi(xi) ∏(i,j)∈ε φij(xi, xj),
where ε is the set of edges of the above MRF, denoted by pairs of neighboring patches i and j, and Z is a normalization constant. The SR candidate patches that maximize the joint probability P(x) are selected to construct the final SR image.
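The two potentials can be sketched as below, assuming the Gaussian forms described above with normalized σ1 = σ2 = 1 (the function names are hypothetical, not from the authors' code):

```python
import numpy as np

def evidence(lr_candidate, input_patch, sigma1=1.0):
    """Evidence potential psi: Gaussian in the pixel-value distance
    between a LR example patch and the input LR patch."""
    d2 = ((lr_candidate - input_patch) ** 2).sum()
    return np.exp(-d2 / (2.0 * sigma1 ** 2))

def connectivity(overlap_ij, overlap_ji, sigma2=1.0):
    """Connectivity potential phi: Gaussian in the difference between
    the overlapping regions of neighboring SR candidate patches."""
    d2 = ((overlap_ij - overlap_ji) ** 2).sum()
    return np.exp(-d2 / (2.0 * sigma2 ** 2))

# Identical patches yield the maximal potential of 1; the potential
# decays as the patches (or their overlaps) differ.
p = np.ones((15, 15))
```

The joint probability P(x) is then the normalized product of the evidence potentials of all patches and the connectivity potentials of all neighboring pairs; the candidate assignment maximizing this product gives the final SR image.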
The construction of synthetic microtubules
As shown in Fig. 2, numerically simulated LR and SR image pairs of microtubule filaments were generated in three categories: single filaments, crossing filaments, and randomly distributed filaments. The LR and SR microtubules have widths of 341 nm and 60 nm, respectively, consistent with experimental measurements of immunostained microtubules19. The orientations of these microtubules are described in the main text and the caption of Fig. 2. In the third category, each microtubule is determined by its two ends, the positions of which are randomly chosen within the field of the image; each image contains 20 such randomly distributed microtubules. LR example images are 50 pixel × 50 pixel in categories 1 and 2, and 200 pixel × 200 pixel in category 3, with a pixel size of 160 nm. Corresponding SR example images are 250 pixel × 250 pixel in categories 1 and 2, and 1000 pixel × 1000 pixel in category 3, with a pixel size of 32 nm. In this work, background noise in both LR and SR images was ignored, given that the labeling is highly specific and that relatively thin samples (compared with the depth of focus) were used in STORM.
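A synthetic image of randomly distributed filaments of the kind described here could be rendered as follows. This is an illustrative sketch, not the authors' generator: the function name and the Gaussian cross-section profile are assumptions, with the filament width expressed in pixels (e.g. ~341 nm/160 nm ≈ 2.1 pixels for the LR images, ~60 nm/32 nm ≈ 1.9 pixels for the SR images).

```python
import numpy as np

def synthetic_filaments(n_filaments=20, size_px=200, width_px=2.1, seed=0):
    """Render randomly oriented filaments as a 2D intensity image.

    Each filament is defined by two random endpoints; intensity at a
    pixel falls off as a Gaussian of its distance to the segment, so
    width_px sets the apparent full-width-at-half-maximum in pixels.
    """
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size_px, 0:size_px]
    img = np.zeros((size_px, size_px))
    sigma = width_px / 2.355  # convert FWHM to Gaussian sigma
    for _ in range(n_filaments):
        (x0, y0), (x1, y1) = rng.uniform(0, size_px, (2, 2))
        # distance from every pixel to the segment (x0,y0)-(x1,y1)
        dx, dy = x1 - x0, y1 - y0
        t = ((xx - x0) * dx + (yy - y0) * dy) / (dx * dx + dy * dy)
        t = np.clip(t, 0, 1)
        dist = np.hypot(xx - (x0 + t * dx), yy - (y0 + t * dy))
        img += np.exp(-dist ** 2 / (2 * sigma ** 2))
    return img

lr = synthetic_filaments(size_px=50)
```

Rendering the same random endpoints twice, once at the LR width on the 160-nm grid and once at the SR width on the 32-nm grid, yields a matched LR/SR example pair.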
The construction of database
First, the LR example images described above were up-sampled five-fold using bi-cubic interpolation so that the pixel size of the LR images (160 nm/5 = 32 nm) matched that of their SR counterparts (32 nm). Next, the LR images were segmented into 21-pixel × 21-pixel patches, with segmentation centers advancing in steps of 5 pixels in both the horizontal and vertical dimensions (Supplementary Figure 1). As a result, the overlapping region of neighboring patches is 21 pixels × 16 pixels. The central 15-pixel × 15-pixel region of each LR patch was paired with a SR patch; as a result, adjacent SR patches have an overlapping region of 15 pixels × 10 pixels. Both LR and SR patches were stored in a k-dimensional (k-d) binary tree21 for data sorting and searching.
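The patch segmentation and k-d tree storage can be sketched with NumPy and SciPy's cKDTree. The authors' implementation is not specified; extract_patches is a hypothetical helper, and the toy image below stands in for an up-sampled LR example.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_patches(img, patch=21, step=5):
    """Segment an image into overlapping patch x patch tiles whose
    top-left corners advance by `step` pixels, so neighbors overlap
    by (patch - step) pixels along the stepping direction."""
    h, w = img.shape
    tiles, coords = [], []
    for r in range(0, h - patch + 1, step):
        for c in range(0, w - patch + 1, step):
            tiles.append(img[r:r + patch, c:c + patch].ravel())
            coords.append((r, c))
    return np.array(tiles), coords

img = np.arange(50 * 50, dtype=float).reshape(50, 50)
tiles, coords = extract_patches(img)
tree = cKDTree(tiles)              # k-d tree over flattened LR patches
dist, idx = tree.query(tiles[0])   # querying a stored patch finds itself
```

The tree then answers nearest-neighbor queries for input LR patches without a linear scan of the whole database.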
The segmentation of an input LR image
An input image (camera pixel size = 160 nm) was first up-sampled five-fold using bi-cubic interpolation to match the pixel size of the LR and SR patches in the database (32 nm). The image was then segmented into 21-pixel × 21-pixel patches, with centers advancing in steps of 12 pixels in both the horizontal and vertical dimensions (Supplementary Figure 1). As a result, neighboring patches have an overlapping region of 21 pixels × 9 pixels. After the first procedure, the SR candidate patches for neighboring input patches have an overlapping region of 15 pixels × 3 pixels. The patch sizes in both the database and the input image were determined empirically: it is difficult to find matching examples in the database for a larger patch because of its more complicated structural features, while a smaller patch contains too little information for the algorithm to process accurately. Supplementary Figure 7 shows the outcome for different patch sizes. In addition, because SR images reveal more information than LR ones, the SR patches were designed to be smaller than the LR ones (i.e. paired with the central 15-pixel × 15-pixel region of each LR patch).
Image contrast enhancement
To provide better image contrast in Fig. 3, we employed a linear filter provided in MATLAB to extract gradient information (i.e. sharpness) from the images. In brief, the filter adopts the standard 2D Laplacian derivative operator22

∇² = [[0, 1, 0], [1, −4, 1], [0, 1, 0]].

Under this operator, a 2D image f at pixel (i, j) becomes

∇²f(i, j) = f(i+1, j) + f(i−1, j) + f(i, j+1) + f(i, j−1) − 4f(i, j).
Such image contrast enhancement improves the image quality but not the super-resolution capability.
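A minimal sketch of this kind of Laplacian-based sharpening, assuming the common 4-neighbor Laplacian kernel (MATLAB's fspecial('laplacian') uses a parameterized variant, so this is an approximation of the filter actually used):

```python
import numpy as np

def laplacian_sharpen(f):
    """Contrast enhancement by subtracting the discrete Laplacian:
    g(i, j) = f(i, j) - [f(i+1, j) + f(i-1, j) + f(i, j+1)
                          + f(i, j-1) - 4 f(i, j)].
    Edges are handled by replicating the border pixels."""
    fp = np.pad(f, 1, mode="edge")
    lap = (fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2]
           - 4.0 * fp[1:-1, 1:-1])
    return f - lap

flat = np.full((8, 8), 3.0)
out = laplacian_sharpen(flat)  # flat regions are left unchanged
```

Subtracting the Laplacian amplifies intensity transitions (edges) while leaving uniform regions untouched, which is why it sharpens contrast without adding resolution.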
Structural similarity index (SSIM)
The index comprises three comparisons: luminance, contrast and structure20. In this work, the index mainly reflects the structural comparison, computed as the correlation between the two images; the luminance and contrast factors are negligible due to the use of synthetic data. We used the standard SSIM function in MATLAB to quantify the quality of the SR reconstructions in Fig. 4.
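For illustration, a single-window (whole-image) SSIM can be computed by hand as below. This simplified global_ssim is a stand-in for the windowed MATLAB ssim function, which evaluates the same luminance, contrast and structure terms locally and averages them over the image.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image, combining the
    luminance (means), contrast (variances) and structure
    (covariance) comparisons with the standard stabilizers."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(1)
truth = rng.random((64, 64))
noisy = truth + 0.2 * rng.standard_normal((64, 64))
s_self = global_ssim(truth, truth)   # identical images score 1
s_noisy = global_ssim(truth, noisy)  # degradation lowers the score
```

In the paper's usage, the reconstruction plays the role of x and the experimentally derived SR image plays the role of the ground truth y.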
Immunofluorescence staining of microtubules
STORM sample preparation, imaging and data analysis follow previously reported procedures19. In brief, immunostaining was performed using BS-C-1 cells (American Type Culture Collection) cultured with Eagle’s Minimum Essential Medium supplemented with 10% fetal bovine serum, penicillin and streptomycin, and incubated at 37 °C with 5% CO2. Cells were plated in LabTek 8-well coverglass chambers at ~20,000 cells per well 18–24 hours prior to fixation. The immunostaining procedure for microtubules and mitochondria consisted of fixation for 10 min with 3% paraformaldehyde and 0.1% glutaraldehyde in PBS, washing with PBS, reduction for 7 min with 0.1% sodium borohydride in PBS to reduce background fluorescence, washing with PBS, blocking and permeabilization for 20 min in PBS containing 3% bovine serum albumin and 0.5% (v/v) Triton X-100 (blocking buffer (BB)), staining for 40 min with primary antibody (rat anti-tubulin (ab6160, Abcam) for tubulin or rabbit anti-TOM20 (sc-11415, Santa Cruz) for mitochondria) diluted in BB to a concentration of 2 μg/mL, washing with PBS containing 0.2% bovine serum albumin and 0.1% (v/v) Triton X-100 (washing buffer, WB), incubation for 30 min with secondary antibodies (~1–2 Alexa 647 dyes per antibody, donkey anti-rat for microtubules and donkey anti-rabbit for mitochondria, using an antibody labeling procedure previously described)19 at a concentration of ~2.5 μg/mL in BB, washing with WB and sequentially with PBS, postfixation for 10 min with 3% paraformaldehyde and 0.1% glutaraldehyde in PBS, and finally washing with PBS.
STORM imaging buffer
All imaging was performed in a solution that contained 100 mM Tris (pH 8.0), an oxygen scavenging system (0.5 mg/mL glucose oxidase (Sigma-Aldrich), 40 μg/mL catalase (Roche or Sigma-Aldrich) and 5% (w/v) glucose) and 143 mM beta-mercaptoethanol.
STORM imaging
A 647-nm laser at an intensity of 2 kW cm−2 was used for excitation of the dyes. Under this condition, the dye molecules were in the fluorescent state initially but rapidly switched to a dark state. All STORM movies were recorded at a frame rate of 60 Hz using home-written Python-based data acquisition software. The movies typically consisted of 30,000–100,000 frames. During each movie, a 405-nm laser light (ramped between 0.1 and 2 W cm−2) was used to activate fluorophores and to maintain a roughly constant density of activated molecules. STORM data was analyzed using lab-written software.
Acknowledgements
This project is supported in part by the National Science Foundation (CBET-1604565), the Defense Advanced Research Projects Agency (DARPA) (D16AP00108) and the National Institute of General Medical Sciences (NIGMS) (1R35GM12484601).
Author Contributions
S.J. and J.N.K. designed the studies. S.J. and B.H. developed the algorithms and performed the simulations and analysis. S.J. performed STORM imaging. All authors contributed to the writing of the manuscript.
Competing Interests
The authors declare no competing interests.
Footnotes
Electronic supplementary material
Supplementary information accompanies this paper at 10.1038/s41598-018-24033-7.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Shu Jia, Email: s.jia@stonybrook.edu.
J. Nathan Kutz, Email: kutz@uw.edu.
References
- 1. Hell SW. Far-Field Optical Nanoscopy. Science. 2007;316:1153–1158. doi: 10.1126/science.1137395.
- 2. Huang B, Babcock H, Zhuang X. Breaking the diffraction barrier: Super-resolution imaging of cells. Cell. 2010;143:1047–1058. doi: 10.1016/j.cell.2010.12.002.
- 3. Rust MJ, Bates M, Zhuang X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods. 2006;3:793–795. doi: 10.1038/nmeth929.
- 4. Hess ST, Girirajan TPK, Mason MD. Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys. J. 2006;91:4258–4272. doi: 10.1529/biophysj.106.091116.
- 5. Betzig E, et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science. 2006;313:1642–1645. doi: 10.1126/science.1127344.
- 6. Shroff H, Galbraith CG, Galbraith JA, Betzig E. Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics. Nat. Methods. 2008;5:417–423. doi: 10.1038/nmeth.1202.
- 7. Hell SW, Wichmann J. Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett. 1994;19:780–782. doi: 10.1364/OL.19.000780.
- 8. Gustafsson MGL. Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. Proc. Natl. Acad. Sci. USA. 2005;102:13081–13086. doi: 10.1073/pnas.0406877102.
- 9. Jones SA, Shim S-H, He J, Zhuang X. Fast, three-dimensional super-resolution imaging of live cells. Nat. Methods. 2011;8:499–508. doi: 10.1038/nmeth.1605.
- 10. Huang F, et al. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms. Nat. Methods. 2013;10:653–658. doi: 10.1038/nmeth.2488.
- 11. Holden SJ, Uphoff S, Kapanidis AN. DAOSTORM: an algorithm for high-density super-resolution microscopy. Nat. Methods. 2011;8:279–280. doi: 10.1038/nmeth0411-279.
- 12. Babcock H, Sigal YM, Zhuang X. A high-density 3D localization algorithm for stochastic optical reconstruction microscopy. Opt. Nanoscopy. 2012;1:6. doi: 10.1186/2192-2853-1-6.
- 13. Cox S, et al. Bayesian localization microscopy reveals nanoscale podosome dynamics. Nat. Methods. 2011;9:195–200. doi: 10.1038/nmeth.1812.
- 14. Zhu L, Zhang W, Elnatan D, Huang B. Faster STORM using compressed sensing. Nat. Methods. 2012;9:721–723. doi: 10.1038/nmeth.1978.
- 15. Babcock HP, Moffitt JR, Cao Y, Zhuang X. Fast compressed sensing analysis for super-resolution imaging using L1-homotopy. Opt. Express. 2013;21:28583. doi: 10.1364/OE.21.028583.
- 16. Chmyrov A, et al. Nanoscopy with more than 100,000 'doughnuts'. Nat. Methods. 2013;10:737–740. doi: 10.1038/nmeth.2556.
- 17. York AG, et al. Instant super-resolution imaging in live cells and embryos via analog image processing. Nat. Methods. 2013;10:1122–1126. doi: 10.1038/nmeth.2687.
- 18. Freeman, B. & Liu, C. In Advances in Markov Random Fields for Vision and Image Processing (eds Blake, A., Kohli, P. & Rother, C.) (MIT Press, 2011).
- 19. Dempsey GT, Vaughan JC, Chen KH, Bates M, Zhuang X. Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging. Nat. Methods. 2011;8:1027–1036. doi: 10.1038/nmeth.1768.
- 20. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004;13:600–612. doi: 10.1109/TIP.2003.819861.
- 21. Bentley JL. Multidimensional binary search trees used for associative searching. Commun. ACM. 1975;18:509–517. doi: 10.1145/361002.361007.
- 22. Rosenfeld, A. & Kak, A. C. Digital Picture Processing. 2nd edn (Academic Press, 1982).