
This is a preprint.

It has not yet been peer reviewed by a journal.


[Preprint]. 2023 Oct 17:arXiv:2304.01456v3. Originally published 2023 Apr 4. [Version 3]

Fluorescence Microscopy: a statistics-optics perspective

Mohamadreza Fazel 1,2, Kristin S Grussmayer 3, Boris Ferdman 4, Aleksandra Radenovic 5, Yoav Shechtman 4, Jörg Enderlein 6, Steve Pressé 1,2
PMCID: PMC10104198  PMID: 37064525

Abstract

Fundamental properties of light unavoidably impose features on images collected using fluorescence microscopes. Modeling these features is ever more important in quantitatively interpreting microscopy images collected at scales on par or smaller than light’s wavelength. Here we review the optics responsible for generating fluorescent images, fluorophore properties, microscopy modalities leveraging properties of both light and fluorophores, in addition to the necessarily probabilistic modeling tools imposed by the stochastic nature of light and measurement.

I. INTRODUCTION

A. A brief history of optics and statistics

The ancient Greeks were divided over whether vision arose from rays entering or leaving the eyes [1, 2]. For instance, atomists believed that perception arose from an atom flux traveling through space to the eyes. Aristotle (384–322 BCE) later proposed the notion of ether serving as a medium for transmission of intrinsic qualities of objects to the eye rather than fluxes of atoms. An alternative formulation, advocated by Pythagoras (570–495 BCE) and Euclid (325–270 BCE), proposed the notion of ocular fire whose rays impassively scanned their surroundings. Following this logic, Euclid established a geometric optics explaining the perception of size and angles from the geometry of these ocular rays. Along these same lines, the Chinese philosopher Mo Di (470–391 BCE) established a geometric optics similar to Euclid’s explaining the formation of shadows and images in mirrors [3].

An amalgam of these ideas–with fire originating from the eyes coalescing with another fire derived from objects enabling vision–was perhaps now demanded on philosophical grounds and promoted by Plato (427–347 BCE). In Ptolemy’s optics (100–170 CE), sunlight activated objects whose emitted rays now interacted with visual rays to give rise to perception. In Ptolemy’s theory, perception relied on the angular distribution, length, refraction and reflection of rays from the eye [1, 4].

Although these early Greek theories appear manifestly naive, emerging notions of geometric optics served as a clear starting point to Medieval Arabs who took a decidedly more phenomenological approach. For example, inspired by Euclid’s geometric optics, Al-Kindi (801–873 CE) demonstrated that visual rays travel in straight lines by simple experiments on shadows [1]. This early progress was followed by insights from Ibn al-Haytham–latinized as Alhazen (965–1040 CE)–who showed that eyesight is derived from light rays received by the eyes from objects [1, 5]. Further, he consistently devised experiments to test his optical theories including theories on refractive and reflective properties of light rays on boundaries, lenses and spherical mirrors among others [1, 5–7].

The distribution of Latin translations of Alhazen’s Book of Optics [8], amongst other ancient works, ultimately sparked a Renaissance that presaged the onset of modern optics in Europe. From the democratization of knowledge driven by the indefatigable Gutenberg presses followed refractive telescopes attributed to the Dutch spectacle-makers Zacharias Janssen (1585–1638 CE) and Hans Lippershey (1570–1619 CE) and reflecting telescopes attributed to Isaac Newton (1643–1727 CE) [9]. In contrast to telescopes, there is uncertainty regarding the original inventor of the microscope, though it is often credited to Zacharias Janssen [6, 10].

From the very start, the worlds of microscopy and biology were intertwined: the Dutch businessman and scientist Antonie van Leeuwenhoek (1632–1723 CE) exploited his microscope to single-handedly discover bacteria, sperm cells, and red blood cells amongst other actors dominating the microscopic realm [10]. Little, in this regard, has changed throughout history with sizes, features, and other optical properties of the Natural world motivating the design of modern microscopes. Subsequent compound microscopes [11], also credited to Janssen and foreshadowing our multi-lens microscopes, provided improved magnification and were widely used by Robert Hooke (1635–1703 CE) [11], author of the first book on microscopes, Micrographia.

Now taken for granted, successive properties of light–including diffraction, refraction, reflection as well as light’s particulate nature–were each individually leveraged in microscope development with diffraction through an aperture first reported by the Italian Jesuit Francesco Maria Grimaldi (1618–1663 CE), followed by a number of discoveries culminating in Maxwell’s (1831–1879 CE) electromagnetic theory, and theories on light’s quantization [12, 13] due to Planck (1858–1947 CE) and Einstein (1879–1955 CE).

Setting aside remarkable later microscopy advances–including phase imaging [14, 15]–we interrupt history to pause at fluorescence microscopy, which has dominated the scene for the last half century as smaller scales demanded increased contrast between background and object of interest [16]. At such scales, the stochastic properties of light, intrinsic to quantum mechanics, dictate our ability to interpret fluorescence microscopy data and bring us back to the primary focus of this review: fluorescence microscopy from a statistics-optics perspective.

Modeling light’s stochastic properties isn’t merely an exercise in mitigating the recurring nuisance of shot noise. It is, instead, fundamental to how we draw insights at the scales fluorescence microscopy has unraveled. In fact, a fluorescent photon’s emission time, its absorption time, emission wavelength, and detection location, i.e., where a photon is detected on an image plane, are all random variables. These random variables themselves are drawn from probability distributions. In the classical limit, the probability density for locating photons is proportional to the time-averaged energy flux given by Poynting’s theorem [17], introduced by John Henry Poynting (1852–1914 CE). For point-like sources of light, e.g., fluorophores, the normalized spatial distribution, coinciding with a slice orthogonal to the propagation direction, is termed the Point Spread Function (PSF). This inherent randomness in a photon’s location, imperfectly detected and reporting only probabilistically on a fluorescent object of interest, introduces multiple levels of stochasticity between the object whose properties we care to characterize and the measurement output. This, unavoidably, introduces statistical concepts–including notions of latent variables and hierarchical probabilistic models–into the quantitative modeling of imaging systems.

The manipulation of hierarchical dependencies between random variables then requires what is known today as Bayes’ theorem. The theorem, attributed to its namesake Thomas Bayes (1702–1761 CE), was popularized by Pierre-Simon de Laplace (1749–1827 CE) who introduced and codified, through seminal texts on probability [18, 19], probabilistic modeling to the Sciences [20].

Before we return to microscopy, we now take a brief detour to discuss statistical modeling relevant to our future applications.

B. Introduction to statistical modeling

The electromagnetic force-carrying particle, the photon, is intrinsically both wave-like and particulate. While the continuous spatial distributions over a photon’s location are dictated by the photon’s wave properties, photon detections themselves are necessarily pointillistic and probabilistic. As such, even before considering other sources of stochasticity like detection, a quantitative picture of microscopy demands, at its most fundamental level, an exposition of the theory of statistical sampling.

Here, we first lay out the main concepts for probabilistic modeling. We then discuss the concept of likelihoods and Bayesian inference key to the statistical frameworks introduced throughout this review.

1. Basic concepts and notation

Stochasticity in a system arises from the inherent random nature of the physical system, from measurement noise, or both. Both are relevant in quantitative microscopy and thus we minimally require two layers of stochasticity: at the level of photon shot noise and at the level of detection; see Appendix A. Soon, we will also see that additional levels of stochasticity may arise from the behavior of fluorescent labels.

For this reason, we begin by defining the requisite notions of a random variable. A random variable, R, represents a collection of possible options, either numeric or non-numeric, following the statistics of a probability distribution P.

As such, we often write R~P, where the above reads “the random variable R is sampled from the probability distribution P”. We then denote r a particular realization of R and p(r) the probability density associated with the probability distribution P.

Generally, the probability distribution itself depends on parameters, ϑ. To make such dependency explicit, we may write p(r; ϑ) and P(ϑ) [21]. For example, the location at which a photon is detected is itself a random variable, R, sampled from a distribution centered at the emitting molecule’s location, r0. As such, we write

R ~ U(r0), (1)
p(r; r0) = U(r; r0), (2)

where ϑ = r0, and p(r; ϑ) is the probability density, i.e., the PSF, from which r is drawn.
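As a concrete sketch of R ~ P, the snippet below draws photon detection locations from a PSF approximated by an isotropic 2D Gaussian; the Gaussian approximation, the NumPy-based implementation, and all numerical values are illustrative assumptions rather than prescriptions from the text:

```python
import numpy as np

# Sketch: sampling photon detection locations r ~ U(r0), approximating the
# PSF U(r; r0) by an isotropic 2D Gaussian of width sigma (a common
# approximation to the true PSF; values are illustrative).
rng = np.random.default_rng(0)

r0 = np.array([1.0, 2.0])   # emitter location (micrometers, hypothetical)
sigma = 0.1                 # PSF width (micrometers, hypothetical)

# N realizations of the random variable R
N = 100_000
r = rng.normal(loc=r0, scale=sigma, size=(N, 2))

# The sample mean converges to the emitter location r0
print(r.mean(axis=0))
```

The empirical mean of the draws recovers r0, illustrating how repeated photon detections report, probabilistically, on the emitter location.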

It is often of interest to compute the probability of obtaining a value from a subset η of the possible values (r ∈ η), given by

𝒫(η) = ∫η dr p(r; ϑ). (3)

By definition, if η is the entire set of options then 𝒫(η) = 1. For instance, the probability of a photon reaching a pixel is given by the integral of the PSF over the pixel area 𝒜

𝒫pix = ∫𝒜 U(r; r0) dr. (4)
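For a Gaussian PSF and a rectangular pixel, the integral in Eq. 4 can be evaluated in closed form since it factorizes into one-dimensional Gaussian CDFs. A minimal sketch, with illustrative pixel bounds and PSF width:

```python
from math import erf, sqrt

# Sketch: P_pix = integral of a Gaussian PSF U(r; r0) over a square pixel.
# For an isotropic Gaussian, the 2D integral factorizes into 1D Gaussian CDFs.
def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def pixel_prob(x_lo, x_hi, y_lo, y_hi, r0, sigma):
    px = gauss_cdf(x_hi, r0[0], sigma) - gauss_cdf(x_lo, r0[0], sigma)
    py = gauss_cdf(y_hi, r0[1], sigma) - gauss_cdf(y_lo, r0[1], sigma)
    return px * py

# Pixel centered on the emitter, extending one sigma on each side in x and y
p = pixel_prob(-0.1, 0.1, -0.1, 0.1, r0=(0.0, 0.0), sigma=0.1)
print(p)  # ~0.683**2 ~ 0.466
```

A pixel spanning ±1σ around the emitter captures roughly 47% of detected photons; the remainder spills into neighboring pixels, which is why PSF models enter localization analyses.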

In probabilistic modeling, we often work with many random variables, R1, R2, …, RN, at once. For this reason, we define the joint density

p(r1:N; ϑ) = p(r1, r2, …, rN; ϑ). (5)

The density of any individual rn is then obtained by integrating the joint density with respect to all values of r1:n−1 and rn+1:N

p(rn; ϑ) = ∫ dr1:n−1 drn+1:N p(r1:N; ϑ). (6)

This integration, termed a marginalization, results in a marginal density, p(rn; ϑ). Marginalization is often useful in computing, say, the probability over the diffusion coefficient of an emitter (a fluorescently labeled molecule or dye) irrespective of (and thus integrating over) its exact location in space. This is later explored in, e.g., Fig. 44.

FIG. 44:

Posteriors over diffusion coefficients strongly depend on the pre-specified M when operating within a parametric Bayesian paradigm. The trace analyzed contains ≈1800 photons generated from 4 molecules diffusing at D = 1 μm²/s for 30 ms with a background and maximum molecule photon emission rate of 10³ and 4×10⁴ photons/s, respectively. To deduce D within the parametric paradigm, we assumed a fixed number of molecules: (a) M = 1; (b) M = 2; (c) M = 3; (d) M = 4; and (e) M = 5. The correct estimate in panel d–and the mismatch in all others–highlights why we must use the available photons to simultaneously learn the number of molecules and D. The figure is adapted from Ref. [262].
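Marginalization as in Eq. 6 can be sketched numerically. The example below assumes an illustrative correlated bivariate Gaussian joint density on a grid and recovers the known standard Gaussian marginal by summing out the second variable:

```python
import numpy as np

# Sketch: marginalization on a grid. Given a joint density p(r1, r2),
# recover the marginal p(r1) = integral over r2 by a Riemann sum.
r1 = np.linspace(-5, 5, 401)
r2 = np.linspace(-5, 5, 401)
dr = r1[1] - r1[0]
R1, R2 = np.meshgrid(r1, r2, indexing="ij")

# Correlated bivariate Gaussian joint density (illustrative choice)
rho = 0.5
joint = np.exp(-(R1**2 - 2 * rho * R1 * R2 + R2**2) / (2 * (1 - rho**2)))
joint /= joint.sum() * dr * dr          # normalize on the grid

marginal = joint.sum(axis=1) * dr       # integrate out r2

# The marginal of this unit-variance bivariate Gaussian is a standard Gaussian
expected = np.exp(-r1**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(marginal - expected)))
```

The maximum deviation between the numerical marginal and the analytic one is set only by the grid resolution and truncation.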

If random variables R1:N are independent and identically distributed, iid, then Eq. 5 assumes the simpler form

R1, R2, …, RN ~iid P(ϑ) (7)

with the understanding that the joint density decomposes into the product of independent densities p(r1; ϑ), p(r2; ϑ), …, p(rN; ϑ). For example, iid random variables include photon arrival times following pulsed excitation for a static distribution of molecules; e.g., as later explored in Box IV C.

Statistical Framework IV C: single spot FLIM.

Data: photon arrival times

Δt1:K = (Δt1, …, ΔtK).

Parameters: inverse lifetimes, weights

ϑ = (λ1:M, π1:M).

Likelihood:

P(Δt1:K | ϑ) = ∏k P(Δtk | ϑ).

Priors:

λm ~ Gamma(αλ, βλ),
π1:M ~ Dirichlet(απ/M, …, απ/M).

Posterior:

P(ϑ | Δt1:K) ∝ P(Δt1:K | ϑ) P(ϑ).

In general, random variables are not independent; consider, e.g., the position of a molecule evolving in time, where the system’s state depends, exactly or by approximation, only on its state at the previous time point. This dependency, explored in the context of fluorophore dynamics in Sec. II C, is termed the Markov assumption. In this case, we say that the values that can be ascribed to R2 depend on the realization r1 of a preceding random variable R1. This dependency is often expressed as

R2 | r1, ϑ ~ P(r1, ϑ), (8)

which reads “the random variable R2, given the parameters ϑ and the realization r1 of R1 (we say R2 is “conditioned on” r1), is sampled from the probability distribution P(r1, ϑ)”. The density we associate to this probability distribution then reads p(r2 | r1, ϑ) and is referred to as a conditional density. In general, a random variable RN can depend on many other random variables R1:N−1 with associated conditional density p(rN | r1:N−1, ϑ). Such conditionals will become useful as we build hierarchical models relating random variables across our boxed environments.
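Such conditional sampling can be sketched for a diffusing molecule, where each position is drawn given only the preceding one (a Markov chain) with transition density Normal(r_{k−1}, 2DΔt); the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

D, dt, K = 1.0, 0.01, 100    # diffusion coefficient (um^2/s), step (s), steps
M = 50_000                   # number of independent trajectories

# Each increment is Gaussian given the previous position (Markov property):
# r_k | r_{k-1} ~ Normal(r_{k-1}, 2*D*dt)
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(M, K))
r = np.cumsum(steps, axis=1)   # trajectories, all started from r_0 = 0

# Mean-squared displacement after K steps grows as 2*D*K*dt
print(r[:, -1].var())
```

Averaging over many trajectories recovers the diffusive scaling of the mean-squared displacement, even though each individual path is random.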

Bayes’ theorem, of central importance in expressing hierarchical random variable dependencies, then follows from the observation that joint densities, factorized as p(r1:2) = p(r2 | r1) p(r1), satisfy p(r1:2) = p(r2:1) and thus

p(r1 | r2) p(r2) = p(r2 | r1) p(r1). (9)

As is customary in physics, we will now denote both random variables and their realizations with lower case letters. The distinction between both notions will be implied by the context.

2. Likelihood

We can now introduce the object at the heart of quantitative analysis of microscopy data: the likelihood. The likelihood is a probability distribution over those random variables coinciding with K experimental observations, w1:K, conditioned on ϑ. The likelihood’s density is thus written as p(w1:K | ϑ) where w1:K = (w1, w2, …, wK). It is also convenient to denote this set with an overbar, w̄.

The term likelihood follows from the notion that p(w1:K | ϑ) quantifies how likely the sequence of observations w1:K is under the assumptions of the model (i.e., calibrated values for parameters ϑ of a particular model). Indeed, all box environments will contain likelihoods for each statistical framework presented.

Often, as the parameters are themselves unknown, we ask what values for these parameters maximize the likelihood of the observed sequence w1:K. These parameter values are called estimators and are denoted by ϑ̂. For example, we can ask what values of the excited state lifetime (assuming one fluorophore species) make the observed photon arrival times most probable; e.g., Box IV C.

For practical reasons, it is common to work with, and maximize, the likelihood’s logarithm, ℓ(w1:K | ϑ) = log p(w1:K | ϑ), termed the log-likelihood, rather than the likelihood itself; e.g., see Sec. V C. This is because the logarithm is both monotonic in the original function and avoids the numerical underflow typical of small probability densities arising as K grows.
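For the exponential arrival-time model (cf. Box IV C), the log-likelihood and its maximizer take a simple closed form; the sketch below, with an illustrative true inverse lifetime, checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

lam_true = 2.5                          # inverse lifetime (illustrative)
dt = rng.exponential(1 / lam_true, size=100_000)

def log_like(lam, dt):
    # l(dt_{1:K} | lam) = K*log(lam) - lam*sum(dt) for iid Exponential(lam)
    return dt.size * np.log(lam) - lam * dt.sum()

lam_hat = 1.0 / dt.mean()               # analytic maximizer: the MLE
print(lam_hat)                          # close to lam_true
```

Setting the derivative of the log-likelihood to zero gives λ̂ = K/Σ Δt, i.e., the reciprocal of the mean arrival time.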

Within a Maximum Likelihood Estimation (MLE) framework, ϑ are treated as fixed (deterministic) parameters and the data, w1:K, are understood as realized random variables. While the MLE yields a single value (estimator) for the parameters, the uncertainty around the parameter estimate is captured by computing the likelihood’s breadth around its maximum. The breadth is often estimated as

σϑl² = [𝒬(ϑ)⁻¹]ll, (10)

where l counts the elements of the model parameter set, ϑ. Here, 𝒬(ϑ) is the Fisher information matrix defined as [22, 23]

𝒬ll′(ϑ) = −E[∂²ℓ(w1:K | ϑ)/∂ϑl ∂ϑl′]|ϑ=ϑ̂, (11)

where E denotes the expected value of the expression within the brackets. As Eq. 10 sets a lower bound on the variance, an uncertainty bound, around the MLE, it is sometimes termed the Cramér-Rao Lower Bound (CRLB).
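For the exponential arrival-time model, the Fisher information for K draws is K/λ², so Eq. 10 gives a CRLB of λ²/K. The sketch below (illustrative values) checks that the variance of the MLE across repeated synthetic experiments approaches this bound:

```python
import numpy as np

rng = np.random.default_rng(3)

lam, K, trials = 2.0, 1_000, 5_000    # illustrative inverse lifetime, photons, repeats

# MLE of the inverse lifetime across many repeated synthetic experiments
dt = rng.exponential(1 / lam, size=(trials, K))
lam_hat = 1.0 / dt.mean(axis=1)

crlb = lam**2 / K                     # inverse Fisher information, K / lam^2 inverted
print(lam_hat.var(), crlb)            # empirical variance ~ CRLB
```

The agreement illustrates that, for large K, the MLE is efficient: its variance saturates the Cramér-Rao bound.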

As may be evident, MLE-based approaches present challenges for likelihoods with multiple degenerate maxima or, more importantly, when the model is unknown. What is more, even assuming a model form, the MLE only provides a point estimate, not a full distribution over the putative parameter values.

It is for all these reasons that we often turn to a more general Bayesian paradigm. In this setting, we use the likelihood to construct the distribution over the parameters of interest given the observed data, pϑw1:K. The latter object is termed the posterior and is central to Bayesian inference.

3. Posterior

In working with likelihoods, the data is understood as random variables and parameters, ϑ, as fixed but to be determined. In contrast, in a Bayesian setting both data and parameters are treated as random variables. In particular, the data are random variables already realized and whose values are used to construct the probability, pϑw1:K, over the unknown random variables, ϑ. The Bayesian paradigm allows us to properly propagate uncertainty over ϑ from all sources including detector noise, camera intensity pixelation, motion aliasing, photon shot noise, and many more.

The posterior is constructed from the likelihood by invoking Bayes’ theorem, Eq. 9,

p(ϑ | w1:K) = p(w1:K | ϑ) p(ϑ) / p(w1:K), (12)

where, by normalization,

p(w1:K) = ∫ dϑ p(w1:K | ϑ) p(ϑ). (13)

Here p(ϑ), termed the prior, provides a means to regularize the parameters, for instance by determining a range over which non-zero values of the density arise, e.g., positive or integer values, prior to considering the data.

Thus, from Bayes’ theorem, we obtain a clear recipe by which the prior distribution is updated based on data, w1:K, encoded in the likelihood, to arrive at the posterior p(ϑ | w1:K). It is thus clear that to avoid the prior biasing the posterior, K must be sufficiently large [24–27]. To mitigate the size of K needed, roughly “flat” or featureless prior distributions between ϑ’s upper and lower bounds are preferred.
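The update from prior to posterior can be sketched on a grid: below, an exponential likelihood is combined with a broad Gamma prior (hyperparameters illustrative), with the evidence of Eq. 13 approximated by a Riemann sum:

```python
import numpy as np

rng = np.random.default_rng(4)

lam_true = 2.0
dt = rng.exponential(1 / lam_true, size=500)   # synthetic arrival times

lam = np.linspace(1e-3, 10.0, 5_000)           # parameter grid
dlam = lam[1] - lam[0]

log_like = dt.size * np.log(lam) - lam * dt.sum()
log_prior = np.log(lam) - lam / 10.0           # Gamma(shape=2, scale=10), up to a constant
log_post = log_like + log_prior
log_post -= log_post.max()                     # guard against numerical underflow

post = np.exp(log_post)
post /= post.sum() * dlam                      # normalize: grid version of Eq. 13

post_mean = np.sum(lam * post) * dlam
print(post_mean)                               # concentrates near lam_true
```

Note the log-domain arithmetic: subtracting the maximum before exponentiating is the standard safeguard against underflow mentioned above.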

As we will see in all applications, likelihoods can generally be constructed from knowledge of the microscopy technique and the physics of the problem while priors are normally motivated by computational convenience. The broad question then arises: Can we determine whether the posterior is peaked at some value of ϑ? More concretely, what does our posterior look like?

Unfortunately, posteriors rarely attain a simple, analytic form, on account of the measurement and physics informing the likelihood. As such, values of ϑ are typically numerically sampled from posteriors using Monte Carlo methods. For example, as we later discuss in the context of confocal microscopy, e.g., Sec. IV C, we will see that ϑ includes quantities such as diffusion coefficients, emission rates, and emitter locations. As posteriors are thus often multi-variate, a common Monte Carlo strategy, loosely speaking, involves sampling one random variable at a time in a scheme termed Gibbs sampling [28].

Whether sampling a posterior exactly or numerically, e.g., via Gibbs sampling, it is often computationally convenient to judiciously select the functional form of priors. Indeed, some prior forms play a special role in Bayesian modeling by having the unique mathematical property that, when multiplied by the likelihood, result in a posterior of the same form as the original prior (albeit with updated, “re-normalized”, parameters). As such, we often speak of conjugate prior-likelihood pairs or, for succinctness, conjugate priors when such priors can be identified. While we will not dwell on specialized notions of Bayesian inference, we make the reader aware that computational efficiency is what makes it possible to include measurement noise details at marginal added computational cost whilst improving the spatiotemporal resolution of any fluorescence analysis method. Indeed, whenever possible, specialized Monte Carlo schemes (from Gibbs sampling [28], to Metropolis-Hastings [29, 30], to slice sampling [31], and beyond [21, 32, 33]) used across all applications discussed herein benefit from any computational advantage thrown their way.
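As a concrete instance of conjugacy: for iid exponential data, a Gamma prior yields a Gamma posterior whose parameters update in closed form, with no sampling required. A minimal sketch with illustrative hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(5)

alpha, beta = 2.0, 1.0                     # Gamma prior (beta a rate); illustrative
dt = rng.exponential(1 / 3.0, size=2_000)  # synthetic data with true rate 3.0

# Conjugate update: Gamma(alpha, beta) prior x Exponential likelihood
# gives a Gamma(alpha + K, beta + sum(dt)) posterior in closed form.
alpha_post = alpha + dt.size
beta_post = beta + dt.sum()

post_mean = alpha_post / beta_post         # mean of the Gamma posterior
print(post_mean)                           # concentrates near the true rate 3.0
```

The update is a simple "re-normalization" of the prior's parameters, which is precisely the computational convenience conjugacy buys within larger Monte Carlo schemes.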

4. Bayesian non-parametrics

From Eq. 12, we see that constructing a posterior demands a mathematical, i.e. “parametric”, form of the likelihood. However, for most practical cases, we often do not know what competing models describe a given data set. We also know, and can demonstrate by way of examples, that the more complicated we make a model, the larger its likelihood, i.e., we over-fit the data.

Compromising between data under- and over-fitting is at the heart of the fundamental model selection problem. From the onset, progress in model selection has been critical, for instance, in clustering problems where the number of clusters (i.e., the model) is unknown [34–37]. Indeed, the model selection problem manifests itself across microscopy applications. For example: determining the number of molecules within a diffraction-limited spot (i.e., the model) explored in Box II C; or determining the number of fluorophore species in lifetime imaging explored in Box IV C.

Statistical Framework II C: Counting.

Data: sum of ROIs’ pixel values in ADUs

w1:K = (w1:K^1, …, w1:K^R).

Parameters: loads, fluorophore intensity, background, transition probabilities, fluorophores’ trajectories in the state space

ϑ = (b1:M, IA, μb, Π, S̄).

Likelihood:

P(w1:K | ϑ) = ∏k,r P(wk^r | Λk^r, Ξ).

Priors:

qm ~ Beta(Aq, Bq), m = 1:M,
bm ~ Bernoulli(qm),
IA ~ Gamma(αA, βA),
μb ~ Gamma(αb, βb),
Π ~ Dirichlet(αΠ),
sk^mr | sk−1^mr, Π ~ Categorical(πsk−1^mr).

Posterior:

P(ϑ | w1:K) ∝ P(w1:K | ϑ) P(ϑ).

While heuristically comparing a fixed set of models to resolve model selection–for example, by relying on information criteria [38] and other tools introduced as post-processing steps–is computationally advantageous, such an approach presents theoretical problems. For one, it is often limited to cases where we can exhaustively enumerate models: how many emitters in each frame across a stack of frames can we consider in any wide-field tracking application? Even if enumerable, how do we assign probabilities to these competing models given the data?

Answers to these questions, outside the realm of the Natural Sciences, have led to the formal development of Bayesian Non-Parametrics (BNPs) [21, 39] alongside Monte Carlo tools to sample from the resulting non-parametric posteriors, including Reversible Jump Markov Chain Monte Carlo (RJMCMC) [40]. In short, BNPs treat model and parameter estimation on the same footing [37, 41–43] and construct non-parametric posteriors over both models and their associated parameters.

In particular, within a non-parametric treatment, we consider a priori an infinite number of competing models. We place priors on these models alongside their associated parameters just as we place priors on parameters alone within the regular (parametric) Bayesian paradigm.

One catch is that BNPs are limited to a particular class of models termed nested models. Fortuitously, many models considered across microscopy applications belong to this class. Briefly, nested models include all models that can be generated from a more general model by setting parameters to different values (including zero), with the most general model itself being infinite dimensional. For example, a two-state model used in analyzing a Förster Resonance Energy Transfer (FRET) time trace, later explored in greater depth in Sec. II A, follows from a three-state model where transitions to the third state are all set to zero. Other examples of nested models we will explore include: 1) the number of molecules in a diffraction-limited spot (see Boxes II C and V B 3); 2) the number of fluorophore species in lifetime imaging (see Box IV C); and, perhaps less intuitively, 3) all competing two-dimensional lifetime maps obtained from scanning confocal lifetime imaging; see Box IV C.

Statistical Framework V B 3: SMLM.

Data: pixel values in ADUs

w1:N = (w1, …, wN).

Parameters: loads, fluorophore locations, intensities, background

ϑ = (b1:M, r1:M, I1:M, μb).

Likelihood:

P(w1:N | ϑ) = ∏n Gaussian(wn; gΛn(ϑ) + o, σw²).

Priors:

qm ~ Beta(Aq, Bq), m = 1:M,
bm ~ Bernoulli(qm),
rm ~ Uniform over FOV,
Im ~ Empirical,
μb ~ Gamma(αb, βb).

Posterior:

P(ϑ | w1:N) ∝ P(w1:N | ϑ) P(ϑ).
Statistical Framework IV C: Multi-pixel FLIM.

Data: photon arrival times and pulses being empty or not

Δt̄ = (Δt1:Kp^1, …, Δt1:Kp^N),
𝒲̄ = (𝒲1:Kp^1, …, 𝒲1:Kp^N).

Parameters: loads, inverse lifetimes, lifetime maps, GP prior averages (hyper-parameters)

ϑ = (b1:M, λ1:M, Ω1:M, ν1:M).

Likelihood:

P(𝒲̄, Δt̄ | ϑ) = ∏k,n P(𝒲k^n; ϑ) P(Δtk^n; ϑ).

Priors:

qm ~ Beta(Aq, Bq), m = 1:M,
bm ~ Bernoulli(qm),
λm ~ Gamma(αλ, βλ),
Ωm ~ GP(νm, K),
νm ~ Normal(0, σχ²).

Posterior:

P(ϑ | 𝒲̄, Δt̄) ∝ P(𝒲̄, Δt̄ | ϑ) P(ϑ).

These examples were intentionally numbered. They allow us to introduce three commonly used non-parametric priors for constructing non-parametric posteriors. In the order in which these examples are listed, we have: the Beta-Bernoulli process prior [21, 44–48]; the Dirichlet process prior [21, 35–37, 43, 49]; and the Gaussian Process (GP) prior [21, 50, 51].

The Beta-Bernoulli process prior is used when we try to estimate the number of discrete elements contributing to the data. These could be, for example, the number of clusters or, equivalently, the number of emitters contributing photons generating an image frame or producing a stream of photons within a confocal spot, e.g., Box IV C. Within a BNP paradigm, we assign a Bernoulli variable (binary random variable), bm, called a load, to each discrete element (molecule). Considering as many as M loads (and letting M eventually tend to infinity), the unknowns appearing in ϑ are augmented to include b1:M. Thus ϑ for the single spot confocal case would now include the diffusion coefficient, emission rate, molecular locations, as well as the loads, b1:M.

Statistical Framework IV C: Confocal under continuous illumination.

Data: photon inter-arrival times

Δt1:K = (Δt1, …, ΔtK).

Parameters: loads, diffusion coefficient, molecule trajectories, molecule maximum emission rate, background emission rate

ϑ = (b1:M, D, r̄, μ0, μb).

Likelihood:

P(Δt1:K | ϑ) = ∏k Exponential(Δtk; μk(r̄)).

Priors:

qm ~ Beta(Aq, Bq), m = 1:M,
bm ~ Bernoulli(qm),
D ~ InvGamma(αD, βD),
rk^m ~ Normal(rk−1^m, 2DΔtk),
μ0 ~ Gamma(αμ, βμ),
μb ~ Gamma(αb, βb).

Posterior:

P(ϑ | Δt1:K) ∝ P(Δt1:K | ϑ) P(ϑ).

When multiplying the likelihood by the appropriate Beta-Bernoulli process prior, we may then construct a posterior whose parameters, now including the loads, we wish to sample. The resulting posterior is, in turn, often sampled using Monte Carlo techniques to determine which loads are sampled mostly as 0’s (and thus coincide with molecules not warranted by the data) and which mostly as 1’s (and thus coincide with molecules warranted by the data). The number of molecules in each draw from the posterior is then determined by summing all loads.
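The load construction can be sketched as follows; the hyperparameter choices, and the finite pool M standing in for infinity, are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

M = 1_000                      # large candidate pool, standing in for infinity
A_q, B_q = 1.0, M / 5.0        # illustrative hyperparameters

# Per-candidate activation probabilities and binary loads
q = rng.beta(A_q, B_q, size=M)
b = rng.random(M) < q          # load b_m = 1: molecule warranted by the model

# Molecule count in this draw = sum of loads; expected ~ M*A_q/(A_q + B_q) ~ 5
print(b.sum())
```

Scaling B_q with M keeps the expected number of active loads finite as the pool grows, mirroring how the Beta-Bernoulli process remains well defined in the infinite limit.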

We now turn, much more briefly, to the subsequent two non-parametric priors. For instance, the Dirichlet process prior is used when we wish to assign probabilities to an infinite number of components. For example, when we wish to determine to what degree each unique chemical species contributes photons in a lifetime experiment; see Box IV C. Ideally, based on Monte Carlo sampling of the non-parametric posterior (obtained from the product of the likelihood and the Dirichlet process prior), we would find which of the infinite species introduced in modeling contribute non-negligibly to the data.
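A draw from a (truncated) Dirichlet process prior can be sketched via the standard stick-breaking construction; the concentration parameter and truncation level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

alpha, M = 1.0, 500               # concentration parameter; truncation level

# Stick-breaking: break off a Beta(1, alpha) fraction of the remaining stick
v = rng.beta(1.0, alpha, size=M)
stick = np.cumprod(1.0 - v)       # stick remaining after each break
pi = v * np.concatenate(([1.0], stick[:-1]))   # weight of each candidate species

print(pi.sum())           # ~1: truncation leaves a negligible remainder
print((pi > 0.01).sum())  # only a handful of weights are non-negligible
```

Although infinitely many species are allowed in principle, only a few weights come out non-negligible in any draw, mirroring how sampling reveals which species actually contribute to the data.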

Finally, GP priors are used in estimating smooth functions. Smooth functions of interest in microscopy include, for example, fluorophore density maps explored in Sec. IV C or even smooth backgrounds for large numbers of emitters. Each of these maps consists of an infinite set of correlated random variables, i.e., values of the map at every point in space. Draws from the (non-parametric) posterior then assign values to each point on the map. In practice, the number of map points whose value we wish to deduce is kept finite and limited to a fixed number of points, typically over a uniform mesh grid, termed inducing points [51–53]. The value of the map on a finer spatial grid can then be interpolated from the spatial correlation function already informing the GP prior.
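A draw from a GP prior over inducing points, followed by interpolation to a finer grid through the kernel, can be sketched as follows (the squared-exponential kernel and all parameter values are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)

def kernel(x, y, ell=0.5, sig=1.0):
    # Squared-exponential (RBF) covariance between point sets x and y
    return sig**2 * np.exp(-(x[:, None] - y[None, :])**2 / (2 * ell**2))

x_ind = np.linspace(0.0, 5.0, 20)                        # inducing points
K_ii = kernel(x_ind, x_ind) + 1e-8 * np.eye(x_ind.size)  # jitter for stability

# One draw of the smooth map at the inducing points
f_ind = np.linalg.cholesky(K_ii) @ rng.normal(size=x_ind.size)

# Interpolate to a finer grid via the GP conditional mean
x_fine = np.linspace(0.0, 5.0, 200)
f_fine = kernel(x_fine, x_ind) @ np.linalg.solve(K_ii, f_ind)

print(f_fine.shape)
```

The interpolation step uses exactly the spatial correlation function (kernel) that defines the prior, as described above.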

Having now introduced key notions from statistics, we turn to microscopy.

C. Basic characteristics of fluorescence microscopy

All optical microscopes use light, one way or another, to interact with the sample under observation. Indeed, bright-field, dark-field, or even phase contrast imaging differ from each other in details pertaining to which part of the excitation or detection arms are altered or blocked to create images at the detector.

However, these microscopes are limited in their ability to discern contrast at molecular and even supramolecular length scales at which life unravels. At such scales, we exploit fluorescence microscopy, involving fluorophore-labeled samples, later detailed in Sec. II. When excited, fluorophores emit light that can be selectively filtered from the excitation beam to form an image. In its simplest form, a fluorescence microscope is a two-lens system: an objective lens with small focal length f1 and a tube lens with long focal length f2; see Fig. 1.

FIG. 1:

Schematic of an infinity-corrected wide-field microscope consisting of an ideal objective lens with focal length f1 and an ideal tube lens with focal length f2. We show light propagation from a point source in the focal plane (sample space) to the image point in image space. The plane between the lenses, a distance f1 away from the objective lens and f2 from the tube lens, is called the conjugate plane (green vertical line). The conjugate plane is also sometimes termed the back focal plane, Fourier plane or pupil plane. Here the light from any point source on the focal plane crosses through the same lateral position. By considerations of geometric proportion, it can be seen that the ratio of lateral displacement of the image point to lateral displacement of the source point is equal to the ratio of the focal lengths, f2/f1. This ratio is the microscope’s magnification.

In modern infinity-corrected research microscopes, the objective converts the diverging spherical wavefront emitted by a point emitter in the focal plane in sample space into a planar wavefront. The planar wavefront is then reconverted by the tube lens into a spherical wavefront converging into a point on the image plane.

The two most important characteristics of a microscope are its magnification and its resolution, i.e., how well sample features are resolved. From Fig. 1, the system’s magnification is given by the ratio f2/f1 (from the proportionality of vertical to horizontal distances). However, the magnification of an optical microscope today is of secondary importance as images are recorded with array detectors, such as Complementary Metal-Oxide Semiconductor (CMOS) or Electron Multiplying Charge-Coupled Device (EMCCD) cameras with varying pixel size; see Appendix A. This is in contrast to visual inspection of a sample, where our rod and cone cell sizes are fixed. For such wide-field microscopes equipped with a camera, the detector’s physical pixel size divided by the microscope’s magnification (the effective pixel size) sets an upper bound on image quality. This effective pixel size should be at least two times smaller than the microscope’s optical resolution (Nyquist criterion).
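As a quick numerical check of this criterion (all values illustrative of a typical camera and objective):

```python
# Typical but illustrative numbers for a camera-based wide-field setup
pixel_um = 6.5           # physical camera pixel size (um)
magnification = 100.0    # microscope magnification
resolution_um = 0.25     # optical resolution (um)

# Effective pixel size in sample space; Nyquist requires it to be at most
# half the optical resolution
effective_pixel = pixel_um / magnification       # 0.065 um
nyquist_ok = effective_pixel <= resolution_um / 2.0

print(effective_pixel, nyquist_ok)
```

Here a 6.5 μm pixel behind 100× magnification samples the image at 65 nm, comfortably below half of a 250 nm optical resolution.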

This leads us to the second important microscope characteristic: its resolution. The microscope resolution is limited by a number of factors including the diffraction of light and light collection by objective lenses. These two effects lead to a fundamental resolution limit of approximately half of the wavelength. As such, if the emitted light’s wavelength were to be far smaller than typical dimensions of the molecular species of interest, then our review article would stop here and textbooks would be replenished with real life images reminiscent of David Goodsell’s artistic renderings of life inside the cell [54]. However, this is not the case.

We will discuss the resolution of different microscope modalities more thoroughly shortly, though we start with a heuristic albeit useful visualization of a microscope’s fundamental optical resolution limit; see Fig. 2. Here we show the (far-field) electric-field distribution of light from two coherent point sources, designated by red dots, before an objective lens. As both point sources are assumed to emit light coherently, the resulting intensity distribution shows characteristic lanes of constructive and destructive interference. When the distance between the two point emitters, y, is gradually reduced (from left to right panels in Fig. 2), the two symmetric lanes of destructive interference (directions of zero light intensity) closest to the optical axis migrate towards higher emission angles, until they reach the objective lens’ edge. At that point, the objective detects only light of a continuous spherical wavefront absent any zero-intensity minima within its light detection cone (with half angle Θ), similar to what the objective would see from a single emitter.

FIG. 2:

Visualization of the diffraction limit of resolution. Here, we show interference patterns of two coherently emitting point emitters, shown by red dots, for three different distances between emitters across panels. The closer the emitters are positioned with respect to each other, the larger the angular positions of the destructive interference lanes (directions of zero light intensity). At a critical distance, shown in the right panel, the first lane of destructive interference is positioned at the half angle Θ of light collection of the objective, and the objective lens receives a continuous wavefront absent intensity minima appearing as a single emitter wavefront.

Simple trigonometry dictates that the difference between the optical paths from 1) the first emitter to the edge of the lens and 2) the second emitter to the same edge of the lens is y sinΘ, where we assumed, in the far-field limit, that the separation between the lens and the emitters is much larger than y. The first destructive interference lane therefore occurs at angle Θ if the path difference is half the wavelength in the medium, i.e., y_min sinΘ = λ/(2n), where λ is the vacuum emission wavelength and n is the refractive index of the medium in which the emitters are embedded (so that the wavelength in this medium is λ/n). From this result follows Abbe’s famous resolution limit, first formulated by Ernst Abbe (1840–1905 CE) in 1873 [55], as

\[ y_{\min} = \frac{\lambda}{2 n \sin\Theta} = \frac{\lambda}{2\,\mathrm{NA}}, \tag{14} \]

where NA is the objective’s numerical aperture.

A similar simplified consideration can also be applied toward understanding the spatial resolution of a Confocal Laser Scanning Microscope (CLSM). In a CLSM, the sample is scanned with a tightly focused laser beam, and the excited fluorescence light is collected by the microscope optics, focused through a confocal pinhole to suppress out-of-focus light, and finally detected with a point detector (usually a silicon-based photodiode or photomultiplier tube); see Sec. IV B 1. The recorded fluorescence light intensity as a function of scan position is then used to reconstruct an image. The fundamental advantage of a CLSM as compared to a wide-field imaging microscope is its optical sectioning capability, i.e., its capability to record true three-dimensional sample images, later detailed when considering the Optical Transfer Functions (OTF) of both microscope types. Momentarily neglecting a CLSM’s confocal detection volume, its lateral resolution is determined by how tightly a laser beam can be focused into an excitation spot. In a mathematically more precise manner, one asks about the tightest spatial intensity modulation still present in a diffraction-limited focus. The answer is given by Fig. 3, which shows that the tightest modulation is achieved by the interference of the two light rays exiting the objective at the highest possible angle, which is exactly the half angle of light detection Θ of the objective. As can be seen, the spatial periodicity of this intensity modulation is again given by Abbe’s formula, Eq. 14, only with the emission wavelength now replaced by the excitation wavelength (usually shorter than the emission wavelength due to the spectral Stokes shift of fluorescence emission with respect to excitation; see Sec. II).

FIG. 3:

Lateral resolution limit of a CLSM. The resolution is determined by the highest lateral spatial frequency contained in a focused bright spot. This is generated by the interference of two rays traveling from the edges of the objective to the focal point with the highest possible incidence angle Θ with respect to the optical axis as shown. The associated wave vectors are of equal magnitude, 2πn/λ, where λ is the vacuum wavelength. The corresponding lateral components, k_{x,Θ}, of these wave vectors are of equal magnitude, k_{x,Θ} = 2πn sinΘ/λ, and opposite directions, resulting in a difference of 4πn sinΘ/λ. As such, the interference of the two beams leads to a periodic interference pattern in the lateral direction with periodicity λ/(2n sinΘ), equal to the lateral resolution limit of a CLSM.

In a similar vein, we can also obtain the axial resolution limit of a (confocal laser scanning) microscope, by asking about the tightest spatial intensity modulation achievable when focusing light through the objective. The answer is presented in Fig. 4, where the tightest modulation is now generated by the interference of an axial light ray with a light ray traveling at the highest possible incidence angle Θ. This directly yields the axial resolution limit of an optical microscope, complementary to Abbe’s lateral resolution limit, and is given by

\[ z_{\min} = \frac{\lambda}{n(1-\cos\Theta)} \approx \frac{2 n \lambda}{\mathrm{NA}^2}, \tag{15} \]

where the approximation on the right hand side is valid only for small numerical aperture values.
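A short numerical sketch of Eqs. 14–15 may help fix scales; the water-immersion index, wavelength, and collection angle below are illustrative assumptions:

```python
import math

# Numerical check of the lateral (Eq. 14) and axial (Eq. 15) resolution
# limits; all parameter values are illustrative assumptions.

def lateral_resolution(wavelength, n, half_angle):
    """Abbe limit, Eq. 14: lambda / (2 n sin(Theta))."""
    return wavelength / (2.0 * n * math.sin(half_angle))

def axial_resolution(wavelength, n, half_angle):
    """Eq. 15: lambda / (n (1 - cos(Theta)))."""
    return wavelength / (n * (1.0 - math.cos(half_angle)))

def axial_resolution_small_na(wavelength, n, numerical_aperture):
    """Small-NA approximation of Eq. 15: 2 n lambda / NA^2."""
    return 2.0 * n * wavelength / numerical_aperture**2

lam, n = 0.520, 1.33                 # 520 nm emission, water immersion
theta = math.radians(60.0)           # NA = n sin(theta) ~ 1.15

lateral = lateral_resolution(lam, n, theta)   # ~0.23 um
axial = axial_resolution(lam, n, theta)       # ~0.78 um

# The small-NA approximation agrees with the exact axial expression at
# small collection angles:
small = math.radians(5.0)
exact = axial_resolution(lam, n, small)
approx = axial_resolution_small_na(lam, n, n * math.sin(small))
```

Note that for the high-NA setting the axial limit is several times worse than the lateral one, consistent with Fig. 5 below.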

FIG. 4:

Axial resolution of a CLSM: Similar to the lateral resolution, the axial resolution is determined by the tightest spatial modulation of light that can be generated along the optical axis. This is achieved by interfering an axially propagating beam with one traveling at the highest possible incidence angle. The axial component of the wave vector of the former is equal to the full wave vector length, k_0 = 2πn/λ, and the axial component of the latter is k_{z,Θ} = 2πn cosΘ/λ. The resulting interference therefore leads to a spatial intensity modulation along the optical axis with periodicity λ/(n(1 − cosΘ)), setting a CLSM’s axial resolution limit.

We summarize physical scales associated with lateral and axial resolution of diffraction-limited optical microscopes in Fig. 5. Here we show lateral and axial resolutions as functions of the numerical aperture, NA, for optical wavelengths across the visual spectrum using for concreteness a water immersion objective (i.e., designed for imaging in water with refractive index 1.33).

FIG. 5:

Lateral and axial resolution of diffraction-limited optical microscopy using a water immersion objective (designed for imaging in water with refractive index 1.33) as a function of numerical aperture NA and wavelength.

While providing qualitative guidance on optical system design, the axial and lateral spatial resolution expressions of Eqs. 14–15 remain theoretical. In particular, such expressions provide a best-case bound on the resolution, which in practice is further degraded by factors including the stochastic nature of photon detection and undesired out-of-focus light, among others.

A final important note is warranted on light (information) collection efficiency and suppression of out-of-focus light from regions outside the focal plane, i.e., limiting light collection to a certain axial range, termed optical sectioning. For this purpose, specialized sample illumination and fluorescent light detection techniques have been developed including Total Internal Reflection Fluorescence (TIRF) microscopy [56], Super-critical Angle Fluorescence (SAF) microscopy [57, 58], Metal-Induced Energy Transfer (MIET) microscopy [59], confocal microscopy [60], Image Scanning Microscopy (ISM) [61, 62], two-photon microscopy [63], 4Pi microscopy [64], Structured Illumination Microscopy (SIM) [65], light-sheet microscopy [66, 67], and multi-plane microscopy [68, 69].

All methods mentioned accomplish optical sectioning and enhance photon collection efficiency, improving image resolution and contrast. These techniques pushed the optical resolution to its very limits as dictated by Abbe’s diffraction barrier. However, it was not until the end of the 20th century that this barrier was overcome to achieve spatial resolutions in far-field light microscopy far beyond the diffraction limit [70]. Research on this front is still ongoing, leveraging advances in four main components of fluorescence microscopes: fluorescent emitters; optical setups; detectors; and analysis. In what follows, we first discuss fluorescent light sources and then proceed to review optics of different microscope modalities while presenting statistical analysis frameworks throughout.

II. FLUOROPHORES

Point fluorescent emitters or light sources, often molecules termed fluorophores, are key to fluorescence imaging of labeled samples. Both conventional fluorescence imaging, as well as microscopy techniques achieving resolution beyond light’s diffraction limit, rely on tunable properties of fluorophores including emission rates, brightness, absorption and emission spectra, excited state lifetimes, and other photo-physical properties such as blinking and photo-bleaching [71]. Here, we will discuss quantum fluorophore properties, alongside their statistical modeling, and relegate classical models to Sec. III D, where we derive their emission fields.

A. Fluorophore properties

Most molecules do not naturally fluoresce in regimes detectable by modern detectors and cannot easily be excited without inducing photo-damage. Thus, one must often resort to specific fluorescence labeling [72] of biological samples, e.g., to identify and investigate structures against the vast cellular background of proteins, nucleic acids, lipids, and small molecules.

While the addition of fluorescent labels introduces challenges, their intrinsic properties as well as non-linear response to light in themselves open windows of opportunity, e.g., to study molecular interactions [73, 74], determine molecular copy numbers [75–77], and improve optical resolution [78, 79] as later detailed in Sec. V.

The most common labels include: fluorescent proteins [80–82]; organic dyes [83, 84], generally small organic molecules containing conjugated π-electron systems; and semiconductor quantum dots, inorganic nanocrystals with especially broad excitation and correspondingly narrow emission spectra [85].

Fluorescent labels include a large variety of fluorophores with excitation and emission wavelength maxima spanning the near-infrared, visible and UV spectrum [86, 87]. Less common, “exotic”, fluorescent labels providing an even larger color palette and increasingly tunable photo-physical properties include carbon nanorods, carbon dots, polymer dots, fluorocubes, and fluorescent defects in diamond or 2D materials [88–90].

Basic fluorophore photo-physics are captured by Jablonski diagrams such as Fig. 6 for an organic dye illustrating select transitions between different molecular energy and spin states. A more rigorous treatment of transition rules, molecular spectra, and interactions of light and matter, can be found in the books of Lakowicz [91] and Valeur et al [92].

FIG. 6:

Simplified Jablonski diagram. Shown are the electronic ground state S0, the singlet excited states Sn, the triplet excited states Tn, and the radical cation (F⁺) or anion (F⁻) states. Thick lines represent electronic energy levels, thin lines vibrational energy levels, while rotational energy states are left unmarked. Here we denote: Phosphorescence by P; Vibrational Relaxation by VR; Internal Conversion by IC; Inter System Crossing by ISC; and rates of oxidation and reduction by kox and kred, respectively. Arrows represent a subsample of all possible transitions between different states.

A molecule in the (typically singlet) ground state is excited to a singlet excited state by absorbing a photon with a probability depending on the excitation light intensity and the molecule’s absorption cross-section (linearly related to the molar extinction coefficient [91]). The molar extinction coefficient ϵλ is a measure of how strongly a solution containing one mole of a fluorophore absorbs (attenuates) light at wavelength λ expressed using the Lambert-Beer law [91]

\[ \epsilon_\lambda = \frac{A_\lambda}{c_M l} = \frac{\log_{10}\!\left(I_{0\lambda}/I_\lambda\right)}{c_M l}, \tag{16} \]

where A_λ is the measured absorbance, I_{0λ} is the initial light intensity at wavelength λ, and I_λ is the light intensity after traveling the path length l through the solution with molar concentration c_M. From Eq. 16, it is clear that the SI unit of the molar extinction coefficient is m²/mol, but the commonly used unit is L mol⁻¹ cm⁻¹.
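To make Eq. 16 concrete, here is a minimal sketch; the transmission, concentration, and path length are hypothetical values, not measurements from the text:

```python
import math

# Molar extinction coefficient from a transmission measurement, Eq. 16.
# The cuvette parameters below are illustrative assumptions.

def molar_extinction(I0, I, c_molar, path_cm):
    """epsilon = A / (c l) with A = log10(I0 / I); units L mol^-1 cm^-1."""
    absorbance = math.log10(I0 / I)
    return absorbance / (c_molar * path_cm)

# 1 uM dye solution in a 1 cm cuvette transmitting 63.1% of the light:
eps = molar_extinction(I0=1.0, I=0.631, c_molar=1e-6, path_cm=1.0)
# eps ~ 2e5 L mol^-1 cm^-1, the order of magnitude of a bright organic dye
```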

From ε_λ, we immediately arrive at another important fluorophore property, namely the molecular brightness B_λ. To achieve a high Signal to Noise Ratio (SNR), fluorescent labels with high molecular brightness, B_λ = Q_f ε_λ, are desired. Here Q_f is a unitless quantity called the fluorescence quantum yield describing how many fluorescence photons are emitted relative to the number absorbed. This is given by the ratio of the radiative transition rate to the total rate out of the excited state, i.e., the sum of the transition rates corresponding to all transition paths out of the excited state,

\[ Q_f = \frac{k_f}{k_f + k_{non}}, \tag{17} \]

where k_f and k_non are, respectively, the rate of fluorescence (radiative) decay and the rate of non-radiative decay.

Another important fluorophore property is the average time, τ, a fluorophore remains excited prior to emitting a photon

\[ \tau = \frac{1}{k_f + k_{non}}. \tag{18} \]

Here τ, termed the fluorescence lifetime, typically lasts on the order of nanoseconds for organic dyes. The fluorescence lifetime is a characteristic property of fluorophores in their unique environment, tuned by pH, ion or oxygen concentration, molecular binding, or proximity-dependent inter-molecular energy transfers primarily influencing the rate of non-radiative decay [91, 92]. As such, differences in fluorophore lifetimes can be employed to distinguish fluorophore species, thereby broadening the appeal of Fluorescence Lifetime Imaging Microscopy (FLIM) [93, 94] in functional and multiplexed imaging of disparate fluorophores with otherwise overlapping spectra [95–97]; see Sec. IV B 1.
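Eqs. 17–18 can be checked numerically; the decay rates below are illustrative assumptions for an organic dye, not measured values:

```python
# Quantum yield (Eq. 17) and fluorescence lifetime (Eq. 18) from decay
# rates. The rate values below are illustrative assumptions.

def quantum_yield(k_f, k_non):
    """Q_f = k_f / (k_f + k_non)."""
    return k_f / (k_f + k_non)

def lifetime_ns(k_f, k_non):
    """tau = 1 / (k_f + k_non), with rates in 1/ns."""
    return 1.0 / (k_f + k_non)

k_f, k_non = 0.25, 0.10          # radiative and non-radiative rates, 1/ns
Qf = quantum_yield(k_f, k_non)   # ~0.71: most de-excitations emit a photon
tau = lifetime_ns(k_f, k_non)    # ~2.9 ns, the nanosecond scale noted above
```

Note that any environmental change raising k_non lowers both Q_f and τ together, which is what makes lifetime a useful environmental reporter.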

As described above, the quantum yield is tied to the number of possible transitions out of the excited state either non-radiatively or radiatively. Upon fluorophore excitation, one such radiative transition occurs via rapid vibrational relaxation to the lowest energy level of the S1 excited state followed by radiative decay to a vibrational ground state level with spontaneous fluorescence emission; see Fig. 6. The fluorescence emission is shifted towards longer wavelengths (Stokes shift) as compared to excitation, due to fast internal conversion and vibrational relaxation to the lowest level of the S1 excited state (Kasha’s rule [98]). Another radiative transition out of the excited state, of later interest, is stimulated emission. Typically, stimulated emission does not play a role at room temperature so long as the excitation intensity is low. However, this non-linear process is exploited in STimulated Emission Depletion (STED) super-resolution imaging [70] later described in Sec. V A 1.

In addition to radiative transitions, several alternative non-radiative pathways are available for transition from the first singlet excited state, S1, to the ground state. For instance, the molecule can return to the ground state by dissipating the energy to the environment as heat. Alternatively, the non-radiative transition to the triplet state, T1, via inter-system crossing is often employed in Single Molecule Localization Microscopy (SMLM); see Sec. V B 2. Return from T1 to the ground singlet state (phosphorescence) is typically delayed on account of a forbidden spin-flip transition; see Fig. 6. As such, transitions between the emissive on-states and the triplet or further reduced/oxidized off-states (also referred to as bright and dark states, respectively) occur on longer timescales (0.1 ms to 100 ms).

To control fluorophore switching between triplet dark and bright states, i.e., to control blinking, oxygen concentration may be adjusted. Upon reaction with dissolved molecular oxygen, fluorophores may transition from the triplet dark (off-state) to singlet ground (on-state) states by interacting with molecular oxygen’s ground triplet state. Molecular oxygen can also accept an electron from a triplet fluorophore inducing typically undesirable phototoxic effects, i.e., irreversible photo-bleaching [99] occurring from many states as shown in Fig. 6.

Though in some applications photo-bleaching is desirable, in others, such as particle tracking [100–102] and protein-protein interactions via FRET [103, 104], photo-bleaching and blinking are problematic and suppressed by removal of dissolved oxygen via oxygen scavenging systems, such as glucose oxidase coupled with catalase [105], or by depopulating dark states leveraging both reducing and oxidizing agents [106].

In many cases, such as in STochastic Optical Reconstruction Microscopy (STORM) [107], blinking of fluorophores is desirable to achieve spatial resolution below the diffraction limit; see Sec. V B 2. Here, many cyanine and rhodamine dyes are used as they can be reversibly photo-switched from a bright state to a dark state (blink) in a buffer containing enzymatic oxygen scavengers and a primary thiol such as β-mercaptoethylamine or β-mercaptoethanol [87, 108]. Alexa Fluor 647 is the organic dye of choice for state-of-the-art direct STochastic Optical Reconstruction Microscopy (dSTORM) imaging due to its high brightness and efficient switching behavior [109]. For several cyanines, e.g., Cy5, it has been shown that thiolate anions covalently bind to the fluorophore [110], thereby disrupting the conjugated system and resulting in a dark state. The dyes can also be chemically reduced by NaBH4 to a non-fluorescent form, or synthesized in a caged form that can later be photo-activated, which has been used in different SMLM techniques [111, 112]. Rhodamine dyes can also reversibly switch from a fluorescent to a non-fluorescent form by intra-molecular spirocyclization, either spontaneously or driven by UV light. This has been exploited to generate sensors and switches and can be used across SMLM applications [87, 113].

Examples of SMLM include (fluorescence) Photo-Activated Localization Microscopy ((f)PALM) [114, 115], as well as derivatives such as single particle tracking PALM (sptPALM) [116]. In these applications, advanced fluorescent proteins are used. These switch between fluorescent states through changes at the chromophore, either reversibly (e.g., on and off for Dronpa by cis-trans isomerization), through photo-activation (e.g., PA-GFP by decarboxylation), or through photo-conversion (e.g., green to red emission for mEos by β-elimination) [82, 108].

More recently, studies of protein activity and SMLM have benefited from the discovery of a new class of ligand-activated fluorescent proteins [117]. The prototype UnaG binds the small molecule bilirubin via multiple noncovalent interactions and forms a fluorescent complex. The oxidized (and photo-bleached) ligand can detach from the protein, allowing a fresh bilirubin molecule to bind and act as a sensor for small molecules thereby reporting on protein activity [118].

In general, fluorescent proteins have the advantage of being genetically encodable, allowing fluorescent labeling of nearly arbitrary target proteins in living cells and organisms by creating fusion constructs. However, this also means that proteins must undergo appropriate folding followed by chromophore maturation, i.e., formation of a fluorescent molecule typically starting from three amino acids [82]. This process can take minutes to hours, may be incomplete, and can impair the temporal accuracy of measurements of rapid processes such as gene expression dynamics [119]. While organic dyes circumvent some of these difficulties, both organic dyes and fluorescent proteins often exhibit complex photo-physical and photo-chemical behaviors complicating quantitative analysis. For instance, organic dyes can exhibit spectral blue shifts upon high laser irradiation [120, 121] or spectral shifts from substrate (green) to product state (orange), as in the epoxidation of a double bond in conjugation to a BODIPY dye [122], useful in mechanistic studies of chemical reactions at the single molecule level [123]. However, such spectral shifts may affect multi-color applications, e.g., in super-resolution imaging or Single Particle Tracking (SPT), and are problematic for FRET experiments. Moreover, many proteins have additional dark states, e.g., mEos cis-trans isomerization [124, 125], and organic dyes may have several conformations with different intensity levels, e.g., Atto647N, with at least three states differing in fluorescence lifetimes [126], complicating quantitative single molecule read-outs.

B. Förster resonance energy transfer

In the previous section, we discussed fluorophore properties involving radiative or non-radiative transitions. Here, we continue by considering non-radiative transitions through inter-molecular energy transfer [92]. A few examples of these transitions include: Photo-induced Electron Transfer (PET) [127], collisional quenching, FRET, Bioluminescence Resonance Energy Transfer (BRET) [128, 129], Protein Induced Fluorescence Enhancement (PIFE) [130, 131], and the recently discovered Proximity-Assisted Photo-Activation (PAPA) [132]. Such transitions are distance dependent and thus have been leveraged to probe binding interactions or conformational changes.

In what follows, we focus on FRET, an inter-molecular energy transfer process widely used to measure molecular interactions, serving as a distance ruler for structural biology [91, 133, 134]. In FRET, non-radiative energy transfer from a donor to an acceptor fluorophore occurs through dipole-dipole coupling with rate constant k_FRET when the donor’s emission spectrum overlaps with the acceptor’s absorption spectrum [135]. Under the dipolar approximation, the probability for energy transfer to occur, termed the FRET efficiency (E_FRET), scales with the donor-acceptor distance to the inverse 6th power [136], and is 50% at the Förster radius R_0

\[ E_{FRET} = \frac{1}{1 + \left(r/R_0\right)^6} = \frac{k_{FRET}}{k_{FRET} + k_f + k_{non}} = 1 - \frac{\tau_{DA}}{\tau_D}, \tag{19} \]

where τDA and τD are, respectively, the donor fluorescence lifetime in the acceptor’s presence and absence. For typical donor-acceptor pairs, R0 is a few nanometers [91] and depends on the donor emission-acceptor absorption spectral overlap and the relative orientation of donor-acceptor dipole moments. It is explicitly given by

\[ R_0^6 = \frac{9000 \ln 10}{128 \pi^5 N_A n^4}\, \kappa^2\, Q_{f,D} \int I_D(\lambda)\, \epsilon_A(\lambda)\, \lambda^4\, d\lambda, \tag{20} \]

where κ is the so-called orientation factor,

\[ \kappa = 3\cos\theta_D \cos\theta_A - \cos\theta_{DA}, \tag{21} \]

Q_{f,D} is the donor’s quantum yield in the absence of the acceptor, n is the solution’s refractive index, N_A is the Avogadro constant, I_D is the donor’s normalized fluorescence emission spectrum, ε_A is the acceptor’s molar extinction coefficient, θ_DA is the angle between the donor and acceptor transition dipole moments, and θ_D and θ_A are the angles between these moments and the vector connecting donor to acceptor, respectively. For ε_A and λ given in L mol⁻¹ cm⁻¹ and cm, respectively, R_0 is in cm.

Ignoring the angular dependence of the energy transfer, as described in Eq. 20, for fixed dipoles can yield significant biases in FRET distance assessments [137]. Fortunately, in practice, the dipoles are often freely and rapidly rotating (rapid compared to the donor de-excitation rate), leading to an average value of κ² = 2/3.
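A brief sketch of Eq. 19's steep distance dependence; the Förster radius, distances, and lifetimes below are hypothetical values chosen only for illustration:

```python
# FRET efficiency (Eq. 19) as a spectroscopic distance ruler.
# The Forster radius, distances, and lifetimes below are illustrative.

def fret_efficiency(r_nm, R0_nm):
    """E_FRET = 1 / (1 + (r / R0)^6)."""
    return 1.0 / (1.0 + (r_nm / R0_nm) ** 6)

def fret_efficiency_from_lifetimes(tau_DA, tau_D):
    """E_FRET = 1 - tau_DA / tau_D."""
    return 1.0 - tau_DA / tau_D

R0 = 5.0                                  # Forster radius, nm
E_at_R0 = fret_efficiency(5.0, R0)        # 0.5 by definition of R0
E_close = fret_efficiency(3.0, R0)        # ~0.96: steep r^-6 dependence
E_life = fret_efficiency_from_lifetimes(tau_DA=1.0, tau_D=4.0)   # 0.75
```

The r⁻⁶ scaling makes E_FRET change rapidly only within roughly a factor of two of R_0, which is precisely the few-nanometer window relevant to biomolecular conformations.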

FRET can also occur between spectrally identical molecules (homo-FRET), and is observed by measuring its effect on fluorescence polarization anisotropy [138]

\[ r = \frac{I_\parallel - G I_\perp}{I_\parallel + 2 G I_\perp}. \tag{22} \]

Here, I_∥ and I_⊥ are the intensities measured when the polarizer in the detection path is aligned parallel or perpendicular, respectively, to that in the excitation path, and G is a correction factor for the difference in the instrument’s sensitivity to the two orthogonal polarization orientations.
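As a minimal numerical sketch of Eq. 22 (the intensities and G factor are illustrative assumptions):

```python
# Fluorescence anisotropy from polarized intensities, Eq. 22.
# Intensities and the G factor below are illustrative assumptions.

def anisotropy(I_par, I_perp, G=1.0):
    """r = (I_par - G * I_perp) / (I_par + 2 * G * I_perp)."""
    return (I_par - G * I_perp) / (I_par + 2.0 * G * I_perp)

# Partially depolarized emission on a slightly unbalanced instrument:
r = anisotropy(I_par=1000.0, I_perp=550.0, G=1.05)   # ~0.20
```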

Upon exposure to linearly polarized light, the excitation probability is highest for molecules whose absorption dipole moments are aligned parallel to the polarization vector of the exciting light. In most cases, the absorption and emission dipoles of a molecule are co-linear, such that fluorescence emission remains polarized immediately after excitation. Fluorescence remains anisotropic unless the molecule rotates over the fluorescence lifetime or the excitation energy is transferred to a different molecule. Thus anisotropy or polarization measurements inform us on molecular parameters such as orientation, oligomerization or size, and environmental conditions like viscosity [138, 139]. Polarization can also be read out in super-resolution imaging, e.g., using polarized light in illumination or detection and capturing polarized emission by implementing specifically engineered PSFs sensitive to polarization [140]; see Sec. V C.

Polarization, lifetime, FRET efficiency, or other photo-physical markers we have discussed herein are only interesting in so far as their changes report back on the kinetics of the underlying labeled molecules. We now turn to Markov models describing discrete molecular events to extract molecular kinetics from photo-physical changes.

C. Markov models for fluorophores

To help motivate the use of Markov models, we consider them in the analysis of FRET data and the enumeration of fluorophores within a diffraction-limited Region Of Interest (ROI).

For example, observations from FRET experiments with photons individually recorded (at avalanche photodiodes abbreviated as APDs) include a set of photon arrival times along with a set of corresponding colors (wavelengths), designated by c=1,2, attributing photons to either donor or acceptor channels, respectively.

The set of photon arrival times (the data) is measured either with respect to the start of the experiment, for continuous illumination [141], or with respect to the pulse immediately preceding a photon detection, as in pulsed illumination [142]. Here, for the sake of illustration, we assume continuous illumination, where the data consist of intervals between photon arrivals. We let K + 1 coincide with the total number of photons and denote the data by Δt_{1:K} = {Δt_1, …, Δt_K}. The sets of inter-arrival times are then used to learn transition kinetics between system states comprised of molecular and label photo-physical states. For concreteness, we assume that molecular states coincide with conformational states of a typically large biomolecule.

To collect such typical FRET data sets, the donor is excited using an illumination laser and we assume, only for simplicity here though treated more generally in Ref. [143], that acceptors become excited exclusively via FRET. The rate of donor and acceptor emission then depends on their separation, characterizing a conformational state and its corresponding FRET efficiency; see Sec. II B. As the number of conformational states associated with different FRET efficiencies (E_FRET, Eq. 19) may be unknown, these may be learned non-parametrically [141, 143]. However, for simplicity here again, we presume two states, termed high and low FRET, designated by ξ_m, m = 1, 2. Further, given that both donors and acceptors are rarely simultaneously excited, we only consider three possible photo-physical states: f_1 = (Ground, Ground), f_2 = (Excited, Ground), and f_3 = (Ground, Excited), where the first element represents the donor’s state. The entire problem’s state space is then spanned by a set of states obtained from the tensor product of photo-physical and conformational states, termed composite states. To facilitate the notation, we designate composite states by s_m ∈ {(ξ_1, f_1), (ξ_1, f_2), (ξ_1, f_3), (ξ_2, f_1), (ξ_2, f_2), (ξ_2, f_3)} with m = 1:6.

We can now write a generative model required in constructing the likelihood used in the analysis of FRET experiments. To do so, we start from the rate matrix

\[ K = \begin{pmatrix} 0 & k_{s_1 s_2} & \cdots & k_{s_1 s_6} \\ k_{s_2 s_1} & 0 & \cdots & k_{s_2 s_6} \\ \vdots & \vdots & \ddots & \vdots \\ k_{s_6 s_1} & k_{s_6 s_2} & \cdots & 0 \end{pmatrix}, \tag{23} \]

where self-transitions are, by definition, disallowed and k_{s_m s_{m'}} is the transition rate from state s_m to s_{m'}. Furthermore, elements of the rate matrix coinciding with simultaneous conformational and photo-physical transitions are set to zero owing to their rarity. Non-zero elements of the rate matrix thus coincide with: 1) transitions between the two FRET conformational states (k_{ξ_1 ξ_2}, k_{ξ_2 ξ_1}) while the photo-physical states remain fixed; or 2) transitions between different photo-physical states while conformational states remain fixed. To be more precise, photo-physical transitions include donor excitation, k_{s_1 s_2} = k_ex; donor radiative relaxation, k_{s_2 s_1} = k_d; acceptor relaxation, k_{s_3 s_1} = k_a; the FRET transition when in ξ_1, k_{s_2 s_3} = k_FRET^(1); and the FRET transition when in ξ_2, k_{s_5 s_6} = k_FRET^(2). As such, written explicitly, the rate matrix for this simple case reads

\[ K = \begin{pmatrix}
0 & k_{ex} & 0 & k_{\xi_1\xi_2} & 0 & 0 \\
k_d & 0 & k_{FRET}^{(1)} & 0 & k_{\xi_1\xi_2} & 0 \\
k_a & 0 & 0 & 0 & 0 & k_{\xi_1\xi_2} \\
k_{\xi_2\xi_1} & 0 & 0 & 0 & k_{ex} & 0 \\
0 & k_{\xi_2\xi_1} & 0 & k_d & 0 & k_{FRET}^{(2)} \\
0 & 0 & k_{\xi_2\xi_1} & k_a & 0 & 0
\end{pmatrix}. \tag{24} \]

Observations only occur when either the donor or acceptor emits radiatively. As such, the system may visit intermediate states between photon emissions, such as undergoing conformational transitions. For a perfect detector, e.g., ignoring detector dead time [143] and assuming complete detection efficiency (otherwise k_ex is understood as an effective excitation rate), the photon inter-arrival time coincides with the total time the system spends avoiding radiative transitions.

Now, to construct the likelihood for a FRET data set (inter-photon arrival times and detection channels), we begin by illustrating how such a data set can be obtained from a generative model. To do so, we first designate the state of the composite system at time t_n as s_{t_n}. Next, following the notation introduced in Sec. I B (see Eq. 8), a state trajectory is constructed following the Gillespie algorithm [144] by first selecting the state to which we transition and then deciding when this transition occurs

\[ s_{t_{n+1}} \mid s_{t_n} \sim \mathrm{Categorical}\!\left(\frac{k_{s_{t_n} s_1}}{k_{s_{t_n}}}, \ldots, \frac{k_{s_{t_n} s_6}}{k_{s_{t_n}}}\right), \tag{25} \]
\[ \delta t_n \sim \mathrm{Exponential}\!\left(k_{s_{t_n}}\right). \tag{26} \]

Here, δt_n = t_{n+1} − t_n is the time the system spends in state s_{t_n}, and k_{s_{t_n}} is the escape rate out of s_{t_n}, i.e., the sum of the rates pointing out of s_{t_n}. The Categorical distribution introduced here is the generalization of the Bernoulli distribution to more than two outcomes.
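The sampling scheme of Eqs. 25–26 can be sketched in a few lines of pure Python using the six-state rate structure of Eq. 24; all rate values below are illustrative assumptions:

```python
import random

# Minimal Gillespie sketch of the composite-state trajectory (Eqs. 25-26)
# using the rate structure of Eq. 24; all rate values are illustrative.

k_ex, k_d, k_a = 1.0, 40.0, 45.0     # excitation and relaxation rates
k_12, k_21 = 0.5, 0.3                # conformational transition rates
k_fret1, k_fret2 = 10.0, 60.0        # FRET rates in states xi_1, xi_2

K = [  # K[m][m2] = rate from composite state s_{m+1} to s_{m2+1}
    [0.0,  k_ex, 0.0,     k_12, 0.0,  0.0],
    [k_d,  0.0,  k_fret1, 0.0,  k_12, 0.0],
    [k_a,  0.0,  0.0,     0.0,  0.0,  k_12],
    [k_21, 0.0,  0.0,     0.0,  k_ex, 0.0],
    [0.0,  k_21, 0.0,     k_d,  0.0,  k_fret2],
    [0.0,  0.0,  k_21,    k_a,  0.0,  0.0],
]

def gillespie(K, state, t_max, rng):
    """Sample a trajectory: dwell ~ Exponential, next state ~ Categorical."""
    t, traj = 0.0, [(0.0, state)]
    while True:
        escape = sum(K[state])          # escape rate out of the current state
        t += rng.expovariate(escape)    # Eq. 26: exponential dwell time
        if t > t_max:
            return traj
        # Eq. 25: pick the destination with probability k_{s s'} / escape
        u, acc = rng.random() * escape, 0.0
        for s_next, rate in enumerate(K[state]):
            acc += rate
            if rate > 0.0 and u <= acc:
                state = s_next
                break
        traj.append((t, state))

trajectory = gillespie(K, state=0, t_max=5.0, rng=random.Random(0))
```

Recording the times at which the trajectory takes the radiative transitions (donor or acceptor relaxation) would then yield synthetic photon arrival data of the kind analyzed below.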

Taken together, Eqs. 25–26 constitute what is called a generative model, i.e., a model helpful both in generating the data and in constructing the likelihood. This generative model can indeed be further generalized to include imperfect detectors, dead time, and other artifacts such as direct acceptor excitation and cross-talk [143, 145–147].

We are now presented with a modeling choice. That is, we may learn the trajectory in composite state space (states occupied across time points) and kinetic rates populating the rate matrix [142, 148]. Alternatively, as is more commonly done, we may marginalize (see Eq. 6) over all trajectories and learn only kinetic rates [143, 149].

As is most common, we select the latter path and marginalize over all possible (non-radiative) paths between observations. To achieve this, we use the master equation [21, 143, 149–151]

\[ \frac{d}{dt} P(t) = P(t)\, G \tag{27} \]

describing the evolution of the probability vector P(t) collecting the probabilities of occupying different states at time t. Here, G, the generator matrix, is related to the rate matrix as follows

\[ G = K - \begin{pmatrix} k_{s_1} & 0 & \cdots & 0 \\ 0 & k_{s_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & k_{s_6} \end{pmatrix}, \tag{28} \]

where the diagonal matrix has the same size as K and its non-zero elements coincide with the escape rates. From the generator matrix, we obtain a propagator matrix Q collecting transition probabilities over an infinitesimal period ε

\[ Q = \exp(G \varepsilon). \tag{29} \]

Therefore, given the probability vector at time t − ε, P(t − ε), the probability vector at time t reads P(t) = P(t − ε) Q. As such, given the initial probability vector P_in, we find the probability at any time by dividing the time interval into N small periods of ε

\[ P = P_{in}\, Q_1 \cdots Q_N, \tag{30} \]

where Q_1 = ⋯ = Q_N = Q in the absence of observations. However, in the presence of observations, the propagators in Eq. 30 are modified according to the monitored transitions [143]. For example, the observation of no photon over the nth period ε signifies no radiative transitions, allowing us to set k_a = k_d = 0 for this period, which in turn results in a modified propagator, designated by Q_n^non. Furthermore, a photon arrival, indicating a radiative transition, forces non-radiative transition rates to zero, leading to a modified propagator Q_k^rad for the kth photon over an infinitesimal period ε.

The likelihood over a set of observations is now expressed in terms of these modified propagators [143, 152]

\[ P(\Delta t_{1:K} \mid K, P_{in}) \propto P_{in}\, Q_1^{non} \cdots Q_k^{rad} \cdots Q_N^{non}\, P_{norm}^T, \tag{31} \]

where P_norm is a row vector of ones.
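To make the modified-propagator construction concrete, here is a toy numerical sketch for a two-state emitter (ground and excited, with a single radiative decay channel) rather than the full six-state model of Eq. 24; the rates, photon time, and step ε are illustrative, and the exponential of Eq. 29 is approximated to first order in ε:

```python
import numpy as np

# Toy single-photon likelihood in the spirit of Eqs. 27-31 for a two-state
# emitter: state 0 = ground, state 1 = excited. All values illustrative.

k_ex, k_d = 1.0, 20.0                      # excitation and radiative rates
K = np.array([[0.0, k_ex],
              [k_d, 0.0]])
escape = K.sum(axis=1)                     # escape rates (diagonal of Eq. 28)

eps = 1e-4
# No-photon propagator: drop the radiative jump from the off-diagonal but
# keep full escape rates, so probability drains while no photon is seen.
K_non = K.copy()
K_non[1, 0] = 0.0
Q_non = np.eye(2) + (K_non - np.diag(escape)) * eps   # first-order Eq. 29

# Photon propagator: probability of a radiative jump during one period eps.
Q_rad = np.zeros_like(K)
Q_rad[1, 0] = k_d * eps

def likelihood(dt, P_in):
    """P_in Q_non^N Q_rad 1^T for one photon detected at time dt."""
    N = int(round(dt / eps))
    P = P_in @ np.linalg.matrix_power(Q_non, N) @ Q_rad
    return float(P.sum())

P_in = np.array([1.0, 0.0])                # start in the ground state
L_early = likelihood(0.05, P_in)           # photon soon after excitation
L_late = likelihood(2.0, P_in)             # late photons are less likely
```

The same chain of sub-stochastic no-photon propagators interleaved with radiative propagators, extended to the six-state matrix of Eq. 24, yields the likelihood of Eq. 31.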

Until now, we have assumed a parametric framework with a fixed number of conformational states, often set in the literature to two, low and high FRET [153]. Now we lift this constraint, treat the number of conformational states as unknown, and extend the formulation above to the non-parametric regime. To do so, we assume an infinite number of conformational states with a load b_m (see Sec. I B) associated with each mth state, resulting in an infinite-dimensional generator matrix; see Refs. [141, 143]. From the non-parametric generator matrix, we compute the corresponding propagator matrices and use them to build a likelihood similar to Eqs. 29–31. The non-parametric posterior over the set of unknowns ϑ = {b, K, P_in} is then constructed by including a Beta-Bernoulli process prior (see Sec. I B) over the loads and appropriate priors over the remaining unknowns (ideally conditionally conjugate priors if available [21]); see Box II C. Strictly speaking, in computational applications, we often use large albeit finite load numbers, M, and verify that for large enough M the conclusions drawn are independent of M. Finally, the FRET posterior obtained is sampled using Monte Carlo methods to deduce the set of unknowns [141–143].

Statistical Framework II C: FRET.

Data: Photon inter-arrival times

Δt_{1:K} = (Δt_1, …, Δt_K).

Parameters: loads, transition rates, initial probability vector

ϑ = (b, K, P_in).

Likelihood:

P(Δt̄ | ϑ) ∝ P_in Q_1^non ⋯ Q_k^rad ⋯ Q_N^non P_norm^T.

Priors:

q_m ~ Beta(A_q, B_q), m = 1, 2, …,
b_m ~ Bernoulli(q_m),
K ~ Gamma(α_K, β_K),
P_in ~ Dirichlet(α_Π).

Posterior:

P(ϑ | Δt̄) ∝ P(Δt̄ | ϑ) P(ϑ).

An alternative statistical FRET framework makes use of photon counts over equal time windows, i.e., bins, during the experiment rather than single photons [143, 149]. In this case, the likelihood is derived using the fact that photon counts over fixed periods are Poisson distributed (ignoring detector noise convolved with Poisson shot noise required of quantitative analyses) [143]. The derivation of such likelihoods is more straightforward than in the single photon case [147, 154], and learning rates (or, more accurately, transition probabilities) is achieved using Hidden Markov Models (HMMs) [145]. While traditional HMM frameworks require the number of FRET states as input, more recent iterations have leveraged variational tools to determine states, e.g., vbFRET [155], with recent developments in non-parametric infinite HMMs (iHMMs) now allowing posterior probabilities over states warranted by the data to be sampled simultaneously alongside kinetics [37, 147].
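The binned-photon likelihood just described can be sketched with a standard forward (filtering) recursion for an HMM with Poisson emissions; the two-state transition probabilities and per-bin photon means below are illustrative placeholders, not values from the references:

```python
import numpy as np

def hmm_poisson_loglike(counts, Pi, mu, p0):
    """Log-likelihood of binned photon counts under an HMM whose per-bin
    emissions are Poisson with state-dependent mean mu[s].
    Pi: (S, S) transition probability matrix; p0: initial state probabilities."""
    from math import lgamma
    counts = np.asarray(counts)
    # Poisson log emission probabilities for every (bin, state) pair
    log_em = (counts[:, None] * np.log(mu)[None, :] - mu[None, :]
              - np.array([lgamma(c + 1) for c in counts])[:, None])
    alpha = p0 * np.exp(log_em[0])       # forward variable (scaled each step)
    loglike = 0.0
    for k in range(1, len(counts)):
        norm = alpha.sum()
        loglike += np.log(norm)
        alpha = (alpha / norm) @ Pi * np.exp(log_em[k])
    loglike += np.log(alpha.sum())
    return loglike

# Two FRET states with low/high mean photon counts per bin (illustrative):
Pi = np.array([[0.95, 0.05], [0.10, 0.90]])
mu = np.array([2.0, 10.0])
p0 = np.array([0.5, 0.5])
counts = [2, 1, 3, 9, 11, 10, 2, 1, 12, 9]
print(hmm_poisson_loglike(counts, Pi, mu, p0))
```

The forward recursion marginalizes over the hidden state sequence in O(K S²) time, which is the core computation behind both classical HMM and vbFRET-style analyses.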

However, by virtue of binning photon arrivals, whether by choice or due to the detector used, HMM frameworks naturally compromise our ability to resolve fast kinetics occurring on timescales at or below the bin size. For this reason, other than the potential for computational speed-up, there is no reason to bin single photon data. On the other hand, if using detectors that unavoidably bin counts across pixels, as commonly used in wide-field applications (see Appendix A), then fast transitions may still be deduced on timescales exceeding data acquisition. This is achieved by leveraging the fact that the signal amounts to an average of the properties of the states visited [148, 156, 157]; see Fig. 7.

FIG. 7:

Data simulated for discrete measurements of two-state systems with fast and slow transitions depicted in panels a and b, respectively. The system trajectories in the state space, measurements at different time intervals (δT), i.e., bins, and the state signal levels in the absence of noise are, respectively, denoted by cyan, gray, and dotted lines. Measurements falling between the state signal levels coincide with time intervals over which the system has switched to a different state at some point. In the simulations, data acquisitions take place every δT = 0.1 s, where the average time spent in each state is, respectively, 0.8 s and 0.066 s for slow and fast kinetics. The figure is adapted from Ref. [148].

Such strategies used to deduce dynamics on timescales at or exceeding data acquisition rely on the Markov jump process (MJP) [148, 158], which assumes that the system evolves in continuous time. This is in contrast to the HMM paradigm, which approximates dynamics as occurring discretely and only at the measurement times. Put differently, the MJP accurately pre-supposes a continuous time trajectory 𝒮(t) in the discrete state space of the composite system, generated using the same procedure as described by Eqs. 25–26. The observation for the kth data acquisition period (bin) is therefore [148, 157]

w_k ~ Poisson( ∫_{t_k}^{t_k + δT} μ_{𝒮(t)} dt ),  (32)

where μ_{𝒮(t)} represents the photon emission rate of the instantaneous state occupied at time t.
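Eq. 32 can be illustrated by simulating a continuous-time two-state trajectory (Gillespie-style) and drawing per-bin Poisson counts from the integrated emission rate; all rates and emission levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_binned_counts(k12, k21, mu, T=10.0, dT=0.1):
    """Simulate a two-state Markov jump process (Gillespie) and generate
    per-bin photon counts w_k ~ Poisson( integral of mu_S(t) over the bin ),
    i.e., Eq. 32 with state-dependent emission rates mu = (mu_1, mu_2)."""
    rates = (k12, k21)
    t, s = 0.0, 0
    jumps, states = [0.0], [0]
    while t < T:                       # draw exponential holding times
        t += rng.exponential(1.0 / rates[s])
        s = 1 - s
        jumps.append(t); states.append(s)
    edges = np.arange(0.0, T, dT)
    counts = []
    for a in edges:
        b = a + dT
        lam = 0.0
        # accumulate the integral of mu_S(t) over [a, b] piecewise
        for (t0, t1), st in zip(zip(jumps[:-1], jumps[1:]), states[:-1]):
            lo, hi = max(a, t0), min(b, t1)
            if hi > lo:
                lam += mu[st] * (hi - lo)
        counts.append(rng.poisson(lam))
    return np.array(counts)

# Fast switching relative to the bin yields intermediate count levels:
w = simulate_binned_counts(k12=15.0, k21=15.0, mu=(20.0, 200.0))
```

When the switching rates greatly exceed 1/δT, most bins integrate over several state visits, producing the intermediate levels discussed around Fig. 7.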

Having briefly highlighted Markov model applications for FRET, here we describe how Markov models are used when enumerating fluorophores [76, 159–162], typically with the intent of determining the stoichiometry of a labeled protein complex within a diffraction-limited spot.

For a single fluorophore we assume, for simplicity of demonstration alone, a state space spanned by 3 photo-physical states, though this treatment is generalized elsewhere [76, 154]. These include the: 1) bright state, f_A; 2) dark state, f_D; 3) photo-bleached state, f_B. Transitions between these states include: f_A → f_A, f_A → f_D, f_A → f_B, f_D → f_D, f_D → f_A, f_B → f_B. Here, the photo-bleached state is an absorbing state from which escape is impossible; see Sec. II A.

Typically, in such applications, a wide-field detector (see Appendix A) is used to record data from ROIs containing one or multiple putative complexes. The ROIs may contain one or more pixels. The input to the analysis then consists of the sum of the intensity or brightness in each ROI, typically obtained by summing the pixel values (Analogue-to-Digital Units or ADUs) over the pixels involved. The sum of ADUs for each ROI is then recorded over K successive frames and designated by w̄_{1:K} = (w_{1:K}^1, …, w_{1:K}^R), where the overbar represents the set of R ROIs. Typically, the last frame is taken after all fluorophores within the ROI have photo-bleached; see Fig. 8. Assuming only photo-bleaching and ignoring transitions from bright to dark states, the number of discrete intensity drops in the time trace, if all fluorophores are initially bright, should coincide with the number of photo-bleaching events and thus the complex stoichiometry. However, not all fluorophores may initially be active, such as in the case of PALM [160]. What is more, fluorophores blink; see Sec. II A and Fig. 8.

FIG. 8:

Fluorophore enumeration. (a) Cartoon representation of the enumeration problem where the ROI intensity varies as fluorophores switch between the dark, bright, and photo-bleached states. (b-d) Histogram of the sampled posterior over the number of fluorophores, i.e., sum of sampled loads, for experimental data with, respectively, 24, 49 and 98 fluorophores using the statistical framework appearing in Box II C. The figure is adapted from Ref. [76].

If our goal is to enumerate the fluorophores, assuming identical complexes across ROIs, then for independent ROIs (iid variables), the likelihood reads (see Sec. I B)

P(w̄_{1:K} | Λ̄_{1:K}, Ξ) = ∏_r ∏_k P(w_k^r | Λ_k^r, Ξ),  (33)

where Ξ denotes the camera parameters (see Appendix A) and the elements of Λ̄_{1:K}, namely Λ_k^r, coincide with the expected photon count, i.e., the brightness obtained from the emission rate multiplied by the camera exposure time, of the rth ROI at frame k.

Decomposed in terms of emission due to background and fluorophores, Λ_k^r reads

Λ_k^r = B^r + I_A Σ_{m=1}^{M^r} δ_{A, s_k^{rm}},  (34)

where m counts the M^r fluorophores within the rth ROI. Here I_A, B^r, and s_k^{rm}, respectively, denote the fluorophore brightness, the background brightness of the rth ROI per frame, and the state of the mth fluorophore within the rth ROI at frame k. The Kronecker delta, δ_{A, s_k^{rm}}, encodes the assumption that fluorophores only emit in the bright state. This decomposition assumes, perhaps erroneously in some cases, that the fluorophores do not interact [90].

Next, approximating the fluorophore state as remaining the same over each frame, with the state at frame k only depending on its (potentially different) state at frame k−1, i.e., the Markov assumption, we may formulate the problem using transition probabilities between different states and avoid transition rates altogether. The transition probabilities associated with a single fluorophore can be collected as elements of a matrix, designated by Π, analogous to the propagator Q of Eq. 29 for finite time windows

Π = exp(G δT) =
| π_AA  π_AD  π_AB |
| π_DA  π_DD  0    |
| 0     0     1    |.  (35)

Here, δT is the fixed period of time between measurements (frame exposure time) and each row of the transition matrix contains the transition probabilities out of a given state. For instance, we have π_A = (π_AA, π_AD, π_AB) for the bright state. The structure of the last row of Π reflects the absorbing nature of the bleached state.

The state of a single fluorophore at frame k given its state at k-1 is sampled as follows

s_k^{mr} | s_{k-1}^{mr} ~ Categorical( π_{s_{k-1}^{mr}} ),  (36)

where π_{s_{k-1}^{mr}} collects the probabilities of all possible transitions out of s_{k-1}^{mr}. Finally, as fluorophore transitions are assumed independent, the transition probability of the full system is obtained from the product of the individual fluorophore transition probabilities.
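Eqs. 35–36 can be sketched as follows, with illustrative (hypothetical) rates and a truncated-series matrix exponential standing in for exp(GδT):

```python
import numpy as np

rng = np.random.default_rng(2)

def expm_series(A, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small,
    well-scaled generator matrices such as G*dT here)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for j in range(1, terms):
        term = term @ A / j
        out = out + term
    return out

# Generator for bright (A), dark (D), bleached (B); bleached is absorbing.
# The rates (per second) are illustrative placeholders.
k_AD, k_AB, k_DA = 2.0, 0.5, 1.0
G = np.array([[-(k_AD + k_AB), k_AD, k_AB],
              [k_DA,           -k_DA, 0.0],
              [0.0,             0.0,  0.0]])
dT = 0.05                     # frame exposure time (s)
Pi = expm_series(G * dT)      # Eq. 35: transition probabilities over one frame

def sample_trajectory(Pi, K, s0=0):
    """Sample a single-fluorophore state trajectory, Eq. 36."""
    s = [s0]
    for _ in range(K - 1):
        s.append(rng.choice(len(Pi), p=Pi[s[-1]]))
    return np.array(s)

traj = sample_trajectory(Pi, K=200)
```

Because each row of G sums to zero, each row of Π sums to one, and the last row of Π is (0, 0, 1), so sampled trajectories stay in the bleached state once they enter it.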

While the photo-physics of individual fluorophores may be known, the number of fluorophores is itself unknown. This presents a model selection challenge warranting a non-parametric formulation. Conceptually, this is achieved by assuming an infinite number of fluorophores with associated loads; see Sec. I B. Concretely, we modify Eq. 34 as follows

Λ_k^r = B^r + I_A Σ_{m=1}^∞ b_m^r δ_{A, s_k^{mr}},  (37)

where b_m^r is the load associated with the mth fluorophore in the rth ROI. In this case, the number of fluorophores is replaced by loads for each ROI. We collect the set of unknowns in ϑ = (b̿, I_A, B̄, Π, 𝒮̿). Here, double overbars represent the set of all possible values of the two indices associated with each of the parameters b and 𝒮.

Finally, to construct the posterior for the set of parameters in ϑ, we introduce priors. The most notable priors are the Beta-Bernoulli process prior on the loads and the Dirichlet prior on the transition probabilities, owing to its conjugacy to the Categorical distribution of Eq. 36. For the remaining priors in Box II C, we opt for computationally efficient priors when possible, leveraging the mathematical structure of the likelihood (see Sec. I B) [76]. In particular, we invoke Monte Carlo methods to draw samples of ϑ from the posterior, with forward filter backward sampling specifically used to sample fluorophore trajectories [32, 76, 163].
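The forward filter backward sampling step mentioned above can be sketched for a single fluorophore with Poisson intensity observations; this is a generic FFBS sketch with invented parameters, not the full framework of Ref. [76]:

```python
import numpy as np

rng = np.random.default_rng(3)

def ffbs(counts, Pi, mu, p0):
    """Forward filter backward sample: draw a state trajectory s_{1:K} from
    its posterior given observed counts, transition matrix Pi, state-dependent
    Poisson means mu, and initial state probabilities p0."""
    K, S = len(counts), len(mu)
    # emission weights (unnormalized; factorial terms cancel in the filter)
    em = np.exp(-mu)[None, :] * mu[None, :] ** np.asarray(counts)[:, None]
    # forward filtering with per-step normalization
    alpha = np.zeros((K, S))
    alpha[0] = p0 * em[0]
    alpha[0] /= alpha[0].sum()
    for k in range(1, K):
        alpha[k] = (alpha[k - 1] @ Pi) * em[k]
        alpha[k] /= alpha[k].sum()
    # backward sampling of the trajectory
    s = np.zeros(K, dtype=int)
    s[K - 1] = rng.choice(S, p=alpha[K - 1])
    for k in range(K - 2, -1, -1):
        w = alpha[k] * Pi[:, s[k + 1]]
        s[k] = rng.choice(S, p=w / w.sum())
    return s

Pi = np.array([[0.9, 0.1], [0.2, 0.8]])
mu = np.array([1.0, 8.0])           # dark-ish vs bright mean counts (illustrative)
counts = [0, 1, 7, 9, 8, 1, 0, 10, 8, 0]
traj = ffbs(counts, Pi, mu, np.array([0.5, 0.5]))
```

Each call draws one trajectory from the conditional posterior over state paths, which is exactly the role FFBS plays inside a larger Monte Carlo sweep over the remaining unknowns.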

Having discussed how to decode temporal data, we now turn to spatiotemporal data and, for this, we discuss the optics of different microscope modalities and derive their corresponding PSFs.

III. FLUORESCENCE MICROSCOPY: POINT SPREAD FUNCTION

In this section, we develop in a brief but otherwise self-contained manner the physical theory of optical imaging within a wide-field fluorescence microscope. We start by deriving the Abbe sine condition subsequently used to describe fundamental properties of electromagnetic wave propagation through optical systems. We then continue by deriving the basic principles of how to compute the OTF and PSF of a microscope, discuss the lack of optical sectioning of wide-field microscopes, and illustrate the effect of optical aberrations on PSFs.

A. Fundamental property of microscopic imaging: Abbe’s sine condition

To gain a deeper understanding of how a microscope forms an image, alongside the fundamental principles governing image formation, we start by considering the imaging of a generic point source in sample space into an image point in image space; see Fig. 9. To do so, we hereafter denote parameters associated with the image and sample spaces with and without a prime, respectively. A point source in the focal plane on the optical axis (symmetry axis designated by blue lines) emits concentric (electromagnetic) waves. The segment of the spherical wavefront collected by the objective is then converted by the microscope into a segment of a spherical wavefront converging onto the corresponding image point. To facilitate subsequent derivations, we assume that the distance between the sample point and the objective lens is large enough that the spherical wavefront incident on the objective can be considered a superposition of planar wavefront segments traveling at different propagation angles θ with respect to the optical axis (Fraunhofer diffraction limit). Correspondingly, the transformed spherical wavefront in image space is also considered to be a superposition of planar wavefront segments traveling at angles θ′ with respect to the optical axis.

FIG. 9:

The optical microscope, i.e., imaging system, is a wavefront transforming system converting the outgoing spherical wavefront of a point emitter in sample space (left) into a concentric spherical wavefront in image space (right) converging onto an image point.

We can now obtain a relation between the angle θ and the corresponding angle θ′ of a planar wavefront segment within the sample and image spaces, respectively; see Fig. 1. We begin by assuming that the point source is shifted laterally away from the optical axis by a distance y; see Fig. 10. Considering a perfect imaging system, the spherical wavefronts from the shifted point source, shown in green, will be converted into spherical wavefronts converging onto a point shifted a distance y′ away from the optical axis in the image space, where the relation between y and y′ is given by y′ = ℳy. Here, ℳ denotes the microscope's magnification.

FIG. 10:

The phase relation between planar wavefront segments propagating along the same angle θ but emanating from two different point sources, where one point source is on the optical axis (red) and the other is laterally shifted by a distance y (green). The image point (point of convergence of the spherical wavefront segment) corresponding to the shifted point source is translated by a distance y′ away from the optical axis. The ratio between y′ and y is the magnification ℳ. Optical path length differences between wavefront segments traveling along angles θ or θ′, respectively, are shown as thin bluish lines at the emitters' positions and oriented perpendicular to the propagation directions θ and θ′.

Now, consider two planar wavefront segments traveling at angle θ, one emanating from the source located at y and one from the source on the optical axis. There is a phase difference between these two planar wavefront segments proportional to n y sin θ. The microscope transforms these planar wavefront patches into two planar wavefront patches traveling along angle θ′ in the image space with a phase difference of y′ sin θ′ between the patches (assuming both here and later that the refractive index of the image space is always that of air, i.e., ≈ 1.0). Now, to attain perfect focus, all planar wavefront patches originating from a point source and converging at a corresponding focal point in the image space must have the same phase at the focal point (maximum constructive interference). In other words, the phases of all planar wave components constituting the spherical wavefront must be the same at the image point where the spherical wavefront converges. We thus find n y sin θ = y′ sin θ′. When considering that the ratio between y′ and y is the image magnification ℳ, this yields

n sin θ = ℳ sin θ′,  (38)

which is the so-called Abbe sine condition [164, 165] for a perfect aplanatic imaging system (i.e., emission from a point at lateral distance y in the focal plane in sample space is converted into a perfect spherical wavefront segment converging onto an image point at position y′ = ℳy in the image plane).

Invoking similar arguments, we can derive the relation between θ and θ′ required for the perfect imaging of point sources along the optical axis into corresponding image points in image space. This situation is illustrated in Fig. 11, where we again compare the phase differences between: 1) wavefronts from the point source in the focal plane and the axially shifted point source; and 2) the corresponding wavefronts converging onto the image points. As such, we now find the following relation between θ and θ′

n (cos θ − 1) = ℳ_z (cos θ′ − 1),  (39)

where ℳ_z denotes the axial magnification [166–168]. As can be seen, it is impossible for both the Abbe sine condition and Eq. 39 to be simultaneously satisfied. This shows that an optical system which perfectly images points from the focal plane onto the conjugate image plane can do so only for these two specific planes and exhibits aberrations, i.e., deviations of wavefronts from spherical shape, away from the focal plane. Interestingly, for small values of θ, we can expand Eq. 39 into a first order Taylor series, i.e., n θ²/2 ≈ ℳ_z θ′²/2, which can simultaneously be satisfied with Abbe's sine condition if

n sin²(θ/2) ≈ ℳ_z sin²(θ′/2)  (40)

and ℳ_z ≈ ℳ²/n. Eq. 40 is called Herschel's condition [167–171]. This shows that a system satisfying Abbe's sine condition (an aplanatic imaging system) has an axial magnification of roughly the square of the lateral magnification divided by the sample medium's refractive index.
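The paraxial reconciliation of the two conditions can be checked numerically; the refractive index, magnification, and angles below are arbitrary illustrative values:

```python
import math

n, M = 1.33, 100.0         # sample refractive index and lateral magnification (illustrative)
Mz = M**2 / n              # axial magnification of an aplanatic system, Mz ~ M^2 / n

for theta in (0.01, 0.05, 0.1):                    # small sample-space angles (rad)
    theta_p = math.asin(n * math.sin(theta) / M)   # Abbe: n sin(theta) = M sin(theta')
    lhs = n * (math.cos(theta) - 1)                # left side of Eq. 39
    rhs = Mz * (math.cos(theta_p) - 1)             # right side of Eq. 39 with Mz = M^2/n
    # For small angles the two sides agree, so Herschel's condition holds approximately
    print(theta, lhs, rhs)
```

For θ up to ~0.1 rad the two sides of Eq. 39 agree to better than a percent, while for large collection angles they diverge, as the text argues.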

FIG. 11:

Phase relation between planar wavefront segments propagating along the same angle θ but emanating from two different point sources along the optical axis. Similar to Fig. 10, optical path differences (phase differences) between wavefront segments traveling along angles θ or θ′, respectively, are shown as blue rectangles.

B. Electromagnetic field of image formation

In this section, we consider a point emitter with incoherent emission in sample space and proceed to derive a relation between the corresponding electromagnetic fields in the sample and image spaces. Specifically, we operate in the Fourier domain to derive electric and magnetic field components in image space in terms of the emissive electric fields in sample space. To begin, we write the emitter’s electric field plane wave (Fourier) representation in sample space

E(r) = ∫_0^Θ dθ sin θ ∫_0^{2π} dϕ E_0(θ, ϕ) exp(i k·r),  (41)

where r is the position vector in sample space with respect to the objective focal point in sample space; see Fig. 1. Moreover, E0(θ,ϕ) is the electric field amplitude for the plane wave traveling along wave vector k with length |k|=2πn/λ and direction kˆ=(cosϕsinθ,sinϕsinθ,cosθ) (a hat above a vector always designates a unit vector with components (x,y,z) in Cartesian coordinates); see Fig. 12. Furthermore, the angular integration extends over the whole cone of light with angle Θ detected by the objective (recalling that nsinΘ is the objective’s numerical aperture; see Fig. 2).

FIG. 12:

Geometry of propagation of a narrow section of the wavefront from the emitter to the image plane.

In Fig. 12, considering the plane on which both the optical axis (z-axis in Fig. 12) and k lie, it is convenient to split the electric field amplitude E_0(θ, ϕ) into two orthogonal polarization components, parallel and perpendicular to this plane, E_0 = E_{0,∥}(θ, ϕ) ê_∥ + E_{0,⊥}(θ, ϕ) ê_⊥, where E_{0,∥} and E_{0,⊥} are the corresponding electric field amplitudes along the two polarization orientations, and the corresponding unit vectors are denoted by ê_∥ and ê_⊥. These two unit vectors together with the unit vector k̂ form an orthonormal set, given as follows in Cartesian coordinates

k̂ = (cos ϕ sin θ, sin ϕ sin θ, cos θ),
ê_⊥ = (−sin ϕ, cos ϕ, 0),
ê_∥ = ê_⊥ × k̂ = (cos ϕ cos θ, sin ϕ cos θ, −sin θ).  (42)

This representation immediately allows us to write down the magnetic field in sample space. We do so by recalling that for a plane wave with wave vector k and electric field amplitude E_0, the magnetic field amplitude is B_0 = n k̂ × E_0 [172]. Thus, the magnetic field amplitude in sample space reads B_0 = n(−E_{0,⊥}(θ, ϕ) ê_∥ + E_{0,∥}(θ, ϕ) ê_⊥).

The microscope’s optics now converts each plane wave component of Eq. 41 into a corresponding plane wave component E′_0(θ′, ϕ) exp(i k′·r′) in the image space; see right panel of Fig. 12. Here, r′ is centered at the focus of the tube lens (see Fig. 1), the angle ϕ remains the same, and the propagation angles θ and θ′ are connected via Abbe’s sine condition given by Eq. 38. As before, we conveniently split the electric field amplitude into two principal polarization directions, E′_0 = E′_{0,∥}(θ′, ϕ) ê′_∥ + E′_{0,⊥}(θ′, ϕ) ê′_⊥, where the set of unit vectors in the image space is obtained by substituting θ′ for θ in Eq. 42. Moreover, we note that ê′_⊥ = ê_⊥ due to its independence of θ. Now, the corresponding magnetic field amplitude can be obtained as B′_0 = −E′_{0,⊥}(θ′, ϕ) ê′_∥ + E′_{0,∥}(θ′, ϕ) ê′_⊥, assuming a refractive index in image space of unity.

We now relate the electric field amplitudes in sample and image spaces by considering the conservation of energy flux density along the optical axis for every plane wave component absent attenuation (attenuation can be considered as a form of aberration discussed in Sec. III F). This flux density is given by the z-component of the time-averaged Poynting vector P [172] which reads

P_z = (c/8π) ê_z · (E_0 × B_0^*) = (c/8π) ê_z · (E′_0 × B′_0^*),  (43)

where a star denotes complex conjugation. For B_0 = n k̂ × E_0 in sample space and B′_0 = k̂′ × E′_0 in image space, we obtain n|E_0|² cos θ = |E′_0|² cos θ′, from which the electric field amplitudes in image and sample spaces are related

E′_0 = √(n cos θ / cos θ′) E_0.  (44)

Furthermore, by combining Abbe’s sine condition n sin θ = ℳ sin θ′, Eq. 38, and its differential n cos θ dθ = ℳ cos θ′ dθ′, we have

sin θ dθ = (ℳ²/n²) (cos θ′ / cos θ) sin θ′ dθ′.  (45)

Substituting the above into the electric field’s plane wave representation, Eq. 41, and leveraging Eq. 44, we arrive at the following expression for the image space electric field plane wave representation

E′(r′) = (ℳ²/n^{3/2}) ∫_0^{Θ′} dθ′ sin θ′ √(cos θ′ / cos θ) ∫_0^{2π} dϕ (E_{0,∥} ê′_∥ + E_{0,⊥} ê′_⊥) exp(i k′·r′),  (46)

where the maximum integration angle, derived from Abbe’s sine condition relating Θ and Θ′, is now Θ′ = arcsin(n sin Θ / ℳ) = arcsin(NA/ℳ). Similarly, for the magnetic field, we find

B′(r′) = (ℳ²/√n) ∫_0^{Θ′} dθ′ sin θ′ √(cos θ′ / cos θ) ∫_0^{2π} dϕ (−E_{0,⊥} ê′_∥ + E_{0,∥} ê′_⊥) exp(i k′·r′).  (47)

Recognizing that the above equations for both electric and magnetic field components are nothing other than Fourier representations (expansions into plane waves exp(i k′·r′)), we comment briefly on the frequency support, restricted to wave vectors with |k′| = (k′_x² + k′_y² + k′_z²)^{1/2} = 2π/λ, 0 ≤ θ′ ≤ Θ′, and 0 < ϕ ≤ 2π. This restriction is illustrated as a spherical cap of radius k′ = 2π/λ in the frequency domain; see left panels in Figs. 13–14. In other words, the Fourier amplitudes of the electric and magnetic fields are only non-zero on this spherical cap in Fourier space. To better see this, we rewrite Eq. 46 as

E′(r′) = ∫ [d³k′/(2π)³] Ẽ′(k′) exp(i k′·r′),  (48)

where a tilde over a variable denotes its Fourier representation hereafter. Now, assuming that the three-dimensional integration extends over the whole k′-space (Fourier space), the integration measure in spherical coordinates is d³k′ = k′² sin θ′ dk′ dθ′ dϕ, and the electric field Fourier amplitude (the integrand in Eq. 48) for angles 0 ≤ θ′ ≤ Θ′ is given by (all constant pre-factors omitted)

Ẽ′(k′) ∝ δ(k′ − 2π/λ) √(cos θ′ / cos θ) (E_{0,∥} ê′_∥ + E_{0,⊥} ê′_⊥),  (49)

while it is zero for angles θ′ > Θ′. Here, δ denotes Dirac’s delta function and guarantees that k′ = 2π/λ. The absolute value of the electric field in Eq. 49 is obtained as (see left panels in Figs. 13–14)

|Ẽ′(k′)| ∝ √(cos θ′ / cos θ) (E_{0,∥}² + E_{0,⊥}²)^{1/2} for k′ = 2π/λ and 0 ≤ θ′ ≤ Θ′, and |Ẽ′(k′)| = 0 otherwise.  (50)

A similar expression holds for the Fourier representation of the magnetic field upon replacing E_{0,∥} by −nE_{0,⊥} and E_{0,⊥} by nE_{0,∥}.

FIG. 13:

From electric/magnetic field to intensity. The two spherical caps in the left panel show the support of the Fourier representations of the electric and magnetic fields given by Eq. 49. The right panel represents the extent of the frequency support of the imaging OTF obtained by the convolution of the two caps in the left panel; see Eq. 55. The shape in the right panel is termed the butterfly-shape, and its missing cone in the middle highlights a wide-field microscope’s inability to collect sufficient axial frequencies and thus its lack of optical sectioning.

FIG. 14:

Visualization of the maximum axial and lateral extents of the Fourier representation of the electric field and the imaging OTF. (a) A cross-section of the Fourier representation of the electric field (cap) at k′_y = 0. The cross-section is an arc with radius k′ = 2π/λ and 0 ≤ θ′ < Θ′ (see Eq. 50). The maximum extents of the cap along the lateral and axial directions are, respectively, given by Δk′_∥ = (2π/λ) sin Θ′ and Δk′_z = (2π/λ)(1 − cos Θ′). (b) Here we show the convolution of the caps associated with the electric and magnetic fields along the largest axial and lateral extents beyond which the convolution is zero.

C. Point spread function

Now, we are in a position to calculate the PSF, denoted by U(r′). The PSF is, by its very nature, a probability density over a photon reaching the point r′ on the image plane, i.e., the detector, where r′ is a random variable. That is, the PSF plays the role of a normalized spatial distribution of light intensity recorded by a detector at the image plane for a point-like emitter located in the sample space. From this fundamental probabilistic property of light follow most statistical concepts inherent to the modeling of fluorescence microscopy.

The PSF itself, once more, follows from the Poynting vector’s z-component (see Eq. 43)

U(r′) = (c/8π) ê_z · (E′(r′) × B′^*(r′)) = (c/8π) (E′_x(r′) B′^*_y(r′) − E′_y(r′) B′^*_x(r′)).  (51)

Knowing the PSF, the image model Λ(r′), i.e., the spatial distribution of expected photon intensity or photon count in image space for an arbitrary sample, follows from the convolution

Λ(r′) = I ∫ d³r_0 U(r′ − ℳ r_0) S(r_0),  (52)

where Sr0 is the so-called sample function describing the fluorophore distribution. We assume the PSF, U, to be normalized to unity and I to reflect the total photon emission per fluorophore.

For an aplanatic imaging system, which is shift-invariant (see Sec. III F), Eq. 52 is exact for all emitters on the focal plane, i.e., for z_0 = 0. However, it is an approximation for emitters outside the focal plane, as follows from the discussion of the Abbe and Herschel conditions of Sec. III A.

Using the electric field of Eq. 49, the lateral components of the electric and magnetic fields in the Fourier domain are explicitly given by (for θ′ ≤ Θ′)

Ẽ′_x ∝ δ(k′ − 2π/λ) √(cos θ′ / cos θ) (−E_{0,⊥} sin ϕ + E_{0,∥} cos θ′ cos ϕ),
Ẽ′_y ∝ δ(k′ − 2π/λ) √(cos θ′ / cos θ) (E_{0,⊥} cos ϕ + E_{0,∥} cos θ′ sin ϕ),  (53)

and

B̃′_x ∝ δ(k′ − 2π/λ) √(cos θ′ / cos θ) (−E_{0,⊥} cos θ′ cos ϕ − E_{0,∥} sin ϕ),
B̃′_y ∝ δ(k′ − 2π/λ) √(cos θ′ / cos θ) (−E_{0,⊥} cos θ′ sin ϕ + E_{0,∥} cos ϕ),  (54)

where we also used the Cartesian representations of ê′_∥ and ê′_⊥ analogous to Eq. 42. Moreover, we recall that the refractive index in image space is assumed to be 1 (air), so no additional prefactor appears in the corresponding magnetic field expression.

Now, with the Fourier representations of the electric and magnetic fields at hand, we derive the imaging OTF and then the PSF. To start, we note that the PSF is given by a product of electric and magnetic field components in the spatial domain; see Eq. 51. Within the Fourier domain, we therefore use the well-known convolution theorem: the Fourier representation of the product of two functions is proportional to the convolution of their Fourier representations. As such, the imaging OTF is given by

Ũ(k) ∝ Ẽ′_x(k) ⊛ B̃′^*_y(k) − Ẽ′_y(k) ⊛ B̃′^*_x(k) = ∫ d³k′ [Ẽ′_x(k′ − k) B̃′^*_y(k′) − Ẽ′_y(k′ − k) B̃′^*_x(k′)],  (55)

where ⊛ denotes convolution. The resulting OTF is then related to the PSF by a Fourier transform

U(r′) = ∫ [d³k/(2π)³] Ũ(k) exp(i k·r′).  (56)

The convolution of Eq. 55 is visualized in Fig. 13. The two spherical caps (note that the support is only the area on the surface) shown in the left panel represent regions where the Fourier amplitudes of the electric and magnetic fields are non-zero (see Eq. 50). The convolution of these caps results in the butterfly-shaped three-dimensional figure shown in the right panel, where the surface shown represents the maximum extent of the OTF’s frequency support. That is, the OTF amplitude vanishes for all frequencies outside this region and takes non-zero values only for frequencies within this three-dimensional shape, also termed the microscope’s band-pass.
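The product-to-convolution logic above can be illustrated in a simplified scalar picture (a circular pupil in 2D, not the full vectorial treatment): the intensity PSF is the squared magnitude of the Fourier transform of the pupil, and the OTF, as the pupil's autocorrelation, has twice the pupil's frequency support; the grid and pupil radius are arbitrary:

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

pupil_radius = 20.0                   # field cutoff in grid units (illustrative)
pupil = (R <= pupil_radius).astype(float)

# Scalar picture: field PSF ~ FT(pupil); intensity PSF = |field|^2
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field) ** 2
psf /= psf.sum()                      # PSF is a probability density over the detector

# FT(PSF) = autocorrelation of the pupil: the support radius doubles
otf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
cutoff = R[otf > 1e-6 * otf.max()].max()
print(cutoff / pupil_radius)          # close to 2
```

The factor-of-two extension of the OTF support relative to the field support is exactly the doubling of the lateral band-pass discussed below.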

From Fig. 14a, one finds that the lateral and axial extents of the Fourier representations of the electric/magnetic fields are Δk′_∥ = 2π sin Θ′/λ and Δk′_z = 2π(1 − cos Θ′)/λ, respectively. As the OTF is computed from the auto-convolution of the cap associated with the electric/magnetic fields, the lateral and axial sizes of the OTF, respectively, are then found to be 4π sin Θ′/λ and 2π(1 − cos Θ′)/λ; see Fig. 14b.
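The cap and OTF extents quoted here can be verified by sampling the spherical cap and reading off coordinate ranges (i.e., the largest pairwise differences along each axis); wavelength and angle are illustrative:

```python
import numpy as np

lam = 0.5                   # wavelength in image space (micron, illustrative)
Theta_p = np.deg2rad(4.0)   # image-space half-angle Theta' (small, post-magnification)
k = 2 * np.pi / lam

# Sample the spherical cap 0 <= theta' <= Theta', 0 <= phi < 2*pi
th = np.linspace(0.0, Theta_p, 60)
ph = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
TH, PH = np.meshgrid(th, ph)
kx = k * np.sin(TH) * np.cos(PH)
kz = k * np.cos(TH)

# Largest coordinate differences across the cap give the OTF support sizes:
lateral_extent = kx.max() - kx.min()   # -> 4*pi*sin(Theta')/lam
axial_extent = kz.max() - kz.min()     # -> 2*pi*(1 - cos(Theta'))/lam
print(lateral_extent, axial_extent)
```

For small Θ′ the axial extent is far smaller than the lateral one, which is the quantitative root of the poor axial resolution discussed next.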

Put differently, the microscope does not transmit lateral spatial frequencies beyond k′_∥ > 4π sin Θ′/λ or axial spatial frequencies beyond k′_z > 2π(1 − cos Θ′)/λ, where k′_∥ = (k′_x² + k′_y²)^{1/2} is the amplitude of k′’s projection onto the k′_x k′_y-plane. Thus, the three-dimensional intensity distribution in image space does not contain lateral spatial modulations smaller than 2π/max(k′_∥) = λ/(2 sin Θ′). Using Abbe’s sine condition, this corresponds to spatial modulations of ℳλ/(2n sin Θ) in image space, which translates into the smallest discernible spatial variation λ/(2n sin Θ) in the sample space when taking into account that the lateral magnification is ℳ. Therefore, we recover Abbe’s resolution limit, Eq. 14, as 2π over the largest lateral spatial frequency transmitted by the microscope from sample to image space

r_min^l = 2π / k_max^l,  (57)

where rminl and kmaxl, respectively, denote the resolution and maximum extent of the OTF along the lth direction.

While Eq. 57 provides a measure of resolution for lens-based imaging systems whose OTF magnitude consists of a single lobe monotonically decaying to zero, e.g., the lateral OTF magnitude of a wide-field microscope, it should be used with care for more complicated OTFs, e.g., the axial direction of a wide-field microscope (see Fig. 13 and Sec. III E), SIM (see Sec. IV D), some types of light-sheet microscopes with multiple gaps in their OTF magnitudes (see Sec. IV E), and others.

As such, regarding the wide-field microscope’s axial resolution, the situation is more complicated due to the OTF’s shape in the axial direction. To be more concrete, in the right panel of Fig. 13, one can see that the butterfly-shaped imaging OTF does not support axial frequencies within a cone defined by k_z/k_∥ > tan Θ′. This is often called the OTF’s missing cone. One effect of this missing cone is that a wide-field microscope does not provide optical sectioning (z-sectioning). That is, for k_∥ → 0 a wide-field microscope collects limited axial spatial frequencies. Put differently, the PSF pattern formed by light collected from a fluorophore using a wide-field setup varies slowly with the fluorophore’s axial position.

Yet, as can also be seen from Fig. 13, axial frequencies k_z have non-zero amplitudes for 0 < k_∥ < max(k_∥) = 4π sin Θ′/λ. The maximal value k_z = 2π(1 − cos Θ′)/λ contained in the OTF shows that the smallest possible spatial modulation of the PSF along the optical axis is approximately λ/(1 − cos Θ′). For paraxial optics, i.e., for small values of Θ and Θ′, where we have approximately an axial magnification ℳ_z = ℳ²/n (see Sec. III A), and with the approximation 1 − cos Θ′ ≈ Θ′²/2 ≈ n²Θ²/(2ℳ²), this translates into a smallest axial modulation of 2λ/(nΘ²) ≈ 2nλ/(NA)² of the sample function transmitted through the microscope. This is in accordance with our previous estimate of the axial resolution limit in Eq. 15. Problems associated with the OTF’s missing cone, i.e., missing z-sectioning, are considered in Sec. IV where we discuss confocal microscopy alongside other modalities.
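A quick numeric check of this paraxial estimate (the refractive index, magnification, wavelength, and collection angle below are illustrative):

```python
import math

n, M, lam = 1.0, 100.0, 0.5   # refractive index, magnification, wavelength (illustrative)
Theta = 0.3                   # sample-space half-angle (rad), small enough for paraxial formulas
NA = n * math.sin(Theta)

Theta_p = math.asin(n * math.sin(Theta) / M)   # image-space half-angle via Abbe
Mz = M**2 / n                                  # paraxial axial magnification

# Smallest axial modulation of the PSF in image space, mapped back to sample
# space via the axial magnification, versus the closed-form 2*n*lam/NA^2:
axial_sample = (lam / (1 - math.cos(Theta_p))) / Mz
estimate = 2 * n * lam / NA**2
print(axial_sample, estimate)
```

The two numbers agree to within the paraxial approximation error, confirming that dividing the image-space axial modulation by ℳ_z reproduces the 2nλ/NA² estimate.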

D. Electromagnetic field emission of an oscillating electric dipole

In the previous section, we derived integral expressions for the OTF and PSF of a wide-field microscope; e.g., see Eqs. 51 and 55–56. Here, we evaluate these integrals and obtain a wide-field microscope’s exact OTF and PSF using E_0(θ, ϕ) = E_{0,∥} ê_∥ + E_{0,⊥} ê_⊥ for a fluorescent (point) emitter; see Eq. 49. We do so by noting that the electromagnetic emission of the fluorescent emitters (e.g., organic dyes, proteins, quantum dots) used in fluorescence microscopy is often well approximated as that of an oscillating electric dipole. Important exceptions, to which this treatment can be generalized, include some emission bands of rare earth emitters, e.g., europium complexes, exhibiting magnetic dipole or electric quadrupole properties [173, 174].

To compute the oscillating electric dipole’s electromagnetic field, we start from a dipole moment with amplitude p and oscillation frequency ω located at r_d = (x_d, y_d, z_d) in the sample medium with refractive index n_d. Moreover, considering that all fields oscillate as exp(−iωt), like the dipole moment itself, we focus on the amplitudes of the electric and magnetic fields. In this case, Maxwell’s equations read

∇ × E = (iω/c) B,  ∇ × B = −(iωϵ_d/c) E + (4π/c) j,  (58)

where ϵ_d = n_d² is the dielectric constant of the sample solution in which the dipole is embedded and j = −iωp δ(r − r_d) is the electric current generated by the oscillating dipole. Thus, we find ∇ × ∇ × E_d − k_d² E_d = 4π k_0² p δ(r − r_d) for the electric field E_d of the dipole emitter, where k_0 = ω/c and k_d = n_d k_0. Using ∇ × ∇ × E_d = ∇(∇·E_d) − ∇²E_d [172] and passing to Fourier space yields, for the Fourier amplitude Ẽ_d,

(k² − k_d²) Ẽ_d − k (k·Ẽ_d) = 4π k_0² p exp(−i k·r_d),  (59)

where k is the Fourier space coordinate. Multiplying Eq. 59 by k yields k·Ẽ_d = −(4π/ϵ_d)(k·p) exp(−i k·r_d), which we substitute back into Eq. 59 to arrive at

Ẽ_d = [4π exp(−i k·r_d) / (ϵ_d (k² − k_d²))] (k_d² p − k (k·p)).  (60)

In real space, the above reads

E_d = ∫ [d³k / (2π² ϵ_d)] (k_d² p − k (k·p)) exp(i k·(r − r_d)) / (k² − k_d²),  (61)

where r − r_d is the vector connecting the electric dipole’s location, r_d, to the observation point, r.

To obtain an expression well suited to modeling the emission of a dipole in a planar system (e.g., above a flat coverslide), we perform the integration over the k_z-coordinate in the above expression using Cauchy’s residue theorem. To do so, we close the integration path along the real axis with a semi-circle at infinity in the complex k_z-plane, as shown in Fig. 15. To make sure that the exponent vanishes when extending the contour into the complex plane, one has to close the contour over the positive imaginary half plane when z − z_d > 0 and over the negative imaginary half plane when z − z_d < 0. Along the real axis, the integrand has two poles at positions ±w_d = ±√(k_d² − q²), where q² = k_x² + k_y². However, the integration’s result must contain only outgoing plane waves (Sommerfeld radiation condition [175]), achieved by deforming the integration contour around the two poles as shown in Fig. 15. Subsequently applying Cauchy’s residue theorem yields

$\mathbf{E}_d = \frac{i}{2\pi\epsilon_d}\int \frac{d^2q}{w_d}\left[k_d^2\,\mathbf{p} - \mathbf{k}_d(\mathbf{k}_d\cdot\mathbf{p})\right]\exp\!\left[i\mathbf{q}\cdot(\boldsymbol{\rho}-\boldsymbol{\rho}_d) + i w_d|z - z_d|\right]$, (62)

where we used the abbreviation $\mathbf{k}_d = (\mathbf{q}, w_d)$, with $w_d = \sqrt{k_d^2 - q^2}$ as the pole location. Here, $\boldsymbol{\rho}$ and $\mathbf{q}$, respectively, collect the lateral coordinates in real and Fourier space. Further, $(\boldsymbol{\rho}_d, z_d)$ denotes the dipole's spatial coordinates. The two-dimensional integration over $\mathbf{q}$ extends over an infinite (Fourier) plane oriented perpendicular to the optical axis. Eq. 62 is the plane wave representation of the electric field of a freely oscillating dipole, also called the Weyl representation [176–178]. As we will see, the Weyl representation is particularly suited to modeling the imaging of an emitter through a microscope.

FIG. 15:

Contour for the integration over $k_z$ of Eq. 61 in the complex $k_z$-plane. For positive values of $z - z_d$, the contour has to be closed, at infinity, over the positive $\mathrm{Im}(k_z)$ half-space, while for negative values of $z - z_d$ it is closed over the negative half-space. Along the real axis, the integrand has two poles at $\pm w_d = \pm\sqrt{k_d^2 - q^2}$.

Next, we consider the situation where the refractive index $n_d$ of the medium in which the emitting dipole is embedded differs from the refractive index $n$ of the immersion medium of the microscope's objective (e.g., imaging an emitter in water with an oil immersion objective). This situation is schematically shown in Fig. 16. We use Eq. 62 to model the propagation of the electric field through the interface dividing the sample (dipole) and immersion media, i.e., the coverslide surface. To do so, it is convenient to recast the integrand in Eq. 62 as

$k_d^2\,\mathbf{p} - \mathbf{k}_d(\mathbf{k}_d\cdot\mathbf{p}) = k_d^2\left[(\mathbf{p}\cdot\hat{\mathbf{e}}_s)\,\hat{\mathbf{e}}_s + (\mathbf{p}\cdot\hat{\mathbf{e}}_d)\,\hat{\mathbf{e}}_d\right]$, (63)

where we used $\mathbf{p} = (\mathbf{p}\cdot\hat{\mathbf{e}}_s)\hat{\mathbf{e}}_s + (\mathbf{p}\cdot\hat{\mathbf{e}}_d)\hat{\mathbf{e}}_d + (\mathbf{p}\cdot\hat{\mathbf{k}}_d)\hat{\mathbf{k}}_d$, since the unit vectors $\hat{\mathbf{e}}_s$ (s-polarization), $\hat{\mathbf{e}}_d$ (p-polarization) and $\hat{\mathbf{k}}_d$ form an orthonormal set similar to Eq. 42. As such, the problem reduces to considering the propagation of s- and p-polarized plane waves through a planar interface.

FIG. 16:

Angular distribution of the electric field generated by a single dipole emitter. Here, the gray rectangle represents the coverslide (commonly assumed to coincide with the $z=0$ plane), which is the interface between the electric dipole's embedding medium (above the coverslide) and the immersion medium below the coverslide. The red two-headed arrow depicts the dipole; $\alpha$ and $\beta$ are, respectively, the polar and inclination (azimuthal) angles describing the orientation of the dipole; $\phi$ is the polar angle of the wave vector; $\theta_d$ and $\theta$ are the azimuthal angles of the wave vector above and below the interface.

We now use Eqs. 62–63 to write the electric field, after it crosses the interface between both media and travels a distance through the immersion medium (with refractive index $n$) before arriving in front of the objective lens, in terms of its p- and s-polarized components

$\mathbf{E}_d = \frac{i k_0^2}{2\pi}\int \frac{d^2q}{w}\left[t_p\,(\mathbf{p}\cdot\hat{\mathbf{e}}_d)\,\hat{\mathbf{e}} + t_s\,(\mathbf{p}\cdot\hat{\mathbf{e}}_s)\,\hat{\mathbf{e}}_s\right]\exp\!\left[i\mathbf{q}\cdot(\boldsymbol{\rho}-\boldsymbol{\rho}_d) - i w_d z_d + i w(z - f)\right]$, (64)

where $t_p$ and $t_s$ are the Fresnel transmission coefficients for p- and s-polarized light, $(\boldsymbol{\rho}, z)$ represents the observation point coordinates within the immersion medium, and the focal distance $f$ is the location of the focal plane with respect to the interface $z = 0$ coinciding with the coverslide surface separating the sample from the immersion medium; see Fig. 16. Here, the axial component $w$ of the wave vector $\mathbf{k}$ in the immersion medium is given by $w = \sqrt{k^2 - q^2} = \sqrt{n^2 k_0^2 - q^2}$. Moreover, the unit vector $\hat{\mathbf{e}}$ is similar to $\hat{\mathbf{e}}_d$ but formed from the wave vector $(\mathbf{q}, \sqrt{n^2 k_0^2 - q^2})$ instead of $(\mathbf{q}, \sqrt{n_d^2 k_0^2 - q^2})$.

The formulation above can be readily generalized to an arbitrary number of interfaces. For instance, if an emitter is imaged through a stack of several layers characterized by different refractive indices, then the single interface's Fresnel transmission coefficients in Eq. 64 must simply be replaced by those of the stacked structure.

Finally, considering Fig. 16, we have $w = n k_0\cos\theta$ and $\mathbf{q} = n k_0(\sin\theta\cos\phi, \sin\theta\sin\phi, 0)$, leading to $d^2q/w = dq_x\,dq_y/w = n k_0\sin\theta\, d\theta\, d\phi$ in spherical coordinates. Substituting this result into Eq. 64 and comparing with Eq. 41 yields the following electric field amplitude $\mathbf{E}_0(\theta, \phi)$ for a dipole emitter (up to a constant factor)

$\mathbf{E}_0 \propto \left[t_p\,(\mathbf{p}\cdot\hat{\mathbf{e}}_d)\,\hat{\mathbf{e}} + t_s\,(\mathbf{p}\cdot\hat{\mathbf{e}}_s)\,\hat{\mathbf{e}}_s\right]\exp\!\left(-i\mathbf{q}\cdot\boldsymbol{\rho}_d - i w_d z_d - i w f\right)$, (65)

or more explicitly

$\begin{pmatrix} E_{0,s} \\ E_{0,p} \end{pmatrix} = |\mathbf{p}|\,\exp\!\left(-i\mathbf{q}\cdot\boldsymbol{\rho}_d - i w_d z_d - i w f\right)\begin{pmatrix} -t_s\,\sin\beta\,\sin(\phi - \alpha) \\ t_p\left[\sin\beta\,\cos\theta_d\,\cos(\phi - \alpha) - \cos\beta\,\sin\theta_d\right] \end{pmatrix}$, (66)

where $\alpha$ and $\beta$ are the dipole orientation angles described in Fig. 16. By inserting these expressions into Eqs. 46, 47 and 51, one can compute the wide-field image PSF of a dipole emitter with arbitrary position and orientation. When doing so, it is convenient to present the results in terms of the lateral sample coordinates $\boldsymbol{\rho}' = \boldsymbol{\rho}/M$, with $M$ the magnification, instead of the image space coordinates $\boldsymbol{\rho}$, and as a function of the axial position $z_d$ (with respect to the coverslide) of the emitter. This notation will be applied to all PSF visualizations throughout this review. Thus, in what follows, when writing the PSF, $U(\mathbf{r})$, as a function of $\mathbf{r}$, it is silently assumed that the lateral coordinates are those conjugate to the image coordinates, i.e., $x' = x/M$ and $y' = y/M$, and that $z$ refers to the axial position $z_d$ of the emitter.

As a first example of a PSF visualization, Fig. 17 shows three-dimensional representations of a dipole emitter's PSF along the optical axis for a dipole oriented along the x-axis (left panel), the z-axis (middle panel), and for a rapidly rotating emitter (right panel), where the isotropic PSF, $U_{\mathrm{iso}}(\mathbf{r})$, is given by an average of the PSFs calculated for dipole orientations along the x, y and z axes [179]

$U_{\mathrm{iso}}(\mathbf{r}) = \frac{1}{3}\left[U_x(\mathbf{r}) + U_y(\mathbf{r}) + U_z(\mathbf{r})\right]$. (67)

Accounting for effects of emitter orientation is of key interest in SMLM (Sec. V B 2), as fixed orientations can lead to systematic mislocalization of emitters in space [180–183]. That being said, fluorescent labels are often coupled to structures with a sufficiently flexible linker, allowing us to approximate labels as nearly freely rotating.

FIG. 17:

The PSF of a wide-field microscope, projected into sample space. Shown are plots of the $1/e$, $1/e^2$ and $1/e^3$ iso-surfaces of the maximum PSF value. The lateral coordinates refer to back-projected sample space coordinates $(x', y') = (x, y)/M$, whereas the axial coordinate refers to the emitter's axial position $z_d$. We retain this PSF representation throughout the review. The individual panels are described in the main body. Calculations were performed for an NA = 1.2 water immersion objective with $n = 1.33$ and emission wavelength $\lambda = 550$ nm.

As an example, Fig. 18 shows images of single emitters with different axial positions and inclination angles towards the optical axis. As can be seen, intermediate values of the inclination angle $\beta$ (see Fig. 16) can lead to considerable shifts in the apparent center of mass of an emitter's image, an effect especially significant for emitters away from the focal plane. The situation worsens when working with oil-immersion objectives, whose Total Internal Reflection (TIR) critical angle is larger than that of water immersion objectives, allowing collection of fluorescent light at larger incidence angles. In this case, even the apparent positions of in-focus emitters depend on emitter orientation. While this effect hinders the localization of rigid single molecules under the assumption of a symmetric PSF, it can be exploited to learn the three-dimensional orientations of molecules [140, 184–186].

FIG. 18:

Effect of orientation on the emitter's image. Top row: images of electric dipole emitters of fixed strength but different orientations in the xz-plane, where $\beta$ is the inclination angle; see Fig. 16. The emitter is situated 400 nm below the focal plane (NA = 1.2, $n = 1.33$). Middle row: same as the top row, but for an emitter situated in the focal plane. Bottom row: same again, but for an emitter situated 400 nm above the focal plane. The scale bar is 0.5 μm.

Finally, we briefly consider refractive index mismatch resulting in PSF distortion; see Sec. III F. As an example, Fig. 19 shows this effect for a slight refractive index mismatch of $\Delta n = 0.05$, again for a water immersion objective with NA = 1.2. We further assume that the objective lens is corrected for the light refraction introduced by the coverslide. As can be seen, this mismatch primarily results in an axial stretching of the PSF and an axial shift of its center position towards larger z-values with respect to the actual position of the emitter. However, the lateral PSF cross-section at the axial location of its maximum does not change significantly, meaning that the refractive index mismatch does not affect the lateral position of the focused image of an emitter, but does result in its mislocalization along the optical axis.

FIG. 19:

Effect of refractive index mismatch on the PSF. PSF of a rapidly rotating electric dipole emitter (isotropic emitter) positioned at various distances from the coverslide surface ($z = 0$). Calculations were done for an NA = 1.2 objective corrected for an immersion medium with $n = 1.33$, while the solution above the coverslide has $n = 1.38$ (i.e., a refractive index mismatch of $\Delta n = 0.05$). The bottom of each box shows a density plot of the PSF's cross-section through its maximum value.

E. Scalar approximation of the PSF

In the previous section, we derived the exact electric field of an emitter, i.e., an oscillating dipole (see Eq. 66), and used it to compute the PSF. However, these exact expressions are computationally expensive to manipulate. As such, here we provide a simple approximation to the emitter's electric field and the resulting PSF.

Along these lines, for many practical applications, we assume an isotropic emitter, i.e., one with uniform emission amplitude in all directions. In such cases, we can ignore the vectorial nature of the electric (and magnetic) fields, resulting in an approximate scalar model. To derive such a scalar approximation, we start from Eq. 46 and replace the amplitude vector $E_{0,s}\hat{\mathbf{e}}_s + E_{0,p}\hat{\mathbf{e}}$ by a scalar constant. The expression for the now "scalar" electric (magnetic) field in the image plane generated by an isotropic emitter on the optical axis at position $z = z_d$ then simplifies to (up to a constant factor)

$E(\mathbf{r}) \propto \int_0^{\Theta'} d\theta'\,\sin\theta'\,\sqrt{\frac{\cos\theta}{\cos\theta'}}\int_0^{2\pi} d\phi\; e^{i\mathbf{q}'\cdot\boldsymbol{\rho} - i k'\cos\theta'\, z'} = \int_0^{\Theta'} d\theta'\,\sin\theta'\,\sqrt{\frac{\cos\theta}{\cos\theta'}}\int_0^{2\pi} d\phi\; e^{i q'\rho\cos\phi - i k'\cos\theta'\, z'} \propto \int_0^{\Theta} d(\sin\theta)\,\sin\theta\,\sqrt{\frac{\cos\theta}{\cos\theta'}}\; J_0(k\sin\theta\,\rho')\, e^{-i k\cos\theta\, z}$, (68)

where, in the second step, we chose $\boldsymbol{\rho}$ along the azimuthal reference axis so that $\mathbf{q}'\cdot\boldsymbol{\rho} = q'\rho\cos\phi$. In the last step, we performed the integral with respect to $\phi$ and used Abbe's sine condition $\sin\theta' = (n/M)\sin\theta$ together with its differential form (see Eq. 45), remembering $q' = k_0\sin\theta'$ and $q = k\sin\theta = n k_0\sin\theta$, and ignoring all prefactors of $n$ and $M$. Here, $J_m$ is the Bessel function of the first kind of order $m$ [187].

Further simplification is possible by replacing the square root factor with unity, valid for small values of $\theta$ and $\theta'$ (far-field limit). Eq. 68 therefore simplifies to

$E(\rho', z) \propto n^2\int_0^{\sin\Theta} d\eta\;\eta\, J_0(k\eta\rho')\, e^{-i k\sqrt{1-\eta^2}\, z}$, (69)

where $\eta = \sin\theta$. For the special case $z = 0$ (emitter in the focal plane), analytic integration then yields

$E(\rho') \propto \frac{\mathrm{NA}}{k_0\rho'}\, J_1(\mathrm{NA}\, k_0\,\rho')$, (70)

where we have used $k\sin\Theta = \mathrm{NA}\, k_0$. Here, $J_1$ is the Bessel function of the first kind of order one [187]. The PSF is then given by the absolute square of the "scalar" electric field. Therefore, for the 2D PSF of an in-focus isotropic emitter in the far-field limit, we find the well-known Airy pattern

$U(\rho') \propto \left[\frac{J_1(\mathrm{NA}\, k_0\,\rho')}{k_0\rho'}\right]^2$, (71)

where we have omitted a constant factor, and where we recall that $n k_0\sin\Theta = \mathrm{NA}\, k_0$ is the maximum lateral wave vector component transmitted by the microscope from the sample to the image plane; see Sec. I C.

In situations where the scalar approximation is suitable (e.g., 3D imaging with molecules more than a wavelength away from the coverslide), this approximate PSF provides a computationally lighter model, as calculating Eq. 71 requires a single integration (Fourier transform) while evaluating Eq. 67 requires three. To check the accuracy of this approximation, Fig. 20 shows a comparison of the PSF's line cross-section through its center, calculated using the full vectorial model of Secs. III C–III D, and the scalar approximation of Eq. 71. As can be seen, the scalar approximation shows negligible deviations from the exact model for the system considered (water immersion objective with NA = 1.2, emission wavelength 500 nm). In most cases, this approximation is sufficient for quantitative analysis of fluorescence microscopy data, e.g., fitting single molecule images (see Sec. V B 2), provided the molecules rotate rapidly.
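The chain from Eq. 69 to the Airy pattern of Eq. 71 can be checked numerically. Below is a minimal sketch (not from the paper; parameter values are illustrative): it evaluates the scalar diffraction integral of Eq. 69 by quadrature and compares its $z = 0$ modulus squared against the closed-form Airy profile.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

NA, n, lam = 1.2, 1.33, 550e-9       # illustrative: NA, immersion index, wavelength (m)
k0 = 2 * np.pi / lam                 # vacuum wave number
k = n * k0                           # wave number in the medium

def airy_psf(rho):
    """In-focus 2D PSF of Eq. 71, normalized to 1 at rho = 0."""
    x = NA * k0 * rho
    return 1.0 if x == 0 else (2 * j1(x) / x) ** 2

def scalar_field(rho, z):
    """Scalar diffraction integral of Eq. 69 (eta = sin(theta), upper limit NA/n)."""
    integrand_re = lambda e: e * j0(k * e * rho) * np.cos(k * np.sqrt(1 - e**2) * z)
    integrand_im = lambda e: -e * j0(k * e * rho) * np.sin(k * np.sqrt(1 - e**2) * z)
    return complex(quad(integrand_re, 0, NA / n)[0], quad(integrand_im, 0, NA / n)[0])
```

At $z = 0$ the quadrature reproduces the Airy profile, whose first dark ring sits at $\rho' = 3.8317/(\mathrm{NA}\,k_0) \approx 0.61\,\lambda/\mathrm{NA}$.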

FIG. 20:

Comparison between scalar and vector PSF calculations. Shown are cross-sections of the PSF along the x-axis in the focal plane. The red curve shows the result of the full vectorial PSF calculation for an electric dipole emitter with fixed x-axis orientation, the blue curve the same calculation for a rapidly rotating (isotropic) emitter, the green curve the result of Eq. 71, and the ochre curve the Gaussian approximation of Eq. 74. Insets show two three-dimensional iso-surface PSF plots: left, using the exact vector field calculation for an isotropic emitter; right, the scalar approximation. All calculations were performed for a water immersion objective with NA = 1.2.

The usefulness of the scalar approximation is further evident when considering a microscope's OTF. Comparing Eqs. 46 (also see Eq. 50) and 68, the frequency supports of the Fourier transforms for the vector and scalar representations of the electric field are identical, given by a spherical cap centered at $\mathbf{k} = 0$ with radius $2\pi/\lambda$ and half opening angle $\Theta$; see Figs. 13–14. Similar to the PSF visualization, it is convenient to show the OTF back-projected to sample space, easily done using Abbe's sine condition as $(k_x', k_y') = (n/M)(k_x, k_y)$ and the relation $k' = k/n$. Cross-sections of the corresponding electric (magnetic) field Fourier amplitudes are shown in the left two panels of Fig. 21 at $k_y = 0$. In the case of the vectorial model, one has two such cross-sections for each of the vector fields $\mathbf{E}$ and $\mathbf{B}$, one for each polarization component. Fig. 21 represents the scalar approximation, with a uniform field amplitude over the whole spherical cap; cf. Eq. 68. In both the exact vector field description and the scalar approximation, the PSF is found from products of the electric and magnetic fields, which translates in Fourier space to a convolution of the corresponding Fourier representations of these fields.

FIG. 21:

Scalar approximation of the OTF of a wide-field microscope. Calculations were done for an NA = 1.2 water immersion objective and an emission wavelength of 550 nm. The left panel shows the $k_x$–$k_z$ cross-section of the electric field amplitude in sample space, with a frequency support (frequencies with non-zero amplitude) in the shape of a spherical cap with radius $k = 2\pi n/\lambda$ and an opening half angle equal to the objective's maximum half angle $\Theta$. The middle panel shows the same distribution for the magnetic field. The right panel is the three-dimensional convolution of the left two panels, yielding the scalar approximation of the OTF amplitude. All panels show density plots of the decadic logarithm of the Fourier amplitude's absolute value (see color bar on the right hand side), normalized by the maximum absolute value of the corresponding amplitude. For all panels, the coordinate origin $(k_x = 0, k_z = 0)$ is at the center. Throughout this review, we use the same representation for all OTFs shown.

A cross-section of the OTF amplitude at $k_y = 0$ is visualized in the right panel of Fig. 21, showing the (auto)convolution of the two Fourier amplitude distributions on the left. We note that, in general, the OTF is a complex quantity; all figures show OTF amplitudes, sometimes termed the modulation transfer function (MTF), but for brevity simply termed OTFs in all subsequent figures. Although the exact amplitude distribution over the butterfly-shaped frequency support of the OTF differs slightly between the full vector field (see Fig. 13 and Eq. 50) and the scalar approximation (see Eq. 70), the frequency support of the OTF remains identical. This is particularly important to emphasize, because the limits of this frequency support determine the microscope's optical resolution. Here, again, we emphasize that the resolution along a given direction is determined by the maximum frequency $k_{\max}$ of this support along the chosen direction via Eq. 57. For the wide-field microscope in Fig. 21, the lateral and axial extents of the OTF's frequency support are $k_{\max,y} = 2 n k_0\sin\Theta$ and $k_{\max,z} = n k_0(1 - \cos\Theta)$, respectively; also see Fig. 14. This leads to the lateral and axial resolutions derived earlier (see Secs. I C, III C and Eq. 57)

$y_{\min} = \frac{2\pi}{k_{\max,y}} = \frac{\lambda}{2 n\sin\Theta} = \frac{\lambda}{2\,\mathrm{NA}}$, (72)

and

$z_{\min} = \frac{2\pi}{k_{\max,z}} = \frac{\lambda}{n(1 - \cos\Theta)} \approx \frac{2 n\lambda}{\mathrm{NA}^2}$, (73)

The first equation is Abbe’s famous lateral resolution limit for a wide-field microscope, while the approximate axial resolution in the second equation obtained is only valid for small numerical apertures.

We can further simplify the PSF by approximating Eq. 71 with a 2D Gaussian function

$U_{\mathrm{gauss}}(\boldsymbol{\rho}' - \boldsymbol{\rho}_0) \propto \exp\!\left(-\frac{|\boldsymbol{\rho}' - \boldsymbol{\rho}_0|^2}{2\sigma_{\mathrm{PSF}}^2}\right)$, (74)

where $\sigma_{\mathrm{PSF}} = \sqrt{2}/(\mathrm{NA}\, k_0) = \sqrt{2}\,\lambda/(2\pi n\sin\Theta)$, as found by requiring the same curvature at the maximum for both Eq. 74 and Eq. 71; see also Fig. 20. This approximation is useful in creating a simple model, allowing straightforward fitting algorithms for many localization applications [183, 188]. This model fits the PSF's main lobe and is thus a good approximation when imaging within the depth of focus of an aberration-free microscope. The width $\sigma_{\mathrm{PSF}}$ is usually fit experimentally from a calibration sample or model [189].
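The curvature-matching argument behind Eq. 74 can also be verified numerically; the following sketch (illustrative values, not from the paper) confirms that near the origin a Gaussian with $\sigma_{\mathrm{PSF}} = \sqrt{2}/(\mathrm{NA}\,k_0)$ tracks the Airy profile of Eq. 71.

```python
import numpy as np
from scipy.special import j1

lam, NA = 550e-9, 1.2
k0 = 2 * np.pi / lam
sigma_psf = np.sqrt(2) / (NA * k0)           # Gaussian width of Eq. 74, ~103 nm

def airy(rho):
    """Eq. 71, normalized to unity at the center."""
    x = NA * k0 * rho
    return (2 * j1(x) / x) ** 2

def gauss(rho):
    """Eq. 74, normalized to unity at the center."""
    return np.exp(-rho**2 / (2 * sigma_psf**2))
```

Near $\rho' = 0$ both expand as $1 - \rho'^2/(2\sigma_{\mathrm{PSF}}^2)$; they diverge only in the tails, where the Airy pattern carries its diffraction rings.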

F. Optical aberrations

Finally, we discuss the impact of optical aberrations on the PSF. Optical aberrations refer to any deviation from the idealized imaging models presented earlier and can be classified into various groups. The first distinction revolves around wavelength: monochromatic aberrations occur at a single wavelength, in contrast to chromatic aberrations, which originate from the chromatic dispersion of the components of the optical system. The second distinction is characterized by shift-invariance, i.e., aberrations identical at every point in the Field Of View (FOV) versus off-axis aberrations. In the presence of optical aberrations, modeling the PSF as a two-dimensional Fourier transform ($\mathcal{F}_{2D}$) operation is common, as the aberrations can then be treated as part of the system's OTF. Here, we focus on the scalar model, i.e., Eq. 68. This approach can, however, be generalized to the vectorial case [190–192].

As optical aberrations can generally be a function of $(\theta, \phi)$, we return to Eq. 68 and extend it to include an additional amplitude/phase function accounting for aberrations. We can then conveniently recast it as an $\mathcal{F}_{2D}$ operation prior to the integration over $\phi$

$E(\boldsymbol{\rho}', z; \mathbf{r}_0) \propto \mathcal{F}_{2D}\!\left\{\mathcal{A}(\theta, \phi)\, e^{i\left[\Psi(\theta; z, f) + \Phi(\theta, \phi)\right]}\right\}$, (75)

where we ignored the term $\sqrt{\cos\theta/\cos\theta'}$ due to its negligible contribution. Here, $\mathcal{A}\, e^{i(\Psi + \Phi)}$ is the so-called pupil function, where $\mathcal{A}(\theta, \phi)$ is the pupil function's amplitude, which, neglecting all constant factors, simplifies to the Fourier plane support, limited by either the NA or $n_d$ as follows

$\mathcal{A}(\theta, \phi) = \begin{cases} 1, & \text{if } \sin\theta \le \min\!\left(\frac{n_d}{n}, \frac{\mathrm{NA}}{n}\right) \\ 0, & \text{otherwise} \end{cases}$, (76)

where $n$ and $n_d$ are, respectively, the refractive indices of the objective immersion medium and the dipole (emitter) medium. In full generality, $\mathcal{A}$ can be a function of $\theta$ and $\phi$, for instance, in the presence of aberrations in the form of attenuation of the transmitted electric and magnetic fields. However, these types of aberrations are rare and often induce negligible changes in the PSF compared to the phase terms [193]. Therefore, it is safe to neglect the effect of the amplitude and focus on the phase.

The first term in the phase, $\Psi(\theta; z, f)$, is induced by the molecule's off-axis and out-of-focus position, i.e., the term $-\mathbf{q}\cdot\boldsymbol{\rho}_d - w_d z_d - w f$ in Eq. 66,

$\Psi(\theta; z, f) = k_0 z\, n_d\sqrt{1 - \sin^2\theta} - k_0 f\, n\sqrt{1 - \left(\frac{n_d}{n}\sin\theta\right)^2}$. (77)

For instance, the phase $-k\sqrt{1 - \eta^2}\, z$ in Eq. 69, where $\eta = \sin\theta$, is due to the out-of-focus location of the emitter. The second phase term in Eq. 75, $\Phi(\theta, \phi)$, describes any additional phase of the pupil function (originating from optical aberrations as described in this section or from PSF modulating elements described in Sec. V C), and is otherwise null under perfect aplanatic imaging conditions, as in Eq. 69.

We start by considering monochromatic shift-invariant, i.e., $(x, y)$-independent, aberrations. In this case, aberration terms can be readily added to Eq. 75 as a phase term $\Phi(\theta, \phi)$. This phase function lives on the disk-like support $\phi \in [0, 2\pi)$ and $\theta \in [0, \Theta]$ defined by the electric (magnetic) field Fourier amplitude distribution (see Sec. III B and Fig. 13).

It is often convenient to expand phase aberrations into a system of orthogonal basis functions, namely Zernike polynomials $Z_l^m(\xi, \phi)$ with $\xi = \sin\theta/\sin\Theta$ (see e.g., [194, 195])

$\Phi(\xi, \phi) = \sum_l \sum_{m=-l}^{l} v_l^m\, Z_l^m(\xi, \phi)$, (78)

where $v_l^m$ are the coefficients corresponding to $Z_l^m$. These polynomials are defined by

$Z_l^m(\xi, \phi) = \begin{cases} R_l^m(\xi)\,\sin(m\phi), & \text{if } m > 0 \\ R_l^m(\xi)\,\cos(m\phi), & \text{if } m \le 0 \end{cases}$, (79)

where the radial functions $R_l^m$ are given by

$R_l^m(\xi) = \sum_{k=0}^{(l - |m|)/2} \frac{(-1)^k\,(l - k)!}{k!\left(\frac{l + |m|}{2} - k\right)!\left(\frac{l - |m|}{2} - k\right)!}\;\xi^{\,l - 2k}$, (80)

if l-|m| is even, and zero otherwise; see table I.
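Eqs. 79–80 translate directly into code; the sketch below (not from the paper) implements the radial polynomials and checks them against entries of table I.

```python
from math import factorial, sin, cos

def zernike_radial(l, m, xi):
    """Radial polynomial R_l^m(xi) of Eq. 80; zero when l - |m| is odd."""
    m = abs(m)
    if (l - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(l - k)
        / (factorial(k) * factorial((l + m) // 2 - k) * factorial((l - m) // 2 - k))
        * xi ** (l - 2 * k)
        for k in range((l - m) // 2 + 1)
    )

def zernike(l, m, xi, phi):
    """Zernike polynomial Z_l^m(xi, phi) of Eq. 79: sin(m phi) for m > 0, else cos."""
    return zernike_radial(l, m, xi) * (sin(m * phi) if m > 0 else cos(m * phi))
```

For instance, `zernike_radial(4, 0, xi)` reproduces the primary spherical term $6\xi^4 - 6\xi^2 + 1$ of table I.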

TABLE I:

The first 12 Zernike polynomials.

# l m $Z_l^m$ name
1 1 −1 $\xi\cos\phi$ horizontal tilt
2 1 1 $\xi\sin\phi$ vertical tilt
3 2 0 $2\xi^2 - 1$ defocus
4 2 −2 $\xi^2\cos 2\phi$ vertical astigmatism
5 2 2 $\xi^2\sin 2\phi$ oblique astigmatism
6 3 −1 $(3\xi^2 - 2)\,\xi\cos\phi$ horizontal coma
7 3 1 $(3\xi^2 - 2)\,\xi\sin\phi$ vertical coma
8 4 0 $6\xi^4 - 6\xi^2 + 1$ primary spherical
9 3 −3 $\xi^3\cos 3\phi$ oblique trefoil
10 3 3 $\xi^3\sin 3\phi$ vertical trefoil
11 4 −2 $(4\xi^2 - 3)\,\xi^2\cos 2\phi$ vert. secondary astigmatism
12 4 2 $(4\xi^2 - 3)\,\xi^2\sin 2\phi$ obl. secondary astigmatism

Figs. 22–23, respectively, show density plots of the first 12 Zernike polynomials and their impact on the PSF of an isotropic emitter. The first three polynomials, namely horizontal tilt, vertical tilt and defocus, coincide with the phases induced by shifts of the emitter's position along the x, y and optical (z) axes, respectively. All other terms describe PSF distortions due to optical aberrations.

FIG. 22:

Density plots of the first twelve Zernike polynomials as presented in table I: (1) horizontal or x tilt; (2) vertical or y tilt; (3) defocus; (4) vertical astigmatism; (5) oblique astigmatism; (6) horizontal coma; (7) vertical coma; (8) primary spherical aberration; (9) oblique trefoil; (10) vertical trefoil; (11) vertical secondary astigmatism; and (12) oblique secondary astigmatism.

FIG. 23:

Model calculations of the image of an isotropic emitter (rapidly rotating dipole emitter) aberrated by a phase function given by the Zernike polynomials shown in Fig. 22. To better visualize the effects of aberration, all Zernike polynomials were multiplied by a factor of 2.5. Calculations were again done for a water immersion objective with NA = 1.2 and an emission wavelength of 550 nm. The yellow scale bar is 0.5 μm.

In some cases, aberrations may not be well described by low order Zernike polynomials. For example, when using Liquid Crystal Spatial Light Modulators (LC-SLM) [196] or in some PSF engineering methods [197], a sudden phase step in the pupil function may require evaluating the aberration in a pixel-wise manner [190].
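Eq. 75 suggests a simple numerical recipe for aberrated PSFs: sample the pupil on a grid, apply a Zernike phase, and take a 2D FFT. The sketch below (not from the paper; the grid size and the 2.5 rad aberration strength are illustrative) computes an in-focus PSF distorted by vertical astigmatism.

```python
import numpy as np

N = 256                                   # pupil grid size (illustrative)
x = np.linspace(-1.5, 1.5, N)             # pupil coordinates in units of sin(Theta)
XI, ETA = np.meshgrid(x, x)
rho = np.hypot(XI, ETA)                   # radial pupil coordinate xi
phi = np.arctan2(ETA, XI)
A = (rho <= 1.0).astype(float)            # binary NA-limited amplitude, cf. Eq. 76

def psf(zernike_phase):
    """In-focus PSF as |F_2D{pupil}|^2, cf. Eq. 75, normalized to unit total energy."""
    pupil = A * np.exp(1j * zernike_phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    p = np.abs(field) ** 2
    return p / p.sum()

psf_ideal = psf(np.zeros_like(rho))                     # unaberrated Airy-like spot
psf_astig = psf(2.5 * rho**2 * np.cos(2 * phi) * A)     # vertical astigmatism Z_2^{-2}
```

The aberrated spot has a lower peak (Strehl ratio below one) while total energy is conserved by construction.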

The second kind of aberrations are shift-invariant chromatic aberrations. In microscopy, it is common to use achromatic objectives, though dispersion from various other components unavoidably induces PSF deviations. These aberrations originate from the (broad, non-monochromatic) emission spectrum $S(\lambda)$ of fluorescent molecules, describing the probability of emitting at a wavelength $\lambda$, often with a width of a few tens of nanometers; see Sec. II A. In such cases, the image model follows from a superposition integral over the molecule's spectrum

$\Lambda(x, y; \mathbf{r}_0) = \int_\lambda S(\lambda)\, U(x, y; \mathbf{r}_0, \lambda)\, d\lambda$, (81)

where $U(x, y; \mathbf{r}_0, \lambda)$ is the $\lambda$-dependent PSF (as described in Sec. III E) as a function of $k_0 = 2\pi/\lambda$. Such aberrations are often detrimental in 3D microscopy. For example, in multi-focus microscopy, a phase mask (more details in Sec. IV F) with custom chromatic correction gratings is designed to correct the chromatic shifts [198].
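The spectral superposition of Eq. 81 can be sketched numerically. The example below (not from the paper) uses the Gaussian PSF model of Eq. 74 as a stand-in for $U(x, y; \mathbf{r}_0, \lambda)$, with its width scaling linearly in $\lambda$, and an illustrative Gaussian emission spectrum centered at 550 nm with 30 nm FWHM.

```python
import numpy as np

NA = 1.2

def sigma_psf(lam):
    """Gaussian PSF width of Eq. 74: sqrt(2) lambda / (2 pi NA)."""
    return np.sqrt(2) * lam / (2 * np.pi * NA)

def psf_gauss(rho, lam):
    """Normalized 2D Gaussian stand-in for the lambda-dependent PSF."""
    s = sigma_psf(lam)
    return np.exp(-rho**2 / (2 * s**2)) / (2 * np.pi * s**2)

# illustrative emission spectrum S(lambda), normalized so that sum(S) * dlam = 1
lams = np.linspace(500e-9, 600e-9, 201)
dlam = lams[1] - lams[0]
S = np.exp(-0.5 * ((lams - 550e-9) / (30e-9 / 2.355)) ** 2)
S /= S.sum() * dlam

def chromatic_psf(rho):
    """Spectrally averaged PSF of Eq. 81: sum of S(lambda) U(rho; lambda) dlambda."""
    return np.sum(S * psf_gauss(rho, lams)) * dlam
```

Because the spectrum is narrow relative to its center wavelength, the averaged PSF deviates from the center-wavelength PSF by well under a percent at the origin; broader spectra or stronger dispersion would widen the gap.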

The most challenging aberrations are shift-variant ones, both chromatic and monochromatic, which cannot be described by the model of Eq. 75, as the aberration is now a function of the lateral coordinates, i.e., $\Phi(\theta, \phi, x, y)$. In microscopy, these kinds of aberrations can arise either from the sample itself or from off-axis aberrations in the optical system, namely, systematic aberrations. Sample induced aberrations occur when the sample structure has significant refractive index variations (e.g., imaging in deep tissue). This issue can sometimes be addressed by adaptive optics (AO) [195, 199–204]. Typically, in AO techniques, the wavefront distortion (due to aberrations) of light from fluorescent markers embedded within the sample, called guide stars, is measured and then used for wavefront correction, employing deformable mirrors to remove the aberrations and achieve a flat wavefront. Off-axis aberrations, often caused by the optical system rather than the sample itself, are typically easier to model as they tend to vary more smoothly. These aberrations can be modeled with 2D polynomial coefficients over the FOV [205] (which, for example, multiply the Zernike coefficients) or addressed by Nodal Aberration Theory [206].

IV. FLUORESCENCE MICROSCOPY: MODALITIES

In the previous section, we described the fundamental optics of the wide-field microscope and derived its OTF and PSF. We also tied the lack of optical sectioning in wide-field microscopes to the OTF's missing cone; see Fig. 13. Here, we turn to different fluorescence microscopy modalities achieving optical sectioning and higher resolutions: near-field; point scanning; SIM; light-sheet; and multi-plane. In deriving their OTFs, we show that these modalities accomplish optical sectioning by collecting more spatial frequencies along the axial direction through modifications to the illumination and/or detection arms.

A. Near-field methods for enhanced axial resolution

Here, we turn to fluorescence imaging methods improving axial resolution using near-field effects. Electromagnetic near-fields are non-propagating (evanescent) fields with intensity gradients exceeding those of propagating waves.

1. Total internal reflection fluorescence microscopy

The first method discussed leverages TIR, which occurs when a plane wave traveling in a medium of higher refractive index is incident, at a sufficiently large angle, on an interface with a medium of lower refractive index.

We begin with Fresnel’s reflection and transmission coefficients r,r,t, and t for s- and p-polarized plane waves reflected at an interface dividing a medium with refractive index n1 (incidence medium) from a medium with refractive index n2, given compactly as follows [207]

$r_p = \frac{n^2 - w}{n^2 + w}, \quad r_s = \frac{1 - w}{1 + w}, \quad t_p = \frac{2n}{n^2 + w}, \quad t_s = \frac{2}{1 + w}$, (82)

where we have used the abbreviations $n = n_2/n_1$ and $w = w_2/w_1 = \sqrt{n_2^2 - q^2}/\sqrt{n_1^2 - q^2}$, defining $w_{1,2}$ as the wave vector's axial components in the first and second media, respectively. Moreover, $q = 2\pi n_1\sin\theta_{\mathrm{inc}}/\lambda$ is the length of the wave vector's lateral component, with $\theta_{\mathrm{inc}}$ being its incidence angle upon the interface with respect to the interface normal within the first medium. Here, it is convenient to work in a unit system where the length of the vacuum wave vector is unity. In this unit system, we have $q = n_1\sin\theta_{\mathrm{inc}}$.
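A minimal implementation of Eq. 82 (a sketch, not from the paper) that works on both sides of the critical angle by allowing complex axial components:

```python
import numpy as np

def fresnel(n1, n2, theta_inc):
    """Fresnel coefficients of Eq. 82, in units where the vacuum wave number is 1."""
    q = n1 * np.sin(theta_inc)                 # lateral wave vector component
    w1 = np.sqrt(complex(n1**2 - q**2, 0))     # axial components; w2 becomes
    w2 = np.sqrt(complex(n2**2 - q**2, 0))     # imaginary beyond the TIR angle
    n, w = n2 / n1, w2 / w1
    rp = (n**2 - w) / (n**2 + w)
    rs = (1 - w) / (1 + w)
    tp = 2 * n / (n**2 + w)
    ts = 2 / (1 + w)
    return rp, rs, tp, ts
```

Beyond the critical angle $\arcsin(n_2/n_1)$, both reflection coefficients have unit modulus, the signature of TIR discussed below.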

Now, as the electric field and wave vector are perpendicular, the electric field amplitudes of the transmitted waves read

$\mathbf{E}_p = E_0\, t_p\, \frac{-w_2\hat{\mathbf{q}} + q\hat{\mathbf{z}}}{n_2}\, \exp(i w_2 z + i\mathbf{q}\cdot\boldsymbol{\rho}), \qquad \mathbf{E}_s = E_0\, t_s\, (\hat{\mathbf{q}}\times\hat{\mathbf{z}})\, \exp(i w_2 z + i\mathbf{q}\cdot\boldsymbol{\rho})$, (83)

where $E_0$ is the amplitude of the incident field, and $\hat{\mathbf{q}}$ and $\hat{\mathbf{z}}$ are unit vectors along the lateral wave vector component parallel to the interface and along the axial (z) direction perpendicular to the interface, respectively.

As can be seen from the definition of $w$ following Eq. 82, for $q = n_1\sin\theta_{\mathrm{inc}} > n_2$ the axial component $w_2$ becomes purely imaginary, and the absolute values of both reflection coefficients in Eq. 82 become unity. Thus, TIR is possible only if $n_1 > n_2$, and sets in at the critical incidence angle (TIR angle) $\theta_{\mathrm{TIR}} = \arcsin(n_2/n_1)$. However, as can be seen from Eq. 83, the electric field in medium 2 does not instantly drop to zero but decays exponentially with increasing distance $z$ from the interface. This decaying field in the second medium is termed an evanescent field or wave. The characteristic decay length $d_{\mathrm{TIR}}$ of the electric field intensity can be derived directly from Eq. 83 and reads

$d_{\mathrm{TIR}} = \frac{1}{2|w_2|} = \frac{1}{2\sqrt{n_1^2\sin^2\theta_{\mathrm{inc}} - n_2^2}}$. (84)

As such, although evanescent waves do not penetrate far into medium 2, they can still be used to excite fluorophores within a distance of roughly $d_{\mathrm{TIR}}$ from the surface, e.g., in TIRF microscopy [56]. By the same token, (out-of-focus) fluorophores deeper than $d_{\mathrm{TIR}}$ are less likely to become excited, decreasing undesired out-of-focus light.
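Converted out of the unit system of Eq. 82 (multiplying by $\lambda/2\pi$), Eq. 84 gives the physical penetration depth. A short sketch with illustrative glass-water values:

```python
import numpy as np

def tir_penetration_depth(lam, n1, n2, theta_inc):
    """Intensity decay length of Eq. 84 in physical units:
    d = lambda / (4 pi sqrt(n1^2 sin^2(theta_inc) - n2^2))."""
    s = n1**2 * np.sin(theta_inc)**2 - n2**2
    if s <= 0:
        raise ValueError("incidence angle is below the TIR critical angle")
    return lam / (4 * np.pi * np.sqrt(s))

# glass (1.52) to water (1.33), 470 nm excitation as in Fig. 24
d70 = tir_penetration_depth(470e-9, 1.52, 1.33, np.deg2rad(70))   # ~70 nm
```

The depth diverges at the critical angle (≈ 61° for this pair of indices) and shrinks as the incidence angle grows, which is precisely what vaTIRF, discussed next, exploits.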

To decode an emitter's axial location, variable angle TIRF (vaTIRF) [208] can be used, where several images are recorded at differing incidence angles of the excitation plane wave above the TIR angle. For increasing incidence angles (see Fig. 24), the excitation intensity's decay becomes steeper. The variation in emitter brightness across incidence angles is then used to assess the emitter's distance from the interface upon deconvolution [209, 210], with an axial resolution in some cases down to a few nanometers, i.e., ca. 2–3 orders of magnitude better than the diffraction-limited resolution of a confocal microscope, albeit within a limited range $d_{\mathrm{TIR}}$ from the interface.

FIG. 24:

Total Internal Reflection Fluorescence (TIRF) microscopy. Excitation intensity above the interface between the coverslide and the sample medium as a function of incidence angle. The sample solution and coverslide refractive indices are, respectively, 1.33 (water) and 1.52, resulting in a TIR critical angle of ≈ 61°. The excitation wavelength is taken as 470 nm.

2. Super-critical fluorescence microscopy

The second near-field method discussed is SAF microscopy. This method employs the coupling of a fluorophore's near-field emission into propagating modes in the coverslide's glass to improve axial resolution [57, 58, 211–214]. To be precise, the fields of an oscillating electric dipole have components decaying as $1/r$, $1/r^2$ and $1/r^3$, of which only the first is propagating. The two other terms are non-propagating and represent near-field emission decaying over short distances ($\lesssim \lambda$). However, when the electric dipole is located close to the coverslide's interface, non-propagating near-field dipolar components are converted into propagating modes upon coupling into the glass, which can then be collected and imaged by the microscope objective. These modes can be decomposed into a superposition of plane waves traveling along directions above the critical TIR angle for the given emission wavelength (super-critical angle fluorescence or SAF emission). The coupling of the fluorophore's near-field modes into propagating modes in the glass decreases with increasing distance from the interface. In contrast, emission into angles below the TIR angle (under-critical angle fluorescence or UAF emission) is due to the propagation of the emitter's far-field emission into the glass and does not depend on its distance from the surface. Thus, at its core, SAF microscopy leverages the variation in SAF to estimate the distance of an emitter from the coverslide's interface by measuring the ratio of its SAF to SAF+UAF emission intensity.

To calculate the ratio of super- to under-critical angle emission, we use the theoretical framework developed in Sec. III. In particular, for calculating the SAF emission intensity, we use Eqs. 46 and 47, but with integration boundaries from $\theta' = \arcsin(n\sin\theta_{\mathrm{TIR}}/M)$, dictated by the critical TIR angle, to $\theta' = \Theta'$, dictated by the numerical aperture. We then compute the energy flux density distribution from Eq. 51. The integral of the resulting energy flux density over the xy-plane is then proportional to the detectable SAF intensity. The UAF intensity is computed analogously, but with integration boundaries from $\theta' = 0$ to $\theta' = \arcsin(n\sin\theta_{\mathrm{TIR}}/M)$. As an example, Fig. 25 shows the SAF to SAF+UAF ratio for a glass-water interface as a function of distance, assuming an isotropic emitter with an emission wavelength of 550 nm. As can be seen, the dynamic range over which one can use this ratio to determine the emitter's distance from the surface is very similar to the dynamic range over which vaTIRF is applicable; see Fig. 24.

FIG. 25:

Super-critical Angle Fluorescence (SAF) microscopy. Ratio of super-critical to total downward fluorescence emission for a rapidly rotating molecule as a function of distance from the interface of the coverslide and the sample medium. The refractive indices of the sample solution and coverslide are, respectively, assumed to be 1.33 (water) and 1.52 (glass), with the emission wavelength of 550 nm. The inset shows the angular emission intensity distribution of an emitter directly on the interface (with the blue, red and green curves denoting UAF and SAF emissions, and emission towards sample solution, respectively). The SAF emission strongly depends on the emitter’s distance to the interface, while the under-critical emission is independent of emitter axial position. By determining the ratio of SAF to SAF+UAF emission, we can find the axial position of an emitter.

3. Metal-induced energy transfer imaging

MIET, another near-field method used for axial localization [59], is based on near-field coupling similar to SAF microscopy. MIET exploits the fact that when a fluorescent emitter (an electric dipole emitter) approaches a metal layer, its electric near-field excites surface plasmons (coherent metal electron oscillations) in the metal, accelerating the de-excitation of the emitter’s excited state. This is observed as a strong decrease in fluorescence lifetime with decreasing distance from the surface; see Fig. 26 and Eq. 18.

FIG. 26:

Metal-Induced Energy Transfer (MIET) microscopy: Dependence of the fluorescence lifetime (in terms of the free space lifetime τ0) on the emitter’s distance from the glass substrate (coverslide) coated with a 20 nm gold layer. Calculations were done for an emission wavelength of 550 nm and a unit fluorescence quantum yield. Here we show the three curves for vertical, horizontal, and random emission dipole orientations. The inset illustrates the MIET sample geometry.

To infer distances from lifetime measurements, we use the theoretical framework developed in Sec. III. Briefly, the lifetime depends on the emission power, requiring the explicit calculation of both the electric and magnetic fields.

We start from the Weyl representation of the electric field of a free dipole emitter obtained in Eq. 62 to derive the electric field distribution above a MIET substrate (denoted by a metal surface in Fig. 27). As shown in Fig. 27, two sources contribute to the electric field above this metal surface: 1) direct emission from the dipole; and 2) emission reflected from the surface (i.e., emission from the emitter’s image)

\mathbf{E}_d^{\pm}=\frac{ik_0^2}{2\pi}\int\frac{d^2q}{w_d}\left\{\left[(\mathbf{p}\cdot\hat{e})\,\hat{e}+(\mathbf{p}\cdot\hat{e}_p^{\pm})\,\hat{e}_p^{\pm}\right]e^{\pm iw_d(z-z_d)}+\left[(\mathbf{p}\cdot\hat{e})\,\hat{e}\,r_{\perp}+(\mathbf{p}\cdot\hat{e}_p^{+})\,\hat{e}_p^{-}\,r_{\parallel}\right]e^{iw_d(z+z_d)}\right\}e^{i\mathbf{q}\cdot(\boldsymbol{\rho}-\boldsymbol{\rho}_d)}, (85)

where the terms containing the reflection coefficients r_⊥, r_∥ describe contributions from the reflected emission. Moreover, the superscripts “+” and “−” refer to plane waves moving towards and away from the metal surface, respectively. The r_∥ and r_⊥ are Fresnel’s q-dependent reflection coefficients for p- and s-waves, respectively, for the MIET substrate.

FIG. 27:

Geometry for deriving the electric field generated by a single dipole emitter above the MIET substrate (metal surface). The red double-headed arrow shows a dipole located a distance z_d above the metal surface, with β and α denoting its polar and azimuthal orientation angles, respectively. The three longer single-headed arrows show plane wave component vectors, with corresponding perpendicular polarization unit vectors ê and ê_p^±. Here ê_p^+ is the unit vector associated with the wave vector moving toward the metal surface. Similar conventions hold for the other unit vectors.

For planar structures of arbitrary complexity, these coefficients are readily obtained using the propagation matrix formalism of Refs. [167, p. 254] and [215–217]. Here, we now have to distinguish between two p-wave polarization unit vectors: ê_p^+ for plane waves traveling towards the substrate, and ê_p^− for plane waves traveling away from the substrate. The corresponding s-wave polarization unit vector ê is the same for both waves. We note that the result depends on the three-dimensional orientation of the emitter (given by the Euler angles α and β, see Fig. 27) via the scalar products p·ê_p^± and p·ê.

Analogously, we can find the magnetic field as

\mathbf{B}_d^{\pm}=\frac{in_dk_0^2}{2\pi}\int\frac{d^2q}{w_d}\left\{(\mathbf{p}\cdot\hat{e})\left[\hat{e}_p^{\pm}\,e^{\pm iw_d(z-z_d)}-\hat{e}_p^{+}\,r_{\perp}\,e^{iw_d(z+z_d)}\right]+\left[(\mathbf{p}\cdot\hat{e}_p^{\pm})\,e^{\pm iw_d(z-z_d)}+(\mathbf{p}\cdot\hat{e}_p^{+})\,r_{\parallel}\,e^{iw_d(z+z_d)}\right]\hat{e}\right\}e^{i\mathbf{q}\cdot(\boldsymbol{\rho}-\boldsymbol{\rho}_d)}. (86)

Now, given both electric and magnetic fields of Eqs. 85 and 86, the total emission power S(β) of the emitter follows by integrating the outward component of the Poynting vector over two planar interfaces sandwiching the emitter

S(\beta)=\frac{n_dc}{8\pi}\int d^2\rho\;\hat{z}\cdot\mathrm{Re}\left[\left.\left(\mathbf{E}^{+}\times\mathbf{B}^{+*}\right)\right|_{z>z_d}-\left.\left(\mathbf{E}^{-}\times\mathbf{B}^{-*}\right)\right|_{z<z_d}\right]. (87)

The emission power depends only on the dipole’s polar orientation angle β, and not on its azimuthal angle α. The emission power S(β) can now be compared to the emission power S_0 of a “free” dipole within a homogeneous medium with refractive index n_d, given by the well-known formula S_0 = c n_d p² k_0⁴/3 of Ref. [172, p. 410] (which can also be obtained from the above equations by neglecting the contributions from reflected emission, i.e., the terms containing the coefficients r_⊥, r_∥).

The observable enhancement of the radiative de-excitation rate kf of a fluorescence emitter due to the presence of the metal substrate with respect to the same emitter in a homogeneous environment is then given by the ratio S(β)/S0 [218].

As we recall from Sec. II, there is a contribution to the excited state lifetime from non-radiative decay pathways, arising from collisions with surrounding molecules and thermal dissipation of the excited state energy, quantified by the fluorescence quantum yield Q_f. Here Q_f is the probability that de-excitation proceeds radiatively with photon emission; see Eq. 17. The observable fluorescence lifetime τ is then the inverse of the total de-excitation rate k_f + k_non (see Eq. 18), such that its change in the presence of the metal substrate is given by

\frac{\tau}{\tau_0}=\frac{S_0}{S(\beta)\,Q_f+\left(1-Q_f\right)S_0}. (88)

This is the final equation needed for calculating the dependence of the fluorescence lifetime τ on emitter distance z_d. An example is provided in Fig. 26 for the three cases of a vertically, horizontally, and randomly oriented emitter. In the latter case, the orientation-dependent S(β) is substituted by its orientational average ⟨S⟩ = (1/2)∫_0^π dβ sin β S(β). As seen from Fig. 26, for a randomly oriented emitter within a range of up to 200 nm from the surface, the lifetime depends monotonically on distance, so that a unique distance follows from the measured lifetime.
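Equation 88 can be evaluated with a one-line function, writing it in the equivalent form τ/τ0 = 1/(Q_f S(β)/S0 + 1 − Q_f). This is a minimal numerical sketch; the quantum yield and the power ratios used below are illustrative values only, not values from the text.

```python
def miet_lifetime_ratio(S_ratio, Qf):
    """Relative lifetime tau/tau0 (Eq. 88), given the relative emission
    power S_ratio = S(beta)/S0 and the fluorescence quantum yield Qf."""
    return 1.0 / (Qf * S_ratio + (1.0 - Qf))

# Far from the metal, S(beta)/S0 -> 1, so the lifetime is unchanged:
print(miet_lifetime_ratio(1.0, 0.8))   # -> 1.0
# Close to the metal, the emission power is strongly enhanced (here an
# illustrative S/S0 = 10), and the lifetime drops accordingly:
print(miet_lifetime_ratio(10.0, 0.8))
```

Note that for Q_f < 1 the lifetime saturates at (1 − Q_f)τ0 even for very large emission enhancement, which is why Fig. 26 assumes unit quantum yield.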

More recently, Ångström-scale spatial resolution along the optical axis has been afforded by the use of materials such as Indium Tin Oxide (ITO) [219] or single-sheet graphene (graphene-induced energy transfer or GIET) [220], leading to a distance-dependent modulation of the fluorescence lifetime on a ca. eight times smaller length scale.

B. Point scanning microscopy

Unlike wide-field imaging using multi-pixel detectors, point scanning microscopes record images sequentially by scanning samples over a set of positions and recording the fluorescence signal from each scanned position. Moreover, in contrast to wide-field imaging, point scanning allows for out-of-focus light reduction, thereby achieving optical sectioning. Here, we first consider image formation in the most widely used point scanning microscope: the Confocal Laser Scanning Microscope or CLSM [60, 221]. We then discuss the enhanced resolution achieved by ISM, 4pi, and two-photon microscopy.

1. Confocal laser scanning microscopy

A schematic of a point scanning microscope is shown in Fig. 28. An excitation laser beam, in yellow, is laterally deflected by a beam scanning unit along both directions perpendicular to the optical axis. Fig. 28 shows only one of these scanning directions where the excitation beam can be directed up and down upon reflection from the scanner by adjusting the scanner’s orientation. Following deflection, the excitation light is focused by the objective into a diffraction-limited focus within the sample. The emitted fluorescence light from the illuminated spot, shown in red, is then collected by the same objective and guided back through the same beam scanner towards the dichroic mirror. This process is known as de-scanning.

FIG. 28:

Schematic of a CLSM. Yellow and red beams, respectively, show the excitation and emission light. Emission passes through a confocal pinhole suppressing out-of-focus light; see details in text.

After de-scanning, fluorescence light is reflected away from the excitation beam by the dichroic mirror, which only reflects light within a range of wavelengths. The fluorescent light is next focused by the tube lens onto the circular aperture of a confocal pinhole obstructing the undesired fluorescent light from out-of-focus fluorophores. After potentially passing additional optical filters for background suppression, the fluorescence light is refocused onto a single-pixel point detector to record the in-focus fluorescence intensity.

In what follows, we derive the confocal PSF (for a single scanning spot). To avoid notational confusion, PSFs for the wide-field and CLSM are, respectively, denoted by Uwf and Ucf for the remainder of this section.

To derive the confocal PSF for an isolated emitter sitting in an excitation focal spot in sample space, we first consider major differences with the wide-field setup (described for the most general case and its approximate analytical forms in Secs. III C and III D; see Eqs. 71 and 74). These differences include: 1) the spot illumination procedure; and 2) the existence of the confocal pinhole.

We start by noting that the fluorescence signal from the emitter is proportional to the three-dimensional excitation laser intensity at the focal spot, I_ex(ρ,z) (excitation occurs in sample space and is thus described by non-primed coordinates). The fluorescent light is, in turn, collected by the objective and focused onto the confocal pinhole (within image space). This results in a fluorescence intensity U_wf I_ex prior to the pinhole, where U_wf is this setup’s wide-field PSF in the absence of the pinhole and spot illumination. In the end, the confocal PSF (the imaging PSF of a confocal microscope) is proportional to the fluorescence intensity following the pinhole (ignoring all constant prefactors)

U_{\mathrm{cf}}(\boldsymbol{\rho},z)\propto\int d\boldsymbol{\rho}'\,A(\boldsymbol{\rho}')\,U_{\mathrm{wf}}(\boldsymbol{\rho}'-\boldsymbol{\rho},z)\,I_{\mathrm{ex}}(\boldsymbol{\rho},z), (89)

where A captures the confocal pinhole, set to unity for |ρ′| smaller than the aperture radius a, and zero otherwise. Here, U_wf(ρ′−ρ,z) represents the wide-field PSF when imaging the fluorescence from an emitter at position r=(ρ,z) in sample space onto the lateral position ρ′ in the plane of the confocal aperture within image space (primed coordinates). Put differently, the confocal PSF of Eq. 89 is given as a product of: A U_wf describing the detection, sometimes termed the detection PSF; and I_ex describing the excitation, sometimes termed the excitation PSF.

The integral in Eq. 89 is performed over the whole ρ′-plane. The excitation PSF (excitation intensity distribution), I_ex, entering the above equation is itself a function of the absorption dipole orientation p_ex of a fluorophore via I_ex(r) ∝ |E_ex(r)·p_ex|², where E_ex denotes the electric field distribution in the focal spot.

In most cases of practical interest, one deals with rapidly rotating emitters for which the orientationally averaged excitation intensity reads (also see Eq. 67)

I_{\mathrm{ex}}(\mathbf{r})\propto\left|E_{\mathrm{ex},x}\right|^2+\left|E_{\mathrm{ex},y}\right|^2+\left|E_{\mathrm{ex},z}\right|^2. (90)

To perform this calculation, we first consider the focusing of a planar wavefront through the objective into a diffraction-limited spot; see Fig. 29. Similar to Abbe’s sine condition relating propagation angles of wavefront patches in sample and image spaces, there is an analogous relation between the distance ρ of a patch on the planar wavefront from the optical axis and the propagation angle θ of the corresponding patch after focusing through the objective; see Fig. 29. This relation can be found from Abbe’s sine condition by moving the focus in image space to infinity (i.e., letting the focal length f_tube of the tube lens tend towards infinity), and remembering that the magnification is given by the focal length f_tube of the tube lens divided by the focal length f of the objective; see Fig. 1. Thus, we find sinθ′ = (f/f_tube) n sinθ. When increasing f_tube to infinity, the angle θ′ tends to zero, though the product ρ = f_tube sinθ′ remains finite and coincides with the distance from the optical axis in the back focal plane. Thus, one finds the relation ρ = n f sinθ between the distance ρ before the objective and the propagation angle θ in sample space.

FIG. 29:

Schematic of the geometry of focusing a planar laser wavefront through the objective into the sample space; see Fig. 28. Wavefront patches at distance ρ from the optical axis in the back focal plane are converted into spherical wavefront patches traveling at angle θ=arcsin(ρ/nf) with respect to the optical axis z, where f is the focal length of the objective lens; see details in the main text.

Using this relation for ρ, we can expand the electric field in sample space into a plane wave super-position, similar to what we did in deriving the electric field of a point emitter in image space; see Eq. 46. When reading Eq. 46 in reverse, i.e., replacing all primed by non-primed variables and vice versa (thus starting with light coming from the back side of the objective focused through the objective into sample space), and taking into account that the angles θ′ of the incoming light are all close to zero (planar wavefront), so that cosθ′ ≈ 1, we arrive at

\mathbf{E}_{\mathrm{ex}}(\mathbf{r})\propto\int_0^{\Theta}d\theta\,\sin\theta\,\sqrt{\cos\theta}\int_0^{2\pi}d\phi\left[E_{0,\parallel}(\rho,\phi)\,\hat{e}_{\parallel}+E_{0,\perp}(\rho,\phi)\,\hat{e}_{\perp}\right]\exp\left(i\mathbf{k}_{\mathrm{ex}}\cdot\mathbf{r}\right), (91)

where k_ex = (2πn/λ_ex)(cosϕ sinθ, sinϕ sinθ, cosθ) is now the wave vector of a plane wave with wavelength λ_ex (the excitation wavelength), and where the electric field of the incoming laser beam in the back focal plane is expanded into its radially (E_0,∥) and azimuthally (E_0,⊥) polarized components; see Fig. 29. For example, for a linearly polarized laser beam with polarization direction along x one has E_0,∥ ∝ cosϕ and E_0,⊥ ∝ −sinϕ. This equation can now be used to calculate the three-dimensional excitation PSF in sample space. As an example, the left panel of Fig. 30 shows the CLSM PSF calculated assuming a 470 nm circularly polarized laser focused through a water immersion objective into a diffraction-limited spot (planar wavefront at the back focal plane).

FIG. 30:

CLSM and STED intensity distributions at the focus. Comparison of the intensity distribution of a conventional CLSM focus (left) with a z-STED focus (middle) and an xy-STED focus (right). Calculations were done for a water immersion objective with NA = 1.2 at an excitation wavelength of 470 nm. On top of each column, the excitation polarization and its generating phase plate are shown. Bottom panels show 3D contour plots of the 1/e, 1/e², and 1/e³ intensity iso-surfaces and projections of xy-, xz-, and yz-cross-sections through the center.

While we have focused on using Eq. 91 in computing the CLSM PSF, this equation is much more general. For instance, it can be used in calculating the intensity distribution of a donut excitation beam appearing in STED microscopy [70]. This donut intensity distribution, with zero intensity on the crossing of the optical axis with the focal plane (focus center), can be generated in two ways.

The first method generates a donut-shaped laser intensity in the focal plane by sending circularly polarized laser light through a circular phase plate that is thicker at its center. This results in retardation of the rays closer to the optical axis by half a wavelength with respect to the rays passing through the thinner outer part of the plate; see the central panel in Fig. 30. A snapshot of the resulting polarization structure across the back focal plane is depicted in the top middle panel of Fig. 30. Mathematically, this can be described by setting E_0,∥ ∝ cosϕ − i sinϕ and E_0,⊥ ∝ −sinϕ + i cosϕ for ρ ≤ ρ_Φ, and the same expressions but with opposite sign for ρ_Φ < ρ < f sinΘ, where ρ_Φ = f sinΘ/√2 is the radius of the thicker central part of the phase plate. This special choice of ρ_Φ assures that the total excitation intensity in the focus center is indeed zero.

The second method sends circularly polarized light through a helical phase plate as shown at the top of the right panel in Fig. 30. When choosing an appropriate helical pitch, this leads to an excitation beam with polarization structure E_0,∥ ∝ sin2ϕ − i cos2ϕ and E_0,⊥ ∝ cos2ϕ + i sin2ϕ. Three-dimensional representations of the resulting STimulated Emission (STE) intensity distributions and corresponding cross-sections are shown in the bottom panels of Fig. 30. As can be seen, neither the disk phase plate (middle panel) nor the helical phase plate (right panel) leads to an ideal STE intensity distribution, i.e., a perfect donut shape with zero intensity at the middle. Whereas the disk phase plate leads to an intensity distribution achieving excellent axial compression of the STED-PSF, it performs poorly in the lateral directions. In contrast, helical phase plates lead to excellent compression of the STED-PSF laterally, but not along the optical axis. Thus, 3D-STED systems use a combination of both excitation modalities [222].

Having in place an exact description of the excitation PSF (excitation intensity distribution), we can return to the imaging PSF of a CLSM and consider its optical resolution. To do so, we consider its OTF, i.e., the Fourier transform of Eq. 89, for which we replace Iex and Uwf of Eq. 89 by their Fourier expansions,

U_{\mathrm{wf}}(\boldsymbol{\rho}'-\mathbf{r})=\int\frac{d^3k}{(2\pi)^3}\,\tilde{U}_{\mathrm{wf}}(\mathbf{k})\,e^{i\mathbf{k}\cdot(\boldsymbol{\rho}'-\mathbf{r})},\qquad I_{\mathrm{ex}}(\mathbf{r})=\int\frac{d^3k'}{(2\pi)^3}\,\tilde{I}_{\mathrm{ex}}(\mathbf{k}')\,e^{i\mathbf{k}'\cdot\mathbf{r}}, (92)

where we recall that a tilde over a symbol denotes its Fourier amplitude. This immediately leads to

U_{\mathrm{cf}}(\mathbf{r})\propto\int d\boldsymbol{\rho}'\int d^3k\int d^3k'\;A(\boldsymbol{\rho}')\,\tilde{U}_{\mathrm{wf}}(\mathbf{k})\,e^{i\mathbf{k}\cdot(\boldsymbol{\rho}'-\mathbf{r})}\,\tilde{I}_{\mathrm{ex}}(\mathbf{k}')\,e^{i\mathbf{k}'\cdot\mathbf{r}}. (93)

The integration over ρ′ can now be performed analytically, resulting in

\int d\boldsymbol{\rho}'\,A(\boldsymbol{\rho}')\,e^{i\mathbf{k}\cdot\boldsymbol{\rho}'}=\frac{2\pi a}{q}J_1(aq), (94)

where a is, as before, the radius of the confocal aperture, q = √(k_x² + k_y²) is the modulus of the radial part of the vector k, and J_1 is the first order Bessel function of the first kind. Substituting this result into Eq. 93, we write

U_{\mathrm{cf}}(\mathbf{r})\propto\int d^3k\int d^3k'\;\frac{2\pi a}{q}J_1(aq)\,\tilde{U}_{\mathrm{wf}}(\mathbf{k})\,\tilde{I}_{\mathrm{ex}}(\mathbf{k}')\,e^{i(\mathbf{k}'-\mathbf{k})\cdot\mathbf{r}}. (95)

Following some algebra, we find for the Fourier transform of Ucf(r), i.e., the CLSM’s OTF (up to some constant prefactor),

\tilde{U}_{\mathrm{cf}}(\mathbf{k})\propto\int d^3k'\;\frac{J_1(aq')}{q'}\,\tilde{U}_{\mathrm{wf}}(\mathbf{k}')\,\tilde{I}_{\mathrm{ex}}(\mathbf{k}+\mathbf{k}'). (96)

Thus, the OTF of the confocal microscope is given by the three-dimensional convolution of the wide-field microscope OTF, Ũ_wf(k), modulated by the aperture function J_1(aq)/q (the Fourier transform of the detection PSF of Eq. 89, also sometimes termed the detection OTF), with the Fourier transform of the excitation PSF, Ĩ_ex(k) (also sometimes termed the excitation OTF). This is visualized in Fig. 31, where the left panel shows the amplitude of the excitation OTF, Ĩ_ex(k), the middle panel shows the detection OTF given by the absolute value of the wide-field OTF Ũ_wf(k) multiplied by J_1(aq)/q, and the right panel represents a cross-section of the amplitude of the confocal OTF obtained by the 3D convolution of the previous two panels.
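The closed-form aperture function of Eq. 94 can be checked numerically: the 2D Fourier transform of a uniform disk of radius a reduces to the radial integral 2π∫₀^a ρ J₀(qρ) dρ, which should match 2πa J₁(aq)/q. Below is a self-contained sketch using power-series Bessel functions (adequate for moderate arguments) and a simple midpoint rule; all numerical values are illustrative.

```python
import math

def J0(x):
    # Power series for the Bessel function J0 (adequate for |x| up to ~15)
    term, s = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x * x / 4.0) / (m * m)
        s += term
    return s

def J1(x):
    # Power series for the Bessel function J1
    term, s = x / 2.0, x / 2.0
    for m in range(1, 40):
        term *= -(x * x / 4.0) / (m * (m + 1))
        s += term
    return s

def disk_ft_numeric(a, q, n=2000):
    # 2D Fourier transform of a unit-height disk of radius a, reduced to the
    # radial integral 2*pi * int_0^a rho * J0(q*rho) d(rho) (midpoint rule)
    h = a / n
    s = sum((i + 0.5) * h * J0(q * (i + 0.5) * h) for i in range(n))
    return 2.0 * math.pi * s * h

a, q = 1.0, 3.0
analytic = 2.0 * math.pi * a * J1(a * q) / q
print(abs(disk_ft_numeric(a, q) - analytic) < 1e-5)  # -> True
```

The agreement reflects the identity ∫₀^x t J₀(t) dt = x J₁(x), which is exactly what turns the pinhole integral into the Airy-type factor 2πa J₁(aq)/q.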

FIG. 31:

Anatomy of the OTF (amplitude) of a confocal microscope. The left panel shows the excitation OTF. The middle panel shows the detection OTF for a confocal pinhole with 50μm radius and 60× magnification. The right panel shows the resulting confocal OTF obtained by a 3D convolution of the left two distributions.

The most noticeable difference between the confocal OTF of Fig. 31 and the wide-field OTF of Fig. 21 is that the confocal OTF has non-zero components along the optical axis (here the k_z-axis at k_x = 0, with the origin at the center), highlighting a confocal microscope’s ability for optical sectioning. The corresponding axial resolution is given by 2π divided by the maximum frequency supported along the k_z-axis; see Eq. 57.

Fig. 32 shows how the confocal OTF changes with pinhole size. As expected, for a large confocal pinhole radius of 200 μm (top left panel), the confocal OTF approaches that of a wide-field microscope at the same wavelength, as can be seen by comparing with the right panel of Fig. 21. As the pinhole size shrinks (a = 1 μm), optical sectioning and axial resolution are optimized; see the bottom right panel of Fig. 32. In this case, the confocal aperture can be approximated by a delta function, so that the integral in Eq. 94 results in a constant. As such, the OTF for a very small aperture reduces to the convolution of the wide-field OTF, Ũ_wf, with the excitation OTF, Ĩ_ex. Thus the maximum frequency passed by the confocal OTF with a small aperture is given by k_max = k_max,ex + k_max,em, where k_max,ex and k_max,em, respectively, denote the maximum extents of Ĩ_ex and Ũ_wf.

FIG. 32:

OTF amplitude of a confocal microscope as a function of confocal aperture size. The confocal aperture radius is given at the top of each panel. Here, we assumed an excitation wavelength of 470 nm, an emission wavelength of 550 nm, and a water immersion objective with NA = 1.2 at 60× magnification. The top left panel shows the limit of an extremely large confocal pinhole, so that the OTF approaches that of a wide-field microscope imaging at the excitation laser’s wavelength. The bottom right panel shows the limit of a nearly zero-size pinhole (a = 1 μm), so that the OTF approaches that of an ISM; see Sec. IV B 2.

The maximum extents of the excitation and detection OTFs in the lateral direction are k_max,ex/em = 4πn sinΘ/λ_ex/em, which, in turn, results in the following lateral resolution (see Eq. 57)

y_{\min}=\frac{1}{2\,\mathrm{NA}}\left(\frac{1}{\lambda_{\mathrm{ex}}}+\frac{1}{\lambda_{\mathrm{em}}}\right)^{-1}, (97)

and similarly for the axial resolution

z_{\min}=\frac{1}{2n\left(1-\cos\Theta\right)}\left(\frac{1}{\lambda_{\mathrm{ex}}}+\frac{1}{\lambda_{\mathrm{em}}}\right)^{-1}, (98)

where λ_ex and λ_em are the excitation and emission wavelengths, respectively. Thus, ignoring the spectral Stokes shift between excitation and emission, i.e., λ_em ≈ λ_ex (see Sec. II), a confocal microscope with an infinitely small pinhole has a twofold higher lateral resolution than a wide-field microscope, as we can see by comparing Eq. 97 to Eq. 72. This improvement in resolution can also be explained in the spatial domain using Eq. 89 by setting A(ρ′) = δ(ρ′−ξ) (an infinitely small aperture centered at ξ) and adopting Gaussian approximations for both the wide-field PSF, as in Eq. 74, and the excitation PSF I_ex. In this case, the resulting confocal PSF is the product of both Gaussians, which is a Gaussian as well [223]

U_{\mathrm{cf}}(\boldsymbol{\rho},z)\propto\exp\left[-\frac{(\boldsymbol{\rho}-\boldsymbol{\xi}_{\rho})^2}{2\sigma_{\rho}^2}-\frac{(z-\xi_z)^2}{2\sigma_z^2}\right]. (99)

Here, the widths of the resulting Gaussian PSF, σρ and σz, are smaller than the widths of both excitation and detection PSFs leading to higher resolutions.
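Eqs. 97 and 98 are easy to evaluate for the parameters used throughout this section (a water immersion objective with NA = 1.2, 470 nm excitation, 550 nm emission). The sketch below is illustrative only; the helper names are ours, not the paper’s.

```python
import math

def confocal_lateral_res(NA, lam_ex, lam_em):
    # Eq. 97: y_min = (1 / 2NA) * (1/lam_ex + 1/lam_em)^(-1)
    return 1.0 / (2.0 * NA * (1.0 / lam_ex + 1.0 / lam_em))

def confocal_axial_res(n, Theta, lam_ex, lam_em):
    # Eq. 98: z_min = (1 / (2n(1 - cos Theta))) * (1/lam_ex + 1/lam_em)^(-1)
    return 1.0 / (2.0 * n * (1.0 - math.cos(Theta)) * (1.0 / lam_ex + 1.0 / lam_em))

# Parameters from the text: water immersion (n = 1.33), NA = 1.2,
# 470 nm excitation and 550 nm emission.
n, NA = 1.33, 1.2
Theta = math.asin(NA / n)  # maximum half-angle of the objective
print(confocal_lateral_res(NA, 470e-9, 550e-9))        # lateral, ~1e-7 m
print(confocal_axial_res(n, Theta, 470e-9, 550e-9))    # axial, somewhat larger
```

For these parameters the lateral value comes out slightly above 100 nm, with the axial value larger, consistent with the axially elongated confocal PSF discussed below.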

The PSFs corresponding to the OTFs shown in Fig. 32 are presented in Fig. 33, illustrating how the PSF’s lateral width shrinks with decreasing pinhole size, improving lateral resolution, albeit at a price: the smaller the confocal pinhole, the fewer photons reach the detector, thereby reducing SNR [224]. This is quantified in Fig. 34, showing the relation between PSF diameter (in the focal plane) and light detection efficiency for increasing pinhole radii (1–200 μm), assuming 470 nm excitation and 550 nm emission wavelengths, and a water immersion objective with NA = 1.2 at 60× magnification. As can be seen, the light detection efficiency decreases as the confocal pinhole radius drops below 20 μm, motivating the use of ISM introduced next.

FIG. 33:

Confocal microscope PSF for an isotropic emitter as a function of confocal aperture size. The aperture radius is given above each panel. The parameters are similar to those in Fig. 32 with 60× magnification.

FIG. 34:

Relation between PSF size and detection efficiency in a CLSM. Here we show the light detection efficiency versus the Gaussian radius σ of the PSF in the focal plane as a function of the confocal aperture’s radius a (annotated along the curve). Calculations were done for a water immersion objective with NA = 1.2 and an image magnification of 60× (focal plane to pinhole plane). It was assumed that excitation is achieved with 470 nm circularly polarized light focused into a diffraction-limited spot, and that the fluorescence emission is at 550 nm wavelength. We found the focal radius by fitting a radially symmetric Gaussian exp(−ρ²/2σ²) to the PSF in the focal plane. The curve’s undulations at the upper right arise from diffraction effects of light passing through a circular pinhole.

2. Image scanning microscopy

As discussed in Sec. IV B 1 when considering the confocal PSF, the maximum possible spatial resolution is achieved when approaching an infinitely small confocal pinhole; see Eqs. 97–99. However, as this would reduce the light detection efficiency to almost zero (see Fig. 34), this option is often avoided in practice. To simultaneously maximize spatial resolution and light detection efficiency, more than three decades ago Colin Sheppard proposed combining the scanning spot illumination of confocal microscopes with the wide-field light detection of an array detector, e.g., an EMCCD camera, without a pinhole, thereby mitigating light loss [61]. This idea, termed Image Scanning Microscopy or ISM, was first experimentally demonstrated in 2010 by Müller and Enderlein [62]. The core idea of ISM is to replace the confocal pinhole and the single-pixel detector of a conventional CLSM by an array detector in the image plane (pinhole plane); see Fig. 28. The fluorescence light from an illumination spot at position r is then spread across multiple pixels of the detector array. In this setup, a pixel located at ξ records photons from the illuminated spot as would a pinhole located at ξ with the same size as the pixel. The pixel size is often chosen small enough that each pixel records an image of the illumination spot with a resolution similar to that of a CLSM with a close-to-zero pinhole size; see Eqs. 97 and 98. Moreover, as ISM builds on a CLSM, it also provides optical z-sectioning.

The ISM setup described here results in N_p recorded images for each illumination spot, associated with the N_p pixels of the detector array. As such, upon scanning the sample at N_s locations, one acquires N_p × N_s images. To combine all acquired images into a single high resolution image, we first consider the scan image recorded by one pixel at a given position ξ on the array detector. The PSF of this scan image is easily found by replacing the aperture function A(ρ′) of Eq. 89 by the pixel area. However, as an idealization, we can treat the pixel area as a delta function δ(ρ′−ξ) compared to the size of features we care to resolve. As such, the PSF for the scan image recorded by a pixel at position ξ is

U_{\mathrm{pix}}(\mathbf{r},\boldsymbol{\xi})\propto U_{\mathrm{wf}}(\boldsymbol{\xi}-\mathbf{r})\,I_{\mathrm{ex}}(\mathbf{r}), (100)

where, as before, Uwf is the wide-field imaging PSF (detection PSF), and Iex is the excitation PSF. This is visualized in Fig. 35 where a cross-section of the excitation PSF Iex(r) is shown together with the detection PSF for a pixel at position ξ (described by Uwf(ξ-r)) and the product of both; see Eq. 100.

FIG. 35:

Image formation in ISM. The blue curve represents the excitation intensity distribution Iex (excitation PSF) with its center at ξ=0 (optical axis). The yellow curve shows the detection PSF Uwf for a pixel located at ξ away from the optical axis. The pixel PSF Upix, describing the image formation is, however, given by the product of the excitation and detection PSF, designated by the green curve and centered at ξ/κ. Thus, a fluorophore at ξ=0 (the excitation intensity’s center) will appear at ξ/κ.

When approximating the excitation and detection PSFs by Gaussians with variances σ_ex² and σ_em², respectively, the product of both yields

I_{\mathrm{ex}}(\mathbf{r})\,U_{\mathrm{wf}}(\mathbf{r}-\boldsymbol{\xi})\propto\exp\left[-\frac{(\mathbf{r}-\boldsymbol{\xi}/\kappa)^2}{2\sigma_{\mathrm{PSF}}^2}\right], (101)

with σ_PSF⁻² = σ_ex⁻² + σ_em⁻² and κ = 1 + σ_em²/σ_ex². Recalling that σ_ex and σ_em scale linearly with wavelength (see Eq. 74), we find

\kappa=1+\left(\lambda_{\mathrm{em}}/\lambda_{\mathrm{ex}}\right)^2, (102)

which equals 2 if one neglects the spectral Stokes shift between excitation and fluorescence emissions. Thus, the maximum of the product of excitation intensity distribution and detection PSF is located between the centers of both at position ξ/κ, such that the scan image is shifted by the same amount with respect to an image recorded by a pixel at position ξ=0; see Fig. 35. This insight yields a recipe for how to super-impose different scan images recorded by different pixels: an image recorded by a pixel at position ξ must be shifted by ξ/κ towards the optical axis before being added to the final sum image. Mathematically, this is expressed as

U_{\mathrm{ISM}}(\mathbf{r})\propto\int d\boldsymbol{\xi}\;U_{\mathrm{pix}}\!\left(\mathbf{r}+\frac{\boldsymbol{\xi}}{\kappa},\,\boldsymbol{\xi}\right)=\int d\boldsymbol{\xi}\;U_{\mathrm{wf}}\!\left(\frac{\kappa-1}{\kappa}\,\boldsymbol{\xi}-\mathbf{r}\right)I_{\mathrm{ex}}\!\left(\mathbf{r}+\frac{\boldsymbol{\xi}}{\kappa}\right). (103)

There are two ways to realize this summation in practice. As shown in Fig. 36, one way is to scale down, by factor κ, all images recorded by the array detector at each scan position before adding them to the final image at the corresponding scan position (from top to bottom right in Fig. 36). Alternatively, one can leave the recorded array detector images as they are, but place them a factor κ farther away from each other when adding them to the final image (from top to bottom left in Fig. 36).
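The reassignment factor of Eq. 102 and the pixel-PSF narrowing of Eq. 101 can be verified with a few lines of arithmetic, assuming Gaussian excitation and detection PSFs whose widths scale linearly with wavelength. Only the wavelengths come from the text; the remaining numerical values are illustrative.

```python
import math

# Reassignment factor kappa (Eq. 102), assuming sigma scales linearly
# with wavelength; wavelengths as used throughout this section.
lam_ex, lam_em = 470.0, 550.0  # nm
kappa = 1.0 + (lam_em / lam_ex) ** 2
print(kappa)  # ~2.37; exactly 2 if the Stokes shift is neglected

# Width of the pixel PSF (Eq. 101): sigma_PSF^-2 = sigma_ex^-2 + sigma_em^-2.
sigma_ex = 1.0  # arbitrary units, illustrative
sigma_em = sigma_ex * lam_em / lam_ex
sigma_psf = (sigma_ex**-2 + sigma_em**-2) ** -0.5
print(sigma_psf < min(sigma_ex, sigma_em))  # -> True: narrower than either PSF

# The product of the two Gaussians (one centered at 0, one at xi) peaks at
# xi * sigma_ex^2 / (sigma_ex^2 + sigma_em^2), which is exactly xi / kappa
# (Fig. 35) -- the shift used when reassigning each pixel's scan image.
xi = 0.8
center = xi * sigma_ex**2 / (sigma_ex**2 + sigma_em**2)
print(math.isclose(center, xi / kappa))  # -> True
```

This is precisely why each pixel’s scan image must be shifted by ξ/κ before summation: the pixel PSF is both narrower than the excitation and detection PSFs and displaced from the pixel position by that factor.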

FIG. 36:

ISM image reconstruction. At each scan position, the array detector records a small image of the illuminated region (top). To reconstruct a final ISM image, we can either down-scale each recorded small image by a factor κ (bottom right), or leave the recorded images unchanged but place them in the final ISM image a factor κ farther away from each other (bottom left).

Obviously, both procedures are mathematically equivalent ways to realize the algorithm described by Eq. 103, although the second algorithm is numerically simpler as it does not require any interpolation-based down-scaling of the images recorded by the array detector. Moreover, as first demonstrated by York and Shroff [225] and by de Luca and Manders [226], both algorithms can be realized in a fully optical way. The first algorithm, scaling down the array detector images, is optically realized by inserting an extra demagnifying lens pair into the detection pathway (as realized by instant SIM [225, 227], Optical Photon Re-Assignment or OPRA [228], or confocal spinning disk ISM [229]), while the second algorithm, which scales up distances between recorded images, is realized by a double mirror re-scan system (re-scan microscopy [226]) or by re-coupling the emission into the excitation scan system (rapid two-photon excitation ISM [230]).

By construction, both the OTF and PSF of an ISM are identical to those of a confocal microscope with an infinitely small confocal pinhole; see the last panels of Fig. 32 (OTF) and Fig. 33 (PSF), respectively. The corresponding achievable lateral and axial optical resolutions then immediately follow from Eqs. 97 and 98. One important property of ISM is that it also “concentrates” the collected fluorescence light into an area of the final image four times smaller than that of a conventional CLSM (“super-concentration of light” [231]; see also the top and right panels of Fig. 36), significantly increasing image contrast. Multiple ISM variants exist (reviewed in Ref. [232]), and several commercial systems are available providing CLSMs with ISM options for improved resolution and high contrast imaging.

3. 4pi microscopy

One peculiarity of conventional CLSM is the disparity between lateral and axial resolutions (see Eqs. 97 and 98) due to the PSF’s elongated shape along the optical axis, yielding stretched 3D CLSM images; see Fig. 33. To overcome this strongly anisotropic PSF shape, Stelzer and Hell developed 4pi microscopy, using two opposing objectives to focus (and detect) light [64]. When sending laser excitation light through both objectives in a coherent manner, the resulting interference of both beams generates a multi-peaked interference pattern along the optical axis. The corresponding Fourier representations of the excitation electric fields are shown in the left and middle panels of Fig. 37, and the convolution of both, i.e., the 4pi excitation OTF, is shown in the right panel of Fig. 37. In contrast to the CLSM excitation OTF of Fig. 31, its 4pi counterpart populates high frequencies along the optical axis, coinciding with a tight modulation of the excitation intensity along this axis. The corresponding excitation intensity distribution (excitation PSF) in real space is shown in the left panel of Fig. 38.

FIG. 37:

4pi microscope excitation OTF generated by the interference of light focused through two opposing objectives. The left and middle panels show the same Fourier transform of the excitation electric field in sample space. The resulting excitation OTF shown in the right panel is the (auto)convolution of this electric field Fourier transform and represents the Fourier transform of the excitation intensity (excitation OTF). Excitation is assumed to be done using a water immersion objective with NA = 1.2.

FIG. 38:

Excitation PSF and (imaging) PSF of 4pi microscopy for a rapidly rotating emitter. The left panel shows the excitation PSF in the focus of a 4pi microscope, the middle panel shows the (imaging) PSF of a 4pi type A microscope, and the right panel that for a 4pi type C microscope. Calculations were performed using a water immersion objective with NA = 1.2 and 470 nm excitation wavelength and 550 nm fluorescence emission wavelength, and for a confocal detection in the limit of an infinitely small pinhole.

Detection in a 4pi microscope is done as usual in confocal detection mode, whereby two principal options are possible: 1) fluorescence is collected with both objectives and detected by two detectors, resulting in two independent scan images that are later added to obtain a single image (4pi type A microscope [233]); 2) fluorescence is collected with both objectives and coherently superimposed onto one detector (4pi type C microscope [234]). A special case is the 4pi type B microscope, which performs similarly to type A: excitation is done incoherently (i.e., with no interference pattern generation) while the collected light is superimposed coherently [235].

To determine the maximal possible resolution attainable with 4pi microscopy, we show in Figs. 39 and 40 the OTFs for type A and C microscopes in the limit of an infinitely small confocal pinhole (realized by combining 4pi microscopy with ISM). Thus, the OTF of a 4pi type A microscope, shown in Fig. 39, is obtained by convolving the 4pi excitation OTF (see Fig. 37) with the OTF of a simple ISM (corresponding to wide-field detection).

FIG. 39:

OTF of a type A 4pi microscope where excitation is done through two opposing objectives, and detection from one side through a confocal pinhole. For simplicity, we consider here only the limiting case of an infinitely small pinhole maximizing spatial resolution. The left panel shows the excitation OTF, the middle panel the OTF of detection with an infinitely small pinhole, and the right panel shows the resulting 4pi OTF as a convolution of the two distributions shown on the left. Excitation and detection are achieved using a water immersion objective with NA = 1.2, and any Stokes shift between excitation and emission light is neglected.

FIG. 40:

OTF of a type C 4pi microscope. Similar to Fig. 39, but in this configuration, both excitation and detection occur through two opposing objectives. Again, we consider here only the limiting case of an infinitely small pinhole. The left panel shows the excitation OTF, the middle panel the (identical) Fourier transform for coherent confocal detection from both sides, and the right panel shows the resulting OTF as a convolution of the two panels shown on the left.

Since, in a 4pi type C microscope, detection is achieved by coherently superposing fluorescence light from both objectives, the OTF of such detection resembles the excitation OTF shown in Fig. 37, except calculated at the fluorescence emission wavelength. The convolution of this detection OTF with the excitation OTF then yields the OTF of the 4pi type C microscope; see Fig. 40. The corresponding real-space PSFs for both type A and C 4pi microscopes are shown in the middle and right panels of Fig. 38.

As can be seen in Figs. 39 and 40, 4pi microscopes collect more spatial frequencies than CLSMs (see Fig. 32), thereby improving their axial resolution. As before, we can again obtain quantitative numbers for the lateral and axial resolutions by inspecting the OTF and determining the maximum lateral and axial frequencies supported by the OTF. Concretely, the inverse of these maxima multiplied by 2π yields approximate values for the resolution; see Eq. 57. The lateral resolution of a 4pi microscope (with an infinitely small pinhole) is the same for both type A and C and equal to that of an ISM; see Eq. 97. However, the axial resolution of a type A 4pi microscope now reads

$z_{\min} \simeq \frac{1}{2}\left[\frac{1}{\lambda_{\mathrm{ex}}} + \frac{1-\cos\Theta}{\lambda_{\mathrm{em}}}\right]^{-1}$ (104)

and similarly for the type C 4pi microscope

$z_{\min} \simeq \frac{1}{2}\left[\frac{1}{\lambda_{\mathrm{ex}}} + \frac{1}{\lambda_{\mathrm{em}}}\right]^{-1}.$ (105)

As can be seen from the PSFs of Fig. 38, there are considerable side-lobes neighboring the central maximum along the optical axis, leading to "ghost" images in a recorded 3D scan image of a sample [234]. These ghost images are much more pronounced for type A than for type C, though even for type C they must be eliminated, currently by applying deconvolution algorithms [236, 237]. Both the technical complexity of a 4pi microscope and the image deconvolution challenges posed by ghost images have prevented its wider adoption. However, the ISM lateral resolution of a 4pi type C (image scanning) microscope together with its axial resolution represents the maximum possible spatial resolution available along the x and z directions using a diffraction-limited microscope.
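To put rough numbers on Eqs. 104 and 105, the following sketch evaluates both axial resolution expressions for the parameters of Fig. 38 (water immersion, NA = 1.2, 470 nm excitation, 550 nm emission); it is a direct transcription of the formulas, with the half-opening angle obtained from Θ = arcsin(NA/n):

```python
import numpy as np

# Axial resolution of 4pi microscopes, Eqs. 104 (type A) and 105 (type C).
# Parameters follow Fig. 38: water immersion (n = 1.33), NA = 1.2,
# 470 nm excitation, 550 nm emission.
n, NA = 1.33, 1.2
lam_ex, lam_em = 470.0, 550.0          # wavelengths (nm)
theta = np.arcsin(NA / n)              # half-opening angle of the objective

z_A = 0.5 / (1 / lam_ex + (1 - np.cos(theta)) / lam_em)   # Eq. 104
z_C = 0.5 / (1 / lam_ex + 1 / lam_em)                     # Eq. 105

print(f"type A axial resolution: {z_A:.0f} nm")
print(f"type C axial resolution: {z_C:.0f} nm")
```

As expected, the type C configuration, with coherent detection from both sides, yields the better (smaller) axial resolution.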

4. Two-photon microscopy

An important variant of the point scanning microscope is the two- (or multi-)photon excitation scanning microscope [238]. Here, a fluorophore is excited by a two- (or multi-)photon absorption process, typically with an excitation wavelength roughly twice (or multiple times) as large as that of one-photon fluorescence excitation. Such two-photon excitation microscopes have several important properties [239, 240]. First, due to the longer excitation wavelength, typically in the infrared, excitation light can penetrate deeper into tissue than visible light. Thus, two-photon excitation microscopes are ideal for deep-tissue imaging in lipid- and water-rich tissues with high optical absorption in the visible spectrum. Second, there is a critical improvement in the ratio of in-focus signal to background, i.e., undesired light from out-of-focus fluorophores, compared to one-photon fluorescence microscopy. This arises because: 1) fluorophore excitation takes place at wavelengths longer than the emission wavelength, and the probability of simultaneous absorption of two or more photons is only significant at the focal spot with high photon density; 2) excitation light scattering is decreased at longer wavelengths; and 3) two- (or multi-)photon excitation does not require confocal detection for optical sectioning. The latter holds because the two-photon excitation PSF is proportional to the square of the excitation light intensity distribution (the probability of simultaneous two-photon absorption is given by the square of the one-photon excitation PSF), represented by an auto-convolution of the excitation OTF in Fourier space. A similar convolution was already considered when discussing the ISM's OTF (i.e., as idealized by the last panel of Fig. 32), covering higher spatial frequencies in contrast to the OTF of a wide-field microscope or a CLSM with a wide pinhole shown in the first panel of Fig. 32.
Thus, a two-photon excitation microscope has a similar optical sectioning capability as a confocal (one-photon excitation) microscope at the same excitation wavelength when using an infinitely small detection pinhole (neglecting the spectral Stokes shift between excitation and emission). The primary downside of two-photon excitation microscopy is the required peak power of the excitation pulses, orders of magnitude larger than in one-photon excitation, thereby increasing photo-damage and photo-bleaching [241].

To gain deeper insight into the best possible lateral resolution achievable by a two-photon excitation microscope, we consider two-photon excitation along with ISM detection, i.e., recording at each scan position a small image of the excited region and performing pixel reassignment to obtain the high resolution ISM image; see Sec. IV B 2. To do so, we approximate the one-photon excitation PSF and the single pixel detection PSF once more by Gaussians with variances $\sigma_{\mathrm{ex}}^2$ and $\sigma_{\mathrm{em}}^2$ (see Sec. IV B 2). We can visualize the PSF of the scan image recorded by one pixel at position ξ on the array detector as shown in Fig. 41; also see Eq. 101.

FIG. 41:

Pixel reassignment in two-photon excitation ISM. By contrast to the ISM in Fig. 35, the excitation intensity distribution (one-photon excitation PSF) in two-photon microscopy has a larger width due to the larger excitation wavelength.

The new reassignment factor κ (see Sec. IV B 2) is found by looking at the product of the detection PSF with the square of the one-photon excitation PSF, yielding a Gaussian function with variance $\sigma^2$ obeying $\sigma^{-2} = 2\sigma_{\mathrm{ex}}^{-2} + \sigma_{\mathrm{em}}^{-2}$ and mid-point position $\xi/\kappa$ with $\kappa = 1 + 2(\lambda_{\mathrm{em}}/\lambda_{\mathrm{ex}})^2$, which for the case $\lambda_{\mathrm{ex}} = 2\lambda_{\mathrm{em}}$ yields $\kappa = 3/2$; also see Eq. 102.

We now compare the performance of such a two-photon excitation ISM with that of a one-photon excitation CLSM and ISM at half the wavelength. For simplicity, we consider the toy model of a one-dimensional microscope. The Fourier representation of the excitation electric field of such a one-dimensional microscope is a uniform amplitude distribution over the frequency range supported by the microscope (the maximum lateral frequency transmitted is $n k_0 \sin\Theta$). This is shown in Fig. 42 by the top-hat function (electric field). The auto-convolution of this uniform amplitude distribution yields the excitation OTF and is, for the one-dimensional one-photon case, the triangular function shown in Fig. 42 and denoted by "1hν excitation λ0."

FIG. 42:

Comparison of one- and two-photon microscopy. For explanation see main text.

The two-photon excitation PSF for excitation at wavelength 2λ0 is given by the square of the one-photon excitation PSF. As such, its OTF corresponds to the auto-convolution of the one-photon OTF labeled "1hν excitation λ0" in Fig. 42, but scaled down (along the frequency axis) by a factor of 2 (remember that we compare two-photon excitation at 2λ0 with one-photon excitation at λ0). The corresponding curve is denoted by "2hν excitation 2λ0". The OTFs for the ISM extensions of one-photon and two-photon excitation fluorescence microscopy are also shown, together with the OTF of one-photon excitation at λ0/2 for comparison.

As can be seen, the frequency support of two-photon excitation at 2λ0 wavelength is equal to that of the one-photon excitation at λ0, but with increased amplitudes at low frequencies and decreased amplitudes at large frequencies. In other words, a two-photon microscope transmits high lateral spatial frequencies less efficiently than a one-photon microscope operating at half the wavelength. This is also true when we compare two-photon ISM with one-photon ISM, as shown by the two curves “1hν excitation λ0+ISM” and “2hν excitation 2λ0+ISM” in Fig. 42. Both modes have a frequency support equal to that of a one-photon excitation at λ0/2, but with considerably damped amplitudes at high spatial frequencies, with one-photon ISM performing slightly better than two-photon ISM. Thus, two-photon (or multi-photon) excitation generally performs worse, in terms of resolution, than one-photon microscopes at half the wavelength, though biological tissue remains more transparent (less scattering) at long wavelengths, giving access to greater penetration depths in two- and multi-photon excitation microscopes.
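The frequency-support argument above can be reproduced numerically. The sketch below (a 1D toy model; grid spacing and cutoffs are arbitrary) builds the one-photon OTF as the auto-convolution of a top-hat field distribution and the two-photon OTF at twice the wavelength as the auto-convolution of the correspondingly rescaled one-photon OTF, confirming equal frequency support but damped high-frequency amplitudes:

```python
import numpy as np

# 1D toy comparison of one- vs two-photon excitation OTFs (cf. Fig. 42).
# Frequencies in units of the one-photon OTF cutoff at wavelength lam0.
f = np.linspace(-4, 4, 801)
df = f[1] - f[0]

def tophat(half_width):
    return np.where(np.abs(f) <= half_width, 1.0, 0.0)

def autoconv(g):
    out = np.convolve(g, g, mode="same") * df
    return out / out.max()

# One-photon OTF at lam0: auto-convolution of the top-hat field support.
otf_1hv = autoconv(tophat(0.5))             # triangular, support |f| <= 1

# Two-photon OTF at 2*lam0: squaring the excitation PSF means
# auto-convolving the (frequency-halved) one-photon OTF once more.
otf_2hv = autoconv(autoconv(tophat(0.25)))  # same support |f| <= 1

def support(otf, tol=1e-6):
    return np.abs(f[otf > tol]).max()

hi = np.abs(f - 0.8).argmin()   # probe a high spatial frequency
print("support 1hv:", support(otf_1hv), " support 2hv:", support(otf_2hv))
print("amplitude at f = 0.8:", round(otf_1hv[hi], 3), "vs", round(otf_2hv[hi], 3))
```

Both OTFs reach the same cutoff, but the repeated auto-convolution concentrates the two-photon OTF at low frequencies, reproducing the damping of high spatial frequencies discussed above.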

C. Models for single spot confocal analysis

Point scanning microscopes, including confocal and two-photon microscopes, have been used to study both dynamic [242–246] and static [247–251] phenomena with both immobile [244–247, 252] as well as scanning [53, 93, 243, 248] spots under continuous or pulsed illumination [245, 253]. Point scanning microscopes, particularly confocal microscopes, provide data for myriad analysis tools including fluorescence recovery after photo-bleaching (FRAP) [242, 254, 255], used in the study of sub-cellular environments by monitoring diffusion of fluorophores into previously photo-bleached regions, FLIM [94, 256], where photon arrival time statistics following pulsed excitation are collected and analyzed, and Fluorescence Correlation Spectroscopy (FCS) [243, 257, 258], where photon arrival times or fluorescence intensities, often collected under constant illumination, are correlated in time to infer dynamical parameters [245, 246].

Here, we begin with a description of FCS where a static confocal spot is used to determine the reaction kinetics and diffusion coefficient of particles freely diffusing through the spot; see Fig. 43a. In particular, this figure illustrates a scenario often analyzed using FCS, with labeled molecules freely diffusing through a static confocal spot becoming excited in proportion to the local light intensity. In traditional FCS analysis, a fraction of emitted photons is captured and dynamical properties are obtained by auto-correlating in time the emitted light intensity or photon arrival times [257–260]. While auto-correlating photon arrivals is informative, it is data-inefficient and eliminates single molecule information already encoded in the signal [246, 261, 262]. What is more, uncertainty is rarely propagated onto derived quantities. Thus, a statistical method directly analyzing photon arrivals is warranted, avoiding data post-processing including auto-correlation [245, 246, 262]. Here, we begin by deriving the likelihood for the collection of K+1 photons whose inter-arrival intervals [262] are designated by $\Delta t_{1:K} = (\Delta t_1, \ldots, \Delta t_K)$, see Fig. 43b, under the assumption of continuous illumination.

FIG. 43:

In (a) we show a schematic of the confocal volume (in blue) with labeled molecules emitting photons in proportion to their degree of excitation, decaying away from the confocal volume center. In (b) we show a synthetic trace with 1500 photons generated assuming four molecules diffusing at 1 μm²/s for 30 ms using background and molecule photon emission rates of 10³ photons/s and 4×10⁴ photons/s, respectively. The figure is adapted from Ref. [262].

We begin by considering the confocal PSF derived earlier in this section in Eq. 99 and, for simplicity, immediately adopt Cartesian coordinates where $\mathbf{r} = (\boldsymbol{\rho}, z)$. For M molecules, with the mth molecule located at $\mathbf{r}_k^m$ at time $t_k$, we write the following profile

$S_k(\mathbf{r}) = \sum_{m=1}^{M} \delta\!\left(\mathbf{r} - \mathbf{r}_k^m\right).$ (106)

As such, the total expected photon emission rate at time level k, $\mu_k$, follows from

$\mu_k(\mathbf{r}) = \mu + \mu_0 \int \mathrm{d}\mathbf{r}\, U_{\mathrm{cf}}(\mathbf{r})\, S_k(\mathbf{r}) = \mu + \sum_{m=1}^{M} \mu_k^m\!\left(\mathbf{r}_k^m\right),$ (107)

where $\mu_k^m(\mathbf{r}_k^m) = \mu_0 U_{\mathrm{cf}}(\mathbf{r}_k^m)$ is the expected photon emission rate from the mth molecule located at $\mathbf{r}_k^m$, $\mu_0$ is the maximum photon emission rate, associated with a molecule located at the PSF center, and $\mu$ is the background photon emission rate. The photon emission rate, $\mu_k$, then dictates the photon inter-arrival time, $\Delta t_k$,

$\Delta t_k \sim \text{Exponential}\!\left(\mu_k(\mathbf{r})\right),$ (108)

using notation introduced in Sec. I B. This exponential waiting time follows from Poisson distributed photon emission per unit time implying exponentially distributed photon inter-arrival times.

Finally, under the assumption of a normal diffusion model with open boundary conditions,

$\mathbf{r}_k^m \mid D \sim \text{Normal}\!\left(\mathbf{r}_{k-1}^m,\, 2D\Delta t_k\right),$ (109)

where D is the diffusion coefficient assumed to be constant across time and space. From Eq. 109 we see that the rate μk(r) inherits its stochasticity from the stochastic positions.

Given the forward model described above, we now construct the likelihood for the K photon inter-arrival times, $\Delta t_{1:K}$, given by Eq. 108. As the $\Delta t_{1:K}$ are iid (see Sec. I B), the trace's likelihood is simply the product of the likelihoods of the individual photon time intervals

$P\!\left(\Delta t_{1:K} \mid M, D, \bar{\bar{\mathbf{r}}}, \mu_0, \mu\right) = \prod_k \text{Exponential}\!\left(\Delta t_k;\, \mu_k(\mathbf{r})\right),$ (110)

where μk(r) is an implicit function of M, D, μ0 and μ; see Eqs. 107 and 109. Moreover, double overbars represent the set of all possible values for the two associated indices, namely m and k.
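The forward model of Eqs. 106–110 can be summarized in a few lines of code. The sketch below simulates a photon trace (Gaussian approximation to the confocal PSF; all parameter values are illustrative) and evaluates the log of the likelihood in Eq. 110 on the simulated inter-arrival times:

```python
import numpy as np

# Forward simulation and likelihood for the FCS model of Eqs. 106-110.
# Gaussian approximation to the confocal PSF; parameter values illustrative.
rng = np.random.default_rng(0)

mu_bg, mu0 = 1e3, 4e4            # background / peak emission rates (photons/s)
D = 1.0                          # diffusion coefficient (um^2/s)
sigma_rho, sigma_z = 0.3, 1.0    # PSF widths (um)
M, K = 4, 200                    # molecules, photon inter-arrival intervals

def psf(r):
    # Gaussian stand-in for the confocal PSF U_cf of Eq. 99
    return np.exp(-(r[:, 0]**2 + r[:, 1]**2) / (2 * sigma_rho**2)
                  - r[:, 2]**2 / (2 * sigma_z**2))

r = rng.uniform(-1, 1, size=(M, 3))   # initial molecule positions (um)
dts, rates = [], []
for _ in range(K):
    mu_k = mu_bg + mu0 * psf(r).sum()                    # Eq. 107
    dt = rng.exponential(1 / mu_k)                       # Eq. 108
    r = r + rng.normal(0, np.sqrt(2 * D * dt), r.shape)  # Eq. 109
    dts.append(dt)
    rates.append(mu_k)

dts, rates = np.array(dts), np.array(rates)

# Log of the likelihood in Eq. 110: product of exponential densities.
log_like = np.sum(np.log(rates) - rates * dts)
print(f"log-likelihood of the simulated trace: {log_like:.1f}")
```

In an actual inference scheme this likelihood would be evaluated over candidate trajectories and parameters drawn from the posterior rather than the simulated ground truth.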

To maximize the likelihood we would need to either determine the number of molecules in advance, i.e., adopt a parametric model, or work within a non-parametric paradigm and infer the number of molecules alongside other parameters. The likelihood above cannot naively be maximized to obtain parameters due to classic over-fitting problems favoring more complex models, i.e., larger numbers of molecules. However, in the former case, assuming a wrong parametric model with M molecules [246, 262] can result in incorrect estimates of other parameters, e.g., the diffusion coefficient; see Fig. 44.

As such, we abandon the parametric paradigm and start leveraging BNP tools [21, 39, 263, 264]. Of particular interest within the BNP paradigm is the Beta-Bernoulli process prior (see Sec. I B) on the number of candidate molecules, M, formally allowed to tend to infinity (M → ∞) a priori. Put differently, each molecule is treated as a Bernoulli random variable (a load), $b_m$, learned simultaneously along with other unknowns; see Sec. I B. The probability of the load being one, equivalently the probability of the molecule being warranted by the data, is the single parameter of the Bernoulli distribution on which we place a Beta prior.

Within this framework, Eq. 107 is modified by replacing $\sum_{m=1}^{M} \mu_k^m(\mathbf{r}_k^m)$ on the right hand side with $\sum_m b_m \mu_k^m(\mathbf{r}_k^m)$, where the sum now runs over an infinite number of molecules. The likelihood then adopts the form

$P\!\left(\Delta t_{1:K} \mid \vartheta\right) = \prod_k \text{Exponential}\!\left(\Delta t_k;\, \mu_k(\mathbf{r})\right),$ (111)

but where ϑ now collects all unknowns including all loads. Our non-parametric posterior is proportional to the product of this likelihood and all priors including Beta-Bernoulli process priors on each molecule; see Box IV C.

Now equipped with the posterior, we draw samples using Monte Carlo methods to learn the set of unknowns ϑ. To learn the trajectories $\bar{\bar{\mathbf{r}}}$, we use forward filtering backward sampling [21, 32, 163, 262], while the remaining parameters are sampled either directly or using brute-force Metropolis-Hastings; see Sec. I B. Fig. 45 benchmarks the statistical framework of Box IV C against FCS.

FIG. 45:

Comparison of diffusion coefficients, D, obtained from the statistical framework versus FCS plotted against photon counts used in the analysis. Photon arrival times were simulated using the parameter values in Fig. 43b. The figure is adapted from Ref. [262].

While the above approach returns a trajectory, due to the symmetry of the confocal PSF (see Eq. 99), the photon emission rate of Eq. 107, and thus the likelihood given by Eq. 111, are invariant under transformations leaving $(\rho/\sigma_\rho)^2 + (z/\sigma_z)^2$ unchanged. As such, equivalent positions lead to the same likelihood and thus unique positions cannot be determined using a single confocal setup.

In contrast, it is possible to determine absolute molecular locations (trajectories) by breaking the spatial symmetry of the confocal spot by introducing a multi-focus confocal setup [261, 265, 266]. Such a setup splits the confocal spot by introducing 4 detectors with axially and laterally offset detection volumes; see Fig. 46ab. Photons from molecules in such a setup are detected in the lth detector with the following rate

$\mu_k^l(\mathbf{r}) = \mu^l + \mu_0 \sum_m b_m U_{\mathrm{cf}}^l\!\left(\mathbf{r}_k^m\right)$ (112)

at time k; see Eq. 107. The total photon detection rate is, in turn, the sum of detection rates across all detectors, $\mu_k = \sum_l \mu_k^l$, and the likelihood is similar to that seen in Eq. 111. From this likelihood follows a posterior analogous to Box IV C that, when sampled, yields absolute molecular trajectories; see Fig. 46c.

FIG. 46:

Multi-focal setup uniquely resolving many molecular trajectories simultaneously. (a) A beam splitter is used to divide the fluorescent emission (designated by green) into two paths later coupled into fibers and detected by 4 APDs corresponding to different focal spots. (b) PSFs associated with the different light paths. (c) Trajectories for two freely diffusing molecules with D = 1 μm²/s, μ₀ = 5×10⁴ photons/s and μ = 10³ photons/s. Here, the orange and blue curves represent, respectively, the ground truth and the median of the learned trajectories. The blue and gray areas, respectively, denote the 95 percent confidence intervals and the PSF's width. The figure is adapted from Ref. [261].

It is now conceivable to imagine generalizing the treatment above to include multiple diffusing species [267], species with donor and acceptor labels (FCS-FRET) [268, 269], as well as species undergoing reactions which alter their emission rate and kinetics [270, 271].

This brings us to the merits of statistical approaches compared to FCS. Such approaches are more data efficient, rigorously propagate error (including effects of finite data via the likelihood), and can deal with any PSF shape and optical aberrations [272, 273]. But also, fundamentally, by avoiding data post-processing they learn more. For instance, in contrast to FCS, the statistical methods described above can learn properties of every individual molecule diffusing through the spot, providing single molecule resolution albeit at computational cost.

Having dealt with continuous illumination, we now turn to pulsed illumination and, for simplicity alone, assume an immobile sample. Under pulsed illumination, the data acquired is a trace of K photon arrival times, $\Delta t_{1:K}$, reported with respect to the immediately preceding pulses. These arrival times, also termed micro-times, encode the excited state lifetimes, $\tau_m$ for the mth species, of the fluorophore species present within the confocal spot (see Sec. II). They also encode the associated photon ratios (weights), denoted $\pi_m$ for the mth species, related to fluorophore densities as we will show later.

Although intuitive methods exist to determine excited state lifetimes [93], similar to Fig. 45, we find that the lifetimes learned are sensitive to the parametric assumption on the number of lifetime species considered [274]. Indeed, existing techniques cannot simultaneously: 1) decode the number of fluorophore species present in a trace of photon arrival times; 2) operate on a broad range of lifetimes, including lifetimes below the Instrument Response Function (IRF) (see Appendix A), lifetimes comparable to the laser inter-pulse times, or lifetimes similar to one another; 3) provide uncertainties over parameter estimates; and 4) infer continuous fluorophore densities, i.e., lifetime maps given by $\Omega_m(\mathbf{r}) = \mu_m S_m(\mathbf{r})$, where $S_m$ and $\mu_m$ are, respectively, the fluorophore density (see Eq. 106) and the fluorophore excitation probability (for in-focus fluorophores) during a laser pulse for the mth species.

Here, we review statistical frameworks for FLIM analysis addressing the issues highlighted above with minimal photon budgets. In doing so, we first discuss a framework for a single confocal spot and then generalize to FLIM analysis methods using data from a scanning confocal setup to deduce lifetime maps over large FOVs.

We begin by introducing the likelihood for Δt1:K collected from a single spot with M species

$P\!\left(\Delta t_{1:K} \mid \lambda_{1:M}, \pi_{1:M}\right) = \prod_{k=1}^{K} P\!\left(\Delta t_k \mid \lambda_{1:M}, \pi_{1:M}\right),$ (113)

where $\lambda_m$ denotes the inverse lifetime ($\tau_m = 1/\lambda_m$) and $P(\Delta t_k \mid \lambda_{1:M}, \pi_{1:M})$ denotes the likelihood of the kth arrival time. To derive $P(\Delta t_k \mid \lambda_{1:M}, \pi_{1:M})$, we sum over all possibilities that could give rise to this photon, including all M fluorophore species and all $N_{\mathrm{pl}}$ previous laser pulses. Assuming a Gaussian IRF, this leads to (see Appendix A and Eq. A23) [53, 274]

$P\!\left(\Delta t_k \mid \lambda_{1:M}, \pi_{1:M}\right) = \sum_{m=1}^{M} \pi_m \sum_{n=0}^{N_{\mathrm{pl}}} \frac{\lambda_m}{2}\, \exp\!\left[\frac{\lambda_m}{2}\!\left(2\left(\tau_{\mathrm{IRF}} - \Delta t_k - nT\right) + \lambda_m \sigma_{\mathrm{IRF}}^2\right)\right] \operatorname{erfc}\!\left[\frac{\tau_{\mathrm{IRF}} - \Delta t_k - nT + \lambda_m \sigma_{\mathrm{IRF}}^2}{\sqrt{2}\,\sigma_{\mathrm{IRF}}}\right],$ (114)

where $\tau_{\mathrm{IRF}}$, $\sigma_{\mathrm{IRF}}^2$, and T denote, respectively, the IRF offset, the IRF variance, and the inter-pulse time; see Appendix A. Ignoring excitation by previous pulses, considered in Eq. 114, we arrive at the likelihood obtained in Ref. [275].
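The likelihood of Eq. 114 is straightforward to evaluate numerically. The following sketch (illustrative lifetimes, weights, and IRF parameters) implements the double sum over species and previous pulses and checks that the density approximately normalizes over one inter-pulse window:

```python
import math

# Single-photon FLIM likelihood, Eq. 114, for a Gaussian IRF.
# Lifetimes, weights, and IRF parameters below are illustrative only.
tau_irf, sigma_irf = 2.0, 0.66   # IRF offset and width (ns)
T, N_pl = 25.0, 3                # inter-pulse time (ns); previous pulses kept
lifetimes = [0.5, 3.0]           # tau_m (ns)
weights = [0.4, 0.6]             # pi_m (sum to one)

def p_arrival(dt):
    """Likelihood of one micro-time dt (ns) following Eq. 114."""
    total = 0.0
    for tau_m, pi_m in zip(lifetimes, weights):
        lam = 1.0 / tau_m                       # inverse lifetime lambda_m
        for n_pulse in range(N_pl + 1):         # sum over previous pulses
            shift = tau_irf - dt - n_pulse * T
            total += (pi_m * lam / 2
                      * math.exp(lam / 2 * (2 * shift + lam * sigma_irf**2))
                      * math.erfc((shift + lam * sigma_irf**2)
                                  / (math.sqrt(2) * sigma_irf)))
    return total

# Sanity check: the density should integrate to ~1 over one pulse window.
step = 0.01
norm = sum(p_arrival(i * step) for i in range(int(T / step))) * step
print(f"integral over one inter-pulse window: {norm:.3f}")
```

The sum over previous pulses is what allows lifetimes comparable to the inter-pulse time to be handled, since such photons "wrap around" into later detection windows.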

To summarize, parametrically, the number of fluorophore species, M, is pre-specified and often set to one or two for simplicity, e.g., Refs. [275, 276]. In contrast, non-parametrically, the number of fluorophore species is a priori assumed infinite [53, 274].

Within the non-parametric paradigm, the single spot FLIM posterior is proportional to the likelihood of Eq. 114 and priors over all unknown parameters, namely $\lambda_{1:M}$ and $\pi_{1:M}$. For $\lambda_m$, we use a Gamma prior to guarantee non-negative values. For $\pi_m$, we leverage the non-parametric Dirichlet process prior [35–37] to facilitate inference over the number of species warranted by the data, i.e., to address model selection; see Sec. I B. Within this framework, as before when operating non-parametrically, we assume an a priori infinite number of species (M → ∞) with associated weights $\pi_m$. As we sample these weights, the weights ascribed to species not contributing to the data attain negligible values. Fig. 47 shows lifetime histograms for two lifetimes below the IRF and with sub-nanosecond differences using 500, 1000 and 2000 photons.

FIG. 47:

Lifetime histograms from single-pixel FLIM. Here, lifetimes are below the IRF and differ by sub-nanoseconds. Data sets used in panels (a-c) were simulated with 5·10², 10³, and 2·10³ photons, an IRF width of 0.66 ns, and ground truth lifetimes of 0.2 ns and 0.6 ns denoted by dotted lines. Learning the correct number of fluorophore species here requires > 500 photons.

We now turn to FLIM over large FOVs where we show how to estimate smooth lifetime maps from confocal scanning data; see Fig. 48. FLIM data over large FOVs are typically collected using a CLSM to scan the sample over uniformly spaced horizontal trajectories where the spacing defines the data pixel size. The collected data is often arranged into a 2D pixel array where each pixel contains a subset of photon arrival times acquired over the pixel.

FIG. 48:

Experimental FLIM data from mixtures of two cellular structures (lysosome and mitochondria shown in green and red, respectively) stained with two different fluorophore species. (a-b) Ground truth lifetime maps. (c) Data acquired from mixtures of two ground truth maps. (d-e) Resulting sub-pixel interpolated lifetime maps obtained using the statistical framework of Box IV C. The average absolute difference between ground truth and learned maps is ≈ 4%. Scale bars are 4μm. The figure is adapted from Ref. [53].

One naive way to process such data is to analyze each pixel independently using the framework of Box IV C. However, this yields pixelated lifetime maps where information from one pixel does not inform the neighboring pixels. In what follows, we review a framework for multi-pixel FLIM over large FOVs [53, 277] reporting lifetime maps below the data pixel size leveraging spatial correlations across pixels by invoking (non-parametric) GPs; see Sec. I B and Fig. 48.

The likelihood here is now given by

$P\!\left(\bar{\bar{\mathcal{W}}}, \overline{\overline{\Delta t}} \mid \vartheta\right) = \prod_i \prod_{k_p} P\!\left(\mathcal{W}_{k_p}^i \mid \vartheta\right) P\!\left(\Delta t_{k_p}^i \mid \vartheta\right),$ (115)

where $\mathcal{W}_{k_p}^i$, for the $k_p$th pulse and ith pixel, is a binary variable designating whether a laser pulse leads to a photon detection or not. As before, ϑ collects all unknowns, including the inverse lifetimes $\lambda_{1:M}$, the multi-pixel lifetime maps $\Omega_{1:M}$, the loads $b_{1:M}$, and the hyper-parameters $\nu_{1:M}$ for each species. Further, double overbars represent the set of all possible values for the pair of indices associated with the corresponding parameter. Here, the likelihood associated with photon arrival times is similar to Eq. 114 and given by

$P\!\left(\Delta t_{k_p}^i \mid \vartheta\right) = \left\{\sum_{m=1}^{M} \pi_m \sum_{n=0}^{N_{\mathrm{pl}}} \frac{\lambda_m}{2}\, \exp\!\left[\frac{\lambda_m}{2}\!\left(2\left(\tau_{\mathrm{IRF}} - \Delta t_{k_p}^i - nT\right) + \lambda_m \sigma_{\mathrm{IRF}}^2\right)\right] \operatorname{erfc}\!\left[\frac{\tau_{\mathrm{IRF}} - \Delta t_{k_p}^i - nT + \lambda_m \sigma_{\mathrm{IRF}}^2}{\sqrt{2}\,\sigma_{\mathrm{IRF}}}\right]\right\}^{\mathcal{W}_{k_p}^i},$ (116)

reducing to one for pulses that do not yield any photon detection (empty pulses with $\mathcal{W}_{k_p}^i = 0$). In the above, the weights, $\pi_{1:M}$, are directly related to the lifetime maps by $\pi_m^i = (1 - P_0^{mi}) \prod_{q \neq m} P_0^{qi}$ [53], where $P_0^{mi}$ reflects the probability of no photon detection within the ith pixel from the mth species, given by

$P_0^{mi} = \exp\!\left[-b_m \int \Omega_m(\mathbf{r})\, U_{\mathrm{cf}}\!\left(\boldsymbol{\xi}_i - \mathbf{r}\right) \mathrm{d}\mathbf{r}\right],$ (117)

where $\boldsymbol{\xi}_i$ is the center of the ith pixel. Moreover, $b_m$ denotes the load associated with the mth lifetime map (see Sec. I B), on which we place Beta-Bernoulli process priors (just as in Box IV C) to deduce the number of lifetime maps introduced by the fluorophore species present within the data. As a sanity check, we note that for species with $b_m = 0$, the probability of no photon detection is one.

Having illustrated how we compute $P(\Delta t_{k_p}^i \mid \vartheta)$, we now compute $P(\mathcal{W}_{k_p}^i \mid \vartheta)$ following the observation that $\mathcal{W}_{k_p}^i$ is Bernoulli distributed with success probability $1 - \pi_0^i$

$\mathcal{W}_{k_p}^i \sim \text{Bernoulli}\!\left(1 - \pi_0^i\right).$ (118)

Here, $\pi_0^i$ is the probability of no photon detection from the ith pixel, given by $\pi_0^i = \prod_{m=1}^{M} P_0^{mi}$.

After introducing the likelihoods, we construct the posterior, proportional to the product of the likelihood and priors over all unknown parameters. Our framework is doubly non-parametric as we use: GP priors over continuous lifetime maps; and Beta-Bernoulli process priors over the loads; see Sec. I B. The GP prior over each lifetime map comprises an infinite set of correlated random variables, i.e., the value of the map at every point in space,

$\Omega_m \sim \mathrm{GP}\!\left(\nu_m, K\right),$ (119)

where K and $\nu_m$ denote, respectively, the correlation kernel (also termed a covariance matrix) and the GP prior's mean. The remaining priors are either physically or computationally motivated; see Box IV C.

With the posterior at hand, we once more make inferences on ϑ by drawing samples from the posterior with Monte Carlo methods. Of note are elliptical slice samplers [31], used to sample the lifetime maps as the GP prior and the likelihood do not form a conjugate pair.
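For concreteness, an elliptical slice sampling step can be sketched as follows. The toy example below targets a 1D GP posterior with a simple Gaussian likelihood standing in for the FLIM likelihood (kernel, noise level, and data are all illustrative); the update proposes points on an ellipse through the current state and a fresh prior draw, shrinking the angular bracket until acceptance:

```python
import numpy as np

# Elliptical slice sampler (Murray et al. 2010) on a toy 1D GP posterior.
# The Gaussian likelihood below is an illustrative stand-in for the FLIM
# likelihood; kernel, noise level, and data are all assumptions.
rng = np.random.default_rng(1)

n = 30
x = np.linspace(0, 1, n)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-8 * np.eye(n)
chol = np.linalg.cholesky(K)              # for drawing GP prior samples

data = np.sin(2 * np.pi * x)              # toy observations
noise = 0.2

def log_like(f):
    return -0.5 * np.sum((data - f)**2) / noise**2

def ess_step(f):
    nu = chol @ rng.standard_normal(n)    # prior draw defining the ellipse
    log_y = log_like(f) + np.log(rng.uniform())      # slice level
    theta = rng.uniform(0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_like(f_new) > log_y:
            return f_new                  # accepted: rejection-free update
        if theta < 0:
            lo = theta                    # shrink bracket toward theta = 0
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

f = np.zeros(n)
for _ in range(500):
    f = ess_step(f)
print("RMS deviation from data:", float(np.sqrt(np.mean((f - data)**2))))
```

The appeal of this update is that it requires no tuning parameters and always terminates with an accepted state, which is why it pairs naturally with GP priors and non-conjugate likelihoods.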

D. Structured illumination microscope

As discussed in Sec. III C, a major drawback of wide-field fluorescence imaging is the lack of optical sectioning arising from the OTF's missing cone; see Fig. 13. This, in turn, yields out-of-focus blur degrading the final images. Previously, we discussed near-field and point scanning methods where, for example, conventional confocal microscopes achieved optical sectioning via pinholes; see Sec. IV B 1. Here, we discuss how SIM achieves both optical sectioning and resolution beyond the diffraction limit [178, 278–281].

Patterned illumination, with a high spatial stripe contrast near the focal plane [282], was introduced in an effort to attain optical sectioning. The pattern, whose illumination contrast ideally fades away from the focal plane, was then translated twice, yielding three images $\Lambda_l$ with corresponding phase offsets $\phi_l$, $l = 0{:}2$. One way to attain optical sectioning was to form three difference images, $\Delta_{ll'}(\mathbf{r}) = \Lambda_l(\mathbf{r}) - \Lambda_{l'}(\mathbf{r})$, and combine them according to

$\Lambda_{\mathrm{sec}}(\mathbf{r}) = \sqrt{\Delta_{01}(\mathbf{r})^2 + \Delta_{12}(\mathbf{r})^2 + \Delta_{20}(\mathbf{r})^2}, \qquad \phi_l = \frac{2l\pi}{3},\; l = 0{:}2.$ (120)

The hope was that by subtracting images, unmodulated (out-of-focus) contributions cancel as they are approximately homogeneously illuminated.
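This cancellation is easy to verify numerically. The sketch below (an illustrative 1D signal and a perfectly homogeneous out-of-focus background) forms three phase-shifted images and combines their pairwise differences according to Eq. 120; the background drops out exactly while the in-focus signal survives:

```python
import numpy as np

# Three-phase optical sectioning, Eq. 120: the modulated in-focus signal
# survives while a homogeneous out-of-focus background cancels exactly.
# Signal shape and numbers are illustrative.
x = np.linspace(0, 10, 1000)                 # lateral coordinate (um)
k = 2 * np.pi / 1.0                          # illumination pattern frequency
in_focus = 1.0 + 0.8 * np.exp(-(x - 5)**2)   # in-focus sample feature
background = 3.0                             # out-of-focus (unmodulated) light

images = []
for l in range(3):
    phi = 2 * l * np.pi / 3                  # phase offsets of Eq. 120
    illum = 0.5 * (1 + np.cos(k * x + phi))
    images.append(in_focus * illum + background)

d01 = images[0] - images[1]
d12 = images[1] - images[2]
d20 = images[2] - images[0]
sectioned = np.sqrt(d01**2 + d12**2 + d20**2)    # Eq. 120

widefield = sum(images) / 3                  # phase-averaged (no sectioning)
print("minimum level, widefield vs sectioned:",
      round(float(widefield.min()), 2), round(float(sectioned.min()), 2))
```

With three phases spaced by 2π/3, the square-root combination is also free of residual stripe modulation, which is why exactly these phase offsets are chosen.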

These early efforts ultimately motivated structured illumination as a means to achieve higher resolution [280, 281], which we now discuss by considering the SIM image formation model. SIM images are generated from the product of the fluorophore distribution, $S(\mathbf{r})$ (see Sec. IV C), and the illumination intensity pattern, $I_{\mathrm{ex}}(\mathbf{r})$, followed by convolution with the microscope's wide-field detection PSF (also see Eq. 52)

$\Lambda(\mathbf{r}) = \mathcal{B} + I\left[S(\mathbf{r})\, I_{\mathrm{ex}}(\mathbf{r})\right] \otimes U_{\mathrm{wf}}(\mathbf{r}),$ (121)

where $\mathcal{B}$ is the background arising from out-of-focus fluorescent features, ignored here for simplicity only, and $U_{\mathrm{wf}}(\mathbf{r})$ and I are, respectively, the wide-field PSF (e.g., see Sec. III E) and the fluorophore brightness per frame.

While various modulated illumination patterns are conceivable for SIM [283–285], in practice, the sample is typically illuminated using a sinusoidal intensity, $I_{\mathrm{ex}}(\mathbf{r})$, with different in-plane phases and angles (see Fig. 49) achieved by interference-based methods [286, 287] or using laser scanning [288–290].

FIG. 49:

Sinusoidal illumination pattern for SIM microscopy. Here, ki is the wave vector, L is the fringe spacing, and γi is the illumination’s in-plane angle. The phase is related to the position of the maxima relative to the optical axis.

Under the former method, such intensity patterns are generated by interfering two to three laser beams, followed by rotation and translation of the grating embedded within the setup’s illumination arm. For two beam interference, the image formation is described by

$$\Lambda_{li}(r)=I\left[S(r)\,\frac{1}{2}\left(1+M\cos\left(r\cdot k_i+\phi_l\right)\right)\right]\ast U_{\mathrm{wf}}(r),\qquad \gamma_i=\arctan\frac{k_{xi}}{k_{yi}},\quad L=\frac{2\pi}{\sqrt{k_{xi}^2+k_{yi}^2}}, \quad (122)$$

where $M$, the modulation depth, is assumed to be one in subsequent calculations for simplicity; $k_i$ is the wave vector, with components $k_{xi}$ and $k_{yi}$, defining the oscillatory pattern’s period, i.e., the fringe spacing denoted by $L$; see Fig. 49. Next, $\gamma_i$ and $\phi_l$ are, respectively, the $i$th in-plane illumination angle and the $l$th phase offset determining the position of the maxima relative to the optical axis; see Fig. 49 and Eq. 120.

The improved resolution is achieved by exploiting the frequency mixing, i.e., the moiré effect, between the excitation pattern and the sample’s spatial frequencies. That is, previously unobservable high frequency information, beyond the wide-field OTF’s support, is shifted down into the microscope’s band-pass (i.e., the frequency support of the microscope’s OTF); see Figs. 21 and 50.
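The frequency mixing can be illustrated numerically. The following is a minimal 1D sketch, assuming a "sample" consisting of a single spatial frequency and an integer-cycle sinusoidal illumination; all frequency values are illustrative choices.

```python
import numpy as np

# 1D sketch of the moire effect: a sample frequency beyond a detection
# cut-off is shifted into the band-pass by sinusoidal illumination.
# The frequencies below are illustrative choices.
n = 1024
x = np.arange(n) / n
f_sample, f_illum = 200, 180                     # cycles across the window
sample = np.cos(2 * np.pi * f_sample * x)        # "structure" S(x)
illum = 1.0 + np.cos(2 * np.pi * f_illum * x)    # pattern, cf. Eq. 122

spectrum = np.abs(np.fft.rfft(sample * illum))
peaks = np.flatnonzero(spectrum > 1.0)
# Energy now sits at f_sample and at f_sample -/+ f_illum; the difference
# frequency (20 cycles here) is low enough to pass a band-limited OTF.
print(peaks)
```

The product spectrum carries peaks at 20, 200, and 380 cycles: the low "moiré" frequency at 20 would survive a detection cut-off that blocks the original 200-cycle structure.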

FIG. 50:

The SIM OTF. The left and middle panels, respectively, illustrate Fourier transforms of the modulated illumination intensity (SIM excitation OTF given by the three delta-peaks) and wide-field detection. The right panel shows the SIM OTF obtained by convolution of the two other panels; see also Eq. 123.

The effect of structured illumination is most intuitively demonstrated in Fourier space. For the sinusoidal pattern given in Eq. 122, the image’s Fourier representation reads

$$\begin{aligned}\tilde{\Lambda}_{li}(k)&=I\left[\tilde{S}(k)\ast \tilde{I}_{\mathrm{ex}}(k)\right]\mathrm{OTF}_{\mathrm{wf}}(k)=I\left[\tilde{S}(k)\ast\left(\delta_0+\tfrac{1}{2}e^{+i\phi_l}\delta_{k+k_i}+\tfrac{1}{2}e^{-i\phi_l}\delta_{k-k_i}\right)\right]\mathrm{OTF}_{\mathrm{wf}}(k)\\ &=I\left[\tilde{S}(k)+\tfrac{1}{2}e^{+i\phi_l}\tilde{S}(k+k_i)+\tfrac{1}{2}e^{-i\phi_l}\tilde{S}(k-k_i)\right]\mathrm{OTF}_{\mathrm{wf}}(k)\\ &=\tilde{\Lambda}_0(k)+\tfrac{1}{2}e^{+i\phi_l}\tilde{\Lambda}_+(k+k_i)+\tfrac{1}{2}e^{-i\phi_l}\tilde{\Lambda}_-(k-k_i), \end{aligned} \quad (123)$$

where $\mathrm{OTF}_{\mathrm{wf}}(k)$ denotes the wide-field OTF (see Fig. 21 and the middle panel of Fig. 50), and the sinusoidal illumination pattern (for a given angle and phase) is described by three different frequencies in the Fourier domain (see the left panel in Fig. 50), yielding the three SIM harmonics $\tilde{\Lambda}_0$, $\tilde{\Lambda}_+$, $\tilde{\Lambda}_-$.

In Eq. 123 the first delta function within the parentheses coincides with the Fourier representation of uniform (wide-field) illumination. The two subsequent terms, however, arise from the illumination patterning. These additional terms are two copies of the sample’s Fourier representation $\tilde{S}(k)$, phase shifted by $\phi_l$ and frequency shifted by $k_i$, providing extra information compared to wide-field microscopy.

Supposing the OTF cut-off frequency is $k_c$, the frequency-shifted components contain high frequency information otherwise absent in the central component: sample frequencies $k$ with $|k|>k_c$ are still captured whenever the shifted frequencies $k+k_i$ or $k-k_i$ fall within the cut-off. When imaged, only frequencies inside the support of the wide-field OTF are captured. However, sample information from different (higher) frequency regions now lies within the microscope’s band-pass; see Fig. 50.

While the three SIM harmonics $\tilde{\Lambda}_0$, $\tilde{\Lambda}_+$, $\tilde{\Lambda}_-$ (wide-field and ± pattern wave vector) already contain frequencies beyond the wide-field band-pass, no sub-diffraction resolution can yet be achieved. This is because these components overlap in frequency space. In order to unmix the overlapping parts, we need to acquire at least three images with different pattern phases $\phi_l$, designated by $\tilde{\Lambda}_{li}(k)$ in Fourier space. The relation between the three SIM harmonics and these images is best shown in matrix form

$$\begin{pmatrix}\tilde{\Lambda}_{0i}(k)\\ \tilde{\Lambda}_{1i}(k)\\ \tilde{\Lambda}_{2i}(k)\end{pmatrix}=\begin{pmatrix}1 & 0.5\,e^{i\phi_0} & 0.5\,e^{-i\phi_0}\\ 1 & 0.5\,e^{i\phi_1} & 0.5\,e^{-i\phi_1}\\ 1 & 0.5\,e^{i\phi_2} & 0.5\,e^{-i\phi_2}\end{pmatrix}\begin{pmatrix}\tilde{\Lambda}_0(k)\\ \tilde{\Lambda}_+(k+k_i)\\ \tilde{\Lambda}_-(k-k_i)\end{pmatrix}.$$

Here, we used a mixing matrix (the square matrix on the right) with phases evenly spaced between 0 and 2π. This allows us to solve for $\tilde{\Lambda}_0$, $\tilde{\Lambda}_+$, and $\tilde{\Lambda}_-$, i.e., to unmix the SIM harmonics. The unmixed components are then recombined by shifting them so that their true zero frequency is aligned with the zero frequency in Fourier space, i.e., the $k_0$ setting. This yields an effective OTF extended to frequencies beyond the original OTF’s support and thus yields high resolution SIM images, i.e., fluorophore densities S(r) [291, 292].
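The unmixing step amounts to a linear solve at each spatial frequency. A minimal sketch, with illustrative harmonic values at one frequency $k$:

```python
import numpy as np

# Per-frequency unmixing of the three SIM harmonics with the 3x3 mixing
# matrix: three evenly spaced phases make the matrix invertible.
# The harmonic values at this single frequency k are illustrative.
phases = 2.0 * np.pi * np.arange(3) / 3.0
M = np.column_stack([np.ones(3),
                     0.5 * np.exp(+1j * phases),
                     0.5 * np.exp(-1j * phases)])

harmonics = np.array([2.0 + 0.0j, 0.3 - 0.1j, 0.3 + 0.1j])  # (center, +, -)
measured = M @ harmonics         # three phase-stepped images at frequency k

unmixed = np.linalg.solve(M, measured)
```

In a full reconstruction this solve is repeated at every frequency bin and for every pattern orientation before the unmixed components are shifted and recombined.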

Several techniques, mostly operating within the Fourier domain, unmix the SIM harmonics to reconstruct SIM images [285, 291–302]. Ideally, reconstruction requires knowledge of multiple imaging system properties including the exact OTF, pattern frequency, phases, and modulation depth (e.g., see Eq. 122). Inaccurately specified properties can result in imperfect SIM reconstructions typically exhibiting well-known artifacts [303]. For instance, refractive index mismatch (see Fig. 19) may lead to repeated features along the z axis known as “ghosting”. Similarly, fine hexagonal “honeycomb” pseudo-structures can arise when neglecting background ($B$ of Eq. 121) in 2D SIM images, and a false $k_0$ setting impacting the OTF leads to so-called “hatching”, i.e., the appearance of angle-specific stripes in one or more directions, to name only a few.

Working in real space not only allows us to cleanly propagate uncertainty (as all data are collected in real space) but also avoids artifacts tied to the Fourier domain, such as the $k_0$ setting. For this reason, we review SIM reconstruction in real space [41].

The total likelihood over the data is the product of likelihood models corresponding to each phase ϕl and wave vector ki

$$P\left(\bar{\bar{w}}_{1:N}\mid\bar{\bar{\Lambda}}_{1:N}\right)=\prod_{i=1}^{3}\prod_{l=1}^{3}\prod_{n=1}^{N}P\left(w_{nli}\mid\Lambda_{nli}\right), \quad (124)$$

where double overbars represent all possible values of $i$ and $l$ (one overbar per index) and where $P\left(w_{nli}\mid\Lambda_{nli}\right)$ is the likelihood over a single pixel. Here, $w_{nli}$ and $\Lambda_{nli}$, respectively, denote the observed (data) and expected photon counts over pixel $n$ using an illumination with phase $\phi_l$ and wave vector $k_i$. The expected photon count is given by (see Eqs. 344)

$$\Lambda_{nli}=\int_{\mathcal{A}_n}dx\,dy\,\Lambda_{li}(r), \quad (125)$$

where $\Lambda_{li}(r)$ is given by Eq. 122 and $\mathcal{A}_n$ is the pixel area. Assuming high SNR and the Charge-Coupled Device (CCD) camera noise model of Eq. A12, we arrive at the following single pixel likelihood

$$P\left(w_{nli}\mid\Lambda_{nli}\right)=\mathrm{Gaussian}\left(w_{nli};\,g\Lambda_{nli}+o,\,\sigma_w^2\right), \quad (126)$$

where $g$, $o$, and $\sigma_w^2$, respectively, are the camera gain, offset, and read-out variance; see Appendix A.
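A quick numerical check of this camera model (the gain, offset, and noise values below are assumptions, not calibrated values): simulating many readouts of a single pixel recovers the expected mean ADU level $g\Lambda+o$.

```python
import numpy as np

# Simulation of the CCD readout model of Eq. 126 for a single pixel:
# ADU counts w = g*Lambda + o + Gaussian read-out noise.
# Gain, offset, and noise values below are assumed for illustration.
rng = np.random.default_rng(0)
g, o, sigma_w = 2.2, 100.0, 1.6     # gain (ADU/photon), offset (ADU), read-out std
expected_photons = 50.0             # Lambda for this pixel

samples = g * expected_photons + o + rng.normal(0.0, sigma_w, size=100_000)
print(samples.mean())               # close to g*Lambda + o = 210 ADU
```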

Finally, we now present a Bayesian framework required for rigorous noise propagation from the SIM data [41]. Within this framework, we consider priors over unknowns including the GP priors (see Sec. I B) over the fluorophore distribution, S(r), and priors over the GP’s covariance kernel, K(ν). These parameters are collectively re-grouped under ϑ={S(r),ν}. The complete framework is described in Box IV D.

Statistical Framework IV D: SIM.

Data: pixel values in ADUs

$$\bar{\bar{w}}_{1:N}=\left\{\bar{\bar{w}}_1,\ldots,\bar{\bar{w}}_N\right\}.$$

Parameters: fluorophore distribution, GP covariance kernel parameter (hyper-parameter)

$$\vartheta=\left\{S(r),\nu\right\}.$$

Likelihood:

$$p\left(\bar{\bar{w}}_{1:N}\mid\vartheta\right)=\prod_{i=1}^{3}\prod_{l=1}^{3}\prod_{n=1}^{N}\mathrm{Gaussian}\left(w_{nli};\,g\Lambda_{nli}+o,\,\sigma_w^2\right).$$

Priors:

$$S(r)\sim\mathrm{GP}\left(0,K(\nu)\right),\qquad \nu\sim\mathrm{Gamma}\left(\alpha_\nu,\beta_\nu\right). \quad (127)$$

Posterior:

$$P\left(\vartheta\mid\bar{\bar{w}}_{1:N}\right)\propto P\left(\bar{\bar{w}}_{1:N}\mid\vartheta\right)P\left(\vartheta\right).$$

Finally, we numerically sample the posterior to learn the unknowns ϑ. The sampling procedure is particularly straightforward for this SIM framework as the GP prior is conjugate to the Gaussian likelihood, resulting in a closed form posterior. At low SNR, this procedure fails, as the GP prior allows negative values for the fluorophore distribution, and an alternative method is proposed.
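Because the model is linear-Gaussian, the conjugate update can be sketched directly. The following is a minimal 1D sketch with an assumed squared-exponential kernel and an arbitrary stand-in for the illumination-plus-PSF operator; none of the specific values are from Ref. [41].

```python
import numpy as np

# Linear-Gaussian sketch of the conjugate update behind Box IV D:
# data w ~ N(g*A s + o, sigma^2 I) with a zero-mean GP prior s ~ N(0, K)
# over the fluorophore map s gives a closed-form Gaussian posterior.
# Kernel, operator A, and all numerical values are illustrative assumptions.
rng = np.random.default_rng(1)
n_pix, n_src = 30, 30
g, o, sigma = 2.0, 100.0, 1.5

xs = np.linspace(0.0, 1.0, n_src)
# Squared-exponential covariance with small jitter for numerical stability
K = np.exp(-0.5 * (xs[:, None] - xs[None, :])**2 / 0.1**2) + 1e-6 * np.eye(n_src)

A = np.abs(rng.normal(size=(n_pix, n_src)))     # stand-in for illumination + PSF
s_true = rng.multivariate_normal(np.zeros(n_src), K)
w = g * A @ s_true + o + rng.normal(0.0, sigma, n_pix)

# Conjugate update: precisions add; the mean solves a linear system.
prec = (g / sigma)**2 * A.T @ A + np.linalg.inv(K)
cov_post = np.linalg.inv(prec)
mean_post = cov_post @ ((g / sigma**2) * A.T @ (w - o))
```

The posterior covariance is never larger than the prior covariance, reflecting the information gained from the data; note also how nothing in this update prevents negative posterior values for s, the low-SNR failure mode mentioned above.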

The SIM experiment described, combined with image reconstruction, typically achieves resolutions up to approximately two times better than the diffraction limit. This is because, in practice, the illumination pattern is also diffraction-limited, implying that its corresponding Fourier peaks lie within the support of the system’s wide-field OTF, limiting the resolution improvement to a factor of about two (not considering, e.g., the Stokes shift of fluorescence emission; see Sec. II). The resolution of the SIM image is then approximately $2\pi/(k_c+|k_i|)$ along the direction of $k_i$; see Eqs. 57 and 72–73. The process has to be repeated for at least three orientations $k_i$, $i=1{:}3$, to achieve near isotropic lateral resolution enhancement.

Resolution improvement using structured illumination can also be combined with illumination modalities other than wide-field epi-fluorescence providing optical sectioning, such as TIRF [304], grazing incidence illumination [305], or light-sheet microscopy [306308].

While the above discussion was focused on 2D SIM, the principle extends to 3D by using three (or more) interfering beams generating a laterally and axially varying illumination pattern [286, 309, 310]. In three-beam interference, five phase shifts are necessary to unambiguously unmix frequencies, resulting in five SIM harmonics for each of the three orientations $k_i$, as opposed to three for 2D SIM; see Eq. 123. This leads to 15 SIM harmonics, requiring 15 images to unambiguously unmix the harmonics. This process has to be repeated for each z-position. Although more complicated than 2D SIM, 3D SIM achieves approximately twofold resolution improvement and optical sectioning, as the OTF copies in 3D SIM overlap and fill the wide-field OTF’s missing cone; see Fig. 13.

All SIM implementations mentioned thus far use linear fluorescence excitation. This has the advantage of being relatively gentle to living samples as low excitation intensities can be used compared to other super-resolution imaging methods employing non-linear response of fluorophores to excitation light; see Secs. II and V. While the SIM resolution improvement is restricted to approximately twofold as the illumination pattern itself is limited by diffraction, higher resolution is achievable by combining SIM with non-linear fluorophore photo-physics [311, 312]; see Secs. II and V A.

For instance, resolution improvement beyond twofold was achieved by combining structured illumination with saturation of the excited state emission, i.e., increasing the excitation intensity above a threshold where fluorophores spend a longer time in the excited than the ground state [312], termed Saturated SIM (SSIM). In such regimes, the fluorophore response to intensities exceeding the saturation threshold remains unchanged and thus the effective intensity seen by fluorophores is the saturation intensity. As such, beyond the saturation threshold the effective intensity pattern seen by fluorophores starts deviating from a sinusoid, approaching a flat-topped sinusoidal pattern. Such distorted patterns contain more than three harmonics, shifting more frequencies into the band-pass of the microscope than sinusoidal patterns do; see Fig. 49. The frequency unmixing, however, now involves more displaced SIM harmonics in Fourier space, requiring more images to separate. When this process is repeated at multiple orientations, SSIM achieves isotropic lateral resolution; approximately 50 nm was demonstrated on fluorescent beads in Ref. [312].

Alongside higher spatial resolution comes higher computational complexity in unmixing SIM harmonics; moreover, the high intensities required for saturation prevent its use in biological imaging. Instead, photo-switchable fluorescent proteins (see Sec. II), cycling between dark and bright states at much lower intensities, can be used while remaining live-cell compatible. By combining dye photo-switching with structured illumination patterns, resolutions similar to SSIM are achieved [313, 314].

E. Light-sheet microscope

Optical sectioning motivated the development of 3D microscopy modalities such as Light-Sheet Fluorescence Microscopy (LSFM) [66]. LSFM allows optical sectioning, i.e., increases the OTF’s kz content, by generating a thin light sheet [315, 316]. In doing so, LSFM simultaneously minimizes out-of-focus fluorescence, otherwise present in naive wide-field microscopy (see Fig. 13), and reduces sample photo-damage [316, 317].

In LSFM, illumination and light collection paths are orthogonal providing volumetric information on the sample when axially scanning the illumination sheet; see Fig. 51 [318]. This setup facilitates faster volumetric imaging in contrast to previously discussed point-by-point scanning (see Sec. IV B 1). Moreover, LSFM achieves optical sectioning through illumination in contrast to other modalities, e.g., CLSM, where sectioning is possible only along the detection path while illuminating large portions of the specimen along the excitation path [319]. Indeed, while TIRF (see Sec. IV A) avoids this unnecessary light dose, it is restricted to volumes neighboring the illuminated surface.

FIG. 51:

LSFM setups. (a) In Digitally scanned laser Light-Sheet Microscopy (DLSM) a galvanometric (galvo) scanning unit rapidly moves a Gaussian beam perpendicular to the detection axis focused in the sample through the excitation objective lens OLex. Signal from the excited focal plane is collected through the detection objective lens OLdet and tube lens (TL) onto a camera (C). (b) In SPIM, a static light-sheet is formed by a cylindrical lens in the excitation path creating an elongated beam in one direction (above) and the same perpendicular detection optics as in panel a. (c) A schematic of the Gaussian beam in panels a-b focused through a lens or objective with diameter D, beam waist w0 and Rayleigh length zr.

In modern LSFM, there are two main approaches to generating a thin light-sheet. In the first approach, a digitally scanned laser moves rapidly along a direction perpendicular to the detection axis to achieve a thin light-sheet, termed Digitally scanned laser Light-Sheet Microscopy (DLSM) [320]; see Fig. 51a. In the second approach, termed Selective Plane Illumination Microscopy (SPIM) [67], a cylindrical lens is typically used along the excitation path to form an astigmatic Gaussian beam effectively elongating the beam in one dimension to generate a thin, static light-sheet; see Fig. 51b. The SPIM OTF, shown in the right panel of Fig. 52, is obtained by convolving the SPIM light-sheet’s Fourier representation (SPIM excitation OTF, left panel) with the wide-field detection OTF (middle panel). Compared to the wide-field OTF in Fig. 21, the resulting SPIM OTF has a larger band-pass along the z-axis, facilitating optical sectioning.

FIG. 52:

SPIM OTF. Here, excitation is achieved by focusing a plane wave through a low-aperture lens (NA = 0.4) from the left, resulting in a weakly diverging horizontally elongated excitation region. See further details in the main text.

For the Gaussian beam described above [67, 320], LSFM’s axial resolution is, to a first approximation, related to the Gaussian beam’s thickness, twice the beam waist, $z_{\min}=2w_0$; see Fig. 51c. Similarly, the FOV is related to the extent of the elongated Gaussian beam, given by twice the Rayleigh length, $2z_r$ [316]

$$z_{\min}\simeq 2w_0=\frac{4\lambda f}{\pi D}=\frac{2n\lambda}{\pi\,\mathrm{NA}}, \quad (128)$$
$$\mathrm{FOV}=2z_r=\frac{2\pi w_0^2}{\lambda}, \quad (129)$$

where $f$ and $D$ are, respectively, the focal length and lens diameter, with $\mathrm{NA}=nD/(2f)$, which is often smaller than 0.8 for light-sheet microscopes.

The improvement in axial resolution afforded by LSFM becomes clear when comparing Eq. 128 to the wide-field axial resolution given approximately by Eq. 15, and derived differently in Eq. 73. According to Eqs. 128 and 129, while thinner light-sheets (smaller $w_0$) improve axial resolution, they lead to smaller FOVs because illumination uniformity worsens across the FOV. Such non-uniform illumination may also result in varying PSFs and OTFs across the FOV [321].
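Eqs. 128–129 can be evaluated numerically to illustrate this trade-off; the wavelength and refractive index below are assumed example values, not values from the text.

```python
import numpy as np

def sheet_params(NA, lam=0.488, n_medium=1.33):
    """Light-sheet thickness (Eq. 128) and FOV (Eq. 129), lengths in um.

    The wavelength and refractive index defaults are assumed example values.
    """
    z_min = 2.0 * n_medium * lam / (np.pi * NA)   # thickness z_min = 2*w0
    w0 = z_min / 2.0
    fov = 2.0 * np.pi * w0**2 / lam               # FOV = 2*z_r
    return z_min, fov

for NA in (0.1, 0.3, 0.6):
    z_min, fov = sheet_params(NA)
    print(f"NA={NA}: thickness ~ {z_min:.2f} um, FOV ~ {fov:.2f} um")
```

Raising the NA from 0.1 to 0.6 thins the sheet roughly sixfold but shrinks the usable FOV by over an order of magnitude, since the FOV scales with the waist squared.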

To soften the above trade-off and achieve simultaneous high axial resolutions and large FOVs, a few attempts have been made employing alternatives to Gaussian beams including: Bessel beams [284, 322]; Bessel beam lattices [323]; Airy beams [324, 325]; spherically aberrated beams [326]; and double beams [327]. While these beams achieve a Rayleigh length typically larger than the Gaussian beam, it is unclear in practice whether high axial resolutions and contrasts are maintained [328–331]. This is because these alternative beams exhibit strong side-lobes whose glare worsens axial resolution and contrast. Moreover, due to these side-lobes, the complex form of the resulting OTF does not lend itself to resolution estimates relying on Eq. 57 or Eq. 129 [328, 330].

Further efforts at rejecting the light contribution from these side-lobes combined LSFM with CLSM, SIM, and two-photon microscopy [284, 332, 333]. Moreover, the concepts of Reversible Saturable OpticaL Fluorescence Transitions (RESOLFT) (later introduced in Sec. V A) and STED have been used in conjunction with SPIM to surpass the diffraction limit axially [334, 335]. Light-sheet illumination has also been combined with non-linear fluorophore response to light (see Sec. II) for SMLM [336–338].

What is more, since the lateral and axial resolutions differ, advanced LSFM configurations avoid anisotropic resolution by using multiple objectives generating different views of the specimen. These images are then computationally fused, yielding improved isotropic resolution [339–342]. Another approach, Axial Swept Light-sheet Microscopy (ASLM) [318, 343, 344], generates isotropic images by scanning the sample laterally, i.e., perpendicular to the detection arm, with a tightly focused light-sheet synchronized with a moving camera shutter. This only allows fluorescence originating from the well-focused parts of the light-sheet to reach the camera.

On the engineering front, orthogonal detection and illumination through separate objectives (see Fig. 51) pose technical challenges when using two bulky, high-NA objectives, i.e., NA ≥ 0.8. As such, multiple modifications to conventional LSFM have been proposed. For instance, the iSPIM (inverted SPIM) design uses two objectives (with NA = 0.8–1.1) at a 45° angle with respect to the coverslip [345]. More recently, different approaches have been developed achieving illumination and fluorescence collection through a single objective, allowing use of higher NA objectives [337, 338, 346–348].

F. Multi-plane microscope

To improve upon wide-field microscopy’s low axial resolution, we may acquire images from multiple planes across samples. The simplest approach toward achieving this is moving the sample and focal plane with respect to each other; see Fig. 53a. However, this involves moving a large inertial object (sample, objective, or camera), introducing time lags between planes and mechanical perturbations. Fast, adaptive elements or small moving components in a more complex detection path can speed this up, but do not eliminate axial scanning. Acquiring data across multiple focal planes simultaneously, without moving the sample or optical components, has been achieved by introducing either refractive or diffractive optical elements into the detection arm. These elements split the fluorescent emission into multiple paths leading to simultaneous acquisitions from different focal planes [68, 69, 349–351]. For a more in-depth review of “snapshot” volumetric microscopy see, e.g., Ref. [352].

FIG. 53:

Multi-plane microscopy. (a) A conventional fluorescence microscope with epi-fluorescence (FL) and white light illumination (IL) acquires images of different focal planes across the sample by moving the objective lens (OL) and the sample with respect to each other. Here, the nominal focal plane is shown in black while the planes shown in red and blue can also be imaged by adjusting the axial position of, for example, the sample. Shown are the sample (S), objective lens (OL), dichroic mirror (DM), and tube lens (TL). (b) A multi-plane microscope relays the optical path from the intermediate image formed in panel a via a telescope with lenses of focal lengths F1 and F2 and uses a beam-splitting prism, i.e., a refractive element, along the detection path to separate fluorescence emission into multiple channels (here four) with different focal planes projected next to each other on two cameras (C1, C2); see Ref. [350]. (c) A multi-focus microscope uses a multi-focus grating (MFG), i.e., a diffractive element, chromatic correction grating (CCG) and prism (CCP) to achieve multiple focal planes on one camera; see text for more details.

Multi-plane, also termed multi-focus, microscopy is versatile and can be combined with wide-field fluorescence or light-sheet excitation [353] for a number of applications. These include: SPT [354, 355], super-resolution microscopy [356, 357] (for statistical modeling of multi-plane super-resolution SMLM data see Sec. V C and Eqs. 143–144), Super-resolution Optical Fluctuation Imaging (SOFI) [350, 358], structured illumination [359, 360], as well as single cell and whole organism imaging [349, 361, 362]. Furthermore, phase imaging [361, 363], polarization [364] and dark-field [361, 362] microscopy may also use a multi-plane setup.

In their simplest form, multi-plane microscopes use beam-splitters, i.e., refractive elements, in combination with optical detection paths of different lengths, or tube lenses with different foci [69, 357, 358, 362, 365]. In such setups, the inter-plane distance, and thus axial resolution, can be adjusted independently of the pixel size (tied to lateral resolution; see Sec. I C).

However, these versatile implementations are susceptible to misalignment of the detection channels due to opto-mechanical component drift, especially relevant in super-resolution microscopy; see Sec. V. A more elegant solution involves a cascade of beam-splitters fused into a single piece, i.e., a prism [350, 361], dividing the fluorescent light into multiple beams traveling optical paths of different lengths; see Fig. 53b. Here, increased mechanical stability arises from having all beam-splitting integrated into one optical element, i.e., the prism, minimizing chromatic aberration. This setup can also be extended to simultaneously image several colors across planes using spectral beam-splitters [366].

An alternative approach uses a Multi-Focus Grating (MFG), i.e., a diffractive element, splitting fluorescence emission into multiple paths corresponding to different diffraction orders. The grating pattern is designed to introduce diffraction order dependent de-focus phase shifts (see Sec. III F) leading to different focal planes for each path [68]; see Fig. 53c. However, the grating introduces chromatic dispersion, mitigated by introducing a Chromatic Correction Grating (CCG) and a prism (CCP) to reverse the dispersion due to the MFG [349] and separate the images laterally on the camera chip; see Fig. 53c. While aberration-corrected multi-focus grating design can further improve imaging of thicker samples [349, 367, 368], gratings have lower transmission, and new gratings are required to alter inter-plane distances.

V. SUPER-RESOLUTION MICROSCOPY

Resolution across fluorescence microscopy, as described in Sec. IV, is fundamentally limited by the frequency band-pass given by the corresponding OTFs. This restricts the maximum achievable resolution to approximately half of the emission wavelength under optimal conditions. This limit can be surpassed by exploiting the non-linearity in fluorophore response to excitation light; see Sec. II. This, in turn, has led to the development of two main categories of super-resolution, or nanoscopy, methods to which we now turn: 1) targeted switching; and 2) stochastic switching techniques.

A. Targeted switching super-resolution microscopy

1. Stimulated emission depletion microscopy

Previously introduced fluorescent imaging techniques such as confocal, light-sheet, and multi-plane microscopy improve axial resolution using different optical sectioning strategies. Optical sectioning limits the collected fluorescence to an axial subset of fluorescent molecules, preventing interference from fluorophores outside this axial subset. Although these techniques can significantly increase contrast and improve axial resolution, their resolution remains limited by the diffraction of light. On the other hand, super-resolution methods such as STED microscopy [70, 369], and its generalization, RESOLFT [370, 371], are based upon a traditional point scanning microscope with a confocal pinhole in the detection arm, allowing higher resolution imaging while retaining the axial sectioning of confocal microscopy.

STED imaging was first achieved in the mid-nineties by Hell and Wichmann [70] and its popularity grew thanks to the high spatial resolution, relatively high imaging speed, and considerable imaging depth. These made possible, for instance, the visualization of biomolecular assemblies and live-cell nanoscopy [371, 372].

In terms of temporal resolution, imaging times as fast as milliseconds for rapid dynamics in small fields of view were demonstrated by ultrafast STED nanoscopy [373], while spatially, the highest reported 3D isotropic resolution (< 30 nm in x, y, z simultaneously) was validated with the ultra-stable design of 4pi-based isoSTED [374].

In STED, spatial resolution improvement is achieved by adding a second de-excitation (depletion) laser quenching fluorescence around the excitation point, confining fluorescence emission to a sub-diffraction limited spot. Stimulated emission is one means by which to depopulate excited states. In this process, theoretically predicted by Albert Einstein [375], the incoming photon triggers the excited system to decay to its ground state, emitting a photon with a phase, frequency, polarization, and momentum identical to the incident photon; see Sec. II.

In STED, stimulated emission must precede spontaneous emission, requiring the excitation light to excite the sample (≈ 200 ps) prior to laser quenching. The whole imaging protocol is devised in two steps; see Fig. 54. First, fluorophores are excited by a diffraction-limited laser beam with a Gaussian waist shown in green in Fig. 54. If we wait until molecules spontaneously decay without stimulated emission, no gain in resolution will be achieved. Therefore, it is necessary to introduce the second step where a fraction of the fluorophores are depleted using a torus, or donut-shaped diffraction-limited beam shown in red in Fig. 54, whose central minimum coincides with the Gaussian excitation maximum. As such, the recorded signal only originates from the “donut hole” far narrower than the original Gaussian waist shown in orange in Fig. 54. To understand how STED beams are generated, see Sec. IV B 1 and Fig. 30.

FIG. 54:

Schematics for STED imaging. Excitation and depletion beams are used to acquire a sub-diffraction-limited image, formed after raster scanning the full sample. The resulting image can be understood as a convolution between the effective PSF combined from the excitation and depletion laser beams, and the fluorescent molecule distribution in the sample. The image is adapted from Refs. [370, 371]. Schematics on the left hand side compare diffraction-limited confocal images of microtubules with the coinciding STED image. On the right panel we show the electronic transitions of excitation and stimulated emission in STED (top), ground state depletion (GSD) (middle), and RESOLFT (bottom). The figure is adapted from Ref. [377].

The resolution gain in STED, ySTED, given below is set by the inner donut radius

$$y_{\mathrm{STED}}=\frac{\lambda}{2\,\mathrm{NA}\sqrt{1+I/I_{\mathrm{sat}}}}=\frac{y_{\min}}{\sqrt{1+I/I_{\mathrm{sat}}}}. \quad (130)$$

Here, $y_{\min}$ is the wide-field resolution (see Eq. 14), $I$ is the depletion laser intensity, and $I_{\mathrm{sat}}$ is the depletion intensity required for stimulated emission to outcompete spontaneous fluorescence emission.
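Eq. 130 is easy to evaluate; the following small sketch uses an assumed wavelength and NA to show the square-root scaling of resolution with depletion power.

```python
import numpy as np

def sted_resolution(I_over_Isat, lam=0.635, NA=1.4):
    """Eq. 130: STED resolution in um; wavelength and NA are assumed values."""
    y_min = lam / (2.0 * NA)                 # wide-field limit, cf. Eq. 14
    return y_min / np.sqrt(1.0 + I_over_Isat)

print(sted_resolution(0.0))      # no depletion: the diffraction limit
print(sted_resolution(100.0))    # strong depletion: over 10x sharper
```

Note the square root: gaining each further factor of two in resolution requires quadrupling the depletion intensity, which is why photo-damage becomes the practical limit.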

Although STED’s resolution can theoretically be made arbitrarily small provided a high enough depletion intensity $I$ [376], in practice, factors limiting resolution include the nature of the fluorophores used (and their absorption cross-section for the depletion beam), uncorrected (residual) aberrations of the STED pattern, SNR, as well as the STED beam’s relatively high power and propensity for label photo-damage.

Photo-damage can be mitigated by working with solid state fluorescent nanodiamonds hosting negatively charged nitrogen-vacancy (NV) point defects. Using such photo-stable labels, resolutions of ≈ 10 nm were demonstrated [378, 379]. However, the complex functionalization of relatively large (10–15 nm) solid-state probes, including issues related to specificity and cell permeability, limits their applications, especially in live-cell imaging.

While we have focused on 2D thus far, by using interference of two depletion beams (see implementation of 4pi microscopy introduced in Sec. IV B 3), STED super-resolution imaging has been extended to 3D [380, 381] though, in practice, axial resolution gain comes at the cost of lower lateral resolution.

2. Reversible saturable optically linear fluorescence transition microscopy

Numerous efforts in the last two decades have been undertaken to improve upon STED’s need for high power depletion beams [372]. RESOLFT, a more general method encompassing STED as a special case, was one such effort proposed in the early 2000s [371], leveraging fluorophore photo-physics. This, in turn, renders RESOLFT more appropriate for live-cell and long-term experiments [370], including 3D live-cell imaging using a recent implementation with highly parallelized image acquisition via an interference pattern [382].

In contrast to STED, where high laser power is required to deplete the excited state back to the ground state, RESOLFT uses donut-shaped beams to transition fluorophores into any dark state, not just the ground state; see Fig. 54. Thus RESOLFT requires fluorophores controllably switchable between dark (OFF) and bright (ON) states; see Fig. 54. For instance, such fluorophores include reversibly switchable fluorescent proteins and dyes [383, 384]. One such dark state is the triplet state (see Sec. II), at the basis of ground state depletion (GSD) [385], a special case of RESOLFT requiring less intense depletion laser powers; see Fig. 54.

3. Minimal photon fluxes

Due to the limited photo-stability of fluorophores, e.g., due to photo-bleaching, first generation nanoscopy methods such as STED and RESOLFT reached practical resolution limits of 20–40 nm. This motivated the development of a second generation of fluorescence nanoscopy techniques achieving 1–10 nm resolutions [386–392] leveraging patterned illumination.

The first implementations of such nanoscopy techniques include MINimal photon FLUXes (MINFLUX), introduced in 2017 [386], which extracts information from a limited photon budget and uses minimal laser intensities [386, 393, 394]. In contrast to STED, MINFLUX uses a donut-shaped beam for excitation, with the intensity minimum at its center. Here, to illustrate the MINFLUX concept, we assume a single fluorophore as shown in Fig. 55. The excitation beam is scanned across the sample and the fluorescence signal is collected by a confocal microscope. The number of collected photons depends on the excitation intensity received by the fluorophore and can be used to calculate the fluorophore’s distance from the beam’s center. For instance, fluorophores precisely at the donut-shaped beam’s center have minimal emission. However, as the fluorophore’s exact location is unknown, the beam scans the area at a few locations (see Fig. 55) and the fluorophore’s distances from the beam center locations (designated by blue dots in Fig. 55) are calculated to pinpoint the fluorophore with nanometer precision.
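A toy sketch of this idea follows (not the estimator of Ref. [386]): approximating the donut intensity near its zero as quadratic in distance and taking the brightness as known, the fluorophore position follows from a linear least-squares fit to the noiseless expected counts. All numerical values are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the MINFLUX localization idea (not the estimator of
# Ref. [386]): near its zero the donut intensity grows roughly
# quadratically with distance, so expected counts at beam position c_j are
# n_j ~ alpha * |r - c_j|^2. With the brightness alpha assumed known, the
# position r follows from a linear least-squares fit. Values illustrative.
alpha = 1000.0
r_true = np.array([0.012, -0.007])                     # fluorophore (um)
centers = np.array([[0.05, 0.0], [-0.05, 0.0],
                    [0.0, 0.05], [0.0, -0.05]])        # four beam positions

counts = alpha * np.sum((r_true - centers)**2, axis=1)  # noiseless counts

# n_j/alpha - |c_j|^2 = |r|^2 - 2 c_j . r : linear in (r_x, r_y, |r|^2)
A = np.column_stack([-2.0 * centers, np.ones(4)])
b = counts / alpha - np.sum(centers**2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
r_est = sol[:2]
```

In a real experiment the counts are Poisson distributed and the brightness is unknown, so MINFLUX instead uses a maximum-likelihood estimator over count ratios; the linear fit above only conveys the geometric intuition.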

FIG. 55:

MINFLUX’s working principle. MINFLUX employs a donut-shaped excitation beam (orange) with the donut translated to four locations (blue circles) at which fluorescence signals are measured and used to determine the fluorophore’s position. The red and dark stars, respectively, indicate excited and ground state fluorophores; see details in the text.

Recently, MINFLUX has been used to simultaneously perform 3D and multi-color imaging [394], achieving high isotropic localization precision (1–3 nm). In addition, MINFLUX has been used in SPT [393, 395], localizing with a precision below 20 nm within 100 μs [396].

The concept of localizing with respect to a patterned illumination has also been implemented using wide-field microscopy for faster imaging, substituting donut-shaped illumination with other illumination patterns [387–389]. For instance, in SIMFLUX [387] fluorophore locations are determined with respect to a sinusoidal pattern.

B. Stochastic switching super-resolution microscopy

Previously we described super-resolution methods based on targeted switching of fluorophores. Here, we discuss single molecule based super-resolution methods, a family of super-resolution techniques achieving sub-diffraction resolution by imaging independent and stochastically blinking fluorophores over time [397–399]. In these methods, the gain in spatial resolution is traded for temporal resolution, as the acquisition of many camera frames is required to computationally reconstruct a single super-resolved image. In such experiments, a conventional wide-field microscope is typically used to collect fluorescent light from (photo)activatable or switchable probes (see Sec. II A). Moreover, scanning image acquisitions have also been successfully used to implement super-resolution microscopy [400].

The most common use of stochastic switching occurs in techniques termed Single-Molecule Localization Microscopy (SMLM) [399]. In SMLM, spatially overlapping fluorophores are temporally separated by acquiring sequences of image frames. As only a few fluorophores (< 1%) switch on in each frame, high-precision localization is achieved by avoiding overlapping PSFs; see Fig. 2. The set of nanometer-resolved localizations is then used to reconstruct super-resolved structures; see Fig. 56.

FIG. 56:

Single emitters are stochastically activated to become fluorescent. The activated emitters can be precisely localized provided they are spaced further apart than the Nyquist limit; see Sec. I C. The process is repeated for tens of thousands of frames. In each frame, single emitters are identified and fitted to obtain their centers, allowing super-resolved pointillistic image reconstruction (see bottom right panel). Repeated activation, localization, and deactivation temporally separate spatially unresolved structures in a reconstructed image with apparent resolution gain compared to the standard diffraction-limited image; see bottom row.
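As a minimal illustration of the per-frame fitting step, the sketch below simulates one activated emitter on a small ROI with Poisson noise and recovers its sub-pixel center with a background-subtracted, intensity-weighted centroid, a crude stand-in for the Gaussian fits used in practice. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# One frame: a diffraction-limited spot on an 11x11 pixel ROI (values hypothetical)
size, sigma, photons, bg = 11, 1.3, 2000, 5
x0, y0 = 5.3, 4.6                              # true sub-pixel emitter center
yy, xx = np.mgrid[0:size, 0:size]
psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
frame = rng.poisson(bg + photons * psf / psf.sum())

# Localize: background-subtracted, intensity-weighted centroid
wts = np.clip(frame - np.median(frame), 0, None)
xc = (wts * xx).sum() / wts.sum()
yc = (wts * yy).sum() / wts.sum()
print(xc, yc)                                  # sub-pixel estimate near (5.3, 4.6)
```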

These methods, however, require localizing, by chance, well-separated molecules, thereby imposing long data acquisition times. More recently, therefore, a range of alternative techniques has been developed to improve image resolution while avoiding the identification and localization of single molecules [401, 402]. Rather, such methods analyze fluctuations in fluorescence emission over time and tolerate a wider range of switching behaviors and imaging conditions; they include SOFI [403, 404] (see Sec. V B 1), as well as SRRF [405], SPARCOM [406], MSSR [407], and 3B [408]. A common feature of fluctuation-based techniques is that they provide lower resolution than SMLM methods while requiring fewer input frames and lower laser powers, making them more live-cell compatible.

1. Super-resolution optical fluctuation imaging

Super-resolution Optical Fluctuation Imaging (SOFI) [409, 410] is a computational post-processing tool for super-resolution single molecule data. In contrast to SMLM, SOFI is not aimed at resolving isolated molecules and is robust to the presence of overlapping PSFs. Concretely, SOFI improves resolution by exploiting correlations in the stochastic switching of the underlying fluorophores, i.e., by leveraging the fact that a molecule’s emission fluctuations only spatiotemporally correlate with itself and not with neighboring molecules.

The data processed in SOFI consist of photon counts (intensities) $w_n^k$ at pixel n in frame k (time point k) detected on a wide-field camera

$$w_n^k = \beta + I_0 \sum_{m=1}^{M} U(\mathbf{r}_n - \mathbf{r}_m)\, s_m^k + \varepsilon_n^k \tag{131}$$

with M denoting the number of fluorophores, $I_0$ the molecular brightness (assumed uniform across molecules), U the optical system's PSF, $s_m^k$ the on/off state of fluorophore m, $\beta$ an average background, $\mathbf{r}_n$ the location of pixel n, and $\varepsilon_n^k$ additive noise. Moreover, the sample is assumed stationary over the image acquisition, and the PSF's integral over the pixel area is approximated by the integrand's value at the pixel center.

In its simplest implementation, SOFI computes cumulants, $\kappa_l[w_{1:N}^{1:K}]$, of the pixel intensities across frames. For instance, the second-order temporal cumulant coincides with the covariance in signal intensity across frames at one pixel for different time lags. The lth-order cumulant can be approximated as [411]

$$\kappa_l\!\left[w_{1:N}^{1:K}\right] \approx I_0^l\, f_l(\rho_{\rm on}) \sum_{m=1}^{M} U^l(\mathbf{r}_{1:N} - \mathbf{r}_m), \tag{132}$$

where $f_l(\rho_{\rm on})$ denotes the lth-order cumulant of $s_m^k$, given as an lth-order polynomial in the probability $\rho_{\rm on}$ of a molecule being in the on-state (equivalently, the fraction of molecules on). Moreover, under assumptions of uncorrelated noise and stationary background, cumulants of the noise and background vanish. In Eq. 132, critical to SOFI analysis, the PSF is raised to the lth power. Thus the lth-order cumulant, if plotted instead of the original image, yields a PSF $\sqrt{l}$ times narrower than the original PSF and offers an up to l-fold enlarged frequency support in Fourier space. As such, the resolution can be increased up to l-fold with post-processing, either by Fourier reweighting [412] or by deconvolution [409, 413], as discussed earlier, e.g., for confocal (Sec. IV B 1) and ISM (Sec. IV B 2) microscopy. This can be further generalized to spatiotemporal cross-cumulants with various time lags across different pixel combinations to leverage spatial information, albeit at higher computational cost [412–414].
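The PSF narrowing at the heart of Eq. 132 can be checked numerically. Under assumed, simplified conditions (a single emitter in 1D, two-state blinking, additive Gaussian noise), the per-pixel temporal variance — the second-order cumulant — reproduces the PSF squared, i.e., a profile roughly $\sqrt{2}$ narrower than the mean image:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(64)
sigma, c = 6.0, 32.0
U = np.exp(-(x - c)**2 / (2 * sigma**2))       # 1D PSF of a single emitter

K, rho_on = 20000, 0.3                         # frames and on-state probability
s = (rng.random(K) < rho_on).astype(float)     # independent two-state blinking trace
frames = np.outer(s, U) + 0.02 * rng.standard_normal((K, x.size))

mean_img = frames.mean(axis=0)                 # diffraction-limited image ~ U
sofi2 = frames.var(axis=0)                     # 2nd-order temporal cumulant ~ U**2
sofi2 = sofi2 - np.median(sofi2)               # remove the flat noise-variance floor

def rms_width(img):
    img = np.clip(img, 0, None)
    mu = (img * x).sum() / img.sum()
    return np.sqrt((img * (x - mu)**2).sum() / img.sum())

print(rms_width(sofi2) / rms_width(mean_img))  # ~1/sqrt(2)
```

Subtracting the flat variance floor mirrors the statement that cumulants of stationary noise vanish: they only contribute an offset that carries no structure.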

One challenge with SOFI post-processing is the possibility of amplifying signal heterogeneities and thereby masking dimmer structures [413], partly addressed by a deconvolution method termed balanced SOFI (bSOFI) [411, 413]. Furthermore, compared to SMLM, SOFI is relatively insensitive to background; tolerates higher labeling densities, higher on-time ratios, and lower SNR; and requires only hundreds to thousands of frames to compute cumulants, allowing less photo-damaging and faster live-cell imaging. Moreover, SOFI achieves optical sectioning and resolution improvement in the z-direction using simultaneously acquired multi-plane data [350, 415].

2. Single molecule localization microscopy

Almost a decade before its experimental realization [114, 416], the idea underlying SMLM was theoretically proposed by Eric Betzig [417], with experimental implementations employing photo-activatable genetically encoded proteins [418] and quantum dots [416].

An initial iteration, termed (f)PALM [114, 115], was followed by Stochastic Optical Reconstruction Microscopy (STORM) [107], exploiting photo-switching in organic dyes. The two differ only in their means of achieving temporal separation of spatially overlapping fluorophores. PALM leverages photo-activatable or photo-convertible fluorescent proteins [419], allowing genetic expression of the label; it is therefore compatible with live-cell imaging [419] and with stoichiometric labeling of target proteins used in counting [76, 160]. On the other hand, organic fluorophores typically emit photons at higher rates than photo-activatable or photo-convertible fluorescent proteins, resulting in STORM's slightly better resolution. Further resolution improvements spurred the development of the more general dSTORM, introducing a palette of synthetic organic fluorophores as photo-switchable probes [420] and allowing live-cell imaging with site-specific tagging [421].

A more recent SMLM approach, termed DNA Point Accumulation for Imaging in Nanoscale Topography (DNA-PAINT), employs stochastic, transient binding of dyes diffusing in solution to complementary molecules attached to the target structure [422]; see Fig. 57. Upon binding, the dye molecule is temporarily immobilized and detected by the camera, while the freely diffusing dyes, strongly aliased and difficult to track, are approximately treated as background. Longer imager strands increase binding times and thus typically lead to higher photon numbers per binding event, improving SNR and spatial resolution; see Fig. 57b. DNA-PAINT exhibits limited photo-bleaching, as imaging can continue so long as diffusing dyes remain in solution, and is furthermore compatible with multiplexing using color and an assortment of DNA strand lengths [423–425].

FIG. 57:

Imaging with DNA-PAINT. (a) Schematics illustrate DNA-PAINT, where a dye-conjugated oligo (imager oligo) transiently hybridizes with a complementary (docking) oligo. (b) The binding time $\tau_B$ (or the dissociation rate $1/\tau_B$) depends on the imager strand length. (c) Increasing either the imager strand concentration or the docking site density decreases the dark time, $\tau_D$ (inter-event lifetime). The figure is adapted from Ref. [422].
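The kinetic trends in panels (b) and (c) follow from simple exponential dwell statistics: dark intervals end at a rate proportional to the imager concentration, so doubling the concentration halves $\tau_D$. A quick numerical check, with an arbitrary, hypothetical association rate:

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_dark_time(k_on, conc, n_events=20000):
    # Unbound (dark) intervals are exponential with rate k_on * concentration
    return rng.exponential(1.0 / (k_on * conc), n_events).mean()

k_on = 1e6                               # hypothetical association rate (1/(M s))
tau_d = mean_dark_time(k_on, 5e-9)       # 5 nM imager concentration
tau_d_2x = mean_dark_time(k_on, 10e-9)   # doubled concentration
print(tau_d / tau_d_2x)                  # ~2: tau_D halves
```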

3. SMLM data analysis

In SMLM, the data, $w_{1:N}$, typically consist of a set of pixel values (observations) organized as 2D arrays called image frames. Localizations are then determined, probabilistically, from the pixel values, $w_n$, using a likelihood.

To build the likelihood, we begin with the expected photon count for pixel n, given as

$$\Lambda_n = \beta + \sum_{m=1}^{\infty} b_m I_m \mathcal{P}_{mn}, \tag{133}$$

where we have immediately generalized our model to the practical case of unknown emitter numbers. That is, we adopt a non-parametric framework with an infinite number of emitters ($m = 1{:}\infty$) with a load $b_m$ associated to each emitter (see Sec. I B). The loads associated to emitters not contributing photons are, as usual, recovered as zero. Moreover, $I_m$ and $\beta$, respectively, represent the intensity of the mth emitter and a uniform background. Here, $\mathcal{P}_{mn}$ is the probability of a photon from emitter m reaching pixel n, given by (see Eqs. 34)

$$\mathcal{P}_{mn} = \int_{\mathcal{A}_n} dx\, dy\; U(x, y; \mathbf{r}_m), \tag{134}$$

where $\mathcal{A}_n$ is the pixel area and $\mathbf{r}_m = (x_m, y_m, z_m)$ is the emitter position. As a simplification, the PSF is sometimes replaced by its value at the middle of the pixel [426], or the integral can be evaluated using error functions, say, for Gaussian PSFs. For more complicated cases (engineered PSFs in Sec. V C), the PSF appearing in the integral of Eq. 134 can also be numerically evaluated over a sub-pixel grid. Further improvements are possible using linear or spline PSF interpolations [427, 428] between PSF values typically calibrated at select axial positions.
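For a Gaussian PSF, the pixel integral of Eq. 134 separates into products of error-function differences, one per axis. A small sketch, with arbitrary pixel units and PSF width:

```python
import numpy as np
from scipy.special import erf

def gaussian_pixel_probs(xm, ym, sigma, x_edges, y_edges):
    """Eq. 134 for a Gaussian PSF: exact pixel integrals from differences of
    error functions, separable in x and y."""
    def cdf(edges, mu):  # 1D Gaussian mass up to each pixel edge
        return 0.5 * (1 + erf((edges - mu) / (np.sqrt(2) * sigma)))
    px = np.diff(cdf(x_edges, xm))
    py = np.diff(cdf(y_edges, ym))
    return np.outer(py, px)              # rows index y-pixels, columns x-pixels

edges = np.arange(0.0, 9.0)              # an 8x8 grid of unit pixels
P = gaussian_pixel_probs(4.3, 3.7, 1.2, edges, edges)
print(P.sum())                           # ~1: nearly all light falls inside the ROI
```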

For concreteness here, we use a CCD detector noise model (see Appendix A) and arrive at the following likelihood for pixel n

$$P(w_n \mid \vartheta) = \text{Gaussian}\!\left(w_n;\; g \Lambda_n(\vartheta) + o,\; \sigma_w^2\right), \tag{135}$$

where g, o, and $\sigma_w^2$ are, respectively, the detector gain, offset, and variance. As before, we collect all unknown parameters in $\vartheta = \{\bar{b}, \bar{\mathbf{r}}, \bar{I}, \beta\}$, where the overbar denotes quantities over all emitters. Finally, since pixel values are iid (see Sec. I B), the likelihood of an ROI containing N pixels assumes a product form

$$P(w_{1:N} \mid \vartheta) = \prod_{n=1}^{N} P(w_n \mid \vartheta). \tag{136}$$
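The pieces of Eqs. 135–136 assemble into a few lines of code. The sketch below evaluates the log of the product likelihood for a toy ROI under a hypothetical CCD calibration (gain g, offset o, variance σ_w²) and confirms that the true expected photon counts are preferred over a mis-scaled alternative:

```python
import numpy as np

def log_likelihood(frame, lam, gain, offset, var):
    """Log of Eq. 136: independent Gaussian pixel likelihoods (Eq. 135) with
    mean g*Lambda_n + o and variance sigma_w^2."""
    mu = gain * lam + offset
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (frame - mu)**2 / (2 * var))

rng = np.random.default_rng(5)
lam = rng.uniform(10, 50, size=(8, 8))          # expected photons per pixel
gain, offset, var = 2.0, 100.0, 16.0            # hypothetical CCD calibration
frame = gain * lam + offset + rng.normal(0, np.sqrt(var), lam.shape)

ll_true = log_likelihood(frame, lam, gain, offset, var)
ll_wrong = log_likelihood(frame, 1.5 * lam, gain, offset, var)
print(ll_true > ll_wrong)                       # True: the correct model is preferred
```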

In parametric frameworks, with a known number of emitters, M, the expected photon count of Eq. 133 simplifies to

$$\Lambda_n = \beta + \sum_{m=1}^{M} I_m \mathcal{P}_{mn}. \tag{137}$$

In such frameworks, the number of emitters is typically set separately using heuristic criteria, e.g., the Bayesian Information Criterion (BIC) [38], thresholding [429], or other methods [183, 430]. In contrast, in joint (non-parametric) optimization, the number of active emitters is treated as a random variable (unknown) on which we place priors [246, 261]. In other words, we obtain the BNP posterior from the product of the likelihood, Eq. 136, and the priors over ϑ; see Sec. I B. We may adopt an empirical prior for fluorophore intensity, obtained by fitting isolated emitters from sparse regions of the data [431], and adopt computationally convenient Beta-Bernoulli process priors for the loads; see Sec. I B.

So far, we have discussed localizing emitters using information from a single frame, though leveraging information across frames improves spatial resolution by increasing the photon budget available for analysis. The challenge in using multiple frames is that several low-precision putative localizations, one per frame, must then be linked across frames to achieve a single high-resolution localization. This essentially becomes equivalent to the problem of single-molecule tracking, dealt with rigorously later in this section, where molecule number determination, localization, and linking are performed simultaneously and self-consistently. However, to avoid this computational overhead, a method termed BaGoL [432] uses frame-to-frame localization to identify which localizations belong to which emitter. Further, BaGoL efficiently accomplishes sub-nanometer precision under dense labeling conditions by removing residual nanometer-scale drift within the input data and combining the set of localizations identified for each emitter [432]. The idea of combining localizations to improve precision has also been employed in conjunction with orthogonal DNA sequences to achieve Ångström resolutions [433].

Having focused on static emitters thus far, we now broaden our discussion to mobile emitters, namely tracking emitters across frames. In SPT, the data consist of N pixel values for each frame $k = 1{:}K$, denoted by $w_{1:N}^{1:K} = \{w_1^1, w_2^1, \ldots, w_N^1, w_1^2, \ldots, w_N^K\}$. The parameter set ϑ is now expanded to include particle trajectories across time, $\mathbf{r}_m(t)$, for each particle m. By approximation, these may be reduced to locations across frames, $\mathbf{r}_m^{1:K}$, though, in full generality, positions can be interpolated for any inter-frame time [101, 102, 148].

To obtain the SPT likelihood, similarly to SMLM, we start from the expected photon count per pixel. As particles move over each exposure, the expected photon count for pixel n in frame k, $\Lambda_n^k(\vartheta)$, follows from Eq. 133 [101, 102]

$$\Lambda_n^k(\vartheta) = \beta + \sum_{m=1}^{\infty} b_m \int_{\text{exposure } k} dt\; \mu(t)\, \mathcal{P}_{mn}(t). \tag{138}$$

Here, $\mathcal{P}_{mn}(t)$ is adapted from Eq. 134 with a time-dependent location, and μ(t) is the time-dependent fluorescence emission rate, e.g., due to blinking. The time integral of Eq. 138 is stochastic, and numerical integration is often used in its evaluation. Under slow dynamics, for simplicity only, we may approximate the integrand as a constant, recovering Eq. 134 [434]. This approximation fails, due to motion-blur artifacts, i.e., aliasing, when particles diffuse rapidly compared to the camera frame rate or exposure time [435, 436].

As an alternative, an improved approximation is afforded by the trapezoidal rule

$$\Lambda_n^k(\vartheta) = \beta + \sum_{m=1}^{\infty} b_m \sum_{l=1}^{L-1} \frac{\delta t}{2} \left[\mu_m(t_l^k)\, \mathcal{P}_{mn}(t_l^k) + \mu_m(t_{l+1}^k)\, \mathcal{P}_{mn}(t_{l+1}^k)\right] \tag{139}$$

with

$$\mathcal{P}_{mn}(t_l^k) = \int_{\mathcal{A}_n} dx\, dy\; U(x, y; \mathbf{r}_m(t_l^k)). \tag{140}$$

In this equation, $t_1^k$ represents the beginning of the exposure of frame k, while $t_L^k$ represents its end. The entire exposure period, δT, is divided into L−1 equal panels of length $\delta t = \delta T/(L-1)$. A motion model, such as free diffusion or any other, can be introduced to connect positions, $\mathbf{r}_m(t_{l+1}^k) \mid \mathbf{r}_m(t_l^k) \sim \text{Normal}(\mathbf{r}_m(t_l^k),\, 2D\,\delta t)$, where D is the diffusion coefficient of the emitters, assuming they all satisfy the same diffusive dynamics.
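The trapezoidal approximation of Eq. 139 (in its constant-brightness form below) can be sketched for a single emitter diffusing in one dimension during an exposure, with positions at the sub-times connected by Normal increments of variance 2Dδt as above. All numerical values are illustrative:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(2)

def pixel_probs(center, sigma, edges):         # P_mn(t): 1D pixel-integrated Gaussian PSF
    c = 0.5 * (1 + erf((edges - center) / (np.sqrt(2) * sigma)))
    return np.diff(c)

# Diffusive path at L sub-times spanning one exposure, linked by Normal increments
D, dT, L = 0.05, 0.01, 11                      # um^2/s, exposure (s), number of sub-times
dt = dT / (L - 1)
steps = rng.normal(0, np.sqrt(2 * D * dt), L - 1)
path = 2.0 + np.cumsum(np.concatenate(([0.0], steps)))   # start at 2.0 um

edges = np.arange(0.0, 4.25, 0.25)             # 16 pixels of 0.25 um
sigma, mu, beta = 0.15, 2e4, 1.0               # PSF width (um), photons/s, bg/pixel

# Trapezoidal rule of the simplified (constant-brightness) Eq. 139
P = np.array([pixel_probs(xp, sigma, edges) for xp in path])  # L x N
lam = beta + mu * (dt / 2) * (P[:-1] + P[1:]).sum(axis=0)
print(lam.sum())   # ~ beta*N + mu*dT = 216 expected counts over the ROI
```

Summing the expected counts over pixels recovers β per pixel plus μδT photons, as it should whenever the PSF stays well inside the ROI throughout the exposure.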

Though diffusion models are most commonly invoked, alternative models, such as anomalous diffusion, are also used [437]. It remains to be seen, however, whether alternative models can be useful in light of the dramatic approximations often already made in the analysis, including, but not limited to: setting the number of emitters by hand [438]; assuming a time-independent integrand in Eq. 138; simplifying the noise corrupting photon counts and detectors [438]; and multiple other error sources.

The emission rates $\mu_m$ of the emitters can also be described using Markovian models [76, 160]; see Sec. II. However, for the sake of simplicity, we assume that all emitters maintain the same brightness throughout all frames, simplifying Eq. 139 to

$$\Lambda_n^k(\vartheta) = \beta + \mu \sum_{m=1}^{\infty} b_m \sum_{l=1}^{L-1} \frac{\delta t}{2} \left[\mathcal{P}_{mn}(t_l^k) + \mathcal{P}_{mn}(t_{l+1}^k)\right].$$

Again assuming, for simplicity alone, a CCD camera noise model (see Appendix A), the likelihood for pixel n in frame k reads

$$P(w_n^k \mid \vartheta) = \text{Gaussian}\!\left(w_n^k;\; g \Lambda_n^k(\vartheta) + o,\; \sigma_w^2\right). \tag{141}$$

Now, similar to the SMLM likelihood of Eq. 136, the likelihood of the frame sequence is

$$P(w_{1:N}^{1:K} \mid \vartheta) = \prod_{n=1}^{N} \prod_{k=1}^{K} P(w_n^k \mid \vartheta). \tag{142}$$

By specifying all terms in Eq. 142 explicitly, we would see that ϑ now includes $\vartheta = \{\bar{b}, \bar{\mathbf{r}}(t_{1:L}^{1:K}), \mu, \beta, D\}$, where the overbar denotes the set of all emitters. Sampling of the resulting posterior is outlined in the box below [101, 102].

We do highlight that, parametrically, the unknowns exclude the loads, $\vartheta = \{\bar{\mathbf{r}}(t_{1:L}^{1:K}), \mu, \beta, D\}$, and the number of trajectories (emitters) is estimated separately with ad hoc metrics [438]. In contrast, non-parametrically, trajectories and emitter numbers are jointly estimated alongside all other parameters [101, 102, 246, 261].

We note that the above tracking reveals the z-position only up to a mirror symmetry about the focal plane when using a single illumination plane. Thus, a note is warranted here regarding 3D SMLM. In standard SMLM, localizing a molecule along the axial direction is challenging due to the limited depth of field and the symmetry of the wide-field PSF with respect to the focal plane, i.e., the lack of optical sectioning; see Sec. III C. To address these issues, multiple approaches have been employed, including multi-plane microscopy (see Sec. IV F) and PSF engineering [439–442], detailed next in Sec. V C.

C. PSF engineering

To overcome the limited optical sectioning of SMLM imposed by wide-field PSFs (see Sec. III C), engineered PSFs have been used to intentionally introduce aberrations. This typically involves inserting extra optical components into the setup [439] or adaptive optical elements, such as deformable mirrors [443], at the Fourier plane; see Fig. 1 [192, 221, 442]. The resulting aberrations break the PSF's axial symmetry and thereby encode the axial positions of molecules used in 3D localization [444].

Most initial efforts in PSF engineering concerned PSFs maintaining their shape throughout de-focus. One of the earliest PSF engineering applications reduced the in-focus spot size, at the cost of larger side lobes, by implementing a series of amplitude and phase rings in the Fourier plane [445]. As another example, toward achieving an Extended Depth Of Field (EDOF), a cubic phase mask was used, leading to a PSF minimally changing over a desired axial range [446]; see Fig. 58a. While maintaining EDOF, other improvements aimed at reducing the required computation and raising the SNR, e.g., the log-asphere lens [447], Bessel beams [448], and others [449].

FIG. 58:

PSF engineering. (a) Frequently used engineered PSFs, simulated for an objective lens with NA = 1.49 and pixel size of 110 nm. The top row is the wide-field PSF. Other rows present commonly used phase masks and their corresponding PSFs over a range of axial positions. (b) CRLB (see Sec. I B) of the 3D position (each axis individually) plotted as a function of the axial position, assuming the system is laterally shift-invariant. Here, the subscripts in the axes labels indicate the coordinate for which CRLB was calculated.

Recently, PSFs have been engineered, either heuristically or algorithmically (more details later), to provide improved axial resolution across different experimental conditions [450], such as emitter density and wavelength. That is, at the other extreme of the design space from PSFs remaining similar throughout de-focus reside PSFs intentionally sensitive to de-focus. The purpose of such z-encoding PSFs is to encode axial information (depth) in their shape, enabling 3D tracking or imaging [450].

An early instance of z-encoding PSF engineering is induced astigmatism, typically implemented with a cylindrical lens and first used to evaluate de-focus in compact disc players [451], an idea later adapted to SMLM [439]. The astigmatic PSF provides high axial resolution over an axial range of ∼1 μm.

Following similar ideas, larger axial ranges were attained using rotating PSFs based on linear combinations of Laguerre-Gaussian modes [452], later adapted to SMLM as the Double Helix PSF [440]. In contrast to wide-field PSFs, which spread signal over a large area resulting in low SNR away from focus (see the first row of Fig. 58), multiple 3D engineered PSFs have been designed, including the corkscrew [441], self-bending beams [453], tetrapods [442], and others [454, 455]. These often attain high resolution over wider axial ranges and maintain high SNR even at greater de-focus.

Several examples of engineered phase masks, i.e., phase intentionally added to the Fourier plane (the Fourier plane phase is also sometimes termed the pupil phase), and their associated PSFs are shown in Fig. 58. We show both PSFs maintaining their shape over a wide axial range and PSFs encoding axial location in their shape.

We now turn to the question of how to design a phase mask producing a desired PSF shape, e.g., a PSF maintaining high axial resolution or high SNR over a wide range. This first requires establishing the relation between the measured PSF and the phase mask at the Fourier plane.

To address this, we note the relation between the field at the Fourier plane and the measured PSF intensity, as described in Eqs. 68 and 75. Indeed, the measured PSF intensity contains a Fourier transform of the electric field followed by an absolute-value operation, resulting in the loss of image plane phase information. As such, the problem of recovering the Fourier plane phase, i.e., the pupil phase Φ(θ, φ) (see Eq. 75), at the heart of PSF engineering, is known as phase retrieval [456]. The phase retrieval problem, in our context, involves estimating the pupil phase Φ(θ, φ) from measurements $w_{1:N}$ encoding the real-space PSF through, for example, detector models such as Eq. A9. This ill-posed, non-convex optimization presents various challenges, including degenerate solutions, unstable derivatives, and more [456]. As it is impossible to determine the phase using data from a single plane, i.e., a single PSF slice, we use data from several planes (a z-stack) acquired, for example, by scanning the objective to capture slices of a fluorescent bead's PSF, or by using a multi-plane setup; see Sec. IV F.

Following the logic presented for SMLM data analysis, to construct a likelihood, we write the expected photon count, $\Lambda_n^q(\vartheta, \Phi)$, for pixel n at plane q of the z-stack, encoding the pupil phase, Φ. For simplicity, we consider a single fluorophore here.

Using this model, a likelihood can be constructed given data $w_n^q$, $n = 1{:}N$, $q = 1{:}Q$, similar to Eqs. 135–136. Working, for convenience, with the log-likelihood, we write the z-stack log-likelihood

$$\mathcal{L}(w_{1:N}^{1:Q}; \vartheta, \Phi) = \sum_{n=1}^{N} \sum_{q=1}^{Q} \mathcal{L}(w_n^q; \vartheta, \Phi), \tag{143}$$

where $\mathcal{L}(w_n^q; \vartheta, \Phi)$ is the log-likelihood of pixel n within plane q. In the most general case, detector and shot noise must both be considered simultaneously, as in Eq. A9. However, ignoring detector noise for now, we arrive at the single-pixel log-likelihood used in Eq. 143

$$\mathcal{L}(w_n^q; \vartheta, \Phi) = \Lambda_n^q(\vartheta, \Phi) - w_n^q \log \Lambda_n^q(\vartheta, \Phi). \tag{144}$$

To optimize the objective of Eqs. 143–144, we employ iterative optimization, often relying on knowledge of its gradient with respect to the phase [190, 457]

$$\frac{\partial \mathcal{L}}{\partial \Phi} = \sum_{n,q} \frac{\partial \mathcal{L}}{\partial \Lambda_n^q}\, \frac{\partial \Lambda_n^q}{\partial \Phi}. \tag{145}$$

The first term on the right can be analytically evaluated as $\partial \mathcal{L}/\partial \Lambda_n^q = 1 - w_n^q/\Lambda_n^q$. The next term involves the derivative of the PSF model $\Lambda_n^q$ with respect to the pupil phase $\Phi_l$, where, in practice, we discretize the set of spatial frequencies in the Fourier plane, $l = 1{:}L$, and write

$$\frac{\partial \Lambda_n^q}{\partial \Phi_l} = 2\, \mathfrak{R}\!\left(\frac{\partial E_n}{\partial \Phi_l}\, E_n^{*}\right). \tag{146}$$

In the above, $E_n$ is the electric field in the image plane from Eq. 68, $\mathfrak{R}$ indicates the real part of the expression within the parentheses, and $\partial E_n/\partial \Phi_l$ and $\partial \Lambda_n^q/\partial \Phi_l$ are, respectively, complex and real matrices of size N×L.

Finally, we must evaluate $\partial E_n/\partial \Phi_l$. The electric field in the image plane is obtained by a Fourier transform of the electric field at the Fourier plane (designated $\tilde{E}_{\tilde{l}}$), which contains the pupil phase Φ

$$\frac{\partial E_n}{\partial \Phi_l} = \frac{\partial}{\partial \Phi_l}\, \mathcal{F}_{\tilde{l}}\!\left[\tilde{E}_{\tilde{l}}\right] = \mathcal{F}_{\tilde{l}}\!\left[i\, \tilde{E}_{\tilde{l}}\, \delta_{l, \tilde{l}}\right] = i \exp\!\left(-i\, \frac{2\pi n l}{N}\right) \tilde{E}_l, \tag{147}$$

where $\mathcal{F}_{\tilde{l}}$ denotes a discrete Fourier transform over the index $\tilde{l}$ and $\delta_{l,\tilde{l}}$ is a Kronecker delta. Finally, if L = N, the summation over n in the gradient of Eq. 143 and the exponential of Eq. 147 combine into a compact Fourier transform, providing the desired derivative

$$\frac{\partial \mathcal{L}}{\partial \Phi_l} = 2\, \mathfrak{R}\!\left(\tilde{E}_l\; \mathcal{F}_n\!\left[\, i\, E_n^{*}\, \frac{\partial \mathcal{L}}{\partial \Lambda_n^q}\right]\right). \tag{148}$$

The approach described above can be used either to learn the pupil phase producing a measured PSF or, equivalently, to design a PSF and learn the required pupil phase.
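A compact way to validate a reconstruction of Eqs. 145–148 is to compare the FFT-based gradient against a finite difference. The 1D sketch below (arbitrary pupil amplitude, synthetic data, a small additive background in Λ) implements the Poisson objective of Eq. 144 and its gradient with respect to the pupil phase:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 32
A = np.exp(-np.linspace(-2, 2, N)**2)      # pupil (Fourier-plane) amplitude
phi = rng.normal(0, 0.3, N)                # current pupil-phase estimate
# Synthetic z-slice data from some other phase (details immaterial here)
w = rng.poisson(np.abs(np.fft.fft(A * np.exp(1j * rng.normal(0, 0.3, N))))**2 + 1)

def loss_and_grad(phi):
    E_pup = A * np.exp(1j * phi)           # field at the Fourier plane
    E = np.fft.fft(E_pup)                  # field at the image plane
    lam = np.abs(E)**2 + 1.0               # PSF intensity plus a small background
    loss = np.sum(lam - w * np.log(lam))   # Poisson objective (Eq. 144)
    g = 1.0 - w / lam                      # dL/dLambda
    grad = 2 * np.real(1j * E_pup * np.fft.fft(g * np.conj(E)))  # gradient via FFTs
    return loss, grad

loss0, grad = loss_and_grad(phi)

# Check one component against a finite difference
eps, l = 1e-6, 5
phi_p = phi.copy(); phi_p[l] += eps
numeric = (loss_and_grad(phi_p)[0] - loss0) / eps
print(np.isclose(grad[l], numeric, rtol=1e-3, atol=1e-3))  # True
```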

In the high-SNR regime, it is also common to approximate the likelihood of Eq. 143 by a Gaussian distribution and use least-squares minimization to determine the pupil phase. The approximate log-likelihood can then be minimized using iterative optimization, e.g., Gerchberg-Saxton or its variants [458, 459], possibly over a constrained set of Zernike polynomials [427, 460].
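For reference, a bare-bones Gerchberg-Saxton iteration alternates between the two planes, imposing the known pupil amplitude in one and the measured PSF amplitude in the other. The 1D sketch below uses a synthetic aperture and phase:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 64
aperture = (np.abs(np.arange(N) - N / 2) < 12).astype(float)  # known pupil amplitude
true_phase = rng.uniform(-1, 1, N) * aperture
measured_amp = np.abs(np.fft.fft(aperture * np.exp(1j * true_phase)))  # |E| at image plane

def residual(phi):  # mismatch between modeled and measured image-plane amplitude
    return np.abs(np.abs(np.fft.fft(aperture * np.exp(1j * phi))) - measured_amp).mean()

phi = np.zeros(N)          # initial pupil-phase guess
err0 = residual(phi)
for _ in range(200):
    E_img = np.fft.fft(aperture * np.exp(1j * phi))
    E_img = measured_amp * np.exp(1j * np.angle(E_img))  # impose measured amplitude
    phi = np.angle(np.fft.ifft(E_img))                   # back to pupil; keep phase only
print(err0, residual(phi))  # the amplitude mismatch shrinks substantially
```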

Having described how to derive the pupil phase for a given PSF shape, we turn to the problem of seeking an optimal PSF shape following pre-defined metrics. The engineered PSFs of Fig. 58 represent the results of various optimization metrics and numerical approaches. For instance, different PSFs exhibit different CRLBs [461]; CRLB optimization over a phase mask expanded in Zernike polynomials yields the tetrapod PSF [442], while optimization over a phase mask expanded in Laguerre-Gaussian modes yields the Double Helix PSF [440, 462]. Similarly, in the panel on DS3D (DeepSTORM3D) [463], the PSF is optimized to localize emitters within a dense environment using a neural network. Finally, for the EDOF PSF, a cost function is optimized to obtain PSFs maintaining their in-focus shape over wider axial ranges [464].

As an example of optimization, to attain a PSF achieving optimal localization precision over a wide axial range, we use Fisher information and CRLB metrics. To derive the relevant CRLB, we start from the Fisher information matrix elements, $[\mathcal{Q}(\vartheta; \Phi)]_{i,j}$, of the log-likelihood given in Eq. 144 (see Sec. I B)

$$[\mathcal{Q}(\vartheta; \Phi)]_{i,j} = \sum_{n=1}^{N} \frac{\partial \Lambda_n(\vartheta, \Phi)}{\partial \vartheta_i}\, \frac{\partial \Lambda_n(\vartheta, \Phi)}{\partial \vartheta_j}\, \frac{1}{\Lambda_n(\vartheta, \Phi) + \beta_n}, \tag{149}$$

where $\vartheta_j$ is a parameter within the set of unknowns designated by ϑ and $\beta_n$ denotes the background contribution to pixel n. After evaluating the Fisher information entries, we can evaluate the CRLB given by Eq. 10.
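As a worked instance, the sketch below computes the Fisher information, and hence the CRLB, for the lateral position of a single emitter imaged with a 1D pixel-integrated Gaussian PSF under Poisson noise; the derivative of $\Lambda_n$ is taken by central differences, and all parameter values are hypothetical:

```python
import numpy as np
from scipy.special import erf

sigma, photons, bg = 1.3, 1000.0, 2.0   # PSF width (px), total photons, bg/pixel
edges = np.arange(0.0, 15.0)            # edges of 14 one-pixel bins
x0 = 7.2                                # emitter position (px)

def pixel_probs(x):
    c = 0.5 * (1 + erf((edges - x) / (np.sqrt(2) * sigma)))
    return np.diff(c)

# Fisher information for x0 under Poisson noise (a 1D analog of Eq. 149);
# the derivative of Lambda_n is taken by central differences
h = 1e-5
lam = photons * pixel_probs(x0) + bg
dlam = photons * (pixel_probs(x0 + h) - pixel_probs(x0 - h)) / (2 * h)
fisher = np.sum(dlam**2 / lam)
crlb = 1.0 / fisher
print(np.sqrt(crlb))   # localization precision bound (px)
```

With ~1000 photons and σ ≈ 1.3 pixels, the bound lands near the familiar σ/√N scaling of a few hundredths of a pixel.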

In practical implementations of iterative optimization, the PSFs are scaled to match realistic signal counts encountered in SMLM imaging, i.e., on the scale of a few hundred photons per emitter per frame.

Heuristically and CRLB-optimized PSFs, designed for just one emitter, can perform drastically worse at high labeling density, where engineered PSFs with large lateral footprints, such as the tetrapod [442], suffer from PSF overlap. In such cases, fitting algorithms like MLE, designed for sparse scenes, exhibit a significant drop in performance, only slightly mitigated by the more compact DS3D PSF [463]. One solution toward axial localization in dense environments is to let a neural network learn the optimal pupil phase design [463]. In this case, 3D localization and the encoding pupil phase are simultaneously optimized.

In a similar vein, toward optimizing PSFs for dense localization, similar design strategies have been used in multi-color imaging [457, 465], where neural networks optimize phase masks to best discriminate between colors [466].

VI. PERSPECTIVES

The worlds of microscopy and biology have been intertwined from the onset. As early as humankind could peer at the world beyond its visual range, it peered into life [11], and we continue doing so: from nuclear pore complexes [467], key in intra-cellular communication, to individual synaptic spines [318], to cell adhesion [468] at the basis of tissue formation, to actin filaments [469–471] involved in cell motion and division, and many more.

Life presents events at all spatiotemporal scales with no clear means of discriminating between objects of interest and background. Discrimination from background motivated fluorescence [472], while probing smaller and faster spatiotemporal scales continues to motivate the development of experimental and theoretical methodology. Along these lines, major improvements in fluorescence microscopy have followed four main fronts: fluorescent probes; optical setups; detectors; and data analysis.

Regarding fluorescent probes (see Sec. II), the discovery of the Green Fluorescent Protein (GFP) was a milestone in fluorescence microscopy [80, 473]. Next came the ability to switch biomarkers between dark and bright states [70, 474], resulting in super-resolution microscopy and nanometer resolution [78, 399]; see Sec. V.

Concerning optical setups (see Secs. III and IV), the invention of the confocal microscope [60] marked a milestone, accomplishing optical sectioning by inserting a pinhole in the detection arm to filter out-of-focus light. Research in this area is ongoing, leading to the development of different microscopy modalities, e.g., light-sheet, SIM, and others, discussed in Sec. IV, yielding unprecedented optical sectioning as well as high lateral resolution.

On the detector front (see Appendix A), cameras, including CCDs, EMCCDs, and CMOS cameras, revolutionized fluorescence microscopy and enabled rapid wide-field imaging. Indeed, the need to amplify signal led to the development of EMCCDs capable of imaging dim fluorescent probes [475]. The recent advent of CMOS cameras then accelerated data acquisition up to hundreds of frames per second over large FOVs with reduced read-out noise [476]. While we mostly focused on integrative detectors, increasingly available Single Photon Avalanche Diode (SPAD) arrays [477, 478] may herald an era of unparalleled spatiotemporal resolution.

Finally, data analysis methods grounded in statistics are naturally suited to processing fluorescence microscopy data while considering all sources of uncertainty; see Sec. I B. Moreover, considering the fundamental problem of model selection inherent to fluorescence microscopy, BNP frameworks (see Sec. I B) show promise across applications. Deep learning methods [479–482] have also recently gained popularity and will likely be critical to the analysis of large, volumetric fluorescence data sets [483, 484], though these tools require continued model training for different applications. A concrete future avenue for data analysis may well merge ideas from both Bayesian and deep learning approaches [485].

Despite continued progress in fluorescence microscopy [221, 486, 487], multiple challenges remain, including potentially perturbative effects of fluorescent probes on the labeled systems; uncontrolled interactions of probes with themselves and their environment; phototoxic effects naturally arising from any form of illumination; labeling and detection challenges in thicker samples and complex environments; rapid volumetric imaging; manipulating large data set sizes; and many others.

Indeed, as we move to complex environments, complementary read-outs beyond fluorescence are often desired and, along these fronts, a number of other methods continue to be developed. These include refractive index tomography [488, 489], Raman imaging [490, 491], phase imaging [14, 15], lens-free imaging [492], ghost imaging [493, 494], rotating coherent scattering microscopy [495, 496], expansion microscopy [497, 498], and others proven useful at the nanoscale.

Together, these approaches, alongside the development of theoretical and numerical tools, may help us visualize life’s events otherwise unfolding in environments that remain impenetrable and at scales still beyond our reach.

Statistical Framework V B 3: Tracking.

Data: pixel values in ADUs

$$w_{1:N}^{1:K} = \{w_1^1, \ldots, w_N^K\}.$$

Parameters: loads, fluorophore trajectories, emission rate, background, diffusion coefficient

$$\vartheta = \{\bar{b},\, \bar{\mathbf{r}}(t_{1:L}^{1:K}),\, \mu,\, \beta,\, D\}.$$

Likelihood:

$$P(w_{1:N}^{1:K} \mid \vartheta) = \prod_{n=1}^{N} \prod_{k=1}^{K} \text{Gaussian}\!\left(w_n^k;\; g \Lambda_n^k(\vartheta) + o,\; \sigma_w^2\right).$$

Priors:

$$q_m \sim \text{Beta}(A_q, B_q), \quad m = 1{:}\infty,$$
$$b_m \sim \text{Bernoulli}(q_m),$$
$$\mathbf{r}_m(t_1^1) \sim \text{Normal}(\mathbf{r}_0, \sigma_r^2),$$
$$\mathbf{r}_m(t_{l+1}^k) \mid \mathbf{r}_m(t_l^k) \sim \text{Normal}(\mathbf{r}_m(t_l^k),\, 2D\,\delta t),$$
$$\mu \sim \text{Gamma}(\alpha_\mu, \beta_\mu),$$
$$\beta \sim \text{Gamma}(\alpha_\beta, \beta_\beta),$$
$$D \sim \text{InvGamma}(\alpha_D, \beta_D).$$

Posterior:

$$P(\vartheta \mid w_{1:N}^{1:K}) \propto P(w_{1:N}^{1:K} \mid \vartheta)\, P(\vartheta).$$
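To illustrate how such a posterior may be explored, the sketch below runs a random-walk Metropolis sampler for the simplest special case: a single static emitter in 1D, with known gain, offset, read-out variance, emission rate, and background, and a flat prior over the field of view in place of the priors above. Everything here is a toy stand-in for the full framework of Refs. [101, 102]:

```python
import numpy as np

rng = np.random.default_rng(11)

# Known (hypothetical) calibration and rates
g, o, var = 2.0, 100.0, 25.0          # camera gain, offset, read-out variance
mu_ph, beta = 800.0, 2.0              # photons per frame, background photons/pixel
sigma_psf = 1.3                       # PSF width (px)
pixels = np.arange(16)
x_true = 8.4

def lam(x):                           # expected photons per pixel for emitter at x
    p = np.exp(-(pixels - x)**2 / (2 * sigma_psf**2))
    return beta + mu_ph * p / p.sum()

w = g * lam(x_true) + o + rng.normal(0, np.sqrt(var), pixels.size)  # one frame

def log_post(x):                      # Gaussian camera likelihood, flat prior on x
    return -np.sum((w - g * lam(x) - o)**2) / (2 * var)

x, chain = 4.0, []                    # random-walk Metropolis from a poor start
for _ in range(5000):
    xp = x + rng.normal(0, 0.3)
    if np.log(rng.random()) < log_post(xp) - log_post(x):
        x = xp
    chain.append(x)
post = np.array(chain[1000:])         # discard burn-in
print(post.mean())                    # posterior mean close to x_true
```

In the full framework, such position updates would be interleaved with updates of the loads, emission rate, background, and diffusion coefficient.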

ACKNOWLEDGMENTS

We thank the Quantitative BioImaging Society (QBI) for providing a venue in which many of the authors first met and discussed the topics presented herein. We also deeply thank Weiqing Xu, Peter T. Brown, Ayush Saurabh, Shep Bryan IV, Ioannis Sgouralis, Alex Rojewski, Tristan Manha, Douglas Shepherd, Thorsten Wohland, Sheng Liu, Kunihiko Ishii, and Tahei Tahara for carefully reading portions of this review and providing detailed feedback. SP also acknowledges NIH NIGMS (R01GM130745), NIH NIGMS (R01GM134426) and NIH MIRA R35GM148237. BF and YS acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No. 802567). AR acknowledges support from the Swiss National Science Foundation through the National Centre of Competence in Research Bio-Inspired Materials. JE thanks the European Research Council (ERC) for financial support via project “smMIET” (grant agreement No. 884488) under the European Union’s Horizon 2020 research and innovation program. KS thanks the Department of Bionanoscience, TU Delft for support and the Kavli Institute for Nanosciences Delft for KIND Synergy seed funding.

Appendix A: Detector physics

Every photon carries with it information that can be recorded by detectors and later employed to draw inferences. These detectors comprise either a single pixel or many pixels arranged in 2D arrays. The former are mostly employed in point-scanning microscopy to record single-photon arrival times, e.g., in FLIM and FRET. The latter are suited to wide-field fluorescence.

Ideally, in wide-field detectors, pixel values would histogram the photon counts incident on a particular pixel over the course of an exposure. Similarly, single-photon detectors would record precise photon arrival times. However, due to the stochastic noise inherent to detectors, pixel values and recorded arrival times are only probabilistically related to photon counts and direct photon emission times, respectively [53, 499, 500]. This section lays out noise models for the values reported by different detectors, motivated by the detector physics. Once a model is formulated, its parameters are estimated for specific detectors using data from calibration experiments [257, 258, 501–504]. In what follows, we first describe wide-field detectors (integrative detectors) and next turn to single-photon detectors.

There are three common types of wide-field detectors used in fluorescence microscopy: CCDs [505507]; EMCCDs [475, 508, 509]; and CMOS [507, 510]. In what follows, we describe the architecture and physics of each detector and, in turn, derive the appropriate noise model.

We begin with a cartoon of detector devices. Fig. 59 depicts the main components of CCDs/EMCCDs. The green pixel grid represents photo-active capacitors accumulating photo-electrons proportional to the incident photon counts. The blue grid is a set of capacitors that temporarily hold the resulting photo-electrons. The blue grid then transfers its electrons to the red register, one row at a time. In CCD cameras there is no Electron Multiplier (EM) stage and the transferred electrons follow the arrows to the right in Fig. 59 and go to the charge-to-voltage converter. The voltages are then converted into ADUs and recorded as pixel values.

FIG. 59:

A cartoon illustration of the CCD/EMCCD detector design detailed in the text.

By contrast, in EMCCD detectors, the transferred electrons follow the arrows to the left in the red register and undergo an amplification stage, shown in pink, before going to the charge-to-voltage converter; see Fig. 59. In the EM stage, electrons are fed through a chain of avalanche EMs where an electric field is applied to the electrons, giving them sufficient kinetic energy to knock other electrons into the material's conducting band. This creates new electron-hole pairs, thereby amplifying the current. Each stage of the EM process has a small expected gain (about 1%), but the device has many stages, dramatically amplifying the current prior to reaching the charge-to-voltage converter.

While CMOS detectors have similar architectures, they use transistors instead of capacitors, and every CMOS pixel has its own amplifier; see Fig. 60. This allows for faster data acquisition, a larger FOV, lower power consumption, and higher quantum efficiency. However, this architecture imposes pixel-dependent noise, requiring maps of pixel gain, variance, and offset [502, 511, 512].

FIG. 60:

A cartoon illustration of the CMOS detector design detailed in the text.

Noise models

Every stage involved in detecting incident photons and converting them to ADUs introduces noise into the final reported pixel values. Here, we discuss the main noise sources introduced at each stage:

  1. The first source of noise stems from the discrete nature of photons. Given the expected photon count for pixel n, $\Lambda_n$, over a fixed exposure time (e.g., see Eq. 137), the measured photon count, $N_{\mathrm{Ph},n}$, is Poisson distributed, i.e., shot-noise limited [502, 513–515]
    $N_{\mathrm{Ph},n} \sim \text{Poisson}(\Lambda_n),$ (A1)
    where we use notation introduced in Sec. I B.
  2. Only a fraction of the photons incident on the detector generate photo-electrons, where the expected number of photo-electrons per incident photon is called the quantum efficiency β [514, 516, 517]. The number of generated photo-electrons, $N_{\mathrm{pe},n}$, therefore follows a Binomial distribution [516]
    $N_{\mathrm{pe},n} \sim \text{Binomial}(N_{\mathrm{Ph},n}, \beta).$ (A2)
    The distribution over the number of photo-electrons given the expected number of photons, $\Lambda_n$, can then be obtained by marginalizing over the incident number of photons (see Sec. I B), as follows
    $\text{Poisson}(N_{\mathrm{pe},n}; \beta\Lambda_n) = \sum_{N_{\mathrm{Ph},n}=N_{\mathrm{pe},n}}^{\infty} \text{Poisson}(N_{\mathrm{Ph},n}; \Lambda_n) \times \text{Binomial}(N_{\mathrm{pe},n}; N_{\mathrm{Ph},n}, \beta),$ (A3)
    where, to be clear, we have distinguished between the Binomial distribution of Eq. A2 and the Binomial density of Eq. A3, as detailed in Sec. I B.
  3. The third source of noise is spurious charge: unwanted electrons generated during the transfer process, termed Clock Induced Charge (CIC) [516, 518]. The CIC noise follows a Poisson distribution and gives rise to additional electrons during transfer to the register
    $N_{\mathrm{te},n} \sim \text{Poisson}(\beta\Lambda_n + C),$ (A4)
    where $N_{\mathrm{te},n}$ and C are, respectively, the number of electrons after the transfer stage and the mean of the CIC. This noise is small but can be greatly amplified during the electron multiplier step of EMCCD detectors.
  4. The EM process consists of many stages in which new electrons are excited through impact ionization, which can be considered a cascade of stochastic events. These stages are approximately identical, so the EM process can be modeled as a cascade of Poisson processes, or Bernoulli trials [516, 519], or a geometric model of multiplication [520, 521].

    The number of electrons after the EM stage, $N_{\mathrm{ae},n}$, is a random variable that approximately follows a Gamma distribution [509, 515, 516]
    $N_{\mathrm{ae},n} \sim \text{Gamma}(N_{\mathrm{te},n}, \hat{g}),$ (A5)
    where $\hat{g}$ denotes the amplification gain, given by the ratio of the output to input electrons of the EM stage. For large values of $N_{\mathrm{te},n}$, this process is further approximated by a Gaussian, which is computationally more efficient [509, 516]
    $N_{\mathrm{ae},n} \sim \text{Gaussian}(\hat{g}N_{\mathrm{te},n}, \hat{g}^2 N_{\mathrm{te},n}).$ (A6)
  5. The last stage, termed "read-out", takes the electrons exiting the EM stage, $N_{\mathrm{ae},n}$, and converts them (with noise) into discrete pixel values reported as the data $w_n$ in ADUs. This stage introduces another gain γ (ADUs per electron, sometimes referred to as the sensitivity, and typically smaller than one) and an offset μ (the output ADU at zero input electrons, often added to avoid negative pixel values), modeled by a Gaussian distribution and termed "read-out noise"
    $w_n \sim \text{Gaussian}(\gamma N_{\mathrm{ae},n} + \mu, \sigma_{\mathrm{ro}}^2),$ (A7)
    where $\sigma_{\mathrm{ro}}^2$ is the read-out noise variance.

The combination of the noise introduced by the amplification and read-out stages is obtained by marginalizing over the intermediate variable $N_{\mathrm{ae},n}$ (namely the number of electrons after the EM stage) between Eqs. A6 and A7, resulting in

$w_n \sim \text{Gaussian}(\tilde{g}N_{\mathrm{te},n} + \mu, \sigma_w^2),$ (A8)

where $\tilde{g} = \gamma\hat{g}$ and $\sigma_w^2 = \gamma^2\hat{g}^2 N_{\mathrm{te},n} + \sigma_{\mathrm{ro}}^2$ denote the total gain and variance, respectively. Finally, the entire detector model, which relates the expected photon count $\Lambda_n$ to the reported pixel value $w_n$, is obtained by marginalizing over the other intermediate variable $N_{\mathrm{te},n}$ (namely the number of electrons after the transfer stage) between Eqs. A4 and A8

$P(w_n \mid \Lambda_n) = \sum_{N_{\mathrm{te},n}=0}^{\infty} \text{Poisson}(N_{\mathrm{te},n}; \beta\Lambda_n + C) \times \text{Gaussian}(w_n; \tilde{g}N_{\mathrm{te},n} + \mu, \sigma_w^2).$ (A9)
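Although Eq. A9 involves an infinite sum, it can be evaluated numerically by truncating the sum once the Poisson weights become negligible. The sketch below does so for a single pixel; the parameter values and the truncation rule are illustrative numerical choices, not part of the model.

```python
import numpy as np
from scipy import stats

# Direct numerical evaluation of the full detector model, Eq. A9: a
# Poisson-Gaussian convolution marginalizing the post-transfer electron
# count N_te. Parameter values are illustrative; sigma_w^2 follows the text
# as g~^2 * N_te + sigma_ro^2.

def detector_likelihood(w, Lam, beta=0.9, C=0.05, g_tilde=20.0, mu=100.0,
                        sigma_ro=10.0, N_max=None):
    """P(w | Lambda) for one pixel, Eq. A9, with the infinite sum truncated."""
    rate = beta * Lam + C
    if N_max is None:
        # Truncate well past the bulk of the Poisson mass (a numerical choice)
        N_max = int(rate + 10.0 * np.sqrt(rate) + 20.0)
    n_te = np.arange(N_max + 1)
    p_poisson = stats.poisson.pmf(n_te, rate)
    sigma_w = np.sqrt(g_tilde ** 2 * n_te + sigma_ro ** 2)
    p_gauss = stats.norm.pdf(w, g_tilde * n_te + mu, sigma_w)
    return float(np.sum(p_poisson * p_gauss))
```

Evaluating this sum for every pixel of every frame is exactly the cost that motivates the simpler, detector-specific approximations derived next.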

Since we did not make any assumptions about the gain, offset, and other parameters in deriving the noise model above, it is valid for both CCD and EMCCD detectors. Moreover, if we allow pixel-dependent parameters (gain, offset, and so on), this model is valid for CMOS detectors as well. As Eq. A9 remains complex, we make appropriate approximations, for the sake of computational efficiency, to derive simpler noise models specialized to each detector.

We start with CCD detectors, which lack an EM amplification stage ($\hat{g} \approx 1$ and $\sigma_w^2 \approx \sigma_{\mathrm{ro}}^2$). These are therefore suitable for detecting input signals that are large compared to the read-out noise. This can be quantitatively expressed as

$\mathrm{SNR} = \Lambda_n/\sigma_{\mathrm{ro}} \gg 1,$ (A10)

implying the signal is not buried by read-out noise. Under the large-signal assumption, the Poisson distribution of Eq. A4 is approximated by a Gaussian whose mean and variance are both given by the Poisson mean

$P(w_n \mid \Lambda_n) \approx \sum_{N_{\mathrm{te},n}=0}^{\infty} \text{Gaussian}(N_{\mathrm{te},n}; \beta\Lambda_n, \beta\Lambda_n) \times \text{Gaussian}(w_n; \gamma N_{\mathrm{te},n} + \mu, \sigma_w^2),$ (A11)

where we set $\tilde{g} = \gamma$ and $\sigma_w^2 = \sigma_{\mathrm{ro}}^2$, and further neglected the spurious charge C in the absence of amplification in CCD cameras. Eq. A11 then leads to

$w_n \mid \Lambda_n \sim \text{Gaussian}(g\Lambda_n + o, \sigma_w^2),$ (A12)

where $g = \gamma\beta$, $o = \mu$, and $\sigma_w^2 = \sigma_{\mathrm{ro}}^2$ denote, respectively, the gain, offset, and variance for CCD detectors. It is also common to apply the offset and gain to the pixel values (data) and write the above equation for gain- and offset-corrected pixel values

$(w_n - o)/g \mid \Lambda_n \sim \text{Gaussian}(\Lambda_n, \sigma_w^2/g^2).$ (A13)
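The CCD model and the gain/offset correction of Eq. A13 can be checked with a quick simulation that generates pixel values through the physical chain (shot noise, quantum efficiency, read-out) and then applies the correction. All parameter values below are illustrative assumptions, and shot noise is simulated explicitly even though Eq. A12 absorbs it into a fixed variance.

```python
import numpy as np

# Simulate the CCD detection chain and apply the Eq. A13 correction.
# Lambda, gamma, beta, mu, sigma_ro are made-up illustrative values.

rng = np.random.default_rng(1)
Lam, gamma, beta, mu, sigma_ro = 500.0, 2.0, 0.9, 100.0, 5.0
g, o = gamma * beta, mu  # gain and offset as defined below Eq. A12

# Physical chain: Poisson shot noise -> Binomial quantum efficiency -> read-out
n_ph = rng.poisson(Lam, size=100_000)        # incident photons, Eq. A1
n_pe = rng.binomial(n_ph, beta)              # photo-electrons, Eq. A2
w = rng.normal(gamma * n_pe + mu, sigma_ro)  # read-out stage, Eq. A7

# Gain/offset-corrected pixel values, (w - o)/g, estimate Lambda directly
w_corr = (w - o) / g
print(w_corr.mean())  # close to Lambda = 500
```

The corrected values cluster around the expected photon count, which is what makes Eq. A13 convenient as a likelihood for downstream inference.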

Next, we consider EMCCD detectors. These detectors are suitable at low SNR: the EM stage amplifies the signal above the read-out noise ($\hat{g} \gg \sigma_w$). In an effort to simplify Eq. A9 for EMCCDs, we write it in explicit form

$P(w_n \mid \Lambda_n) = \sum_{N_{\mathrm{te},n}=0}^{\infty} \frac{(\beta\Lambda_n)^{N_{\mathrm{te},n}} e^{-\beta\Lambda_n}}{N_{\mathrm{te},n}!} \times \frac{1}{\sqrt{2\pi\sigma_w^2/\tilde{g}^2}} \exp\!\left(-\frac{\left(N_{\mathrm{te},n} - (w_n - \mu)/\tilde{g}\right)^2}{2\sigma_w^2/\tilde{g}^2}\right),$ (A14)

where we have factorized $\tilde{g}$ in the exponent. For large amplification, the standard deviation $\sigma_w/\tilde{g}$ becomes small, resulting in a narrow Gaussian that can be approximated by a delta function. Therefore, upon marginalization and some algebra, we recover [502]

$(w_n - o)/g \mid \Lambda_n \sim \frac{1}{\Gamma\!\left(1 + \frac{w_n - o}{g}\right)} e^{-\beta\Lambda_n} (\beta\Lambda_n)^{\frac{w_n - o}{g}},$ (A15)

where $o = \mu$ and $g = \tilde{g}$ denote the offset and gain, respectively. The above EMCCD model for the corrected pixel values resembles a Poisson noise model, except that the corrected pixel values need not be integers [502]. An alternative EMCCD camera noise model can be obtained by convolving the Poisson distribution of Eq. A4 with the Gamma model for EM amplification of Eq. A5, resulting in an approximate Gamma noise model for EMCCD detectors [76, 509, 516]. Both noise models asymptotically converge to the same model for large gain g.
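Eq. A15 is most safely evaluated in log space, since the Gamma function overflows for large corrected pixel values. A minimal sketch, with illustrative parameter values:

```python
import numpy as np
from scipy.special import gammaln

# Log of the EMCCD noise model, Eq. A15: a Poisson-like density in the
# continuous corrected pixel value (w - o)/g. Using gammaln instead of the
# Gamma function avoids overflow. beta, g, o below are illustrative values.

def emccd_loglike(w, Lam, beta=0.9, g=50.0, o=100.0):
    """log P((w - o)/g | Lambda) per Eq. A15, for w >= o."""
    x = (w - o) / g          # corrected pixel value; need not be an integer
    lam = beta * Lam
    return -gammaln(1.0 + x) - lam + x * np.log(lam)
```

As expected for a Poisson-like density, the log-likelihood over pixel values peaks near w = g βΛ + o, i.e., near the amplified, offset mean signal.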

After deriving noise models for CCD and EMCCD detectors with pixel-independent gain g, offset o, and variance $\sigma_w^2$, we now derive the noise model for CMOS detectors, where the gain, variance, and offset are pixel-dependent [502, 511, 512]. Every pixel therefore follows a different noise model, similar to Eq. A9

$P(w_n \mid \Lambda_n) = \sum_{N_{\mathrm{te},n}=0}^{\infty} \text{Poisson}(N_{\mathrm{te},n}; \beta\Lambda_n) \times \text{Gaussian}(w_n; \tilde{g}_n N_{\mathrm{te},n} + \mu_n, \sigma_{w,n}^2),$ (A16)

where n indexes pixels. Given the small gain $\tilde{g}_n$ of CMOS detectors, the above convolution can be approximated by a Poisson distribution by assuming an extra photon source contributing $\sigma_{w,n}^2/g_n^2$ photons to each pixel n [502]

$\hat{w}_n \mid \Lambda_n \sim \text{Poisson}(\Lambda_n + \sigma_{w,n}^2/g_n^2),$ (A17)

where $\hat{w}_n = (w_n - o_n)/g_n + \sigma_{w,n}^2/g_n^2$, $g_n$ is the gain characterized from calibration experiments, and $o_n = \mu_n$ is the pixel-dependent offset.
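In practice, Eq. A17 is applied by converting raw pixel values with per-pixel calibration maps. The sketch below uses synthetic gain, offset, and variance maps as stand-ins for maps measured in calibration experiments; all numerical values are illustrative, and the quantum efficiency is folded into the expected photon count for brevity.

```python
import numpy as np

# CMOS correction per Eq. A17: raw pixel values w are converted with
# per-pixel maps (gain g_n, offset o_n, read-noise variance var_n) into
# corrected values w_hat ~ Poisson(Lambda + var_n / g_n^2). The maps below
# are synthetic stand-ins for calibration measurements.

rng = np.random.default_rng(2)
n_pixels = 50_000
Lam = 30.0                                 # expected photo-electrons everywhere
g_n = rng.uniform(1.8, 2.2, n_pixels)      # per-pixel gain map
o_n = rng.uniform(95.0, 105.0, n_pixels)   # per-pixel offset map
var_n = rng.uniform(4.0, 9.0, n_pixels)    # per-pixel read-noise variance map

# Simulate raw pixel values: Poisson photo-electrons, then pixel-wise read-out
w = rng.normal(g_n * rng.poisson(Lam, n_pixels) + o_n, np.sqrt(var_n))

# Eq. A17 correction; the extra var_n/g_n^2 term accounts for read-out noise
w_hat = (w - o_n) / g_n + var_n / g_n ** 2
print(w_hat.mean() - (var_n / g_n ** 2).mean())  # approximately Lambda = 30
```

After correction, all pixels share one simple Poisson likelihood despite their heterogeneous gains and offsets, which is the practical appeal of this approximation.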

After considering wide-field detectors, we proceed to describe noise models for single-photon detectors. We do so assuming fluorophore excitation by a pulsed laser. This is illustrated in Fig. 61, where the laser pulses are designated by blue spikes with inter-pulse window T. Here, a fluorophore is excited during a pulse at time $t_{\mathrm{ext}}$, spends a time $\Delta t_{\mathrm{ext}}$ in the excited state and, in the most general case, emits a photon after n pulses at $t_{\mathrm{ems}}$. However, the photon arrival is recorded at $t_{\mathrm{det}}$, after a delay $\Delta_2$ in the detector, and is reported with respect to the immediately preceding pulse as $\Delta t_k$ for the kth photon.

FIG. 61:

Single-photon detection. Laser pulses and their centers are shown by blue spikes and red dashed lines, respectively, with inter-pulse window T. The fluorophore excitation, photon emission, and photon detection events take place at $t_{\mathrm{ext}}$, $t_{\mathrm{ems}}$, and $t_{\mathrm{det}}$, respectively, designated by black dashed lines. The fluorophore spends time $\Delta t_{\mathrm{ext}}$ in the excited state and emits a photon after n pulses. The reported photon arrival time, $\Delta t_k$, is measured with respect to the immediately preceding pulse center. Moreover, $\Delta_1$ and $\Delta_2$ denote, respectively, the offset of the excitation from the pulse center and the detector's delay in reporting the photon arrival time.

From Fig. 61, we can write the following relation for the reported photon arrival time

$\Delta t_k = \Delta t_{\mathrm{ext}} + \Delta t_{\mathrm{IRF}} - nT,$ (A18)

where $\Delta t_{\mathrm{IRF}} = \Delta_1 + \Delta_2$ is the noise introduced by the IRF, arising from the laser pulses' finite width and the stochastic delay of the detector. Here, the reported arrival time is the sum of three random variables. As such, the noise model is given by the convolution of their three probability distributions

$P(\Delta t_k \mid \lambda) = P(n \mid N) \ast P(\Delta t_{\mathrm{ext}} \mid \lambda) \ast P(\Delta t_{\mathrm{IRF}} \mid \tau_{\mathrm{IRF}}, \sigma_{\mathrm{IRF}}^2),$ (A19)

where $\lambda$, $\tau_{\mathrm{IRF}}$, $\sigma_{\mathrm{IRF}}^2$, and N are, respectively, the excited-state decay rate (inverse of the excited-state lifetime; see Eq. 18), the IRF offset, the IRF variance, and the maximum possible number of pulses after which the fluorophore emits. These distributions are given by

$\Delta t_{\mathrm{ext}} \mid \lambda \sim \text{Exponential}(\Delta t_{\mathrm{ext}}; \lambda),$ (A20)
$\Delta t_{\mathrm{IRF}} \mid \tau_{\mathrm{IRF}}, \sigma_{\mathrm{IRF}}^2 \sim \text{Gaussian}(\Delta t_{\mathrm{IRF}}; \tau_{\mathrm{IRF}}, \sigma_{\mathrm{IRF}}^2),$ (A21)
$n \mid N \sim \text{Categorical}(A_0, \ldots, A_N),$ (A22)

where the time spent in the excited state and the IRF time are sampled from Exponential and Gaussian distributions, respectively. The pulse at which the fluorophore emits is sampled from a Categorical distribution whose weight $A_n$ is given by the integral of the Exponential density of Eq. A20 over pulse n [53]. Finally, calculating the convolutions in Eq. A19, we obtain the following noise model for single-photon detectors under pulsed illumination [53]

$P(\Delta t_k \mid \lambda) = \sum_{n=0}^{N} \frac{\lambda}{2} \, \mathrm{erfc}\!\left(\frac{\tau_{\mathrm{IRF}} - \Delta t_k - nT + \lambda\sigma_{\mathrm{IRF}}^2}{\sqrt{2}\,\sigma_{\mathrm{IRF}}}\right) \times \exp\!\left(\frac{\lambda}{2}\left(2(\tau_{\mathrm{IRF}} - \Delta t_k - nT) + \lambda\sigma_{\mathrm{IRF}}^2\right)\right),$ (A23)

where erfc(·) denotes the complementary error function [187]. In many practical cases, the inter-pulse time is much longer than the fluorophore lifetime ($T \gg 1/\lambda$), so the fluorophore emits before the next pulse. In such cases, the noise model simplifies by setting N = 0 [499].
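Eq. A23 can be evaluated directly with the complementary error function; the sketch below implements it and, for the common $T \gg 1/\lambda$ regime, uses N = 0. The lifetime, IRF, and inter-pulse values in the usage check are illustrative assumptions (times in nanoseconds).

```python
import numpy as np
from scipy.special import erfc

# The single-photon arrival-time model of Eq. A23: an exponential
# excited-state decay convolved with a Gaussian IRF, summed over the N + 1
# pulses after which the photon may be emitted.

def arrival_density(dt_k, lam, tau_irf, sigma_irf, T, N=0):
    """P(Delta t_k | lambda), Eq. A23; set N = 0 when T >> 1/lambda."""
    p = np.zeros_like(np.asarray(dt_k, dtype=float))
    for n in range(N + 1):
        shift = tau_irf - dt_k - n * T  # pulse-shifted offset from the IRF center
        p += (lam / 2.0) \
             * erfc((shift + lam * sigma_irf ** 2) / (np.sqrt(2.0) * sigma_irf)) \
             * np.exp((lam / 2.0) * (2.0 * shift + lam * sigma_irf ** 2))
    return p
```

Each term is the familiar exponentially modified Gaussian, so for a narrow IRF the density rises sharply near the IRF offset and then decays with the excited-state rate λ.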

References

  • [1].Darrigol O., A history of optics from Greek antiquity to the nineteenth century (Oxford University Press, 2012). [Google Scholar]
  • [2].Thibodeau P., Ancient optics: Theories and problems of vision, A Companion to Science, Technology, and Medicine in Ancient Greece and Rome, 2 Volume Set, 130 (2016). [Google Scholar]
  • [3].Wang J. and Wang C., Optics in China, in Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures, edited by Selin H. (Springer Netherlands, Dordrecht, 2008) pp. 1790–1792. [Google Scholar]
  • [4].Smith A. M. et al. , Ptolemy’s theory of visual perception: an English translation of the optics, Vol. 82 (American Philosophical Society, 1996). [Google Scholar]
  • [5].Nasr S. H. and De Santillana G., Science and civilization in Islam, Vol. 16 (Harvard University Press; Cambridge, MA, 1968). [Google Scholar]
  • [6].Kriss T. C. and Kriss V. M., History of the operating microscope: from magnifying glass to microneurosurgery, Neurosurgery 42, 899 (1998). [DOI] [PubMed] [Google Scholar]
  • [7].Tbakhi A. and Amr S. S., Ibn al-haytham: father of modern optics, Annals of Saudi medicine 27, 464 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [8].Al-Khalili J., In retrospect: Book of optics, Nature 518, 164 (2015). [Google Scholar]
  • [9].Bardell D., The invention of the microscope, Bios 75, 78 (2004). [Google Scholar]
  • [10].Chung K.-T. and Liu J.-K., Pioneers in microbiology: The human side of science (World Scientific, 2017). [Google Scholar]
  • [11].Gest H., The discovery of microorganisms by Robert Hooke and Antoni Van Leeuwenhoek, fellows of the Royal Society, Notes and records of the Royal Society of London 58, 187 (2004). [DOI] [PubMed] [Google Scholar]
  • [12].Planck M., On the law of distribution of energy in the normal spectrum, Annalen der physik 4, 1 (1901). [Google Scholar]
  • [13].Einstein A., Über die von der molekularkinetischen theorie der wärme geforderte bewegung von in ruhenden flüssigkeiten suspendierten teilchen, Annalen der physik 4 (1905). [Google Scholar]
  • [14].Popescu G., Quantitative phase imaging of cells and tissues (McGraw-Hill Education, 2011). [Google Scholar]
  • [15].Park Y., Depeursinge C., and Popescu G., Quantitative phase imaging in biomedicine, Nature photonics 12, 578 (2018). [Google Scholar]
  • [16].Lichtman J. W. and Conchello J.-A., Fluorescence microscopy, Nature methods 2, 910 (2005). [DOI] [PubMed] [Google Scholar]
  • [17].Poynting J. H., Xv. on the transfer of energy in the electromagnetic field, Philosophical Transactions of the Royal Society of London, 343 (1884). [Google Scholar]
  • [18].de Laplace P. S., Théorie analytique des probabilités, Vol. 7 (Courcier, 1820). [Google Scholar]
  • [19].marquis de Laplace P. S., Essai philosophique sur les probabilités (Bachelier, 1840). [Google Scholar]
  • [20].Dale A. I., Bayes or Laplace? An examination of the origin and early applications of Bayes’ theorem, Archive for History of Exact Sciences, 23 (1982). [Google Scholar]
  • [21].Pressé S. and Sgouralis I., Data Modeling for the Sciences: Applications, Basics, Computations (Cambridge University Press, 2023). [Google Scholar]
  • [22].Cramér H., Mathematical methods of statistics, Vol. 43 (Princeton university press, 1999). [Google Scholar]
  • [23].Rao C. R., Information and the accuracy attainable in the estimation of statistical parameters, in Break-throughs in statistics (Springer, 1992) pp. 235–247. [Google Scholar]
  • [24].McNeish D., On using Bayesian methods to address small sample problems, Structural Equation Modeling: A Multidisciplinary Journal 23, 750 (2016). [Google Scholar]
  • [25].Smid S. C., McNeish D., Miočević M., and van de Schoot R., Bayesian versus frequentist estimation for structural equation models in small sample contexts: A systematic review, Structural Equation Modeling: A Multidisciplinary Journal 27, 131 (2020). [Google Scholar]
  • [26].Van de Schoot R. and Miocević M., Small sample size solutions: A guide for applied researchers and practitioners (Taylor & Francis, 2020). [Google Scholar]
  • [27].Zitzmann S., Lüdtke O., Robitzsch A., and Hecht M., On the performance of Bayesian approaches in small samples: a comment on Smid, McNeish, Miocevic, and van de Schoot (2020), Structural Equation Modeling: A Multidisciplinary Journal 28, 40 (2021). [Google Scholar]
  • [28].Geman S. and Geman D., Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Transactions on pattern analysis and machine intelligence, 721 (1984). [DOI] [PubMed] [Google Scholar]
  • [29].Metropolis N., Rosenbluth A. W., Rosenbluth M. N., Teller A. H., and Teller E., Equation of state calculations by fast computing machines, The journal of chemical physics 21, 1087 (1953). [Google Scholar]
  • [30].Hastings W. K., Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57, 97 (1970). [Google Scholar]
  • [31].Murray I., Adams R., and MacKay D., Elliptical slice sampling, in Proceedings of the thirteenth international conference on artificial intelligence and statistics (JMLR Workshop and Conference Proceedings, 2010) pp. 541–548. [Google Scholar]
  • [32].Bishop C. M. and Nasrabadi N. M., Pattern recognition and machine learning, Vol. 4 (Springer, 2006). [Google Scholar]
  • [33].Brooks S., Gelman A., Jones G., and Meng X.-L., Handbook of Markov chain Monte Carlo (CRC press, 2011). [Google Scholar]
  • [34].Richardson S. and Green P. J., On Bayesian analysis of mixtures with an unknown number of components (with discussion), Journal of the Royal Statistical Society: series B (statistical methodology) 59, 731 (1997). [Google Scholar]
  • [35].Neal R. M., Markov chain sampling methods for Dirichlet process mixture models, Journal of computational and graphical statistics 9, 249 (2000). [Google Scholar]
  • [36].Gelfand A. E., Kottas A., and MacEachern S. N., Bayesian nonparametric spatial modeling with Dirichlet process mixing, Journal of the American Statistical Association 100, 1021 (2005). [Google Scholar]
  • [37].Sgouralis I. and Pressé S., An introduction to infinite HMMs for single-molecule data analysis, Biophysical journal 112, 2021 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Quan T., Zhu H., Liu X., Liu Y., Ding J., Zeng S., and Huang Z.-L., High-density localization of active molecules using structured sparse model and Bayesian information criterion, Optics express 19, 16963 (2011). [DOI] [PubMed] [Google Scholar]
  • [39].Ferguson T. S., A Bayesian analysis of some nonparametric problems, The annals of statistics, 209 (1973). [Google Scholar]
  • [40].Green P. J., Reversible jump Markov chain Monte Carlo computation and Bayesian model determination, Biometrika 82, 711 (1995). [Google Scholar]
  • [41].Orieux F., Sepulveda E., Loriette V., Dubertret B., and Olivo-Marin J.-C., Bayesian estimation for optimized structured illumination microscopy, IEEE Transactions on image processing 21, 601 (2011). [DOI] [PubMed] [Google Scholar]
  • [42].Hines K. E., Bankston J. R., and Aldrich R. W., Analyzing single-molecule time series via nonparametric Bayesian inference, Biophysical journal 108, 540 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [43].Gabitto M. I., Marie-Nelly H., Pakman A., Pataki A., Darzacq X., and Jordan M. I., A bayesian nonparametric approach to super-resolution single-molecule localization, The Annals of Applied Statistics 15, 1742 (2021). [Google Scholar]
  • [44].Hjort N. L., Nonparametric Bayes estimators based on beta processes in models for life history data, the Annals of Statistics, 1259 (1990). [Google Scholar]
  • [45].Paisley J. and Carin L., Nonparametric factor analysis with beta process priors, in Proceedings of the 26th annual international conference on machine learning (2009) pp. 777–784. [Google Scholar]
  • [46].Broderick T., Jordan M. I., and Pitman J., Beta processes, stick-breaking and power laws, Bayesian analysis 7, 439 (2012). [Google Scholar]
  • [47].Shah A., Knowles D., and Ghahramani Z., An empirical study of stochastic variational inference algorithms for the beta Bernoulli process, in International Conference on Machine Learning (PMLR, 2015) pp. 1594–1603. [Google Scholar]
  • [48].Al Labadi L. and Zarepour M., On approximations of the beta process in latent feature models: Point processes approach, Sankhya A 80, 59 (2018). [Google Scholar]
  • [49].Sgouralis I., Whitmore M., Lapidus L., Comstock M. J., and Pressé S., Single molecule force spectroscopy at high data acquisition: A bayesian nonparametric analysis, The Journal of chemical physics 148 (2018). [DOI] [PubMed] [Google Scholar]
  • [50].Rasmussen C. E., Gaussian processes in machine learning, in Summer school on machine learning (Springer, 2003) pp. 63–71. [Google Scholar]
  • [51].Quinonero-Candela J. and Rasmussen C. E., A unifying view of sparse approximate Gaussian process regression, The Journal of Machine Learning Research 6, 1939 (2005). [Google Scholar]
  • [52].Bryan J. S., Sgouralis I., and Pressé S., Inferring effective forces for langevin dynamics using gaussian processes, The journal of chemical physics 152 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [53].Fazel M., Jazani S., Scipioni L., Vallmitjana A., Gratton E., Digman M. A., and Pressé S., High resolution fluorescence lifetime maps from minimal photon counts, ACS Photonics 9, 1015 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [54].Goodsell D. S., The machinery of life (Springer Science & Business Media, 2009). [Google Scholar]
  • [55].Abbe E., Beiträge zur theorie des mikroskops und der mikroskopischen wahrnehmung, Archiv für mikroskopische Anatomie 9, 413–418 (1873). [Google Scholar]
  • [56].Axelrod D., Cell-substrate contacts illuminated by total internal reflection fluorescence., The Journal of cell biology 89, 141 (1981). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [57].Enderlein J., Ruckstuhl T., and Seeger S., Highly efficient optical detection of surface-generated fluorescence, Applied optics 38, 724 (1999). [DOI] [PubMed] [Google Scholar]
  • [58].Ruckstuhl T. and Verdes D., Supercritical angle fluorescence (SAF) microscopy, Optics express 12, 4246 (2004). [DOI] [PubMed] [Google Scholar]
  • [59].Chizhik A. I., Rother J., Gregor I., Janshoff A., and Enderlein J., Metal-induced energy transfer for live cell nanoscopy, Nature Photonics 8, 124 (2014). [Google Scholar]
  • [60].Marvin M., Microscopy apparatus (1961), US Patent 3,013,467.
  • [61].Sheppard C. J., Super-resolution in confocal imaging, Optik (Jena) 80, 53 (1988) [Google Scholar]
  • [62].Müller C. B. and Enderlein J., Image scanning microscopy, Physical Review Letters 104, 198101 (2010). [DOI] [PubMed] [Google Scholar]
  • [63].Denk W., Strickler J. H., and Webb W. W., Two-photon laser scanning fluorescence microscopy, Science 248, 73 (1990). [DOI] [PubMed] [Google Scholar]
  • [64].Hell S. and Stelzer E. H., Fundamental improvement of resolution with a 4 pi-confocal fluorescence microscope using two-photon excitation, Optics Communications 93, 277 (1992). [Google Scholar]
  • [65].Bailey B., Farkas D. L., Taylor D. L., and Lanni F., Enhancement of axial resolution in fluorescence microscopy by standing-wave excitation, Nature 366, 44 (1993). [DOI] [PubMed] [Google Scholar]
  • [66].Voie A. H., Burns D., and Spelman F., Orthogonalplane fluorescence optical sectioning: three-dimensional imaging of macroscopic biological specimens, Journal of microscopy 170, 229 (1993). [DOI] [PubMed] [Google Scholar]
  • [67].Huisken J., Swoger J., Del Bene F., Wittbrodt J., and Stelzer E. H., Optical sectioning deep inside live embryos by selective plane illumination microscopy, Science 305, 1007 (2004). [DOI] [PubMed] [Google Scholar]
  • [68].Blanchard P. M. and Greenaway A. H., Simultaneous multiplane imaging with a distorted diffraction grating, Applied Optics 38, 6692 (1999) [DOI] [PubMed] [Google Scholar]
  • [69].Prabhat P., Ram S., Sally Ward E., and Ober R. J., Simultaneous imaging of different focal planes in fluorescence microscopy for the study of cellular dynamics in three dimensions, IEEE Transactions on Nanobioscience 3, 237 (2004) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [70].Hell S. W. and Wichmann J., Breaking the diffraction resolution limit by stimulated emission: stimulatedemission-depletion fluorescence microscopy, Optics letters 19, 780 (1994). [DOI] [PubMed] [Google Scholar]
  • [71].Moerner W., Shechtman Y., and Wang Q., Single-molecule spectroscopy and imaging over the decades, Faraday discussions 184, 9 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Specht E. A., Braselmann E., and Palmer A. E., A critical and comparative review of fluorescent tools for live-cell imaging, Annual Review of Physiology 79, 93 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [73].Ciruela F., Fluorescence-based methods in the study of protein-protein interactions in living cells, Current Opinion in Biotechnology 19, 338 (2008) [DOI] [PubMed] [Google Scholar]
  • [74].Luo F., Qin G., Xia T., and Fang X., Single-molecule imaging of protein interactions and dynamics, Annual Review of Analytical Chemistry 13, 337 (2020) [DOI] [PubMed] [Google Scholar]
  • [75].Gruβmayer K. S., Yserentant K., and Herten D. P., Photons in - numbers out: Perspectives in quantitative fluorescence microscopy for in situ protein counting, Methods and Applications in Fluorescence 7, 10.1088/2050-6120/aaf2eb (2019). [DOI] [PubMed] [Google Scholar]
  • [76].Bryan J. S. IV, Sgouralis I., and Pressé S., Diffraction-limited molecular cluster quantification with Bayesian nonparametrics, Nature Computational Science 2, 102 (2022) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [77].Hummert J., Yserentant K., Fink T., Euchner J., Ho Y. X., Tashev S. A., and Herten D.-P., Photobleaching step analysis for robust determination of protein complex stoichiometries, Molecular biology of the cell 32, ar35 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [78].Huang B., Bates M., and Zhuang X., Super-resolution fluorescence microscopy, Annual review of biochemistry 78, 993 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [79].Schermelleh L., Ferrand A., Huser T., Eggeling C., Sauer M., Biehlmaier O., and Drummen G. P., Super-resolution microscopy demystified, Nature Cell Biology 21, 72 (2019) [DOI] [PubMed] [Google Scholar]
  • [80].Tsien R. Y., The green fluorescent protein, Annual review of biochemistry 67, 509 (1998). [DOI] [PubMed] [Google Scholar]
  • [81].Zhang J., Campbell R. E., Ting A. Y., and Tsien R. Y., Creating new fluorescent probes for cell biology, Nature reviews Molecular cell biology 3, 906 (2002). [DOI] [PubMed] [Google Scholar]
  • [82].Dedecker P., De Schryver F. C., and Hofkens J., Fluorescent proteins: Shine on, you crazy diamond, Journal of the American Chemical Society 135, 2387 (2013). [DOI] [PubMed] [Google Scholar]
  • [83].Dempsey G. T., Vaughan J. C., Chen K. H., Bates M., and Zhuang X., Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging, Nature methods 8, 1027 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [84].Lavis L. D., Chemistry is dead. Long live chemistry!, Biochemistry 56, 5165 (2017). [DOI] [PubMed] [Google Scholar]
  • [85].Resch-Genger U., Grabolle M., Cavaliere-Jaricot S., Nitschke R., and Nann T., Quantum dots versus organic dyes as fluorescent labels, Nature Methods 5, 763 (2008). [DOI] [PubMed] [Google Scholar]
  • [86].Giepmans B. N., Adams S. R., Ellisman M. H., and Tsien R. Y., The fluorescent toolbox for assessing protein location and function, Science 312, 217 (2006) [DOI] [PubMed] [Google Scholar]
  • [87].Li H. and Vaughan J. C., Switchable fluorophores for fingle-molecule localization microscopy, Chemical Reviews 118, 9412 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [88].Aharonovich I. and Neu E., Diamond nanophotonics, Advanced Optical Materials 2, 911 (2014). [Google Scholar]
  • [89].Jin D., Xi P., Wang B., Zhang L., Enderlein J., and Van Oijen A. M., Nanoparticles for super-resolution microscopy and single-molecule tracking, Nature Methods 15, 415 (2018). [DOI] [PubMed] [Google Scholar]
  • [90].Saurabh A., Niekamp S., Sgouralis I., and Pressé S., Modeling non-additive effects in neighboring chemically identical fluorophores, The Journal of Physical Chemistry B (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [91].Lakowicz J. R., Principles of Fluorescence Spectroscopy, 3rd ed. (Springer US, New York, 2006) pp. 1–954. [Google Scholar]
  • [92].Valeur B. and Berberan-Santos M. N., Molecular Fluorescence: Principles and Applications, Second Edition (Wiley-VCH, 2012). [Google Scholar]
  • [93].Digman M. A., Caiolfa V. R., Zamai M., and Gratton E., The phasor approach to fluorescence lifetime imaging analysis, Biophysical journal 94, L14 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [94].Datta R., Heaster T. M., Sharick J. T., Gillette A. A., and Skala M. C., Fluorescence lifetime imaging microscopy: Fundamentals and advances in instrumentation, analysis, and applications, Journal of biomedical optics 25, 071203 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [95].Fereidouni F., Bader A. N., and Gerritsen H. C., Spectral phasor analysis allows rapid and reliable unmixing of fluorescence microscopy spectral images, Optics express 20, 12729 (2012). [DOI] [PubMed] [Google Scholar]
  • [96].Valm A. M., Cohen S., Legant W. R., Melunis J., Hershberg U., Wait E., Cohen A. R., Davidson M. W., Betzig E., and Lippincott-Schwartz J., Applying systems-level spectral imaging and analysis to reveal the organelle interactome, Nature 546, 162 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [97].Scipioni L., Rossetta A., Tedeschi G., and Gratton E., Phasor s-flim: a new paradigm for fast and robust spectral fluorescence lifetime imaging, Nature methods 18, 542 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [98].Kasha M., Characterization of electronic transitions in complex molecules, Discuss. Faraday Soc. 9, 14 (1950) [Google Scholar]
  • [99].Zheng Q., Jockusch S., Zhou Z., and Blanchard S. C., The contribution of reactive oxygen species to the photobleaching of organic fluorophores, Photochemistry and photobiology 90, 448 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [100].Shen H., Tauzin L. J., Baiyasi R., Wang W., Moringo N., Shuang B., and Landes C. F., Single particle tracking: From theory to biophysical applications, Chemical Reviews 117, 7331 (2017) [DOI] [PubMed] [Google Scholar]
  • [101].Sgouralis I., Jalihal A. P., Xu L. W., Walter N. G., and Pressé S., Dynamic superresolution by Bayesian non-parametric image processing, bioRxiv, 2023 (2023). [Google Scholar]
  • [102].Xu L. W., Sgouralis I., Kilic Z., and Pressé S., BNP-Track: A framework for multi-particle superresolved tracking, bioRxiv, 2023 (2023). [Google Scholar]
  • [103].Förster T., Zwischenmolekulare energiewanderung und fluoreszenz, Annalen der physik 437, 55 (1948). [Google Scholar]
  • [104].Lerner E., Cordes T., Ingargiola A., Alhadid Y., Chung S., Michalet X., and Weiss S., Toward dynamic structural biology: Two decades of single-molecule Förster resonance energy transfer, Science 359, eaan1133 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [105].Aitken C. E., Marshall R. A., and Puglisi J. D., An oxygen scavenging system for improvement of dye stability in single-molecule fluorescence experiments, Biophysical journal 94, 1826 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [106].Vogelsang J., Kasper R., Steinhauer C., Person B., Heilemann M., Sauer M., and Tinnefeld P., A reducing and oxidizing system minimizes photobleaching and blinking of fluorescent dyes, Angewandte Chemie - International Edition 47, 5465 (2008). [DOI] [PubMed] [Google Scholar]
  • [107].Rust M. J., Bates M., and Zhuang X., Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM), Nature methods 3, 793 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [108].Jradi F. M. and Lavis L. D., Chemistry of photosensitive fluorophores for single-molecule localization microscopy, ACS Chemical Biology 14, 1077 (2019). [DOI] [PubMed] [Google Scholar]
  • [109].Diekmann R., Kahnwald M., Schoenit A., Deschamps J., Matti U., and Ries J., Optimizing imaging speed and excitation intensity for single-molecule localization microscopy, Nature Methods 17, 909 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [110].Dempsey G. T., Bates M., Kowtoniuk W. E., Liu D. R., Tsien R. Y., and Zhuang X., Photoswitching mechanism of cyanine dyes, Journal of the American Chemical Society 131, 18192 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [111].Vaughan J. C., Jia S., and Zhuang X., Ultrabright photoactivatable fluorophores created by reductive caging, Nature Methods 9, 1181 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [112].Lehmann M., Gottschalk B., Puchkov D., Schmieder P., Schwagerus S., Hackenberger C. P., Haucke V., and Schmoranzer J., Multicolor caged dSTORM resolves the ultrastructure of synaptic vesicles in the brain, Angewandte Chemie - International Edition 54, 13230 (2015). [DOI] [PubMed] [Google Scholar]
  • [113].Zheng Q., Ayala A. X., Chung I., Weigel A. V., Ranjan A., Falco N., Grimm J. B., Tkachuk A. N., Wu C., Lippincott-Schwartz J., Singer R. H., and Lavis L. D., Rational design of fluorogenic and spontaneously blinking labels for super-resolution imaging, ACS Central Science 5, 1602 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [114].Betzig E., Patterson G. H., Sougrat R., Lindwasser O. W., Olenych S., Bonifacino J. S., Davidson M. W., Lippincott-Schwartz J., and Hess H. F., Imaging intracellular fluorescent proteins at nanometer resolution, Science 313, 1642 (2006). [DOI] [PubMed] [Google Scholar]
  • [115].Hess S. T., Girirajan T. P., and Mason M. D., Ultra-high resolution imaging by fluorescence photoactivation localization microscopy, Biophysical journal 91, 4258 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [116].Manley S., Gillette J. M., Patterson G. H., Shroff H., Hess H. F., Betzig E., and Lippincott-Schwartz J., High-density mapping of single-molecule trajectories with photoactivated localization microscopy, Nature Methods 5, 155 (2008). [DOI] [PubMed] [Google Scholar]
  • [117].Kumagai A., Ando R., Miyatake H., Greimel P., Kobayashi T., Hirabayashi Y., Shimogori T., and Miyawaki A., A bilirubin-inducible fluorescent protein from eel muscle, Cell 153, 1602 (2013). [DOI] [PubMed] [Google Scholar]
  • [118].Kwon J., Park J. S., Kang M., Choi S., Park J., Kim G. T., Lee C., Cha S., Rhee H. W., and Shim S. H., Bright ligand-activatable fluorescent protein for high-quality multicolor live-cell super-resolution microscopy, Nature Communications 11, 1 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [119].Balleza E., Kim J. M., and Cluzel P., Systematic characterization of maturation time of fluorescent proteins in living cells, Nature Methods 15, 47 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [120].Helmerich D. A., Beliu G., Matikonda S. S., Schnermann M. J., and Sauer M., Photoblueing of organic dyes can cause artifacts in super-resolution microscopy, Nature Methods 18, 253 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [121].Cho Y., An H. J., Kim T., Lee C., and Lee N. K., Mechanism of cyanine 5 to cyanine 3 photoconversion and its application for high-density single-particle tracking in a living cell, Journal of the American Chemical Society 143, 14125 (2021). [DOI] [PubMed] [Google Scholar]
  • [122].Rybina A., Lang C., Wirtz M., Grußmayer K., Kurz A., Maier F., Schmitt A., Trapp O., Jung G., and Herten D. P., Distinguishing alternative reaction pathways by single-molecule fluorescence spectroscopy, Angewandte Chemie - International Edition 52, 6322 (2013). [DOI] [PubMed] [Google Scholar]
  • [123].Cordes T. and Blum S. A., Opportunities and challenges in single-molecule and single-particle fluorescence microscopy for mechanistic studies of chemical reactions, Nature Chemistry 5, 993 (2013). [DOI] [PubMed] [Google Scholar]
  • [124].Annibale P., Vanni S., Scarselli M., Rothlisberger U., and Radenovic A., Quantitative photo activated localization microscopy: Unraveling the effects of photoblinking, PLoS ONE 6, e22678 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [125].De Zitter E., Thédié D., Mönkemöller V., Hugelier S., Beaudouin J., Adam V., Byrdin M., Van Meervelt L., Dedecker P., and Bourgeois D., Mechanistic investigation of mEos4b reveals a strategy to reduce track interruptions in sptPALM, Nature Methods 16, 707 (2019). [DOI] [PubMed] [Google Scholar]
  • [126].Wang Q. and Moerner W. E., Lifetime and spectrally resolved characterization of the photodynamics of single fluorophores in solution using the anti-Brownian electrokinetic trap, Journal of Physical Chemistry B 117, 4641 (2013). [DOI] [PubMed] [Google Scholar]
  • [127].Escudero D., Revising intramolecular photoinduced electron transfer (PET) from first-principles, Accounts of Chemical Research 49, 1816 (2016). [DOI] [PubMed] [Google Scholar]
  • [128].Kobayashi H., Picard L.-P., Schönegge A.-M., and Bouvier M., Bioluminescence resonance energy transfer-based imaging of protein-protein interactions in living cells, Nature Protocols 14, 1084 (2019). [DOI] [PubMed] [Google Scholar]
  • [129].Syed A. J. and Anderson J. C., Applications of bioluminescence in biotechnology and beyond, Chemical Society Reviews 50, 5668 (2021). [DOI] [PubMed] [Google Scholar]
  • [130].Myong S., Cui S., Cornish P. V., Kirchhofer A., Gack M. U., Jung J. U., Hopfner K.-P., and Ha T., Cytosolic viral sensor RIG-I is a 5’-Triphosphate-dependent translocase on double-stranded RNA, Science 323, 1070 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [131].Hwang H., Kim H., and Myong S., Protein induced fluorescence enhancement as a single molecule assay with short distance sensitivity, Proceedings of the National Academy of Sciences 108, 7414 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [132].Graham T. G. W., Ferrie J. J., Dailey G. M., Tjian R., and Darzacq X., Proximity-assisted photoactivation (PAPA): Detecting molecular interactions in live-cell single-molecule imaging, bioRxiv, 2021.12.13.472508 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [133].Wu L., Huang C., Emery B. P., Sedgwick A. C., Bull S. D., He X.-P., Tian H., Yoon J., Sessler J. L., and James T. D., Förster resonance energy transfer (FRET)-based small-molecule sensors and imaging agents, Chemical Society Reviews 49, 5110 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [134].Agam G., Gebhardt C., Popara M., Mächtel R., Folz J., Ambrose B., Chamachi N., Chung S. Y., Craggs T. D., de Boer M., et al., Reliability and accuracy of single-molecule FRET studies for characterization of structural dynamics and distances in proteins, Nature Methods 20, 523 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [135].Förster T., Zwischenmolekulare Energiewanderung und Fluoreszenz, Annalen der Physik 437, 55 (1948). [Google Scholar]
  • [136].Jones G. A. and Bradshaw D. S., Resonance energy transfer: from fundamental theory to recent applications, Frontiers in Physics 7, 100 (2019). [Google Scholar]
  • [137].Hellenkamp B., Schmid S., Doroshenko O., Opanasyuk O., Kühnemuth R., Rezaei Adariani S., Ambrose B., Aznauryan M., Barth A., Birkedal V., Bowen M. E., Chen H., Cordes T., Eilert T., Fijen C., Gebhardt C., Götz M., Gouridis G., Gratton E., Ha T., Hao P., Hanke C. A., Hartmann A., Hendrix J., Hildebrandt L. L., Hirschfeld V., Hohlbein J., Hua B., Hübner C. G., Kallis E., Kapanidis A. N., Kim J. Y., Krainer G., Lamb D. C., Lee N. K., Lemke E. A., Levesque B., Levitus M., McCann J. J., Naredi-Rainer N., Nettels D., Ngo T., Qiu R., Robb N. C., Röcker C., Sanabria H., Schlierf M., Schröder T., Schuler B., Seidel H., Streit L., Thurn J., Tinnefeld P., Tyagi S., Vandenberk N., Vera A. M., Weninger K. R., Wünsch B., Yanez-Orozco I. S., Michaelis J., Seidel C. A., Craggs T. D., and Hugel T., Precision and accuracy of single-molecule FRET measurements-a multi-laboratory benchmark study, Nature Methods 15, 669 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [138].Gradinaru C. C., Marushchak D. O., Samim M., and Krull U. J., Fluorescence anisotropy: From single molecules to live cells, Analyst 135, 452 (2010). [DOI] [PubMed] [Google Scholar]
  • [139].Bader A. N., Hoetzl S., Hofman E. G., Voortman J., Van Bergen En Henegouwen P. M., Van Meer G., and Gerritsen H. C., Homo-FRET imaging as a tool to quantify protein and lipid clustering, ChemPhysChem 12, 475 (2011). [DOI] [PubMed] [Google Scholar]
  • [140].Hulleman C. N., Thorsen R. Ø., Kim E., Dekker C., Stallinga S., and Rieger B., Simultaneous orientation and 3D localization microscopy with a vortex point spread function, Nature Communications 12, 1 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [141].Saurabh A., Safar M., Fazel M., Sgouralis I., and Pressé S., Single photon smFRET. II. application to continuous illumination, Biophysical Reports, 100087 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [142].Safar M., Saurabh A., Fazel M., Sgouralis I., and Pressé S., Single photon smFRET. III. application to pulsed illumination, Biophysical Reports, 100088 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [143].Saurabh A., Fazel M., Safar M., Sgouralis I., and Pressé S., Single photon smFRET. I. theory and conceptual basis, Biophysical Reports, 100089 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [144].Gillespie D. T., A general method for numerically simulating the stochastic time evolution of coupled chemical reactions, Journal of computational physics 22, 403 (1976). [Google Scholar]
  • [145].Roy R., Hohng S., and Ha T., A practical guide to single-molecule FRET, Nature methods 5, 507 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [146].Bacia K., Petrášek Z., and Schwille P., Correcting for spectral cross-talk in dual-color fluorescence cross-correlation spectroscopy, ChemPhysChem 13, 1221 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [147].Sgouralis I., Madaan S., Djutanta F., Kha R., Hariadi R. F., and Pressé S., A Bayesian nonparametric approach to single molecule Förster resonance energy transfer, The Journal of Physical Chemistry B 123, 675 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [148].Kilic Z., Sgouralis I., and Pressé S., Generalizing HMMs to continuous time for fast kinetics: Hidden Markov jump processes, Biophysical journal 120, 409 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [149].Gopich I. and Szabo A., Theory of photon statistics in single-molecule Förster resonance energy transfer, The Journal of chemical physics 122, 014707 (2005). [DOI] [PubMed] [Google Scholar]
  • [150].Van Kampen N. G., Stochastic processes in physics and chemistry, Vol. 1 (Elsevier, 1992). [Google Scholar]
  • [151].Lee J. and Pressé S., A derivation of the master equation from path entropy maximization, The Journal of chemical physics 137 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [152].Gopich I. V. and Szabo A., Theory of the statistics of kinetic transitions with application to single-molecule enzyme catalysis, The Journal of chemical physics 124, 154712 (2006). [DOI] [PubMed] [Google Scholar]
  • [153].McKinney S. A., Déclais A.-C., Lilley D. M., and Ha T., Structural dynamics of individual Holliday junctions, Nature structural biology 10, 93 (2003). [DOI] [PubMed] [Google Scholar]
  • [154].Patel L., Gustafsson N., Lin Y., Ober R., Henriques R., and Cohen E., A hidden Markov model approach to characterizing the photo-switching behavior of fluorophores, The annals of applied statistics 13, 1397 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [155].Bronson J. E., Fei J., Hofman J. M., Gonzalez R. L., and Wiggins C. H., Learning rates and states from biophysical time series: a Bayesian approach to model selection and single-molecule FRET data, Biophysical journal 97, 3196 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [156].Pirchi M., Tsukanov R., Khamis R., Tomov T. E., Berger Y., Khara D. C., Volkov H., Haran G., and Nir E., Photon-by-photon hidden Markov model analysis for microsecond single-molecule FRET kinetics, The Journal of Physical Chemistry B 120, 13065 (2016). [DOI] [PubMed] [Google Scholar]
  • [157].Kilic Z., Sgouralis I., Heo W., Ishii K., Tahara T., and Pressé S., Extraction of rapid kinetics from smFRET measurements using integrative detectors, Cell Reports Physical Science 2, 100409 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [158].Hobolth A. and Stone E. A., Simulation from endpoint-conditioned, continuous-time Markov chains on a finite state space, with applications to molecular evolution, The annals of applied statistics 3, 1204 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [159].Ulbrich M. H. and Isacoff E. Y., Subunit counting in membrane-bound proteins, Nature methods 4, 319 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [160].Rollins G. C., Shin J. Y., Bustamante C., and Pressé S., Stochastic approach to the molecular counting problem in superresolution microscopy, Proceedings of the National Academy of Sciences 112, E110 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [161].Tsekouras K., Custer T. C., Jashnsaz H., Walter N. G., and Pressé S., A novel method to accurately locate and count large numbers of steps by photobleaching, Molecular biology of the cell 27, 3601 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [162].Lee A., Tsekouras K., Calderon C., Bustamante C., and Pressé S., Unraveling the thousand word picture: an introduction to super-resolution data analysis, Chemical reviews 117, 7276 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [163].Scott S. L., Bayesian methods for hidden Markov models: Recursive computing in the 21st century, Journal of the American statistical Association 97, 337 (2002). [Google Scholar]
  • [164].Mansuripur M., Abbe’s sine condition, Optics and Photonics News 9, 56 (1998). [Google Scholar]
  • [165].Mansuripur M., Classical optics and its applications (Cambridge University Press, 2002). [Google Scholar]
  • [166].Hopkins H., Herschel’s condition, Proceedings of the Physical Society 58, 100 (1946). [Google Scholar]
  • [167].Born M. and Wolf E., Principles of optics: electromagnetic theory of propagation, interference and diffraction of light (Elsevier, 2013). [Google Scholar]
  • [168].Braat J. J., Abbe sine condition and related imaging conditions in geometrical optics, in Fifth International Topical Meeting on Education and Training in Optics, Vol. 3190 (SPIE, 1997) pp. 59–64. [Google Scholar]
  • [169].Gross H., Handbook of Optical Systems (Wiley Online Library, 2005). [Google Scholar]
  • [170].Steward G., On Herschel’s condition and the optical cosine law, in Mathematical Proceedings of the Cambridge Philosophical Society, Vol. 23 (Cambridge University Press, 1927) pp. 703–712. [Google Scholar]
  • [171].Botcherby E. J., Juškaitis R., Booth M. J., and Wilson T., An optical technique for remote focusing in microscopy, Optics Communications 281, 880 (2008). [Google Scholar]
  • [172].Jackson J. D., Classical electrodynamics, 3rd ed. (Wiley, 1999). [Google Scholar]
  • [173].Moskovits M. and DiLella D., Intense quadrupole transitions in the spectra of molecules near metal surfaces, The Journal of Chemical Physics 77, 1655 (1982). [Google Scholar]
  • [174].Binnemans K., Interpretation of europium (III) spectra, Coordination Chemistry Reviews 295, 1 (2015). [Google Scholar]
  • [175].Sommerfeld A., Partial Differential Equations in Physics (Academic Press, New York, 1949). [Google Scholar]
  • [176].Weyl H., Ausbreitung elektromagnetischer wellen über einem ebenen leiter, Annalen der Physik 365, 481 (1919). [Google Scholar]
  • [177].Baños A., Dipole radiation in presence of conducting half space (Pergamon Press, Oxford, 1966). [Google Scholar]
  • [178].Mertz J., Introduction to optical microscopy (Cambridge University Press, 2019). [Google Scholar]
  • [179].Richards B. and Wolf E., Electromagnetic diffraction in optical systems, II. structure of the image field in an aplanatic system, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 253, 358 (1959). [Google Scholar]
  • [180].Enderlein J., Toprak E., and Selvin P. R., Polarization effect on position accuracy of fluorophore localization, Optics express 14, 8111 (2006). [DOI] [PubMed] [Google Scholar]
  • [181].Backlund M. P., Lew M. D., Backer A. S., Sahl S. J., and Moerner W., The role of molecular dipole orientation in single-molecule fluorescence microscopy and implications for super-resolution imaging, ChemPhysChem 15, 587 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [182].Deschout H., Zanacchi F. C., Mlodzianoski M., Diaspro A., Bewersdorf J., Hess S. T., and Braeckmans K., Precisely and accurately localizing single emitters in fluorescence microscopy, Nature methods 11, 253 (2014). [DOI] [PubMed] [Google Scholar]
  • [183].Fazel M. and Wester M. J., Analysis of super-resolution single molecule localization microscopy data: A tutorial, AIP Advances 12, 010701 (2022). [Google Scholar]
  • [184].Backer A. S., Biebricher A. S., King G. A., Wuite G. J., Heller I., and Peterman E. J., Single-molecule polarization microscopy of DNA intercalators sheds light on the structure of S-DNA, Science advances 5, eaav1083 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [185].Wu T., Lu J., and Lew M. D., Dipole-spread-function engineering for simultaneously measuring the 3D orientations and 3D positions of fluorescent molecules, Optica 9, 505 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [186].Rimoli C. V., Valades-Cruz C. A., Curcio V., Mavrakis M., and Brasselet S., 4polar-STORM polarized super-resolution imaging of actin filament organization in cells, Nature communications 13, 1 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [187].Olver F. W., Lozier D. W., Boisvert R. F., and Clark C. W., NIST handbook of mathematical functions (Cambridge University Press, 2010). [Google Scholar]
  • [188].Stallinga S. and Rieger B., Accuracy of the Gaussian point spread function model in 2D localization microscopy, Optics Express 18, 24461 (2010). [DOI] [PubMed] [Google Scholar]
  • [189].Santos A. and Young I. T., Model-based resolution: applying the theory in quantitative microscopy, Applied Optics 39, 2948 (2000). [DOI] [PubMed] [Google Scholar]
  • [190].Ferdman B., Nehme E., Weiss L. E., Orange R., Alalouf O., and Shechtman Y., VIPR: vectorial implementation of phase retrieval for fast and accurate microscopic pixel-wise pupil estimation, Optics express 28, 10179 (2020). [DOI] [PubMed] [Google Scholar]
  • [191].Siemons M., Hulleman C., Thorsen R., Smith C., and Stallinga S., High precision wavefront control in point spread function engineering for single emitter localization, Optics express 26, 8397 (2018). [DOI] [PubMed] [Google Scholar]
  • [192].Backer A. S. and Moerner W., Extending single-molecule microscopy using optical Fourier processing, The Journal of Physical Chemistry B 118, 8313 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [193].Oppenheim A. V. and Lim J. S., The importance of phase in signals, Proceedings of the IEEE 69, 529 (1981). [Google Scholar]
  • [194].Noll R. J., Zernike polynomials and atmospheric turbulence, JOSA 66, 207 (1976). [Google Scholar]
  • [195].Roddier F., Adaptive optics in astronomy (Cambridge University Press, 1999). [Google Scholar]
  • [196].Moser S., Ritsch-Marte M., and Thalhammer G., Model-based compensation of pixel crosstalk in liquid crystal spatial light modulators, Optics express 27, 25046 (2019). [DOI] [PubMed] [Google Scholar]
  • [197].Nehme E., Ferdman B., Weiss L. E., Naor T., Freedman S., Michaeli T., and Shechtman Y., Learning optimal wavefront shaping for multi-channel imaging, IEEE transactions on pattern analysis and machine intelligence 43, 2179 (2021). [DOI] [PubMed] [Google Scholar]
  • [198].Abrahamsson S., Chen J., Hajj B., Stallinga S., Katsov A. Y., Wisniewski J., Mizuguchi G., Soule P., Mueller F., Darzacq C. D., et al., Fast multicolor 3D imaging using aberration-corrected multifocus microscopy, Nature methods 10, 60 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [199].Booth M. J., Neil M. A., Juškaitis R., and Wilson T., Adaptive aberration correction in a confocal microscope, Proceedings of the National Academy of Sciences 99, 5788 (2002). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [200].Ji N., Milkie D. E., and Betzig E., Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues, Nature methods 7, 141 (2010). [DOI] [PubMed] [Google Scholar]
  • [201].Tao X., Azucena O., Fu M., Zuo Y., Chen D. C., and Kubby J., Adaptive optics microscopy with direct wave-front sensing using fluorescent protein guide stars, Optics letters 36, 3389 (2011). [DOI] [PubMed] [Google Scholar]
  • [202].Gould T. J., Burke D., Bewersdorf J., and Booth M. J., Adaptive optics enables 3 d sted microscopy in aberrating specimens, Optics express 20, 20998 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [203].Ji N., Adaptive optical fluorescence microscopy, Nature methods 14, 374 (2017). [DOI] [PubMed] [Google Scholar]
  • [204].Liu T.-L., Upadhyayula S., Milkie D. E., Singh V., Wang K., Swinburne I. A., Mosaliganti K. R., Collins Z. M., Hiscock T. W., Shea J., et al., Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms, Science 360, eaaq1392 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [205].Shajkofci A. and Liebling M., Spatially-variant CNN-based point spread function estimation for blind deconvolution and depth estimation in optical microscopy, IEEE Transactions on Image Processing 29, 5848 (2020). [DOI] [PubMed] [Google Scholar]
  • [206].Shack R. V. and Thompson K., Influence of alignment errors of a telescope system on its aberration field, in Optical Alignment I, Vol. 251 (International Society for Optics and Photonics, 1980) pp. 146–153. [Google Scholar]
  • [207].Novotny L. and Hecht B., Principles of nano-optics (Cambridge university press, 2012). [Google Scholar]
  • [208].Stock K., Sailer R., Strauss W. S., Lyttek M., Steiner R., and Schneckenburger H., Variable-angle total internal reflection fluorescence microscopy (VA-TIRFM): realization and application of a compact illumination device, Journal of microscopy 211, 19 (2003). [DOI] [PubMed] [Google Scholar]
  • [209].Saffarian S. and Kirchhausen T., Differential evanescence nanometry: live-cell fluorescence measurements with 10-nm axial resolution on the plasma membrane, Biophysical journal 94, 2333 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [210].El Arawi D., Vézy C., Déturche R., Lehmann M., Kessler H., Dontenwill M., and Jaffiol R., Advanced quantification for single-cell adhesion by variable-angle TIRF nanoscopy, Biophysical Reports 1, 100021 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [211].Winterflood C. M., Ruckstuhl T., Verdes D., and Seeger S., Nanometer axial resolution by three-dimensional supercritical angle fluorescence microscopy, Phys. Rev. Lett. 105, 108103 (2010). [DOI] [PubMed] [Google Scholar]
  • [212].Deschamps J., Mund M., and Ries J., 3D superresolution microscopy by supercritical angle detection, Opt. Express 22, 29081 (2014). [DOI] [PubMed] [Google Scholar]
  • [213].Oheim M., Salomon A., and Brunstein M., Supercritical angle fluorescence microscopy and spectroscopy, Biophysical Journal 118, 2339 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [214].Dasgupta A., Deschamps J., Matti U., Hübner U., Becker J., Strauss S., Jungmann R., Heintzmann R., and Ries J., Direct supercritical angle localization microscopy for nanometer 3D superresolution, Nature communications 12, 1 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [215].Heavens O. S., Optical properties of thin solid films (Dover, New York, 1965) p. 69. [Google Scholar]
  • [216].Yeh P., Optical Waves in Layered Media (Wiley, New York, 1988) p. 102. [Google Scholar]
  • [217].Knittel Z., Optics of Thin Films (Wiley, London, 1976) p. 41. [Google Scholar]
  • [218].Chance R. R., Prock A., and Silbey R., Molecular fluorescence and energy transfer near interfaces, in Advances in Chemical Physics (Wiley-Blackwell, 2007) pp. 1–65. [Google Scholar]
  • [219].Moerland R. J. and Hoogenboom J. P., Subnanometeraccuracy optical distance ruler based on fluorescence quenching by transparent conductors, Optica 3, 112 (2016). [Google Scholar]
  • [220].Ghosh A., Sharma A., Chizhik A. I., Isbaner S., Ruhlandt D., Tsukanov R., Gregor I., Karedla N., and Enderlein J., Graphene-based metal-induced energy transfer for sub-nanometre optical localization, Nature Photonics 13, 860 (2019). [Google Scholar]
  • [221].Pawley J., Handbook of biological confocal microscopy, Vol. 236 (Springer Science & Business Media, 2006). [Google Scholar]
  • [222].Sahl S. J. and Hell S. W., High-resolution 3D light microscopy with STED and RESOLFT, High resolution imaging in microscopy and ophthalmology, 3 (2019). [PubMed] [Google Scholar]
  • [223].Zhang B., Zerubia J., and Olivo-Marin J.-C., Gaussian approximations of fluorescence microscope point-spread function models, Applied optics 46, 1819 (2007). [DOI] [PubMed] [Google Scholar]
  • [224].Sheppard C. J., Gan X., Gu M., and Roy M., Signalto-noise ratio in confocal microscopes, in Handbook of biological confocal microscopy (Springer, 2006) pp. 442–452. [Google Scholar]
  • [225].York A. G., Parekh S. H., Nogare D. D., Fischer R. S., Temprine K., Mione M., Chitnis A. B., Combs C. A., and Shroff H., Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy, Nature Methods 9, 749 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [226].De Luca G. M., Breedijk R. M., Brandt R. A., Zeelenberg C. H., de Jong B. E., Timmermans W., Azar L. N., Hoebe R. A., Stallinga S., and Manders E. M., Re-scan confocal microscopy: scanning twice for better resolution, Biomedical Optics Express 4, 2644 (2013) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [227].York A. G., Chandris P., Nogare D. D., Head J., Wawrzusin P., Fischer R. S., Chitnis A., and Shroff H., Instant super-resolution imaging in live cells and embryos via analog image processing, Nature Methods 10, 1122 (2013) [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [228].Roth S., Sheppard C. J., Wicker K., and Heintzmann R., Optical photon reassignment microscopy (OPRA), Optical Nanoscopy 2, 1 (2013). [Google Scholar]
  • [229].Azuma T. and Kei T., Super-resolution spinning-disk confocal microscopy using optical photon reassignment, Optics Express 23, 15003 (2015). [DOI] [PubMed] [Google Scholar]
  • [230].Gregor I., Spiecker M., Petrovsky R., Großhans J., Ros R., and Enderlein J., Rapid nonlinear image scanning microscopy, Nature methods 14, 1087 (2017). [DOI] [PubMed] [Google Scholar]
  • [231].Roth S., Sheppard C. J., and Heintzmann R., Superconcentration of light: circumventing the classical limit to achievable irradiance, Optics Letters 41, 2109 (2016). [DOI] [PubMed] [Google Scholar]
  • [232].Gregor I. and Enderlein J., Image scanning microscopy, Current opinion in chemical biology 51, 74 (2019). [DOI] [PubMed] [Google Scholar]
  • [233].Lang M., Müller T., Engelhardt J., and Hell S. W., 4pi microscopy of type A with 1-photon excitation in biological fluorescence imaging, Optics express 15, 2459 (2007). [DOI] [PubMed] [Google Scholar]
  • [234].Bewersdorf J., Schmidt R., and Hell S., Comparison of I5M and 4pi-microscopy, Journal of microscopy 222, 105 (2006). [DOI] [PubMed] [Google Scholar]
  • [235].Hao X., Li Y., Fu S., Li Y., Xu Y., Kuang C., and Liu X., Review of 4pi fluorescence nanoscopy, Engineering 11, 146 (2022). [Google Scholar]
  • [236].Bewersdorf J., Egner A., and Hell S. W., 4pi microscopy, in Handbook of biological confocal microscopy (Springer, 2006) pp. 561–570. [Google Scholar]
  • [237].Liu S. and Huang F., Enhanced 4pi single-molecule localization microscopy with coherent pupil based localization, Communications biology 3, 1 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [238].Denk W., Principles of multiphoton-excitation fluorescence microscopy, Cold Spring Harbor Protocols 2007, pdb (2007). [DOI] [PubMed] [Google Scholar]
  • [239].Denk W. and Svoboda K., Photon upmanship: Why multiphoton imaging is more than a gimmick, Neuron 18, 351 (1997). [DOI] [PubMed] [Google Scholar]
  • [240].Williams R. M., Zipfel W. R., and Webb W. W., Multi-photon microscopy in biological research, Current opinion in chemical biology 5, 603 (2001). [DOI] [PubMed] [Google Scholar]
  • [241].Tauer U., Advantages and risks of multiphoton microscopy in physiology, Experimental physiology 87, 709 (2002). [DOI] [PubMed] [Google Scholar]
  • [242].Sprague B. L., Pego R. L., Stavreva D. A., and McNally J. G., Analysis of binding reactions by fluorescence recovery after photobleaching, Biophysical journal 86, 3473 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [243].Digman M. A. and Gratton E., Lessons in fluctuation correlation spectroscopy, Annual review of physical chemistry 62, 645 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [244].Wunderlich B., Nettels D., Benke S., Clark J., Weidner S., Hofmann H., Pfeil S. H., and Schuler B., Microfluidic mixer designed for performing single-molecule kinetics with confocal detection on timescales from milliseconds to minutes, Nature protocols 8, 1459 (2013). [DOI] [PubMed] [Google Scholar]
  • [245].Jazani S., Sgouralis I., and Pressé S., A method for single molecule tracking using a conventional single-focus confocal setup, The Journal of chemical physics 150, 114108 (2019). [DOI] [PubMed] [Google Scholar]
  • [246].Jazani S., Sgouralis I., Shafraz O. M., Levitus M., Sivasankar S., and Pressé S., An alternative framework for fluorescence correlation spectroscopy, Nature communications 10, 1 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [247].Berland K. M., So P., and Gratton E., Two-photon fluorescence correlation spectroscopy: method and application to the intracellular environment, Biophysical journal 68, 694 (1995). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [248].Rossow M. J., Sasaki J. M., Digman M. A., and Gratton E., Raster image correlation spectroscopy in live cells, Nature protocols 5, 1761 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [249].Kristoffersen A. S., Erga S. R., Hamre B., and Frette Ø., Testing fluorescence lifetime standards using two-photon excitation and time-domain instrumentation: rhodamine B, coumarin 6 and lucifer yellow, Journal of fluorescence 24, 1015 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [250].Thiele J. C., Helmerich D. A., Oleksiievets N., Tsukanov R., Butkevich E., Sauer M., Nevskyi O., and Enderlein J., Confocal fluorescence-lifetime single-molecule localization microscopy, ACS nano 14, 14190 (2020). [DOI] [PubMed] [Google Scholar]
  • [251].Karpf S., Riche C. T., Di Carlo D., Goel A., Zeiger W. A., Suresh A., Portera-Cailliau C., and Jalali B., Spectro-temporal encoded multiphoton microscopy and fluorescence lifetime imaging at kilohertz frame-rates, Nature communications 11, 1 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [252].Nettels D., Hoffmann A., and Schuler B., Unfolded protein and peptide dynamics investigated with single-molecule FRET and correlation spectroscopy from picoseconds to seconds, The Journal of Physical Chemistry B 112, 6137 (2008). [DOI] [PubMed] [Google Scholar]
  • [253].Gregor I., Patra D., and Enderlein J., Optical saturation in fluorescence correlation spectroscopy under continuous-wave and pulsed excitation, ChemPhysChem 6, 164 (2005). [DOI] [PubMed] [Google Scholar]
  • [254].Lorén N., Hagman J., Jonasson J. K., Deschout H., Bernin D., Cella-Zanacchi F., Diaspro A., McNally J. G., Ameloot M., Smisdom N., et al. , Fluorescence recovery after photobleaching in material and life sciences: putting theory into practice, Quarterly reviews of biophysics 48, 323 (2015). [DOI] [PubMed] [Google Scholar]
  • [255].Moud A. A., Fluorescence recovery after photobleaching in colloidal science: introduction and application, ACS Biomaterials Science & Engineering 8, 1028 (2022). [DOI] [PubMed] [Google Scholar]
  • [256].Suhling K., Hirvonen L. M., Levitt J. A., Chung P.-H., Tregidgo C., Le Marois A., Rusakov D. A., Zheng K., Ameer-Beg S., Poland S., et al. , Fluorescence lifetime imaging (FLIM): Basic concepts and some recent developments, Medical Photonics 27, 3 (2015). [Google Scholar]
  • [257].Elson E. L. and Magde D., Fluorescence correlation spectroscopy. I. conceptual basis and theory, Biopolymers: Original Research on Biomolecules 13, 1 (1974). [DOI] [PubMed] [Google Scholar]
  • [258].Magde D., Elson E. L., and Webb W. W., Fluorescence correlation spectroscopy. II. an experimental realization, Biopolymers: Original Research on Biomolecules 13, 29 (1974). [DOI] [PubMed] [Google Scholar]
  • [259].Bright G. R., Fisher G. W., Rogowska J., and Taylor D. L., Fluorescence ratio imaging microscopy, Methods in cell biology 30, 157 (1989). [DOI] [PubMed] [Google Scholar]
  • [260].Lakowicz J. R., Principles of fluorescence spectroscopy (Springer, 2006). [Google Scholar]
  • [261].Jazani S., Xu L. W. Q., Sgouralis I., Shepherd D. P., and Pressé S., Computational proposal for tracking multiple molecules in a multifocus confocal setup, ACS Photonics 9, 2489 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [262].Tavakoli M., Jazani S., Sgouralis I., Shafraz O. M., Sivasankar S., Donaphon B., Levitus M., and Pressé S., Pitching single-focus confocal data analysis one photon at a time with Bayesian nonparametrics, Physical Review X 10, 011021 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [263].Hjort N. L., Holmes C., Müller P., and Walker S. G., Bayesian nonparametrics, Vol. 28 (Cambridge University Press, 2010). [Google Scholar]
  • [264].Gershman S. J. and Blei D. M., A tutorial on Bayesian nonparametric models, Journal of Mathematical Psychology 56, 1 (2012). [Google Scholar]
  • [265].Lessard G. A., Goodwin P. M., and Werner J. H., Three-dimensional tracking of individual quantum dots, Applied Physics Letters 91, 224106 (2007). [Google Scholar]
  • [266].Wells N. P., Lessard G. A., Goodwin P. M., Phipps M. E., Cutler P. J., Lidke D. S., Wilson B. S., and Werner J. H., Time-resolved three-dimensional molecular tracking in live cells, Nano letters 10, 4732 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [267].Bacia K. and Schwille P., Practical guidelines for dual-color fluorescence cross-correlation spectroscopy, Nature protocols 2, 2842 (2007). [DOI] [PubMed] [Google Scholar]
  • [268].Torres T. and Levitus M., Measuring conformational dynamics: a new FCS-FRET approach, The Journal of Physical Chemistry B 111, 7392 (2007). [DOI] [PubMed] [Google Scholar]
  • [269].Schuler B., Perspective: Chain dynamics of unfolded and intrinsically disordered proteins from nanosecond fluorescence correlation spectroscopy combined with single-molecule FRET, The Journal of Chemical Physics 149, 010901 (2018). [DOI] [PubMed] [Google Scholar]
  • [270].Zosel F., Mercadante D., Nettels D., and Schuler B., A proline switch explains kinetic heterogeneity in a coupled folding and binding reaction, Nature communications 9, 1 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [271].Xu L. W. Q., Jazani S., Kilic Z., and Pressé S., Single-molecule Reaction-Diffusion, bioRxiv, 556378 (2023). [Google Scholar]
  • [272].Enderlein J., Gregor I., Patra D., and Fitter J., Art and artefacts of fluorescence correlation spectroscopy, Current pharmaceutical biotechnology 5, 155 (2004). [DOI] [PubMed] [Google Scholar]
  • [273].Sarkar A., Gallagher J., Wang I., Cappello G., Enderlein J., Delon A., and Derouard J., Confocal fluorescence correlation spectroscopy through a sparse layer of scattering objects, Optics Express 27, 19382 (2019). [DOI] [PubMed] [Google Scholar]
  • [274].Fazel M., Vallmitjana A., Scipioni L., Gratton E., Digman M. A., and Pressé S., Fluorescence lifetime: Beating the IRF and interpulse window, Biophysical Journal 10.1016/j.bpj.2023.01.014 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [275].Rowley M. I., Coolen A. C., Vojnovic B., and Barber P. R., Robust Bayesian fluorescence lifetime estimation, decay model selection and instrument response determination for low-intensity FLIM imaging, PLoS One 11, e0158404 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [276].Kaye B., Foster P. J., Yoo T. Y., and Needleman D. J., Developing and testing a Bayesian analysis of fluorescence lifetime measurements, PLoS One 12, e0169337 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [277].Fazel M., Jazani S., Scipioni L., Vallmitjana A., Zhu S., Gratton E., Digman M., and Pressé S., Building fluorescence lifetime maps photon-by-photon by leveraging spatial correlations, bioRxiv, 2022 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [278].Wu Y. and Shroff H., Faster, sharper, and deeper: Structured illumination microscopy for biological imaging, Nature Methods 15, 1011 (2018). [DOI] [PubMed] [Google Scholar]
  • [279].Ströhl F. and Kaminski C. F., Frontiers in structured illumination microscopy, Optica 3, 667 (2016). [Google Scholar]
  • [280].Heintzmann R. and Cremer C. G., Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating, in Proc. SPIE, Vol. 3568 (1999). [Google Scholar]
  • [281].Gustafsson M. G., Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy, Journal of Microscopy 198, 82 (2000). [DOI] [PubMed] [Google Scholar]
  • [282].Neil M. A. A., Juškaitis R., and Wilson T., Method of obtaining optical sectioning by using structured light in a conventional microscope, Optics Letters 22, 1905 (1997). [DOI] [PubMed] [Google Scholar]
  • [283].Heintzmann R., Saturated patterned excitation microscopy with two-dimensional excitation patterns, Micron 34, 283 (2003). [DOI] [PubMed] [Google Scholar]
  • [284].Planchon T. A., Gao L., Milkie D. E., Davidson M. W., Galbraith J. A., Galbraith C. G., and Betzig E., Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination, Nature methods 8, 417 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [285].Mudry E., Belkebir K., Girard J., Savatier J., Le Moal E., Nicoletti C., Allain M., and Sentenac A., Structured illumination microscopy using unknown speckle patterns, Nature Photonics 6, 312 (2012). [Google Scholar]
  • [286].Heintzmann R. and Huser T., Super-resolution structured illumination microscopy, Chemical Reviews 117, 13890 (2017). [DOI] [PubMed] [Google Scholar]
  • [287].Ma Y., Wen K., Liu M., Zheng J., Chu K., Smith Z. J., Liu L., and Gao P., Recent advances in structured illumination microscopy, JPhys Photonics 3, 10.1088/2515-7647/abdb04 (2021). [DOI] [Google Scholar]
  • [288].York A. G., Parekh S. H., Nogare D. D., Fischer R. S., Temprine K., Mione M., Chitnis A. B., Combs C. A., and Shroff H., Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy, Nature methods 9, 749 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [289].York A. G., Chandris P., Nogare D. D., Head J., Wawrzusin P., Fischer R. S., Chitnis A., and Shroff H., Instant super-resolution imaging in live cells and embryos via analog image processing, Nature methods 10, 1122 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [290].Gregor I. and Enderlein J., Image scanning microscopy, Current Opinion in Chemical Biology 51, 74 (2019). [DOI] [PubMed] [Google Scholar]
  • [291].Müller M., Mönkemöller V., Hennig S., Hübner W., and Huser T., Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ, Nature communications 7, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [292].Lal A., Shan C., and Xi P., Structured illumination microscopy image reconstruction algorithm, IEEE Journal of Selected Topics in Quantum Electronics 22, 50 (2016). [Google Scholar]
  • [293].Lukeš T., Křížek P., Švindrych Z., Benda J., Ovesný M., Fliegel K., Klíma M., and Hagen G. M., Three-dimensional super-resolution structured illumination microscopy with maximum a posteriori probability image estimation, Optics express 22, 29805 (2014). [DOI] [PubMed] [Google Scholar]
  • [294].Perez V., Chang B.-J., and Stelzer E. H. K., Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution, Scientific reports 6, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [295].Huang X., Fan J., Li L., Liu H., Wu R., Wu Y., Wei L., Mao H., Lal A., Xi P., et al. , Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy, Nature biotechnology 36, 451 (2018). [DOI] [PubMed] [Google Scholar]
  • [296].Lai-Tim Y., Mugnier L. M., Orieux F., Baena-Gallé R., Paques M., and Meimon S., Jointly super-resolved and optically sectioned Bayesian reconstruction method for structured illumination microscopy, Optics Express 27, 33251 (2019). [DOI] [PubMed] [Google Scholar]
  • [297].Jin L., Liu B., Zhao F., Hahn S., Dong B., Song R., Elston T. C., Xu Y., and Hahn K. M., Deep learning enables structured illumination microscopy with low light levels and enhanced speed, Nature communications 11, 1 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [298].Christensen C. N., Ward E. N., Lu M., Lio P., and Kaminski C. F., ML-SIM: universal reconstruction of structured illumination microscopy images using transfer learning, Biomedical optics express 12, 2720 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [299].Smith C. S., Slotman J. A., Schermelleh L., Chakrova N., Hari S., Vos Y., Hagen C. W., Müller M., van Cappellen W., Houtsmuller A. B., et al. , Structured illumination microscopy with noise-controlled image reconstructions, Nature methods 18, 821 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [300].Shah Z. H., Müller M., Wang T.-C., Scheidig P. M., Schneider A., Schüttpelz M., Huser T., and Schenck W., Deep-learning based denoising and reconstruction of super-resolution structured illumination microscopy images, Photonics Research 9, B168 (2021). [Google Scholar]
  • [301].Cai M., Zhu H., Sun Y., Yin L., Xu F., Wu H., Hao X., Zhou R., Kuang C., and Liu X., Total variation and spatial iteration-based 3D structured illumination microscopy, Optics Express 30, 7938 (2022). [DOI] [PubMed] [Google Scholar]
  • [302].Qiao C., Li D., Liu Y., Zhang S., Liu K., Liu C., Guo Y., Jiang T., Fang C., Li N., et al. , Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes, Nature biotechnology, 1 (2022). [DOI] [PubMed] [Google Scholar]
  • [303].Demmerle J., Innocent C., North A. J., Ball G., Müller M., Miron E., Matsuda A., Dobbie I. M., Markaki Y., and Schermelleh L., Strategic and practical guidelines for successful structured illumination microscopy, Nature Protocols 12, 988 (2017). [DOI] [PubMed] [Google Scholar]
  • [304].Chung E., Kim D., and So P. T., Extended resolution wide-field optical imaging: Objective-launched standing-wave total internal reflection fluorescence microscopy, Optics Letters 31, 945 (2006). [DOI] [PubMed] [Google Scholar]
  • [305].Guo Y., Li D., Zhang S., Yang Y., Liu J. J., Wang X., Liu C., Milkie D. E., Moore R. P., Tulu U. S., Kiehart D. P., Hu J., Lippincott-Schwartz J., Betzig E., and Li D., Visualizing intracellular organelle and cytoskeletal interactions at nanoscale resolution on millisecond timescales, Cell 175, 1430 (2018). [DOI] [PubMed] [Google Scholar]
  • [306].Chen B. C., Legant W. R., Wang K., Shao L., Milkie D. E., Davidson M. W., Janetopoulos C., Wu X. S., Hammer J. A., Liu Z., English B. P., Mimori-Kiyosue Y., Romero D. P., Ritter A. T., Lippincott-Schwartz J., Fritz-Laylin L., Mullins R. D., Mitchell D. M., Bembenek J. N., Reymann A. C., Böhme R., Grill S. W., Wang J. T., Seydoux G., Tulu U. S., Kiehart D. P., and Betzig E., Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution, Science 346, 10.1126/science.1257998 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [307].Chang B. J., Meza V. D. P., and Stelzer E. H., csiLSFM combines light-sheet fluorescence microscopy and coherent structured illumination for a lateral resolution below 100 nm, Proceedings of the National Academy of Sciences of the United States of America 114, 4869 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [308].Chen B., Chang B.-J., Roudot P., Zhou F., Sapoznik E., Marlar-Pavey M., Hayes J. B., Brown P. T., Zeng C.-W., Lambert T., et al. , Resolution doubling in light-sheet microscopy via oblique plane structured illumination, Nature Methods 19, 1419 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [309].Gustafsson M. G., Shao L., Carlton P. M., Wang C. J., Golubovskaya I. N., Cande W. Z., Agard D. A., and Sedat J. W., Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination, Biophysical Journal 94, 4957 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [310].Shao L., Kner P., Rego E. H., and Gustafsson M. G., Super-resolution 3D microscopy of live whole cells using structured illumination, Nature Methods 8, 1044 (2011). [DOI] [PubMed] [Google Scholar]
  • [311].Heintzmann R., Jovin T. M., and Cremer C., Saturated patterned excitation microscopy - a concept for optical resolution improvement, Journal of the Optical Society of America A 19, 1599 (2002). [DOI] [PubMed] [Google Scholar]
  • [312].Gustafsson M. G., Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution, Proceedings of the National Academy of Sciences of the United States of America 102, 13081 (2005). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [313].Rego E. H., Shao L., Macklin J. J., Winoto L., Johansson G. A., Kamps-Hughes N., Davidson M. W., and Gustafsson M. G., Nonlinear structured-illumination microscopy with a photoswitchable protein reveals cellular structures at 50-nm resolution, Proceedings of the National Academy of Sciences of the United States of America 109, 10.1073/pnas.1107547108 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [314].Li D., Shao L., Chen B. C., Zhang X., Zhang M., Moses B., Milkie D. E., Beach J. R., Hammer J. A., Pasham M., Kirchhausen T., Baird M. A., Davidson M. W., Xu P., and Betzig E., Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics, Science 349, 10.1126/science.aab3500 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [315].Power R. M. and Huisken J., A guide to light-sheet fluorescence microscopy for multiscale imaging, Nature methods 14, 360 (2017). [DOI] [PubMed] [Google Scholar]
  • [316].Olarte O. E., Andilla J., Gualda E. J., and Loza-Alvarez P., Light-sheet microscopy: a tutorial, Advances in Optics and Photonics 10, 111 (2018). [Google Scholar]
  • [317].Stelzer E. H., Strobl F., Chang B.-J., Preusser F., Preibisch S., McDole K., and Fiolka R., Light sheet fluorescence microscopy, Nature Reviews Methods Primers 1, 1 (2021). [Google Scholar]
  • [318].Chakraborty T., Driscoll M. K., Jeffery E., Murphy M. M., Roudot P., Chang B.-J., Vora S., Wong W. M., Nielson C. D., Zhang H., et al. , Light-sheet microscopy of cleared tissues with isotropic, subcellular resolution, Nature Methods 16, 1109 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [319].Stelzer E. H., Light-sheet fluorescence microscopy for quantitative biology, Nature methods 12, 23 (2015). [DOI] [PubMed] [Google Scholar]
  • [320].Keller P. J. and Stelzer E. H., Quantitative in vivo imaging of entire embryos with digital scanned laser light sheet fluorescence microscopy, Current opinion in neurobiology 18, 624 (2008). [DOI] [PubMed] [Google Scholar]
  • [321].Toader B., Boulanger J., Korolev Y., Lenz M. O., Manton J., Schönlieb C.-B., and Mureşan L., Image reconstruction in light-sheet microscopy: spatially varying deconvolution and mixed noise, Journal of mathematical imaging and vision 64, 968 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [322].Fahrbach F. O., Simon P., and Rohrbach A., Microscopy with self-reconstructing beams, Nature photonics 4, 780 (2010). [Google Scholar]
  • [323].Chen B.-C., Legant W. R., Wang K., Shao L., Milkie D. E., Davidson M. W., Janetopoulos C., Wu X. S., Hammer III J. A., Liu Z., et al. , Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution, Science 346, 1257998 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [324].Vettenburg T., Dalgarno H. I., Nylk J., Coll-Lladó C., Ferrier D. E., Čižmár T., Gunn-Moore F. J., and Dholakia K., Light-sheet microscopy using an Airy beam, Nature methods 11, 541 (2014). [DOI] [PubMed] [Google Scholar]
  • [325].Yang Z., Prokopas M., Nylk J., Coll-Lladó C., Gunn-Moore F. J., Ferrier D. E., Vettenburg T., and Dholakia K., A compact Airy beam light sheet microscope with a tilted cylindrical lens, Biomedical optics express 5, 3434 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [326].Fahrbach F. O. and Rohrbach A., A line scanned light-sheet microscope with phase shaped self-reconstructing beams, Optics express 18, 24229 (2010). [DOI] [PubMed] [Google Scholar]
  • [327].Zhao T., Lau S. C., Wang Y., Su Y., Wang H., Cheng A., Herrup K., Ip N. Y., Du S., and Loy M., Multicolor 4D fluorescence microscopy using ultrathin Bessel light sheets, Scientific reports 6, 26159 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [328].Remacha E., Friedrich L., Vermot J., and Fahrbach F. O., How to define and optimize axial resolution in light-sheet microscopy: a simulation-based approach, Biomedical optics express 11, 8 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [329].Chang B.-J., Dean K. M., and Fiolka R., Systematic and quantitative comparison of lattice and Gaussian light-sheets, Optics express 28, 27052 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [330].Shi Y., Daugird T. A., and Legant W. R., A quantitative analysis of various patterns applied in lattice light sheet microscopy, Nature communications 13, 4607 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [331].Liu G., Ruan X., Milkie D. E., Görlitz F., Mueller M., Hercule W., Killilea A., Betzig E., and Upadhyayula S., Characterization, comparison, and optimization of lattice light sheets, Science Advances 9, eade6623 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [332].Palero J., Santos S. I., Artigas D., and Loza-Alvarez P., A simple scanless two-photon fluorescence microscope using selective plane illumination, Optics express 18, 8491 (2010). [DOI] [PubMed] [Google Scholar]
  • [333].Keller P. J., Schmidt A. D., Santella A., Khairy K., Bao Z., Wittbrodt J., and Stelzer E. H., Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy, Nature methods 7, 637 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [334].Friedrich M., Gan Q., Ermolayev V., and Harms G. S., STED-SPIM: stimulated emission depletion improves sheet illumination microscopy resolution, Biophysical journal 100, L43 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [335].Hoyer P., De Medeiros G., Balázs B., Norlin N., Besir C., Hanne J., Kräusslich H.-G., Engelhardt J., Sahl S. J., Hell S. W., et al. , Breaking the diffraction limit of light-sheet fluorescence microscopy by RESOLFT, Proceedings of the National Academy of Sciences 113, 3442 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [336].Gebhardt J. C. M., Suter D. M., Roy R., Zhao Z. W., Chapman A. R., Basu S., Maniatis T., and Xie X. S., Single-molecule imaging of transcription factor binding to DNA in live mammalian cells, Nature methods 10, 421 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [337].Galland R., Grenci G., Aravind A., Viasnoff V., Studer V., and Sibarita J.-B., 3D high-and super-resolution imaging using single-objective SPIM, Nature methods 12, 641 (2015). [DOI] [PubMed] [Google Scholar]
  • [338].Meddens M. B., Liu S., Finnegan P. S., Edwards T. L., James C. D., and Lidke K. A., Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution, Biomedical Optics Express 7, 2219 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [339].Swoger J., Verveer P., Greger K., Huisken J., and Stelzer E. H., Multi-view image fusion improves resolution in three-dimensional microscopy, Optics express 15, 8029 (2007). [DOI] [PubMed] [Google Scholar]
  • [340].Huisken J. and Stainier D. Y., Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM), Optics letters 32, 2608 (2007). [DOI] [PubMed] [Google Scholar]
  • [341].Preibisch S., Amat F., Stamataki E., Sarov M., Singer R. H., Myers E., and Tomancak P., Efficient Bayesian-based multiview deconvolution, Nature methods 11, 645 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [342].Guo M., Li Y., Su Y., Lambert T., Nogare D. D., Moyle M. W., Duncan L. H., Ikegami R., Santella A., Rey-Suarez I., et al. , Rapid image deconvolution and multiview fusion for optical microscopy, Nature biotechnology 38, 1337 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [343].Dean K. M., Roudot P., Welf E. S., Danuser G., and Fiolka R., Deconvolution-free subcellular imaging with axially swept light sheet microscopy, Biophysical journal 108, 2807 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [344].Dean K. M., Chakraborty T., Daetwyler S., Lin J., Garrelts G., M’Saad O., Mekbib H. T., Voigt F. F., Schaettin M., Stoeckli E. T., et al. , Isotropic imaging across spatial scales with axially swept light-sheet microscopy, Nature protocols 17, 2025 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [345].Wu Y., Ghitani A., Christensen R., Santella A., Du Z., Rondeau G., Bao Z., Colón-Ramos D., and Shroff H., Inverted selective plane illumination microscopy (iSPIM) enables coupled cell identity lineaging and neurodevelopmental imaging in Caenorhabditis elegans, Proceedings of the National Academy of Sciences 108, 17708 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [346].Dunsby C., Optically sectioned imaging by oblique plane microscopy, Optics express 16, 20306 (2008). [DOI] [PubMed] [Google Scholar]
  • [347].Sapoznik E., Chang B.-J., Huh J., Ju R. J., Azarova E. V., Pohlkamp T., Welf E. S., Broadbent D., Carisey A. F., Stehbens S. J., et al. , A versatile oblique plane microscope for large-scale and high-resolution imaging of subcellular dynamics, Elife 9, e57681 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [348].Yang B., Lange M., Millett-Sikking A., Zhao X., Bragantini J., VijayKumar S., Kamb M., Gómez-Sjöberg R., Solak A. C., Wang W., et al. , DaXi: high-resolution, large imaging volume and multi-view single-objective light-sheet microscopy, Nature methods 19, 461 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [349].Abrahamsson S., Chen J., Hajj B., Stallinga S., Katsov A. Y., Wisniewski J., Mizuguchi G., Soule P., Mueller F., Darzacq C. D., Darzacq X., Wu C., Bargmann C. I., Agard D. A., Dahan M., and Gustafsson M. G., Fast multicolor 3D imaging using aberration-corrected multifocus microscopy, Nature Methods 10, 60 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [350].Descloux A., Grußmayer K. S., Bostan E., Lukes T., Bouwens A., Sharipov A., Geissbuehler S., Mahul-Mellier A. L., Lashuel H. A., Leutenegger M., and Lasser T., Combined multi-plane phase retrieval and super-resolution optical fluctuation imaging for 4D cell microscopy, Nature Photonics 12, 165 (2018). [Google Scholar]
  • [351].Mertz J., Strategies for volumetric imaging with a fluorescence microscope, Optica 6, 1261 (2019). [Google Scholar]
  • [352].Engelhardt M. and Grußmayer K., Mapping volumes to planes: Camera-based strategies for snapshot volumetric microscopy, Frontiers in Physics 10, 10.3389/fphy.2022.1010053 (2022). [DOI] [Google Scholar]
  • [353].Ma Q., Khademhosseinieh B., Huang E., Qian H., Bakowski M. A., Troemel E. R., and Liu Z., Three-dimensional fluorescent microscopy via simultaneous illumination and detection at multiple planes, Scientific Reports 6, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [354].Ram S., Kim D., Ober R. J., and Ward E. S., 3D single molecule tracking with multifocal plane microscopy reveals rapid intercellular transferrin transport at epithelial cell barriers, Biophysical Journal 103, 1594 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [355].Louis B., Camacho R., Bresolí-Obach R., Abakumov S., Vandaele J., Kudo T., Masuhara H., Scheblykin I. G., Hofkens J., and Rocha S., Fast-tracking of single emitters in large volumes with nanometer precision, Optics Express 28, 28656 (2020). [DOI] [PubMed] [Google Scholar]
  • [356].Hajj B., Wisniewski J., Beheiry M. E., Chen J., Revyakin A., Wu C., and Dahan M., Whole-cell, multi-color superresolution imaging using volumetric multifocus microscopy, Proceedings of the National Academy of Sciences of the United States of America 111, 17480 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [357].Babcock H. P., Multiplane and spectrally-resolved single molecule localization microscopy with industrial grade CMOS cameras, Scientific Reports 8, 4 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [358].Geissbuehler S., Sharipov A., Godinat A., Bocchio N. L., Sandoz P. A., Huss A., Jensen N. A., Jakobs S., Enderlein J., Gisou Van Der Goot F., Dubikovskaya E. A., Lasser T., and Leutenegger M., Live-cell multiplane three-dimensional super-resolution optical fluctuation imaging, Nature Communications 5, 1 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [359].Abrahamsson S., Blom H., Agostinho A., Jans D. C., Jost A., Müller M., Nilsson L., Bernhem K., Lambert T. J., Heintzmann R., and Brismar H., Multifocus structured illumination microscopy for fast volumetric super-resolution imaging, Biomedical Optics Express 8, 4135 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [360].Descloux A., Müller M., Navikas V., Markwirth A., Van Den Eynde R., Lukes T., Hübner W., Lasser T., Radenovic A., Dedecker P., and Huser T., High-speed multiplane structured illumination microscopy of living cells using an image-splitting prism, Nanophotonics 9, 143 (2020). [Google Scholar]
  • [361].Xiao S., Gritton H., Tseng H.-A., Zemel D., Han X., and Mertz J., High-contrast multifocus microscopy with a single camera and z-splitter prism, Optica 7, 1477 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [362].Hansen J. N., Gong A., Wachten D., Pascal R., Turpin A., Jikeli J. F., Kaupp U. B., and Alvarez L., Multifocal imaging for precise, label-free tracking of fast biological processes in 3D, Nature Communications 12, 4574 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [363].Mojiri S., Isbaner S., Mühle S., Jang H., Bae A. J., Gregor I., Gholami A., and Enderlein J., Rapid multi-plane phase-contrast microscopy reveals torsional dynamics in flagellar motion, Biomedical Optics Express 12, 3169 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [364].Abrahamsson S., McQuilken M., Mehta S. B., Verma A., Larsch J., Ilic R., Heintzmann R., Bargmann C. I., Gladfelter A. S., and Oldenbourg R., Multifocus polarization microscope (MF-PolScope) for 3D polarization imaging of up to 25 focal planes simultaneously, Optics Express 23, 7734 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [365].Itano M. S., Bleck M., Johnson D. S., and Simon S. M., Readily accessible multiplane microscopy: 3D tracking the HIV-1 genome in living cells, Traffic 17, 179 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [366].Gregor I., Butkevich E., Enderlein J., and Mojiri S., Instant three-color multiplane fluorescence microscopy, Biophysical Reports 1, 100001 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [367].Abrahamsson S., Ilic R., Wisniewski J., Mehl B., Yu L., Chen L., Davanco M., Oudjedi L., Fiche J.-B., Hajj B., Jin X., Pulupa J., Cho C., Mir M., El Beheiry M., Darzacq X., Nollmann M., Dahan M., Wu C., Lionnet T., Liddle J. A., and Bargmann C. I., Multifocus microscopy with precise color multi-phase diffractive optics applied in functional neuronal imaging, Biomedical Optics Express 7, 855 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [368].Hajj B., Oudjedi L., Fiche J. B., Dahan M., and Nollmann M., Highly efficient multicolor multifocus microscopy by optimal design of diffraction binary gratings, Scientific Reports 7, 1 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [369].Klar T. A., Engel E., and Hell S. W., Breaking Abbe’s diffraction resolution limit in fluorescence microscopy with stimulated emission depletion beams of various shapes, Physical Review E 64, 066613 (2001). [DOI] [PubMed] [Google Scholar]
  • [370].Hofmann M., Eggeling C., Jakobs S., and Hell S. W., Breaking the diffraction barrier in fluorescence microscopy at low light intensities by using reversibly photoswitchable proteins, Proceedings of the National Academy of Sciences 102, 17565 (2005). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [371].Hell S. W., Far-field optical nanoscopy, Science 316, 1153 (2007). [DOI] [PubMed] [Google Scholar]
  • [372].Eggeling C., Willig K. I., Sahl S. J., and Hell S. W., Lens-based fluorescence nanoscopy, Quarterly reviews of biophysics 48, 178 (2015). [DOI] [PubMed] [Google Scholar]
  • [373].Schneider J., Zahn J., Maglione M., Sigrist S. J., Marquard J., Chojnacki J., Kräusslich H.-G., Sahl S. J., Engelhardt J., and Hell S. W., Ultrafast, temporally stochastic STED nanoscopy of millisecond dynamics, Nature methods 12, 827 (2015). [DOI] [PubMed] [Google Scholar]
  • [374].Curdt F., Herr S. J., Lutz T., Schmidt R., Engelhardt J., Sahl S. J., and Hell S. W., isoSTED nanoscopy with intrinsic beam alignment, Optics express 23, 30891 (2015). [DOI] [PubMed] [Google Scholar]
  • [375].Einstein A., Strahlungs-Emission und Absorption nach der Quantentheorie, Deutsche Physikalische Gesellschaft 18, 318 (1916). [Google Scholar]
  • [376].Pawley J., Handbook of biological confocal microscopy, Vol. 236 (Springer Science & Business Media, 2006). [Google Scholar]
  • [377].Sahl S. J., Hell S. W., and Jakobs S., Fluorescence nanoscopy in cell biology, Nature reviews Molecular cell biology 18, 685 (2017). [DOI] [PubMed] [Google Scholar]
  • [378].Wildanger D., Medda R., Kastrup L., and Hell S., A compact STED microscope providing 3D nanoscale resolution, Journal of microscopy 236, 35 (2009). [DOI] [PubMed] [Google Scholar]
  • [379].Arroyo-Camejo S., Adam M.-P., Besbes M., Hugonin J.-P., Jacques V., Greffet J.-J., Roch J.-F., Hell S. W., and Treussart F., Stimulated emission depletion microscopy resolves individual nitrogen vacancy centers in diamond nanocrystals, ACS nano 7, 10912 (2013). [DOI] [PubMed] [Google Scholar]
  • [380].Wildanger D., Patton B. R., Schill H., Marseglia L., Hadden J., Knauer S., Schönle A., Rarity J. G., O’Brien J. L., and Hell S. W., Solid immersion facilitates fluorescence microscopy with nanometer resolution and sub-Ångström emitter localization, Advanced Materials 24, OP309 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [381].Osseforth C., Moffitt J. R., Schermelleh L., and Michaelis J., Simultaneous dual-color 3D STED microscopy, Optics express 22, 7028 (2014). [DOI] [PubMed] [Google Scholar]
  • [382].Bodén A., Pennacchietti F., Coceano G., Damenti M., Ratz M., and Testa I., Volumetric live cell imaging with three-dimensional parallelized RESOLFT microscopy, Nature Biotechnology 39, 609 (2021). [DOI] [PubMed] [Google Scholar]
  • [383].Grotjohann T., Testa I., Reuss M., Brakemann T., Eggeling C., Hell S. W., and Jakobs S., rsEGFP2 enables fast RESOLFT nanoscopy of living cells, Elife 1, e00248 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [384].Pennacchietti F., Serebrovskaya E. O., Faro A. R., Shemyakina I. I., Bozhanova N. G., Kotlobay A. A., Gurskaya N. G., Bodén A., Dreier J., and Chudakov D. M., Fast reversibly photoswitching red fluorescent proteins for live-cell RESOLFT nanoscopy, Nature methods 15, 601 (2018). [DOI] [PubMed] [Google Scholar]
  • [385].Hell S. W. and Kroug M., Ground-state-depletion fluorescence microscopy: A concept for breaking the diffraction resolution limit, Applied Physics B 60, 495 (1995). [Google Scholar]
  • [386].Balzarotti F., Eilers Y., Gwosch K. C., Gynnå A. H., Westphal V., Stefani F. D., Elf J., and Hell S. W., Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes, Science 355, 606 (2017). [DOI] [PubMed] [Google Scholar]
  • [387].Cnossen J., Hinsdale T., Thorsen R. Ø., Siemons M., Schueder F., Jungmann R., Smith C. S., Rieger B., and Stallinga S., Localization microscopy at doubled precision with patterned illumination, Nature methods 17, 59 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [388].Gu L., Li Y., Zhang S., Xue Y., Li W., Li D., Xu T., and Ji W., Molecular resolution imaging by repetitive optical selective exposure, Nature methods 16, 1114 (2019). [DOI] [PubMed] [Google Scholar]
  • [389].Jouchet P., Cabriel C., Bourg N., Bardou M., Poüs C., Fort E., and Lévêque-Fort S., Nanometric axial localization of single fluorescent molecules with modulated excitation, Nature Photonics 15, 297 (2021). [Google Scholar]
  • [390].Reymond L., Ziegler J., Knapp C., Wang F.-C., Huser T., Ruprecht V., and Wieser S., SIMPLE: Structured illumination based point localization estimator with enhanced precision, Optics express 27, 24578 (2019). [DOI] [PubMed] [Google Scholar]
  • [391].Masullo L. A., Lopez L. F., and Stefani F. D., A common framework for single-molecule localization using sequential structured illumination, Biophysical Reports 2, 100036 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [392].Masullo L. A., Szalai A. M., Lopez L. F., Pilo-Pais M., Acuna G. P., and Stefani F. D., An alternative to MINFLUX that enables nanometer resolution in a confocal microscope, Light: Science & Applications 11, 199 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [393].Eilers Y., Ta H., Gwosch K. C., Balzarotti F., and Hell S. W., MINFLUX monitors rapid molecular jumps with superior spatiotemporal resolution, Proceedings of the National Academy of Sciences 115, 6117 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [394].Gwosch K. C., Pape J. K., Balzarotti F., Hoess P., Ellenberg J., Ries J., and Hell S. W., MINFLUX nanoscopy delivers 3D multicolor nanometer resolution in cells, Nature methods 17, 217 (2020). [DOI] [PubMed] [Google Scholar]
  • [395].Pape J. K., Stephan T., Balzarotti F., Büchner R., Lange F., Riedel D., Jakobs S., and Hell S. W., Multicolor 3D MINFLUX nanoscopy of mitochondrial MICOS proteins, Proceedings of the National Academy of Sciences 117, 20607 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [396].Schmidt R., Weihs T., Wurm C. A., Jansen I., Rehman J., Sahl S. J., and Hell S. W., MINFLUX nanometer-scale 3D imaging and microsecond-range tracking on a common fluorescence microscope, Nature communications 12, 1 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [397].Sigal Y. M., Zhou R., and Zhuang X., Visualizing and discovering cellular structures with super-resolution microscopy, Science 361, 880 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [398].Schermelleh L., Ferrand A., Huser T., Eggeling C., Sauer M., Biehlmaier O., and Drummen G. P., Superresolution microscopy demystified, Nature cell biology 21, 72 (2019). [DOI] [PubMed] [Google Scholar]
  • [399].Lelek M., Gyparaki M. T., Beliu G., Schueder F., Griffié J., Manley S., Jungmann R., Sauer M., Lakadamyali M., and Zimmer C., Single-molecule localization microscopy, Nature Reviews Methods Primers 1, 1 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [400].York A. G., Ghitani A., Vaziri A., Davidson M. W., and Shroff H., Confined activation and subdiffractive localization enables whole-cell PALM with genetically expressed probes, Nature methods 8, 327 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [401].Opstad I. S., Acuña S., Hernandez L. E. V., Cauzzo J., Škalko-Basnet N., Ahluwalia B. S., and Agarwal K., Fluorescence fluctuations-based super-resolution microscopy techniques: An experimental comparative study, arXiv preprint arXiv:2008.09195 (2020). [Google Scholar]
  • [402].Pawlowska M., Tenne R., Ghosh B., Makowski A., and Lapkiewicz R., Embracing the uncertainty: The evolution of SOFI into a diverse family of fluctuation-based super-resolution microscopy methods, JPhys Photonics 4, 10.1088/2515-7647/ac3838 (2022). [DOI] [Google Scholar]
  • [403].Dertinger T., Colyer R., Iyer G., Weiss S., and Enderlein J., Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI), Proceedings of the National Academy of Sciences 106, 22287 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [404].Dertinger T., Heilemann M., Vogel R., Sauer M., and Weiss S., Superresolution optical fluctuation imaging with organic dyes, Angewandte Chemie International Edition 49, 9441 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [405].Gustafsson N., Culley S., Ashdown G., Owen D. M., Pereira P. M., and Henriques R., Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations, Nature communications 7, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [406].Solomon O., Mutzafi M., Segev M., and Eldar Y. C., Sparsity-based super-resolution microscopy from correlation information, Optics express 26, 18238 (2018). [DOI] [PubMed] [Google Scholar]
  • [407].Torres-García E., Pinto-Cámara R., Linares A., Martínez D., Abonza V., Brito-Alarcón E., Calcines-Cruz C., Valdés-Galindo G., Torres D., Jabloñski M., et al., Extending resolution within a single imaging frame, Nature Communications 13, 1 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [408].Cox S., Rosten E., Monypenny J., Jovanovic-Talisman T., Burnette D. T., Lippincott-Schwartz J., Jones G. E., and Heintzmann R., Bayesian localization microscopy reveals nanoscale podosome dynamics, Nature methods 9, 195 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [409].Dertinger T., Colyer R., Iyer G., Weiss S., and Enderlein J., Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI), Proceedings of the National Academy of Sciences of the United States of America 106, 22287 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [410].Grußmayer K., Lukes T., Lasser T., and Radenovic A., Self-blinking dyes unlock high-order and multi-plane super-resolution optical fluctuation imaging, ACS Nano 14, 9156 (2020), arXiv:2002.10224. [DOI] [PubMed] [Google Scholar]
  • [411].Deschout H., Lukes T., Sharipov A., Szlag D., Feletti L., Vandenberg W., Dedecker P., Hofkens J., Leutenegger M., Lasser T., and Radenovic A., Complementarity of PALM and SOFI for super-resolution live-cell imaging of focal adhesions, Nature Communications 7, 10.1038/ncomms13693 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [412].Dertinger T., Colyer R., Vogel R., Enderlein J., and Weiss S., Achieving increased resolution and more pixels with superresolution optical fluctuation imaging (SOFI), Optics Express 18, 18875 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [413].Geissbuehler S., Bocchio N. L., Dellagiacoma C., Berclaz C., Leutenegger M., and Lasser T., Mapping molecular statistics with balanced super-resolution optical fluctuation imaging (bSOFI), Optical Nanoscopy 1, 1 (2012). [Google Scholar]
  • [414].Girsault A., Lukes T., Sharipov A., Geissbuehler S., Leutenegger M., Vandenberg W., Dedecker P., Hofkens J., and Lasser T., SOFI simulation tool: A software package for simulating and testing super-resolution optical fluctuation imaging, PLoS ONE 11, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [415].Geissbuehler S., Sharipov A., Godinat A., Bocchio N. L., Sandoz P. A., Huss A., Jensen N. A., Jakobs S., Enderlein J., Gisou Van Der Goot F., Dubikovskaya E. A., Lasser T., and Leutenegger M., Live-cell multiplane three-dimensional super-resolution optical fluctuation imaging, Nature Communications 5, 1 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [416].Lidke K. A., Rieger B., Jovin T. M., and Heintzmann R., Superresolution by localization of quantum dots using blinking statistics, Optics express 13, 7052 (2005). [DOI] [PubMed] [Google Scholar]
  • [417].Betzig E., Proposed method for molecular optical imaging, Optics letters 20, 237 (1995). [DOI] [PubMed] [Google Scholar]
  • [418].Lippincott-Schwartz J. and Patterson G. H., Photoactivatable fluorescent proteins for diffraction-limited and super-resolution imaging, Trends in cell biology 19, 555 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [419].Shroff H., Galbraith C. G., Galbraith J. A., and Betzig E., Live-cell photoactivated localization microscopy of nanoscale adhesion dynamics, Nature methods 5, 417 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [420].Heilemann M., Van De Linde S., Schüttpelz M., Kasper R., Seefeldt B., Mukherjee A., Tinnefeld P., and Sauer M., Subdiffraction-resolution fluorescence imaging with conventional fluorescent probes, Angewandte Chemie International Edition 47, 6172 (2008). [DOI] [PubMed] [Google Scholar]
  • [421].Wombacher R., Heidbreder M., Van De Linde S., Sheetz M. P., Heilemann M., Cornish V. W., and Sauer M., Live-cell super-resolution imaging with trimethoprim conjugates, Nature methods 7, 717 (2010). [DOI] [PubMed] [Google Scholar]
  • [422].Schnitzbauer J., Strauss M. T., Schlichthaerle T., Schueder F., and Jungmann R., Super-resolution microscopy with DNA-PAINT, Nature protocols 12, 1198 (2017). [DOI] [PubMed] [Google Scholar]
  • [423].Jungmann R., Avendaño M. S., Woehrstein J. B., Dai M., Shih W. M., and Yin P., Multiplexed 3D cellular super-resolution imaging with DNA-PAINT and exchange-PAINT, Nature methods 11, 313 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [424].Wade O. K., Woehrstein J. B., Nickels P. C., Strauss S., Stehr F., Stein J., Schueder F., Strauss M. T., Ganji M., Schnitzbauer J., et al., 124-color super-resolution imaging by engineering DNA-PAINT blinking kinetics, Nano letters 19, 2641 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [425].Strauss S. and Jungmann R., Up to 100-fold speedup and multiplexing in optimized DNA-PAINT, Nature methods 17, 789 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [426].Agarwal K. and Macháň R., Multiple signal classification algorithm for super-resolution fluorescence microscopy, Nature communications 7, 1 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [427].Liu S., Kromann E. B., Krueger W. D., Bewersdorf J., and Lidke K. A., Three dimensional single molecule localization using a phase retrieved pupil function, Optics express 21, 29462 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [428].Li Y., Mund M., Hoess P., Deschamps J., Matti U., Nijmeijer B., Sabinina V. J., Ellenberg J., Schoen I., and Ries J., Real-time 3d single-molecule localization using experimental point spread functions, Nature methods 15, 367 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [429].Babcock H., Sigal Y. M., and Zhuang X., A high-density 3D localization algorithm for stochastic optical reconstruction microscopy, Optical nanoscopy 1, 1 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [430].Huang F., Schwartz S. L., Byars J. M., and Lidke K. A., Simultaneous multiple-emitter fitting for single molecule super-resolution imaging, Biomedical optics express 2, 1377 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [431].Fazel M., Wester M. J., Mazloom-Farsibaf H., Meddens M., Eklund A. S., Schlichthaerle T., Schueder F., Jungmann R., and Lidke K. A., Bayesian multiple emitter fitting using reversible jump Markov chain Monte Carlo, Scientific reports 9, 1 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [432].Fazel M., Wester M. J., Schodt D. J., Cruz S. R., Strauss S., Schueder F., Schlichthaerle T., Gillette J. M., Lidke D. S., Rieger B., et al., High-precision estimation of emitter positions using Bayesian grouping of localizations, Nature Communications 13, 7152 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [433].Reinhardt S. C., Masullo L. A., Baudrexel I., Steen P. R., Kowalewski R., Eklund A. S., Strauss S., Unterauer E. M., Schlichthaerle T., Strauss M. T., et al., Ångström-resolution fluorescence microscopy, Nature 617, 711 (2023). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [434].Cheezum M. K., Walker W. F., and Guilford W. H., Quantitative comparison of algorithms for tracking single fluorescent particles, Biophysical journal 81, 2378 (2001). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [435].Wong Y., Lin Z., and Ober R. J., Limit of the accuracy of parameter estimation for moving single molecules imaged by fluorescence microscopy, IEEE Transactions on Signal Processing 59, 895 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [436].Michalet X. and Berglund A. J., Optimal diffusion coefficient estimation in single-particle tracking, Physical Review E 85, 061916 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [437].Muñoz-Gil G., Volpe G., Garcia-March M. A., Aghion E., Argun A., Hong C. B., Bland T., Bo S., Conejero J. A., Firbas N., et al., Objective comparison of methods to decode anomalous diffusion, Nature communications 12, 1 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [438].Tinevez J.-Y., Perry N., Schindelin J., Hoopes G. M., Reynolds G. D., Laplantine E., Bednarek S. Y., Shorte S. L., and Eliceiri K. W., TrackMate: An open and extensible platform for single-particle tracking, Methods 115, 80 (2017). [DOI] [PubMed] [Google Scholar]
  • [439].Huang B., Wang W., Bates M., and Zhuang X., Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy, Science 319, 810 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [440].Pavani S. R. P., Thompson M. A., Biteen J. S., Lord S. J., Liu N., Twieg R. J., Piestun R., and Moerner W. E., Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function, Proceedings of the National Academy of Sciences 106, 2995 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [441].Lew M. D., Lee S. F., Badieirostami M., and Moerner W., Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects, Optics letters 36, 202 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [442].Shechtman Y., Sahl S. J., Backer A. S., and Moerner W. E., Optimal point spread function design for 3D imaging, Physical review letters 113, 133902 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [443].Izeddin I., El Beheiry M., Andilla J., Ciepielewski D., Darzacq X., and Dahan M., PSF shaping using adaptive optics for three-dimensional single-molecule superresolution imaging and tracking, Optics express 20, 4957 (2012). [DOI] [PubMed] [Google Scholar]
  • [444].Siemons M., Cloin B. M., Salas D. M., Nijenhuis W., Katrukha E. A., and Kapitein L. C., Comparing strategies for deep astigmatism-based single-molecule localization microscopy, Biomedical optics express 11, 735 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [445].Di Francia G. T., Super-gain antennas and optical resolving power, Il Nuovo Cimento (1943–1954) 9, 426 (1952). [Google Scholar]
  • [446].Dowski E. R. and Cathey W. T., Extended depth of field through wave-front coding, Applied optics 34, 1859 (1995). [DOI] [PubMed] [Google Scholar]
  • [447].Chi W. and George N., Electronic imaging using a logarithmic asphere, Optics Letters 26, 875 (2001). [DOI] [PubMed] [Google Scholar]
  • [448].McGloin D. and Dholakia K., Bessel beams: Diffraction in a new light, Contemporary physics 46, 15 (2005). [Google Scholar]
  • [449].Ben-Eliezer E., Zalevsky Z., Marom E., and Konforti N., All-optical extended depth of field imaging system, Journal of Optics A: Pure and Applied Optics 5, S164 (2003). [DOI] [PubMed] [Google Scholar]
  • [450].von Diezmann L., Shechtman Y., and Moerner W., Three-dimensional localization of single molecules for super-resolution imaging and single-particle tracking, Chemical reviews 117, 7244 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [451].Kao H. P. and Verkman A., Tracking of single fluorescent particles in three dimensions: use of cylindrical optics to encode particle position, Biophysical journal 67, 1291 (1994). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [452].Schechner Y. Y., Piestun R., and Shamir J., Wave propagation with rotating intensity distributions, Physical Review E 54, R50 (1996). [DOI] [PubMed] [Google Scholar]
  • [453].Jia S., Vaughan J. C., and Zhuang X., Isotropic three-dimensional super-resolution imaging with a self-bending point spread function, Nature photonics 8, 302 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [454].Baddeley D., Cannell M. B., and Soeller C., Three-dimensional sub-100 nm super-resolution imaging of biological samples using a phase ramp in the objective pupil, Nano Research 4, 589 (2011). [Google Scholar]
  • [455].Prasad S., Rotating point spread function via pupil-phase engineering, Optics letters 38, 585 (2013). [DOI] [PubMed] [Google Scholar]
  • [456].Shechtman Y., Eldar Y. C., Cohen O., Chapman H. N., Miao J., and Segev M., Phase retrieval with application to optical imaging: a contemporary overview, IEEE signal processing magazine 32, 87 (2015). [Google Scholar]
  • [457].Smith C., Huisman M., Siemons M., Grünwald D., and Stallinga S., Simultaneous measurement of emission color and 3D position of single molecules, Optics express 24, 4996 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [458].Gerchberg R. W. and Saxton W. O., A practical algorithm for the determination of phase from image and diffraction plane pictures, Optik 35, 237 (1972). [Google Scholar]
  • [459].Fienup J. R., Reconstruction of an object from the modulus of its Fourier transform, Optics letters 3, 27 (1978). [DOI] [PubMed] [Google Scholar]
  • [460].Petrov P. N., Shechtman Y., and Moerner W., Measurement-based estimation of global pupil functions in 3D localization microscopy, Optics express 25, 7945 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [461].Badieirostami M., Lew M. D., Thompson M. A., and Moerner W., Three-dimensional localization precision of the double-helix point spread function versus astigmatism and biplane, Applied physics letters 97, 161103 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [462].Pavani S. R. P. and Piestun R., High-efficiency rotating point spread functions, Optics express 16, 3484 (2008). [DOI] [PubMed] [Google Scholar]
  • [463].Nehme E., Freedman D., Gordon R., Ferdman B., Weiss L. E., Alalouf O., Naor T., Orange R., Michaeli T., and Shechtman Y., DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning, Nature methods 17, 734 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [464].Nehme E., Ferdman B., Weiss L. E., Naor T., Freedman D., Michaeli T., and Shechtman Y., Learning optimal wavefront shaping for multi-channel imaging, IEEE Transactions on Pattern Analysis and Machine Intelligence 43, 2179 (2021). [DOI] [PubMed] [Google Scholar]
  • [465].Shechtman Y., Weiss L. E., Backer A. S., Lee M. Y., and Moerner W., Multicolour localization microscopy by point-spread-function engineering, Nature photonics 10, 590 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [466].Hershko E., Weiss L. E., Michaeli T., and Shechtman Y., Multicolor localization microscopy and point-spread-function engineering by deep learning, Optics express 27, 6158 (2019). [DOI] [PubMed] [Google Scholar]
  • [467].Thevathasan J. V., Kahnwald M., Cieśliński K., Hoess P., Peneti S. K., Reitberger M., Heid D., Kasuba K. C., Hoerner S. J., Li Y., et al., Nuclear pores as versatile reference standards for quantitative superresolution microscopy, Nature methods 16, 1045 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [468].Fischer L. S., Klingner C., Schlichthaerle T., Strauss M. T., Böttcher R., Fässler R., Jungmann R., and Grashoff C., Quantitative single-protein imaging reveals molecular complex formation of integrin, talin, and kindlin during cell adhesion, Nature communications 12, 1 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [469].Riedl J., Crevenna A. H., Kessenbrock K., Yu J. H., Neukirchen D., Bista M., Bradke F., Jenne D., Holak T. A., Werb Z., et al. , Lifeact: a versatile marker to visualize F-actin, Nature methods 5, 605 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [470].Andrews N. L., Lidke K. A., Pfeiffer J. R., Burns A. R., Wilson B. S., Oliver J. M., and Lidke D. S., Actin restricts FcεRI diffusion and facilitates antigen-induced receptor immobilization, Nature cell biology 10, 955 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [471].Mazloom-Farsibaf H., Farzam F., Fazel M., Wester M. J., Meddens M. B., and Lidke K. A., Comparing lifeact and phalloidin for super-resolution imaging of actin in fixed cells, PLoS One 16, e0246138 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [472].Renz M., Fluorescence microscopy—a historical and technical perspective, Cytometry Part A 83, 767 (2013). [DOI] [PubMed] [Google Scholar]
  • [473].Shimomura O., Johnson F. H., and Saiga Y., Extraction, purification and properties of aequorin, a bioluminescent protein from the luminous hydromedusan, Aequorea, Journal of cellular and comparative physiology 59, 223 (1962). [DOI] [PubMed] [Google Scholar]
  • [474].Dickson R. M., Cubitt A. B., Tsien R. Y., and Moerner W. E., On/off blinking and switching behaviour of single molecules of green fluorescent protein, Nature 388, 355 (1997). [DOI] [PubMed] [Google Scholar]
  • [475].Madan S. K., Bhaumik B., and Vasi J., Experimental observation of avalanche multiplication in charge-coupled devices, IEEE Transactions on Electron Devices 30, 694 (1983). [Google Scholar]
  • [476].Bigas M., Cabruja E., Forest J., and Salvi J., Review of CMOS image sensors, Microelectronics journal 37, 433 (2006). [Google Scholar]
  • [477].Ulku A. C., Bruschini C., Antolović I. M., Kuo Y., Ankri R., Weiss S., Michalet X., and Charbon E., A 512×512 SPAD image sensor with integrated gating for widefield FLIM, IEEE Journal of Selected Topics in Quantum Electronics 25, 1 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [478].Bruschini C., Homulle H., Antolovic I. M., Burri S., and Charbon E., Single-photon avalanche diode imagers in biophotonics: review and outlook, Light: Science & Applications 8, 1 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [479].Belthangady C. and Royer L. A., Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction, Nature methods 16, 1215 (2019). [DOI] [PubMed] [Google Scholar]
  • [480].de Haan K., Rivenson Y., Wu Y., and Ozcan A., Deep-learning-based image reconstruction and enhancement in optical microscopy, Proceedings of the IEEE 108, 30 (2019). [Google Scholar]
  • [481].Möckl L., Roy A. R., and Moerner W., Deep learning in single-molecule microscopy: fundamentals, caveats, and recent developments, Biomedical optics express 11, 1633 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [482].Volpe G., Wählby C., Tian L., Hecht M., Yakimovich A., Monakhova K., Waller L., Sbalzarini I. F., Metzler C. A., Xie M., et al., Roadmap on deep learning for microscopy, arXiv (2023). [Google Scholar]
  • [483].Wang Z., Zhu L., Zhang H., Li G., Yi C., Li Y., Yang Y., Ding Y., Zhen M., Gao S., et al., Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning, Nature methods 18, 551 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [484].Patel K. B., Liang W., Casper M. J., Voleti V., Li W., Yagielski A. J., Zhao H. T., Perez Campos C., Lee G. S., Liu J. M., et al., High-speed light-sheet microscopy for the in-situ acquisition of volumetric histological images of living tissue, Nature Biomedical Engineering 6, 569 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [485].Winter S., Campbell T., Lin L., Srivastava S., and Dunson D. B., Machine learning and the future of Bayesian computation, arXiv preprint arXiv:2304.11251 (2023). [Google Scholar]
  • [486].Sahl S. J., Hell S. W., and Jakobs S., Fluorescence nanoscopy in cell biology, Nature reviews Molecular cell biology 18, 685 (2017). [DOI] [PubMed] [Google Scholar]
  • [487].Stockert J. C. and Blázquez-Castro A., Fluorescence microscopy in life sciences (Bentham Science Publishers, 2017). [Google Scholar]
  • [488].Lee K., Kim K., Jung J., Heo J., Cho S., Lee S., Chang G., Jo Y., Park H., and Park Y., Quantitative phase imaging techniques for the study of cell pathophysiology: from principles to applications, Sensors 13, 4170 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [489].Kim K., Yoon J., Shin S., Lee S., Yang S.-A., and Park Y., Optical diffraction tomography techniques for the study of cell pathophysiology, Journal of Biomedical Photonics & Engineering 2, 020201 (2016). [Google Scholar]
  • [490].Smith E. and Dent G., Modern Raman spectroscopy: A practical approach (John Wiley & Sons, 2019). [Google Scholar]
  • [491].Camp C. H. Jr, Lee Y. J., Heddleston J. M., Hartshorn C. M., Walker A. R. H., Rich J. N., Lathia J. D., and Cicerone M. T., High-speed coherent Raman fingerprint imaging of biological tissues, Nature photonics 8, 627 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [492].Bishara W., Su T.-W., Coskun A. F., and Ozcan A., Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution, Optics express 18, 11181 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [493].Gatti A., Brambilla E., Bache M., and Lugiato L. A., Ghost imaging with thermal light: comparing entanglement and classical correlation, Physical review letters 93, 093602 (2004). [DOI] [PubMed] [Google Scholar]
  • [494].Shapiro J. H., Computational ghost imaging, Physical Review A 78, 061802 (2008). [Google Scholar]
  • [495].Ruh D., Mutschler J., Michelbach M., and Rohrbach A., Superior contrast and resolution by image formation in rotating coherent scattering (ROCS) microscopy, Optica 5, 1371 (2018). [Google Scholar]
  • [496].Jünger F., Ruh D., Strobel D., Michiels R., Huber D., Brandel A., Madl J., Gavrilov A., Mihlan M., Daller C. C., et al., 100 Hz ROCS microscopy correlated with fluorescence reveals cellular dynamics on different spatiotemporal scales, Nature Communications 13, 1758 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [497].Chen F., Tillberg P. W., and Boyden E. S., Expansion microscopy, Science 347, 543 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [498].Gambarotto D., Zwettler F. U., Le Guennec M., Schmidt-Cernohorska M., Fortun D., Borgers S., Heine J., Schloetel J.-G., Reuss M., Unser M., et al., Imaging cellular ultrastructures using expansion microscopy (U-ExM), Nature methods 16, 71 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [499].Tavakoli M., Jazani S., Sgouralis I., Heo W., Ishii K., Tahara T., and Pressé S., Direct photon-by-photon analysis of time-resolved pulsed excitation data using Bayesian nonparametrics, Cell Reports Physical Science 1, 100234 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [500].Michalet X., Siegmund O., Vallerga J., Jelinsky P., Millaud J., and Weiss S., Detectors for single-molecule fluorescence imaging and spectroscopy, Journal of modern optics 54, 239 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [501].Weiss S., Fluorescence spectroscopy of single biomolecules, Science 283, 1676 (1999). [DOI] [PubMed] [Google Scholar]
  • [502].Huang F., Hartwich T. M., Rivera-Molina F. E., Lin Y., Duim W. C., Long J. J., Uchil P. D., Myers J. R., Baird M. A., Mothes W., et al. , Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms, Nature methods 10, 653 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [503].Afanasyev P., Ravelli R. B., Matadeen R., De Carlo S., van Duinen G., Alewijnse B., Peters P. J., Abrahams J.-P., Portugal R. V., Schatz M., et al. , A posteriori correction of camera characteristics from large image data sets, Scientific reports 5, 1 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [504].Heintzmann R., Relich P. K., Nieuwenhuizen R. P., Lidke K. A., and Rieger B., Calibrating photon counts from a single image, arXiv preprint arXiv:1611.05654 (2016). [Google Scholar]
  • [505].Boyle W. S. and Smith G. E., Charge coupled semiconductor devices, Bell System Technical Journal 49, 587 (1970). [Google Scholar]
  • [506].Amelio G. F., Tompsett M. F., and Smith G. E., Experimental verification of the charge coupled device concept, Bell System Technical Journal 49, 593 (1970). [Google Scholar]
  • [507].Fossum E. R. and Hondongwa D. B., A review of the pinned photodiode for CCD and CMOS image sensors, IEEE Journal of the electron devices society (2014). [Google Scholar]
  • [508].Jerram P., Pool P. J., Bell R., Burt D. J., Bowring S., Spencer S., Hazelwood M., Moody I., Catlett N., and Heyes P. S., The LLCCD: Low-light imaging without the need for an intensifier, in Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications II, Vol. 4306 (International Society for Optics and Photonics, 2001) pp. 178–186. [Google Scholar]
  • [509].Basden A., Haniff C., and Mackay C., Photon counting strategies with low-light-level CCDs, Monthly notices of the royal astronomical society 345, 985 (2003). [Google Scholar]
  • [510].Tian H., Noise analysis in CMOS image sensors (Stanford University, 2000). [Google Scholar]
  • [511].Liu S., Mlodzianoski M. J., Hu Z., Ren Y., McEl-murry K., Suter D. M., and Huang F., sCMOS noisecorrection algorithm for microscopy images, Nature methods 14, 760 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [512].Mandracchia B., Hua X., Guo C., Son J., Urner T., and Jia S., Fast and accurate sCMOS noise correction for fluorescence microscopy, Nature communications 11, 1 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [513].Ober R. J., Ram S., and Ward E. S., Localization accuracy in single-molecule microscopy, Biophysical journal 86, 1185 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [514].Zhang W. and Chen Q., Signal-to-noise ratio performance comparison of electron multiplying CCD and intensified CCD detectors, in 2009 International Conference on Image Analysis and Signal Processing (IEEE, 2009) pp. 337–341. [Google Scholar]
  • [515].Harpsøe K. B., Andersen M. I., and Kjægaard P., Bayesian photon counting with electron-multiplying charge coupled devices (EMCCDs), Astronomy & Astrophysics 537, A50 (2012). [Google Scholar]
  • [516].Hirsch M., Wareham R. J., Martin-Fernandez M. L., Hobson M. P., and Rolfe D. J., A stochastic model for electron multiplication charge-coupled devices-from theory to practice, PloS one 8, e53671 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [517].Quan T., Zeng S., and Huang Z., Localization capability and limitation of electron-multiplying charge-coupled, scientific complementary metal-oxide semiconductor, and charge-coupled devices for superresolution imaging, Journal of biomedical optics 15, 066005 (2010). [DOI] [PubMed] [Google Scholar]
  • [518].Daigle O., Carignan C., Gach J.-L., Guillaume C., Lessard S., Fortin C.-A., and Blais-Ouellette S., Extreme faint flux imaging with an EMCCD, Publications of the Astronomical Society of the Pacific 121, 866 (2009). [Google Scholar]
  • [519].Tubbs R. N., Lucky exposures: Diffraction limited astronomical imaging through the atmosphere, arXiv preprint astro-ph/0311481 (2003). [Google Scholar]
  • [520].Chao J., Ward E. S., and Ober R. J., Fisher information matrix for branching processes with application to electron-multiplying charge-coupled devices, Multidimensional systems and signal processing 23, 349 (2012). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [521].Chao J., Ram S., Ward E. S., and Ober R. J., Two approximations for the geometric model of signal amplification in an electron-multiplying charge-coupled device detector, in Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XX, Vol. 8589 (International Society for Optics and Photonics, 2013)p. 858905. [DOI] [PMC free article] [PubMed] [Google Scholar]

