Published in final edited form as: J Microsc. 2016 Mar 11;263(3):238–249. doi: 10.1111/jmi.12384

Centroid precision and orientation precision of planar localization microscopy

C. McGray, C.R. Copeland, S.M. Stavis and J. Geist

Summary

The concept of localization precision, which is essential to localization microscopy, is formally extended from optical point sources to microscopic rigid bodies. Measurement functions are presented to calculate the planar pose and motion of microscopic rigid bodies from localization microscopy data. Physical lower bounds on the associated uncertainties – termed centroid precision and orientation precision – are derived analytically in terms of the characteristics of the optical measurement system and validated numerically by Monte Carlo simulations. The practical utility of these expressions is demonstrated experimentally by an analysis of the motion of a microelectromechanical goniometer indicated by a sparse constellation of fluorescent nanoparticles. Centroid precision and orientation precision, as developed here, are useful concepts due to the generality of the expressions and the widespread interest in localization microscopy for super-resolution imaging and particle tracking.

Keywords: Centroid precision, localization microscopy, localization precision, orientation precision, uncertainty

1. Introduction

Localization microscopy comprises a class of rapidly advancing and broadly applicable experimental and computational methods for measuring the positions and motions of small optical indicators (Deschout et al., 2014a, b; Chenouard et al., 2014). Although the resolution of optical microscopy is ordinarily limited by diffraction to the Rayleigh limit of approximately half the imaging wavelength, the position of an optical point source that is individually resolved within an image with pixels and noise can be measured with an uncertainty that is orders of magnitude smaller (Bobroff, 1986). The physical lower bound of uncertainty in estimating the position of a fluorescent point source was derived (Thompson et al., 2002) and corrected (Mortensen et al., 2010) to:

LP = \sqrt{\frac{16\left(\sigma_G^2 + \frac{a^2}{12}\right)}{9N} + \frac{8\pi b^2 \left(\sigma_G^2 + \frac{a^2}{12}\right)^2}{a^2 N^2}}, \quad (1)

where σG is the standard deviation of the Gaussian approximation of the microscope point spread function, a is the pixel pitch of the imaging sensor, b2 is the expected number of background photons per pixel and N is the total number of detected signal photons. This expression of localization precision denotes one standard deviation along a single spatial axis.
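As a concrete illustration of Eq. (1), the following minimal Python sketch (the function name and the numerical values are illustrative assumptions, not from the original study) evaluates the localization precision from the four experimental parameters:

```python
import numpy as np

def localization_precision(sigma_g, a, b2, n):
    """Single-axis localization precision of one point source, Eq. (1).

    sigma_g : standard deviation of the Gaussian PSF approximation (nm)
    a       : pixel pitch of the imaging sensor (nm)
    b2      : expected background photons per pixel
    n       : total detected signal photons
    """
    s2 = sigma_g**2 + a**2 / 12.0  # PSF variance broadened by pixelation
    return np.sqrt(16.0 * s2 / (9.0 * n)
                   + 8.0 * np.pi * b2 * s2**2 / (a**2 * n**2))

# Illustrative values of similar magnitude to Table 1 (assumed, not prescriptive)
print(localization_precision(sigma_g=286.0, a=127.0, b2=228.0, n=3.0e4))  # ~3 nm
```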

Localization precision is widely used as a metric of minimum uncertainty for designing measurements for low uncertainty and for assessing empirical uncertainties (Thompson et al., 2002; Ober et al., 2004; Betzig et al., 2006; Huang et al., 2009, 2013; Mortensen et al., 2010; Smith et al., 2010; Deschout et al., 2014b; Chenouard et al., 2014; Endesfelder & Heilemann, 2014). For a given experimental measurement or analysis, Eq. (1) can be used to predict if a target uncertainty is physically possible to achieve. Eq. (1) can also be informative of critical parameters in the design of an experimental measurement system to achieve a target uncertainty. In an analysis of experimental measurement results, if the empirical uncertainty is significantly larger than the localization precision, then experimental errors not modelled by Eq. (1), such as microscope drift (Elmokadem & Yu, 2015), vibration, or fixed pattern noise (Fox-Roberts et al., 2014; Long et al., 2014), must dominate the uncertainty in the measurement. If the empirical uncertainty is approximately equal to the localization precision, however, then it might be possible to reduce the empirical uncertainty by changing one of the experimental parameters in Eq. (1), such as increasing the number of detected signal photons emitted by an optical indicator.

Common optical indicators for localization microscopy include individual fluorophores, which are widely used in super-resolution imaging, and subresolution particles containing many fluorophores, which are frequently tracked as probes and fiducials (Ribeck & Saleh, 2008). Such fluorescent nanoparticles are better localized than individual fluorophores in two ways. An ensemble of many fluorophores with random orientations increases N in Eq. (1), other things being equal, and produces approximately isotropic emission, for which simple estimation algorithms yield optimal results (Enderlein et al., 2006; Stallinga & Rieger, 2010), whereas anisotropic dipole emission requires more complex methods (Stallinga & Rieger, 2012). For many practical applications, the localization precision of an individual optical indicator is on the order of 1 to 10 nm.

Many microscopic objects of interest are rigid bodies with planar motions which can be measured by localizing and tracking multiple optical indicators in or on the bodies. For such experimental systems, information from multiple optical indicators can be mathematically combined to further reduce the uncertainty of a position measurement and to enable an orientation measurement with low uncertainty. Related measurements have diverse applications in microtechnology, nanotechnology, biology, materials and metrology (Freeman, 2001; Ropp et al., 2013; Berfield et al., 2006, 2007; McGray et al., 2013; Samuel et al., 2007; Yoshida et al., 2011; Teyssieux et al., 2011; Ueno et al., 2010). Recently, constellations of fluorescent nanoparticles indicating the motion of microscopic actuators were localized and tracked, demonstrating the utility of the measurement method (McGray et al., 2013; Copeland et al., 2015). McGray et al. (2013) presented an expression for the minimum uncertainty of the centroid of a constellation of point sources of equal brightness. However, metrics comparable to localization precision have not been developed for either the centroid or the orientation of a sparse constellation of point sources of variable brightness on a microscopic rigid body. Such metrics would be useful to design measurements and assess uncertainties in these experimental systems.

In this paper, the concept of localization precision is extended to the analogous concepts of centroid precision and orientation precision. Just as localization precision provides the minimum uncertainty of localizing a point source by optical microscopy, centroid precision and orientation precision provide minimum uncertainties of the position and orientation of a sparse constellation of point sources of variable brightness on a microscopic rigid body in the image plane of an optical microscope. In Section 2, measurement functions and associated uncertainties for planar pose and motion are formally derived in terms of the localization precision and weighting of individual point sources in the constellation. This leads to the expressions for centroid precision and orientation precision that are summarized in Section 3. In Section 4, the expressions are numerically tested by Monte Carlo simulation, giving confidence in their validity. In Section 5, the expressions are applied to evaluate the experimental measurement uncertainties of the motion of a microelectromechanical goniometer labelled with fluorescent nanoparticles, providing a specific example of the type of analysis for which the expressions are useful. Future directions are indicated in Section 6, and conclusions are made in Section 7. Details of the measurement functions and associated uncertainties are presented in the Appendix.

2. Measurement functions and uncertainties for planar pose and motion

A sparse constellation of point sources in an invariant configuration can be used to measure the pose and motion of a rigid body in the imaging plane of an optical microscope by localizing and tracking the point sources, as illustrated in Figure 1. The optical indicators can be inherent to the body or applied for the purpose of the measurement. In such a measurement, an optical micrograph of a set of indicators in or on the body is captured, and the position of each point source is localized relative to the coordinate frame of the micrograph using an estimation technique, such as least squares or maximum likelihood (Mortensen et al., 2010). The images of the point sources in a micrograph captured after the body has moved are similarly localized. The full planar motion of the body can then be tracked by calculating the rigid planar transform that maps the points in the prior micrograph to the points in the subsequent micrograph. This transform can be expressed as a rotation followed by a translation.

Fig. 1.

Schematic illustrating the measurement concept, in which a sparse constellation of point sources in an invariant configuration indicates the planar motion of a microscopic rigid body. Planar motion of the body (gray) can be expressed by a rotation (ΔΘ) followed by a translation, (ΔX, ΔY). The initial positions of the point sources (blue) are $P_k=\{p_{k,1},\ldots,p_{k,\eta}\}$, where k is the index of a series of images, η is the number of point sources, and $p_{i,j}$ is the position of the jth point source in the ith frame of an image sequence. After the body has moved, the final positions of the point sources (green) are $P_{k+1}=\{p_{k+1,1},\ldots,p_{k+1,\eta}\}$. Uncertainties of each of the three motion parameters are derived from the uncertainties of the positions of the point sources. The fundamental limits of uncertainty of position and orientation measurements are termed centroid precision and orientation precision, respectively. These limits are derived from the localization precision and radial position of the individual point sources in the constellation.
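For illustration of the mapping described above and in Fig. 1, a generic unweighted least-squares estimate of the rigid planar transform between two matched point sets can be computed with the singular value decomposition approach of Arun et al. (1987), which the weighted measurement functions of Section 2 refine. This minimal Python sketch is an assumption for illustration; its function and variable names are not from the original study:

```python
import numpy as np

def fit_rigid_planar(p_k, p_k1):
    """Unweighted least-squares rigid planar transform mapping p_k onto p_k1.

    p_k, p_k1 : (eta, 2) arrays of matched point positions before and after motion.
    Returns (dx, dy, dtheta) such that R(dtheta) @ p + (dx, dy) ~ p_k1.
    """
    c0, c1 = p_k.mean(axis=0), p_k1.mean(axis=0)   # unweighted centroids
    q0, q1 = p_k - c0, p_k1 - c1                   # centroid-offset coordinates
    # SVD of the cross-covariance matrix gives the optimal rotation (Arun et al., 1987)
    u, _, vt = np.linalg.svd(q0.T @ q1)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against an improper reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    dtheta = np.arctan2(r[1, 0], r[0, 0])
    dx, dy = c1 - r @ c0
    return dx, dy, dtheta
```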

Let $P_k=\{p_{k,1},\ldots,p_{k,\eta}\}$ be a set of point source positions estimated from an image of a rigid body, and let $\hat{P}_0=\{\hat{p}_{0,1},\ldots,\hat{p}_{0,\eta}\}$ be the set of true positions of those point sources when an arbitrarily defined coordinate system intrinsic to the body is aligned with the (x, y) axes of the measurement. The diacritical hat, such as appears in $\hat{P}_0$, is used to denote a true value, as opposed to an estimated value. Let $T_P(v)$ be a transform over (x, y) vectors in the Cartesian plane. The pose of the body is the triple (X, Y, Θ) such that $T_P(v) = \begin{bmatrix} \cos\Theta & -\sin\Theta \\ \sin\Theta & \cos\Theta \end{bmatrix} v + (X, Y)$ is the transform that best maps $\hat{P}_0$ onto $P_k$.

Similarly, if $P_k=\{p_{k,1},\ldots,p_{k,\eta}\}$ is a set of point source positions estimated prior to some planar rigid motion, and $P_{k+1}=\{p_{k+1,1},\ldots,p_{k+1,\eta}\}$ is the set of point source positions estimated subsequent to the motion, then $M_{k,k+1}(v) = T_{P_{k+1}}\!\left(T_{P_k}^{-1}(v)\right) = \begin{bmatrix} \cos\Delta\Theta & -\sin\Delta\Theta \\ \sin\Delta\Theta & \cos\Delta\Theta \end{bmatrix} v + (\Delta X, \Delta Y)$ is the transform characterizing the motion. Importantly, the triple (ΔX, ΔY, ΔΘ) is independent of $\hat{P}_0$, allowing motion measurements to be performed independently of the choice of the θ origin, which can be arbitrarily defined. The sets $P_k$ and $P_{k+1}$ may represent the positions of all point sources observed in the two images or the intersection of two different sets of point sources observed in the two images, as, for example, in the case of super-resolution imaging (Betzig et al., 2006; Hess et al., 2006; Rust et al., 2006).

The measurement functions utilized in calculating $M_{k,k+1}$ from $P_k$ and $P_{k+1}$ are best selected to minimize the uncertainty of each coordinate of the motion (ΔX, ΔY, ΔΘ). In selecting these measurement functions and in calculating the associated uncertainties of motion, random and independent errors in the position estimates of point sources are assumed. The rotation and translation components of the motion can be treated separately (Arun et al., 1987). The translation, (ΔX, ΔY), is considered first. Each coordinate of each position estimate of a point source, $p_{k,i} \in P_k$ or $p_{k+1,i} \in P_{k+1}$, has some associated measurement uncertainty, $\sigma_{x,k,i} \in S_{x,k}$, $\sigma_{x,k+1,i} \in S_{x,k+1}$, $\sigma_{y,k,i} \in S_{y,k}$, or $\sigma_{y,k+1,i} \in S_{y,k+1}$, which is at best the localization precision of that point source. If the uncertainties in $S_{x,k}$, $S_{y,k}$, $S_{x,k+1}$ and $S_{y,k+1}$ are all equal, then (ΔX, ΔY) can be estimated with minimum uncertainty by the centroid displacement. However, in the experimentally relevant case that some of the points have lower uncertainties than others, for example due to a larger number of detected signal photons, it is appropriate to weight the contribution of each point to the measurement, producing the following measurement function and associated uncertainty:

\Delta X_w = \sum_{i=1}^{\eta} w_{\Delta X i}\,(x_{k+1,i} - x_{k,i}), \qquad
U(\Delta X_w) = \sqrt{\sum_{i=1}^{\eta} w_{\Delta X i}^2\,(\sigma_{x,k,i}^2 + \sigma_{x,k+1,i}^2)}, \qquad
w_{\Delta X i}^{Opt} = \frac{1}{\left(\sigma_{x,k,i}^2 + \sigma_{x,k+1,i}^2\right)\sum_{j=1}^{\eta} \frac{1}{\sigma_{x,k,j}^2 + \sigma_{x,k+1,j}^2}}, \quad (2)

where $\Delta X_w$ is the x coordinate component of the weighted estimate of the object motion, $U(\Delta X_w)$ is the associated uncertainty, and the set $W_{\Delta X}=\{w_{\Delta X i} : i \in 1..\eta\}$ is a set of weights applied to the point sources. The optimal choices of $W_{\Delta X}$ and $W_{\Delta Y}$ weight each point in inverse proportion to the sum of its position variances in the two images, as shown in Eq. (2).
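A minimal sketch of the weighted displacement measurement of Eq. (2), assuming NumPy arrays of matched coordinates and variances (the function and argument names are illustrative assumptions):

```python
import numpy as np

def weighted_displacement(x_k, x_k1, var_k, var_k1):
    """Optimally weighted x-axis displacement and its uncertainty, Eq. (2).

    x_k, x_k1     : x coordinates of the eta point sources in images k and k+1
    var_k, var_k1 : corresponding single-axis position variances (sigma squared)
    """
    inv_var = 1.0 / (var_k + var_k1)            # inverse of the summed variance per point
    w = inv_var / inv_var.sum()                 # optimal weights of Eq. (2)
    dx = np.sum(w * (x_k1 - x_k))               # weighted displacement estimate
    u_dx = np.sqrt(np.sum(w**2 * (var_k + var_k1)))  # reduces to 1/sqrt(sum(inv_var))
    return dx, u_dx
```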

Similarly, a minimum-uncertainty estimate of the object rotation, $\Delta\Theta_w$, can be calculated from the optimally weighted measurement function, with uncertainty $U(\Delta\Theta_w)$, as follows:

\Delta\Theta_w = \frac{\sum_{i=1}^{\eta} w_{\Delta\Theta i}\, r_{k,i} r_{k+1,i}\,(\theta_{k+1,i} - \theta_{k,i})}{\sum_{i=1}^{\eta} w_{\Delta\Theta i}\, r_{k,i} r_{k+1,i}}, \qquad
U(\Delta\Theta_w) = \frac{\sqrt{\sum_{i=1}^{\eta} w_{\Delta\Theta i}^2\,(r_{k+1,i}^2 \sigma_{k,i}^2 + r_{k,i}^2 \sigma_{k+1,i}^2)}}{\sum_{i=1}^{\eta} w_{\Delta\Theta i}\, r_{k,i} r_{k+1,i}}, \qquad
w_{\Delta\Theta i}^{Opt} = \frac{\dfrac{r_{k,i} r_{k+1,i}}{r_{k+1,i}^2 \sigma_{k,i}^2 + r_{k,i}^2 \sigma_{k+1,i}^2}}{\sum_{j=1}^{\eta} \dfrac{r_{k,j} r_{k+1,j}}{r_{k+1,j}^2 \sigma_{k,j}^2 + r_{k,j}^2 \sigma_{k+1,j}^2}}, \quad (3)

where $\Delta\Theta_w$ is the weighted estimate of the rotation of the object, $(r_{k,i}, \theta_{k,i})$ and $(r_{k+1,i}, \theta_{k+1,i})$ are the polar coordinates of the measured position of the ith point source with respect to the unweighted centroid of the constellation in the first and second image, respectively, $U(\Delta\Theta_w)$ is the uncertainty of $\Delta\Theta_w$, $w_{\Delta\Theta i}$ is the weight applied to the ith point source, and $w_{\Delta\Theta i}^{Opt}$ is the value of the ith weight in a normalized, optimized weighting. Derivations are presented in the Appendix.
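Correspondingly, Eq. (3) might be implemented as in the sketch below; the wrapping of angle differences is a practical addition not stated in the text, and all names are illustrative assumptions:

```python
import numpy as np

def weighted_rotation(r_k, th_k, r_k1, th_k1, var_k, var_k1):
    """Optimally weighted rotation estimate and its uncertainty, Eq. (3).

    r_k, th_k     : polar coordinates about the unweighted centroid in image k
    r_k1, th_k1   : polar coordinates about the unweighted centroid in image k+1
    var_k, var_k1 : single-axis position variances of each point in the two images
    """
    d = r_k1**2 * var_k + r_k**2 * var_k1          # per-point variance term of Eq. (3)
    w = r_k * r_k1 / d
    w = w / w.sum()                                # normalized optimal weights
    dth = np.angle(np.exp(1j * (th_k1 - th_k)))    # wrap angle differences to (-pi, pi]
    lam = np.sum(w * r_k * r_k1)
    dtheta = np.sum(w * r_k * r_k1 * dth) / lam    # measurement function of Eq. (3)
    u_dtheta = np.sqrt(np.sum(w**2 * d)) / lam     # associated uncertainty of Eq. (3)
    return dtheta, u_dtheta
```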

If $\sigma_{x,k,i} \approx \sigma_{x,k+1,i} \approx \sigma_{y,k,i} \approx \sigma_{y,k+1,i}$ for all $i \in \{1,\ldots,\eta\}$, then the uncertainty of each component of the motion is a factor of $\sqrt{2}$ greater than that of the corresponding component of the pose. The true positions $\hat{P}_0$ are not ordinarily known or estimated in measurement applications, since there is no axis to which the orientation of the rigid body is absolutely registered. In such cases, rotation is a meaningful metric, but orientation is not. In contrast, the true positions, $\hat{P}_0$, are typically known in simulations, such as the one reported in Section 4.

3. Centroid precision and orientation precision

The expression of localization precision given by Eq. (1) is the minimum uncertainty of a position measurement of a point source. Analogously, the equations in Table 3 can be used to express the minimum uncertainties of position and orientation measurements of a constellation of point sources on a microscopic rigid body in the image plane of a microscope. Most notable of these expressions are the centroid precision and the orientation precision. Centroid precision and orientation precision are minimum values of the centroid uncertainty and orientation uncertainty of the constellation, respectively, as determined from optimally weighted measurements of the pose of the constellation:

C_{XP} = \frac{1}{\sqrt{\sum_{i=1}^{\eta} \frac{1}{\sigma_i^2}}}, \qquad
C_{YP} = \frac{1}{\sqrt{\sum_{i=1}^{\eta} \frac{1}{\sigma_i^2}}}, \qquad
O_P = \frac{1}{\sqrt{\sum_{i=1}^{\eta} \left(\frac{r_i}{\sigma_i}\right)^2}}, \qquad
\sigma_i = \frac{\sqrt{\left(16 a^2 N_i + 72\pi b^2\left(\sigma_G^2 + \frac{a^2}{12}\right)\right)\left(\sigma_G^2 + \frac{a^2}{12}\right)}}{3 a N_i}, \quad (4)

where $(C_{XP}, C_{YP})$ is the centroid precision, $O_P$ is the orientation precision and $N_i$ is the total number of signal photons detected from the ith point source. The minimum values for the associated measurements of motion are given in Table 3. These expressions have several practical implications. Both centroid precision and orientation precision can be improved by increasing the brightness of individual point sources and the number of point sources in the constellation. Orientation precision can be further improved by increasing the radius of the constellation.
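A minimal sketch of Eq. (4), combining the per-particle localization precision of Eq. (1) with the constellation geometry; the example inputs are loosely based on Table 1 and are assumptions, not measured results:

```python
import numpy as np

def constellation_precision(n_photons, r, sigma_g, a, b2):
    """Centroid precision and orientation precision of a constellation, Eq. (4).

    n_photons : detected signal photons from each point source
    r         : radial distance of each point source from the centroid (same units as a)
    sigma_g, a, b2 : PSF standard deviation, pixel pitch, background photons per pixel
    """
    s2 = sigma_g**2 + a**2 / 12.0
    # Localization precision of the ith point source, Eq. (1) with N = N_i
    sigma_i = np.sqrt(16.0 * s2 / (9.0 * n_photons)
                      + 8.0 * np.pi * b2 * s2**2 / (a**2 * n_photons**2))
    cp = 1.0 / np.sqrt(np.sum(1.0 / sigma_i**2))   # centroid precision, each axis
    op = 1.0 / np.sqrt(np.sum((r / sigma_i)**2))   # orientation precision (radians)
    return cp, op

# Illustrative inputs loosely based on Table 1 (assumed): ten particles with
# 1e4 to 1e5 photons each and radii of 10 um to 100 um, all lengths in nm.
rng = np.random.default_rng(0)
cp, op = constellation_precision(rng.uniform(1e4, 1e5, 10),
                                 rng.uniform(1e4, 1e5, 10), 286.0, 127.0, 228.0)
```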

Table 3.

Measurement functions and associated uncertainties.

For each measurand, the measurement function is given first, followed by its uncertainty.

Pose (X, Y, Θ):

$X_u$: $\frac{1}{\eta}\sum_{i=1}^{\eta} x_i$; $\frac{1}{\eta}\sqrt{\sum_{i=1}^{\eta}\sigma_i^2}$

$X_w$: $\frac{\sum_{i=1}^{\eta} x_i/\sigma_i^2}{\sum_{i=1}^{\eta} 1/\sigma_i^2}$; $\frac{1}{\sqrt{\sum_{i=1}^{\eta} 1/\sigma_i^2}}$

$\Theta_u$: $\frac{\sum_{i=1}^{\eta} r_i\hat{s}_i(\hat{\gamma}_i-\theta_i)}{\sum_{i=1}^{\eta} r_i\hat{s}_i}$; $\frac{\sqrt{\sum_{i=1}^{\eta}\sigma_i^2 r_i^2}}{\sum_{i=1}^{\eta} r_i^2}$

$\Theta_w$: $\frac{\sum_{i=1}^{\eta} r_i\hat{s}_i(\hat{\gamma}_i-\theta_i)/\sigma_i^2}{\sum_{i=1}^{\eta} r_i\hat{s}_i/\sigma_i^2}$; $\frac{1}{\sqrt{\sum_{i=1}^{\eta}(r_i/\sigma_i)^2}}$

Motion (ΔX, ΔY, ΔΘ):

$\Delta X_u$: $\frac{1}{\eta}\sum_{i=1}^{\eta}(x_{k+1,i}-x_{k,i})$; $\sqrt{\frac{1}{\eta^2}\sum_{i=1}^{\eta}(\sigma_{k,i}^2+\sigma_{k+1,i}^2)}$

$\Delta X_w$: $\frac{\sum_{i=1}^{\eta}\frac{x_{k+1,i}-x_{k,i}}{\sigma_{k,i}^2+\sigma_{k+1,i}^2}}{\sum_{i=1}^{\eta}\frac{1}{\sigma_{k,i}^2+\sigma_{k+1,i}^2}}$; $\frac{1}{\sqrt{\sum_{i=1}^{\eta}\frac{1}{\sigma_{k,i}^2+\sigma_{k+1,i}^2}}}$

$\Delta\Theta_u$: $\frac{\sum_{i=1}^{\eta} r_{k,i}r_{k+1,i}(\theta_{k+1,i}-\theta_{k,i})}{\sum_{i=1}^{\eta} r_{k,i}r_{k+1,i}}$; $\frac{\sqrt{\sum_{i=1}^{\eta}(r_{k+1,i}^2\sigma_{k,i}^2+r_{k,i}^2\sigma_{k+1,i}^2)}}{\sum_{i=1}^{\eta} r_{k,i}r_{k+1,i}}$

$\Delta\Theta_w$: $\frac{\sum_{i=1}^{\eta}\frac{r_{k,i}^2 r_{k+1,i}^2(\theta_{k+1,i}-\theta_{k,i})}{r_{k+1,i}^2\sigma_{k,i}^2+r_{k,i}^2\sigma_{k+1,i}^2}}{\sum_{i=1}^{\eta}\frac{r_{k,i}^2 r_{k+1,i}^2}{r_{k+1,i}^2\sigma_{k,i}^2+r_{k,i}^2\sigma_{k+1,i}^2}}$; $\frac{1}{\sqrt{\sum_{i=1}^{\eta}\frac{r_{k,i}^2 r_{k+1,i}^2}{r_{k+1,i}^2\sigma_{k,i}^2+r_{k,i}^2\sigma_{k+1,i}^2}}}$
Expressions for x-axis pose and motion are isomorphic to those for y-axis pose and motion.

η is the number of point sources used in the measurement; $x_i$, $x_{k,i}$, $x_{k+1,i}$ are the estimated x coordinates of the ith point in the only image, image k, and image k+1; $\sigma_i$, $\sigma_{k,i}$, $\sigma_{k+1,i}$ are the single-axis uncertainties of the ith point in the only image, image k, and image k+1; $r_i$, $r_{k,i}$, $r_{k+1,i}$ are the estimated r coordinates of the ith point in the only image, image k, and image k+1; $\theta_i$, $\theta_{k,i}$, $\theta_{k+1,i}$ are the estimated θ coordinates of the ith point in the only image, image k, and image k+1; $\hat{s}_i$, $\hat{\gamma}_i$ are the true centroid-offset polar coordinates of the ith point; $X_u$, $X_w$ are the uniformly weighted and optimally weighted estimates of x-axis centroid position; $\Theta_u$, $\Theta_w$ are the uniformly weighted and optimally weighted estimates of orientation; $\Delta X_u$, $\Delta X_w$ are the uniformly weighted and optimally weighted estimates of x-axis centroid displacement; $\Delta\Theta_u$, $\Delta\Theta_w$ are the uniformly weighted and optimally weighted estimates of rotation.

4. Numerical validation of uncertainty equations

A Monte Carlo simulation was conducted to validate the measurement uncertainties shown in Table 3. Sets of points were randomly generated, and each set of points was subjected to a randomly generated rigid planar transformation. Two images were synthesized from each set of points, one from the untransformed configuration of the set and one from the transformed configuration. In the synthetic images, each point was represented by a two-dimensional Gaussian intensity function with a randomly generated total number of photons. After adding a randomly generated uniform background photon intensity to the image, the intensity value of each pixel in the image was used as the lambda parameter for generating a value from a Poisson distribution. In this way, each image was constructed to resemble the image of a set of point sources recorded by an ideal sensor (Geist et al., 1982), and to represent isotropic emitters, as opposed to dipole emitters, for which synthetic images can be found elsewhere (Sage et al., 2015). The ranges of the parameters used and other details of the simulation are summarized in Section A5 of the Appendix.
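The image synthesis step might be sketched as follows, assuming an ideal photon-counting sensor and sampling the Gaussian at pixel centres (a simplification; the function and variable names are assumptions):

```python
import numpy as np

def synthesize_image(shape, points, n_photons, sigma_g, b2, rng):
    """Synthesize a Poisson-noise image of isotropic point sources.

    shape     : (rows, cols) of the image in pixels
    points    : (eta, 2) array of true (x, y) positions in pixel units
    n_photons : expected total signal photons from each point source
    sigma_g   : Gaussian PSF standard deviation in pixels
    b2        : expected background photons per pixel
    rng       : numpy random Generator
    """
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    expected = np.full(shape, float(b2))                  # uniform background
    for (x0, y0), n in zip(points, n_photons):
        g = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma_g**2))
        expected += n * g / (2.0 * np.pi * sigma_g**2)    # expected photons per pixel
    return rng.poisson(expected)                          # ideal photon-counting sensor
```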

The positions of the points in each synthetic image were estimated by regression to a bivariate Gaussian with parameters $\{x_0, y_0, \alpha, s, C\}$, where $(x_0, y_0)$ are the coordinates of the Gaussian peak, $\alpha \approx kN/(2\pi\sigma_G^2)$ is the Gaussian amplitude, $s \approx \sigma_G$ is the Gaussian standard deviation, $C \approx kb^2/a^2$ is an offset approximating the background intensity, and $k$ is the ratio of camera pixel intensity counts to photon counts. Poses and motions of the simulated rigid body were calculated from the estimated point positions using the measurement functions in Table 3. For simulated pose measurements, the true positions of the untransformed points were treated as $\hat{P}_0$.
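The localization step could be sketched as a least-squares fit of a symmetric bivariate Gaussian plus offset using SciPy; the initial-guess heuristics below are assumptions, not the procedure of the original study:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_gaussian(roi):
    """Least-squares fit of a symmetric bivariate Gaussian plus offset to a region
    of interest; returns the parameters (x0, y0, alpha, s, C)."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]

    def model(xy, x0, y0, alpha, s, c):
        xx, yy = xy
        return (alpha * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2.0 * s**2)) + c).ravel()

    # Crude initial guesses: brightest pixel for the peak, image statistics for the rest
    y0, x0 = np.unravel_index(np.argmax(roi), roi.shape)
    p0 = [float(x0), float(y0), float(roi.max() - roi.min()), 2.0, float(roi.min())]
    popt, _ = curve_fit(model, (x, y), roi.ravel().astype(float), p0=p0)
    return popt
```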

A set of τ = 200 image pairs was simulated according to the above procedure. Each pair was simulated ω = 2500 times for a total of 1 million images. For each of the 12 measurands, $M_i$, of Table 3, the standard deviation of the set of estimates is the simulated uncertainty of the measurement, $U_{sim}(M_i)$. The calculated uncertainty, $U_{calc}(M_i)$, is determined from the associated uncertainty equation in Table 3. For each measurand, the normalized root mean square residual is defined as $\sqrt{\frac{1}{\tau}\sum_{i=1}^{\tau}\left(\frac{U_{sim}(M_i) - U_{calc}(M_i)}{U_{calc}(M_i)}\right)^2}$, the residual bias is defined as $\frac{1}{\tau}\sum_{i=1}^{\tau}\frac{U_{sim}(M_i) - U_{calc}(M_i)}{U_{calc}(M_i)}$, and the coefficient of determination is defined as $R^2 = 1 - \frac{\sum_{i=1}^{\tau}\left(U_{sim}(M_i) - U_{calc}(M_i)\right)^2}{\sum_{i=1}^{\tau}\left(U_{sim}(M_i) - \frac{1}{\tau}\sum_{j=1}^{\tau} U_{sim}(M_j)\right)^2}$. All of the normalized root mean square residuals were less than 2.5 %, the absolute values of all of the residual biases were less than 0.8 %, and the R² values of all of the measurands were greater than 0.996, indicating good agreement between the calculated and simulated uncertainties.
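The comparison statistics can be computed directly from the paired simulated and calculated uncertainties, as in this sketch (array names are illustrative):

```python
import numpy as np

def validation_statistics(u_sim, u_calc):
    """Normalized RMS residual, residual bias, and coefficient of determination
    between simulated and calculated uncertainties of one measurand."""
    u_sim = np.asarray(u_sim, dtype=float)
    u_calc = np.asarray(u_calc, dtype=float)
    resid = (u_sim - u_calc) / u_calc                  # normalized residuals
    nrms = np.sqrt(np.mean(resid**2))
    bias = np.mean(resid)
    ss_res = np.sum((u_sim - u_calc)**2)
    ss_tot = np.sum((u_sim - u_sim.mean())**2)
    return nrms, bias, 1.0 - ss_res / ss_tot
```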

Normal probability plots (not shown) indicated that the residuals were normally distributed. Graphical residual analysis demonstrated good randomness in the residuals with respect to the calculated uncertainties and all model parameters except for the standard deviation of the Gaussian point spread function, σG. A slight positive relationship between σG and the normalized residual of each measurand was fit to a linear regression model. For each measurand, the slope of this trend line was less than 0.5 % per pixel of σG. This systematic difference between the analytical calculations and the Monte Carlo calculations may have been due to truncation of wide Gaussians, since the region of interest employed in calculating each point source position had a width of 30 pixels. The largest simulated Gaussians with σG=6 were therefore truncated at ±2.5σG. The systematic variation between the analytical and Monte Carlo models due to this effect was only 3% across the full domain of point spread functions tested.

5. Experimental pose and motion measurements

The utility of the derived measurement functions and associated uncertainties was demonstrated by analysis of the uncertainties from experimental measurements of the motion of a microelectromechanical system (MEMS). This particular MEMS (Oak et al., 2011) took the form of a goniometer, articulated by a chevron-type electrothermal actuator (Sinclair, 2000; Baker et al., 2004), with an indicator needle rotating around a pivot, pointing to a dial gauge with graduations at increments of 17.5 mrad (1°) for readout, as shown in Figure 2. The indicator needle was labelled with subresolution fluorescent nanoparticles, and the system was imaged using a widefield epifluorescence microscope equipped with a light emitting diode for excitation at approximately 630 nm, an objective lens with a nominal magnification of 50× and a numerical aperture of 0.55, and a complementary metal–oxide–semiconductor camera with a nominal pixel size of 6.5 μm × 6.5 μm for detection at approximately 660 nm. A region of the imaging sensor of 812 pixels × 1856 pixels was used, resulting in a field of view of 103 μm × 236 μm. Representative parameters of the experimental measurement system that are relevant to the calculation of uncertainties are given in Table 1.

Fig. 2.

(Top) Optical brightfield micrograph showing a microelectromechanical system (MEMS) in the form of a goniometer, with an electrothermal actuator linked to an indicator needle. Linear actuation of the electrothermal actuator resulted in rotary motion of the indicator needle around a pivot. Coarse measurement of rotation was enabled by a graduated dial gauge. A region of interest is indicated by a white box. (Middle) Optical fluorescence micrograph showing a constellation of fluorescent nanoparticles on the indicator needle in the region of interest. (Bottom) Optical fluorescence micrographs showing four of the nanoparticles that label the indicator needle, appearing as the point spread function of the imaging system. The nanoparticles are ordered from left to right by increasing numbers of detected signal photons (N), which vary due to polydispersity in nanoparticle size and heterogeneity in illumination intensity. A larger number of detected signal photons results in a lower measurement uncertainty, motivating weighting of the contributions of individual nanoparticles to the overall measurement.

Table 1.

Representative parameters of the experimental measurement system.

Parameter   Value        Units              Subsystem
a           127          nm                 Microscope
b²          228          photons per pixel  Microscope
σ_G         286          nm                 Microscope
N_i         10⁴ to 10⁵   photons            Particles
r_i         10 to 100    μm                 Particles

To evaluate the uncertainties of position measurements of individual fluorescent nanoparticles, 2000 sequential fluorescence micrographs of the labelled indicator needle were recorded at a rate of 8 Hz in the absence of intended motion. Images were processed after the experiment at a rate of 2.5 Hz, including target detection, registration, Gaussian estimation and motion calculation. The x and y position of each nanoparticle was measured by least squares Gaussian estimation from the image data, which closely approximates maximum likelihood estimation at the large numbers of detected signal photons in this experiment. The corresponding uncertainty of each coordinate of the position of each nanoparticle was determined from the root mean square displacement of the nanoparticle between successive images divided by $\sqrt{2}$, which is equivalent to a pooled standard deviation of successive image pairs (ISO Technical Advisory Group 4, 1995). The position uncertainties of individual nanoparticles were then used to calculate predicted uncertainties for motion measurements of the nominally rigid indicator needle, according to the uncertainty expressions of Table 3.
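The per-coordinate position uncertainty of a single nanoparticle might be estimated from such a stationary image sequence as follows (a minimal sketch; the function name is an assumption):

```python
import numpy as np

def position_uncertainty(x):
    """Single-axis position uncertainty of one nanoparticle from a sequence of
    localizations recorded in the absence of intended motion.

    x : 1D array of the x (or y) coordinate of the particle in successive images
    """
    dx = np.diff(x)                      # displacements between successive images
    rms = np.sqrt(np.mean(dx**2))        # root mean square displacement
    return rms / np.sqrt(2.0)            # pooled single-image uncertainty
```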

Empirical uncertainties for motion measurements of the indicator needle were then estimated from the apparent motion of the needle between each pair of successive images, due to vibration, drift, or other non-ideal behaviour, as well as photon shot noise described by Eq. (1). The motion of the indicator needle between each successive pair of images was estimated using the expressions of Table 3, after establishing a correspondence between the points in each image (Besl & McKay, 1992; Pennec & Thirion, 1997; Zitova & Flusser, 2003). The empirical uncertainty of each coordinate of this motion was calculated as the standard deviation of the set of estimates of the corresponding coordinate. The resulting empirical uncertainties are compared in Table 2 with the uncertainties predicted by the expressions of Table 3, and also with the minimum uncertainties given by the centroid and orientation precision.

Table 2.

Measurement uncertainties of the planar motion of a MEMS goniometer.

Uncertainty   ΔX_u (pixels)   ΔX_w (pixels)   ΔY_u (pixels)   ΔY_w (pixels)   ΔΘ_u (μrad)   ΔΘ_w (μrad)
              uniform         optimal         uniform         optimal         uniform       optimal
Predicted     0.0093          0.0081          0.0096          0.0081          21.4          18.5
Empirical     0.0101          0.0093          0.0107          0.0097          23.4          18.9
Minimum       0.0031          0.0030          0.0031          0.0030          6.7           6.5

The predicted and empirical uncertainties agree to within 2.9–16.5%. This near equality of uncertainties validates the assumption of planar rigid motion, as violation of this assumption would lead to much higher empirical uncertainties. In contrast, the predicted and empirical uncertainties are approximately a factor of three greater than the minimum uncertainties defined by Eq. (4). This is a useful result, indicating that the limiting factor of the experimental measurement was not one of the parameters modelled by the measurement functions.

Such factors might include vibration and drift of the microscope system, or fixed pattern noise of the complementary metal–oxide–semiconductor camera. The empirical uncertainty might therefore be improved by up to a factor of three by additional investigation and mitigation of such factors, for example, by drift correction (Lee et al., 2012; Elmokadem & Yu, 2015) or camera calibration (Huang et al., 2013). If the empirical uncertainties and predicted uncertainties were then to become approximately equal to the associated minimum uncertainties, the signal-to-noise ratio would be the limiting factor of the measurement. Only at this point would the detection of more signal photons, for example, be useful to further reduce the measurement uncertainty. Such comparison of measured to minimum uncertainties provides practical guidance for ongoing improvement of the experimental measurement system.

Once the uncertainty of the rotation measurement was established, the repeatability of the MEMS rotation was tested. The actuator was driven alternately with an applied voltage of 0 and 5 V for 1000 cycles, which repeatedly rotated the indicator needle by an angle of ±5.00 mrad (±0.287°). Fluorescence micrographs were taken after each motion, and the rotation of the indicator needle was determined using the optimally weighted and uniformly weighted estimators, ΔΘw and ΔΘu. The rotational repeatability of the indicator needle, as determined by the standard deviation of the motion measurement, was 18.9 μrad – the same as the empirical uncertainty – indicating that the observed repeatability was dominated by measurement uncertainty. Therefore, the repeatability of the MEMS may have been even better than indicated by this measurement. This is an interesting result, considering the sliding contact in the linkage coupling the electrothermal actuator and indicator needle. The motions of such systems can be complex, as will be investigated in future studies.

6. Future directions

The measurement functions and associated uncertainties presented in this paper have expressed the physical lower bounds of uncertainty of measurements of planar pose and motion, in terms of the parameters of the experimental measurement system used to perform localization microscopy. The direct calculation of these minimum uncertainties was facilitated by the existing theoretical basis of localization precision. This is an important distinction between the mathematical expressions presented here, and many generic algorithms for image registration (Zitova & Flusser, 2003). Although measurement uncertainties derived from such algorithms are beginning to be addressed (Fitzpatrick & West, 2001; Simonson et al., 2007), a similar physical basis for uncertainty is not ordinarily available for such generic algorithms. Future work might trace the uncertainties of generic algorithms to a physical basis in the quantization of light.

The measurement functions and associated uncertainties presented in this paper apply to the broadly relevant case of the pose and motion of rigid bodies within the image plane of an optical microscope. However, in some applications of localization microscopy, microscopic rigid bodies exhibit a component of motion out of the image plane. Such motion may lead to systematic effects that are not modelled here, and would be indicated by a discrepancy between the empirical and predicted uncertainties. The effects of such motion will be addressed in future work by a generalization of the image mapping from rigid to affine transformations.

The measurement functions, associated uncertainties, and experimental analysis presented in this paper are predicated on the assumption stated in Section 2 that errors in position estimates of point sources are random and independent. However, this assumption could be invalidated by some motions of optical microscopes, such as drift and vibration, which could result in correlated errors in position estimates of multiple point sources. Related effects will be addressed in future work.

Finally, the experimental measurements in this paper were performed on a microfabricated device that implemented motion as an engineered function. The measurement functions and associated uncertainties presented in this paper are similarly applicable in different experimental contexts, for example, reference measurements of multiple point sources in a fiducial constellation for investigation and correction of errors from unintended motion of an optical microscope.

7. Conclusions

This paper has extended the concept of localization precision from optical point sources to microscopic rigid bodies in the imaging plane of a widefield microscope. This has established a firm foundation for related measurements involving localization microscopy. A complete set of measurement functions in closed form, along with the associated set of measurement uncertainties, was calculated for the planar pose and motion of rigid bodies indicated by a sparse constellation of multiple point sources of light in an invariant configuration. Physical limits on the minimum uncertainties, termed centroid precision and orientation precision, were expressed in terms of the characteristic properties of the optical measurement system. These measurement functions and uncertainties were numerically validated by Monte Carlo simulation. The utility of the expressions was demonstrated by analysis of the empirical uncertainty of motion measurements of a microelectromechanical goniometer. Because of the generality of centroid precision and orientation precision, and the widespread interest in super-resolution imaging and particle tracking, these innovations are broadly applicable to designing measurements for low uncertainty, interpreting the significance of measurement uncertainties, and identifying sources of uncertainty in measurement systems.

Acknowledgements

This research was performed in the Physical Measurement Laboratory and the Center for Nanoscale Science and Technology at the National Institute of Standards and Technology (NIST). The authors acknowledge support of this research under the NIST Innovations in Measurement Science Program. C.R.C. acknowledges support of this research under the Cooperative Research Agreement between the University of Maryland and the National Institute of Standards and Technology Center for Nanoscale Science and Technology, Award 70NANB10H193, through the University of Maryland.

Appendix

Detailed descriptions of measurement functions, associated uncertainties, analytical derivations, and numerical simulations are presented in this appendix.

A1. Measurement functions for planar motion of a rigid body

If a sparse constellation of multiple point sources in an invariant configuration is located in or on a rigid body, then the planar pose of the body can be determined from the positions of the point sources. Similarly, the motion of the body can be determined from the change in the positions of the point sources resulting from the motion. Measurement functions for each coordinate of pose and motion, reflecting uniform weighting of the point sources and optimal weighting of the point sources, along with associated uncertainties for the measurements, are presented in Table 3. Derivations and discussion of the measurement functions are presented in the remainder of this section. The associated uncertainties are derived and discussed in Section A2.

A1.1. Pose measurement functions.

Let $\hat{P}=\{\hat{p}_1,\ldots,\hat{p}_\eta\}$ be the true positions of a set of η point sources on a rigid body and let $\hat{P}_0=\{\hat{p}_{0,1},\ldots,\hat{p}_{0,\eta}\}$ be the set of true positions of those point sources when an arbitrarily defined coordinate system intrinsic to the body is aligned with the (x, y) axes of the measurement. The true pose of the body is defined by the proper rigid planar transform, $\hat{T}_P(v)$, over (x, y) vectors in the Cartesian plane, that maps $\hat{P}_0$ to $\hat{P}$. The transform $\hat{T}_P(v)$ can be expressed in terms of three scalar parameters as follows:

\hat{T}_P(v) = \begin{bmatrix} \cos\hat{\Theta} & -\sin\hat{\Theta} \\ \sin\hat{\Theta} & \cos\hat{\Theta} \end{bmatrix}\left(v - (\hat{X}, \hat{Y})\right) + (\hat{X}, \hat{Y}), \quad (A1)

where the triple, $(\hat{X}, \hat{Y}, \hat{\Theta})$, is an equivalent expression of the true object pose, with $\hat{X}$ corresponding to the x-axis position, $\hat{Y}$ to the y-axis position, and $\hat{\Theta}$ to the orientation.

Let $P=\{p_1,\ldots,p_\eta\}$ be a set of estimates of the positions in $\hat{P}$. Estimates of the object pose can be calculated from $P$ as follows:

X = \sum_{i=1}^{\eta} w_{Xi}\, x_i, \qquad
Y = \sum_{i=1}^{\eta} w_{Yi}\, y_i, \qquad
\Theta = \frac{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i \hat{s}_i (\hat{\gamma}_i - \theta_i)}{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i \hat{s}_i} \quad (A2)

where X, Y and Θ are estimates of $\hat{X}$, $\hat{Y}$ and $\hat{\Theta}$, respectively; $x_i$ and $y_i$ are the Cartesian coordinates of $p_i$; $r_i$ and $\theta_i$ are the polar coordinates of $p_i$ with respect to an origin at the (weighted) centroid of $P$; $\hat{s}_i$ and $\hat{\gamma}_i$ are the polar coordinates of $\hat{p}_{0,i}$ with respect to an origin at the (weighted) centroid of $\hat{P}_0$; and the sets $W_X=\{w_{Xi} : i \in 1..\eta\}$, $W_Y=\{w_{Yi} : i \in 1..\eta\}$ and $W_\Theta=\{w_{\Theta i} : i \in 1..\eta\}$ are weights applied to the measurements.

A1.2. Motion measurement functions.

If two images bracket a motion of a planar rigid body, then the motion can be determined from the change in the estimated positions of corresponding point sources in the two images, as determined by localization microscopy. Let $\hat{P}_k=\{\hat{p}_{k,1},\ldots,\hat{p}_{k,\eta}\}$ be the true positions of a set of η point sources on a rigid body prior to some motion, and let $\hat{P}_{k+1}=\{\hat{p}_{k+1,1},\ldots,\hat{p}_{k+1,\eta}\}$ be the set of true positions of those point sources after the motion. The motion of the body is defined by the proper rigid planar transform, $\hat{T}_{k,k+1}(v)$, over (x, y) vectors in the Cartesian plane, that maps $\hat{P}_k$ to $\hat{P}_{k+1}$. The transform $\hat{T}_{k,k+1}(v)$ can be expressed in terms of three scalar parameters as follows:

\hat{T}_{k,k+1}(v) = \begin{bmatrix} \cos\Delta\hat{\Theta} & -\sin\Delta\hat{\Theta} \\ \sin\Delta\hat{\Theta} & \cos\Delta\hat{\Theta} \end{bmatrix}\left(v - (\Delta\hat{X}, \Delta\hat{Y})\right) + (\Delta\hat{X}, \Delta\hat{Y}), \quad (A3)

where $(\Delta\hat{X}, \Delta\hat{Y}, \Delta\hat{\Theta})$ is an equivalent expression of the true motion of the object, with $\Delta\hat{X}$, $\Delta\hat{Y}$ and $\Delta\hat{\Theta}$ corresponding to the x-axis displacement, y-axis displacement and rotation, respectively.

Let $P_k=\{p_{k,1},\ldots,p_{k,\eta}\}$ be a set of estimates of the positions in $\hat{P}_k$, and let $P_{k+1}=\{p_{k+1,1},\ldots,p_{k+1,\eta}\}$ be a set of estimates of the positions in $\hat{P}_{k+1}$. Estimates of the motion of the object can be calculated from $P_k$ and $P_{k+1}$ as follows:

\Delta X = \sum_{i=1}^{\eta} w_{\Delta X i}\,(x_{k+1,i} - x_{k,i}), \qquad
\Delta Y = \sum_{i=1}^{\eta} w_{\Delta Y i}\,(y_{k+1,i} - y_{k,i}), \qquad
\Delta\Theta = \frac{\sum_{i=1}^{\eta} w_{\Delta\Theta i}\, r_{k+1,i} r_{k,i}\,(\theta_{k+1,i} - \theta_{k,i})}{\sum_{i=1}^{\eta} w_{\Delta\Theta i}\, r_{k+1,i} r_{k,i}}, \quad (A4)

where ΔX, ΔY and ΔΘ are estimates of $\Delta\hat{X}$, $\Delta\hat{Y}$ and $\Delta\hat{\Theta}$, respectively; $(x_{k,i}, y_{k,i})$ and $(x_{k+1,i}, y_{k+1,i})$ are the Cartesian coordinates of $p_{k,i}$ and $p_{k+1,i}$; $(r_{k,i}, \theta_{k,i})$ and $(r_{k+1,i}, \theta_{k+1,i})$ are the polar coordinates of $p_{k,i}$ and $p_{k+1,i}$ with respect to origins at the (weighted) centroids of $P_k$ and $P_{k+1}$, respectively; and the sets $W_{\Delta X}=\{w_{\Delta X i} : i \in 1..\eta\}$, $W_{\Delta Y}=\{w_{\Delta Y i} : i \in 1..\eta\}$ and $W_{\Delta\Theta}=\{w_{\Delta\Theta i} : i \in 1..\eta\}$ are weights applied to the measurements.

A1.3. Derivation of the orientation measurement function.

The expression for Θ in Eq. (A2) is derived by finding the value of Θ that minimizes the weighted sum of squared errors between $P$ and $T_P(\hat{P}_0)$:

E(\alpha) = \sum_{i=1}^{\eta} w_{\Theta i}\,\varepsilon_i^2 = \sum_{i=1}^{\eta} w_{\Theta i}\left(r_i^2 + \hat{s}_i^2 - 2 r_i \hat{s}_i \cos\!\left(\hat{\gamma}_i - (\theta_i + \alpha)\right)\right), \quad (A5)

where $(r_i, \theta_i)$ are the polar coordinates of the ith point of $P$ relative to the centroid $(X, Y)$ and $(\hat{s}_i, \hat{\gamma}_i)$ are the polar coordinates of the corresponding point in $\hat{P}_0$. The offset, α, is the free variable for the optimization and represents a test rotation between the two sets of points. The error, $\varepsilon_i$, is the distance between $\hat{p}_{0,i}$ and the point found by rotating $p_i$ by α, and $w_{\Theta i}$ is the weight applied to the squared error $\varepsilon_i^2$. The optimal choice of rotation determined by this method, Θ, is found by minimizing E:

\Theta = \left[\alpha : \frac{dE}{d\alpha} = 0\right]
= \left[\alpha : \sum_{i=1}^{\eta} 2 w_{\Theta i}\, r_i \hat{s}_i \sin\!\left(\hat{\gamma}_i - (\theta_i + \alpha)\right) = 0\right]
= \left[\alpha : \sum_{i=1}^{\eta} 2 w_{\Theta i}\, r_i \hat{s}_i \left(\sin(\hat{\gamma}_i - \theta_i)\cos\alpha - \cos(\hat{\gamma}_i - \theta_i)\sin\alpha\right) = 0\right]. \quad (A6)

The optimal value of α can then be found by solving Eq. (A6), which yields:

\Theta = \alpha = \arctan\frac{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i \hat{s}_i \sin(\hat{\gamma}_i - \theta_i)}{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i \hat{s}_i \cos(\hat{\gamma}_i - \theta_i)}. \quad (A7)

Alternatively, the residuals $(\hat{\gamma}_i - (\theta_i + \alpha))$ in Eq. (A6) can be treated with a small angle approximation, yielding the simpler expression:

\Theta = \alpha = \frac{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i \hat{s}_i (\hat{\gamma}_i - \theta_i)}{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i \hat{s}_i} = \frac{\beta}{\lambda}, \quad (A8)

where the sums β and λ are implicitly defined by the numerator and denominator to simplify use of the expression in the next section.

A2. Uncertainties of motion measurements by localization microscopy

Given uncertainties, $(\sigma_{xi}, \sigma_{yi})$, of the position of each point source $p_i \in P$, the combined standard uncertainty of each component of the pose of $P$ can be calculated from the associated measurement function using the law of propagation of uncertainty (ISO Technical Advisory Group 4, 1995). Here the pose and motion uncertainties are calculated for the common case in which $\sigma_i = \sigma_{xi} = \sigma_{yi}$ for each value of $i$.

A2.1. Position uncertainty.

The x-axis position uncertainty of a pose measurement by localization microscopy can be calculated from the x-axis position uncertainties of the individual points, $S_x=\{\sigma_i : i \in 1..\eta\}$, and the associated weights, $W_X$, using the law of propagation of uncertainty as follows:

U(X) = \sqrt{\sum_{i=1}^{\eta}\left(\frac{dX}{dx_i}\right)^2 \left(U(x_i)\right)^2} = \sqrt{\sum_{i=1}^{\eta} w_{Xi}^2\,\sigma_i^2}. \quad (A9)

The expression for U(Y) is isomorphic to U(X).

A2.2. Orientation uncertainty.

With Eq. (A8) defining the measured value of Θ, the measurement uncertainty associated with Θ can be calculated from the law of propagation of uncertainty (ISO Technical Advisory Group 4, 1995) as follows:

(U(\Theta))^2 = \sum_{i=1}^{\eta}\left[\left(\frac{d\Theta}{d\theta_i} U(\theta_i)\right)^2 + \left(\frac{d\Theta}{d\hat{\gamma}_i} U(\hat{\gamma}_i)\right)^2 + \left(\frac{d\Theta}{d r_i} U(r_i)\right)^2 + \left(\frac{d\Theta}{d\hat{s}_i} U(\hat{s}_i)\right)^2\right]
\approx \sum_{i=1}^{\eta}\left[\left(\frac{w_{\Theta i}\, r_i \hat{s}_i}{\lambda} U(\theta_i)\right)^2 + \left(\left(\frac{w_{\Theta i}\,\hat{s}_i(\hat{\gamma}_i - \theta_i)}{\lambda} - \frac{w_{\Theta i}\,\hat{s}_i\,\beta}{\lambda^2}\right) U(r_i)\right)^2\right], \quad (A10)

where $U(\Theta)$ is the uncertainty of Θ, $U(\theta_i)$ is the uncertainty of $\theta_i$, $U(r_i)$ is the uncertainty of $r_i$, $U(\hat{\gamma}_i)$ is the uncertainty of $\hat{\gamma}_i$, $U(\hat{s}_i)$ is the uncertainty of $\hat{s}_i$, and β and λ are defined in Eq. (A8). Since $\hat{\gamma}_i$ and $\hat{s}_i$ are true point coordinates, $U(\hat{\gamma}_i)$ and $U(\hat{s}_i)$ are both zero.

$U(\theta_i)$ and $U(r_i)$ can be derived from the position uncertainties, $U(x_i) = U(y_i) = \sigma_i$, again by using the law of propagation of uncertainty:

U(\theta_i) = \frac{\sigma_i}{r_i}, \qquad U(r_i) = \sigma_i. \quad (A11)

The uncertainty of the rotation can then be calculated as:

(U(\Theta))^2 \approx \frac{1}{\lambda^2}\sum_{i=1}^{\eta}\left(w_{\Theta i}\,\hat{s}_i\,\sigma_i\right)^2\left(1 + (\hat{\gamma}_i - \theta_i - \Theta)^2\right). \quad (A12)

Noting that the residuals, $(\hat{\gamma}_i - \theta_i - \Theta)$, are far smaller than unity and that the centroid distance errors, $(r_i - \hat{s}_i)$, are far smaller than the centroid distances themselves in all but degenerate cases, the orientation uncertainty can then be approximated as:

U(\Theta) \approx \frac{\sqrt{\sum_{i=1}^{\eta}\left(w_{\Theta i}\, r_i\,\sigma_i\right)^2}}{\sum_{i=1}^{\eta} w_{\Theta i}\, r_i^2}. \quad (A13)

It is evident from Eq. (A13) that the orientation uncertainty depends neither on the choice of the (x, y, θ) origin, nor on the orientation, Θ, but only on the weight, the position uncertainty, and the distance from the centroid of each point in P.

A3. Minimum uncertainty weights

An optimal weighting, $W_M$, for each measurand, $M \in \{X, Y, \Theta, \Delta X, \Delta Y, \Delta\Theta\}$, can be determined from the estimated coordinates of the individual point sources and their associated uncertainties, such that the uncertainty of the measurand is minimized. The calculations for optimally weighted measurement functions are presented below, and the resulting uncertainties are presented in Table 3. The optimal weight calculations for pose are presented in Sections A3.1 and A3.2; those for motion are presented in Section A4.1.

A3.1. Optimal weighting for position.

Weightings for measurements of the position of a rigid body are restricted to those in which the weights sum to unity, to avoid scaling the measurand. The method of Lagrange is utilized to minimize the uncertainties of the weighted position calculated in Section 2, subject to the normalization constraint.

Consider the square of the x-axis position uncertainty of Eq. (A9):

U^2(X) = \sum_{i=1}^{\eta} w_{Xi}^2\,\sigma_i^2. \quad (A14)

To minimize the squared uncertainty subject to the constraint $\sum_{i=1}^{\eta} w_{Xi} = 1$, a Lagrangian is constructed as follows:

\Lambda(w_{X1},\ldots,w_{X\eta},\lambda) = \left(\sum_{i=1}^{\eta} w_{Xi}^2\,\sigma_i^2\right) + \lambda\left(\left(\sum_{i=1}^{\eta} w_{Xi}\right) - 1\right), \quad (A15)

where Λ is the Lagrangian and λ the Lagrange multiplier. The zeros of the partial derivatives of Λ with respect to WX, coupled with the normalization constraint, provide the following system of η+1 equations:

w_{Xi} = -\frac{\lambda}{2\sigma_i^2}, \qquad \sum_{i=1}^{\eta} w_{Xi} = 1, \quad (A16)

which yields uniquely optimal weights:

w_{Xi} = \frac{1}{\sigma_i^2\,\sum_{j=1}^{\eta}\frac{1}{\sigma_j^2}}. \quad (A17)

Substituting Eq. (A17) into Eq. (A9) produces the minimum uncertainty in x-axis position that is achievable:

U(X) = \frac{1}{\sqrt{\sum_{i=1}^{\eta}\frac{1}{\sigma_i^2}}} \quad (A18)

The weighting for minimum uncertainty of y-axis position is isomorphic.
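As a quick numerical check of this result, the following sketch compares the uncertainty of Eq. (A9) under the inverse-variance weights of Eq. (A17) with that under randomly drawn normalized weightings; the test values are arbitrary:

```python
import numpy as np

# Numerical check that the inverse-variance weights of Eq. (A17) minimize the
# position uncertainty of Eq. (A9); the sigma values are arbitrary test inputs.
rng = np.random.default_rng(1)
sigma = rng.uniform(0.5, 3.0, 8)

w_opt = (1.0 / sigma**2) / np.sum(1.0 / sigma**2)       # Eq. (A17)
u_opt = np.sqrt(np.sum(w_opt**2 * sigma**2))            # Eq. (A9), equals Eq. (A18)

for _ in range(1000):                                   # random normalized weightings
    w = rng.random(8)
    w /= w.sum()
    assert np.sqrt(np.sum(w**2 * sigma**2)) >= u_opt - 1e-12
```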

A3.2. Optimal weighting for orientation.

Since the orientation measurement function is invariant to a uniform scaling of the weights, the method of Lagrange need not be used, and a family of weightings can be determined, all of which are optimal with respect to the orientation uncertainty. For simplicity, the unique normalized optimal weighting is selected.

Consider the square of the orientation uncertainty of Eq. (A13):

U^2(\Theta) \approx \frac{\sum_{i=1}^{\eta}\left(w_{\Theta i}\, r_i\,\sigma_i\right)^2}{\left(\sum_{i=1}^{\eta} w_{\Theta i}\, r_i^2\right)^2}. \quad (A19)

Optimal weightings occur where the gradient of $U^2(\Theta)$ with respect to the set of $w_{\Theta i}$ is zero. Each component of the gradient can be expressed as:

\frac{\partial U^2(\Theta)}{\partial w_{\Theta i}} \approx \frac{\partial}{\partial w_{\Theta i}}\left[\frac{\sum_{j=1}^{\eta}\left(w_{\Theta j}\, r_j\,\sigma_j\right)^2}{Z^2}\right], \qquad Z = \sum_{j=1}^{\eta} w_{\Theta j}\, r_j^2. \quad (A20)

The partials can then be calculated as follows:

\frac{\partial U^2(\Theta)}{\partial w_{\Theta i}} \approx \frac{2\, w_{\Theta i}\, r_i^2\,\sigma_i^2}{Z^2} - \frac{2\, r_i^2\,\sum_{j=1}^{\eta}\left(w_{\Theta j}\, r_j\,\sigma_j\right)^2}{Z^3}, \quad (A21)

which at their roots yield a unique set of optimal normalized weights:

w_{\Theta i} = \frac{1}{\sigma_i^2\,\sum_{j=1}^{\eta}\frac{1}{\sigma_j^2}}. \quad (A22)

Substituting Eq. (A22) into Eq. (A13) produces the minimum uncertainty in orientation:

U(\Theta) = \frac{1}{\sqrt{\sum_{i=1}^{\eta}\left(\frac{r_i}{\sigma_i}\right)^2}}. \quad (A23)
Table A1.

Parameters of the Monte Carlo simulation.

For each parameter, the units, domain, and probability distribution are as follows.

$\sigma_G$ (pixels): $\{\sigma_G \in \mathbb{R} : 1 \le \sigma_G \le 6\}$; $f(\sigma_G) = 1/5$
$b^2$ (photons per pixel): $\{b^2 \in \mathbb{Z} : 1 \le b^2 \le 250\}$; $\Pr(b^2) = (b^2 \ln 10)^{-1}(\log_{10} 250)^{-1}$
$\eta$ (points): $\{\eta \in \mathbb{Z} : 3 \le \eta \le 20\}$; $\Pr(\eta) = 1/18$
$N_\mu$ (photons): $\{N_\mu \in \mathbb{Z} : 10^3 \le N_\mu \le 10^8\}$; $\Pr(N_\mu) = 1/(5 N_\mu \ln 10)$
$N_S$ (photons): $\{N_S \in \mathbb{Z} : -0.9 N_\mu \le N_S \le 0.9 N_\mu\}$; $\Pr(N_S) = 1/(1.8 N_\mu + 1)$
$N_i$ (photons): $\{N_i \in \mathbb{Z} : N_\mu - \tfrac{1}{2} N_S \le N_i \le N_\mu + \tfrac{1}{2} N_S\}$; $\Pr(N_i) = 1/(N_S + 1)$
$\hat{r}_i$ (pixels): $\{\hat{r}_i \in \mathbb{R} : 0 \le \hat{r}_i \le 150\}$; $f(\hat{r}_i) = 2\hat{r}_i/22\,500$
$\hat{\theta}_i$ (radians): $\{\hat{\theta}_i \in \mathbb{R} : 0 \le \hat{\theta}_i \le 2\pi\}$; $f(\hat{\theta}_i) = 1/(2\pi)$
$\Delta\hat{X}$ (pixels): $\{\Delta\hat{X} \in \mathbb{R} : -50 \le \Delta\hat{X} \le 50\}$; $f(\Delta\hat{X}) = 1/100$
$\Delta\hat{Y}$ (pixels): $\{\Delta\hat{Y} \in \mathbb{R} : -50 \le \Delta\hat{Y} \le 50\}$; $f(\Delta\hat{Y}) = 1/100$
$\Delta\hat{\Theta}$ (radians): $\{\Delta\hat{\Theta} \in \mathbb{R} : 0 \le \Delta\hat{\Theta} \le 2\pi\}$; $f(\Delta\hat{\Theta}) = 1/(2\pi)$

A4. Motion uncertainty

Uncertainties of motion measurements can be calculated from the measurement functions of Eq. (A4) in the same way that uncertainties of pose measurements were calculated from Eq. (A2) in Section A2, resulting in the following expressions:

U(\Delta X) = \sqrt{\sum_{i=1}^{\eta} w_{\Delta X i}^2\left(\sigma_{k,i}^2 + \sigma_{k+1,i}^2\right)}, \qquad
U(\Delta Y) = \sqrt{\sum_{i=1}^{\eta} w_{\Delta Y i}^2\left(\sigma_{k,i}^2 + \sigma_{k+1,i}^2\right)}, \qquad
U(\Delta\Theta) \approx \frac{\sqrt{\sum_{i=1}^{\eta} w_{\Delta\Theta i}^2\left(r_{k+1,i}^2\,\sigma_{k,i}^2 + r_{k,i}^2\,\sigma_{k+1,i}^2\right)}}{\sum_{i=1}^{\eta} w_{\Delta\Theta i}\, r_{k,i}\, r_{k+1,i}}. \quad (A24)

In cases where the position uncertainties of the points are the same in the two images that bracket the measured motion, the above motion uncertainties are simply a factor of $\sqrt{2}$ greater than the corresponding pose uncertainties.

A4.1. Optimal weightings for motion measurements.

Optimal weightings for motion measurements can be calculated in the same way as those for pose measurements described in Section A3, resulting in the following expressions:

U(\Delta X) = U(\Delta Y) = \frac{1}{\sqrt{\sum_{i=1}^{\eta}\frac{1}{\sigma_{k,i}^2 + \sigma_{k+1,i}^2}}}, \qquad
U(\Delta\Theta) = \frac{1}{\sqrt{\sum_{i=1}^{\eta}\frac{r_{k,i}^2\, r_{k+1,i}^2}{r_{k+1,i}^2\,\sigma_{k,i}^2 + r_{k,i}^2\,\sigma_{k+1,i}^2}}}. \quad (A25)

A5. Range of simulation parameters

A total of 200 configurations were randomly generated for the Monte Carlo method described above. Each configuration consisted of a set of parameters, $\{\sigma_G, b^2, \eta, N, \hat{P}, \Delta\hat{X}, \Delta\hat{Y}, \Delta\hat{\Theta}\}$, where $\sigma_G$ is the standard deviation of the Gaussian intensity function used to approximate the image of each point source, $b^2$ is the expected number of background photons detected at each pixel, $\eta$ is the number of point sources, each $N_i$ of $N=\{N_1,\ldots,N_\eta\}$ is the expectation value of the number of photons detected from the ith point source, $\hat{P}$ is the set of untransformed points, and the triple $(\Delta\hat{X}, \Delta\hat{Y}, \Delta\hat{\Theta})$ denotes the true transform of the points from the first image to the second. The parameters $\sigma_G$, $\eta$, $\Delta\hat{X}$, $\Delta\hat{Y}$ and $\Delta\hat{\Theta}$ are generated from uniform distributions, whereas the distribution from which $b^2$ is generated is logarithmically scaled. The η points $\hat{p}_i \in \hat{P}$ are generated as polar coordinates $\hat{p}_i=(\hat{r}_i, \hat{\theta}_i)$, where $\hat{\theta}_i$ is generated from a uniform distribution over the range 0 to 2π, and $\hat{r}_i$ is generated over the range 0 to 150 pixels from a distribution that is quadratically scaled to ensure that the points are distributed uniformly over a circle of diameter 300 pixels. Each of the η photon counts in $N$ is generated from a uniform distribution over a range of width $N_S$ centred on a mean value, $N_\mu$, that is generated from a logarithmic distribution. The domain and probability distribution of each parameter are shown in Table A1.
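One way to draw a configuration from the distributions of Table A1 is sketched below; the handling of negative $N_S$ (using the half-width $|N_S|/2$) and the integer rounding of the log-scaled draws are simplifying assumptions, and all names are illustrative:

```python
import numpy as np

def sample_configuration(rng):
    """Draw one simulation configuration from the distributions of Table A1."""
    sigma_g = rng.uniform(1.0, 6.0)                       # PSF width in pixels
    b2 = int(10**rng.uniform(0.0, np.log10(250.0)))       # log-scaled background
    eta = int(rng.integers(3, 21))                        # number of point sources
    n_mu = int(10**rng.uniform(3.0, 8.0))                 # log-scaled mean photon count
    n_s = int(rng.integers(-int(0.9 * n_mu), int(0.9 * n_mu) + 1))
    half = abs(n_s) // 2                                  # half-width of the N_i range
    n_i = rng.integers(n_mu - half, n_mu + half + 1, size=eta)
    r_i = 150.0 * np.sqrt(rng.random(eta))                # uniform over a disk of radius 150
    th_i = rng.uniform(0.0, 2.0 * np.pi, eta)
    dx, dy = rng.uniform(-50.0, 50.0, 2)
    dth = rng.uniform(0.0, 2.0 * np.pi)
    return sigma_g, b2, eta, n_i, r_i, th_i, dx, dy, dth
```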

References

1. Arun KS, Huang TS & Blostein SD (1987) Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 9, 698–700.
2. Baker MS, Plass RA, Headley TJ & Walraven JA (2004) Final report: compliant thermo-mechanical MEMS actuators, LDRD #52553. Sandia National Laboratory, Albuquerque, New Mexico.
3. Berfield TA, Patel JK, Shimmin RG, Braun PV, Lambros J & Sottos NR (2006) Fluorescent image correlation for nanoscale deformation measurements. Small 2, 631–635.
4. Berfield TA, Patel JK, Shimmin RG, Braun PV, Lambros J & Sottos NR (2007) Micro- and nanoscale deformation measurement of surface and internal planes via digital image correlation. Proc. Soc. Exp. Mech. 64, 51–62.
5. Besl PJ & McKay HD (1992) A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14, 239–256.
6. Betzig E, Patterson GH, Sougrat R, et al. (2006) Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645.
7. Bobroff N (1986) Position measurement with a resolution and noise-limited instrument. Rev. Sci. Instrum. 57, 1152–1157.
8. Chenouard N, Smal I, de Chaumont F, et al. (2014) Objective comparison of particle tracking methods. Nat. Methods 11, 281–U247.
9. Copeland CR, McGray CD, Geist J, Aksyuk VA & Stavis SM (2015) Characterization of electrothermal actuation with nanometer and microradian precision. In: Proceedings of the 18th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS 2015), Anchorage, AK.
10. Deschout H, Shivanandan A, Annibale P, Scarselli M & Radenovic A (2014a) Progress in quantitative single-molecule localization microscopy. Histochem. Cell Biol. 142, 5–17.
11. Deschout H, Zanacchi FC, Mlodzianoski M, Diaspro A, Bewersdorf J, Hess ST & Braeckmans K (2014b) Precisely and accurately localizing single emitters in fluorescence microscopy. Nat. Methods 11, 253–266.
12. Elmokadem A & Yu J (2015) Optimal drift correction for superresolution localization microscopy with Bayesian inference. Biophys. J. 109, 1772–1780.
13. Enderlein J, Toprak E & Selvin PR (2006) Polarization effect on position accuracy of fluorophore localization. Opt. Express 14, 8111–8120.
14. Endesfelder U & Heilemann M (2014) Art and artifacts in single-molecule localization microscopy: beyond attractive images. Nat. Methods 11, 235–238.
15. Fitzpatrick JM & West JB (2001) The distribution of target registration error in rigid-body point-based registration. IEEE Trans. Med. Imaging 20, 917–927.
16. Fox-Roberts P, Wen TQ, Suhling K & Cox S (2014) Fixed pattern noise in localization microscopy. ChemPhysChem 15, 677–686.
17. Freeman DM (2001) Measuring motions of MEMS. MRS Bulletin 26, 305–306.
18. Geist J, Gladden WK & Zalewski EF (1982) Physics of photon-flux measurements with silicon photodiodes. J. Opt. Soc. Am. 72, 1068.
19. Hess ST, Girirajan TPK & Mason MD (2006) Ultra-high resolution imaging by fluorescence photoactivation localization microscopy. Biophys. J. 91, 4258–4272.
20. Huang B, Bates M & Zhuang X (2009) Super-resolution fluorescence microscopy. Annu. Rev. Biochem. 78, 993–1016.
21. Huang F, Hartwich TMP, Rivera-Molina FE, et al. (2013) Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms. Nat. Methods 10, 653–658.
22. ISO Technical Advisory Group 4 (1995) Guide to the expression of uncertainty in measurement. International Standards Organization, Geneva, Switzerland.
23. Lee SH, Baday M, Tjioe M, Simonson PD, Zhang RB, Cai E & Selvin PR (2012) Using fixed fiduciary markers for stage drift correction. Opt. Express 20, 12177–12183.
24. Long F, Zeng SQ & Huang ZL (2014) Effects of fixed pattern noise on single molecule localization microscopy. Phys. Chem. Chem. Phys. 16, 21586–21594.
25. McGray CD, Stavis SM, Giltinan J, Eastman E, Firebaugh S, Piepmeier J, Geist J & Gaitan M (2013) MEMS kinematics by super-resolution fluorescence microscopy. J. Microelectromech. Syst. 22, 115–123.
26. Mortensen KI, Churchman LS, Spudich JA & Flyvbjerg H (2010) Optimized localization analysis for single-molecule tracking and super-resolution microscopy. Nat. Methods 7, 377–U359.
27. Oak S, Rawool S, Sivakumar G, Hendrikse EJ, Buscarello D & Dallas T (2011) Development and testing of a multilevel chevron actuator-based positioning system. J. Microelectromech. Syst. 20, 1298–1309.
28. Ober RJ, Ram S & Ward ES (2004) Localization accuracy in single-molecule microscopy. Biophys. J. 86, 1185–1200.
29. Pennec X & Thirion JP (1997) A framework for uncertainty and validation of 3-D registration methods based on points and frames. Int. J. Comput. Vis. 25, 203–229.
30. Ribeck N & Saleh OA (2008) Multiplexed single-molecule measurements with magnetic tweezers. Rev. Sci. Instrum. 79, 094301-1–094301-6.
31. Ropp C, Cummins Z, Nah S, Fourkas JT, Shapiro B & Waks E (2013) Nanoscale imaging and spontaneous emission control with a single nano-positioned quantum dot. Nat. Commun. 4, 1–8.
32. Rust MJ, Bates M & Zhuang XW (2006) Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–795.
33. Sage D, Kirshner H, Pengo T, Stuurman N, Min J, Manley S & Unser M (2015) Quantitative evaluation of software packages for single-molecule localization microscopy. Nat. Methods 12, 717–U737.
34. Samuel BA, Demirel MC & Haque A (2007) High resolution deformation and damage detection using fluorescent dyes. J. Micromech. Microeng. 17, 2324–2327.
35. Simonson KM, Drescher SM & Tanner FR (2007) A statistics-based approach to binary image registration with uncertainty analysis. IEEE Trans. Pattern Anal. Mach. Intell. 29, 112–125.
36. Sinclair MJ (2000) A high force low area MEMS thermal actuator. In: Proceedings of ITHERM 2000: Seventh Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, Vol. I (ed. by Kromann GB, Culham JR & Ramakrishna K). IEEE, New York.
37. Smith CS, Joseph N, Rieger B & Lidke KA (2010) Fast, single-molecule localization that achieves theoretically minimum uncertainty. Nat. Methods 7, 373–U352.
38. Stallinga S & Rieger B (2010) Accuracy of the Gaussian point spread function model in 2D localization microscopy. Opt. Express 18, 24461–24476.
39. Stallinga S & Rieger B (2012) Position and orientation estimation of fixed dipole emitters using an effective Hermite point spread function model. Opt. Express 20, 5896–5921.
40. Teyssieux D, Euphrasie S & Cretin B (2011) MEMS in-plane motion/vibration measurement system based CCD camera. Measurement 44, 2205–2216.
41. Thompson RE, Larson DR & Webb WW (2002) Precise nanometer localization analysis for individual fluorescent probes. Biophys. J. 82, 2775–2783.
42. Ueno H, Nishikawa S, Iino R, Tabata KV, Sakakihara S, Yanagida T & Noji H (2010) Simple dark-field microscopy with nanometer spatial precision and microsecond temporal resolution. Biophys. J. 98, 2014–2023.
43. Yoshida S, Yoshiki K, Namazu T, Araki N, Hashimoto M, Kurihara M, Hashimoto N & Inoue S (2011) Development of strain visualization system for microstructures using single fluorescent molecule tracking on three dimensional orientation microscope. In: Optics and Photonics for Information Processing V (ed. by Iftekharuddin KM & Awwal AAS), pp. 81340E-1–81340E-7. SPIE Press, Bellingham, WA.
44. Zitova B & Flusser J (2003) Image registration methods: a survey. Image Vis. Comput. 21, 977–1000.
