Abstract
We present an embedded real-time 1D position tracking device with nanometer precision. The embedded algorithm extracts the most appropriate region of the signal without manual intervention and estimates the position from the phase shift of the signal's first Fourier harmonic. Using simulated datasets, we demonstrate that the proposed approach achieves a precision similar to that of the state-of-the-art maximum likelihood fitting-based method while executing over four orders of magnitude faster. We further implemented this algorithm on a low-power microprocessor and developed a simple, compact, and low-cost embedded position tracking device. We demonstrate nanometer tracking precision in real-time drift tracking experiments.
A one-dimensional (1D) position tracking system is an essential component for drift stabilization in most high-resolution microscopy systems (e.g., super-resolution microscopy [1–3]), where it maintains long-term system stability. It is also commonly used in chromatic confocal microscopy [4] to measure nanoscale surface topography. In most drift stabilization systems (e.g., the Nikon Perfect Focus system [5], PgFocus [6]), the 1D position tracking system tracks the position of a reflected reference beam to sense drift of the focal plane, and the focus drift is then compensated in real time by a piezo translation stage carrying the sample or the objective. In chromatic confocal microscopy, the spectral profile of the detected light is projected along a linear image sensor and encodes the axial position of the sample; here, the 1D position tracking system detects the spectral peak of the reflected light, which is determined by the sample surface.
Existing 1D position tracking systems generally detect the light reflected at an interface with a strong refractive index mismatch (either the coverslip–sample or the air–sample interface), which is projected onto a low-noise linear CCD sensor. A computer with a high-performance processor is often required to estimate the lateral position of the peak from the reflected signal with model-fitting-based algorithms, in order to track the axial position of the focus (or the surface profile). Although such an approach achieves the best accuracy, with the theoretical minimum uncertainty [4], it is bulky and expensive (see Supplement 1, Figure S1 for details). An efficient strategy to reduce the complexity, cost, and size of the conventional position tracking system is to develop an all-in-one embedded device that tracks the position on the fly without compromising measurement precision or requiring manual intervention. A stand-alone, compact, and low-cost embedded online 1D position tracking device is highly advantageous, as it is entirely decoupled from the main microscopy system and requires no computer.
An early attempt at embedded active drift stabilization in high-resolution microscopy was PgFocus [6], which monitors focus changes by estimating the peak position of the reflected light beam on a microprocessor. However, limited by the low computational power of the microprocessor (megahertz), orders of magnitude below that of current desktop-level CPUs (gigahertz), it uses a non-iterative centroid algorithm for position estimation, with limited accuracy. Moreover, the low sampling rate and high noise of the employed linear image sensor further degrade the performance, so the overall precision for drift compensation is in the range of tens of nanometers. A highly precise position estimation algorithm that can execute on a low-power microprocessor is therefore highly desirable for improving embedded position tracking; a low-noise linear image sensor with a high sampling rate is also preferred to further improve the accuracy.
In the past decade, various fast non-iterative two-dimensional (2D) position estimation algorithms (e.g., radial symmetry [7,8], Fourier transform [9,10]) have been developed to achieve real-time single-molecule localization. Unfortunately, they were not designed or evaluated for 1D position estimation, and existing fast 1D position tracking algorithms are still centroid-based [4,6,11,12], which leaves significant room for improvement.
In this Letter, we first present a fast and precise 1D position estimation algorithm that can be implemented in an embedded position tracking system with high computational efficiency. The algorithm extracts the most appropriate region from the raw dataset, without manual intervention, to suppress the influence of the outer background noise, which is essential to ensure the best precision for position tracking in an embedded system. It then estimates the position by calculating the phase shift of the first harmonic of the signal in the Fourier domain. We next use simulated datasets to evaluate its performance and compare it with the state-of-the-art maximum likelihood estimation (MLE) fitting-based algorithm, as well as the recently reported optimized adaptive centroid algorithm [12]. Further, we demonstrate an embedded position tracking system that implements our algorithm on a microprocessor for online position tracking, adopting a low-noise linear image sensor for light sensing. Lastly, we demonstrate its nanometer position tracking precision in online axial drift tracking experiments.
The presented 1D position estimation algorithm is inspired by 2D Fourier transform-based localization algorithms [9,10]. Its principle is shown in Fig. 1. The entire processing procedure is composed of three major steps: (1) peak finding, (2) subregion extraction, and (3) position estimation.
Fig. 1.

Principle and scheme of our 1D position tracking system. The size of our system is ~50 × 30 × 30 mm.
The raw data are obtained from the linear image sensor. We first search for its maximum and minimum values. The pixel with the maximum value is recognized as the peak pixel, and the baseline (b) is calculated with Eq. (1). When processing weak signals, a low-pass filter (e.g., a rolling average filter) can be used beforehand to suppress high-frequency noise:
\[ b = \min_{1 \le j \le N} I_j \tag{1} \]
where Ij is the intensity of the j th pixel along the linear image sensor with N pixels.
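As a concrete illustration, the peak-finding step can be sketched as follows in Python. This is our re-implementation, not the authors' firmware; the function name, the optional rolling-average pre-filter, and the choice of the minimum intensity as the baseline are assumptions made for illustration.

```python
import numpy as np

def find_peak_and_baseline(intensity, smooth_window=0):
    """Step 1: locate the peak pixel and estimate the baseline b.

    Hypothetical sketch: the baseline is taken as the minimum intensity;
    a rolling-average low-pass filter can be applied first to suppress
    high-frequency noise on weak signals.
    """
    signal = np.asarray(intensity, dtype=float)
    if smooth_window > 1:
        kernel = np.ones(smooth_window) / smooth_window
        signal = np.convolve(signal, kernel, mode="same")
    peak_index = int(np.argmax(signal))   # pixel with the maximum value
    baseline = float(np.min(signal))      # baseline b
    return peak_index, baseline
```

On a clean frame, `find_peak_and_baseline` returns the index of the brightest pixel and the sensor floor that is subtracted in the later steps.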
Next, we extract a subregion from the raw data centered at the recognized peak pixel. The width of the subregion can significantly affect the overall precision: a smaller width can miss useful signal, while a larger width admits more background noise, and both cases increase the estimation error. Therefore, a proper width for the selected subregion is critical to achieving the minimum error. When the signal profile is approximated as a Gaussian function, as shown in Fig. 2, the minimum errors are achieved for a region with a half-width of about 3–5 times the Gaussian kernel width (σ), which can be estimated with Eq. (2):
\[ \sigma = \sqrt{\frac{\sum_{j=1}^{N} (I_j - b)\,(j - c)^2}{\sum_{j=1}^{N} (I_j - b)}} \tag{2} \]
where c is the centroid of the signal and σ is the width of the Gaussian kernel [13]. In this Letter, we used a subregion with a half-width of 4σ to achieve near-optimal precision for signals under various scenarios.
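The subregion-extraction step can be sketched as below. This is an illustrative re-implementation assuming a background-subtracted moment estimate of σ; for simplicity the moments are computed here over the full frame, whereas on noisy data a window around the peak would limit background bias.

```python
import numpy as np

def extract_subregion(intensity, peak_index, baseline, half_width_sigmas=4.0):
    """Step 2: choose a subregion of half-width ~4*sigma around the peak.

    sigma is estimated from the background-subtracted second moment about
    the centroid c; names and defaults are illustrative assumptions.
    """
    signal = np.clip(np.asarray(intensity, dtype=float) - baseline, 0.0, None)
    j = np.arange(signal.size)
    total = signal.sum()
    c = (signal * j).sum() / total                         # centroid center c
    sigma = np.sqrt((signal * (j - c) ** 2).sum() / total)  # kernel width
    half = max(1, int(round(half_width_sigmas * sigma)))
    lo = max(0, peak_index - half)
    hi = min(signal.size, peak_index + half + 1)
    return lo, hi                                          # subregion bounds
```

For a Gaussian peak of width σ = 2 pixels, the returned bounds span roughly ±8 pixels around the peak, matching the 4σ half-width used in the Letter.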
Fig. 2.

Relative position estimation error map. A series of datasets were simulated with a peak intensity from 1000 to 10000 photons. This map indicates that a subregion at a half-width of 3–5 times the Gaussian kernel width (σ) can achieve the minimum error (highlighted in the dashed box).
Finally, we calculate the first harmonic (H1) of the Fourier transform of the extracted subregion of width W pixels, as described in Eq. (3):
\[ H_1 = \sum_{j=1}^{W} I_j \, e^{-2\pi i (j-1)/W} \tag{3} \]
Its phase (the angle of H1) can be used directly to estimate the peak position (p) of the signal [9], as in Eq. (4):
\[ p = -\frac{W}{2\pi}\,\arg(H_1) \tag{4} \]
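The phasor step itself is a single FFT bin and an angle. The sketch below is our reading of Eqs. (3)–(4): the sign convention (a peak at position p contributes a phase of −2πp/W to H1) and the modulo wrap are assumptions that match NumPy's forward-DFT definition.

```python
import numpy as np

def ephasor_position(subregion, start_index=0):
    """Step 3: estimate the peak position from the phase of the first
    Fourier harmonic H1.  Assumed convention: for a peak at position p
    (relative to the subregion start), arg(H1) = -2*pi*p/W."""
    I = np.asarray(subregion, dtype=float)
    W = I.size
    H1 = np.fft.fft(I)[1]                                  # first harmonic
    p = ((-np.angle(H1)) % (2 * np.pi)) * W / (2 * np.pi)  # phase -> pixels
    return start_index + p
```

Because only one Fourier coefficient is needed, no full FFT is required on a microprocessor; the single complex sum of Eq. (3) suffices.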
We refer to this 1D position estimation algorithm as enhanced Phasor (ePhasor).
This algorithm automatically extracts the most appropriate region to estimate the position at the best precision under various scenarios, which significantly enhances its robustness and precision over conventional Fourier transform-based methods [9,14] that require manual intervention.
We first used simulated data to evaluate the performance of our ePhasor algorithm. We compared it with three other 1D position estimation algorithms: the state-of-the-art MLE fitting-based algorithm, which achieves the theoretical best precision [4]; the recently reported optimized adaptive centroid-based algorithm (ATCA) [12], which executes three orders of magnitude faster than the MLE; and the conventional Phasor algorithm [9].
We then compared their estimation precision under a wide range of signal levels, with a peak intensity from 1000 to 10000 photons, a background noise level from 200 to 2000 photons (20% of the peak intensity), and a signal kernel width from one to five pixels. The signal was simulated with a Gaussian profile and a Poisson noise model [8,15,16], and 100 electrons of random noise were added to all simulated datasets. As shown in Fig. 3(a), our algorithm achieved a precision similar to that of the MLE estimator at all signal levels. At high signal levels (10000 photons), the estimation error of our algorithm is below 0.01 pixel, corresponding to <0.8 nm on our drift tracking system (Fig. 4). When the region width is properly defined, the MLE is still the most precise estimator, but only by less than 0.002 pixels (~0.16 nm) over our ePhasor estimator. More importantly, in contrast to the MLE, our ePhasor estimator is model-free: it requires no prior knowledge of the signal profile or noise model, suggesting that it is more robust to the various imperfect conditions of routine experiments. As expected, the centroid-based ATCA has a significantly larger estimation error [Fig. 3(a)], ~400% (0.03 pixels) larger than the MLE and ePhasor. For datasets with various signal widths [Fig. 3(b)], our ePhasor robustly achieves state-of-the-art precision at all signal widths. A mismatch between the user-defined region width and the signal width dramatically decreases the precision of the conventional Phasor algorithm and the MLE, which then perform even worse than the optimized centroid-based ATCA. Therefore, a proper region width is important for robust, state-of-the-art performance in routine experiments.
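The simulation protocol can be approximated with a short end-to-end script. The pipeline below is our simplified re-implementation (baseline taken as the frame minimum, σ estimated in a provisional ±10-pixel probe window, read noise omitted), so its error figures are only indicative of the trend, not the Letter's exact numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_line(n_pixels=64, center=32.3, sigma=1.0, peak=10000, bg=2000):
    # Gaussian profile plus Poisson shot noise, following the simulation
    # model in the text (the 100-electron read noise is omitted for brevity).
    j = np.arange(n_pixels)
    expected = bg + peak * np.exp(-((j - center) ** 2) / (2 * sigma ** 2))
    return rng.poisson(expected).astype(float)

def ephasor(frame, probe_half=10):
    # Minimal sketch of the three steps; the baseline choice and the
    # provisional probe window for the sigma estimate are our assumptions.
    b = frame.min()
    k = int(np.argmax(frame))
    lo0, hi0 = max(0, k - probe_half), min(frame.size, k + probe_half + 1)
    sig = np.clip(frame[lo0:hi0] - b, 0.0, None)
    jj = np.arange(lo0, hi0)
    c = (sig * jj).sum() / sig.sum()
    sigma = np.sqrt((sig * (jj - c) ** 2).sum() / sig.sum())
    half = max(1, int(round(4 * sigma)))        # 4-sigma half-width region
    lo, hi = max(0, k - half), min(frame.size, k + half + 1)
    window = frame[lo:hi] - b
    W = window.size
    H1 = np.fft.fft(window)[1]                  # first Fourier harmonic
    return lo + ((-np.angle(H1)) % (2 * np.pi)) * W / (2 * np.pi)

errors = np.array([ephasor(simulate_line()) - 32.3 for _ in range(200)])
rmse = float(np.sqrt(np.mean(errors ** 2)))     # sub-0.1-pixel RMSE expected
```

Even this simplified version reaches a few-hundredths-of-a-pixel RMSE at the 10000-photon signal level, consistent with the sub-0.01-pixel regime reported for the full implementation.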
Fig. 3.

Comparison of precision in position estimation among the ATCA (green), the MLE (blue), conventional Phasor algorithm (magenta), and our ePhasor (red), using the root mean square error (RMSE). In (a), we used a Gaussian kernel with a width of one pixel. A region with a half-width of 4 times the calculated Gaussian kernel width (σ) was used for all algorithms. In (b), the simulated peak intensity is 10000 photons, the background is 2000 photons, and the Gaussian kernel width varied from one to five pixels to demonstrate the robustness of these algorithms. The MLE (4) and Phasor (4) used a user-defined region with a half-width of four pixels, which is ideal for signals with σPSF of one pixel; the MLE (20) and Phasor (20) used a user-defined region with a half-width of 20 pixels, which is ideal for signals with σPSF of five pixels. The ATCA and ePhasor do not require the user to define the region width.
Fig. 4.

Optical setup for the drift tracking experiment. A total internal reflection fluorescence objective lens (60×, NA 1.49, oil immersion, Olympus) with a high-precision (sub-nanometer) piezo translation stage (Nano-Z100, MadCityLabs) was used in this system to provide the high-precision evaluation results. The final optical magnification for drift tracking is ~100×, corresponding to a pixel size of 80 nm.
In addition, we compared the computation speed of the three algorithms for position estimation [Fig. 1(a)] in MATLAB on a laptop (Intel Core i7-8750H, 2.2 GHz, single thread). We found that our algorithm can estimate ~1.7 × 10⁶ positions per second, over four orders of magnitude faster than the MLE (~1.6 × 10² positions per second) and two orders of magnitude faster than the ATCA (~9.1 × 10⁴ positions per second). This result indicates that our phasor-based algorithm has great potential to be implemented on a low-power microprocessor.
Next, we implemented our algorithm on the microprocessor to build an embedded system for real-time position tracking. We employed a highly sensitive, low-noise linear CCD image sensor (TCD1304, Toshiba) for signal acquisition. A Teensy 3.6 development board drives the linear image sensor and executes our ePhasor-based position estimation algorithm. We used the Arduino IDE and Teensyduino to implement the sensor-driving firmware and the real-time position estimation firmware (see [17] for more details). The calculated positions are transferred to a computer through the serial port (speed: 115200 bps), and a MATLAB script receives the tracking results.
We used a high-precision closed-loop piezo positioning system (Nano-Z100, MadCityLabs) to evaluate the performance of our embedded position tracking system. We adjusted the piezo stage through a series of positions with a step size of 100 nm and quantified the position tracking precision by calculating the standard deviation of the estimated positions at each step [18]. For each step, 100 positions were used for quantification. In this experiment, we did not implement an online MLE-based approach, because the MLE fitting-based algorithm can take tens of seconds to calculate one position on such a low-power microprocessor, which is far too slow for our applications (~100 Hz).
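The precision metric above (per-step standard deviation over 100 readings) can be reproduced on any recorded trace. The snippet below uses a synthetic trace with hypothetical 2 nm jitter standing in for the measured noise; the numbers are illustrative, not the experimental data.

```python
import numpy as np

# Hypothetical tracking trace: 5 piezo steps of 100 nm, 100 position
# readings per step, with 2 nm Gaussian jitter as a stand-in for the
# measured noise (illustrative values, not the experimental dataset).
rng = np.random.default_rng(1)
steps = np.repeat(np.arange(5) * 100.0, 100)       # commanded positions, nm
trace = steps + rng.normal(0.0, 2.0, steps.size)   # estimated positions, nm

# Precision = standard deviation of the estimated positions at each step,
# averaged over the steps (one value per 100-reading plateau).
per_step_std = trace.reshape(5, 100).std(axis=1, ddof=1)
precision = float(per_step_std.mean())
```

With real data, `trace` would simply be the positions streamed over the serial port, reshaped by step.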
We found that our position tracking system can achieve a precision of 1.7–2.1 nm over a wide axial range from +40 to −40 μm (Fig. 5), which is sufficient for state-of-the-art super-resolution imaging experiments (axial resolution: 10–20 nm). We also note that the tracking precision in these experiments is slightly worse than in our simulations (<1 nm). One probable reason is residual vibration that cannot be isolated by the air-floated optical table [18]. To verify this point, we quantified the tracking precision of the MLE algorithm, which exhibits similar performance (~1.9 nm). In contrast, the ATCA algorithm achieves a precision of only ~7.5 nm, which is too large for an axial resolution of 10 nm.
Fig. 5.

Actual drift tracking results by our position tracking device at an axial position of +40, 0, and −40 μm.
In conclusion, we present a nanometer position tracking algorithm that can be used for embedded drift stabilization systems and chromatic confocal microscopy. We used simulated data to demonstrate that it achieves a precision similar to that of the MLE fitting-based approach and significantly outperforms the centroid-based method. We also applied it to drift tracking experiments to validate that our system achieves nanometer precision in real time. Our system is simple, compact, and costs less than $100, and thus has great potential to be widely used for drift stabilization and chromatic confocal microscopy. The proposed 1D Phasor algorithm may also be extended to 2D and 3D scenarios to further improve the precision and speed of existing single-molecule localization algorithms.
Supplementary Material
Funding.
National Institutes of Health (R01CA232593, R01CA254112, R33CA225494).
Disclosures.
The authors filed a provisional patent on the position tracking system.
Supplemental document. See Supplement 1 for supporting content.
Data Availability.
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
REFERENCES
1. Deschout H, Zanacchi FC, Mlodzianoski M, Diaspro A, Bewersdorf J, Hess ST, and Braeckmans K, Nat. Methods 11, 253 (2014).
2. Coelho S, Baek J, Graus MS, Halstead JM, Nicovich PR, Feher K, Gandhi H, Gooding JJ, and Gaus K, Sci. Adv. 6, eaay8271 (2020).
3. Ma H and Liu Y, APL Photonics 5, 060902 (2020).
4. Blateyron F, in Optical Measurement of Surface Topography (Springer, 2011), pp. 71–106.
5. The Nikon Perfect Focus System (PFS) | Nikon's MicroscopyU, https://www.microscopyu.com/tutorials/the-nikon-perfect-focus-system-pfs.
6. PgFocus - Wiki, http://big.umassmed.edu/wiki/index.php/PgFocus#Firmware.
7. Parthasarathy R, Nat. Methods 9, 724 (2012).
8. Ma H, Long F, Zeng S, and Huang Z-L, Opt. Lett. 37, 2481 (2012).
9. Martens KJA, Bader AN, Baas S, Rieger B, and Hohlbein J, J. Chem. Phys. 148, 123311 (2018).
10. Yu B, Chen D, Qu J, and Niu H, Opt. Lett. 36, 4317 (2011).
11. Bai J, Li X, Wang X, Zhou Q, and Ni K, Sensors 19, 3592 (2019).
12. Chen C, Leach R, Wang J, Liu X, Jiang X, and Lu W, Opt. Lett. 46, 1616 (2021).
13. Henriques R, Lelek M, Fornasiero EF, Valtorta F, Zimmer C, and Mhlanga MM, Nat. Methods 7, 339 (2010).
14. Fereidouni F, Bader AN, and Gerritsen HC, Opt. Express 20, 12729 (2012).
15. Ma H, Xu J, Jin J, Gao Y, Lan L, and Liu Y, Sci. Rep. 5, 14335 (2015).
16. Ma H, Xu J, and Liu Y, Sci. Adv. 5, eaaw0683 (2019).
17. https://github.com/yangliulab/Embedded_PosTracking.
18. Ma H, Xu J, Jin J, Huang Y, and Liu Y, Biophys. J. 112, 2196 (2017).