Biomedical Optics Express. 2022 Feb 22;13(3):1581–1594. doi: 10.1364/BOE.452507

Robust Fourier ptychographic microscopy via a physics-based defocusing strategy for calibrating angle-varied LED illumination

Chuanjian Zheng 1, Shaohui Zhang 1,2, Guocheng Zhou 1, Yao Hu 1, Qun Hao 3
PMCID: PMC8973181  PMID: 35414977

Abstract

Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique for wide-field, high-resolution microscopy with a high space-bandwidth product. It integrates the concepts of synthetic aperture and phase retrieval to surpass the resolution limit imposed by the employed objective lens. In the FPM framework, the position of each sub-spectrum needs to be accurately known to ensure the success of the phase retrieval process. Different from conventional methods based on mechanical adjustment or data-driven optimization, here we report a physics-based defocusing strategy for correcting large-scale positional deviation of the LED illumination in FPM. Based on a subpixel image registration process with a defocused object, we can directly infer the illumination parameters, including the lateral offsets of the light source, the in-plane rotation angle of the LED array, and the distance between the sample and the LED board. The feasibility and effectiveness of our method are validated with both simulations and experiments. We show that the reported strategy can obtain high-quality reconstructions of both the complex object and the pupil function even when the LED array is randomly placed beneath the sample with unknown lateral offsets and rotation. As such, it enables the development of robust FPM systems by reducing the requirements on fine mechanical adjustment and data-driven correction in the construction process.

1. Introduction

Fourier ptychography (FP) [1–3] is a recently developed computational imaging technique that addresses the inherent trade-off between a large field-of-view (FOV) and high spatial resolution in conventional optical systems. Based on the correspondence between the illumination direction and the position of the spatial spectrum, FP can obtain an expanded spectrum by varying the illumination angle, and achieves quantitative phase imaging with a gigapixel space-bandwidth product (SBP) [4] by integrating phase retrieval [5] and synthetic aperture [6] algorithms.

Since its first demonstration in 2013, FP has evolved from a microscopic imaging tool into a versatile imaging technique, embodied in different implementations including reflective FP [7,8], aperture-scanning FP [9], X-ray FP [10], and multi-camera FP [11], among others. Meanwhile, FP has found various applications, including quantitative 3D imaging [12–14], digital pathology [15,16], aberration metrology [17], and extensions to incoherent imaging [18].

In a typical FPM system, an LED array provides angle-varied illumination for the sample. During data acquisition, the LEDs are turned on sequentially and a camera records the corresponding low-resolution (LR) images, whose spatial frequency content is determined by the numerical aperture (NA) of the objective lens and the illumination wavevectors. In the original FPM model, each LED element is assumed to be a point source emitting quasi-monochromatic light, and the LED board should be aligned precisely. The former can be achieved by choosing LEDs with a narrower spectral bandwidth and a smaller light-emitting area. Nevertheless, positional deviation of the LED board is unavoidable when constructing or modifying FPM systems, and it degrades the quality of the reconstructed high-resolution (HR) images by making the sub-spectrum positions used in the phase retrieval algorithm inconsistent with the wavevectors of the actual illumination directions. One conventional way to address the problem is to pre-calibrate the LED array with mechanical adjustment stages [19], but aligning the LED array accurately is time-consuming and requires precise multi-degree-of-freedom stages, which are bulky and expensive. An intuitive and effective method [20] that uses brightfield-to-darkfield features to locate and orient the LED array is adjustment-free and requires no additional hardware or operations. Nevertheless, it may fail when the expected bright-field overlapping zone does not exist for some specific system parameters, such as the adjacent LED spacing, the objective NA, and the distance between the sample and the LED array. Many data-driven optimization approaches that make use of the data redundancy of the LR images are also useful solutions, such as pcFPM [21], SC-FPM [22], QG-FPM [23], and tcFPM [24].
However, all these methods use the intensity distribution characteristics of the LR images to seek the optimal positional parameters, whereas the intensity distributions are also affected by illumination intensity fluctuations, objective aberrations, random noise, and other systematic errors. The influences of all these errors are mixed, so extracting the positional deviation from the LR-image constraint becomes an ill-posed problem, and it is difficult to combine these methods with the optimization and correction of other non-ideal parameters. In addition, data-driven optimization approaches are time-consuming and prone to falling into local optima.

Aiming to reduce the influence of other systematic errors and accurately obtain the positional deviation of the LED board, we propose a correction method with an explicit physical imaging model, termed pdcFPM. Based on the constraint that the LED elements are arranged in a regular array, and on the relationship between the lateral offset of the defocused image and the illumination direction [25–27], the positional parameters of the LED board can be calculated precisely with the aid of subpixel image registration and non-linear regression algorithms. Since pdcFPM calculates the offsets of binarized images to obtain the positional parameters, and these offsets rely mainly on the geometric information rather than the intensity distribution of the LR images, it avoids the influence of other systematic errors. Simulations and experiments demonstrate the feasibility and effectiveness of pdcFPM for large-scale positional deviations. The proposed method can significantly improve the quality of the reconstructed complex amplitude images, obtain the objective pupil function without the influence of positional deviations, and therefore evidently reduce the accuracy requirements on the LED board when building FPM platforms.

The remainder of this paper is organized as follows. In Section 2.1, we analyze the effect of LED positional deviation on the reconstructed results; in Section 2.2, we establish a mathematical model relating the LED positional parameters to the LR image offsets; and in Section 2.3, we describe the workflow of the method. The effectiveness of pdcFPM is confirmed with simulations and experiments in Sections 3 and 4, respectively, and a summary and further discussion are given in Section 5.

2. Principle

2.1. Positional deviation in the FPM system

A typical FPM system consists of an LED array, a low-NA objective lens, and a monochromatic camera. The LEDs are turned on sequentially to provide angle-varied illumination, and the camera acquires the corresponding LR intensity images as the raw data set. For each LED$_{m,n}$ (row $m$, column $n$) with illumination wavevector $(u_{m,n}, v_{m,n})$, the LR intensity image $I^c_{m,n}$ can be described as

$$I^c_{m,n}(x,y)=\left|\mathcal{F}^{-1}\left\{\mathcal{F}\left[o(x,y)\,e^{j(xu_{m,n}+yv_{m,n})}\right]\cdot P(u,v)\right\}\right|^2, \tag{1}$$

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ represent the Fourier and inverse Fourier transform operators, respectively, $o(x,y)$ is the sample's complex transmission function, $(x,y)$ denotes the two-dimensional (2D) Cartesian coordinates in the sample plane, $j$ is the imaginary unit, $P(u,v)$ is the pupil function, and $(u,v)$ is the wavevector in the pupil plane; the same $(x,y)$ notation is used for the 2D Cartesian coordinates in the image plane. The incident wavevector $(u_{m,n}, v_{m,n})$ can be expressed as

$$u_{m,n}=\frac{2\pi}{\lambda}\,\frac{x_c-x_{m,n}}{\sqrt{(x_c-x_{m,n})^2+(y_c-y_{m,n})^2+h^2}},\qquad v_{m,n}=\frac{2\pi}{\lambda}\,\frac{y_c-y_{m,n}}{\sqrt{(x_c-x_{m,n})^2+(y_c-y_{m,n})^2+h^2}}, \tag{2}$$

where $(x_c, y_c)$ is the central position of each small segment of the sample, $(x_{m,n}, y_{m,n})$ represents the position of LED$_{m,n}$, $\lambda$ is the central wavelength, and $h$ is the distance between the sample and the LED array. Subsequently, these raw LR images are processed by the conventional FP reconstruction algorithm to obtain the HR complex amplitude images. The reconstruction is composed of five steps. Firstly, initialize the guesses of the HR sample spectrum $O_0(u,v)$ and pupil function $P_0(u,v)$. Generally, the initial pupil guess is set as a circular low-pass filter with ones inside the passband, zeros outside the passband, and uniform zero phase; the initial sample spectrum guess is set as the Fourier transform of the up-sampled central LR image. Secondly, use the current guesses of the pupil function and sample spectrum to generate the LR image in the $(m,n)$-th image plane as

$$o^e_{m,n}(x,y)=\mathcal{F}^{-1}\left\{O_0(u-u_{m,n},v-v_{m,n})\,P_0(u,v)\right\}. \tag{3}$$

Thirdly, replace the amplitude of the simulated LR image with the square root of the actual measurement, keeping the phase unchanged, to update the LR image as

$$o^u_{0,m,n}(x,y)=\sqrt{I^c_{m,n}(x,y)}\,\frac{o^e_{m,n}(x,y)}{\left|o^e_{m,n}(x,y)\right|}. \tag{4}$$

Next, update the corresponding sub-spectrum of the HR sample spectrum using the updated LR image, which is given by [28]:

$$O_i(u-u_{m,n},v-v_{m,n})=O_i(u-u_{m,n},v-v_{m,n})+\alpha\,\frac{P_i^*(u,v)}{\left|P_i(u,v)\right|^2_{\max}}\,\Delta O_{i,m,n},$$
$$P_i(u,v)=P_i(u,v)+\beta\,\frac{O_i^*(u-u_{m,n},v-v_{m,n})}{\left|O_i(u-u_{m,n},v-v_{m,n})\right|^2_{\max}}\,\Delta O_{i,m,n}, \tag{5}$$

where $\alpha$ and $\beta$ are the iterative step sizes, usually set to one, $i$ denotes the iteration number, and $\Delta O_{i,m,n}$ is the auxiliary gradient used for updating: $\Delta O_{i,m,n}=\mathcal{F}\{o^u_{i,m,n}(x,y)\}-\mathcal{F}\{o^e_{i,m,n}(x,y)\}$. The last step is to repeat the above four steps until all the LR images are used, and the whole iterative process is repeated until the solution converges. Finally, the HR sample spectrum is inverse Fourier transformed to the spatial domain to recover the HR intensity and phase distributions.
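Steps two through four above can be sketched in NumPy for a single LED; this is a minimal illustration of the update of Eqs. (3)–(5), assuming the sub-spectrum is addressed by the top-left pixel indices `(kx, ky)` inside the HR spectrum (the function name and indexing convention are illustrative, not from the paper):

```python
import numpy as np

def epry_update(O, P, I_meas, kx, ky, alpha=1.0, beta=1.0):
    """One EPRY sub-iteration for a single LED (illustrative helper).

    O      : HR spectrum estimate (complex 2D array), modified in place
    P      : pupil function estimate (n x n complex array)
    I_meas : measured LR intensity image (n x n)
    kx, ky : top-left pixel of this LED's sub-spectrum inside O
    """
    n = P.shape[0]
    sub = O[ky:ky + n, kx:kx + n].copy()   # O_i sub-spectrum (pre-update)
    # Step 2 (Eq. (3)): simulate the LR field from sub-spectrum and pupil.
    spec_e = sub * P
    o_e = np.fft.ifft2(spec_e)
    # Step 3 (Eq. (4)): replace amplitude with the measurement, keep phase.
    o_u = np.sqrt(I_meas) * np.exp(1j * np.angle(o_e))
    # Step 4 (Eq. (5)): gradient dO and simultaneous spectrum/pupil update.
    dO = np.fft.fft2(o_u) - spec_e
    O[ky:ky + n, kx:kx + n] = sub + alpha * np.conj(P) / (np.abs(P)**2).max() * dO
    P = P + beta * np.conj(sub) / (np.abs(sub)**2).max() * dO
    return O, P
```

Note that when the simulated intensity already matches the measurement, the gradient vanishes and the estimates are left unchanged, which is the fixed point of the iteration.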

It is worth noting that the accurate position of each sub-spectrum needs to be known for high recovery quality when updating the HR spectrum in the fourth step. In other words, the incident wavevector, determined by the position of each LED element and the distance between the LED board and the sample, needs to be precisely known, which places high demands on constructing and modifying FPM systems. Fortunately, some improved recovery strategies have been proposed to relax these demands. The EPRY algorithm [28] obtains a better solution by updating the HR spectrum and the pupil function simultaneously. If the deviation between the ideal and actual positions of the sub-spectrum is small, it can be corrected by the EPRY algorithm. However, as the deviation becomes larger, the performance of the EPRY algorithm decreases, and wrinkle artifacts arise in the reconstructed amplitude and phase images.

Here, we demonstrate the influence of LED array positional deviation on the quality of the reconstructed HR images by simulations. The simulation parameters are chosen based on an actual system and are used in all simulations throughout this article. A 21×21 programmable LED array (2.5mm spacing) with monochromatic wavelength (λ=470nm) is placed 92mm beneath the sample to provide angle-varied illumination. The NA and magnification of the objective lens are 0.1 and 4, respectively, and the pixel size of the camera is 2.4µm. For simplicity and without loss of generality, we assume the LED array is moved only along the x-axis, and use the symbol Δx to represent the positional deviation. The EPRY algorithm is utilized to recover the intensity and phase as Δx varies from 0 to 2000µm at 200µm intervals, and the corresponding results are shown in Fig. 1. Figures 1(a1) and (b1) show the intensity and phase ground truths of the sample. Figures 1(a2) and (b2) show the reconstructed intensity and phase, respectively, when Δx equals 200µm. Although the images are somewhat blurred and the center of the recovered phase image becomes too dark, the change is not disastrous, and the distribution characteristics of both intensity and phase are still distinguishable. The reconstructed intensity and phase for Δx = 800µm are shown in Figs. 1(a3) and (b3), respectively. Although most of the intensity information is resolvable, evident wrinkle artifacts appear simultaneously in the intensity and phase images, and the phase information is seriously distorted. As shown in Figs. 1(a4) and (b4), when Δx equals 2000µm, which results in the wrong choice of the central LED, obvious dark shadow artifacts and unexpected phase information arise in the intensity image, and the whole phase image is submerged and indistinguishable.

Fig. 1.

Quality of the reconstructed results versus the positional deviation of the LED board. (a1) and (b1) are the amplitude and phase profiles of the sample; (a2)-(a4) are the reconstructed intensity images when Δx equals 200µm, 800µm, and 2000µm, respectively; (b2)-(b4) are the corresponding reconstructed phase images; (c1) and (c2) are the RMSE between the reconstructed and true intensity and phase versus Δx.

By visually comparing the reconstructed intensity and phase images at the same Δx, we can infer that the degradation of the reconstructed phase is more serious than that of the reconstructed intensity. That is because the transversal bias of the sub-spectrum in the Fourier domain is more relevant to the phase information in the spatial domain. Figures 1(c1) and (c2) show the relationship between the root mean square error (RMSE) of the reconstructed images and Δx, where the RMSE of phase is greater than that of intensity for the same Δx. In addition, it is noteworthy that the RMSE is not linearly related to Δx, but remains stable when Δx changes within an interval (e.g., the RMSE of intensity is about 0.2 when Δx varies from 1400µm to 1800µm), and increases dramatically when Δx exceeds a certain threshold (e.g., 800µm). This behavior is caused by the discretization of the Fourier-domain coordinates when reconstructing the HR images. The quality of the reconstructed images is stable, and the RMSE does not change significantly, if the deviation between the real and ideal positions of the sub-spectrum is no more than one pixel in the Fourier domain; however, the recovered quality drops sharply when the deviation exceeds one pixel.

Moreover, in an actual FPM system, the LED array will also have other positional deviations besides the x-axis shift, such as a y-axis shift and a rotation about the optical axis. The recovered quality degrades further under the combined effect of these deviations, so correcting the positional deviation of the LED array is a significant task.

2.2. Image offset model

A conventional FPM platform and the corresponding LED array with positional deviation are shown in Fig. 2. The blue LED element in Fig. 2(c) is treated as the central LED, but point O is the actual center, and the orientation of the LED array (the black dashed line) deviates from the x-axis. In fact, the high manufacturing precision of existing LED boards allows us to assume that the LED elements are arranged in a regular grid, and an FPM system is always placed on a horizontal table. We can therefore assume the LED array shown in Fig. 2(b) is perpendicular to the optical axis z to simplify the LED-based illumination model. Then, the positional coordinates of each LED element can be expressed as

$$x_{m,n}=\left[m\cos(\theta)+n\sin(\theta)\right]d_{\mathrm{LED}}+\Delta x,\qquad y_{m,n}=\left[-m\sin(\theta)+n\cos(\theta)\right]d_{\mathrm{LED}}+\Delta y, \tag{6}$$

where $\theta$ denotes the rotation angle about the optical axis $z$, $d_{\mathrm{LED}}$ is the distance between adjacent LED elements, and $\Delta x$ and $\Delta y$ represent the shifts along the x-axis and y-axis, respectively.
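Eq. (6) is a plane rotation plus a translation; a small sketch of the LED position model (the function and parameter names are illustrative; units are millimetres and degrees, as used elsewhere in the paper):

```python
import numpy as np

def led_position(m, n, d_led, theta_deg, dx, dy):
    """Lateral position (x, y) of LED (m, n) following Eq. (6):
    rotate the regular grid by theta about the optical axis, then shift."""
    t = np.deg2rad(theta_deg)
    x = (m * np.cos(t) + n * np.sin(t)) * d_led + dx
    y = (-m * np.sin(t) + n * np.cos(t)) * d_led + dy
    return x, y
```

With zero rotation and shift this reduces to the ideal grid (m·d_LED, n·d_LED), as expected.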

Fig. 2.

FPM system and LED array model. (a) is an FPM system; (b) is the corresponding LED array model with positional deviation; (c) is the local magnification of (b).

If the sample is out of focus, the captured LR intensity image can be obtained by replacing Eq. (1) with

$$I^c_{m,n,d}(x,y)=\left|\mathcal{F}^{-1}\left\{O(u-u_{m,n},v-v_{m,n})\,P(u,v)\,H(u,v,z_d)\right\}\right|^2, \tag{7}$$

where $z_d$ is the defocus distance and $H(u,v,z_d)$ is the defocus phase factor in the pupil plane, which can be expressed as

$$H(u,v,z_d)=\exp\!\left[jAz_d\sqrt{\left(\frac{2\pi}{\lambda}\right)^2-(u-u_{m,n})^2-(v-v_{m,n})^2}\right], \tag{8}$$

where $A$ is the magnification of the objective lens. By performing a binomial expansion of the square-root term in Eq. (8) and keeping the first two terms, we get

$$H(u,v,z_d)\approx\exp\!\left[jAz_d\left(\frac{uu_{m,n}+vv_{m,n}}{w_{m,n}}\right)\right]\exp\!\left(jAz_dw_{m,n}\right)\exp\!\left(-jAz_d\frac{u^2+v^2}{2w_{m,n}}\right), \tag{9}$$

where $w_{m,n}$ denotes the wavevector component along the z-axis and can be written as

$$w_{m,n}=\sqrt{\left(\frac{2\pi}{\lambda}\right)^2-u_{m,n}^2-v_{m,n}^2}. \tag{10}$$

The first term of Eq. (9) results in the lateral offset of the defocused image, which is used to calculate the LED deviation in the proposed method. The second and third terms degrade the quality of the LR image but contribute little to the offset, so we neglect them and derive the relationship between the defocused and in-focus images as

$$I^c_{m,n,d}(x,y)=I^c_{m,n}(x+\Delta x_{m,n},\,y+\Delta y_{m,n}),\qquad \Delta x_{m,n}=\frac{Az_du_{m,n}}{w_{m,n}},\qquad \Delta y_{m,n}=\frac{Az_dv_{m,n}}{w_{m,n}}, \tag{11}$$

where $\Delta x_{m,n}$ and $\Delta y_{m,n}$ are the offsets along the x and y directions, respectively. Since both $A$ and $z_d$ are constants in a fixed system, the offset depends entirely on $u_{m,n}$ and $v_{m,n}$, which means the illumination direction of each LED element can be obtained by calculating the offset.
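The inversion of Eq. (11) is straightforward because $u^2+v^2+w^2=(2\pi/\lambda)^2$; a short sketch recovering the illumination wavevector from a measured offset (function name and unit conventions are illustrative):

```python
import numpy as np

def wavevector_from_offset(dx_img, dy_img, A, z_d, wavelength):
    """Invert Eq. (11): recover (u, v) of one LED from the measured lateral
    offset of its defocused image. Offsets, z_d and wavelength share one
    length unit; A is the objective magnification."""
    k = 2 * np.pi / wavelength
    tx = dx_img / (A * z_d)                 # = u / w from Eq. (11)
    ty = dy_img / (A * z_d)                 # = v / w
    w = k / np.sqrt(1.0 + tx**2 + ty**2)    # from u^2 + v^2 + w^2 = k^2
    return tx * w, ty * w
```

Running the forward model of Eqs. (10)–(11) and then this inversion returns the original wavevector, which is the consistency the calibration relies on.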

2.3. Correction strategy

The process of pdcFPM is shown in Fig. 3. Firstly, we adjust the sample to the focus position with the focusing knob of the microscope platform shown in Fig. 2(a), then turn on LED$_{0,0}$ and capture the corresponding LR image as the reference. The in-focus criterion in an FPM system can easily be checked by turning on two symmetrically positioned LEDs and observing whether a feature shifts laterally between the two images. If the sample is in focus, the feature will not shift because the corresponding $z_d$ in Eq. (9) is zero.
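The shift check between the two symmetric-LED images can be sketched with a simple FFT cross-correlation (an assumed implementation choice; the paper does not specify the registration method for this focusing step):

```python
import numpy as np

def shift_between(img_a, img_b):
    """Integer-pixel shift between two images via FFT cross-correlation.
    At focus, the two symmetric-LED images should give a (0, 0) shift."""
    xcorr = np.fft.ifft2(np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, xcorr.shape))
```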

Fig. 3.

Block diagram of pdcFPM strategy.

Secondly, we adjust the sample to a defocus position, then turn on the LEDs within the bright field sequentially and capture the LR images $I^c_{m,n,d}(x,y)$, $m,n=-2,-1,0,1,2$. In this step, the defocus distance is a crucial parameter. If the defocus distance is too small, the calculated offset will have a low signal-to-noise ratio because the offset itself is too small. Conversely, if the defocus amount is too large, the image will be seriously distorted by the defocus aberration corresponding to the last two terms in Eq. (9). In this paper, we choose a defocus distance of 200µm, for which the image offset is obvious and the defocus aberration is still acceptable for subpixel image registration.

Next, we calculate the offsets between the defocused images and the reference image. Considering that calculating the image offset over the whole FOV is unnecessary and time-consuming, we calculate the offset of a small image segment, which can be a stripe on the resolution target, a small hole, a cell, etc. Furthermore, because the background grey level affects the precision of the offset calculation, we binarize the selected small segment and then calculate the offset of the centroid with a subpixel image registration algorithm as

$$\Delta x_{m,n}=\frac{\sum_{x,y}x\,g_{m,n}(x,y)}{\sum_{x,y}g_{m,n}(x,y)}-\frac{\sum_{x,y}x\,g_{0,0}(x,y)}{\sum_{x,y}g_{0,0}(x,y)},\qquad \Delta y_{m,n}=\frac{\sum_{x,y}y\,g_{m,n}(x,y)}{\sum_{x,y}g_{m,n}(x,y)}-\frac{\sum_{x,y}y\,g_{0,0}(x,y)}{\sum_{x,y}g_{0,0}(x,y)}, \tag{12}$$

where $g_{m,n}(x,y)$ is the grayscale distribution of the binarized image corresponding to LED$_{m,n}$.
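The centroid-difference of Eq. (12) can be sketched as follows; the binarization threshold `frac` is an assumed, tunable choice (the paper does not specify one):

```python
import numpy as np

def centroid_offset(img_def, img_ref, frac=0.5):
    """Subpixel offset between a defocused segment and the in-focus reference
    via binarized centroids, as in Eq. (12)."""
    def centroid(img):
        g = (img > frac * img.max()).astype(float)   # binarize the segment
        ys, xs = np.indices(g.shape)
        s = g.sum()
        return (xs * g).sum() / s, (ys * g).sum() / s
    cxd, cyd = centroid(img_def)
    cxr, cyr = centroid(img_ref)
    return cxd - cxr, cyd - cyr
```

Because the centroid is a weighted average over many pixels, the returned offset is not restricted to integer pixels, which is what makes the registration subpixel.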

Once we obtain the offsets, we utilize a non-linear regression algorithm to obtain the positional parameters of the LED array. Mathematically, the non-linear regression process can be described as

$$E(\Delta x,\Delta y,\theta,h)=\sum_{m,n=-2}^{2}\left\{\left[\Delta x_{m,n}-\Delta x^e_{m,n}(\Delta x,\Delta y,\theta,h)\right]^2+\left[\Delta y_{m,n}-\Delta y^e_{m,n}(\Delta x,\Delta y,\theta,h)\right]^2\right\},$$
$$(\Delta x,\Delta y,\theta,h)_u=\arg\min\left[E(\Delta x,\Delta y,\theta,h)\right], \tag{13}$$

where $E(\Delta x,\Delta y,\theta,h)$ is the non-linear regression cost function to be minimized, $[\Delta x^e_{m,n}(\Delta x,\Delta y,\theta,h),\Delta y^e_{m,n}(\Delta x,\Delta y,\theta,h)]$ denotes the image offset estimate calculated with Eq. (11), and $(\Delta x,\Delta y,\theta,h)_u$ is the optimal estimate of the positional parameters of the LED array.

At last, we use $(\Delta x,\Delta y,\theta,h)_u$ to correct the position of each sub-spectrum when updating the HR sample spectrum in the conventional FPM reconstruction. Nevertheless, obtaining the correct LED deviation is not enough to guarantee an optimal HR result. When the LED array has a considerable deviation (larger than half the LED spacing), the original optimal central LED may be replaced by a neighboring LED, and the final reconstructed HR complex image will be significantly degraded if the sub-spectrum recovery order keeps the original spiral order. In the process of reconstructing the HR image with pdcFPM, the central LED and the recovery path are reselected according to the calculated parameters to ensure fast and optimal convergence for arbitrary positional deviations.
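The reselection step can be sketched by sorting the LEDs by their corrected lateral distance to the optical axis, so the update sequence starts at the true central LED (an illustrative simplification: a full spiral path would follow the same starting point):

```python
import numpy as np

def reorder_leds(dx, dy, theta_deg, grid=21, d_led=2.5):
    """Return (m, n) indices sorted by corrected distance to the optical axis,
    using the LED position model of Eq. (6)."""
    t = np.deg2rad(theta_deg)
    half = grid // 2
    m, n = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing="ij")
    x = (m * np.cos(t) + n * np.sin(t)) * d_led + dx    # Eq. (6)
    y = (-m * np.sin(t) + n * np.cos(t)) * d_led + dy
    order = np.argsort(np.hypot(x, y), axis=None)
    return list(zip(m.ravel()[order].tolist(), n.ravel()[order].tolist()))
```

For a shift larger than one LED spacing (e.g. dx = 2.6 mm with 2.5 mm spacing), the sequence correctly starts at a neighboring LED rather than the nominal LED (0, 0).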

3. Simulation

Before applying the pdcFPM strategy to correct positional deviations in actual FPM systems, we first validate its effectiveness with simulations. The platform is a laptop with a CPU (i7-7700HQ), and no parallel computing framework is utilized in the simulations. As introduced in Section 2.3, we use the offset of a small image segment to represent that of a defocused image and choose the defocus distance $z_d$ to be 200µm. Considering that there are many rectangular stripes on the resolution target, we employ a square with a side length of 50µm as the sample, as shown in Fig. 4(a). The positional deviations are introduced artificially and taken in the ranges $\Delta x\in(-2.5\,\mathrm{mm},2.5\,\mathrm{mm})$, $\Delta y\in(-2.5\,\mathrm{mm},2.5\,\mathrm{mm})$, $h\in(87\,\mathrm{mm},97\,\mathrm{mm})$, and $\theta\in(-5^\circ,5^\circ)$. In an actual FPM system, the specific positional deviations used for verification experiments can be realized with mechanical adjustment devices. Even for a microscope without any mechanical adjustment devices, we can introduce a relative LED array deviation by choosing an appropriate LED element as the central one according to the characteristics of the spots acquired with the camera, in which case the transversal deviation is smaller than the introduced maximum value.

Fig. 4.

Simulated results of defocus image offsets. (a) is the square sample; (b) is the focus LR image illuminated with LED0,0; (c) and (d) are two defocus LR images illuminated with LED0,0 and LED2,2 respectively; (e) shows the offsets distribution of 25 defocus LR images.

Figure 4 shows the simulated defocus image offsets for the deviation parameters (Δx=1mm, Δy=1mm, θ=5°, h=92mm). Figure 4(a) is the simulated square sample, whose in-focus LR image illuminated with LED$_{0,0}$ is shown in Fig. 4(b); its defocused LR images illuminated with LED$_{0,0}$ and LED$_{2,2}$ are shown in Figs. 4(c) and (d), respectively. The offset distribution of the 25 defocused LR images is illustrated in Fig. 4(e). After processing the image offsets with the proposed method, the recovered positional parameters (Δx=0.997mm, Δy=1.007mm, θ=5.243°, h=92.15mm), obtained within 0.020s, are in good agreement with the set parameters.

To further verify the performance of pdcFPM, we perform a set of simulations with different positional parameters. At the same time, we use another, data-driven, position deviation correction approach termed SC-FPM as a comparison; both the recovered positional parameters and the processing times of pdcFPM and SC-FPM are shown in Table 1. The corrected parameters of the two methods are all in good agreement with the actual parameters, but the processing time of pdcFPM is three orders of magnitude smaller than that of SC-FPM.

Table 1. Recovered positional parameters and processing time of pdcFPM and SC-FPM.

Actual parameters              pdcFPM                                   SC-FPM
Δx(mm), Δy(mm), θ(°), h(mm)    Δx_u, Δy_u, θ_u, h_u         Time (s)    Δx_u, Δy_u, θ_u, h_u         Time (s)
1.0, 1.0, 5.0, 92              0.997, 1.007, 5.243, 92.15   0.020       1.051, 0.995, 4.914, 91.95   13.98
1.5, 1.5, 5.0, 92              1.494, 1.503, 5.200, 92.20   0.019       1.513, 1.447, 4.931, 92.12   13.96
2.0, 2.0, 5.0, 92              2.001, 1.999, 4.949, 92.04   0.027       1.940, 2.021, 4.854, 91.80   13.76
2.5, 2.5, 5.0, 92              2.499, 2.488, 4.951, 92.92   0.025       2.474, 2.460, 4.995, 91.97   14.06

Next, we utilize the recovered positional parameters to calibrate the position of each sub-spectrum in the Fourier domain while reconstructing the HR images, and compare the performance of pdcFPM with that of EPRY-FPM and SC-FPM. Figures 5(a1) and (a2) show the ideal HR intensity and phase profiles. Figures 5(b1) and (c1) show the recovered HR intensity and phase images with EPRY-FPM when the introduced parameters are (Δx=1mm, Δy=1mm, θ=5°, h=92mm). The corresponding HR intensity and phase images recovered with SC-FPM and pdcFPM are shown in Figs. 5(d1)-(e1) and 5(f1)-(g1), respectively, where all artifacts have been eliminated and the recovered results are about the same. However, as reported in Ref. [22], SC-FPM is an iterative data-driven method based on simulated annealing and requires at least 15 iterations (about 14s for an LR image of 128×128 pixels) to guarantee a stable solution, while only 0.020s is needed with pdcFPM. Figures 5(b2)-(b4) and 5(c2)-(c4) show the recovered results with EPRY-FPM under different positional deviations, where obvious and regular wrinkle artifacts disturb the observation of the recovered information. After correcting the positional deviations with pdcFPM, the details of both the intensity and phase images become resolvable, as shown in Figs. 5(f2)-(f4) and 5(g2)-(g4), which illustrates the effectiveness of pdcFPM for calibrating arbitrary positional deviations.

Fig. 5.

Recovered HR images of EPRY-FPM, SC-FPM and pdcFPM. (a1) and (a2) are the ideal HR intensity and phase profiles; (b1)-(b4) and (c1)-(c4) are the recovered HR intensity and phase images with EPRY-FPM; (d1)-(d4) and (e1)-(e4) are the recovered HR intensity and phase images with SC-FPM; (f1)-(f4) and (g1)-(g4) are the recovered HR intensity and phase images with pdcFPM.

4. Experiment

To evaluate the effectiveness of pdcFPM experimentally, we use a USAF-1951 resolution target as the sample to compare the reconstructed intensity distribution of one image segment (256×256 pixels) and the pupil function with EPRY-FPM, SC-FPM, and pdcFPM, respectively. A 21×21 LED array (2.5mm spacing, central wavelength 470nm with 20nm bandwidth), an objective lens (NA = 0.1, A = 4), and a camera (FLIR, BFS-U3-200S6M-C, sensor size 1", dynamic range 71.89 dB, pixel size 2.4µm) are used to build the FPM system, as shown in Fig. 2(a).

Figure 6 shows the experimental results with different LED array positional deviations. Figure 6(a) is the full-FOV LR image, and the small rectangle shown in Fig. 6(b1) is used to calculate the image offset, which simplifies the calculation and accelerates the subpixel registration algorithm. Figure 6(b2) is an LR image segment of Fig. 6(a), which is blurry because of the restriction of the low-NA objective lens. Figures 6(c1) and (c2) show the reconstructed HR intensity image and pupil function without positional deviation. The HR intensity image has excellent imaging quality, and the reconstructed pupil has good symmetry, which is consistent with prior knowledge. Next, we translate the LED array by 2mm along the y-axis with a mechanical adjustment stage; the corresponding reconstructed intensity image and pupil function with EPRY-FPM are shown in Figs. 6(d1) and (d2), where wrinkle artifacts appear. The degradation of imaging quality is not severe, since EPRY-FPM identifies the positional deviation of the LED array and treats it as a kind of aberration. EPRY-FPM therefore couples the positional deviation into the reconstructed pupil function, and compared with Fig. 6(c2), the pupil in Fig. 6(d2) is distorted. Similarly, when we use the mechanical adjustment stage to move the LED array by 2mm along the x-axis and y-axis simultaneously, the reconstructed HR intensity image and pupil function with EPRY-FPM shown in Figs. 6(d3) and (d4) are severely degraded, and the details of the resolution line pairs are irresolvable.

Fig. 6.

Experimental performance comparison of EPRY-FPM, SC-FPM, and pdcFPM. (a) is a full FOV LR image; (b1) is the small segment used to calculate image offset; (b2) is a blurry LR image of 256×256 pixels in (a); (c1) and (c2) are the reconstructed intensity image and pupil function with EPRY-FPM when there is no positional deviation; (d1)-(d2), (e1)-(e2), and (f1)-(f2) are the reconstructed intensity images and pupil functions of EPRY-FPM, SC-FPM, and pdcFPM when Δy = 2mm, respectively; (d3)-(d4), (e3)-(e4), and (f3)-(f4) are the reconstructed intensity images and pupil functions of EPRY-FPM, SC-FPM, and pdcFPM when Δx = 2mm and Δy = 2mm, respectively.

Groups (e) and (f) show the reconstructed HR intensity images and pupil functions of SC-FPM and pdcFPM, respectively. When the LED array is moved by 2mm along the y-axis, the reconstructed results of SC-FPM after 35 iterations are shown in Figs. 6(e1) and (e2), where the wrinkles have been removed with the final parameters Δx=0.175mm, Δy=1.419mm, θ=0.111°, h=91.83mm. However, the recovered parameters are not consistent with the introduced ones, 321.02s is consumed to obtain an intensity image with low contrast, and, compared with Fig. 6(d2), the pupil function is distorted even more. Figures 6(f1) and (f2) present the reconstructed results of pdcFPM with the corrected positional parameters Δx=0.279mm, Δy=1.980mm, θ=0.393°, h=93.39mm, which match the actual positional deviations well; only 0.02s and 26.52s are needed to obtain the accurate parameters and the high-quality reconstructed images, respectively. When the LED array is shifted by 2mm along both the x-axis and y-axis, the reconstructed results of SC-FPM after 30 iterations, with the final parameters Δx=1.028mm, Δy=1.297mm, θ=1.395°, h=94.74mm, are shown in Figs. 6(e3) and (e4), where the parameters differ greatly from the truth and the imaging quality is unacceptable owing to the low contrast and the distortion of the pupil function. Figures 6(f3) and (f4) show the reconstructed results of pdcFPM with the corrected positional parameters Δx=1.949mm, Δy=1.997mm, θ=0.327°, h=93.04mm, where the reconstructed intensity image has a uniform background and higher contrast, and each line pair can be resolved.

In addition, to verify the generalizability of our method, we use a paramecium slice as the sample and randomly move and rotate the LED array to an arbitrary position. Figure 7(a) shows the full-FOV LR image, and Fig. 7(b) is the enlargement of the small segment in the red box in Fig. 7(a). The reconstructed intensity, phase, and pupil function with EPRY-FPM are shown in Figs. 7(c1)-(c3), respectively. Many wrinkle artifacts appear in both the reconstructed intensity and phase distributions, severely decreasing the reconstruction quality and the image contrast. The reconstructed intensity, phase, and pupil function with the recovered parameters Δx=1.680mm, Δy=0.089mm, θ=5.135°, h=93.15mm are shown in Figs. 7(d1)-(d3), respectively. Compared with Figs. 7(c1)-(c3), the artifacts caused by the positional deviations have vanished and the imaging quality is greatly improved.

Fig. 7.

Reconstructed results of pdcFPM for a random positional deviation. (a) is the full FOV LR image of the paramecium slice; (b) is the enlargement of one small segment in (a); (c1)-(c3) are the reconstructed intensity, phase and pupil with EPRY-FPM; (d1)-(d3) are the reconstructed intensity, phase and pupil with pdcFPM.

5. Discussion

In this paper, we propose a positional correction method, termed pdcFPM, that yields high-quality reconstructed intensity, phase, and pupil function. The feasibility and effectiveness of pdcFPM are verified by both simulations and experiments. Unlike the existing complex data-driven optimization strategies, pdcFPM builds a clear physical model that effectively separates the positional deviations from the many other coupled systematic errors. We utilize the relationship between the offset of the defocused image and the illumination direction to first obtain the four key positional parameters of the LED array, and then precisely calibrate the position of each sub-spectrum during phase retrieval and aperture synthesis. The comparative experimental results for large-scale positional deviations demonstrate the excellent performance of pdcFPM. Moreover, even when the positional deviations are so large (larger than the interval between adjacent LEDs) that the central LED would be misidentified, our method can still achieve good recovery by rearranging the updating order. Thus, our method can handle arbitrary positional deviations.
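To make the geometry concrete, the sketch below shows how the four positional parameters (the lateral offsets Δx and Δy, the in-plane rotation θ, and the sample-to-board distance h) determine each LED's illumination direction and hence the center of its sub-spectrum. The function name, the sign convention, and the example numbers are our assumptions for illustration, not the paper's code:

```python
import numpy as np

def led_wavevectors(m, n, spacing, dx, dy, theta_deg, h, wavelength):
    """Spatial-frequency coordinates of the sub-spectrum for the LED at
    grid index (m, n), given the four positional parameters of the board."""
    t = np.deg2rad(theta_deg)
    # nominal grid position, rotated in-plane and shifted laterally
    x = (m * spacing) * np.cos(t) - (n * spacing) * np.sin(t) + dx
    y = (m * spacing) * np.sin(t) + (n * spacing) * np.cos(t) + dy
    r = np.sqrt(x**2 + y**2 + h**2)
    # direction cosines of the illumination, scaled to spatial frequency
    kx = -x / (r * wavelength)
    ky = -y / (r * wavelength)
    return kx, ky

# e.g. LED (1, 0) on a 4 mm pitch board, 90 mm below the sample, 0.5 um light
kx, ky = led_wavevectors(1, 0, 4e-3, 0.0, 0.0, 0.0, 90e-3, 0.5e-6)
```

Once (kx, ky) is known for every LED, each sub-spectrum can be placed at the corrected position in the Fourier domain before the usual update step.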

Although pdcFPM can correct large-scale positional deviations and reduce the complexity of system construction, it places high demands on the accuracy of the focusing knob used for defocus adjustment, because the image offset to be measured is as small as tens of microns and is easily corrupted. A focusing device with poor accuracy can still be used, because the image offset introduced by the mechanical adjustment itself can be calibrated before the experiment. To further reduce the hardware requirements and eliminate the defocus-adjustment step, we are working on LED array positional correction based on the characteristics of the bright-field-to-dark-field transition boundary.
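To see why such accuracy matters, a quick back-of-the-envelope check (with assumed, not measured, numbers): under oblique illumination, a defocus Δz shifts the image laterally by roughly Δz·tanθ.

```python
# Rough sanity check with assumed numbers (not the paper's measured values).
dz = 100e-6                # assumed defocus distance: 100 um
led_offset = 31.25e-3      # an off-axis LED ~31 mm from the optical axis
h = 90e-3                  # nominal sample-to-LED-board distance
tan_theta = led_offset / h
shift = dz * tan_theta     # lateral image shift caused by the defocus
print(f"{shift * 1e6:.1f} um")   # → 34.7 um: indeed tens of microns
```

Any uncontrolled axial error in the focusing mechanism therefore perturbs an offset of comparable magnitude, which is why the knob accuracy (or a prior calibration of it) matters.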

Acknowledgements

The authors acknowledge Guoan Zheng for the valuable discussions.

Funding

National Natural Science Foundation of China (61735003, 61805011).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Zheng G., Horstmeyer R., Yang C., “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). doi:10.1038/nphoton.2013.187
2. Ou X., Horstmeyer R., Yang C., Zheng G., “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). doi:10.1364/OL.38.004845
3. Pan A., Zuo C., Yao B., “High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine,” Rep. Prog. Phys. 83(9), 096101 (2020). doi:10.1088/1361-6633/aba6f0
4. Lohmann A. W., Dorsch R. G., Mendlovic D., Ferreira C., Zalevsky Z., “Space-bandwidth product of optical signals and systems,” J. Opt. Soc. Am. A 13(3), 470 (1996). doi:10.1364/JOSAA.13.000470
5. Fienup J. R., “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). doi:10.1364/AO.21.002758
6. Alexandrov S. A., Hillman T. R., Gutzler T., Sampson D. D., “Synthetic aperture Fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006). doi:10.1103/PhysRevLett.97.168102
7. Pacheco S., Zheng G., Liang R., “Reflective Fourier ptychography,” J. Biomed. Opt. 21(2), 026010 (2016). doi:10.1117/1.JBO.21.2.026010
8. Lee H., Chon B. H., Ahn H. K., “Reflective Fourier ptychographic microscopy using a parabolic mirror,” Opt. Express 27(23), 34382 (2019). doi:10.1364/OE.27.034382
9. Dong S., Horstmeyer R., Shiradkar R., Guo K., Ou X., Bian Z., Xin H., Zheng G., “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586 (2014). doi:10.1364/OE.22.013586
10. Wakonig K., Diaz A., Bonnin A., Stampanoni M., Bergamaschi A., Ihli J., Guizar-Sicairos M., Menzel A., “X-ray Fourier ptychography,” Sci. Adv. 5(2), eaav0282 (2019). doi:10.1126/sciadv.aav0282
11. Chan A. C. S., Kim J., Pan A., Xu H., Nojima D., Hale C., Wang S., Yang C., “Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes),” Sci. Rep. 9(1), 11114 (2019). doi:10.1038/s41598-019-47146-z
12. Tian L., Waller L., “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104 (2015). doi:10.1364/OPTICA.2.000104
13. Horstmeyer R., Chung J., Ou X., Zheng G., Yang C., “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). doi:10.1364/OPTICA.3.000827
14. Chowdhury S., Eldridge W. J., Wax A., Izatt J., “Refractive index tomography with structured illumination,” Optica 4(5), 537 (2017). doi:10.1364/OPTICA.4.000537
15. Horstmeyer R., Ou X., Zheng G., Willems P., Yang C., “Digital pathology with Fourier ptychography,” Comput. Med. Imaging Graph. 42, 38–43 (2015). doi:10.1016/j.compmedimag.2014.11.005
16. Williams A., Chung J., Ou X., Zheng G., Rawal S., Ao Z., Datar R., Yang C., Cote R., “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014). doi:10.1117/1.JBO.19.6.066007
17. Song P., Jiang S., Zhang H., Huang X., Zhang Y., Zheng G., “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). doi:10.1063/1.5090552
18. Dong S., Nanda P., Guo K., Liao J., Zheng G., “Incoherent Fourier ptychographic photography using structured light,” Photonics Res. 3(1), 19 (2015). doi:10.1364/PRJ.3.000019
19. Zhang S., Zhou G., Wang Y., Hu Y., Hao Q., “A simply equipped Fourier ptychography platform based on an industrial camera and telecentric objective,” Sensors 19(22), 4913 (2019). doi:10.3390/s19224913
20. Zheng G., Shen C., Jiang S., Song P., Yang C., “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). doi:10.1038/s42254-021-00280-y
21. Sun J., Chen Q., Zhang Y., Zuo C., “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336 (2016). doi:10.1364/BOE.7.001336
22. Pan A., Zhang Y., Zhao T., Wang Z., Dan D., Yao B., “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(09), 1–11 (2017). doi:10.1117/1.JBO.22.9.096005
23. Zhang J., Tao X., Sun P., Zheng Z., “A positional misalignment correction method for Fourier ptychographic microscopy based on the quasi-Newton method with a global optimization,” Opt. Commun. 452, 296–305 (2019). doi:10.1016/j.optcom.2019.07.046
24. Wei H., Du J., Liu L., He Y., Yang Y., Hu S., Tang Y., “Accurate and stable two-step LED position calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 26(10), 106502 (2021). doi:10.1117/1.JBO.26.10.106502
25. Zhang S., Zhou G., Zheng C., Li T., Hu Y., Hao Q., “Fast digital refocusing and depth of field extended Fourier ptychography microscopy,” Biomed. Opt. Express 12(9), 5544 (2021). doi:10.1364/BOE.433033
26. Zhou G., Zhang S., Zhai Y., Hu Y., Hao Q., “Single-shot through-focus image acquisition and phase retrieval from chromatic aberration and multi-angle illumination,” Front. Phys. 9, 648827 (2021). doi:10.3389/fphy.2021.648827
27. Jiang S., Bian Z., Huang X., Song P., Zhang H., Zhang Y., Zheng G., “Rapid and robust whole slide imaging based on LED-array illumination and color-multiplexed single-shot autofocusing,” Quant. Imaging Med. Surg. 9(5), 823–831 (2019). doi:10.21037/qims.2019.05.04
28. Ou X., Zheng G., Yang C., “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960 (2014). doi:10.1364/OE.22.004960

