Nat Protoc. Author manuscript; available in PMC: 2025 Apr 27.
Published in final edited form as: Nat Protoc. 2023 May 29;18(7):2051–2083. doi: 10.1038/s41596-023-00829-4

Spatial and Fourier domain ptychography for high-throughput bio-imaging

Shaowei Jiang 1,5, Pengming Song 1,5, Tianbo Wang 1,5, Liming Yang 1, Ruihai Wang 1, Chengfei Guo 1,2, Bin Feng 1, Andrew Maiden 3,4, Guoan Zheng 1,*
PMCID: PMC12034321  NIHMSID: NIHMS2070644  PMID: 37248392

Abstract

First envisioned for solving crystalline structures, ptychography has proliferated in recent years and become an indispensable imaging tool in most national laboratories worldwide. However, this technique is little known to biomedical researchers due to its limited resolution and throughput in the visible light regime. New developments of spatial- and Fourier-domain ptychography have successfully addressed these issues and thus offer a unique solution for high-resolution, high-throughput optical imaging with minimal hardware modifications. Remarkably, they can shatter the intrinsic trade-off between resolution and field of view of imaging systems, allowing researchers to have the best of both worlds. Here, we aim to disseminate information that allows ptychography to be implemented by biomedical researchers in the visible light regime. We first discuss the intrinsic connections between spatial-domain coded ptychography and Fourier ptychography. A step-by-step guide is then provided for developing both systems. In the spatial-domain implementation, we show that a large-scale, high-performance blood-cell lens can be made in minutes at negligible expense. In the Fourier-domain implementation, we show that a low-cost light source added to a regular microscope can improve the resolution beyond the limit of the objective lens. The turnkey operations of these platforms can generate widespread impacts for both professional research laboratories and citizen scientists worldwide. Users with basic experience in optics and programming can complete the setups within a week. The DIY nature of the platforms also allows this protocol to be implemented in lab courses related to Fourier optics, biomedical instrumentation, digital image processing, robotics, and capstone projects.

Introduction

The original concept for ptychography was developed in 1969 to solve the phase problem of electron crystallography1. By measuring diffraction data as a narrow coherent probe beam translates across a crystalline specimen, it aimed to extract the phase of Bragg peaks and thereby recover a real-space image of the crystal structure. The name ‘ptychography’ (pronounced tie-KOH-gra-fee) is derived from the Greek ptycho, meaning ‘to fold’; the German word for convolution, Faltung, likewise means folding2. In the original concept, the spatial-domain interaction between the probe beam and the crystalline specimen can be modelled by a Fourier-domain convolution between the Bragg peaks and the probe’s Fourier spectrum. The name remains appropriate today for more recent implementations, where the convolution in reciprocal space remains a key aspect of the technology.

Since its first conceptualization in 1969, it took a few significant developments to make ptychography a practical and appealing imaging technique. One notable advance is the adoption of an iterative phase retrieval framework for image reconstruction, which brings the technique to its modern form3 (Box 1). The experimental procedure remains the same: the object is translated through a spatially confined probe beam and the corresponding diffraction patterns are recorded in reciprocal space. Unlike the original concept, which used an analytic inversion method but required an impractically large number of measurements, modern ptychography iteratively recovers the object from a significantly smaller dataset by imposing two sets of constraints. In real space, the spatially confined probe beam serves as the support constraint to limit the physical extent of the object for each measurement. In reciprocal space, the diffraction measurements serve as the Fourier magnitude constraints for the estimated solution. The iterative reconstruction process of ptychography essentially looks for an object estimate that satisfies both constraints. As the object translates over the confined probe beam, ptychography generates overlapping areas of illumination that are captured by the image sensor in the far field. The overlap between the areas illuminated in adjacent measurements (panel b of Box 1) is the key to resolving ambiguities in the solution with additional information and to accelerating the convergence of the phase retrieval process.

Box 1 ∣. The phase problem and the modern form of ptychography.

Phase information characterizes how much the light wave is delayed by propagation through a sample. Light detectors, however, can only measure intensity variations of the light wave; the loss of the associated phase information is termed the phase problem. The name comes from the field of crystallography4, where the phase problem needs to be solved to determine the crystal structure from far-field diffraction measurements. Iterative phase retrieval is one solution to the phase problem. The method was pioneered by the work of Gerchberg and Saxton5, which dealt with the problem of recovering a complex-valued image from magnitude measurements at two different planes. The reconstruction algorithm typically consists of iteratively imposing different constraints on the object, for example, its measured real-space absorption, its measured squared-modulus Fourier spectrum, a finite support area, or its phase-only transmittance for a transparent object.

The modern form of ptychography adopts the iterative phase retrieval framework to address the phase problem. A typical experimental procedure is shown in panel a, where the complex-valued object O(x, y) is translated through a spatially confined probe beam P(x, y). The product of the object profile and the probe beam propagates to the far field via a Fourier transform. We therefore obtain the diffraction measurement as Ii(kx, ky) = |FT{O(x − xi, y − yi) · P(x, y)}|², where (kx, ky) are the coordinates of the Fourier plane, (xi, yi) is the ith translated position of the object, and FT denotes the Fourier transform. The resulting dataset Ii (i = 1, 2, 3, …), termed a ptychogram, is the collection of diffraction measurements for all translated positions of the object. The reconstruction process in panel b proceeds iteratively and is expedited by imposing two different constraint sets to recover the object profile. The first is the support constraint for each measurement in real space, imposed by setting the signals outside the probe beam area to zero while keeping those inside the area unchanged. The second is the Fourier magnitude constraint in reciprocal space, implemented by replacing the modulus of the estimated pattern at the detector plane with the measurement while keeping the phase unchanged. The iterative process converges to an object recovery with both intensity and phase properties, as shown in panel b.
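To make the two-constraint loop concrete, the following MATLAB sketch implements one simplified PIE-style iteration under assumptions we state here rather than in the protocol: the probe P is known and the same size as the object, the scan positions (xi, yi) are known integer pixel shifts, and the measured intensities are stored in a cell array I with fftshift-centred spectra. All variable names are illustrative, not the authors' code.

    % One simplified ptychographic iteration: real-space support via the probe,
    % Fourier magnitude replacement at the detector plane (PIE-style update)
    for i = 1:numel(I)
        objShift = circshift(objEst, [-yi(i), -xi(i)]);    % bring region i under the probe
        exitWave = objShift .* P;                          % support constraint in real space
        spectrum = fftshift(fft2(exitWave));               % propagate to the far field
        spectrum = sqrt(I{i}) .* exp(1i*angle(spectrum));  % Fourier magnitude constraint
        exitNew  = ifft2(ifftshift(spectrum));
        objShift = objShift + conj(P)./max(abs(P(:)).^2) .* (exitNew - exitWave);
        objEst   = circshift(objShift, [yi(i), xi(i)]);    % shift back to the global frame
    end

Repeating this loop over all positions for tens of iterations typically drives the estimate toward a solution satisfying both constraint sets.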


With the help of the iterative phase retrieval framework, ptychography has since evolved into an enabling microscopy technique where image-forming optics are replaced by computational reconstructions. It does not require a stable reference beam as in holography; thus, partially coherent light sources such as LEDs can be used for sample illumination6. It also lifts the isolated-object requirement imposed by conventional lensless diffraction imaging approaches7,8, allowing contiguously connected samples to be imaged over an extended area. Its lensless nature allows it to be implemented in the extreme ultraviolet (EUV) and X-ray regimes, where high-resolution lenses are costly and challenging to make9,10, and crucially it also produces a complex-valued image that reveals both the absorption and phase-shift properties of the specimen. This capability for quantitative phase imaging (QPI)11 enables high-contrast, label-free imaging of transparent biospecimens and is one of the main motivations for visible light and electron microscopy implementations. The richness of the ptychographic dataset further allows it to characterize optical components in the experimental setup. For example, it can measure and remove the effect of partial spatial and temporal coherence in light sources6,12 and can recover and computationally compensate for optical aberrations in microscope systems13. Over the past decade, this unique set of benefits offered by ptychography has attracted significant attention from different research communities. For X-ray microscopy, it has become an indispensable imaging tool at most synchrotrons and national laboratories worldwide14. For electron microscopy, recent developments have pushed the imaging resolution to a record-breaking deep sub-angstrom level15.

The applications of ptychography in the visible light regime, however, are limited by its relatively low imaging throughput and resolution. As a result, this technique is little known to biomedical researchers. New developments of spatial and Fourier domain ptychography16-20 have successfully addressed these issues and thus offer a unique solution for high-resolution, high-throughput optical imaging with minimal hardware modifications. Remarkably, they can shatter the intrinsic trade-off between resolution and field of view of optical systems21, allowing researchers to have the best of both worlds. The demonstrated bio-imaging applications include high-throughput digital pathology17,18,22,23, 2D and 3D quantitative phase imaging24-29, large-scale live-cell monitoring with sub-cellular resolution19,30,31, microbial growth detection with nanometre topographic sensitivity20, antimicrobial susceptibility testing20, urine sediment screening19, differential blood count17,32, cytometric analysis19,31, and optofluidic screening33, among others. Image acquisition speed is now orders of magnitude higher than conventional robotic microscopes at a small fraction of the cost17.

In this protocol, we aim to facilitate the uptake of ptychography by biomedical researchers in different fields with a comprehensive step-by-step guide for system development. In the spatial-domain implementation, we show that a high-performance blood-cell lens can be made in minutes at negligible expense. In the Fourier-domain implementation, we show that a low-cost light source added to a regular light microscope can improve the resolution beyond the diffraction limit of the objective lens. Users with basic experience in optics and programming can complete the set-ups and perform ptychographic reconstructions in about one week. The DIY nature of the set-ups also allows this protocol to be implemented in lab courses related to Fourier optics, biomedical instrumentation, digital image processing, robotics, and capstone design projects.

New implementations of spatial- and Fourier-domain ptychography

Figure 1 summarizes the new implementations of the spatial-domain coded ptychography (CP) and the Fourier-domain ptychography (FP). We developed the FP approach in 2013 to improve the resolution of a regular lens-based microscope beyond the diffraction limit set by the objective lens16. A typical FP set-up consists of a programmable LED array and a regular microscope with a low numerical aperture (NA) objective lens. The system configuration in Fig. 1 defines three planes for the FP set-up: the specimen at the object plane (x, y), the pupil aperture at the Fourier plane (kx, ky), and the detector at the image plane (x′, y′). The objective lens performs a Fourier transform to convert the light waves from the object plane (x, y) to the aperture plane (kx, ky). The tube lens performs a second Fourier transform to convert the light waves from the aperture plane (kx, ky) to the image plane (x′, y′). In this imaging model, the pupil aperture at the Fourier plane effectively serves as a low-pass filter for the measurement in the image plane.

Fig. 1∣. Overview of the spatial-domain coded ptychography (CP) and Fourier-domain ptychography (FP).


(Left) In the Fourier-domain implementation, a programmable LED array is used to illuminate the object with different plane waves. The captured images are used to recover the object spectrum in the Fourier domain. The recovered spectrum is then converted back to the spatial domain to obtain a complex-valued object image. (Right) In the spatial-domain implementation, a coded surface on the image sensor (e.g., a blood-cell layer) is used to modulate the diffracted light waves from the object. By translating the object (or the coded sensor) to different lateral positions, we capture a set of images to recover the object's exit wavefront at the coded surface plane. This wavefront is then propagated to the object plane to obtain a high-resolution complex-valued object image.

In the operation of FP, the LED array illuminates the object with different incident angles and the system records the corresponding low-resolution intensity images Ii(x, y) (i = 1, 2, …). Each measurement corresponds to the information from a circular aperture region in Fourier space (see the ‘Spectrum Ô(kx, ky)’ image in Fig. 1). The size of the aperture is determined by the NA of the objective while its offset from the origin is determined by the illumination wave vector (kxi, kyi) of the ith LED element. Following the iterative phase retrieval process, we can synthesize the measurements in the Fourier domain and recover the object spectrum Ô(kx, ky). The high-resolution object image O(x, y) can then be obtained by transforming the synthesized spectrum back to the spatial domain (see the imaging model section in Fig. 1). The resolution of this recovered image is no longer limited by the employed objective lens. Instead, it is determined by the maximum incident angle of the LED array: the larger the maximum incident angle, the higher the resolution. At the same time, this image retains the original large field of view of the low-NA objective lens.
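As a minimal illustration of this synthesis step, the MATLAB sketch below updates the object spectrum with each low-resolution measurement. It assumes an aberration-free binary pupil mask CTF (stored as a double array), integer offsets (cx(i), cy(i)) giving the top-left corner of the ith aperture region in the spectrum estimate O_hat, and measurements stored in a cell array I; all names are illustrative and normalization factors are omitted.

    m = size(CTF, 1);                                % size of the low-resolution patch
    for i = 1:numel(I)
        rows = cy(i) + (1:m);  cols = cx(i) + (1:m); % aperture region for LED i
        lowRes = ifft2(ifftshift(O_hat(rows, cols) .* CTF));  % low-pass filtered image
        lowRes = sqrt(I{i}) .* exp(1i*angle(lowRes));         % impose the measured intensity
        newSpec = fftshift(fft2(lowRes));
        % write the updated spectrum back into the aperture region only
        O_hat(rows, cols) = O_hat(rows, cols) .* (1 - CTF) + newSpec .* CTF;
    end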

Since its first demonstration, FP has evolved from a simple microscope tool to a general technique for different research communities34. For example, the FP concept has been integrated with diffraction tomography for high-resolution 3D microscopy imaging25,35. The concept has also been adopted in long-range super-resolution imaging36,37 that deviates from the original microscopy imaging goal.

Unlike the lens-based FP system, the spatial-domain CP was developed to perform high-resolution, high-throughput imaging without using any optical lens17,19. It replaces the lens system in FP with a simple blood-cell layer coated on the image sensor. The system configuration in Fig. 1 defines three planes for a typical CP set-up: the specimen at the object plane (x, y), the blood-cell layer at the modulation plane (x′, y′), and the detector at the image plane (x″, y″). The light waves propagate for a distance d1 from the object plane to the modulation plane, and a distance d2 from the modulation plane to the image plane. The blood-cell layer redirects light diffracted at large angles by the object to smaller angles that can be detected by the pixel array. Therefore, previously inaccessible high-resolution object details can now be acquired using the image sensor. In this way, the blood-cell layer serves as a high-resolution bio-lens with a theoretically unlimited field of view. It can unlock an optical space with spatial extent (x, y) and spatial frequency content (kx, ky) that is inaccessible using conventional lens-based optics.

In the operation of CP, we first couple laser light into a single-mode fibre and use it to illuminate the object. By translating the object (or the blood-coated sensor) to different lateral positions, the system records a set of intensity images Ii(x, y) (i = 1, 2, …). Following the iterative phase retrieval process, we can recover the complex object exit wavefront W(x, y) at the modulation plane. This wavefront is then propagated back to the object plane to obtain the high-resolution object profile O(x, y).
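The imaging model behind this recovery can be sketched in MATLAB as follows, with free-space propagation implemented by the angular spectrum method. The object field objField, the coded layer transmission codedLayer, the shift (xi, yi), and the distances d1 and d2 are illustrative inputs, and the helper function is assumed to sit at the end of the script (or in its own file).

    % i-th CP measurement: shift the object, propagate d1 to the coded layer,
    % modulate by the layer, then propagate d2 to the sensor and take the intensity
    atCode = asProp(circshift(objField, [yi, xi]), d1, lambda, dx);
    Ii = abs(asProp(atCode .* codedLayer, d2, lambda, dx)).^2;

    function u2 = asProp(u1, dz, lambda, dx)
        % angular spectrum propagation of field u1 over distance dz (pixel size dx)
        [M, N] = size(u1);
        fx = (-N/2:N/2-1)/(N*dx);  fy = (-M/2:M/2-1)/(M*dx);
        [FX, FY] = meshgrid(fx, fy);
        gamma2 = 1/lambda^2 - FX.^2 - FY.^2;      % squared longitudinal frequency
        H = exp(1i*2*pi*dz*sqrt(max(gamma2, 0)));
        H(gamma2 < 0) = 0;                        % discard evanescent components
        u2 = ifft2(ifftshift(fftshift(fft2(u1)) .* H));
    end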

The intrinsic connections between FP and CP are summarized in the ‘Imaging model’ and ‘Operation’ sections of Fig. 1. With FP, the object is located in real space, and the two optical lenses transform it to Fourier space and then back to real space. The pupil aperture serves as the effective ptychographic probe in the Fourier plane. With CP, the object is located in real space, and the two propagation operations transform it to the modulation plane and then to the image plane. The blood-coated layer in CP serves as the effective ptychographic probe in the modulation plane. We can see that the two Fourier transform operations in FP are replaced by two free-space propagation operations in CP. Both FP and CP illuminate the specimen using an unconfined extended beam that covers a large area for image acquisition. This is a key consideration for high-throughput microscopy imaging.

One distinction between FP and CP is the light source employed: low-coherence LEDs for FP and a highly coherent laser for CP. This difference can be explained by the temporal coherence requirements of the systems. With FP, chromatic dispersion can be partially compensated by the lenses of the microscope: light with a small spectral bandwidth (~15 nm for LEDs) can be brought to the same image plane. With CP, on the other hand, light at different wavelengths will be dispersed to different axial planes after free-space propagation. Therefore, CP has a more stringent requirement on the temporal coherence of the light source, and a monochromatic laser is the proper choice.

Comparison with other methods

Drawing connections and distinctions between the two ptychographic implementations and other microscopy techniques helps to clarify and summarize their advantages and working principles. In the following, we discuss four related techniques: 1) the traditional ptychography approach (Box 1), 2) structured illumination microscopy, 3) common lensless microscopy techniques, and 4) lens-based robotic microscopes.

The traditional ptychography approach uses a spatially confined probe beam for object illumination (Box 1). Such a probe beam often varies between different experiments due to sample alignment or light-source instability. For example, it is challenging to place different specimens at exactly the same position within the confined probe beam. As a result, one may need to jointly recover both the object and the probe beam for each experiment38-40, a problem referred to as blind ptychography41,42. Unlike the confined probe used in traditional ptychography, FP uses the pupil aperture and CP uses the blood-cell layer as the effective probe. These probes are hardcoded into the imaging systems. Once characterized, they remain unchanged for all subsequent experiments. However, we note that the pupil aperture of FP varies at different spatial locations of the object’s field of view13,43. Characterizing such spatially varying pupil aberrations often requires expert knowledge of aberration modes43. In contrast, CP has the advantage that it does not use any optical lens in the set-up. As a result, its ptychographic probe (the blood-cell layer) remains spatially invariant for different regions of the object.

Structured illumination microscopy (SIM) uses non-uniform intensity patterns (e.g., sinusoidal and random speckle patterns) to modulate the high-frequency object information into the passband of the optical system44,45. In contrast, different plane waves are used for sample illumination in FP. The real-space product of the object and the plane wave corresponds to a shift of the object spectrum in Fourier space. Therefore, the otherwise undetectable high-frequency object details can now be seen by the detector. Similarly, CP uses the blood-cell layer to redirect the light waves with large diffraction angles into smaller angles for detection. As such, the blood-cell layer serves the role of the non-uniform illumination pattern in SIM. We note, however, that the intensity-based nature of SIM allows it to be implemented for incoherent fluorescence microscopy. On the other hand, FP and CP are coherent imaging modalities and thus cannot be directly applied to fluorescence imaging. When the same objective lens is used for both pattern projection and object detection in SIM, the resolution enhancement factor is limited to 2. The resolution enhancement factor of FP can well exceed 2 by using a low-NA lens for image acquisition. The highest synthetic NA demonstrated for FP is 1.9 in free space46, close to the maximum possible synthetic NA of 2. The resolution of CP is determined by the largest diffraction angle that can be detected, and this angle corresponds to an NA of 1 in free space. The currently achieved NA of 0.8 is close to this theoretical maximum17. We note that it is also possible to perform angle-varied illumination in CP to synthesize a maximum possible synthetic NA of 2.

Conventional multi-height lensless microscopy introduces different object-to-detector distances (or, equivalently, different illumination wavelengths) to obtain multiple measurements for phase retrieval. The concept was first proposed in 1968 for electron microscopy47 and shows great potential for lensless imaging in the visible light regime48-50. However, it is challenging for this technique to recover the correct phase of the object. The reason can be explained by the concept of the phase transfer function (PTF), which characterizes how phase contents of different spatial frequencies are transferred to intensity51. For low-frequency contents, the PTF is close to 0 for multi-height measurements. Therefore, the low-frequency contents are lost during the image acquisition process and cannot be restored by the subsequent phase retrieval process. We use the following simple example to illustrate this problem. In this thought experiment, we place an optical prism on top of the image sensor and illuminate it with a plane wave. The resulting multi-height measurements would be a constant value across the entire image, and the phase information of the prism cannot be converted into intensity variations for detection. Thus, it is impossible to restore the linear phase ramp from these measurements. The issue raised by this thought experiment also applies to other conventional lensless microscopy techniques, including the support-constraint approach7,8, digital inline holography52, the transport-of-intensity approach51,53, and blind ptychography38-40. With CP, on the other hand, the blood-cell layer can effectively convert the object phase information into distortions in the diffraction patterns. Thus, CP enables true quantitative phase recovery for all spatial-frequency contents with high sensitivity17. Also, the field of view of conventional lensless techniques is often limited to the area of the sensing surface, typically 20-40 mm². In contrast, the lateral translation operation of CP naturally expands the field of view beyond the limit set by the sensor size. With these combined advantages, we have recently demonstrated time-lapse monitoring of bacterial growth with a 120-mm² field of view, a 15-second-per-frame temporal resolution, and nanometre-range phase sensitivity20. We also note that it is possible to adopt the concept of CP for multi-height lensless imaging54. However, the achieved resolution is generally lower than that of CP due to the insufficient phase diversity provided by the multi-height measurements.
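The prism thought experiment can be verified numerically with the asProp helper sketched earlier: a pure linear phase ramp is a tilted plane wave, so its intensity stays flat at every propagation height and the multi-height stack carries no trace of it. The parameter values below are illustrative; the tilt is chosen to fit an integer number of cycles in the simulation window so the FFT-based propagation is exact.

    dx = 1.85e-6;  lambda = 405e-9;  n = 512;
    [x, ~] = meshgrid((0:n-1)*dx);
    prism = exp(1i*2*pi*40*x/(n*dx));   % linear phase ramp, 40 cycles across the window
    I1 = abs(asProp(prism, 0.5e-3, lambda, dx)).^2;
    I2 = abs(asProp(prism, 1.0e-3, lambda, dx)).^2;
    fprintf('intensity std at two heights: %.2e, %.2e\n', std(I1(:)), std(I2(:)));
    % both values are ~0: the phase ramp never converts into intensity contrast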

Lens-based robotic microscopy has advanced significantly in recent years, with a key milestone accomplished in 2017 when a whole slide scanner was approved for primary diagnostic use in the USA55. However, such systems are costly and often have demanding requirements on mechanical stability. Images of microscope slides can easily be defocused due to the small depth of field of the high-NA objective lens used in these systems56. Similarly, in culture-based experiments, one often needs to constantly adjust the focus knob during a time-lapse experiment. Current high-end autofocusing systems in robotic microscopes (e.g., Nikon Perfect Focus) perform real-time tracking of a reference surface at the interface between the air and the Petri dish. The user can then specify an offset distance relative to the reference surface for image acquisition. However, the thickness of the dish substrate is not uniform, and the offset distance varies when the user scans different regions of the dish. Thus, it is challenging to perform large-scale monitoring of Petri dishes or multi-well plates over time. With both FP and CP, one can perform post-measurement refocusing, which eliminates the need to maintain a precise distance between the sample and the imaging system. With CP, the recovered wavefront can be digitally propagated to any position along the optical axis post-measurement. The translated x-y positions of the object can also be directly recovered from the CP measurements, allowing open-loop optical acquisition without requiring any feedback from the mechanical stage. To improve imaging throughput, lens-based robotic microscopes scan the samples at high speed. The resultant motion blur needs to be addressed using pulsed illumination or time-delay-integration detection. In contrast, the scanning step size between adjacent CP measurements is at the micron level. Therefore, the sample can be in continuous motion without extra pulsed-illumination hardware. Lastly, it is challenging to perform parallel imaging using conventional robotic microscope systems. The scaling of complexity in array microscopy is an obstacle that prevents large-scale, high-throughput imaging for biomedical applications. For example, in drug screening, it is challenging to monitor cell culture growth in real time across all wells simultaneously. Parallel optical processing using FP and CP has been demonstrated using multiple image sensors17,29,31. With FP, a parallel microscope system can simultaneously image all wells on a 96-well plate30. With CP, we recently demonstrated that gigapixel high-resolution microscopic images with a 240-mm² effective field of view can be acquired in 15 seconds using an array of 8 coded sensors17.

Limitations

There are several limitations associated with both the FP and CP implementations. First, a successful reconstruction in FP and CP relies on an accurate imaging model of the system. With FP, the imaging model discussed in Fig. 1 (the product between the object and the plane wave) assumes that the object section is infinitesimally thin. For any practical specimen with a certain thickness, tilting the illumination plane wave would also change the object spectrum in addition to shifting it in Fourier space. Therefore, it is challenging for FP to image thick 3D samples like an optical lens or a large bacterial colony. While it is possible to partially address this issue by implementing diffraction tomography25,26 or multi-slice modelling57 in FP, the resulting computational cost may still be prohibitively high for biomedical researchers. In contrast, CP recovers the object exit wavefront at the blood-cell layer plane, avoiding direct modelling of the light-object interaction process. Its capability of handling thick specimens has been demonstrated by imaging inch-sized optical lenses, large bacterial colonies, and thick cytology smears from fine needle aspiration17,18,20.

Second, the LED light source used in FP has a low optical flux. A longer exposure time is often required to achieve an adequate signal-to-noise ratio when capturing darkfield images. The laser source for CP, on the other hand, has a high optical flux that permits a much shorter exposure time, allowing continuous sample motion during the image acquisition process. The laser source in CP, however, introduces coherent artefacts into the captured images. One prominent artefact is the interference fringe pattern caused by multiple reflections between different glass-air interfaces. To reduce the impact of this artefact, we can use a dense blood-cell layer on the image sensor and increase the number of acquisitions. Another solution is to coat a thin light-absorbing layer on the sensor’s coverglass. Light beams undergoing multiple reflections will be absorbed multiple times, thereby minimizing the coherent artefacts. A third solution is to use a laser with low spatial coherence but high temporal coherence. Such a light source could be the optimal choice for both FP and CP.

Third, both FP and CP are coherent imaging modalities. Their native forms cannot be used for incoherent fluorescence imaging. However, it is possible to extend the concepts for incoherent imaging, in a way similar to SIM58-61.

Fourth, in a time-lapse experiment, both FP and CP require significant computational resources for storing, transferring, and processing large datasets (terabytes of data). One strategy is to take advantage of the temporal correlation of the specimens and reduce the number of acquisitions. To this end, the reconstruction from the previous time point can be used as the initial guess for the current time point20,33.

Fifth, the distance between the object and the coded sensor is often less than 1 mm in CP. Therefore, the heat generated by the sensor may affect the specimen, especially in cell culture experiments. We can use heatsinks and small fans to partially address this issue. However, this passive solution is sub-optimal, as the temperature of the coded sensor surface remains higher than that of the surrounding environment. If temperature is a critical consideration for the experiment, an active thermoelectric cooler with temperature-sensor feedback is needed for heat management, which is beyond the scope of this protocol. Another solution is to integrate the lensless CP set-up with a microscope system so that the current lensless detector plane can be relayed to the remote image plane of the microscope62,63.

Overview of the protocol

The remainder of this protocol provides an in-depth guide for developing the fully-functional FP and CP systems. Figure 2 shows the overall workflow of the protocol: Steps 1-65 for FP system development and Steps 66-116 for CP.

Fig. 2∣. Overview of the procedure for developing the FP and CP platforms.


The procedure consists of system integration, system calibration, image acquisition and reconstruction, and application demonstrations.

The procedure for implementing FP begins with the development of a programmable light source (Steps 1-12). We discuss three designs of the light source: 1) a custom planar LED matrix built with small-pitch surface-mount elements, 2) an assembled planar LED array built with off-the-shelf LED boards, and 3) a non-planar pyramid LED illuminator. Next, the light source module is mounted on a regular light microscope for system integration (Steps 13-31). With a properly aligned LED source, we then provide two options to calibrate the incident angles of the different LED elements (Steps 32-48). The first option relies on the brightfield-to-darkfield transition features of the captured image. It can be used to calibrate an LED matrix with a well-defined grid pattern. The second option is to place a calibration target at a defocused plane and use a high-NA objective lens to acquire a set of images under illumination with different LED elements. Cross-correlation analysis is then performed to obtain the incident angle information from the captured images. This second angle-calibration option can be used for any freeform LED illuminator. Next, we calibrate the pupil aberration of the objective lens (Steps 49-56). In this process, we use a blood smear slide as the calibration object and acquire a set of images under illumination with different LED elements. The pupil aberration is then jointly recovered with the calibration object. Once the system is fully calibrated, we can run FP imaging experiments for different bio-specimens (Steps 57-65). Two approaches are adopted to shorten the acquisition time: 1) different camera gains are applied according to the illumination NAs, and 2) a portion of the LED elements are skipped in the acquisition process.

The procedure of implementing CP begins with the preparation of the blood-cell layer on the image sensor (Steps 66-73). A good CP imaging result can be obtained with a dense and uniform monolayer of goat blood cells on the sensor. Next, we discuss the development of a motorized stage for sensor translation (Steps 74-86). A low-cost XYZ 3-axis stage is modified by replacing its x and y manual actuators with stepper motors. Once the hardware development is completed, we then perform system calibration using a blood smear slide as the calibration object (Steps 87-106). The goal of this calibration process is to recover the transmission profile of the blood-cell coded layer on the image sensor. In this calibration experiment, we scan the coded sensor to different lateral positions and acquire the corresponding images for the joint recovery of the coded layer and the calibration object. The positional shift of the coded sensor is obtained using an iterative correlation-analysis approach (Box 3). With the recovered coded layer profile, we can then run CP imaging experiments for different bio-specimens (Steps 107-116).

Box 3 ∣. Tracking the motion of the coded sensor in CP.

In the image acquisition process, CP translates the blood-coated sensor to different x-y positions and records a set of intensity images Ii(x, y) (i = 1, 2, …) of the object. The translated positions of the coded sensor need to be recovered for the subsequent reconstruction process. The captured images, however, contain information about both the object and the blood-coated surface. To track the positional shift of the coded sensor, we need to minimize the impact of the blood-coded layer. To this end, we generate an initial reference image Iref(x, y) as follows33:

Iref(x, y) = I1(x, y) / Σ_{i=1}^{T} Ii(x, y)  (6)

Since the blood-cell layer remains stationary during the sensor translation process, its modulation effect can be approximated by the sum of all measurements, Σ_{i=1}^{T} Ii(x, y), in Eq. (6). Dividing by this term thus effectively removes the modulation effect of the blood-cell layer. The translational shift (xi, yi) of the ith measurement can then be identified by locating the maximum point of the cross-correlation map (panel a):

(xi, yi) = argmax_{(x, y)} {Iref ⋆ Ii}(x, y)  (7)

where ‘⋆’ denotes the cross-correlation operation. With the initially estimated shifts (xi, yi) from Eq. (7), we then update the reference image as follows (panel b):

Iref^update(x, y) = Σ_{i=1}^{T} Ii(x + xi, y + yi),  (8)

where the measurements are shifted back to the first translated position and summed to remove the modulation effect of the blood-cell layer. We typically repeat the processes in panels a and b 2-4 times to obtain a motion estimate of the coded sensor with deep sub-pixel accuracy (panel c).
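A minimal MATLAB sketch of this tracking loop is given below, assuming the T raw frames are stacked in a 3-D array I of size [M, N, T] and restricting the shifts to integer pixels (the protocol refines them to deep sub-pixel accuracy); xcorr2 requires the Signal Processing Toolbox, and all names are illustrative.

    [M, N, T] = size(I);
    Iref = I(:,:,1) ./ (sum(I, 3) + eps);        % Eq. (6): suppress the coded layer
    xi = zeros(1, T);  yi = zeros(1, T);
    for pass = 1:3                                % repeat panels a-b 2-4 times
        for i = 1:T
            frame = I(:,:,i);
            c = xcorr2(Iref - mean(Iref(:)), frame - mean(frame(:)));
            [~, idx] = max(c(:));                 % Eq. (7): correlation peak
            [py, px] = ind2sub(size(c), idx);
            yi(i) = M - py;  xi(i) = N - px;      % shift of frame i relative to frame 1
        end
        Iref = zeros(M, N);                       % Eq. (8): realign and sum
        for i = 1:T
            Iref = Iref + circshift(I(:,:,i), [-yi(i), -xi(i)]);
        end
    end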


The DIY nature of the FP and CP platforms allows them to be implemented with minimum resources. A competent student or any citizen scientist with basic knowledge of imaging optics, electronics, and programming can complete the setups within a week.

Experimental design

Hardware selection for FP.

Microscope selection for FP:

We use an upright microscope (Nikon, Eclipse Ci) as the imaging system in Fig. 3a. This system can be configured to image any bio-specimen mounted on a regular microscope slide. For imaging cell cultures in a Petri dish or multi-well plate, an inverted microscope can be used instead. One can also build a custom microscope using a low-NA objective lens and a tube lens.

Fig. 3∣. Hardware implementation of FP.


(a) An FP set-up built with an LED illuminator and an upright microscope. (b) Three different designs of the LED illuminator, including a custom-made planar LED matrix built with small-pitch surface mount elements (b1), an assembled planar LED array built with off-the-shelf LED boards (b2), and a non-planar pyramid LED illuminator for high-NA illumination (b3). Also refer to Supplementary Video 1 for the operation of the FP platform.

Objective lens selection for FP:

If the targeted application is to perform large-field-of-view imaging, we recommend the following low-NA objective lenses: 1) 2×, 0.1-NA objective lens (Nikon, Plan Apo), 2) 4×, 0.2-NA objective lens (Nikon, Plan Apo), 3) 2×, 0.1-NA objective lens (Thorlabs, TL2X-SAP), and 4) 4×, 0.2-NA objective lens (Thorlabs, TL4X-SAP). If the targeted application is to push the resolution limit to the theoretical maximum, one can choose an objective lens with a high NA, for example, a 40×, 0.95-NA (free-space) objective lens (Nikon, Plan Apo).

Camera selection for FP:

We recommend choosing the camera pixel size for FP according to64:

λ·Mag/(4·NAobj) ≤ sampling pixel size ≤ λ·Mag/(2·NAobj),  (1)

where Mag is the magnification factor of the microscope system, NAobj is the NA of the objective lens, and λ is the illumination wavelength. The right-hand side of Eq. (1) is the sampling condition for the coherent imaging system. If the camera pixel size is larger than this limit, it will cause aliasing issues for the measurements and require a sub-sampling scheme65,66 in the phase retrieval process. The left-hand side of Eq. (1) comes from the sampling condition for the incoherent imaging system. When the pixel size is smaller than this limit, it leads to data redundancy: no additional object information can be obtained compared to an image captured with a pixel size of λ·Mag/(4·NAobj). Instead, a pixel size smaller than this limit often implies a lower full-well capacity of the pixel and a higher level of read noise. For the FP system discussed in this protocol, we have λ = 470 nm, Mag = 2, and NAobj = 0.1. The preferred pixel size thus ranges from 2.35 μm to 4.7 μm according to Eq. (1). To this end, we recommend a low-cost 20-megapixel camera with a 2.4-μm pixel size (Sony IMX 183, model no. DMK 33UX183, The Imaging Source, ~$700). The relatively small pixel size of this sensor assures adequate sampling for most FP configurations. If budget allows, other large-format, high-pixel-count cameras can also be used for FP, for example, the Sony IMX 540 (24.5 megapixels, 2.75-μm pixel size, model no. BFS-U3-244S8M-C, Teledyne FLIR, ~$2600).
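The Eq. (1) window is easy to evaluate for any configuration; for the values used in this protocol, the bounds come out as stated above:

    lambda = 470e-9;  Mag = 2;  NAobj = 0.1;
    pixMin = lambda*Mag/(4*NAobj);   % incoherent sampling bound: 2.35 um
    pixMax = lambda*Mag/(2*NAobj);   % coherent sampling bound: 4.70 um
    fprintf('choose a pixel size between %.2f and %.2f um\n', pixMin*1e6, pixMax*1e6);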

Illuminator selection for FP:

We recommend using small and bright surface-mounted LED elements for building the illuminator. For the same illuminator-to-sample distance, a smaller pitch implies a larger spectrum overlap in Fourier space, thereby enabling an easier and faster FP reconstruction process. One can also bring the illuminator closer to the object to increase the optical flux per area. For illumination at large incident angles, the LED elements are best angled toward the sample to improve the light delivery efficiency. Figure 3b shows the designs of the three LED illuminators used in this protocol.

Software implementation of FP.

A successful FP reconstruction relies on the correct estimation of the illumination angle and the recovery of the spatially-variant pupil aberration. Box 2 discusses how to obtain the incident wavevectors of different LED elements using a calibration experiment. For pupil aperture recovery, we model the pupil aberration as a summation of different Zernike modes as follows:

Pupil(kx, ky) = circ(NA·2π/λ) · exp(i·Σ_{n=1}^{N} wn·Zn(kx, ky))  (5)

where circ(NA·2π/λ) represents a circular mask with a radius of NA·2π/λ, corresponding to the aperture size of the microscope system. The term Zn(kx, ky) in Eq. (5) represents the nth Zernike mode and wn represents the weight of this mode. In the pupil calibration experiment, we use a blood smear as the calibration object and recover the weights wn via gradient descent43. Once the weights wn (n = 1, 2, …) are recovered from the calibration experiment, we can generate the corresponding pupil using Eq. (5) and use it as the initial pupil for all subsequent imaging experiments. Figure 4 shows the reconstruction pipeline of the FP approach. It consists of three major steps: object initialization, iterative reconstruction, and post-processing transformation. To further refine the pupil aberration, we can jointly update the object and pupil using the rPIE algorithm67 in lines 11-12 of Fig. 4.
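For illustration, the sketch below builds the Eq. (5) pupil from three low-order Zernike modes (defocus and the two astigmatisms, in the Noll convention); the mode count, grid size, and weights are illustrative stand-ins for the N modes recovered via gradient descent.

    NA = 0.1;  lambda = 470e-9;  n = 256;
    kmax = NA*2*pi/lambda;                       % pupil radius in k-space
    [kx, ky] = meshgrid(linspace(-kmax, kmax, n));
    rho = sqrt(kx.^2 + ky.^2)/kmax;  theta = atan2(ky, kx);
    circMask = double(rho <= 1);                 % the circ(NA*2*pi/lambda) term
    Z4 = sqrt(3)*(2*rho.^2 - 1);                 % defocus
    Z5 = sqrt(6)*(rho.^2).*sin(2*theta);         % oblique astigmatism
    Z6 = sqrt(6)*(rho.^2).*cos(2*theta);         % vertical astigmatism
    w = [0.5, 0.1, -0.2];                        % illustrative weights (radians)
    pupil = circMask .* exp(1i*(w(1)*Z4 + w(2)*Z5 + w(3)*Z6));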

Box 2 ∣. Incident angle calibration and alignment for FP.

We discuss two approaches for obtaining the incident wavevectors of different LED elements. The first is for a planar LED matrix with a well-defined pitch, and the second is for freeform illuminators with LED elements arranged at arbitrary 3D locations. In the first approach, we use a blank glass slide as the object and visually select an LED element underneath the blank glass slide as the reference point. Based on this reference point, we then select 4 adjacent centrosymmetric LEDs such that their incident angles are close to the maximum acceptance angle of the objective lens. By turning on these 4 LEDs, the captured image in panel a exhibits centrosymmetric brightfield-to-darkfield transition features34. To facilitate the alignment process, we display a cross marker on top of the captured image in panel a. The position of the LED matrix can be adjusted so that the brightfield-to-darkfield transition features are aligned with the cross marker. The spatial positions (xi, yi) of the different LED elements can then be calculated based on the pitch of the matrix and the detector pixel size at the object plane. The red dots in panel a denote the recovered positions of the LED elements with respect to the reference point at the center. For a given position (xc, yc) on the captured image, the incident wavevector (kxi, kyi) can be calculated via:

(kxi, kyi) = (2π/λ) · ( (xc − xi)/√((xc − xi)² + (yc − yi)² + h²), (yc − yi)/√((xc − xi)² + (yc − yi)² + h²) ),  (2)

where h is the measured distance between the object and the planar LED matrix. Panel b shows the recovered wavevectors of the planar LED matrix in Fig. 3b1.

In the second approach, we use a high-NA objective lens to infer the incident wavevectors of the freeform LED elements located at arbitrary positions. First, we place a calibration target (e.g., a blood smear slide) at an out-of-focus plane with a defocus distance Δz. We then select and turn on a reference LED element underneath the sample and capture the corresponding image Iref(x, y). Second, a set of defocused images Ii(x, y) (i = 1, 2, 3, …) are captured by turning on different LED elements of the freeform illuminator. As shown in panel c, we crop the central regions of both Iref and Ii to estimate the positional shift (Δxi, Δyi) between them:

(Δxi, Δyi) = argmax_{(x, y)} {Iref^crop ⋆ Ii^crop}(x, y),  (3)

where ‘⋆’ denotes the cross-correlation operation. We then use a mechanical stage to move the freeform illuminator along the x-axis by a distance d and repeat the above steps to obtain another set of positional shifts (Δx′i, Δy′i). Panel d shows the relationship between these two sets of positional shifts, from which we can obtain the spatial location (xi, yi, zi) of the ith LED element:

(xi, yi, zi) = d / √((Δxi − Δx′i)² + (Δyi − Δy′i)²) · (Δxi, Δyi, Δz)  (4)

The incident wavevector (kxi, kyi) of this element can be calculated by replacing ‘h’ in Eq. (2) with ‘zi’. Panel e shows the recovered wavevectors of the freeform pyramid illuminator in Fig. 3b3.
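In code, Eqs. (2) and (4) reduce to a few lines; here (dx1, dy1) and (dx2, dy2) denote the shift pairs (Δxi, Δyi) and (Δx′i, Δy′i) measured before and after moving the illuminator by d, and all names are illustrative.

    r = sqrt((dx1 - dx2)^2 + (dy1 - dy2)^2);
    ledPos = d/r * [dx1, dy1, dz];               % Eq. (4): LED location (xi, yi, zi)
    % Eq. (2) with h replaced by zi, evaluated at image position (xc, yc)
    den = sqrt((xc - ledPos(1))^2 + (yc - ledPos(2))^2 + ledPos(3)^2);
    kxi = 2*pi/lambda*(xc - ledPos(1))/den;
    kyi = 2*pi/lambda*(yc - ledPos(2))/den;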


Fig. 4∣. Reconstruction process of FP.


(Left) The workflow of the FP reconstruction process. (Right) The detailed reconstruction process consists of initialization, iterative reconstruction, and image transformation steps.

Hardware implementation of CP.

Figure 5a shows a CP set-up built using a blood-coated image sensor, a modified low-cost motorized stage, and a fibre-coupled laser light source.

Fig. 5∣. Hardware implementation of CP.


(a) A CP set-up built with a blood-coated image sensor, a 405-nm fibre-coupled laser, and a modified low-cost XYZ stage. The x and y manual actuators of the stage are replaced with stepper motors for motorized control. The z actuator is used to adjust the distance between the coded sensor and the specimen. (b) Preparation of the blood-coated sensor by smearing 2 μL goat blood directly on top of the sensor’s coverglass. (c) Preparation of blood-coated sensor by first smearing 2 μL goat blood on a thin coverslip and then permanently attaching the coverslip to the image sensor with PDMS (polydimethylsiloxane). For both (b) and (c), we fix the cells with alcohol to preserve their morphology over a long period. Also refer to Supplementary Video 2 for the operation of the CP platform.

Blood-coated image sensor for CP:

The selection of the image sensor is critical to the image quality of the CP reconstruction. We recommend a monochrome image sensor with a pixel size of 1.85 μm in this protocol (Sony IMX 226, The Imaging Source, DMM 37UX226). Other sensors with smaller pixel sizes can also be selected for implementing CP. However, a smaller pixel size often implies larger crosstalk between adjacent pixels, and it may be challenging to quantitatively model this effect in the imaging process. Figures 5b and 5c provide two options for preparing the blood-coated surface on the image sensor. We recommend using goat blood as it can be purchased at a low cost and has the smallest blood cell size (2-3 microns) among all animals. One can also obtain a drop of human blood from a finger prick and smear it on the sensor. However, a contaminated lancet or an improper blood-drawing procedure could transmit bloodborne diseases.

Mechanical stage selection for CP:

We choose a low-cost XYZ 3-axis manual stage for the CP set-up. As shown in Fig. 5a, the x and y manual actuators are replaced by two stepper motors for motorized control. The z actuator allows users to adjust the distance between the coded sensor and the specimen. The scanning positions of the modified motorized stage can be inferred from the captured raw images (Box 3). Therefore, it allows open-loop operation without requiring any position feedback from the stage. However, if the sample itself does not have any features (e.g., a blank glass slide), the positional tracking process may not work, and we may need to pair the system with another image sensor that tracks features on the sample holder18.

Light source selection for CP:

An off-the-shelf fibre-coupled 405-nm laser is adopted for the CP set-up in Fig. 5a. If needed, one can also obtain a 405-nm laser diode from a Blu-ray player and couple the light into a single-mode fibre for CP19. Different from the FP approach, where chromatic dispersion can be partially compensated by the lens system, the free-space propagation process in CP disperses light at different wavelengths to different axial planes of the system. If an LED source is adopted for CP, we can use multiple coherent states to model the different wavelengths within the spectrum of the low-coherence source6,68.

Software implementation of CP.

In parallel with the incident angle calibration process in FP, we need to estimate the positional shifts of the coded sensor during the image acquisition process. If the system does not contain the coded surface, one can directly track the positional shifts via cross-correlation analysis69. Box 3 discusses an iterative motion-tracking procedure that minimizes the modulation effect of the coded surface. It can precisely track the positional shifts of the coded sensor with deep sub-pixel accuracy. In addition to the solution provided in Box 3, other alternatives can also be adopted for tracking the motion of the coded sensor. For example, one can generate a clear region on the coded surface by removing parts of the blood-cell layer. Object diffraction patterns passing through this clear region can then be used for cross-correlation analysis17,70. A precise piezo stage with an encoder can also be adopted for hardware-based motion tracking.

In parallel with the pupil recovery process in FP, we need to recover the transmission profile of the blood-cell layer and use it as the ptychographic probe for subsequent experiments. To this end, we use a blood smear slide as the calibration object and acquire 1500 images for the joint recovery of both the object exit wavefront and the coded surface profile at the modulation plane. Figure 6 shows the reconstruction pipeline of the CP approach, which consists of three major steps: object initialization, iterative reconstruction, and post-measurement refocusing. With the pre-recovered coded surface, we then acquire ~300 raw images for each subsequent experiment, where the coded surface profile remains unchanged in the iterative recovery process. By using this strategy, CP can recover the phase-wrapping information of different types of thick objects, including optical lenses, prisms, bacterial colonies on uneven agar plates, urine crystals, and cytology smears17-19. The phase profiles of these objects contain slowly varying contents with many 2π wraps, and they are challenging to recover using conventional lensless imaging techniques.
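The post-measurement refocusing step can be sketched with the asProp helper introduced earlier: propagate the recovered wavefront W to a series of candidate planes and keep the sharpest one. The search range and the normalized intensity-contrast metric below are illustrative choices, not the authors' exact criterion.

    dzList = (-0.2:0.01:0.2)*1e-3;               % candidate defocus distances (m)
    score = zeros(size(dzList));
    for k = 1:numel(dzList)
        g = abs(asProp(W, dzList(k), lambda, dx)).^2;
        score(k) = std(g(:))/mean(g(:));         % sharpness: normalized contrast
    end
    [~, kBest] = max(score);
    objRefocused = asProp(W, dzList(kBest), lambda, dx);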

Fig. 6∣. Reconstruction process of CP.


(Left) The workflow of the CP reconstruction process. (Right) The detailed reconstruction process consists of initialization, iterative reconstruction, and image refocusing steps.

Materials

Reagents

  • Goat blood 50 ml (Lampire Biological Laboratories, cat no.7202501)

    ▲CRITICAL Blood samples should be stored in the refrigerator and transported in a cool box (2–8 °C).

  • Silicone (Dow, SYLGARD 184 silicone elastomer kit)

  • Blood smear slide (Carolina, 313158)

Equipment

Equipment for FP

  • Computer with 64 GB RAM and a graphics card (Nvidia, RTX 3090 Ti)

  • Upright microscope (Nikon, ECLIPSE Ci series)

  • Clamping forks (Thorlabs, model no. CF175 ×4)

  • Solid aluminum optical breadboard (Thorlabs, model no. MB18)

  • Camera (The Imaging Source, model no. DMK 33UX183)

  • USB cable for camera connection (The Imaging Source, model no. CA-USB30-AmB-BLS)

  • XYZ 3-axis manual stage (ToAuto, model no. LD60-LM)

  • Post holder (Thorlabs, model no. UPH1 ×4)

  • Post holder (Thorlabs, model no. UPH2 ×2)

  • 2×, 0.1 NA objective lens (Nikon, Plan APO)

  • 20×, 0.75 NA objective lens (Nikon, Plan APO)

  • Angle post clamp (Thorlabs, model no. RA90 ×2)

  • Angle post clamp (Thorlabs, model no. SWC)

  • Optical posts (Thorlabs, model no. TR1 ×4)

  • Optical posts (Thorlabs, model no. TR3 ×2)

  • Optical posts (Thorlabs, model no. TR6 ×2)

  • Tripod levelling base (Cavix, model no. LP-64)

  • 17×17 LED array (Custom order)

  • 8×8 LED array (Adafruit, cat no. 3444)

  • 200 mm × 200 mm × 6 mm acrylic sheet (Custom order)

  • Jumper wire (Amazon ×20)

  • M2 screws (NBK, cat no. SNSS ×8)

  • 8-32 × 3/8” cap screws (Thorlabs model no. SH8S025 ×2)

  • 1/4” 20 × 5/8” cap screws (Thorlabs model no. SH25S063 ×4)

  • 8-32 × 1/2" Setscrew (Thorlabs, model no. SS8S050)

  • Microscope stage calibration slide (BoliOptics, cat no. RT20201101)

  • 5V, 2A power source

  • 12V, 2A power source

Equipment for CP

  • Computer with 64 GB RAM and a graphics card (Nvidia, RTX 3090 Ti)

  • Camera (The Imaging Source, model no. DMM 37UX226)

  • USB cable for camera connection (Anker, USB 3.0 to USB C)

  • Image sensor heat sink (Alpha, model no. LPD40-3B ×2)

  • 22 mm × 22 mm coverslips (Thorlabs, model no. CG00C2)

  • XYZ 3-axis manual stage (ToAuto, model no. LD60-LM)

  • Stepper motor (NEMA model no.11 ×2)

  • Stepper motor controller (GRBL, model no. 1.1 V3.4)

  • 405-nm fibre-coupled laser (Thorlabs, model no. LP405-SF10)

  • (Optional, other current drivers can also be used) Laser driver (Thorlabs, model no. ITC4001)

  • Laser diode mount (Thorlabs, model no. LDM9LP)

  • Laser fibre adaptor (Thorlabs, model no. S120-FC)

  • Optical posts (Thorlabs, model no. TR3 ×4)

  • Optical posts (Thorlabs, model no. TR4)

  • Optical posts (Thorlabs, model no. TR1.5)

  • Angle post clamp (Thorlabs, model no. RA90)

  • 1-inch extension tube slip ring (Thorlabs, model no. SM1RC)

  • 1-inch optical tube (Thorlabs, model no. SM1L20)

  • (Optional) 2-inch 200-mm bi-convex lens (Thorlabs, model no. LB1199)

  • (Optional) 2-inch optical tubes (Thorlabs, model no. SM2L30 ×2)

  • (Optional) 2-inch extension tube slip ring (Thorlabs, model no. SM2RC)

  • (Optional) 2-inch adjustable optical tube (Thorlabs, model no. SM2V15)

  • (Optional) Optical tube adapter (Thorlabs, model no. SM1A2)

  • 250 mm × 200 mm × 12 mm acrylic sheet (Custom order)

  • 150 mm × 100 mm × 6 mm acrylic sheet (Custom order)

  • 1/4” 20 × 1/2” setscrews (Thorlabs, model no. SS6MS12 ×4)

  • 8-32 × 3/8" cap screws (Thorlabs model no. SH8S025 ×20)

  • M2 screws (NBK, cat no. SNSS ×4)

  • 24V, 2A power source

Software

Tools

  • 3D printer (MakerBot, model no. Replicator 2)

  • Milling machine (Roland, model no. monoFab SRM-20)

  • Manual driller (Delta, model no. 12 drill press)

  • Rotary carver (Uolor, model no. UH032)

  • 20 μL single channel pipette (Eppendorf, cat no. 3123000055)

  • Vacuum desiccator (Amazon, model no. SP Bel-Art 420200000)

  • Level (MSC, cat no. 67944728)

Procedure

Programmable light source development for FP ●Timing 3-5 h

▲CRITICAL We discuss three options for programmable light source development. Steps 1-3 discuss the preparation of a custom planar LED matrix with 17-by-17 LED elements. Steps 4-8 discuss an assembled planar LED illuminator using 4 off-the-shelf LED matrixes. Steps 9-12 discuss a non-planar pyramid LED illuminator.

  1. Illuminator 1: single planar LED array (Steps 1-3). Build a customized LED board following the printed circuit board (PCB) design in Supplementary Fig. 1a and Supplementary Data ‘FP_design_singleLEDarray.zip’.

  2. Connect the LED board in Step 1 to an Arduino Uno microcontroller board by following the pin connections in Supplementary Figs. 1b and 1c.

  3. Upload the control code to the Arduino Uno board. The code is provided in Supplementary Software, ‘FP_controlCode_SingleLEDarray.ino’.

  4. Illuminator 2: assembled planar LED illuminator (Steps 4-8). Prepare 4 off-the-shelf LED matrixes (Adafruit, product ID: 3444). Follow Supplementary Fig. 2a to cut the edge of the LED boards and smooth the incision with a rotary carver (Uolor, UH032). Solder 4 jumper wires to the input pins of each LED matrix.

  5. Prepare an acrylic base with 12 mounting holes: 8 holes for the 4 LED matrixes, 2 holes for the Arduino Uno board, and the remaining 2 for mounting an XYZ 3-axis manual stage (ToAuto, LD60-LM). The design file for the acrylic base is provided in Supplementary Data, ‘FP_4LEDholder.SLDPRT’.

  6. Attach the 4 LED matrixes on the acrylic base using 8 M2 screws, as shown in Supplementary Fig. 2b1.

  7. Connect the jumper wires from the LED matrixes to the Arduino Uno board according to Supplementary Fig. 2b2.

  8. Upload the control code to the Arduino Uno board. The Arduino code is provided in Supplementary Software, ‘FP_controlCode_Adafruit_DotStar.ino’.

  9. Illuminator 3: non-planar pyramid illuminator (Steps 9-12). Prepare a 3D printed holder according to Supplementary Data, ‘FP_3LEDholder.SLDPRT’. Prepare 3 off-the-shelf LED matrixes (Adafruit, product ID: 3444).

  10. Polish the attaching surfaces of the holder with sandpapers. Attach the 3 LED matrixes to the holder according to Supplementary Fig. 2c1.

  11. Connect the jumper wires from the LED matrixes to the Arduino Uno board according to Supplementary Fig. 2c2.

  12. Upload the control code to the Arduino Uno board. The Arduino control code is provided in Supplementary Software, ‘FP_controlCode_Adafruit_DotStar.ino’.

System integration and alignment for FP ●Timing 1-2 h

▲CRITICAL A proper optical alignment can facilitate the calibration process of FP. In this section, we explain how to align the LED illuminator with respect to the camera. For a planar LED matrix with a constant pitch, we use the brightfield-to-darkfield transition features for system alignment. For other freeform illuminators, we turn on one LED element as the reference point and use a blood smear slide as the calibration object. We then adjust the position of the illuminator using a mechanical stage so that the blood smear object does not shift laterally when we move it to a defocused plane.

  13. Install MATLAB and the Image Acquisition Toolbox on a computer.

  14. Install the IC Capture software and the driver for the 33U series (Version 5.1.0.1719) of the camera (The Imaging Source, DMK 33UX183).

  15. Install the CUDA driver (CUDA Toolkit v11.7) for the Nvidia GPU.

  16. System integration (Steps 16-20). Prepare an optical breadboard (Thorlabs, MB18) as the base for the FP system.

  • 17

    Mount the microscope (Nikon, Eclipse Ci-S) on the optical breadboard using 4 clamping forks (Thorlabs, CF175). Mount the 2×, 0.1 NA objective lens (Nikon, Plan APO) and the 20×, 0.75 NA objective lens to the microscope.

  • 18

    Mount the XYZ 3-axis manual stage on the optical breadboard using 4 optical posts (Thorlabs, TR1) and 4 post holders (Thorlabs, UPH1).

  • 19

    Mount the LED illuminator on the XYZ 3-axis manual stage using 2 optical posts (Thorlabs, TR1.5 and TR6), an angle post clamp (Thorlabs, SWC) and a setscrew (Thorlabs, SS8S050). Use the XYZ stage to translate the illuminator laterally so that it sits directly under the objective lens, and axially so that the distance between the illuminator and the sample stage is approximately 5 cm.

    ? TROUBLESHOOTING

  • 20

    Mount the camera to the photo port of the Nikon microscope with a C-mount adaptor. Connect the camera to the computer with the USB 3.0 cable (The Imaging Source, CA-USB30-AmB-BLS).

    ▲CRITICAL STEP Make sure the camera is connected to a USB 3.0 port (or above) on the computer. If connected to a USB 2.0 port, the camera cannot be operated at its maximum framerate.

  • 21

    System alignment for a planar LED matrix with a well-defined pitch (Steps 21-26). Use a level to check whether the planar illuminator is parallel to the sample stage. If needed, attach the illuminator to a tripod levelling base (Cavix, LP-64) and then mount the levelling base on the XYZ 3-axis manual stage.

  • 22

    Put the 2×, 0.1 NA objective lens in place for system alignment.

  • 23

    Use a blank glass slide as the sample and select an LED element underneath the centre of the field of view as the reference point. Turn on 4 adjacent centrosymmetric LED elements for system alignment.

    ▲CRITICAL STEP The 4 centrosymmetric LEDs are chosen so that their incident angles are close to the maximum acceptance angle of the objective. As a result, the captured image corresponding to these 4 LEDs exhibits centrosymmetric brightfield-to-darkfield transition features. A dashed cross marker from these features can then be used to align the LED matrix, as shown in panel a of Box 2.

  • 24
    Use the following MATLAB code to generate a cross pattern on the captured image and display it on the computer screen.
    vid = videoinput('tisimaq_r2013_64', 1, 'Y800 (5472x3648)');
    src = getselectedsource(vid);
    vid.FramesPerTrigger = 1;            % set the number of frames captured in a single trigger
    src.Exposure = 0.01;                 % set the exposure time (in seconds)
    src.ExposureAuto = 'Off';            % turn off the auto exposure adjustment
    src.FrameRate = 18;                  % set the frame rate
    src.Gain = 0;                        % set the gain to 0
    src.GainAuto = 'Off';                % turn off the auto gain adjustment
    src.Sharpness = 0;                   % set the sharpness to 0
    vid.ROIPosition = [0 0 5472 3648];   % set the region of interest
    fprintf(LED_Matrix, 'F'); pause(0.1);   % send the command to the Arduino to turn off all LEDs
    % Turn on the 4 centrosymmetric LEDs with illumination NAs close to the NA of the objective lens
    fprintf(LED_Matrix, 'M');
    % The for loop below overlays a cross pattern on the captured image for the alignment
    % process. One can adjust the LED matrix position so that the cross pattern matches
    % the brightfield-to-darkfield transition features.
    for i = 1:300   % adjust the loop number as needed
        imTemp = getsnapshot(vid);
        imshow(imTemp, []);
        line(1:5472, 1824*ones(1,5472), 'color', 'r', 'LineWidth', 2);
        line(2736*ones(1,3648), 1:3648, 'color', 'r', 'LineWidth', 2);
    end
  • 25

    Adjust the position of the LED matrix so that the cross pattern overlaps with the brightfield-to-darkfield transition features of the captured image (Box 2).

    ? TROUBLESHOOTING

  • 26

    Turn off all the LEDs.

  • 27

    System alignment for non-periodic or non-planar illuminators (Steps 27-31). Put the 2×, 0.1 NA objective lens in place for system alignment. Place a blood smear slide (Carolina, 313158) on the sample stage.

  • 28

    Turn on the central LED element of the illuminator for sample illumination. Open the IC Capture software.

  • 29

    Adjust the exposure time in the IC Capture software. Adjust the focus knob to bring the sample into focus.

  • 30

    Adjust the focus knob to bring the sample to different defocus planes. Translate the LED illuminator using the XYZ manual stage so that the images of the blood smear sample do not shift laterally during the defocusing process.

  • 31

    Turn off the LED and close the IC Capture software.

Incident angle calibration for FP ●Timing 0.5-1 h

▲CRITICAL As discussed in Box 2, two approaches are provided for obtaining the incident wavevectors of the LED elements. The first (Steps 37-39) is for planar LED matrixes with a well-defined pitch; the second (Steps 40-48) is for freeform illuminators with LED elements arranged at arbitrary 3D locations.

  • 32

    Turn on the reference LED element of the illuminator and open the IC Capture software.

  • 33

    Infer the magnification factor of the 2× objective lens (Steps 33-36). Place the length calibration slide (BoliOptics, RT20201101) on the sample stage. The slide contains 101 grid lines and the distance between adjacent grid lines is 10 μm. The distance between the first and the last grid lines is 1 mm.

  • 34

    Bring the slide to the centre of the camera field of view. Adjust the focus knob to bring the captured image into focus.

  • 35

    Align the orientation of the grid lines to the vertical direction of the captured image.

  • 36

    Measure the number of pixels across the 1-mm width of all grid lines. Calculate the magnification factor of the objective lens as mag = (pSize × pNum) / (1 mm), where ‘pSize’ is the pixel size of the camera (2.4 μm) and ‘pNum’ is the measured number of pixels covering the width of all grid lines. The measured magnification factor is usually slightly different from the value labelled on the objective lens; a short worked example follows this step.

    ? TROUBLESHOOTING
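    A short worked example of the calculation in Step 36, with an assumed pixel count (the value measured on your system will differ):

    % Worked example for Step 36 ('pNum' below is an assumed measurement)
    pSize = 2.4;             % camera pixel size, in micrometres
    pNum = 8500;             % measured pixels across the 1-mm grid (assumption)
    mag = pSize*pNum/1000;   % = 2.04, close to (but not exactly) the nominal 2x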

  • 37

    Incident angle calibration for a planar LED matrix with a well-defined pitch (Steps 37-39). Place the blood smear slide on the sample stage and adjust the focus knob to bring the captured image into focus.

  • 38

    Measure the distance between the sample and the LED board using a ruler.

    ▲CRITICAL STEP If needed, we can test different distance values in the FP reconstruction process and select the one with the best reconstruction quality.

  • 39

    Calculate the incident wavevector based on the LED position and the distance between the sample and the LED board (Box 2). The related code is provided in Supplementary Software, ‘FP_calibrateAngle.m’; a minimal sketch of the calculation follows this step.

    ▲CRITICAL STEP We assume the central position of the field of view is (0,0). For different regions of the field of view, the incident wavevector will be different according to Eq. (2).
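    A minimal sketch of this calculation for a single LED, assuming the geometry and variable names below (the full version, including the position dependence of Eq. (2), is in ‘FP_calibrateAngle.m’):

    lambda = 0.532e-6;       % illumination wavelength, in metres (assumption)
    k0 = 2*pi/lambda;        % free-space wavenumber
    h = 0.05;                % sample-to-LED-board distance from Step 38, in metres
    xLED = 4e-3; yLED = 0;   % LED position relative to the optical axis (assumption)
    x0 = 0; y0 = 0;          % centre of the field of view; shift for other regions (Eq. (2))
    R = sqrt((x0-xLED)^2 + (y0-yLED)^2 + h^2);
    kx = k0*(x0-xLED)/R;     % incident wavevector components
    ky = k0*(y0-yLED)/R;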

  • 40

    Incident angle calibration for freeform illuminators (Steps 40-48). Put the 20×, 0.75 NA objective lens (Nikon, Plan APO) in place for the calibration process.

    ▲CRITICAL STEP A high-NA objective lens is used so that all incident angles of the LED elements are within the NA of the objective lens. Other high-NA objective lenses can also be used here.

  • 41

    Place the blood smear slide on the sample stage and adjust the focus knob to bring the captured image into focus. Move the sample stage towards the objective lens by 80 μm. Close the IC Capture software.

    ▲CRITICAL STEP Each division on the fine focus knob represents a 1-μm shift in the axial direction. If the defocus distance is too short, the lateral shift induced by different LED elements will be too small for accurate incident angle estimation. If the defocus distance is too large, the defocus effect would compromise the correlation analysis process. A distance of 80 μm is a good choice considering these factors.

    ▲CRITICAL STEP Close the IC Capture software before running any MATLAB code that includes camera communication commands.

  • 42

    Adjust the exposure time using the following MATLAB code.

    ▲CRITICAL STEP The percentage of over-exposed pixels will be printed in the command window by the code below. We recommend setting an exposure time so that the fraction of over-exposed pixels stays below 0.01%.
    basicExptime = 0.01;
    src.Exposure = basicExptime;               % set the exposure time
    imageTestExp = getsnapshot(vid);           % capture an image
    imshow(imageTestExp, []);                  % show the image in a figure window
    overExpNum = sum(imageTestExp(:) >= 255);  % the number of over-exposed pixels
    disp(['Over-exposure percentage: ', num2str(100*overExpNum/length(imageTestExp(:))), ' %']);
  • 43

    Capture a reference image ‘referenceImg’. Turn off the central LED element of the illuminator.

  • 44
    Turn on all LEDs sequentially and capture a set of images using the following MATLAB code. The captured images will be saved in ‘imageCalibration’.
    imNum = 289;   % set to capture 289 images for calibration
    for i = 1:imNum
        fprintf(lightSource, controlLed(i, ledColour));   % send a command to the Arduino to turn on the LED
        imageCalibration(:,:,i) = getsnapshot(vid);       % capture the image
        pause(0.1);                                       % pause 0.1 seconds between acquisitions
    end
    % Send a command to the Arduino to turn off all LEDs
    fprintf(lightSource, 'F');
  • 45

    Estimate the positional shift between the cropped image ‘imageCrop’ and the cropped reference image ‘referenceCrop’ using the following MATLAB code with the function ‘dftregistration’71.

    ▲CRITICAL STEP We crop the central 1024×1024-pixel region ‘referenceCrop’ from the reference image ‘referenceImg’ captured in Step 43. We crop the same region from ‘imageCalibration’ captured in Step 44 to obtain ‘imageCrop’.
    [output, ~] = dftregistration(fft2(referenceCrop), fft2(imageCrop(:,:,i)), 100);
    shiftX(i) = output(4);   % the shift along the x direction
    shiftY(i) = output(3);   % the shift along the y direction
  • 46

    Move the illuminator along the x direction by 5 mm using the XYZ 3-axis manual stage and repeat Steps 44 and 45. Move the illuminator back to the original position after the image acquisition process.

  • 47

    Calculate the 3D positions of the light source elements based on the defocus distance and the estimated shifts obtained in Steps 45 and 46. Refer to Box 2 for the detailed discussion. The related code is provided in Supplementary Software, ‘FP_estimate3DpositionLed.m’; a minimal sketch of the underlying shift-to-angle conversion is given below.
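    A minimal sketch of the shift-to-angle conversion that underlies this step (variable names and sign conventions are assumptions; the triangulation with the 5-mm illuminator translation in Step 46 then yields the 3D LED positions, as implemented in ‘FP_estimate3DpositionLed.m’):

    dz = 80e-6;                         % defocus distance set in Step 41, in metres
    pixObj = 2.4e-6/mag;                % camera pixel size referred to the object plane
    thetaX = atan(shiftX.*pixObj/dz);   % incident angle along x for each LED (Step 45)
    thetaY = atan(shiftY.*pixObj/dz);   % incident angle along y for each LED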

  • 48

    Calculate the incident wave vectors of different elements based on the 3D positions obtained from Step 47. Refer to Box 2 for the detailed discussion. The related code is provided in Supplementary Software, ‘FP_calibrateAngle.m’.

Recover the pupil aberration for FP ●Timing 2-3 h

▲CRITICAL In this section, we discuss how to acquire a calibration dataset to recover the pupil aberration of the FP platform. Since this is a one-time process, the acquisition time does not need to be optimized. In our implementation, we set a long exposure time for darkfield image acquisition and sequentially turn on all LED elements to increase the data redundancy of the ptychogram dataset.

  • 49

    Put the 2×, 0.1 NA objective lens in place for image acquisition.

  • 50

    Put the calibration blood smear slide on the sample stage. Turn on the reference LED. Open the IC Capture software. Adjust the focus knob to bring the image into focus. Close the IC Capture software.

    ▲CRITICAL STEP In the calibration experiment, we recommend using a thin and uniform sample with rich spatial details. The blood smear slide is an ideal choice. An improper calibration sample (e.g., a sparse sample) may lead to failure of the joint object-pupil recovery process.

  • 51

    Adjust the exposure time as in Step 42. Sequentially turn on all LEDs to capture images as in Step 44.

    ▲CRITICAL STEP To reduce the impact of the limited dynamic range of the camera, we repeat the acquisition process with three different exposure times.

    ▲CRITICAL STEP We recommend using a dark room to minimize the impact of ambient light.

  • 52

    Process the 3 sets of images from Step 51 via a high dynamic range (HDR) combination. The related code is provided in Supplementary Software, ‘FP_hdrCombine.m’; a minimal sketch is given after this step. The combined HDR images are divided into 512-by-512-pixel segments. For each segment, the images containing brightfield-to-darkfield transition features are excluded from the dataset.

    ? TROUBLESHOOTING
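    The HDR combination can be sketched as a saturation-masked, exposure-normalized average of the three sets; the relative exposure values and array names below are assumptions, and ‘FP_hdrCombine.m’ is the authoritative implementation:

    expRatio = [1, 3.5, 12];                    % assumed relative effective exposures of the 3 sets
    imStack = cat(4, imSet1, imSet2, imSet3);   % y-by-x-by-LED-by-set, 'uint8'
    imHDR = zeros(size(imSet1), 'single');
    weight = zeros(size(imSet1), 'single');
    for s = 1:3
        imS = single(imStack(:,:,:,s));
        valid = imS < 250;                      % exclude near-saturated pixels
        imHDR = imHDR + valid.*imS./expRatio(s);
        weight = weight + single(valid);
    end
    imHDR = imHDR./max(weight, 1);              % linear intensity estimate per pixel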

  • 53

    Define the system parameters in MATLAB, including the NA, the magnification factor of the objective lens, the illumination wavelength, the camera pixel size, and the final desired pixel size of the reconstruction. Initialize the object and the pupil function using the following MATLAB code.

    ▲CRITICAL STEP For object initialization, we average all captured images to obtain an incoherent image and up-sample it to the same size as that of the reconstruction. The initial pupil function is defined as a circular aperture as in Eq. (5). The weights for Zernike modes are set to zeros.
    F = @(x) fftshift(fft2(x));        % define the Fourier transform
    invF = @(x) ifft2(ifftshift(x));   % define the inverse Fourier transform
    imageAmpSeq = imageHDR.^0.5;       % the amplitude of imageHDR
    pratio = 4;                        % the up-sampling factor
    initialObj = mean(imageAmpSeq, 3);           % average all the images
    initialObj = imresize(initialObj, pratio);   % up-sample initialObj
    initialObjF = F(initialObj);       % object initialization in the Fourier domain
    initialPupil = ((kxm.^2 + kym.^2) < cutoff^2);   % pupil initialization
  • 54
    Recover the pupil using the following MATLAB code. Refer to the section ‘Software implementation of FP’. The related code is provided in Supplementary Software, ‘FP_simulationRecover.m’. The code for generating Zernike modes is provided in Supplementary Software, ‘FP_genZernike.m’ using the MATLAB function ‘zernfun’72.
    objectRecoverF = initialObjF;   % the Fourier spectrum of the object
    pupilRecover = initialPupil;    % the pupil function
    alphaO = 1;                     % rPIE parameter for the object update
    alphaP = 1;                     % rPIE parameter for the pupil update
    loopNum = 50;                   % the iteration number
    methodType = 'GD';              % the algorithm used for updating the pupil
    w = zeros(modeNum, 1);          % the weights of the Zernike modes
    for iLoop = 1:loopNum
        for i = 1:imNum
            % Crop a sub-region from the Fourier spectrum
            subObjF = objectRecoverF((M-m)/2+kxi(i):(M+m)/2+kxi(i)-1, (N-n)/2+kyi(i):(N+n)/2+kyi(i)-1);
            % Low-pass filter with the pupil and generate the low-resolution image
            lowResFT = (1/pratio)^2 * subObjF .* pupilRecover;
            imlowRes = invF(lowResFT);
            % Replace the amplitude and keep the phase unchanged
            imlowRes = (pratio)^2 * imageAmpSeq(:,:,i) .* exp(1i*angle(imlowRes));
            % Use the rPIE algorithm to update the Fourier spectrum
            updatedLowResFT = F(imlowRes);
            objectRecoverF((M-m)/2+kxi(i):(M+m)/2+kxi(i)-1, (N-n)/2+kyi(i):(N+n)/2+kyi(i)-1) = ...
                subObjF + conj(pupilRecover) ./ ((1-alphaO)*abs(pupilRecover).^2 + alphaO*max(max(abs(pupilRecover).^2))) .* (updatedLowResFT - lowResFT);
            updateSubObjF = objectRecoverF((M-m)/2+kxi(i):(M+m)/2+kxi(i)-1, (N-n)/2+kyi(i):(N+n)/2+kyi(i)-1);
            switch methodType
                case 'GD'    % recover the pupil using the gradient descent algorithm
                    lowResFT = (1/pratio)^2 * updateSubObjF .* pupilRecover;
                    imlowRes = invF(lowResFT);
                    imageDiff = 1./max(max((pratio)^2*imageAmpSeq(:,:,i))) .* (1 - (1/pratio)^2 .* imageAmpSeq(:,:,i) ./ abs(imlowRes));
                    parfor modeIdx = 1:modeNum
                        gdTemp = ifft2(ifftshift(lowResFT .* ZernikeModes(:,:,modeIdx)));
                        % The gradient with respect to each weight
                        gd = 2*sum(sum(imageDiff .* imag(conj(imlowRes) .* gdTemp)));
                        w(modeIdx) = w(modeIdx) + 1e-4*gd;   % update each weight
                    end
                    tmpzfun = zeros(m, n);
                    for modeIdx = 1:modeNum
                        % Update the weighted sum of the Zernike modes
                        tmpzfun = tmpzfun + w(modeIdx).*ZernikeModes(:,:,modeIdx);
                    end
                    % Update the pupil function
                    pupilRecover = exp(1j*tmpzfun) .* lowFilter;
                case 'rPIE'  % recover the pupil using the rPIE algorithm
                    pupilRecover = pupilRecover + conj(updateSubObjF) ./ ((1-alphaP)*abs(updateSubObjF).^2 + alphaP*max(max(abs(updateSubObjF).^2))) .* (updatedLowResFT - lowResFT);
            end
        end
    end
  • 55

    Check whether the reconstruction quality of the object is comparable to the typical results. The left panel of Fig. 4 shows the typical recovered images of blood cells.

  • 56

    Save the recovered pupil as a .mat file.

    ■PAUSE POINT Load the .mat file and set this pupil as the initial guess when performing subsequent FP reconstructions.

Image acquisition and reconstruction for FP ●Timing 0.5-1 h

▲CRITICAL Unlike in the previous pupil calibration section, we aim to shorten the acquisition time in regular FP experiments. We take two approaches. First, we set the camera's digital gain to 26 dB for darkfield image acquisition, which substantially shortens the acquisition time (from 1 s to 0.05 s per darkfield frame). Second, we skip some of the LED elements with high illumination NAs (corresponding to darkfield images). Non-uniform sampling according to the illumination NAs is an effective strategy for reducing the number of acquisitions73,74. For all selected LED elements, we capture 3 sets of images of the same object with different camera gains and exposures. These 3 sets are then combined into one HDR dataset. In the reconstruction process, we use the pre-recovered pupil from the previous section as an initial guess and use the rPIE algorithm67 to refine it further in the iterative process.

  • 57

    Select a portion of the LED elements for the image acquisition process.

    ▲CRITICAL STEP As most of the signal energy concentrates in the central region of the Fourier spectrum, a higher density of sampling points can be adopted for the central region of Fourier space. For darkfield images with large incident angles, the aperture overlap between adjacent acquisitions can be reduced to 20% or less74. To this end, we keep all brightfield LEDs while skipping every other darkfield LED in the acquisition process. With this illumination strategy, we reduce the number of acquisitions to 117 using the planar illuminator shown in Fig. 3b1. If needed, one can further reduce the number by skipping more LEDs in the darkfield regime. The indices of the selected LEDs are saved as ‘usedLed’ in the MATLAB code; a minimal selection sketch is given below.
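    A minimal sketch of this selection, assuming the per-LED wavevectors from the calibration in Steps 37-48 are available (variable names are assumptions):

    illumNA = sqrt(kx.^2 + ky.^2)./k0;   % illumination NA of each LED element
    objNA = 0.1;                         % NA of the 2x objective lens
    bfIdx = find(illumNA <= objNA);      % keep all brightfield LEDs
    dfIdx = find(illumNA > objNA);       % darkfield LEDs
    dfIdx = dfIdx(1:2:end);              % skip every other darkfield LED
    usedLed = sort([bfIdx(:); dfIdx(:)]);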

  • 58

    Place the specimen under testing on the sample stage. Turn on the central LED of the illuminator. Open the IC Capture software. Adjust the focus knob to bring the image into focus. Close the IC Capture software.

  • 59

    Adjust the exposure time as in Step 42. This exposure time is set as the reference exposure.

  • 60

    Set different camera gains for the acquisition process. For each LED, we capture 3 images by setting the gain to 0 dB, 26 dB, and 26 dB, respectively (a gain of 26 dB amplifies the signals by 20 times).

    ? TROUBLESHOOTING

  • 61
    Turn on the selected LEDs sequentially using the following MATLAB code. For the 3 sets of images, we set the exposure times to 1, 1, and 3.5 times the reference exposure time from Step 59. The total acquisition time is ~40 seconds at a framerate of 18 frames per second. Note that we do not need to capture all 3 sets of images for every LED: the second and third sets will be over-exposed under brightfield illumination, and the first set will be under-exposed under darkfield illumination. Excluding these over- and under-exposed images can further reduce the acquisition time.
    lrImageSeq = zeros(3648, 5472, length(usedLed), 3, 'uint8');
    expTimeArray = [1, 1, 3.5];   % ratios used for setting the different exposure times
    gainArray = [0, 26, 26];      % gain array for image acquisition
    triggerconfig(vid, 'manual');              % set the trigger mode to manual
    set(vid, 'FramesPerTrigger', 1);           % set the number of frames captured in a single trigger
    vid.TriggerRepeat = length(usedLed) - 1;   % set to capture length(usedLed) images in total
    for captureLoop = 1:3   % loops for image acquisition
        frameAvail = 0;     % flag to check the camera's status
        start(vid); pause(0.1);
        src.Gain = gainArray(captureLoop);                       % set the gain
        src.Exposure = basicExptime*expTimeArray(captureLoop);   % set the exposure time
        fprintf(lightSource, 'F'); pause(0.1);                   % turn off all the LEDs
        for i = 1:length(usedLed)
            j = usedLed(i);   % the index of the LED to turn on
            fprintf(lightSource, controlLed(j, ledColour));   % turn on the LED
            trigger(vid);     % trigger the camera to capture an image
            numAvail = vid.FramesAvailable;   % check the feedback of the camera
            % Check whether the current trigger has ended
            while (numAvail ~= frameAvail + 1)
                numAvail = vid.FramesAvailable;
            end
            frameAvail = numAvail;
            pause(0.07);      % pause 0.07 seconds between acquisitions
        end
        fprintf(lightSource, 'F');   % turn off all the LEDs
        frames = vid.FramesAvailable;
        % Extract the captured images into lrImageSeq
        lrImageSeq(:,:,:,captureLoop) = squeeze(getdata(vid, frames));
        % Capture a background image
        backgroundImage(:,:,captureLoop) = getsnapshot(vid);
        stop(vid)
    end
  • 62

    Perform HDR combination of the captured 3 sets of images as in Step 52.

    ? TROUBLESHOOTING

  • 63

    Recover the object and update the pupil as in Steps 53-54.

    ▲CRITICAL STEP In this refinement step, the pupil is initialized as the pre-recovered pupil in Step 54. The ‘methodType’ in the code is set as ‘rPIE’ for the refinement process.

    ? TROUBLESHOOTING

  • 64

    Check whether the image quality is comparable to the typical results. The left panel of Fig. 4 shows the recovered images of blood cells. Other typical results are provided in the ‘Anticipated results’ section.

  • 65

    Save the results and close MATLAB.

Prepare the blood-coated sensor for CP ●Timing 1 h

▲CRITICAL In the CP set-up, a dense monolayer of blood cells serves as a high-performance computational bio-lens for the ptychographic reconstruction process. We recommend using goat blood for its low cost ($29 per 50 mL) and its small cell size (2-3 μm, among the smallest of mammalian red blood cells). Human blood obtained from a finger prick can also be used, although it carries a risk of spreading bloodborne diseases.

▲CRITICAL The goat blood can be smeared on the image sensor and then fixed with methanol (Steps 67-68). Alternatively, it can be smeared on a thin coverslip (Thorlabs, CG00C2) followed by methanol fixation. This blood-coated coverslip can then be attached to the image sensor, with the blood cells sandwiched in between the coverslip and the sensor’s coverglass (Steps 69-73). For both options, blood fixation is critical to preserve cell morphology over a long period.

  • 66

    Take 1 mL of goat blood (Lampire, 7202501) with a pipette and put it into a centrifuge tube. Centrifuge the tube at 1500 rpm for 5 minutes to separate the blood cells from the plasma. Alternatively, leave the tube in a refrigerator at 4 °C for 1 day; the blood cells will separate from the plasma under gravity.

  • 67

    Option 1: Smear blood on the image sensor (Steps 67-68). Use a pipette to transfer 2 μL of blood cells to one end of the image sensor (The Imaging Source, DMM 37UX226-ML). Position a glass slide at an angle of 30-45 degrees to the image sensor. Drag the slide backward carefully to smear the blood over the image sensor, as shown in Fig. 5b.

    ▲CRITICAL STEP If the blood-cell layer is too thin or too thick, we can use alcohol and tissue paper to remove the layer and repeat the smearing process. An ideal coded surface contains a dense and uniform monolayer of blood cells covering the entire image sensor. The right panel of Fig. 2 shows a sample image of the blood-cell layer profile.

    ? TROUBLESHOOTING

  • 68

    Air dry the blood-cell layer and fix it with methanol for 2 seconds.

  • 69

    Option 2: Smear blood on a coverslip. Use a pipette to transfer 2 μL of goat blood on one end of a thin coverslip (Thorlabs, CG00C2).

  • 70

    Use a clean glass slide positioned at an angle of 30-45 degrees to the coverslip. Drag the slide backward carefully to smear the blood over the entire coverslip, as shown in Fig. 5c.

  • 71

    Air dry the blood-cell layer and fix it with methanol for 2 seconds.

  • 72

    Prepare 2 mL silicone (Dow, SYLGARD 184 silicone elastomer kit) following a 10 (base) to 1 (curing agent) mixing ratio.

    ▲CRITICAL STEP If needed, use a vacuum desiccator (Amazon, SP Bel-Art 420200000) to remove air bubbles from the mixed liquid. Operating instructions for silicone preparation can be found on the manufacturer's website75.

  • 73

    Use a pipette to transfer 2 μL of silicone liquid to the centre of the image sensor. For the blood-coated coverslip, use a microscope to select a 7 mm by 7 mm region with a dense monolayer of blood cells; if such a region cannot be found, repeat Steps 69-71. Align this region with the image sensor. Attach the coverslip to the image sensor with the blood-cell layer and the silicone liquid sandwiched in between. Press the coverslip onto the image sensor and hold for 1 day until the two adhere firmly to each other.

Motorized stage development and system integration for CP ●Timing 1 d

▲CRITICAL We modify a compact XYZ 3-axis manual stage to translate the blood-coated image sensor in the lateral directions. Figure 7 shows the overall stage assembly process, where we replace the manual actuators with stepper motors for motorized control. Figure 8 shows the assembly process of the CP platform, where we integrate the baseboard (Fig. 8a), the slide holder (Fig. 8b), the motorized stage (Fig. 8c), the coded sensor (Fig. 8d), and the control board (Fig. 8e) into a compact system (Fig. 8f).

Fig. 7∣. Stage assembly process for CP.


(a) The original low-cost manual stage. (b) Different parts for stage modification. (c) Replace the actuator with a stepper motor. (d) The modified motorized stage.

Fig. 8∣. System integration for CP.


(a) Baseboard preparation. (b) Slide holder assembly. (c) Motorized stage assembly. (d) Coded sensor assembly. (e) Control board integration. (f) Schematic of the CP system.

▲CRITICAL In general, we do not need to collimate the laser light from the single-mode fibre for sample illumination (Step 83). However, the reconstruction quality may be degraded if the fibre is placed too close to the sample (<10 cm). If needed, one can collimate the laser light using a lens (Steps 84-85). A key consideration is to generate a large collimated beam so that the intensity is uniform across the entire image sensor.

  • 74

    Attach two heat sinks (Alpha, LPD40-3B) to the integrated circuit chips of the image sensor board, as shown in Fig. 8d.

  • 75

    Mount the 405-nm fibre-coupled laser (Thorlabs, LP405-SF10) using the laser diode mount (Thorlabs, LDM9LP), and connect the laser with the driver (Thorlabs, ITC4001).

    ▲CRITICAL STEP We only need <5 mW of output laser power, and the corresponding exposure time is in the millisecond range. The temperature-control module may be optional for such low-power operation. If needed, one can use a USB fan to prevent overheating of the laser diode during long-term operation. A simple current driver can also be used to drive the laser diode.

  • 76

    Prepare the 3D-printed parts for building the motorized stage. The related SolidWorks design files are provided in Supplementary Data, ‘CP_stageSolidWorks.zip’. Assemble the motorized stage according to Fig. 7. Another option is to use a commercially available motorized stage for translating the blood-coated sensor.

  • 77

    Prepare the acrylic base with holes using the drilling machine (Delta, 12 drill press). Figure 8a shows the design of the acrylic base. The SolidWorks design file is provided in Supplementary Data, ‘CP_base.SLDPRT’.

  • 78

    Make a sample holder using the milling machine (Roland, monoFab SRM-20). Attach the sample holder to the acrylic base using 4 optical posts (Thorlabs, TR3). Figure 8b shows the dimensions of the sample holder and the detailed assembly process. The SolidWorks design file is provided in Supplementary Data, ‘CP_sampleHolder.SLDPRT’.

  • 79

    Attach the motorized stage from Step 76 to the acrylic base, as shown in Fig. 8c.

  • 80

    Make a sensor holder using the milling machine. Assemble the blood-coated sensor onto the sensor holder. Attach the sensor holder onto the motorized stage using screws, as shown in Fig. 8d. The SolidWorks design file is provided in Supplementary Data, ‘CP_sensorHolder.SLDPRT’.

  • 81

    Place the GRBL 3-axis control board (Amazon, ASIN: B083BFBBVY) at the side of the acrylic base. Connect the wires from the motorized stage to the control board, as shown in Fig. 8e. The SolidWorks design files are provided in Supplementary Data, ‘CP_motorHolderSolidWorks.zip’.

  • 82

    Attach an optical post (Thorlabs, TR4) to the corner of the slide holder.

  • 83

    Option 1: Non-collimated beam from the fibre (Step 83). Attach a 1-inch tube slip ring (Thorlabs, SM1RC) to the post on the slide holder. Attach a fibre adaptor (Thorlabs, S120-FC) to an optical tube (Thorlabs, SM1L20) and mount the assembly to the tube slip ring. Adjust the axial position of the assembly. Connect the single-mode fibre to the fibre adaptor, as shown in Fig. 8f.

    ▲CRITICAL STEP In the calibration process, the illumination beam information is coded into the blood-cell layer profile on the image sensor. Therefore, we can simply model the spherical wave from the fibre as a plane wave for both the calibration and the subsequent CP experiments. However, if the fibre source is placed too close to the sample (<10 cm), this plane-wave approximation may fail, and the resulting image quality will be degraded. We recommend a distance of >15 cm between the fibre source and the coded sensor.

    ? TROUBLESHOOTING

  • 84

    Option 2: Collimated beam (Steps 84-85). Connect the single-mode fibre to the fibre adaptor (Thorlabs, S120-FC). Turn on the laser. Collimate the fibre-coupled light source using a 2-inch, 200-mm bi-convex lens (Thorlabs, LB1199) mounted on an adjustable lens tube (Thorlabs, SM2V15). Turn off the laser after the collimation process.

    ! CAUTION Wear eye protection goggles when working with the laser. Proper laser safety training is also required.

  • 85

    Attach a 2-inch tube slip ring (Thorlabs, SM2RC) to the post on the slide holder. Attach the collimated light source to the tube slip ring. Adjust the axial position of the assembly. We recommend a distance of 5-10 cm between the collimating lens and the image sensor.

  • 86

    For both options, turn on the laser and align the beam so that the image sensor is located at the beam centre. Turn off the laser after the alignment process.

Motion tracking based on the captured CP images ●Timing 0.5-1 h

▲CRITICAL We estimate the positional shifts of the coded sensor via the iterative correlation analysis approach discussed in Box 3. Precise mechanical scanning is not required in our implementation.

  • 87

    Install MATLAB with the Image Acquisition Toolbox.

  • 88

    Install IC Capture and the driver for the 33U series (Version 5.1.0.1719) of the image sensor (The Imaging Source, DMM 37UX226-ML).

  • 89

    Install the CUDA driver (CUDA Toolkit v11.7) for the Nvidia GPU.

  • 90

    Install the driver for the GRBL motor control board. The driver is provided in Supplementary Software, ‘CP_CH340SER.EXE’.

  • 91

    Connect the motor control board to the computer. Connect the control board to a 24 V/2 A power source.

  • 92
    Connect the image sensor to the computer using a USB 3.0 type-C cable. Use the following MATLAB code to initialize the parameters of the sensor:
    vid = videoinput('tisimaq_r2013_64', 1, 'Y800 (4000x3000)');
    src = getselectedsource(vid);
    vid.FramesPerTrigger = 1;   % set the number of frames captured in a single trigger event
    src.Exposure = 0.001;       % initialize the exposure time
    src.ExposureAuto = 'Off';   % turn off the auto exposure adjustment
    src.FrameRate = 30;         % set the frame rate to the maximum value
    src.Gain = 0;               % set the gain of the sensor to 0
    src.GainAuto = 'Off';       % turn off the auto gain adjustment
    src.Sharpness = 0;          % set the sharpness of the sensor to 0
    src.Trigger = 'Enable';     % enable trigger for image acquisition
  • 93

    Use a stained blood smear slide (Carolina, 313158) as a calibration object. Attach this slide to the sample holder (with the blood cells and the coverslip facing the coded sensor). Adjust the axial position of the coded sensor using the z stage. We recommend a small distance of <0.5 mm between the object and the coded surface.

    ▲CRITICAL STEP The blood smear slide is an ideal calibration object as it contains fine spatial features over the entire microscope slide.

  • 94

    Turn on the laser. Adjust the exposure time of the image sensor so that the ratio of over-exposed pixels is less than 0.01%.

    ▲CRITICAL STEP We recommend avoiding capturing severely over-exposed raw images in the acquisition process. A darkroom is also recommended to reduce the signals generated by ambient light.

    ? TROUBLESHOOTING

  • 95
    Turn on the stage motor, and initialize the stage scanning parameters using the following MATLAB code:
    fprintf(Stage, 'Set_XaxisAccel(500)');  pause(0.5);   % maximize the x-axis acceleration
    fprintf(Stage, 'Set_YaxisAccel(500)');  pause(0.5);   % maximize the y-axis acceleration
    fprintf(Stage, 'Set_XaxisSpeed(1000)'); pause(0.5);   % maximize the x-axis speed
    fprintf(Stage, 'Set_YaxisSpeed(1000)'); pause(0.5);   % maximize the y-axis speed
  • 96
    Capture 1500 raw images by translating the sensor in a spiral route. We recommend a step size of ~2 microns in the scanning process. The following MATLAB code is provided to move the stage and capture the raw images. The detailed image acquisition code is provided in Supplementary Software, ‘CP_imageAcquisitionCalibration.m’.
    vid.FramesPerTrigger = 1;        % capture one image for each trigger event
    vid.TriggerRepeat = imNum - 1;   % set to capture 1500 images in total
    start(vid);
    preNumAvail = 0;                 % flag to check the image sensor's status
    for i = 1:imNum
        disp(i)
        x1 = xPosDefined(i);         % load the pre-defined x position
        y1 = yPosDefined(i);         % load the pre-defined y position
        fprintf(Stage, MotorXYAbsoluteMove(y1, x1));   % move the stage
        pause(0.1);                  % pause 0.1 seconds for the stage movement
        trigger(vid);                % trigger the image sensor to capture one image
        numAvail = vid.FramesAvailable;
        % Check whether the current trigger has ended
        while (numAvail ~= preNumAvail + 1)
            numAvail = vid.FramesAvailable;   % check the feedback of the sensor
        end
        preNumAvail = numAvail;
        pause(0.1);                  % pause 0.1 seconds between acquisitions
    end
    fprintf(Stage, MotorXYAbsoluteMove(0, 0));   % move the stage back to the original position
    frames = vid.FramesAvailable;
    imRaw = squeeze(getdata(vid, frames));   % read the images out of the sensor's buffer
    stop(vid);
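    The pre-defined positions ‘xPosDefined’ and ‘yPosDefined’ used above can be generated, for example, with a Fermat spiral, which gives a roughly uniform spacing between neighbouring points; the parameterization below is an illustrative assumption, and ‘CP_imageAcquisitionCalibration.m’ contains the version we use:

    imNum = 1500;
    stepSize = 2;                % spacing scale, in microns; tune for ~2-micron steps
    golden = pi*(3 - sqrt(5));   % golden angle, in radians
    n = (1:imNum)';
    r = stepSize*sqrt(n);        % Fermat spiral: the radius grows as sqrt(n)
    xPosDefined = r.*cos(n*golden);
    yPosDefined = r.*sin(n*golden);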
  • 97

    Turn off the laser and save the captured raw images.

  • 98
    Crop a 500×500-pixel region from each captured image. Convert the image format from ‘uint8’ to ‘single’. Generate the initial reference image ‘imRefInitial’ using the following MATLAB code (also refer to Box 3 for the detailed discussion).
    imRefInitial = imRawCrop(:,:,1)./(sum(imRawCrop, 3));
    % imRefInitial: the initial reference image
    % imRawCrop: the cropped raw images
  • 99

    Estimate the subpixel shifts by locating the maximum point of the cross-correlation maps between the reference image and all cropped tiles. The related MATLAB code is provided below, where the function ‘dftregistration’71 is used for cross-correlation analysis.

    ? TROUBLESHOOTING
    % Estimate the shift between the reference image and each cropped raw image using cross-correlation
    [output, ~] = dftregistration(fft2(imRefInitial), fft2(imRawCrop(:,:,i)), 100);
    locY(i) = output(3);   % locY: the shift along the y axis
    locX(i) = output(4);   % locX: the shift along the x axis
  • 100
    Shift back the cropped tiles based on the estimated shifts. Generate the refined reference image ‘imRefRefine’ using the following MATLAB code:
    for i = 1:imNum
        % Generate the phase factor for the sub-pixel shift
        Hs = exp(-1j*2*pi.*(FX0.*locX(i)/imSize0 + FY0.*locY(i)/imSize0));
        imRefRefineSum = imRefRefineSum + ifft2(fft2(imRawCrop(:,:,i)).*Hs);   % sub-pixel shift
    end
    imRefRefine = imRefRefineSum/imNum;   % imRefRefine: the refined reference image
  • 101

    Refine the translational shifts via cross-correlation analysis between the reference image and all cropped tiles.

  • 102

    Repeat Steps 100-101 until the currently estimated route matches the previous one; a minimal convergence-loop sketch is given after this step.

    ? TROUBLESHOOTING
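    A minimal convergence loop for Steps 100-102, reusing the variables defined earlier in this section (the tolerance and iteration cap are assumptions):

    tol = 0.05;             % convergence tolerance, in pixels (assumption)
    for refineLoop = 1:10   % cap on the number of refinement rounds
        locXPrev = locX; locYPrev = locY;
        % Step 100: rebuild the reference image from the shifted-back tiles
        imRefSum = zeros(size(imRawCrop,1), size(imRawCrop,2), 'single');
        for i = 1:imNum
            Hs = exp(-1j*2*pi.*(FX0.*locX(i)/imSize0 + FY0.*locY(i)/imSize0));
            imRefSum = imRefSum + real(ifft2(fft2(imRawCrop(:,:,i)).*Hs));
        end
        imRefRefine = imRefSum/imNum;
        % Step 101: re-estimate the shifts against the refined reference
        for i = 1:imNum
            [output, ~] = dftregistration(fft2(imRefRefine), fft2(imRawCrop(:,:,i)), 100);
            locY(i) = output(3); locX(i) = output(4);
        end
        % Step 102: stop once the estimated route no longer changes
        if max(abs([locX-locXPrev, locY-locYPrev])) < tol
            break;
        end
    end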

Recover the coded surface profile for CP ●Timing 1-2 h

▲CRITICAL We use the calibration blood smear slide to recover the coded surface profile on the image sensor.

▲CRITICAL We recommend using a GPU with at least 24 GB of memory for the reconstruction process. One can either recover the full field of view of the sensor or crop a smaller region for testing. The cropped region should be at least 1024×1024 pixels; a smaller region can degrade the spatial resolution.

  • 103

    Shift back and average all measurements. Initialize the object wavefront ‘objectIniGuess’ using the following MATLAB code.

    ▲CRITICAL STEP We recommend using at least 3-fold up-sampling to achieve a good spatial resolution.
    objectSum = zeros(imSize0, imSize0);
    for i = 1:imNum
        % Generate the phase factor for the sub-pixel shift
        Hs = exp(-1j*2*pi.*(FX0.*locX(i)/imSize0 + FY0.*locY(i)/imSize0));
        % Sum the shifted-back measurements; imRaw: the captured raw images
        objectSum = objectSum + ifft2(fft2(sqrt(imRaw(:,:,i))).*Hs);
    end
    % Back-propagate the up-sampled average to initialize the object wavefront
    objectIniGuess = ifft2(ifftshift(invH_d2.*padarray(fftshift(fft2(objectSum/imNum)), [imSize0*(mag-1)/2, imSize0*(mag-1)/2])));
  • 104
    Initialize the coded surface profile ‘CSIniGuess’ using the following MATLAB code:
    CSIniGuess = ifft2(ifftshift(invH_d2.*padarray(fftshift(fft2(mean(sqrt(imRaw), 3))), [imSize0*(mag-1)/2, imSize0*(mag-1)/2])));
  • 105

    Use the following MATLAB code to perform pixel super-resolution reconstruction. The code for iterative reconstruction is provided in Supplementary Software, ‘CP_simulationRecover.m’.

    ▲CRITICAL STEP The parameters ‘alphaO’ and ‘alphaS’ in the rPIE algorithm are set to 1 in our implementation, in which case the update reduces to the ePIE algorithm40.
    for iLoop = 1:loopNum
        for i = 1:imNum
            % Shift the object wavefront
            Hs = exp(-1j*2*pi.*(FX.*(-locX(i))/imSize0 + FY.*(-locY(i))/imSize0));
            objectRecoveryShift = ifft2(fft2(objectRecovery).*Hs);
            % Exit wavefront at the coded surface plane
            waveCSPlane = objectRecoveryShift.*CSRecovery;
            % Propagate the exit wavefront to the sensor plane
            waveSensorPlane = ifft2(ifftshift(H_d2.*fftshift(fft2(waveCSPlane))));
            % Down-sample the intensity at the sensor plane
            intenSensorPlane = conv2(abs(waveSensorPlane).^2, PSFpixel, 'same');
            intenDownSensorPlane = intenSensorPlane(centerPixel:mag:end, centerPixel:mag:end);
            % Update the wavefront
            ratioMap = sqrt(imRaw(:,:,i))./sqrt(intenDownSensorPlane);
            ratioMap = imresize(gather(ratioMap), mag, 'nearest');
            waveSensorPlaneUpdate = ratioMap.*waveSensorPlane;
            % Propagate the updated exit wave back to the coded surface plane
            waveCSPlaneUpdate = ifft2(ifftshift(invH_d2.*fftshift(fft2(waveSensorPlaneUpdate))));
            % Use the rPIE algorithm to update the shifted object wavefront
            objectRecoveryShift = objectRecoveryShift + conj(CSRecovery).*(waveCSPlaneUpdate - waveCSPlane)./(alphaO.*max(max(abs(CSRecovery).^2)) + (1-alphaO).*abs(CSRecovery).^2);
            % Use the rPIE algorithm to update the coded surface profile (calibration only)
            CSRecovery = CSRecovery + conj(objectRecoveryShift).*(waveCSPlaneUpdate - waveCSPlane)./(alphaS.*max(max(abs(objectRecoveryShift).^2)) + (1-alphaS).*abs(objectRecoveryShift).^2);
            % Shift the object wavefront back
            Hs = exp(-1j*2*pi.*(FX.*locX(i)/imSize0 + FY.*locY(i)/imSize0));
            objectRecovery = ifft2(fft2(objectRecoveryShift).*Hs);
        end
    end
  • 106

    Check the quality of the recovered coded surface. The sample image of the coded surface can be found in the ‘System calibration’ section of Fig. 2. Save the calibrated coded surface profile as a .mat file.

    ■PAUSE POINT The saved .mat file can be used for the subsequent experiments. Load this file when imaging other specimens.

    ? TROUBLESHOOTING

Image acquisition and reconstruction for CP ●Timing 1-2 h

▲CRITICAL In this section, we perform image acquisition and reconstruction using the pre-recovered coded layer profile. The images are captured while the sample is in continuous motion; acquiring the entire dataset takes only ~12 seconds. The reconstruction process shares a similar workflow with the previous section.

  • 107

    Attach the testing specimen to the sample holder. Adjust the axial position of the coded sensor using the z stage.

    ! CAUTION For live-cell imaging, we recommend covering the electronics with a waterproof coating.

  • 108
    Turn on the laser. Reduce the scanning speed using the following MATLAB code. At this speed, the average step size between adjacent measurements is ~1.5 μm when operating in continuous acquisition mode.
    fprintf(Stage, 'Set_XaxisSpeed(25)');  pause(0.5);   % reduce the x-axis speed
    fprintf(Stage, 'Set_YaxisSpeed(600)'); pause(0.5);   % reduce the y-axis speed
  • 109

    Acquire 300 images at a framerate of 30 fps. The total acquisition time is ~12 seconds. The acquisition code is provided in Supplementary Software, ‘CP_imageAcquisitionExperiment.m’.

  • 110

    Repeat Steps 107 and 109 for additional sample acquisitions, if needed. Repeat Step 109 for time-lapse monitoring. Turn off the laser after the image acquisition process.

  • 111

    Refer to Steps 98-102 to estimate the positional shifts from the captured raw images. Supplementary Fig. 3 shows the estimated scanning route of the coded sensor.

    ▲CRITICAL STEP Avoid selecting a region with few objects for positional tracking; otherwise, the estimated positional shifts may be inaccurate.

    ? TROUBLESHOOTING

  • 112

    Load the pre-recovered coded surface profile. Initialize the object wavefront as in Step 103.

  • 113

    Perform iterative reconstruction using the MATLAB code in Step 105.

    ▲CRITICAL STEP We typically use 5 iterations for image reconstruction. For the first 3-4 iterations, we only update the object exit wavefront, and for the last 1-2 iterations, we jointly update both the object and the coded surface profile for refinement.

    ▲CRITICAL STEP For live-cell imaging experiments, the temporal correlation between adjacent time points can be adopted as an additional constraint for the reconstruction process. One can use the recovered object exit wavefront at the previous time point as the initial guess for the current time point.

    ? TROUBLESHOOTING
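    A minimal sketch of the warm start described in the preceding step (variable names are assumptions):

    if t == 1
        objectRecovery = objectIniGuess;        % Step 112 initialization
    else
        objectRecovery = objectRecoveredPrev;   % recovered exit wavefront from time point t-1
    end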

  • 114

    Digitally propagate the recovered object exit wavefront back to the object plane; a minimal propagation sketch is given below.
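    A minimal sketch of this back-propagation using the angular spectrum method; ‘d1’ denotes the object-to-coded-surface distance (Fig. 1), and the numeric values and variable names are assumptions:

    lambda = 405e-9;    % illumination wavelength, in metres
    d1 = 0.3e-3;        % object-to-coded-surface distance (assumption)
    pix = 0.4625e-6;    % reconstruction pixel size (assumption)
    np = size(objectRecovery, 1);
    fx = (-np/2:np/2-1)/(np*pix);   % spatial frequency axis, assuming a square array
    [FXp, FYp] = meshgrid(fx, fx);
    fsq = 1/lambda^2 - FXp.^2 - FYp.^2;
    H_back = exp(-1j*2*pi*d1*sqrt(fsq.*(fsq>0))).*(fsq>0);   % back-propagation kernel
    objectPlane = ifft2(ifftshift(H_back.*fftshift(fft2(objectRecovery))));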

  • 115

    Check whether the image quality is comparable to the typical results. The left panel of Fig. 6 shows the recovered images of blood cells. Other typical results are provided in the ‘Anticipated results’ section.

  • 116

    Save the results as a .mat file and close MATLAB.

Timing

Steps 1-12, programmable light source development for FP: 3-5 h

Steps 13-31, system integration and alignment for FP: 1-2 h

Steps 32-48, incident angle calibration for FP: 0.5-1 h

Steps 49-56, recover the pupil aberration for FP: 2-3 h

Steps 57-65, image acquisition and reconstruction for FP: 0.5-1 h

Steps 66-73, prepare the blood-coated sensor for CP: 1 h

Steps 74-86, motorized stage development and system integration for CP: 1 d

Steps 87-102, motion tracking based on the captured CP images: 0.5-1 h

Steps 103-106, recover the coded surface profile for CP: 1-2 h

Steps 107-116, image acquisition and reconstruction for CP: 1-2 h

Troubleshooting

For FP imaging, we summarize representative problems in Fig. 9. Typical raw FP measurements of a blood smear slide are shown in Fig. 9a, and the corresponding FP recovered images are shown in Fig. 9b. In comparison, Fig. 9c shows FP reconstructions with different underlying problems. For lensless CP imaging, we summarize representative problems in Fig. 10. The raw image and the correct CP reconstruction are shown in Fig. 10a; Fig. 10b shows raw images and their CP reconstructions with different underlying problems. Table 1 discusses the causes of these imaging problems in both systems and provides the corresponding troubleshooting advice.

Fig. 9∣. Troubleshooting for FP.


(a) Representative raw measurements for FP. (b) The correct FP reconstruction. (c) The FP reconstructions with different underlying problems in the reconstruction process.

Fig. 10∣. Troubleshooting for CP.


(a) The correct CP reconstruction. (b) The CP reconstructions with different underlying problems.

Table 1 ∣.

Troubleshooting table for FP and CP

Step ∣ Problem ∣ Possible reason ∣ Solution
Step 19 ∣ Poor quality of the recovered phase (Fig. 9c1). ∣ The LED array is placed too close to the sample, resulting in insufficient spectrum overlap in Fourier space. ∣ Increase the distance between the sample stage and the illuminator.
Step 25 ∣ The recovered phase exhibits shadow effects (Fig. 9c2). ∣ The incident wavevectors of the LED elements are inaccurate. ∣ Re-calibrate the incident wavevectors for the reconstruction process.
Step 36 ∣ Degraded reconstruction quality (Fig. 9c3). ∣ The magnification factor of the system has not been calibrated. ∣ Follow Steps 33-36 to calibrate the magnification factor of the objective lens and obtain the correct detector pixel size at the object plane.
Step 52 ∣ The recovered phase has low contrast and a non-uniform background (Fig. 9c4). ∣ Too many images at the brightfield-to-darkfield transition zone were excluded. ∣ When the illumination NA is close to the objective NA, the captured image contains low-frequency phase information that is critical for phase recovery27. Instead of excluding the entire image in Step 52, generate a binary mask to exclude only the pixels in the brightfield-to-darkfield transition zone34,66. Similarly, exclude pixels exposed to the system's stray light (also refer to Supplementary Fig. 4).
Step 60 ∣ Noisy reconstruction (Fig. 9c5). ∣ The camera gain is too small. ∣ Set a larger camera gain or increase the exposure time.
Step 62 ∣ Degraded image quality (Fig. 9c6). ∣ The high-dynamic-range combination was not performed. ∣ Capture three sets of images with different camera gains and exposures, then perform the HDR combination as in Step 52.
Step 63 ∣ The recovered phase has a higher value in the background than in the feature region (Fig. 9c7). ∣ The kx and ky axes are reversed in the reconstruction process. ∣ Reverse the kx and ky axes.
Step 63 ∣ Square artefacts appear in the reconstruction (Fig. 9c8). ∣ The kx and ky axes are swapped and/or reversed in the reconstruction process. ∣ Swap and/or reverse the kx and ky axes.
Step 63 ∣ Artefacts appear in the recovered images (Fig. 9c9). ∣ The iterative process uses a random order of the measurements. ∣ Use a recovery sequence ranked by the illumination NAs of the LED elements74.
Step 63 ∣ Degraded reconstruction quality at the edge of the field of view (left bottom panel of Fig. 11). ∣ The pupil is not correctly initialized from the calibration experiment. ∣ Repeat the calibration experiment and properly initialize the pupil aberration.
Step 67 ∣ Degraded reconstruction quality (Fig. 10b1). ∣ The blood-cell layer is non-uniform and sparse. ∣ Use alcohol and tissue paper to remove the blood-cell layer on the sensor. Repeat the smearing process as in Step 67.
Step 67 ∣ Failure to recover the images (Fig. 10b2). ∣ The blood-cell layer is too thick in certain regions. ∣ Use alcohol and tissue paper to remove the blood-cell layer on the sensor. Repeat the smearing process as in Step 67.
Step 83 ∣ Degraded reconstruction quality (Fig. 10b3). ∣ The laser fibre is placed too close to the sample. ∣ Place the fibre tip at least 15 cm away from the coded sensor.
Step 94 ∣ The raw image has too many over-exposed pixels (Fig. 10b4). ∣ The exposure time is not properly set, or the output power of the laser fluctuates over time. ∣ Adjust the exposure time as in Step 94. Check the mean intensity of different captured images; if it changes significantly between measurements, check the drive current and the temperature of the laser diode.
Step 94 ∣ The contrast of the raw images is low, with a strong background. ∣ Ambient light creates an incoherent background on the captured images. ∣ Perform the experiment in a dark environment.
Step 99 ∣ The estimated positions are all zeros. ∣ The blood-cell layer is too thick. ∣ Use alcohol and tissue paper to remove the blood-cell layer on the sensor. Repeat the smearing process as in Step 67.
Step 102 ∣ The estimated scanning route differs from the preset spiral route in the calibration process. ∣ The estimated positions are inaccurate. ∣ Repeat Steps 100-101 to further refine the estimated route.
Step 106 ∣ The recovered coded surface profile is defocused and blurry. ∣ The distance between the coded surface and the pixel array is not properly set. ∣ Digitally propagate the blood-coated surface profile to different z positions, select the one with the best performance, and set it as ‘d2’ (Fig. 1). Repeat the calibration process in Steps 103-105.
Step 111 ∣ The estimated scanning route deviates significantly from the preset route (Fig. 10b5). ∣ The estimated positions are inaccurate. ∣ Repeat Steps 100-101 to further refine the estimated route. If the problem persists, select another region for motion tracking.
Step 113 ∣ The phase wraps cannot be properly recovered (right bottom panel of Fig. 11). ∣ The coded surface profile is jointly updated with the object in the reconstruction process. ∣ Do not update the coded surface profile in the reconstruction process.

We note that, for both FP and CP, it is important to obtain the ptychographic probe from a calibration experiment. Figure 11 shows the reconstructions with and without the pre-calibrated ptychographic probes; without them, both approaches fail to converge to the correct solutions.

Fig. 11∣. Reconstructions with pre-calibrated ptychographic probes versus blind reconstructions.


(Left) Based on the pupil aberration obtained from the calibration experiment, FP can correctly recover the blood cells located at the edge of the field of view. (Right) Based on the blood-cell layer obtained from the calibration experiment, CP can recover the slowly varying bacterial colony profile with many 2π wraps. For both FP and CP, blind ptychographic reconstructions fail to converge to the correct solutions.

Anticipated results

Anticipated results for FP

The successful implementation of the protocol yields an FP system capable of imaging different biospecimens with high resolution and a large field of view. Figure 12a shows a recovered gigapixel image of a blood smear slide43. The insets of Fig. 12a1 show the aberration pupils of two different regions at the edge of the field of view. These pupils are recovered by updating the weights of the Zernike modes. Figures 12a2 and 12a3 show zoomed-in views of the recovered blood smear slide, where we can see a significant resolution improvement from the raw images to the FP results. Since the employed LED illuminator contains red, green, and blue LEDs, we can also perform sequential FP acquisitions at these wavelengths for colour imaging. Figure 12b shows the raw images and the colour FP reconstructions of stained blood smear and tissue section samples34. FP can also be used for label-free live-cell imaging over a long period. Figure 12c1 shows the recovered phase image of live human cervical adenocarcinoma epithelial (HeLa) cells on a Petri dish27. Cell mitosis and apoptosis events can be monitored in a time-lapse experiment (Fig. 12c2).

Fig. 12∣. Representative imaging results using FP.


(a1) The recovered gigapixel image of a blood smear slide and its zoomed-in views (a2-a3)43. (b) The recovered colour and phase images of a blood smear slide (b1) and a stained tissue section (b2)34. (c1) The recovered phase image of live HeLa cells on a Petri dish27. (c2) Monitoring the cell culture growth in a longitudinal study.

Anticipated results for lensless CP

CP shares the same benefits as FP for large-field-of-view, high-resolution imaging. Its lensless nature and modulation strategy provide additional benefits for imaging thick biospecimens with nanometre sensitivity. Figure 13a1 shows the recovered whole slide image of a blood smear slide17. The zoomed-in views are shown in Figs. 13a2-13a5, where we can clearly resolve the detailed structures of the white blood cells. Figure 13b shows the recovered images of different crystals in a urine sediment slide19. The phase profiles of these crystals contain many 2π wraps that are difficult to recover using other common phase retrieval approaches. Figure 13c1 shows the recovered whole slide phase image of an unstained thyroid smear18, and Fig. 13c2 shows the recovered height map of a zoomed-in region.

Fig. 13∣. Imaging fixed biospecimens using CP.


(a1) The recovered whole slide image of a blood smear slide17. (a2-a5) Zoomed-in views of the white blood cells in (a1). (b) The recovered phase images of different crystals in a urine sediment slide19. (c1) The recovered whole slide phase image of an unstained thyroid smear18. (c2) The recovered height map of a zoomed-in region of (c1).

In addition to imaging fixed biospecimens, we can also use CP to perform high-resolution live-cell monitoring over a large field of view. Figure 14a1 shows the recovered phase images of bacterial cells at different time points20. Figure 14a2 shows the 3D phase map of the microcolonies, where we can directly observe the formation of layered structures. In particular, the projected line traces in Fig. 14a2 show two-layer and three-layer structures, with ~0.5 radians of phase accumulation per layer. Multilayer formation occurs at the centre of each colony, while a monolayer remains in the outer regions. Figure 14b1 shows the recovered phase image of live human embryonic kidney 293 (HEK 293) cells over a large field of view. A zoomed-in view of the raw image is provided in Fig. 14b2. The corresponding recovered phase images at different time points are shown in Fig. 14b3-14b5, where we can clearly observe the cell proliferation process.

Fig. 14∣. Time-lapse monitoring of live cells using lensless CP.


(a1) The recovered phase images of bacterial cultures at different time points. (a2) The recovered phase map reveals the layered structure of 3D bacterial colonies. (b1) The recovered high-resolution phase image of live HEK 293 cells over a large field of view. (b2) The zoomed-in view of the captured raw image. (b3-b5) The corresponding recovered phase images at different time points.

Supplementary Material

Supplementary Information
Supplementary Video 1
Supplementary Video 2

Acknowledgements

We thank Drs. Zichao Bian and Azady Pirhanov for their assistance in sample preparation. This work was partially supported by the UConn SPARK grant (G.Z.), National Science Foundation 2012140 (G.Z.), and National Institutes of Health U01-NS113873 (B.F. and G.Z.). P.S. also acknowledges the support of the Thermo Fisher Scientific fellowship.

Footnotes

Data and code availability

The related SolidWorks design files, MATLAB codes, and Arduino codes are provided in Supplementary Data and Supplementary Software.

Competing interests

G.Z. is a named inventor on several patents related to Fourier ptychography. G.Z. has also filed several patent applications related to spatial-domain coded ptychography.

References:

  • 1.Hoppe W Diffraction in inhomogeneous primary wave fields. 1. Principle of phase determination from electron diffraction interference. Acta Crystallographica Section a-Crystal Physics Diffraction Theoretical and General Crystallography, 495-& (1969). [Google Scholar]
2. Guizar-Sicairos, M. & Thibault, P. Ptychography: A solution to the phase problem. Physics Today 74, 42–48 (2021).
3. Faulkner, H. M. L. & Rodenburg, J. M. Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm. Physical Review Letters 93, 023903 (2004).
4. Sayre, D. Some implications of a theorem due to Shannon. Acta Crystallographica 5, 843 (1952).
5. Gerchberg, R. W. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237–246 (1972).
6. Li, P. & Maiden, A. Lensless LED matrix ptychographic microscope: problems and solutions. Applied Optics 57, 1800–1806 (2018).
7. Fienup, J. R. Phase retrieval algorithms: a comparison. Applied Optics 21, 2758–2769 (1982).
8. Mudanyali, O. et al. Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications. Lab on a Chip 10, 1417–1428 (2010).
9. Thibault, P. et al. High-resolution scanning x-ray diffraction microscopy. Science 321, 379–382 (2008).
10. Seaberg, M. D. et al. Tabletop nanometer extreme ultraviolet imaging in an extended reflection mode using coherent Fresnel ptychography. Optica 1, 39–44 (2014).
11. Park, Y., Depeursinge, C. & Popescu, G. Quantitative phase imaging in biomedicine. Nature Photonics 12, 578–589 (2018).
12. Stockmar, M. et al. Near-field ptychography: phase retrieval for inline holography using a structured illumination. Scientific Reports 3, 1–6 (2013).
13. Ou, X., Zheng, G. & Yang, C. Embedded pupil function recovery for Fourier ptychographic microscopy. Optics Express 22, 4960–4972 (2014).
14. Pfeiffer, F. X-ray ptychography. Nature Photonics 12, 9–17 (2018).
15. Jiang, Y. et al. Electron ptychography of 2D materials to deep sub-ångström resolution. Nature 559, 343–349 (2018).
16. Zheng, G., Horstmeyer, R. & Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nature Photonics 7, 739 (2013).
17. Jiang, S. et al. Resolution-enhanced parallel coded ptychography for high-throughput optical imaging. ACS Photonics 8, 3261–3271 (2021).
18. Jiang, S. et al. High-throughput digital pathology via a handheld, multiplexed, and AI-powered ptychographic whole slide scanner. Lab on a Chip 22, 2657–2670 (2022).
19. Jiang, S. et al. Blood-coated sensor for high-throughput ptychographic cytometry on a Blu-ray disc. ACS Sensors 7, 1058–1067 (2022).
20. Jiang, S. et al. Ptychographic sensor for large-scale lensless microbial monitoring with high spatiotemporal resolution. Biosensors and Bioelectronics 196, 113699 (2022).
21. Park, J., Brady, D. J., Zheng, G., Tian, L. & Gao, L. Review of bio-optical imaging systems with a high space-bandwidth product. Advanced Photonics 3, 044001 (2021).
22. Horstmeyer, R., Ou, X., Zheng, G., Willems, P. & Yang, C. Digital pathology with Fourier ptychography. Computerized Medical Imaging and Graphics 42, 38–43 (2015).
23. Williams, A. J. et al. Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis. Journal of Biomedical Optics 19, 066007 (2014).
24. Ou, X., Horstmeyer, R., Yang, C. & Zheng, G. Quantitative phase imaging via Fourier ptychographic microscopy. Optics Letters 38, 4845–4848 (2013).
25. Horstmeyer, R., Chung, J., Ou, X., Zheng, G. & Yang, C. Diffraction tomography with Fourier ptychography. Optica 3, 827–835 (2016).
26. Jiang, S. et al. Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation. Lab on a Chip 20, 1058–1065 (2020).
27. Sun, J., Zuo, C., Zhang, J., Fan, Y. & Chen, Q. High-speed Fourier ptychographic microscopy based on programmable annular illuminations. Scientific Reports 8, 1–12 (2018).
28. Tian, L. et al. Computational illumination for high-speed in vitro Fourier ptychographic microscopy. Optica 2, 904–911 (2015).
29. Kim, J., Henley, B. M., Kim, C. H., Lester, H. A. & Yang, C. Incubator embedded cell culture imaging system (EmSight) based on Fourier ptychographic microscopy. Biomedical Optics Express 7, 3097–3110 (2016).
30. Chan, A. C. et al. Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes). Scientific Reports 9, 1–12 (2019).
31. Wakefield, D. L. et al. Cellular analysis using label-free parallel array microscopy with Fourier ptychography. Biomedical Optics Express 13, 1312–1327 (2022).
32. Chung, J., Ou, X., Kulkarni, R. P. & Yang, C. Counting white blood cells from a blood smear using Fourier ptychographic microscopy. PLoS ONE 10, e0133489 (2015).
33. Song, P. et al. Optofluidic ptychography on a chip. Lab on a Chip 21, 4549–4556 (2021).
34. Zheng, G., Shen, C., Jiang, S., Song, P. & Yang, C. Concept, implementations and applications of Fourier ptychography. Nature Reviews Physics 3, 207–223 (2021).
35. Zuo, C., Sun, J., Li, J., Asundi, A. & Chen, Q. Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography. Optics and Lasers in Engineering 128, 106003 (2020).
36. Dong, S. et al. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging. Optics Express 22, 13586–13599 (2014).
37. Holloway, J., Wu, Y., Sharma, M. K., Cossairt, O. & Veeraraghavan, A. SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography. Science Advances 3, e1602564 (2017).
38. Thibault, P., Dierolf, M., Bunk, O., Menzel, A. & Pfeiffer, F. Probe retrieval in ptychographic coherent diffractive imaging. Ultramicroscopy 109, 338–343 (2009).
39. Guizar-Sicairos, M. & Fienup, J. R. Phase retrieval with transverse translation diversity: a nonlinear optimization approach. Optics Express 16, 7264–7278 (2008).
40. Maiden, A. M. & Rodenburg, J. M. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy 109, 1256–1262 (2009).
41. Chang, H., Enfedaque, P. & Marchesini, S. Blind ptychographic phase retrieval via convergent alternating direction method of multipliers. SIAM Journal on Imaging Sciences 12, 153–185 (2019).
42. Fannjiang, A. & Chen, P. Blind ptychography: uniqueness and ambiguities. Inverse Problems 36, 045005 (2020).
43. Song, P. et al. Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology. APL Photonics 4, 050802 (2019).
44. Gustafsson, M. G. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. Journal of Microscopy 198, 82–87 (2000).
45. Mudry, E. et al. Structured illumination microscopy using unknown speckle patterns. Nature Photonics 6, 312–315 (2012).
46. Liang, M. & Yang, C. Implementation of free-space Fourier ptychography with near maximum system numerical aperture. Optics Express 30, 20321–20332 (2022).
47. Schiske, P. Image reconstruction by means of focus series. In Proceedings of the 4th European Conference on Electron Microscopy (Tipografia Poliglotta, Rome, 1968).
48. Bao, P., Zhang, F., Pedrini, G. & Osten, W. Phase retrieval using multiple illumination wavelengths. Optics Letters 33, 309–311 (2008).
49. Greenbaum, A. & Ozcan, A. Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy. Optics Express 20, 3129–3143 (2012).
50. Luo, W., Zhang, Y., Feizi, A., Göröcs, Z. & Ozcan, A. Pixel super-resolution using wavelength scanning. Light: Science & Applications 5, e16060 (2016).
51. Zuo, C. et al. Transport of intensity equation: a tutorial. Optics and Lasers in Engineering, 106187 (2020).
52. Xu, W., Jericho, M., Meinertzhagen, I. & Kreuzer, H. Digital in-line holography for biological applications. Proceedings of the National Academy of Sciences 98, 11301–11305 (2001).
53. Gureyev, T. E. & Nugent, K. A. Rapid quantitative phase imaging using the transport of intensity equation. Optics Communications 133, 339–346 (1997).
54. Guo, C. et al. Quantitative multi-height phase retrieval via a coded image sensor. Biomedical Optics Express 12, 7173–7184 (2021).
55. Abels, E. & Pantanowitz, L. Current state of the regulatory trajectory for whole slide imaging devices in the USA. Journal of Pathology Informatics 8 (2017).
56. Bian, Z. et al. Autofocusing technologies for whole slide imaging and automated microscopy. Journal of Biophotonics 13, e202000227 (2020).
57. Maiden, A. M., Humphry, M. J. & Rodenburg, J. Ptychographic transmission microscopy in three dimensions using a multi-slice approach. JOSA A 29, 1606–1614 (2012).
58. Dong, S., Nanda, P., Shiradkar, R., Guo, K. & Zheng, G. High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography. Optics Express 22, 20856–20870 (2014).
59. Dong, S., Nanda, P., Guo, K., Liao, J. & Zheng, G. Incoherent Fourier ptychographic photography using structured light. Photonics Research 3, 19–23 (2015).
60. Guo, K. et al. 13-fold resolution gain through turbid layer via translated unknown speckle illumination. Biomedical Optics Express 9, 260–275 (2018).
61. Zhang, H. et al. Near-field Fourier ptychography: super-resolution phase retrieval via speckle illumination. Optics Express 27, 7498–7512 (2019).
62. Song, P. et al. Super-resolution microscopy via ptychographic structured modulation of a diffuser. Optics Letters 44, 3645–3648 (2019).
63. Bian, Z. et al. Ptychographic modulation engine: a low-cost DIY microscope add-on for coherent super-resolution imaging. Journal of Physics D: Applied Physics 53, 014005 (2019).
64. Zheng, G. Fourier Ptychographic Imaging: A MATLAB Tutorial (Morgan & Claypool Publishers, 2016).
65. Batey, D. et al. Reciprocal-space up-sampling from real-space oversampling in x-ray ptychography. Physical Review A 89, 043812 (2014).
66. Dong, S., Bian, Z., Shiradkar, R. & Zheng, G. Sparsely sampled Fourier ptychography. Optics Express 22, 5455–5464 (2014).
67. Maiden, A., Johnson, D. & Li, P. Further improvements to the ptychographical iterative engine. Optica 4, 736–745 (2017).
68. Dong, S., Shiradkar, R., Nanda, P. & Zheng, G. Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging. Biomedical Optics Express 5, 1757–1767 (2014).
69. Guizar-Sicairos, M., Thurman, S. T. & Fienup, J. R. Efficient subpixel image registration algorithms. Optics Letters 33, 156–158 (2008).
70. Song, P. et al. Synthetic aperture ptychography: coded sensor translation for joint spatial-Fourier bandwidth expansion. Photonics Research 10, 1624–1632 (2022).
71. https://www.mathworks.com/matlabcentral/fileexchange/18401-efficient-subpixel-image-registration-by-cross-correlation
72. https://www.mathworks.com/matlabcentral/fileexchange/7687-zernike-polynomials
73. Bian, L. et al. Content adaptive illumination for Fourier ptychography. Optics Letters 39, 6648–6651 (2014).
74. Guo, K., Dong, S., Nanda, P. & Zheng, G. Optimization of sampling pattern and the design of Fourier ptychographic illuminator. Optics Express 23, 6171–6180 (2015).
75. https://www.dow.com/en-us/pdp.sylgard-184-silicone-elastomer-kit.01064291z.html#overview
