Author manuscript; available in PMC: 2010 Jul 13.
Published in final edited form as: Lab Chip. 2010 Apr 19;10(11):1417–1428. doi: 10.1039/c000453g

Compact, Light-weight and Cost-effective Microscope based on Lensless Incoherent Holography for Telemedicine Applications

Onur Mudanyali a, Derek Tseng a, Chulwoo Oh a, Serhan O Isikman a, Ikbal Sencan a, Waheb Bishara a, Cetin Oztoprak a, Sungkyu Seo a, Bahar Khademhosseini a, Aydogan Ozcan a,b,*
PMCID: PMC2902728  NIHMSID: NIHMS206727  PMID: 20401422

Abstract

Despite the rapid progress in optical imaging, most of the advanced microscopy modalities still require complex and costly set-ups that unfortunately limit their use beyond well-equipped laboratories. In the meantime, microscopy in resource-limited settings has requirements significantly different from those encountered in advanced laboratories: such imaging devices should be cost-effective, compact, light-weight, and sufficiently accurate and simple to be usable by minimally trained personnel. Furthermore, these portable microscopes should ideally be digitally integrated as part of a telemedicine network that connects various mobile health-care providers to a central laboratory or hospital. Toward this end, here we demonstrate a lensless on-chip microscope weighing ~46 grams with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm that achieves sub-cellular resolution over a large field of view of ~24 mm2. This compact and light-weight microscope is based on digital in-line holography and does not need any lenses, bulky optical/mechanical components or coherent sources such as lasers. Instead, it utilizes a simple light-emitting diode (LED) and a compact opto-electronic sensor array to record lensless holograms of the objects, which then permits rapid digital reconstruction of regular transmission or differential interference contrast (DIC) images of the objects. Because this lensless incoherent holographic microscope has orders-of-magnitude improved light collection efficiency and is very robust to mechanical misalignments, it may offer a cost-effective tool especially for telemedicine applications involving various global health problems in resource-limited settings.

Introduction

For decades optical microscopy has been the workhorse of various fields including engineering, physical sciences, medicine and biology. Despite its long history, until relatively recently there has not been a significant change in the design and working principles of optical microscopes. Over the last decade, motivated partially by the quest to better understand the realm of the nano-world, super-resolution techniques started a renaissance for optical microscopy by addressing some of the most fundamental limitations of optical imaging such as the diffraction limit.1–8 Besides these super-resolution techniques, several other novel imaging architectures were also implemented to improve the state of the art in optical microscopy towards better speed, signal to noise ratio (SNR), contrast, throughput, specificity, etc.9–14 This recent progress in microscopy utilized various innovative technologies to overcome the fundamental barriers in imaging and has created significant excitement in a diverse set of fields by enabling new discoveries to be made. However, together with this progress, the overall complexity and cost of the optical imaging platforms have also increased, which limits the widespread use of some of these advanced optical imaging modalities beyond well-equipped laboratories.

In the meantime, we have also been experiencing a rapid advancement in digital technologies, with much cheaper 2D solid-state detector arrays having significantly larger areas with smaller pixels, better dynamic ranges, frame rates and signal to noise ratios, as well as much faster, cheaper and more powerful digital processors and memories. This on-going digital revolution, when combined with advanced imaging theories and numerical algorithms, also creates an opportunity for optical imaging and microscopy to add another dimension to this renaissance: simplification of the optical imaging apparatus, making it significantly more compact, cost-effective and easy to use, potentially without a trade-off in its performance. As we illustrate in this manuscript, lensfree incoherent holographic on-chip imaging can be considered to be at the heart of this new opportunity, and when combined with the advanced state of the art and cost-effective nature of digital electronics, it can provide a transformative solution to some of the unmet needs of cell biology and medical diagnostics, especially for resource-limited environments.

Over the last decade various lensfree on-chip imaging architectures have also been demonstrated.15–23 Among these approaches, lensfree digital holography16–20,22 deserves special attention since, with new computational algorithms and mathematical models,24 it has the potential to make the most out of the digital revolution that we have been experiencing. In this context, lensfree digital in-line holography has already been successfully demonstrated for high-resolution microscopy of cells and other micro-organisms.17

Conventional coherent lensfree in-line holography approaches demand near-perfect spatial coherence for illumination, and therefore require focusing of laser light onto a small aperture that is on the order of a wavelength for spatial filtering.17,20 The use of a small aperture size (e.g., 1–2 µm) requires a mechanically stable and carefully aligned system together with a focusing lens to efficiently couple the laser radiation to the aperture for improved light throughput. In addition, keeping such a small aperture clean and operational over an extended period of time can be another challenge, especially for field use. Further, the cells of interest are typically positioned far away (e.g., >1 cm) from the sensor surface, such that the holographic signature of each cell is spread over almost the entire sensor area, where all the cells’ signatures significantly overlap. Such an approach unfortunately limits the imaging field-of-view (FOV) at the cell plane. All these requirements not only increase the cost and the size of the optical instrument, but also make lensfree coherent in-line holography somewhat inconvenient for use in resource-limited settings.

Incoherent or partially coherent sources have also been utilized in holography with different lens-based optical architectures.13,25–28 These holographic imaging techniques are not on-chip as they utilize various bulky optical components, and therefore they can be considered under the same category as the advanced imaging modalities discussed above, making them much less suitable for field use. Simpler approaches using partially coherent or incoherent lensfree in-line holography have also been recently demonstrated for imaging of latex particles.19,29 However, these approaches also suffer from a small field-of-view as they position the objects of interest far away from the sensor surface, e.g., with a fringe magnification of >10, reducing the available field-of-view of the digital sensor by more than two orders of magnitude.19 Further, these studies used coupling optics for the illumination, such as a microscope objective lens (together with a small pinhole size of ~1–5 µm29 or 10 µm19), and had relatively coarse imaging performance.

To provide an alternative solution for lensfree on-chip imaging towards telemedicine applications, here we illustrate an incoherent holographic microscope weighing ~46 grams with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm that achieves ~1–2 µm resolution (sufficient to image, e.g., sub-cellular structures in a blood smear) over a large field of view (FOV) of ~24 mm2, which constitutes a ~10-fold improvement over a typical 10× objective-lens FOV. This holographic microscope does not utilize any lenses, lasers or other bulky optical/mechanical components, which greatly simplifies its architecture, making it compact, light-weight, and cost-effective (see Fig. 1). Instead of using a coherent source (e.g., a laser) as one would normally find in conventional holography approaches, we utilize a simple light-emitting diode (LED) for illumination, which suppresses the coherent speckle noise and the undesired multiple-reflection interference effects on the detected holograms. This incoherent LED light is initially filtered by passing it through a large aperture of ~50–100 µm diameter, which also eliminates the need for any coupling and focusing optics/mechanics between the LED and the aperture plane (Fig. 1). This large aperture size also makes the system robust to mechanical misalignments or potential clogging problems, making it highly suitable for use in the field by minimally trained personnel. The filtered LED light, after propagating in air over a distance of e.g., ~3–4 cm, interacts with the object of interest (e.g., a whole blood sample) that is loaded from the side through a simple mechanical interface (see Fig. 1). Each object (e.g., a blood cell) within the sample scatters, absorbs and refracts the incoming light based on its size, 3D morphology, sub-cellular elements, and refractive index. The interference of the light waves that have passed through the cells with the unscattered LED light creates the hologram of each cell (with unit fringe magnification), which is detected without any lenses using a CMOS (complementary metal-oxide semiconductor) sensor array (Fig. 1). The digital hologram of each cell is extremely rich (despite the use of a simple LED through a large aperture and a unit fringe magnification) and permits rapid reconstruction of its optical phase (which was lost during the recording process) as well as its microscopic image (see e.g., Figs. 2–3 and the Supplementary Figures 1–2). This digital image reconstruction can conveniently be made at a central PC station located in a remote setting such as a hospital, where a compressed version of each holographic image (typically <2–3 MB for the ~24 mm2 FOV) is transmitted over, e.g., wireless communication links such as GSM networks that widely exist even in the developing parts of the world, including Africa.

Fig. 1.

(a) A lensfree holographic on-chip microscope that weighs ~46 grams is shown. It utilizes an LED source (at 591 nm) with an aperture of ~50–100 µm in front of the source. The LED and the sensor are powered through a USB connection from the side. This lensless holographic microscope operates with a unit fringe magnification to claim the entire active area of the sensor as its imaging field of view (~24 mm2). For different designs refer to Supplementary Fig. 1. (b) Schematics of the incoherent lensfree holographic microscope shown in (a). Drawing not to scale. Typical values: z1 ~2–5 cm, z2 <1–2 mm, D ~50–100 µm.

Fig. 2.

(a) A digitally cropped lensfree hologram of a blood smear acquired with the unit in Fig. 1(a) is shown. Due to the LED illumination, the spatial coherence diameter at the sample plane is much smaller than the imaging FOV; however, it is sufficiently large to record the hologram of each cell individually. Integration time: 225 ms; D=50 µm; z1=~3.5 cm, z2=~1 mm. (b) Reconstruction result of the raw hologram shown in (a) for the same FOV, illustrating the images of RBCs, platelets and a white blood cell. (c) 10× objective-lens (NA=0.2) microscope image of the same FOV as in (b) is shown for comparison purposes. Scale bar in (c) is 20 µm.

Fig. 3.

Various objects imaged using the lensfree incoherent holographic microscope of Fig. 1(a) are illustrated and compared against 40× objective-lens (NA=0.6) images of the same FOV. The bottom row illustrates the lensfree incoherent holograms that are digitally processed to reconstruct the middle row images. The last 3 columns are taken from a blood smear sample, whereas the other 4 columns on the left are imaged within a solution/buffer. Same imaging parameters as in Fig. 2 are used.

Further, in this manuscript we illustrate that the same lensfree holographic microscope of Fig. 1(a) can also be converted into a differential interference contrast (DIC) microscope (also known as Nomarski microscope) by using inexpensive plastic polarizers together with thin birefringent crystal plates (e.g. quartz), which in total cost less than 2 USD.

Because this compact and light-weight lensless holographic microscope has orders-of-magnitude improved light collection efficiency and is very robust to mechanical misalignments, it may offer a cost-effective tool especially for telemedicine applications involving various global health problems such as malaria, HIV and TB.

Results

The impact of spatially incoherent light emanating from a large aperture on lensless holographic microscopy

The Appendix presents a theoretical analysis of the impact of a large incoherent aperture (with a diameter of >100λ–200λ) on lensfree microscopy on a chip. Based on this analysis, as far as holography is concerned, we can conclude that by bringing the cell plane much closer to the sensor surface (with a fringe magnification of ~1), incoherent illumination through a large aperture can be made equivalent to coherent illumination of each cell individually. Further, we also prove that the spatial resolution at the cell plane will not be affected by the large incoherent aperture, which permits recording of coherent holograms of cells or other micro-objects with an imaging field-of-view equivalent to the sensor area (in our case ~24 mm2). In this theoretical analysis, we also illustrate that through the use of a large incoherent aperture the unwanted interference among different cells of the sample volume, as well as the speckle noise, can be significantly avoided, which is an especially important advantage for imaging of dense cell solutions such as whole blood samples. For further details refer to the Appendix.

Imaging performance of the lensless holographic microscope

We have tested the imaging performance of the handheld lensless microscope of Fig. 1(a) with various cells and particles (such as red blood cells, white blood cells, platelets, and 3, 5, 7 and 10 µm polystyrene particles) as well as focused ion beam (FIB) fabricated objects, the results of which are summarized in Figs. 2–3 and the Supplementary Figures 1–3. In these experiments, the reconstruction results of the presented digital microscope (Fig. 1(a)) were compared against conventional microscope images of the same FOV obtained with 10× and 40× objective lenses with numerical apertures of 0.2 and 0.6, respectively. This comparison (specifically Figs. 2–3 and Supplementary Fig. 1) illustrates that the presented lensless on-chip microscope achieves sub-cellular resolution sufficient to determine the type of a white blood cell (granulocyte, monocyte or lymphocyte – towards 3-part differential imaging) based on the texture of its stained nucleus (see the Experimental Methods Section).

To further investigate the imaging performance of our platform, Supplementary Figure 3 illustrates the recovery result for two squares that are precisely etched onto a glass substrate using FIB milling. In this experiment, the gap between the squares is estimated as 1.94 µm (FWHM) from the reconstructed image cross-section, which matches very well with the gap estimate from the 40× microscope image (1.95 µm FWHM). Considering the fact that the pixel size at the sensor is 2.2 µm, this result implies sub-pixel resolution for our lensless microscope, despite the fact that a unit fringe magnification is used together with a large incoherent source. This is rather important, and will be further analyzed in the Discussion Section.

The digital image reconstruction process in our approach, as outlined in the Experimental Methods Section, is quite fast, taking less than 4 seconds for a total image size of ~5 Mpixels using a regular CPU (central processing unit – e.g., Intel Q8300), and it gets >40× faster using a GPU (graphics processing unit – e.g., NVIDIA GeForce GTX 285), achieving <0.1 sec computation time for ~5 Mpixels. The holographic images that are saved for digital processing are compressed using the Portable Network Graphics (PNG) format, yielding a typical image size of <2–3 MB for the entire ~24 mm2 FOV. Depending on the image, a much smaller FOV can also be selected to reduce the overall size of the raw hologram.

Next we demonstrated proof-of-concept lensfree DIC imaging with the handheld unit of Fig. 1(a). To achieve DIC performance with the same lensless holographic microscope, a thin birefringent crystal (e.g., quartz) is used in between two crossed polarizers (see Fig. 4). The function of the birefringent crystal is to create, through the double-refraction process, two holograms of the object (the ordinary and the extraordinary holograms) that are spatially separated from each other by a small shear distance of ~1.1 µm. The thinner the crystal, the smaller this shear distance, which determines the resolution of the differential phase contrast effect. This shear distance is naturally created by the uniaxial crystal and quite conveniently does not require any precise alignment of the object or the crystal. These two waves (ordinary and extraordinary), which are polarized orthogonal to each other, then interfere at the sensor plane (after passing through the analyzer – see Fig. 4), creating a new hologram which now has the differential phase contrast information of the sample embedded in its amplitude oscillations. This process, however, is wavelength dependent, and in order to ensure zero net phase bias between the ordinary and the extraordinary holograms regardless of the LED wavelength, two quartz plates (each ~180 µm thick, at a cost of 1 USD/cm2) were assembled with an optical glue (Norland, UVS63) at 90° with respect to each other (Fig. 4). This sandwiched quartz assembly (~360 µm thick), which increases the total shear distance between the ordinary and the extraordinary holograms to √2 × 1.1 µm ≈ 1.55 µm, is then inserted underneath the sample of interest through the same mechanical interface of Fig. 1(a) for capture of the DIC hologram (see Fig. 4 for details). Note that without affecting the DIC operation principle, the first polarizer (at ϕ=+45°) can also be inserted after the sample plane (above the uniaxial crystal). Such a configuration might especially be useful to eliminate potential DIC artifacts when imaging naturally birefringent samples.
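Since the two plates shear the beam along orthogonal directions, the individual shears add in quadrature:

$$d_{\text{total}} = \sqrt{(1.1\ \mu\text{m})^2 + (1.1\ \mu\text{m})^2} = \sqrt{2}\times 1.1\ \mu\text{m} \approx 1.55\ \mu\text{m}.$$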

Fig. 4.

(a) Schematic diagram of the lensfree differential interference contrast (DIC) microscopy configuration is illustrated. The same holographic microscope of Fig. 1(a) can be converted into a lensless DIC microscope with two polarizers and two thin birefringent crystals as illustrated in (a). (b) Each birefringent uniaxial crystal creates an ordinary and an extra-ordinary wave of the object, which are separated by ~1.1 µm from each other for a quartz thickness of 0.18mm. This double refraction process is wavelength dependent, and to ensure zero net phase bias between the ordinary and the extra-ordinary holograms regardless of the LED wavelength, two crystals are glued to each other with 90° rotation in between as illustrated in (a) and (c). This sandwiched crystal assembly increases the total shear distance between the ordinary and the extra-ordinary holograms by a factor of √2, and together with the analyzer at 45°, it creates the DIC hologram of each object that is sampled by the sensor array. Despite the major differences in the way that the lensless holograms are created and recorded for regular transmission imaging vs DIC imaging, the digital image reconstruction process remains the same in both of the approaches.

Despite the major differences in the way that the lensless holograms are created and recorded for regular transmission imaging vs. DIC imaging, the digital image reconstruction process remains the same in both approaches, taking exactly the same amount of time (see the Experimental Methods Section). Figs. 5 and 6 illustrate examples of DIC images of micro-particles, blood cells (both in diluted whole blood and in smear form) as well as FIB-etched square structures (2 µm apart from each other) on a glass slide, all captured with the handheld lensless microscope of Fig. 1(a) after the insertion of two plastic polarizers and the birefringent crystal as outlined in Fig. 4. These additional components cost <2 USD in total and can even be disposed of together with the sample after imaging. These DIC images clearly show the differential phase contrast signatures of these micro-objects, demonstrating proof-of-concept lensfree DIC imaging within the compact, light-weight and cost-effective holographic microscope of Fig. 1(a).

Fig. 5.

Differential interference contrast (DIC) images of various objects acquired with the handheld lensless holographic microscope of Fig. 1(a) are illustrated. Top row: 5 and 10 µm polystyrene particles; 2nd row: white and red blood cells in diluted whole blood (aqueous); 3rd row: a white blood cell on a blood smear; 4th row: FIB-etched square structures separated by 2 µm on a glass substrate. For these images, two plastic polarizers and two thin quartz crystals were added to the lensless microscope of Fig. 1(a), as illustrated in Fig. 4. The total cost of these additional components is <2 USD. D=100 µm, z1=~3.5 cm, z2=~1.3 mm and an integration time of ~600 ms were used for capture of the raw DIC holograms. The relatively increased integration time for these DIC images is due to the cross-polarizer configuration shown in Fig. 4(a). 10× objective-lens (NA=0.2) microscope images of the same objects are also illustrated in the right column. For the micro-particles/cells in solution, there are some unavoidable shifts between the lensless DIC image and the microscope comparison image due to movement of the particles/cells between the capture of each image.

Fig. 6.

Full field-of-view DIC image of a sample composed of 5 and 10 µm polystyrene particles is illustrated. This image was captured with the lensfree holographic microscope of Fig. 1(a) using the architecture shown in Fig. 4 with the imaging parameters summarized in Fig. 5.

Discussion

The use of an incoherent light source emanating through a large aperture (e.g., 50–100 µm as we have used in this work) greatly simplifies the optics for lensfree on-chip microscopy, also making it more robust and cost-effective, both of which are highly desired qualities for resource-poor environments. Bringing the object plane much closer to the sensor surface, together with a fringe magnification of F~1, is one of the key steps in making lensless microscopy possible with a large incoherent source without smearing the spatial information of the cells (see the Appendix for a detailed discussion). This choice also brings a significant increase in the FOV and in the throughput of imaging, which we will further detail in the discussion to come. However, when compared to the state of the art in lensless holography, there are some trade-offs to be made in return for these improvements, which we aim to address in this section.

Based on the analysis provided in the Results Section and the Appendix, we can list the advantages of a small cell–sensor distance and unit fringe magnification in incoherent lensfree holography as follows: (i) The size of the aperture, its exact shape and its alignment with respect to the light source are much less of a concern. This makes the system orders-of-magnitude more power efficient and easier to align and operate, without the use of a laser or any coupling/focusing optics. This is highly important for global health applications, which demand cost-effective, compact and easy-to-use devices for microscopy and medical diagnostics. (ii) A small cell–sensor distance enables imaging of individual cell holograms (both phase and amplitude), which we treat as fingerprint signatures of cells based on their size, shape, intracellular elements, refractive index, etc. This holographic signature/texture (which is now also free from the speckle noise30 of a coherent source) is a powerful tool that can enable diagnostic decisions to be made without any reconstruction, by using pattern matching algorithms that compare cell hologram libraries to measured hologram textures. This can reduce computation time significantly since digital pattern analysis and matching is the common required step for any automated cytometry and diagnostic platform, i.e., the entire digital computation can be made much simpler and faster. (iii) The presented approach also significantly improves the imaging FOV, as illustrated in Fig. 6 and Supplementary Fig. 2. (iv) With a small z2, the collection numerical aperture (NA) at the detection plane approaches the refractive index of the medium, n. For larger z2 values (as in conventional in-line holography), the sensor array width starts to define the collection NA, which reduces the effective light collection efficiency of the system.

This last point requires more discussion, since the improved light collection efficiency does not necessarily imply a better resolution; the sampling period at the hologram plane (i.e., the pixel size, ΔxD) is also quite important. This issue is investigated in greater detail in the Appendix. To summarize the conclusions: the detection numerical aperture for a small cell–sensor distance, as used in this work, is significantly improved, which increases the light collection efficiency; however, not all of the collected light contributes to the holographic texture. The price paid for simplification of the optical system, towards achieving lensfree cell holography with a large incoherent source over a large field-of-view, is an increased need for a smaller pixel size to record all the hologram fringes that are above the detection noise floor and thereby claim a high NA for better lateral and axial resolution.

Because our platform enjoys a fringe magnification of ~1, in terms of field of view it is equivalent to direct near-field (i.e., contact) imaging of the object plane, such that it has the entire sensor area available as its field of view. However, achieving sub-pixel resolution (see e.g., Supplementary Fig. 3) implies that the presented incoherent holography technique achieves much better performance than direct contact imaging, without a trade-off in any image metric, such as field of view, signal to noise ratio, phase contrast, etc. In other words, undoing the effect of diffraction through digital holographic processing (even with unit magnification and LED illumination) performs much better than a hypothetical near-field sampling experiment where a sensor array having the same pixel size directly images the object plane in parallel (i.e., without diffraction).

There are several reasons for this significant improvement. First, in a hypothetical contact imaging experiment, the random orientation of the object with respect to the pixel edges creates unreliable imaging performance, since the effective spatial resolution of the imaged object then depends on sampling differences as the alignment of the object features varies. This random object orientation does not pose a problem for the presented approach, since the diffraction from the object plane to the sensor array significantly reduces the randomness of the spatial sampling at the sensor plane.

Another significant advantage of lensless holographic imaging over direct near-field sampling (i.e., contact imaging) would be the capability of phase imaging. Any phase-only object would not create a detectable contrast in direct near-field sampling on the sensor-array, whereas the presented lensfree incoherent holography approach would naturally pick up the diffraction oscillations that contain the phase information of the samples located over the entire sensor area.

The key for sub-pixel spatial resolution in our incoherent holographic microscope is hidden in the iterative recovery techniques (detailed in the Experimental Methods Section), where at each iteration a digitally identified object support is enforced to recover the lost phase of the hologram texture. This object support can be made appropriately tighter if a priori information about the object type and size is known – for instance if the cells of interest are known to be human blood cells, a tighter object support (with dimensions of <15 µm) can be utilized for faster convergence of the phase recovery process. Intuitively, this behavior can be explained by a reduction in the number of unknown pixels in the phase recovery step, which enables iterative convergence to the unique solution, among many other possibilities, based on the measured hologram intensity and the estimated object support. Sub-pixel resolution is therefore coupled to iterative use of this object support for estimation of higher spatial frequencies of the object plane.

Like any other frequency extrapolation method, the practical success of this iterative approach and thus the spatial resolution of this system also depends on the SNR, which is a strong function of the cell/object size (i.e., its scattering cross section). For submicron sized cells, the scattering is rather weak, which implies that the high spatial frequencies (close to n/λ0) carry rather weak energies that can easily fall below the noise floor at the sensor. Therefore, the true resolution and the NA of digital reconstruction indeed depend on the SNR as well as the scattering cross section of the cells/objects, making sub-micron cell imaging challenging for this reason.

Experimental Methods

Image reconstruction in incoherent lensless holography

As discussed in earlier sections, the use of incoherent illumination through a large aperture brings numerous advantages to on-chip microscopy, making it a highly suitable and promising platform for cell biology and medical diagnostics in resource-limited settings. Despite the significant practical advantages of the proposed lensless holographic microscope, one might expect that incoherent illumination would increase the burden on the numerical reconstruction process. Nevertheless, as will be further discussed in the Appendix, for incoherent lensfree holography with M = z1/z2 ≫ 1, each individual cell can still be treated as if illuminated with coherent light. Further, due to their microscopic cross-sections, the incident wave on each cell can be assumed to be a plane wave. Consequently, the reconstruction of each recorded cell hologram can be performed assuming plane-wave illumination.

To numerically propagate (diffract) the wavefronts, the angular spectrum approach is used to solve the Rayleigh–Sommerfeld diffraction integral. This computation involves multiplying the Fourier transform of the field with the transfer function of propagation through a linear, isotropic medium, as shown below:

$$H_{z}(f_x, f_y) = \begin{cases} \exp\!\left[\, j\,\dfrac{2\pi z n}{\lambda}\sqrt{1-\left(\dfrac{\lambda f_x}{n}\right)^{2}-\left(\dfrac{\lambda f_y}{n}\right)^{2}}\,\right], & \sqrt{f_x^{2}+f_y^{2}} < \dfrac{n}{\lambda} \\[2ex] 0, & \text{otherwise} \end{cases}$$

where fx and fy are the spatial frequencies and n is the refractive index of the medium. We would like to emphasize that no paraxial approximations are made in our image reconstructions.
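As a concrete illustration of this propagation step, a minimal NumPy sketch of the angular spectrum method is given below; the function name, grid conventions and parameter names are our own choices and not taken from the authors' code:

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx, n=1.0):
    """Propagate a complex 2D field by a distance z using the angular
    spectrum (transfer function) method; negative z back-propagates.
    No paraxial approximation is made."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)        # spatial frequencies (cycles per meter)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    # Transfer function H_z(fx, fy); evanescent components are set to zero.
    arg = 1.0 - (wavelength * FX / n) ** 2 - (wavelength * FY / n) ** 2
    H = np.zeros_like(FX, dtype=complex)
    prop = arg > 0
    H[prop] = np.exp(1j * 2 * np.pi * z * n / wavelength * np.sqrt(arg[prop]))

    return np.fft.ifft2(np.fft.fft2(field) * H)
```

This routine is the propagation kernel assumed by the reconstruction sketches given later in this section.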

Two different iterative approaches are taken to reconstruct the microscopic images of cells, free from any twin-image artifact. Both methods work with a single recorded hologram and rely on the constraint that each cell has a finite support. In both methods, the raw holograms are upsampled, typically by a factor of four to six, using cubic spline interpolation before the iterative reconstruction procedure. Although upsampling does not immediately increase the information content of the holograms, it still offers significant improvements for achieving more accurate phase recovery and higher resolution in the reconstructed image. First, it allows defining a more accurate object support by smoothing the edges of the objects in the initial back-projection of the hologram. Using an object support that is closer to the actual cell in terms of size and shape reduces the error of the iterative algorithms, as well as ensuring faster convergence. Second, upsampling introduces higher spatial frequencies, initially carrying zero energy, into the hologram. Through the iterative reconstruction steps detailed below, these higher spatial frequencies gradually attain non-zero energy, which allows sub-pixel resolution in the final reconstruction, as argued in the Discussion Section.
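As a minimal illustration of this preprocessing step (a sketch only; the routine and the default factor are our own choices, and the effective pixel pitch passed to the propagation routine must be divided by the same factor afterwards):

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_hologram(hologram, factor=4):
    """Upsample the raw hologram with cubic-spline interpolation (order=3)
    before the iterative reconstruction; a factor of 4-6 is typical."""
    return zoom(hologram.astype(float), factor, order=3)
```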

Method 1

The first method falls under the broad category of Interferometric Phase-Retrieval Techniques and is applicable to cases where the recorded intensity is dominated by the holographic diffraction terms.3133 The first step is the digital reconstruction of the hologram, which is achieved by propagating the hologram intensity by a distance of z2 away from the hologram plane yielding the initial wavefront Urec. As a result of this computation, the virtual image of the object is recovered together with its spatially overlapping defocused twin-image. It is important to note that the recorded intensity can also be propagated by a distance of −z2. In this case, the real image of the object can be recovered, while the defocused virtual image leads to the twin-image formation.

Due to the small cell-sensor distance in the incoherent holographic microscopy scheme presented here, the twin-image may carry high intensities, especially for relatively large objects like white blood cells. In such cases, the fine details inside the micro-objects may get suppressed. Similarly, the twin-images of different cells which are close to each other get superposed, leading to an increase in background noise. This issue is especially pronounced for microscopy of dense cell solutions, where the overlapping twin images of many cells lowers the counting accuracy due to reduced SNR.

In order to eliminate the twin-image artifact, an iterative approach using finite support constraints is utilized.33 Basically, this technique relies on the fact that duplicate information for the phase and amplitude of the object exists in two different reconstruction planes, at distances +z2 and −z2 from the hologram plane, where the virtual and real images of the object are recovered, respectively. Therefore, a twin-image-free reconstruction can be obtained in one of the image planes while filtering out the duplicate image in the other plane. Without loss of generality, we have chosen to filter out the real image to obtain a twin-image-free reconstruction in the virtual image plane at −z2. Due to the finite size of the micro-objects, the real image of the object only occupies the region inside its support, while the defocused twin-image spreads out to a wider region around the object, also overlapping with the real image inside the support. Hence, deleting the information only inside the support ensures that the real image is completely removed from the reconstructed wavefront. Nevertheless, the virtual image information inside the support is also lost, and the iterative technique tries to recover the missing information of the virtual image by going back and forth between the virtual and real image planes, recovering more of the lost information at each iteration. The success of this algorithm is highly dependent on the Fresnel number of the recording geometry, which is given by Nf = n·(object size)2/(λz). It has been reported that the technique proves successful for Fresnel numbers as high as 10.33 For RBCs of approximately 7 µm diameter, the typical recording geometries presented here involve Fresnel numbers of <0.2; hence, the twin-image elimination method yields highly satisfactory results.
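As a representative calculation for the geometry used here (assuming n ≈ 1 and an effective object-to-sensor distance on the order of z ≈ 1 mm, consistent with the z2 values quoted in Figs. 2 and 5):

$$N_f = \frac{n\,(\text{object size})^2}{\lambda z} \approx \frac{1\times(7\ \mu\text{m})^2}{0.591\ \mu\text{m}\times 1000\ \mu\text{m}} \approx 0.08 \;<\; 0.2.$$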

The steps of twin-image elimination are detailed below:

  1. Initially the real image, which is the back-projected hologram at a distance of +z2, is used for determining the object support. Object support can be defined by either thresholding the intensity of the reconstructed image, or searching for its local minima.

  2. The region inside the support is deleted and a constant value is assigned to this region as an initial guess for the deleted part of the virtual image inside the support as shown below:
    $$U_{z_2}^{(i)}(x,y) = \begin{cases} U_{rec}(x,y), & x,y \notin S \\ \bar{U}_{rec}, & x,y \in S \end{cases}$$
    where Uz2(i)(x,y) denotes the field at the real image plane after the ith iteration, S represents the area defined by the object support, and Ūrec is the mean value of Urec within the support.
  3. Then, the field at the real image plane is back propagated by −2z2 to the virtual image plane. Ideally, the reconstruction at this plane should be free from any twin-image distortions. Therefore, the region outside the support can be set to a constant background value to eliminate any remaining out-of-focus real image in the virtual image plane. However, this constraint is applied smoothly as determined by the relaxation parameter β below, rather than sharply setting the image to d.c. level outside the support:
    $$U_{-z_2}^{(i)}(x,y) = \begin{cases} D - \dfrac{D - U_{-z_2}^{(i)}(x,y)}{\beta}, & x,y \notin S \\[1.5ex] U_{-z_2}^{(i)}(x,y), & x,y \in S \end{cases}$$
    where D is the background in the reconstructed field, which can either be obtained from a measured background image in the absence of the object, or can simply be chosen as the mean value of the field outside the object supports at the virtual image plane. β is a real valued parameter greater than unity, and is typically chosen around 2–3 in this article. Increasing β leads to faster convergence, but compromises the immunity of the iterative estimation accuracy to background noise.
  4. The field at the virtual image plane is forward propagated to the real-image plane, where the region inside the support now has a better estimate of the missing part of the virtual image. The region outside the support can be replaced by Uz2(1)(x,y), the original reconstructed field at the real image plane, as shown below:
    $$U_{z_2}^{(i+1)}(x,y) = \begin{cases} U_{z_2}^{(1)}(x,y), & x,y \notin S \\ U_{z_2}^{(i+1)}(x,y), & x,y \in S \end{cases}$$

    Steps 3 and 4 can be repeated iteratively until the final image converges. In most cases in this article, convergence is achieved after 10–15 iterations. This iterative computation takes around 4 seconds for an image size of ~5 Mpixels using a regular CPU (central processing unit – e.g., Intel Q8300) and gets >40× faster using a GPU (graphics processing unit – e.g., NVIDIA GeForce GTX 285), achieving <0.1 sec computation time for the same image size. A minimal code sketch of this iterative procedure follows.
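The sketch below is a minimal Python implementation of Method 1, assuming the angular_spectrum_propagate routine sketched earlier in this section; the support threshold, the relaxation value, the sign convention for z2 and all function/parameter names are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np

def twin_image_eliminate(hologram, z2, wavelength, dx,
                         n_iter=15, beta=2.5, threshold=0.85):
    """Support-constrained twin-image elimination (Method 1), sketched.
    `hologram` is the (optionally background-subtracted) recorded intensity."""
    field = hologram.astype(complex)

    # Step 1: back-project to the real image plane (+z2 in this sign convention)
    # and estimate the object support from the reconstructed intensity.
    U_rec = angular_spectrum_propagate(field, +z2, wavelength, dx)
    support = np.abs(U_rec) < threshold * np.abs(U_rec).mean()  # cells assumed darker than background
    D = np.abs(U_rec)[~support].mean()   # background level (a measured background can be used instead)

    # Step 2: delete the real image inside the support and seed it with the mean value.
    U = U_rec.copy()
    U[support] = U_rec[support].mean()

    for _ in range(n_iter):
        # Step 3: back-propagate by -2*z2 to the virtual image plane and smoothly
        # push the region outside the support towards the background D.
        V = angular_spectrum_propagate(U, -2 * z2, wavelength, dx)
        V[~support] = D - (D - V[~support]) / beta

        # Step 4: forward-propagate to the real image plane; keep the improved
        # estimate inside the support, restore the original field outside it.
        U = angular_spectrum_propagate(V, 2 * z2, wavelength, dx)
        U[~support] = U_rec[~support]

    # Twin-image-free reconstruction at the virtual image plane.
    return angular_spectrum_propagate(U, -2 * z2, wavelength, dx)
```

In practice the object support would be refined per cell (e.g., by thresholding or local-minima search, as described in step 1), and the background level D can be taken from a measured background frame, as noted in step 3.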

Method 2

The second method utilized for eliminating the twin-image is classified under Non-Interferometric Phase-Retrieval Techniques, where the recorded image is not necessarily treated as a hologram, but as the intensity of any diffraction field.34 Together with the constraint that the objects have finite support, this technique is capable of iteratively recovering the phase of the diffracted field incident on the detector from a single intensity image. As a result, the complex field (amplitude and phase) of the cell holograms, rather than the intensity, can be back-propagated, thereby allowing reconstruction of the objects free from any twin-image contamination. This method can be decomposed into the following steps:

  1. The square-root of the recorded hologram intensity is propagated by a distance of −z2 to the cell plane, assuming a field phase of zero as an initial guess. The aim of the algorithm is to iteratively determine the actual phase of the complex field at the detector plane, and eventually at the object plane. In the first iteration, the object support is defined either by thresholding the intensity of the field at the object plane, or by locating its regional maxima and/or minima.

  2. The field inside the object supports is preserved, while the complex field values outside the supports are replaced by a background value Dz2(x,y), as shown below:
    $$U_{z_2}^{(i+1)}(x,y) = \begin{cases} m \cdot D_{z_2}(x,y), & x,y \notin S \\ U_{z_2}^{(i)}(x,y), & x,y \in S \end{cases}$$
    where Dz2(x,y) is obtained by propagating the square root of the background intensity of an image obtained with the same setup in the absence of the cells; and m = mean(Uz2(i)(x,y)) / mean(Dz2(x,y)).
  3. The modified field at the object plane is propagated back to the detector plane, where the field now has a non-zero phase value. The amplitude of this field is replaced with the square root of the original recorded hologram intensity as no modification for the amplitude should be allowed while converging for its phase. Consequently, U0(i)(x,y), the complex diffraction field at the detector plane after the ith iteration can be written as follows:
    $$U_{0}^{(i)}(x,y) = \left|U_{0}^{(0)}(x,y)\right| \cdot \exp\!\left(j\,\phi_{0}^{(i)}(x,y)\right)$$
    where the superscripts denote the iteration step, and ϕ0(i)(x,y) denotes the phase of the field after the ith iteration.

Steps 1 to 3 can be iterated until the phase recovery converges. Typically, the results are obtained with fewer than 15 iterations, which is quite similar to the first method. A minimal code sketch of this procedure follows.
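The sketch below is a minimal Python implementation of Method 2, again assuming the angular_spectrum_propagate routine sketched earlier; the support threshold and the normalization of the background field are illustrative assumptions:

```python
import numpy as np

def phase_retrieve_reconstruct(hologram, background, z2, wavelength, dx,
                               n_iter=15, threshold=0.85):
    """Non-interferometric phase retrieval (Method 2), sketched.
    `hologram` and `background` are recorded intensities with and without cells."""
    amp = np.sqrt(hologram.astype(float))   # measured amplitude at the detector

    # Background field at the object plane, from an image taken with the same
    # setup in the absence of the cells.
    D = angular_spectrum_propagate(np.sqrt(background.astype(float)).astype(complex),
                                   -z2, wavelength, dx)

    # Step 1: back-propagate with an initial phase of zero and define the support.
    U = angular_spectrum_propagate(amp.astype(complex), -z2, wavelength, dx)
    support = np.abs(U) < threshold * np.abs(U).mean()   # illustrative intensity threshold

    for _ in range(n_iter):
        # Step 2: keep the field inside the support; replace the outside with
        # the scaled background field.
        m = U.mean() / D.mean()
        U[~support] = m * D[~support]

        # Step 3: propagate to the detector plane, enforce the measured amplitude,
        # and return to the object plane with the updated phase estimate.
        U0 = angular_spectrum_propagate(U, +z2, wavelength, dx)
        U0 = amp * np.exp(1j * np.angle(U0))
        U = angular_spectrum_propagate(U0, -z2, wavelength, dx)

    return U   # complex (amplitude and phase) reconstruction at the object plane
```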

Comparison of the two methods

For small or weakly scattering objects such as whole blood cells or micro-beads, both methods yield satisfactory results of comparable image quality. For such objects, the typical Fresnel number of the recording geometry is <1 and the focused real image occupies a small fraction of the area over which the twin-image is spread out. Therefore, deleting the object image in the real image plane leads to minimal information loss for the virtual image, which is to be recovered without twin-image artifacts. However, for larger objects of interest the Fresnel number of the system increases, and deleting the real image may cause excessive information loss in the virtual image, which may be harder to recover iteratively. Furthermore, for strongly scattering objects, the self- and cross-interference terms may start dominating, such that the holographic content of the recorded intensity gets distorted. Therefore, for strongly scattering and/or extended objects, the second method discussed above becomes preferable over the first method, which requires the holographic terms to be dominant in a setup with Fresnel numbers <10. However, an advantage of the first method is that it does not necessarily require a separate background image taken prior to inserting the sample into the setup. Although a mean value of the field at the object plane can also be used in the absence of a background image for Method 2 (step 2), we have observed that the final image quality becomes better with an experimentally obtained background.

Lensfree Holographic Microscope Design

The LED source (OSRAM Opto Semiconductors Inc., Part# LY E65B – center wavelength: 591 nm, bandwidth: 18 nm) is butt-coupled to a 50 or 100 µm pinhole without the use of any focusing or alignment optics, illuminating the entire FOV of ~24 mm2 (CMOS chip, Model: MT9P031, Micron Technology – pixel size: 2.2 µm, 5 Mpixels). There is a small amount of unavoidable distance between the active area of the LED and the pinhole plane, the effect of which is briefly discussed in the Appendix. Following Figure 1, typical z1 and z2 distances used in our design are ~2–5 cm and <1–2 mm, respectively. The LED source and the CMOS sensor are powered through a USB connection from the side. The sample is loaded within a mechanical tray from the side. For DIC operation, the thin quartz samples (~180 µm thick) are cut with an optic axis at 45° with respect to the propagation direction as shown in Fig. 4 (sample cost: 1 USD per 1 cm2 from Suzhou Qimeng Crystal Material Product Co., China). The plastic polarizers are 0.2 mm thick each, and cost ~0.06USD per 1 cm2 (Aflash Photonics, USA). Other details of DIC operation are provided in the Results Section and in Fig. 4.

Fabrication of test objects

Test objects with micron-sized features were fabricated on glass to investigate the imaging performance of the system. The first step was coating Borosilicate type-1 cover glasses (150 µm thickness) with 20 nm thick Aluminum using electron beam metal deposition (CHA Mark 40). The thin metal coating works as a conductive layer for the focused ion beam (FIB Nova 600) process in the following step. The FIB machine was programmed to over-mill the Aluminum layer, using long milling times and high currents (much more than required for milling the metal alone) so as to ensure that the glass underneath the metal was etched as well. FIB milling was terminated once sufficient milling, relative to the surface roughness and feature sizes, was achieved. The etch depth was monitored by scanning electron microscopy (SEM) in real time during ion beam milling. After the FIB process, the metal layer was removed by a wet etching process for Aluminum, and high-resolution phase structures on glass were obtained.

Sample preparation steps

Blood smear preparation

For blood smear imaging experiments, whole blood samples were treated with 2.0 mg EDTA/ml, and 1 µL of sample was dropped on top of a type 0 glass cover slip; another type 0 cover slip was used for spreading and smearing the blood droplet over the entire cover slip at a smearing angle of about 30 degrees. The smeared specimen was air-dried for 5 min before being fixed and stained with a HEMA 3 Wright-Giemsa staining kit (Fisher Diagnostics). The dried samples were dipped five times in a row, for one second each, into three Coplin jars containing methanol-based HEMA 3 fixative solution, eosinophilic staining solution (HEMA 3 solution I) and basophilic staining solution (HEMA 3 solution II), respectively. Then, the specimen was rinsed with de-ionized water and air-dried again before being imaged.

Aqueous imaging of whole blood samples

We used RPMI 1640 classic liquid media with L-Glutamine (Fisher Scientific) as a diluent to achieve a desired dilution factor. To achieve accurate dilution, we followed the international standard established by the International Council for Standardization in Hematology (ICSH).35

Conclusions

In this manuscript, we introduced a lensless incoherent holographic microscope weighing ~46 grams with dimensions smaller than 4.2cm × 4.2cm × 5.8cm that achieves sub-cellular resolution over a large field of view of ~24 mm2. This compact and light-weight microscope is based on incoherent holography and does not require any lenses, lasers or other bulky optical/mechanical components. Instead, it utilizes a simple LED and a compact opto-electronic sensor-array to record lensless holograms of the objects, which then permits rapid digital reconstruction of regular transmission or differential interference contrast images of the objects. This platform may offer a cost-effective tool especially for telemedicine applications involving global health problems (e.g., malaria, TB and HIV) in resource poor settings.

Supplementary Material

supplementary data

Acknowledgments

We acknowledge the support of the Okawa Foundation, Vodafone Americas Foundation, DARPA DSO (under 56556-MS-DRP), NSF (under Awards # 0754880 and 0930501), NIH (under 1R21EB009222-01 and the NIH Director’s New Innovator Award # DP2OD006427 from the Office Of The Director, NIH), AFOSR (Project # 08NE255). A. Ozcan also gratefully acknowledges the support of the Office of Naval Research (ONR) under Young Investigator Award 2009.

Appendix

Theoretical analysis of digital in-line holography through an arbitrary incoherent aperture and its implications for on-chip lensfree microscopy

Holography is all about recording the optical phase information in the form of amplitude oscillations. To be able to read or make use of this phase information for microscopy, most existing lensfree in-line holography systems are hungry for spatial coherence and therefore use a laser source that is filtered through a small aperture (e.g., 1–2 µm). Utilizing a completely incoherent light source that is filtered through a large aperture (e.g., >100λ–200λ in diameter) should provide orders-of-magnitude better transmission throughput as well as a much simpler, inexpensive and more robust optical set-up. Here we aim to provide a theoretical analysis of this opportunity and its implications for compact lensless microscopy as we illustrated in this manuscript.

To record cell holograms that contain useful digital information with a spatially incoherent source emanating from a large aperture, one of the key steps is to bring the cell plane close to the detector array by ensuring z2≪z1, where z1 defines the distance between the incoherently illuminated aperture plane and the cell plane, and z2 defines the distance between the cell plane and the sensor array (see Fig. 1(b)). In conventional lensless in-line holography approaches, this choice is reversed such that z1≪z2 is utilized, while the total aperture-to-detector distance (z1+z2) remains comparable in both cases, leaving the overall device length almost unchanged. Therefore, apart from using an incoherent source through a large aperture, our choice of z2≪z1 is also quite different from mainstream lensfree holographic imaging approaches and thus deserves more attention.

To better understand the quantitative impact of this choice on incoherent on-chip microscopy, let us assume two point scatterers (separated by 2a) that are located at the cell plane (z=z1) with a field transmission of the form t(x, y) = 1 + c1 δ(x − a, y) + c2 δ(x + a, y), where c1 and c2 can be negative and their magnitudes denote the strength of the scattering process, and δ(x,y) is a Dirac delta function in space. These point scatterers can be considered to represent sub-cellular elements that make up the cell volume. For the same imaging system let us assume that a large aperture of arbitrary shape is positioned at z=0 with a transmission function of p(x,y) and that the digital recording screen (e.g., a CCD or a CMOS array) is positioned at z=z1+z2, where typically z1 ~ 2–5 cm and z2 ~ 0.5–2 mm.

Assuming that the aperture, p(x,y) is uniformly illuminated with a spatially incoherent light source, the cross-spectral density at the aperture plane can be written as:

$$W(x_1,y_1,x_2,y_2,\gamma) = S(\gamma)\, p(x_1,y_1)\, \delta(x_1 - x_2)\, \delta(y_1 - y_2),$$

where (x1, y1) and (x2, y2) represent two arbitrary points on the aperture plane and S(γ) denotes the power spectrum of the incoherent source with a center wavelength (frequency) of λ0 (γ0).

We should note that in our experimental scheme (Fig. 1(a)), the incoherent light source (the LED) was butt-coupled to the pinhole with a small amount of unavoidable distance between its active area and the pinhole plane. This remaining small distance between the source and the pinhole plane also generates some correlation for the input field at the aperture plane. In this theoretical analysis, we ignore this effect and investigate the imaging behavior of a completely incoherent field hitting the aperture plane. The impact of such an unavoidable gap between pinhole and the incoherent source is an “effective” reduction of the pinhole size in terms of spatial coherence (without affecting the light throughput), which we will not consider in this analysis.

Based on these assumptions, after free space propagation over a distance of z1, the cross-spectral density just before interacting with the cells can be written as24:

$$W(\Delta x,\Delta y,q,\gamma) = \frac{S(\gamma)}{(\lambda z_1)^2}\; e^{\,j\frac{2\pi \gamma q}{c z_1}} \iint p(x,y)\; e^{\,j\frac{2\pi}{\lambda z_1}\left(x\,\Delta x + y\,\Delta y\right)}\, dx\, dy$$

where Δx = x1 − x2, Δy = y1 − y2, and q = [(x1+x2)/2]·Δx + [(y1+y2)/2]·Δy; here (x1, y1) and (x2, y2) represent two arbitrary points on the cell plane. After interacting with the cells, i.e., with t(x,y), the cross-spectral density right behind the cell plane can be written as:

W(Δx,Δy,q,γ)·t*(x1,y1)·t(x2,y2)

This cross-spectral density function will effectively propagate another distance of z2 before reaching the detector plane. Therefore, one can write the cross-spectral density at the detector plane as:

$$W_D(x_{D1},y_{D1},x_{D2},y_{D2},\gamma) = \iiiint W(\Delta x,\Delta y,q,\gamma)\, t^{*}(x_1,y_1)\, t(x_2,y_2)\, h_C^{*}(x_1,x_{D1},y_1,y_{D1},\gamma)\, h_C(x_2,x_{D2},y_2,y_{D2},\gamma)\, dx_1\, dy_1\, dx_2\, dy_2$$

where (xD1, yD1) and (xD2, yD2) define arbitrary points on the detector plane (i.e., within the hologram region of each cell); and

$$h_C(x,x_D,y,y_D,\gamma) = \frac{1}{j\lambda z_2}\, e^{\,j\frac{2\pi z_2}{\lambda}}\, e^{\,j\frac{\pi}{\lambda z_2}\left[(x - x_D)^2 + (y - y_D)^2\right]}.$$

At the detector plane (xD, yD), the optical intensity i(xD, yD) can then be written as:

$$i(x_D,y_D) = \int W_D(x_D,y_D,x_D,y_D,\gamma)\, d\gamma$$

Assuming t(x, y) = 1 + c1 δ(x − a, y) + c2 δ(x + a, y), this last equation can be expanded into 4 physical terms, i.e.,

i(xD,yD)=C(xD,yD)+I(xD,yD)+H1(xD,yD)+H2(xD,yD),

where:

$$C(x_D,y_D) = D_0 + |c_1|^2 \frac{S_0}{(\lambda_0 z_1 z_2)^2}\, \tilde{P}(0,0) + |c_2|^2 \frac{S_0}{(\lambda_0 z_1 z_2)^2}\, \tilde{P}(0,0) \tag{1}$$
$$I(x_D,y_D) = c_2 c_1^{*} \frac{S_0}{(\lambda_0 z_1 z_2)^2}\, \tilde{P}\!\left(\frac{2a}{\lambda_0 z_1},0\right) e^{\,j\frac{4\pi a x_D}{\lambda_0 z_2}} + \text{c.c.} \tag{2}$$
$$H_1(x_D,y_D) = \frac{S_0}{(\lambda_0 z_1)^2}\left[\, c_1\cdot\left\{ p(-x_D\cdot M + a\cdot M\cdot F,\; -y_D\cdot M) * h_c(x_D,y_D)\right\} + \text{c.c.}\,\right] \tag{3}$$
$$H_2(x_D,y_D) = \frac{S_0}{(\lambda_0 z_1)^2}\left[\, c_2\cdot\left\{ p(-x_D\cdot M - a\cdot M\cdot F,\; -y_D\cdot M) * h_c(x_D,y_D)\right\} + \text{c.c.}\,\right] \tag{4}$$

In these equations, “c.c.” and “*” refer to the complex conjugate and convolution operations, respectively; M = z1/z2, F = (z1+z2)/z1, and P̃ is the 2D spatial Fourier transform of the aperture function p(x, y). It should be emphasized that (xD, yD) in these equations refers to the extent of each cell hologram, not to the entire field-of-view of the detector array.

Further,
$$h_c(x_D,y_D) = \frac{1}{j\lambda_0 F z_2}\; e^{\,j\frac{\pi}{\lambda_0 F z_2}\left(x_D^2 + y_D^2\right)},$$
which effectively represents the 2D coherent impulse response of free space over Δz = F·z2. For the incoherent source, we have assumed a center frequency (wavelength) of γ0 (λ0), where the spectral bandwidth was assumed to be much smaller than λ0 with a power spectrum of S(γ) ≅ S0 δ(γ − γ0). This is a valid approximation since in this work we have used an LED source at λ0 ~591 nm with a spectral FWHM of ~18 nm.

Note that in these derivations we have also assumed the paraxial approximation to simplify the results, which is valid since for this work z1 and z2 are typically much longer than the extent of each cell hologram (LH). However, for the digital microscopic reconstruction of the cell images from their raw holograms, no such approximations were made, as also emphasized in the Experimental Methods Section.

Furthermore, D0 of Eq. 1 can further be expanded as:

$$D_0 = \int\!\!\iiiint \frac{W(\Delta x,\Delta y,q,\gamma)}{(\lambda z_2)^2}\; e^{-j\frac{\pi}{\lambda z_2}\left[(x_1 - x_D)^2 + (y_1 - y_D)^2\right]}\; e^{\,j\frac{\pi}{\lambda z_2}\left[(x_2 - x_D)^2 + (y_2 - y_D)^2\right]}\, dx_1\, dy_1\, dx_2\, dy_2\, d\gamma$$

which simply represents the background illumination and has no spatial information regarding the cells’ structure or distribution. Although this last term, D0 can further be simplified, for most illumination schemes it constitutes a uniform background and therefore can be easily subtracted out.

Equations (1–4) are rather important for understanding the key parameters in lensfree on-chip microscopy with spatially incoherent light emanating from a large aperture. Equation (1) describes the classical diffraction that occurs from the cell plane to the detector under the paraxial approximation. In other words, it includes both the background illumination (the term D0) and the self-interference of the scattered waves (the terms proportional to |c1|2 and |c2|2). It is quite intuitive that the self-interference terms representing classical diffraction in Eq. (1) are scaled with P̃(0,0), as the extent of the spatial coherence at the cell plane is not a determining factor for self-interference.

Equation (2), however, contains the information of the interference between the scatterers located at the cell plane. Similar to the self-interference term, the cross-interference term I(xD, yD) also does not contain any useful information as far as holographic reconstruction of the cell image is concerned. This interference term is proportional to the amplitude of P̃(2a/(λ0z1), 0), which implies that for a small aperture size (hence a wide P̃) two scatterers that are located far from each other can also interfere. Based on the term P̃(2a/(λ0z1), 0), one can estimate that if 2a < λ0z1/D (where D is roughly the aperture width) the scattered fields can quite effectively interfere at the detector plane, giving rise to the interference term I(xD, yD). This result is not entirely surprising since the coherence diameter at the cell plane is proportional to λ0z1/D, as also predicted by the van Cittert–Zernike theorem. It is another advantage of the incoherent holography approach presented here that the cross-interference term I(xD, yD) will only contain the contributions of a limited number of cells within the imaging field-of-view, since P̃(2a/(λ0z1), 0) rapidly decays to zero for a large aperture. This cross-interference term will be stronger for coherent in-line holography due to its much better spatial coherence. This difference can especially make an impact in favor of incoherent large-aperture illumination for imaging of a dense cell solution such as whole blood samples, where I(xD, yD) can no longer be ignored.

The final two terms (Eqs. (3–4)) describe the holographic diffraction phenomenon and are of central interest in all forms of digital holographic imaging systems, including the one presented here. Physically, these terms dominate the information content of the detected intensity, especially for weakly scattering objects, and they represent the interference of the scattered light from each object with the background light, i.e., H1(xD, yD) represents the holographic diffraction of the first scatterer c1δ(x − a, y), whereas H2(xD, yD) represents the holographic diffraction of the second scatterer c2δ(x + a, y). Note that the complex conjugate (c.c.) terms in Eqs. (3) and (4) represent the source of the twin images of the scatterers, since hc*(xD, yD) implies propagation in the reverse direction, creating the twin-image artifact at the reconstruction plane. Elimination of such twin images in our cell reconstruction results is discussed in the Experimental Methods Section.

A careful inspection of the terms inside the curly brackets in Eqs. (3)–(4) indicates that, for each scatterer position, a scaled and shifted version of the aperture function p(x, y) effectively diffracts coherently through the free-space impulse response hc(xD, yD). In other words, as far as holographic diffraction is concerned, each point scatterer at the cell plane can be replaced by a scaled version of the aperture function (i.e., p(−xD · M, −yD · M)) that is shifted from the origin by F times the scatterer's lateral position, and the distance between the cell plane and the sensor plane can now be effectively replaced by Δz = F · z2. Quite importantly, this scaling factor is M = z1/z2, which implies that the large aperture that is illuminated incoherently is effectively narrowed down by M fold at the cell plane (typically M ≈ 40–100). Therefore, for M ≫ 1, incoherent illumination through a large aperture is approximately equivalent (for each cell's holographic signature) to coherent illumination of each cell individually, where the wave propagation over Δz determines the detected holographic intensity of each cell. This holds as long as the cell's diameter is smaller than the coherence diameter (Dcoh ≈ λ0z1/D, see Eq. (2)) at the cell plane, where D defines the width of the illumination aperture and typically Dcoh ~ 400λ0–1000λ0, which is quite appropriate for most cells of interest. Accordingly, for a completely incoherent source and a sensor area of A, d = D/M defines the effective width of each point scatterer at the cell plane and f = A/F² determines the effective imaging field-of-view. Assuming typical values for z1 (~3.5 cm) and z2 (~0.7 mm), the scaling factor M becomes ~50 with F ≈ 1, which means that even a D = 50 µm wide pinhole would be scaled down to ~1 µm at the cell plane, and the holograms can be mapped onto the entire active area of the sensor array, i.e., f ≈ A. To conclude: for M ≫ 1 the spatial features of the cells over the entire active area of the sensor array are not affected by the large incoherent aperture, which permits recording of a coherent hologram of each cell individually.
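The scaling relations above can be checked with a few lines of arithmetic (a sketch using the illustrative numbers quoted in this paragraph; the sensor area is taken as the ~24 mm² field-of-view reported for this system):

```python
# Geometric parameters of the incoherent lensfree geometry:
# M = z1/z2 (aperture de-magnification), F = (z1+z2)/z1 (fringe magnification),
# d = D/M (effective aperture width at the cell plane), f = A/F**2 (effective FOV).

z1 = 3.5e-2        # aperture-to-cell distance (m)
z2 = 0.7e-3        # cell-to-sensor distance (m)
D = 50e-6          # physical width of the illumination pinhole (m)
A = 24e-6          # active sensor area (m^2), i.e. ~24 mm^2

M = z1 / z2
F = (z1 + z2) / z1
d = D / M
f = A / F ** 2

print(f"M = {M:.0f}, F = {F:.3f}")                         # M ~ 50, F ~ 1.02
print(f"effective aperture at cell plane d = {d*1e6:.1f} um")   # ~1 um
print(f"effective field of view f = {f*1e6:.1f} mm^2")          # f ~ A
```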

Even though the entire derivation above was made using the formalism of wave theory, the end result is quite interesting, as it predicts a geometrical scaling factor of M = z1/z2 (see Figure 1(b)). Further, because M ≫ 1, each cell hologram occupies only a tiny fraction of the entire field-of-view and therefore behaves independently of most other cells within the imaging field-of-view. This is also the reason why (unlike conventional lensfree in-line holography) there is no longer a Fourier transform relationship between the detector plane and the cell plane. Such a Fourier transform relationship only exists between each cell hologram and the corresponding cell.

Notice also that in Eqs. (3)–(4) the shift of the scaled aperture function p(−xD · M ∓ a · M · F, −yD · M) from the origin can be written as xD = ∓a · F, which is in perfect agreement with calling F = (z1 + z2)/z1 the "fringe magnification factor" for the holographic diffraction term. This also explains the reduction of the imaging field-of-view by F² fold for in-line digital holography. Assuming M ≫ 1, Δz approaches z2 and the shift terms in Eqs. (3)–(4), i.e., ∓a · F, also approach ∓a, which makes sense since this corresponds to the shift of the scatterers at the cell plane from the origin.

According to Eqs. (3)–(4), for a narrow enough p(−xD · M, −yD · M) (such that the spatial features of the cells are not washed out), the modulation of the holographic term at the detector plane can be expressed as sin(π(xD² + yD²)/(λ0Fz2)). This modulation of the holographic signature at the detector plane implies that for a large fringe magnification (F), the pixels of the sensor array can more easily sample the rapidly oscillating fringes of the cell hologram, which effectively increases the numerical aperture of the sampling as much as the sensor width permits. However, there are penalties to be paid for a large F: (1) it does not permit the use of an incoherent source emanating through a large aperture, which makes the optics and alignment more demanding and increases the relative cost and complexity; and (2) the effective imaging field-of-view is reduced by a factor of F². More analysis on this topic is provided in the Supplementary Text S1.
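A short sketch of this sampling argument (our own illustration; the pixel pitch and wavelength are assumed values) computes the radius out to which the chirped fringes remain Nyquist-sampled for a few choices of F:

```python
# The local fringe period of sin(pi*(xD^2+yD^2)/(lambda0*F*z2)) at radius r is
# lambda0*F*z2/r, so the Nyquist criterion (period >= 2 pixels) caps the hologram
# radius that a given pixel size can record; a larger F pushes that cap outward
# (the sensor half-width eventually limits it).

lambda0 = 590e-9    # wavelength (m)
z2 = 0.7e-3         # cell-to-sensor distance (m)
pixel = 2.2e-6      # assumed sensor pixel pitch (m)

for F in (1.0, 2.0, 5.0):
    r_max = lambda0 * F * z2 / (2 * pixel)   # radius where the fringe period drops to 2 pixels
    print(f"F = {F:.0f}: fringes remain Nyquist-sampled out to r ~ {r_max*1e6:.0f} um")
```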

The derivation above was made for two point scatterers separated by 2a, i.e., c1 δ(x − a, y) + c2 δ(x + a, y). The more general form of the incoherent holographic term (the equivalent of Eqs. (3) and (4) for a continuous distribution of scatterers, as in a real cell) can be expressed as:

$$H(x_D, y_D) \propto \frac{S_0}{(\lambda_0 z_1)^2}\cdot\left(\frac{z_2}{z_1}\right)^2\left[\left\{ s\!\left(\frac{x_D}{F}, \frac{y_D}{F}\right) * h_c(x_D, y_D)\right\} + c.c.\right]$$

where s(xD, yD) refers to the transmission image of the sample/cell of interest, i.e., the 2D map of all the scatterers located within the sample/cell volume. The derivation above assumed a narrow enough p(−xD · M, −yD · M) such that M ≫ 1, which is characteristic of the approach discussed in this manuscript. The physical effect of the fringe magnification factor (F) on the object hologram can also be seen in this final equation, consistent with the discussion in the preceding paragraphs.
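For illustration, the forward model implied by this equation can be sketched numerically as follows (our own simplified simulation, not the authors' code; the grid size, pixel pitch, wavelength and scatterer amplitudes are assumptions). The magnified object is convolved with the Fresnel kernel hc over Δz = F·z2, and the conjugate term enters through the real part, on top of a uniform background:

```python
# A minimal sketch of the incoherent holographic forward model above:
# hologram ~ background + {s(xD/F, yD/F) * h_c(xD, yD)} + c.c.
import numpy as np

def fresnel_kernel(n, pixel_size, wavelength, dz):
    """Paraxial free-space impulse response h_c sampled on an n x n grid."""
    x = (np.arange(n) - n // 2) * pixel_size
    X, Y = np.meshgrid(x, x)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * dz)) / (1j * wavelength * dz)

n, pixel, lambda0 = 512, 2.2e-6, 590e-9
z1, z2 = 3.5e-2, 0.7e-3
F = (z1 + z2) / z1

# Two weak point scatterers standing in for the (already F-magnified) object s.
s = np.zeros((n, n), complex)
s[n // 2, n // 2 - 20] = 1.0
s[n // 2, n // 2 + 20] = 0.7

# Convolve the object with h_c over the effective distance Delta_z = F * z2.
hc = fresnel_kernel(n, pixel, lambda0, F * z2)
scattered = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(s)) *
                                         np.fft.fft2(np.fft.ifftshift(hc)))) * pixel**2

# For a weak object on a unit background, intensity ~ 1 + {.} + c.c.
hologram = 1.0 + 2.0 * np.real(scattered)
```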

Finally, the supplementary text provides further discussion on the spatial sampling requirements at the detector array, as well as the space-bandwidth product of the presented technique.36–38

References

1. Hell SW. Nat Biotech. 2003;21:1347–1355. doi: 10.1038/nbt895.
2. Gustafsson MGL. Proc. Natl. Acad. Sci. USA. 2005;102:13081–13086. doi: 10.1073/pnas.0406877102.
3. Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF. Science. 2006;313:1642–1645. doi: 10.1126/science.1127344.
4. Rust MJ, Bates M, Zhuang X. Nat Meth. 2006;3:793–796. doi: 10.1038/nmeth929.
5. Hess ST, Girirajan TP, Mason MD. Biophysical Journal. 2006;91:4258–4272. doi: 10.1529/biophysj.106.091116.
6. Ma Z, Gerton JM, Wade LA, Quake SR. Phys. Rev. Lett. 2006;97:260801. doi: 10.1103/PhysRevLett.97.260801.
7. Chung E, Kim D, Cui Y, Kim Y, So PT. Biophysical Journal. 2007;93:1747–1757. doi: 10.1529/biophysj.106.097907.
8. Pavani SRP, Thompson MA, Biteen JS, Lord SJ, Liu N, Twieg RJ, Piestun R, Moerner WE. Proc. Natl. Acad. Sci. USA. 2009;106:2995–2999. doi: 10.1073/pnas.0900245106.
9. Zipfel WR, Williams RM, Webb WW. Nat Biotech. 2003;21:1369–1377. doi: 10.1038/nbt899.
10. Evans CL, Potma EO, Puoris'haag M, Côté D, Lin CP, Xie XS. Proc. Natl. Acad. Sci. USA. 2005;102:16807–16812. doi: 10.1073/pnas.0508282102.
11. Choi W, Fang-Yen C, Badizadegan K, Oh S, Lue N, Dasari RR, Feld MS. Nat Meth. 2007;4:717–719. doi: 10.1038/nmeth1078.
12. Barretto RPJ, Messerschmidt B, Schnitzer MJ. Nat Meth. 2009;6:511–512. doi: 10.1038/nmeth.1339.
13. Rosen J, Brooker G. Nat Photon. 2008;2:190–195.
14. Goda K, Tsia KK, Jalali B. Nature. 2009;458:1145–1149. doi: 10.1038/nature07980.
15. Psaltis D, Quake SR, Yang C. Nature. 2006;442:381–386. doi: 10.1038/nature05060.
16. Haddad WS, Cullen D, Solem JC, Longworth JW, McPherson A, Boyer K, Rhodes CK. Appl. Opt. 1992;31:4973–4978. doi: 10.1364/AO.31.004973.
17. Xu W, Jericho MH, Meinertzhagen IA, Kreuzer HJ. Proc. Natl. Acad. Sci. USA. 2001;98:11301–11305. doi: 10.1073/pnas.191361398.
18. Pedrini G, Tiziani HJ. Appl. Opt. 2002;41:4489–4496. doi: 10.1364/ao.41.004489.
19. Repetto L, Piano E, Pontiggia C. Opt. Lett. 2004;29:1132–1134. doi: 10.1364/ol.29.001132.
20. Garcia-Sucerquia J, Xu W, Jericho MH, Kreuzer HJ. Opt. Lett. 2006;31:1211–1213. doi: 10.1364/ol.31.001211.
21. Heng X, Erickson D, Baugh LR, Yaqoob Z, Sternberg PW, Psaltis D, Yang C. Lab Chip. 2006;6:1274–1276. doi: 10.1039/b604676b.
22. Seo S, Su T, Tseng DK, Erlinger A, Ozcan A. Lab Chip. 2009;9:777–787. doi: 10.1039/b813943a.
23. Coskun AF, Su T, Ozcan A. Lab Chip. 2010. doi: 10.1039/b926561a.
24. Brady DJ. Optical Imaging and Spectroscopy. Hoboken, NJ, USA: John Wiley & Sons; 2009.
25. Lohmann W. J. Opt. Soc. Am. 1965;55:1555–1556.
26. Mertz L. Transformation in Optics. Hoboken, NJ, USA: John Wiley & Sons; 1965.
27. Dubois F, Joannes L, Legros J. Appl. Opt. 1999;38:7085–7094. doi: 10.1364/ao.38.007085.
28. Dubois F, Requena M Novella, Minetti C, Monnom O, Istasse E. Appl. Opt. 2004;43:1131–1139. doi: 10.1364/ao.43.001131.
29. Gopinathan U, Pedrini G, Osten W. J. Opt. Soc. Am. A. 2008;25:2459–2466. doi: 10.1364/josaa.25.002459.
30. Monroy F, Rincon O, Torres YM, Garcia-Sucerquia J. Optics Communications. 2008;281:3454–3460.
31. Situ G, Sheridan JT. Opt. Lett. 2007;32:3492–3494. doi: 10.1364/ol.32.003492.
32. Sherman GC. J. Opt. Soc. Am. 1967;57:546–547. doi: 10.1364/josa.57.000546.
33. Koren G, Polack F, Joyeux D. J. Opt. Soc. Am. A. 1993;10:423–433.
34. Fienup JR. Opt. Lett. 1978;3:27–29. doi: 10.1364/ol.3.000027.
35. England JM. Clin Lab Haematol. 1994;16:131–138.
36. Goodman JW. Introduction to Fourier Optics. Greenwood Village, CO, USA: Roberts & Company Publishers; 2005.
37. Lohmann W, Testorf ME, Ojeda-Castañeda J. Proc. SPIE. 2002;4737:77–88.
38. Stern A, Javidi B. J. Opt. Soc. Am. A. 2008;25:736–741. doi: 10.1364/josaa.25.000736.
