Author manuscript; available in PMC 2010 Nov 1. Published in final edited form as: Appl Opt. 2009 Nov 1;48(31):5897–5905. doi: 10.1364/AO.48.005897

Light field image sensors based on the Talbot effect

Albert Wang, Patrick Gill, and Alyosha Molnar
PMCID: PMC2892475  NIHMSID: NIHMS208008  PMID: 19881658

Abstract

We present a pixel-scale sensor that uses the Talbot effect to detect the local intensity and incident angle of light. The sensor comprises two local diffraction gratings stacked above a photodiode. When illuminated by a plane wave, the upper grating generates a self-image at the half Talbot depth. The second grating, placed at this depth, blocks or passes light depending upon incident angle. Several such structures, tuned to different incident angles, are sufficient to extract local incident angle and intensity. Furthermore, arrays of such structures are sufficient to localize light sources in three dimensions without any additional optics.

1. Introduction

Conventional imaging uses a large array of light sensors to create a map of light intensity at an image plane. However, this intensity map fails to capture incident angle, polarization angle, and other properties of light rays passing through the image plane. A complete description of these additional parameters defines the light field [1,2], or “flow” of light, at the image plane. Applications of light fields include three-dimensional (3D) rendering [3] and computational refocus of images [4].

We present a method of measuring the light field at a given image plane. In contrast to a conventional solid-state image sensor, whose sites are sensitive only to light intensity, our image sensor has sites that are sensitive to both the intensity and the incident angle of light striking them. Our technique exploits the Fresnel diffraction patterns of periodic gratings (the Talbot effect [5]) to characterize incident light by its magnitude and direction. Specifically, we employ local, micrometer-scale diffraction gratings at each of a large number of sensor sites to capture this information. To distinguish these devices from the typical pixels of digital image sensors, we call them angle-sensitive pixels (ASPs).

In the following two subsections, we provide background information on light fields and the Talbot effect. We then present the design principles of the angle-sensitive pixel, followed by experimental results from prototypes of small light-field image sensors composed of our ASPs. Finally, we discuss implications and future directions for this work.

A. Light Field

In an 1846 lecture, Michael Faraday first proposed the concept of light as a field [6]. This concept was expanded by Gershun, who developed the theory of a “light field” in 3D space [1]. At a given point, the light field is defined by the infinite collection of rays, which represents the light arriving at the point from all angles. The light field can be formally defined by a “plenoptic function” [7] of multiple variables. The plenoptic function parameterizes the light rays passing through all space in terms of intensity, I, which is dependent on position in space (x, y, and z), direction (θ, ϕ), wavelength (λ), time (t), and polarization angle (p). Hence, I(x, y, z, θ, ϕ, λ, t, p) is the complete representation of a visual scene and contains all possible views of the scene.

Measuring the plenoptic function would require an observer able to determine the intensity of every ray, for every wavelength and polarization, at all instants in time and at every point in space. Clearly, perfect determination of the plenoptic function for any practical scene is impossible. However, a number of techniques, collectively known as light-field imaging, have been devised that let us record aspects of the plenoptic function beyond simple intensity at a plane. The simplest method is to use an array of pinhole cameras, as proposed by Adelson and Wang [8], where each camera captures the incident angle-dependent intensity I(θ, ϕ) at a particular location, (x0, y0). Cameras at different positions (xi, yi) capture a slice of the plenoptic function, I(x, y, θ, ϕ). Arrays of conventional cameras can also be used [9,10], as can camera scanning [11] or multiple masks [12]. Small-scale solutions have used microlenses to emulate camera arrays [8,13]. However, all of these approaches require a significant number of parallel or moveable optical components to capture information about the light field beyond a simple intensity map.

Recording information about the light field of a scene provides a more complete description of that scene than a conventional photograph or movie, and is useful for a number of applications. The light field allows prediction of illumination patterns on a surface given known sources and the 3D reconstruction of scenes (“light-field rendering” [3] or “three-dimensional shape approximation” [14]). Figure 1 shows how one aspect of the light field, incident angle, can be used to localize a light source in 3D space. Capturing the light field also permits construction of images with an arbitrary focal plane and aperture [4,11]. This capability is useful in both photography and in microscopy for obtaining multiple focal planes without moving any optics [15].

Fig. 1. Example of light-field imaging: (a) Light from a source strikes each pixel of an array at a distinct incident angle. (b) If each pixel in an array can determine the incident angle as well as the intensity of the light it detects, then the array can localize a light source in three dimensions.

As schematically shown in Fig. 1, we present a method to perform light-field imaging by directly measuring incident intensity and angle of light. Using a large number of pixels, each containing a micrometer-scale diffraction grating, our sensor directly measures light vector information at many distinct points in space. In contrast to other approaches that require multiple lenses and/or moving parts, our device is monolithic, requires no optical components aside from the sensor itself, and can be manufactured in a standard planar microfabrication process. The key to our approach is to exploit the Talbot effect.

B. Talbot Effect

The Talbot effect, or the self-imaging property of periodic objects such as diffraction gratings, was first discovered by Henry Fox Talbot in 1836 [5]. When an infinite diffraction grating is illuminated by a plane wave normal to its surface, identical images of the grating are formed at certain equally spaced distances behind the grating [see Fig. 2(b)]. Lord Rayleigh explained this effect as a consequence of Fresnel diffraction [16], and showed that the images form at integer multiples of the Talbot distance zT = 2d²/λ, where d is the period of the grating and λ is the wavelength of incident light (see Fig. 2). Subsequent work showed that additional, more complex subimages can be observed at the fractional Talbot distances z = (m/n)zT, where m and n are positive integers [17–19].
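For concreteness, a minimal Python sketch (ours, not part of the original paper; it uses the grating period and in-oxide wavelength of the device described in Section 3) evaluates these distances at the micrometer scale relevant here:

```python
# Talbot distance z_T = 2 d^2 / lambda, evaluated for the device parameters
# reported in Section 3: d = 880 nm period, green light (525 nm in vacuum,
# roughly 375 nm inside silicon dioxide).
d = 880e-9      # grating period (m)
lam = 375e-9    # wavelength inside SiO2 (m)
z_T = 2 * d**2 / lam
print(f"z_T = {z_T * 1e6:.2f} um, half Talbot depth = {z_T / 2 * 1e6:.2f} um")
# -> z_T = 4.13 um; half Talbot depth = 2.06 um, the ~2 um used in Section 3
```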

Fig. 2. Illustration of the self-imaging property of nanoscale diffraction gratings. (a) Definition of scale and dimensions. (b) FDTD simulations of the Talbot effect at the nanoscale: d = 800 nm, λ = 375 nm in SiO2 (equivalent to 525 nm in vacuum), and θ = 0. Note the self-images at multiples of the half Talbot depth. (c) FDTD simulation showing the lateral shift of the self-image at the half Talbot depth as the incident angle shifts from θ = 0° to 5°.

The Talbot effect is exploited in a wide variety of macroscale applications. The basic self-imaging phenomenon has been used in interferometry [20], image processing [21], and coherent array illumination [22]. It has also been used to measure wave front distortion introduced by optical elements [23,24]. Others have applied the Talbot effect to perform ranging and depth measurement [25–27].

Existing depth estimation work employing the Talbot effect relies on direct characterization of the response of the Talbot self-images to different depths. Early work measuring the contrast of self-images reflecting from an object was limited to range measurements within a single Talbot distance [25]. Later research with modulated gratings enabled greater depth measurement [26]. A more recent technique uses a lens to focus light from a scene in front of a diffraction grating [27]. In this arrangement, the convergent Talbot effect results in self-images of the grating at all depths behind the grating. The line width of the self-images observed in a particular area determines the depth of the corresponding region in the scene.

While the previously described depth-mapping techniques can recover information from the light field, they do so at a macroscopic scale. A monolithic light field image sensor based on these methods would require integrating dedicated optics with a conventional image sensor. Moreover, significant computation would be required to translate the imaged Talbot patterns into light field information. The light field imaging technique we propose here does not rely on direct imaging and characterization of the Talbot self-images.

Instead, our light field image sensor indirectly extracts 3D structure information by taking advantage of the sensitivity of the Talbot effect to incident angle. This sensitivity is known as the off-axis Talbot effect. Existing work has shown that for macroscopic (d ≫ λ) linear gratings illuminated by an off-axis plane wave incident at angle θ, self-imaging is observed at multiples of the distance z = 2 cos³(θ) d²/λ [28]. Furthermore, the images exhibit a lateral shift Δx = z tan(θ) perpendicular to the grating lines as a result of the off-axis wave propagation.

Multiple sources of off-axis illumination each generate their own set of laterally shifted grating self-images, and these self-images superpose. For small angles, these self-images all form at approximately the same distances, and the superimposed image informs us about the magnitude of illumination as well as direction. Hence, measuring the shift in Talbot self-images of a grating lets us recover the incident angles of light rays striking the grating.
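The size of this shift at the scales used in this work can be estimated with a short sketch (a simplification we add here: θ is taken as the angle inside the oxide, and refraction at the sensor surface is ignored):

```python
import math

d = 880e-9      # grating period (m)
lam = 375e-9    # wavelength inside SiO2 (m)
z = d**2 / lam  # half Talbot depth, where the self-images are analyzed

for theta_deg in (0.0, 2.5, 5.0):
    dx = z * math.tan(math.radians(theta_deg))  # lateral self-image shift
    print(f"theta = {theta_deg:3.1f} deg -> shift = {dx * 1e9:6.1f} nm "
          f"({dx / d:.2f} grating periods)")
```

A tilt of only a few degrees moves the self-image by an appreciable fraction of a grating period, which is precisely what the structures described in Section 2 detect.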

Since diffraction gratings are easily manufactured using standard planar microfabrication techniques, we can construct a micrometer-scale, easily tiled structure that contains a grating and measures shifts in the resultant self-images. An array of such structures provides many simultaneous measurements at many adjacent points, creating a map of incident angle at the plane of the array. Such an array would be a standalone light-field image sensor.

2. Design

The proposed micrometer-scale sensor requires both a diffraction grating to generate Talbot self-images and a means of analyzing these self-images. In order to achieve spatial resolution comparable with existing image sensors, the entire sensor structure must fit within an area at most tens of micrometers on a side. To produce a reasonably periodic self-image, the grating must have several periods within this area. Together, these two constraints restrict us to gratings with a period of only a few wavelengths. Contemporary planar photolithography techniques can easily achieve the resolution required to generate appropriate diffraction gratings. As with previous work [29], we have relied on numerical modeling and simulation to accurately predict the behavior of finite gratings built at the single-micrometer scale.

The Talbot effect has been observed empirically for high-density gratings with a period of approximately 3λ [30]. Recent numerical treatments show that as long as the period is greater than the wavelength of incident light, Talbot-like self-images can be observed in close proximity to the diffraction grating [29]. We have performed our own simulations using the finite-difference time domain (FDTD) technique and observed similar patterns, as shown in Figs. 2(b) and 2(c). In particular, starting from the half Talbot distance, we observe strong intensity patterns with periodicity identical to the diffraction grating. Furthermore, additional simulations show that under off-axis illumination, the intensity patterns generated by high-density gratings shift laterally. This behavior is identical to the behavior of Talbot self-images generated by conventional, macroscale diffraction gratings. The primary effect of moving to wavelength-scale diffraction gratings is to suppress higher-order fractional Talbot images.
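Although our quantitative results rely on full vector FDTD simulations, the qualitative behavior can be reproduced with a much simpler scalar model. The sketch below (our illustration, under scalar-diffraction assumptions; not the FDTD solver used for Fig. 2) propagates plane-wave illumination through a binary grating with the angular spectrum method:

```python
import numpy as np

lam = 375e-9               # wavelength in SiO2 (525 nm in vacuum)
d = 880e-9                 # grating period
k = 2 * np.pi / lam
spp = 64                   # samples per grating period
nx = 64 * spp              # simulation window: 64 periods
dx = d / spp
x = (np.arange(nx) - nx // 2) * dx

# Binary amplitude grating (Ronchi ruling: equal bars and gaps)
grating = (np.mod(x, d) < d / 2).astype(complex)

# Angular spectrum transfer function; evanescent orders get imaginary kz
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
kz = np.sqrt((k**2 - kx**2).astype(complex))

def propagate(u, z):
    """Propagate the field u(x) a distance z by the angular spectrum method."""
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * z))

z_half = d**2 / lam        # half Talbot depth
theta = np.radians(5.0)    # off-axis illumination (angle inside the oxide)

on_axis = np.abs(propagate(grating, z_half))**2
off_axis = np.abs(propagate(grating * np.exp(1j * k * np.sin(theta) * x),
                            z_half))**2
# on_axis shows a period-d self-image; off_axis shows the same pattern
# shifted laterally by roughly z_half * tan(theta), as in Fig. 2(c)
```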

To extract incident angle information from the Talbot pattern, we need a means to characterize the horizontal offset of the self-images. A straightforward solution, previously employed in macroscopic wavefront sensors, is to place an array of charge-coupled device (CCD) or complementary metal–oxide semiconductor (CMOS) image sensors at one of the self-image planes [23,24]. This previous work used gratings (and self-images) that were significantly larger (pitch of d = 250 μm) than the pixels of the image sensor itself, so the image sensor array could directly capture the self-image as a set of electrical signals. In our application, however, this approach would require manufacturing a very high density imager array. The array would need a pixel pitch of ¼ the grating pitch (in our case, on the order of 200 nm) to effectively resolve the features of the Talbot image. Imager arrays with submicrometer resolution are extremely difficult to manufacture; although submicrometer photosensors can be built, the images they capture tend to be blurred by diffusion effects, limiting their actual resolution to 1 μm or worse [31].

Rather than placing a complete imager behind each sensor’s grating, we add a second, parallel analyzer grating at the self-image plane (Fig. 3), with a period identical to that of the first grating. This second grating uses the moiré effect to filter the Talbot image. When the intensity peaks align with gaps in the second grating [Fig. 3(b)], light passes through the analyzer grating. When the intensity peaks are out of alignment [Fig. 3(a)], the bars of the analyzer grating block the light. This technique is similar to one used in experiments involving the diffraction of atoms [32]. By placing a single large photosensor under the analyzer grating and measuring the total light flux, we extract the alignment of the self-image with the analyzer grating [Fig. 3(c)].
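A minimal model of this moiré filtering (our sketch; the raised-cosine self-image profile is an assumed stand-in for the simulated patterns of Fig. 2) integrates a laterally shifted, period-d intensity pattern through a binary analyzer grating:

```python
import numpy as np

d = 880e-9
lam = 375e-9
z = d**2 / lam                   # analyzer grating sits at the half Talbot depth

x = np.linspace(0.0, 10 * d, 10_000, endpoint=False)
analyzer = np.mod(x, d) < d / 2  # binary analyzer grating (Ronchi ruling)

def detector_flux(theta):
    """Flux reaching the photodiode: a period-d self-image shifted by
    z*tan(theta), masked by the analyzer. The 3d/4 offset places the
    normal-incidence peaks over the analyzer bars, since the self-image
    at the half Talbot depth is contrast reversed [Fig. 3(a)]."""
    shift = z * np.tan(theta) + 3 * d / 4
    image = 0.5 * (1.0 + np.cos(2 * np.pi * (x - shift) / d))
    return np.mean(image * analyzer)

angles = np.radians(np.linspace(-15.0, 15.0, 121))
flux = np.array([detector_flux(t) for t in angles])
# flux oscillates periodically with incident angle, as in Fig. 3(c)
```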

Fig. 3. FDTD simulations illustrating the effect of including an analyzer grating at the half Talbot depth. (a) When the peaks of the self-image align with the bars of the analyzer grating, little light passes through to a light detector below. (b) When the incident angle is shifted so that the peaks align with gaps in the analyzer grating, much more light passes to the detector. (c) Intensity of detected light changes periodically with swept incident angle.

The total light flux detected is dependent on both the overall source brightness and the incident angle. This leads to an ambiguity between intensity and angle in the sensor output, since a bright source at a blocked angle yields the same sensor output as a dimmer source at an angle passed by the analyzer grating. To disambiguate angle and intensity, we have placed multiple sensors (each with two stacked gratings and a photodiode) in close proximity so that they see approximately the same light field (Fig. 4). Each sensor has a different relative offset between the analyzer grating and the image-generating grating. Using the unique signals produced by each of the set of sensors, we can recover intensity and incident angle.

Fig. 4. (a) Illustration of multiple adjacent sensors, with stacked gratings at different offsets above distinct photodiodes; black dotted lines illustrate the relative alignment of the gratings. (b) Simulation results, similar to Fig. 3(c), but for various offsets: note that the incident angles that generate peak responses shift in proportion to the offset of the grating.

Because the lateral shift of the Talbot images is observed only for off-axis illumination at angles perpendicular to the grating lines, our sensors are responsive only to angles in one direction. In order to obtain full illumination angle information, we must place a second set of identical sensors with gratings rotated by 90°, in close proximity to the first. This set is responsible for measuring the angle information ignored by the first set of sensors.

Our complete angle-sensitive pixel (ASP) is composed of eight different sensors placed in close proximity, as shown in Fig. 5(b). Four sensors are responsible for the angle in the xz plane; four more are needed for the angle in the yz plane. For both xz and yz gratings, we manufactured diffraction–analyzer offsets of 0, d/4, d/2, and 3d/4. We placed the analyzer gratings at the half Talbot distance, the smallest distance where self-images with periodicity identical to the diffraction grating can be found.

Fig. 5. Microphotographs of (a) a single ASP and (b) an 8 × 8 array of ASPs, manufactured in 130 nm CMOS.

Simulated responses for one set of four sensors under plane illumination of different angles are shown in Fig. 4(b). We observe that the transmission through the analyzer grating is periodic in incident angle, due to the lateral shift of the periodic self-images. The responses of these sensors can be approximately modeled by the equations

R0 = I0 (1 − m cos(bθ)) F(θ),
R1/4 = I0 (1 + m sin(bθ)) F(θ),
R1/2 = I0 (1 + m cos(bθ)) F(θ),
R3/4 = I0 (1 − m sin(bθ)) F(θ). (1)

Here I0 is proportional to the incident intensity, θ is the incident angle, m is a measure of the modulation depth, and b is a measure of angular sensitivity. F(θ) is an even-symmetric function included to account for surface reflections and other effects that reduce the response to light at high incident angles, independent of angular sensitivity.

From the four outputs in Eq. (1), it is possible to determine the intensity and incident angle (in the xz plane) of light. Summing the ASP responses R0 and R1/2 (or R1/4 and R3/4) removes the modulation produced by incident angle and provides information on overall intensity:

I0 F(θ) = (R0 + R1/2) / 2 = (R1/4 + R3/4) / 2. (2)

Meanwhile, incident angle can be extracted as

θ = (1/b) tan⁻¹[(R1/4 − R3/4) / (R1/2 − R0)]. (3)

The second set of four sensors in the ASP obeys an identical model and extracts intensity and incident angle in the yz plane. Hence the ASP as a whole measures the intensity and average incident angle of the light striking it. Tiling such ASPs into arrays yields an image sensor that creates a map of the light field at many points in the xy plane. This device is a light field image sensor.
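A short sanity check of this model (ours; F(θ) is set to 1, the m and b defaults are the fitted values reported in Section 3, and np.arctan2 replaces tan⁻¹ to keep the correct quadrant) generates the four responses of Eq. (1) and inverts them with Eqs. (2) and (3):

```python
import numpy as np

def asp_responses(I0, theta, m=0.7, b=15.0):
    """Four sensor outputs from the model of Eq. (1), taking F(theta) = 1."""
    return np.array([
        I0 * (1 - m * np.cos(b * theta)),   # R_0
        I0 * (1 + m * np.sin(b * theta)),   # R_1/4
        I0 * (1 + m * np.cos(b * theta)),   # R_1/2
        I0 * (1 - m * np.sin(b * theta)),   # R_3/4
    ])

def recover(R, b=15.0):
    """Invert the model: Eq. (2) for intensity, Eq. (3) for incident angle."""
    R0, R14, R12, R34 = R
    intensity = (R0 + R12) / 2                  # = I0 * F(theta)
    theta = np.arctan2(R14 - R34, R12 - R0) / b
    return intensity, theta

R = asp_responses(I0=2.0, theta=np.radians(3.0))
I_est, theta_est = recover(R)
print(I_est, np.degrees(theta_est))   # -> 2.0 and 3.0 (degrees)
```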

3. Results

Small (8 × 8) arrays of the ASP described above were designed and manufactured using existing planar microfabrication techniques [Fig. 5(b)]. We used the layers available in a standard CMOS fabrication process to integrate the multiple gratings and photosensors into one structure. While such manufacturing processes are typically used for integrated circuits, the combination of fine-resolution metal interconnect layers with light-sensitive semiconductor devices is ideal for our structure.

A single prototype ASP structure is shown in Fig. 5(a). The overall size is 20 μm by 40 μm, with each individual sensor being 10 μm square. We designed the diffraction grating and analyzer grating in each of the eight sensors as Ronchi rulings (equal width bars and gaps) using copper bars, with a period of 880 nm. All other space was filled with silicon dioxide. Simulations for green light (λ = 525 nm in vacuum) determined the half Talbot distance in silicon dioxide to be 2 μm, and we set the analyzer grating depth accordingly. A single p–n photodiode in each of the eight sensors measured the total light flux through the stacked gratings.

To test our ASP, a light source (a commercial green LED with center wavelength of 525 nm and spectral width of 32 nm) was mounted on a variable-angle arm at a fixed distance from the fabricated arrays. We performed no additional collimation or filtering, as a nonideal illumination source better approximates real-world imaging applications. When a range of wavelengths is present, the observed self-images are a superposition of the intensity patterns produced by each wavelength [33]. Because the spectral width of the source is relatively narrow and the path length differences that create the Talbot patterns are shorter than the source’s coherence length, we did not expect significant deviation in performance from our monochromatic, coherent simulations.

We recorded the outputs of a single ASP for each angle as the source was moved. The outputs corresponding to one set of four sensors in the ASP are shown in Fig. 6. Reasonable agreement was obtained between measured results and those predicted by simulation. Fitting the curves in Fig. 6 with the model in Eq. (1) gives b = 15 and m = 0.7, with a root-mean-squared error of 9%. The second set of four sensors (for characterizing angles in the yz plane) produced similar curves in response to changes in incident angle. Differences observed between measurement and idealized simulations such as those in Figs. 3 and 4 are due to reflection off the silicon dioxide surface, manufacturing variation, and the finite gratings actually used. However, our simulations reasonably characterized the angular sensitivity and modulation depth of the ASP.
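A fit along these lines can be sketched with scipy (an illustration, not our actual analysis code; the Gaussian envelope standing in for F(θ) and the synthetic noise level are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def response(theta, I0, m, b, w):
    """R_1/2 form of Eq. (1) with a Gaussian stand-in for the envelope F."""
    return I0 * (1 + m * np.cos(b * theta)) * np.exp(-(theta / w) ** 2)

# Synthetic "measurement" built from the fitted values quoted above
rng = np.random.default_rng(0)
theta = np.radians(np.linspace(-30, 30, 61))
measured = response(theta, 1.0, 0.7, 15.0, np.radians(25))
measured += 0.02 * rng.standard_normal(theta.size)

# The cosine term makes the fit multimodal, so b needs a sensible start
p0 = [1.0, 0.5, 14.0, np.radians(20)]
(I0, m, b, w), _ = curve_fit(response, theta, measured, p0=p0)
print(f"m = {m:.2f}, b = {b:.1f}")    # close to 0.7 and 15
```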

Fig. 6. Measured responses of an ASP as the incident angle is swept.

Fine-pitch gratings are known to polarize the light they transmit. A recent study [34] on the polarization-dependent Talbot effect in high-density gratings predicts that the gratings we used, with period of approximately 2.5λ, should show significant polarization sensitivity. Specifically, the Talbot self-images formed at the half Talbot distance by TE (electric field parallel to the grating lines) polarized light should be approximately twice as bright as those formed by TM (magnetic field parallel to the grating lines) polarized light. Our observations are in good agreement with this prediction: when we rotated the polarization of the incident light on our ASP from TE to TM, the overall observed intensity decreased by a factor of 2.05. However, both angular sensitivity b and modulation depth m changed by less than 10%. These characteristics indicate that the TM-polarized Talbot self-images are weaker than the TE-polarized self-images, but otherwise behave similarly in their encoding of angle and intensity information.

The design was optimized for λ = 525 nm, but we tested it across a range of wavelengths from 400 nm to 620 nm. We expected little change in the angular sensitivity b with wavelength, since the periodicity of the Talbot self-images does not depend on λ. This prediction was borne out by measurement, as can be seen in Fig. 7: b was only weakly sensitive to λ over the range 400 nm to 620 nm. However, changes in wavelength significantly change the Talbot distances. The analyzer grating was not optimally positioned when λ ≠ 525 nm, so the observed self-images were blurred and the modulation depth m degraded. Over this range of wavelengths we recover angle information less efficiently, but angle sensitivity does not vanish. The fact that the ASP works across such a range of wavelengths is a direct consequence of analyzing the self-image at the half Talbot distance, where the relative depth of the Talbot pattern is least sensitive to λ.

Fig. 7. Measured effect of wavelength on angular sensitivity b and modulation depth m.

To confirm the light-field imaging capability of our sensors, we placed a multimode fiber tip 500 μm directly above the ASP array. With light from a light-emitting diode (identical to the one used in the single-ASP tests) coupled into the fiber, the light exiting the fiber tip has a conical profile, and thus presents a simple divergent light field at the plane of the array. We recorded the output of each sensor at all 64 sites of the ASP array, as shown in Fig. 8(a). As can be seen, adjacent sensors tuned to different angles responded very differently, and their relative responses depend upon their location relative to the light source. Applying Eq. (3) and the angle response data shown in Fig. 6, we reconstructed the light vectors at each ASP, as shown in Fig. 8(b).

Fig. 8. Measured ASP array response to a light source held 500 μm above the array and slightly to the left. (a) Responses of individual sensors, where brighter squares represent more heavily illuminated sensors and white lines delimit individual ASPs. (b) Computed incident angle for each ASP (projected onto the xy plane).

To further confirm the capabilities of our array, we moved the light source to various locations in 3D space above the array. At each position we recorded the sensors’ responses and reconstructed the incident angle of the light coming from the fiber. The array accurately reconstructed the location of the light source in two dimensions, as shown in Fig. 9(a), where the source was moved by 100 μm in the x direction and the computed incident angles reflect this shift. More strikingly, the array also localized the light source in the third, z direction, capturing a 50 μm shift in the height of the source above the array, as shown in Fig. 9(b). Thus an array of ASPs can reconstruct the 3D structure of simple light sources, providing information beyond what is available from the intensity map of a standard image sensor.
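This reconstruction amounts to triangulating a single source from many simultaneous angle measurements. A minimal least-squares version (our sketch; the solver, the noise-free angles, and the 20 μm grid pitch are illustrative assumptions) is:

```python
import numpy as np

def localize(positions, tangents):
    """Least-squares source location from per-ASP incident angles.
    positions: (N, 2) ASP coordinates in the sensor plane (z = 0).
    tangents:  (N, 2) tangents of the measured angles (xz and yz planes).
    Each ASP i contributes x - z*tx_i = x_i and y - z*ty_i = y_i."""
    n = positions.shape[0]
    A = np.zeros((2 * n, 3))
    rhs = np.empty(2 * n)
    A[0::2, 0] = 1.0
    A[0::2, 2] = -tangents[:, 0]
    rhs[0::2] = positions[:, 0]
    A[1::2, 1] = 1.0
    A[1::2, 2] = -tangents[:, 1]
    rhs[1::2] = positions[:, 1]
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol                       # estimated source position (x, y, z)

# Synthetic test: an 8 x 8 grid of ASPs, source 550 um overhead
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2) * 20e-6
source = np.array([80e-6, 60e-6, 550e-6])
tans = (source[:2] - grid) / source[2]   # noise-free angle tangents
print(localize(grid, tans) * 1e6)        # -> [ 80.  60. 550.] (um)
```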

Fig. 9. An 8 × 8 ASP array accurately resolves light source locations in 3D space. (a) The light-vector field measured for a source 550 μm above the array clearly reconstructs lateral shifts in its location (in this case by 100 μm). (b) The measured light-vector field can also be used to reconstruct changes in the depth (z) of a light source, in this case by 50 μm.

For a single source, this extra information permits significantly more accurate localization than that shown in Fig. 9. Considering a single ASP located directly below the source, we find that the uncertainty in incident angle, σθ, is ultimately limited by the uncertainty of individual sensor outputs, such that

σθ = (√2 / (m b)) (σR / R), (4)

where m and b are the modulation depth and angular gain of the ASP, as in Eq. (1), and σR/R is the coefficient of variance of our measurements. This uncertainty corresponds to an uncertainty in lateral localization σx of

σx = z σθ = (√2 z / (m b)) (σR / R), (5)

where z is the axial height of the source. Along the z axis, assuming the source is equidistant from two ASPs separated by distance l, we find that the uncertainty is

σz = (√2 z / (m b)) (σR / R) √2 csc(2θ) = √2 σx csc(2θ), (6)

where z, m, b, and σR/R are all as before, and θ is the angle of the source from normal. These uncertainties are upper bounds, since using all of the outputs of an ASP array provides more information about the source’s location and results in reduced uncertainty.

For both lateral and axial localization, uncertainty in the location of a single point source is proportional to the vertical distance between source and sensor, and inversely proportional to the product of angular gain and modulation depth (m and b). This implies that resolution can be improved by increasing both m and b. The axial uncertainty is proportional to but always greater than lateral uncertainty and depends strongly on the maximum measurable angle. Therefore, to achieve optimal axial resolution, ASPs and arrays should be made able to detect large incident angles.

We performed a second set of measurements using the multimode fiber tip as a point source above the array. To examine the array’s ability to accurately localize the centroid of the source in three dimensions, we moved the source in the smallest steps available, approximately 5 μm. At each location, we recorded multiple measurements (1 kHz frame rate) from the array and independently reconstructed the source location using each measurement. Each reconstructed location is shown as a point in the scatter plots of Fig. 10. For the experiment performed, the coefficient of variance of our measurements was 0.007, and the source was placed 550 μm from the imager. The predicted upper bound, based upon Eqs. (5) and (6), was 0.5 μm for lateral uncertainty and 3 μm for axial uncertainty. Using the entire array, the observed standard deviations of σx = 0.14 μm, σy = 0.19 μm, and σz = 1.74 μm are well below these bounds.
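Plugging the quoted numbers back into Eqs. (4)–(6) reproduces these bounds (a back-of-envelope check; the source angle θ entering Eq. (6) is not stated above, so the 6° value is our assumption):

```python
import math

m, b = 0.7, 15.0     # modulation depth and angular gain (fit of Fig. 6)
cv = 0.007           # coefficient of variance, sigma_R / R
z = 550e-6           # source height above the array (m)

sigma_theta = math.sqrt(2) / (m * b) * cv               # Eq. (4), radians
sigma_x = z * sigma_theta                               # Eq. (5)
theta = math.radians(6.0)                               # assumed source angle
sigma_z = sigma_x * math.sqrt(2) / math.sin(2 * theta)  # Eq. (6)

print(f"sigma_x ~ {sigma_x * 1e6:.2f} um")  # ~0.5 um lateral bound
print(f"sigma_z ~ {sigma_z * 1e6:.2f} um")  # a few um axial bound
```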

Fig. 10. An 8 × 8 ASP array resolves light source locations with high resolution. All measurements were taken at a height of 550 μm. (a) Reconstructed locations of a source at three different depths separated by approximately 5 μm are clearly distinct: observed σy = 0.19 μm and σz = 1.74 μm. (b) Reconstruction precision is much higher in the lateral (x) direction than in the axial (z) direction: observed σx = 0.14 μm. Three different lateral positions are shown.

4. Discussion

We have demonstrated a structure that makes use of the Talbot effect on the microscale to perform light field imaging. By stacking two gratings separated by the half Talbot depth, we create a filter that selectively passes light from some incident angles and rejects others. Shifting the relative lateral offset of the gratings provides selectivity for different angles. A collection of several such filters, each with a light sensor beneath, forms a pixel-scale sensor that captures both incident angle and intensity. We have further demonstrated that arrays of such ASPs are capable of localizing light sources in 3D space.

The ASP structure we have demonstrated has several intrinsic benefits. All elements of the structure can be constructed using standard planar photolithography techniques, implying ease of scaling to large arrays. In fact, all of the elements of the design can be (and in this case were) constructed using the layers available in a standard integrated circuit manufacturing process. As a result, circuits typically found in digital image sensors can also be included in light-field imagers based upon this work. Implementing the proposed ASP in a standard integrated circuit design flow allows us to take advantage of the low cost and high reliability that come with a fully developed manufacturing process.

ASP arrays also have a number of advantages over current methods for light-field imaging. In contrast to existing small-scale camera arrays employing microlenses [35], monolithic ASP arrays need no high-precision alignment or postprocessing. They are also cheaper and easier to manufacture than microlens-based solutions. Compared to large macroscopic camera arrays or scanning platforms [9–12], ASP arrays offer a compact, robust, easily deployed platform for capturing similar information. Because ASP arrays can be constructed using the same technology as high-speed CMOS imagers, they are capable of frame rates that scanning techniques or CCD-based sensors cannot achieve. Furthermore, ASP arrays directly capture incident angle and intensity information, which significantly reduces the computational effort required to determine the captured light field.

We anticipate a number of potential improvements to our current design. At the sensor level, the approach demonstrated places fairly minor restrictions on the size of the photodetector used, so more exotic sensors, such as single-photon avalanche diodes, could be used in place of simple photodiodes. At the ASP level, the demonstrated structure encodes angle with some ambiguity due to the periodic nature of Eq. (1) (see Figs. 4 and 6); using adjacent ASPs with different angular sensitivities could eliminate this ambiguity. Larger arrays of ASPs could be developed to explore this structure’s capabilities in real imaging applications, and the size of individual ASPs could be reduced. Although the grating pitch is limited to be greater than the wavelength, fewer periods of the grating could be used: the design presented here uses between 8 and 11 periods, but fewer would also work, albeit with increased edge effects. Finally, when multiple light sources are present, no unique incident angle describes the light field they generate, and other algorithms must be developed to make full use of the information provided by our sensors in multisource situations.

The structure we have demonstrated could find deployment in a variety of applications. Large arrays of ASPs, combined with typical lens systems, could be used in photography and microscopy applications to provide additional information about out-of-focus images for after-the-fact computational refocus and range finding. Alternately, deployed entirely without lenses, an array of ASPs could be used to capture the 3D structure of microscopic samples placed directly above the array. This lensless arrangement could find use in a variety of applications, such as enhanced flow cytometry and low-cost, field-deployable characterization of tissue samples.

In summary, this work provides a starting point for a more general exploration into microscale uses of the Talbot effect for light field imaging.

Footnotes

OCIS codes: 070.6760, 050.2770, 040.1240, 110.7348, 280.4788.

References

1. Gershun A, "The light field," J Math Phys 18, 51–151 (1939), translated by P. Moon and G. Timoshenko.
2. Moon P, Spencer DE, The Photic Field (MIT Press, 1981).
3. Levoy M, Hanrahan P, "Light field rendering," in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 31–42.
4. Ng R, "Fourier slice photography," ACM Trans Graphics 24, 735–744 (2005).
5. Talbot HF, "Facts relating to optical science. No. IV," Philos Mag 9, 401–407 (1836).
6. Faraday M, "Thoughts on ray vibrations," Philos Mag 28, 346–350 (1846).
7. Adelson E, Bergen J, "The plenoptic function and the elements of early vision," in Computational Models of Visual Processing, Landy M and Movshon JA, eds. (MIT Press, 1991), pp. 3–20.
8. Adelson E, Wang JYA, "Single lens stereo with a plenoptic camera," IEEE Trans Pattern Anal Mach Intell 14, 99–106 (1992).
9. Kubota A, Aizawa K, Chen T, "Reconstructing dense light field from array of multifocus images for novel view synthesis," IEEE Trans Image Process 16, 269–279 (2007). doi:10.1109/tip.2006.884938
10. Wilburn B, Joshi N, Vaish V, Talvala E-V, Antunez E, Barth A, Adams A, Horowitz M, Levoy M, "High performance imaging using large camera arrays," in Proceedings of ACM SIGGRAPH (ACM, 2005), pp. 765–776.
11. Isaksen A, McMillan L, Gortler SJ, "Dynamically reparameterized light fields," in Proceedings of ACM SIGGRAPH (ACM, 2000), pp. 297–306.
12. Veeraraghavan A, Raskar R, Agrawal A, Mohan A, Tumblin J, "Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing," ACM Trans Graphics 26, 69–80 (2007).
13. Fife K, Gamal AE, Wong H-SP, "A 3D multi-aperture image sensor architecture," in Proceedings of the IEEE Custom Integrated Circuits Conference (IEEE, 2006), pp. 281–284.
14. Gortler SJ, Grzeszczuk R, Szeliski R, Cohen MF, "The lumigraph," in Proceedings of ACM SIGGRAPH (ACM, 1996), pp. 43–54.
15. Levoy M, Ng R, Adams A, Footer M, Horowitz M, "Light field microscopy," in Proceedings of ACM SIGGRAPH (ACM, 2006), pp. 924–934.
16. Lord Rayleigh, "On copying diffraction gratings, and on some phenomena connected therewith," Philos Mag 11, 196–205 (1881).
17. Hiedemann EA, Breazeale MA, "Secondary interference in the Fresnel zone of gratings," J Opt Soc Am 49, 372–375 (1959).
18. Winthrop JT, Worthington CR, "Theory of Fresnel images: I. Plane periodic objects in monochromatic light," J Opt Soc Am 55, 373–381 (1965).
19. Montgomery WD, "Self-imaging objects of infinite aperture," J Opt Soc Am 57, 772–775 (1967).
20. Lohmann AW, Silva DE, "An interferometer based on the Talbot effect," Opt Commun 2, 413–415 (1971).
21. Ojeda-Castañeda J, Sicre EE, "Tunable bandstop filter for binary objects: a self-imaging technique," Opt Commun 47, 183–186 (1983).
22. Lohmann AW, Thomas JA, "Making an array illuminator based on the Talbot effect," Appl Opt 29, 4337–4340 (1990). doi:10.1364/AO.29.004337
23. Salama NH, Patrignani D, di Pasquale L, Sicre EE, "Wavefront sensor using the Talbot effect," Opt Laser Technol 31, 269–272 (1999).
24. Siegel C, Loewenthal F, Balmer JE, "A wavefront sensor based on the fractional Talbot effect," Opt Commun 194, 265–275 (2001).
25. Chavel P, Strand TC, "Range measurement using Talbot diffraction imaging of gratings," Appl Opt 23, 862–871 (1984). doi:10.1364/ao.23.000862
26. Leger JR, Snyder MA, "Real-time depth measurement and display using Fresnel diffraction and white-light processing," Appl Opt 23, 1655–1670 (1984). doi:10.1364/ao.23.001655
27. Carmesin HO, Goldbeck D, "Depth map by convergent 3D Talbot interferometry," Optik (Jena) 108, 101–116 (1998).
28. Testorf M, Jahns J, Khilo NA, Goncharenko AM, "Talbot effect for oblique angle of light propagation," Opt Commun 129, 167–172 (1996).
29. Teng S, Tan Y, Cheng C, "Quasi-Talbot effect of the high-density grating in near field," J Opt Soc Am A 25, 2945–2951 (2008). doi:10.1364/josaa.25.002945
30. Smolyaninov II, Davis CC, "Apparent superresolution in near-field optical imaging of periodic gratings," Opt Lett 23, 1346–1348 (1998). doi:10.1364/ol.23.001346
31. Fife K, Gamal AE, Wong H-SP, "A 0.5 μm pixel frame transfer CCD image sensor in 110 nm CMOS," in IEEE International Electron Devices Meeting (IEEE, 2007), pp. 1003–1006.
32. Chapman MS, Ekstrom CR, Hammond TD, Schmiedmayer J, Tannian BE, Wehinger S, Pritchard DE, "Near-field imaging of atom diffraction gratings: the atomic Talbot effect," Phys Rev A 51, R14–R17 (1995). doi:10.1103/physreva.51.r14
33. Teng S, Liu L, Zu J, Luan Z, Liu D, "Uniform theory of the Talbot effect with partially coherent light illumination," J Opt Soc Am A 20, 1747–1754 (2003). doi:10.1364/josaa.20.001747
34. Lu Y, Zhou C, Wang S, Wang B, "Polarization-dependent Talbot effect," J Opt Soc Am A 23, 2154–2160 (2006). doi:10.1364/josaa.23.002154
35. Fife K, Gamal AE, Wong H-SP, "A 3M pixel multi-aperture image sensor with 0.7 μm pixels in 0.11 μm CMOS," in IEEE ISSCC Digest of Technical Papers (IEEE, 2008), pp. 48–49.
